Table of contents

  1. Dissertations

A selection of scholarly literature on the topic "Convolutional neuralt nätverk"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Convolutional neuralt nätverk".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the online annotation of the work, provided that the relevant parameters are available in its metadata.

Dissertations on the topic "Convolutional neuralt nätverk"

1. Lavenius, Axel. "Automatic identification of northern pike (Esox Lucius) with convolutional neural networks". Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-418639.

Abstract:
The population of northern pike in the Baltic Sea has seen a drastic decrease in numbers in the last couple of decades. The reasons for this are believed to be many, but the majority of them are most likely anthropogenic. Today, many measures are being taken to prevent further decline of pike populations, ranging from nutrient runoff control to habitat restoration. This inevitably gives rise to the problem addressed in this project, namely: how can we best monitor pike populations so that it is possible to accurately assess and verify the effects of these measures over the coming decades? Pike is currently monitored in Sweden by employing expensive and ineffective manual methods of individual marking carried out by a handful of experts. This project provides evidence that such methods could be replaced by a Convolutional Neural Network (CNN), an automatic artificial intelligence system that can be taught to identify pike individuals based on their unique patterns. A neural network simulates the functioning of neurons in the human brain, which allows it to perform a range of tasks, while a CNN is a neural network specialized for this type of visual recognition task. The results show that the CNN trained in this project can identify pike individuals in the provided data set with upwards of 90% accuracy, with much potential for improvement.
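As a rough illustration of the kind of model described in this abstract, the following is a minimal sketch of a small convolutional image classifier in PyTorch. The layer sizes, input resolution, and the assumed count of 50 pike individuals are illustrative choices, not the network actually used in the thesis.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal convolutional classifier: conv blocks extract local pattern
    features, a linear head maps them to individual-identity classes."""
    def __init__(self, num_classes=50):  # assumed number of pike individuals
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)               # (N, 64, 1, 1)
        return self.classifier(x.flatten(1))

model = SmallCNN(num_classes=50)
logits = model(torch.randn(4, 3, 224, 224))   # batch of 4 RGB pike images
print(logits.shape)                           # torch.Size([4, 50])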
2. Du, Zekun. "Algorithm Design and Optimization of Convolutional Neural Networks Implemented on FPGAs". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254575.

Abstract:
Deep learning has developed rapidly in recent years and has been applied to many fields that form the main areas of artificial intelligence. The combination of deep learning and embedded systems is a promising direction in this technical field. This project designs a deep learning neural network algorithm that can be implemented on hardware such as an FPGA, based on current research on deep neural networks and on hardware characteristics. The system uses PyTorch and CUDA as supporting tools. The project focuses on image classification with a convolutional neural network (CNN). Several strong CNN models were studied, such as ResNet, ResNeXt, and MobileNet, and by applying these models to the design, an algorithm based on MobileNet was chosen. Models were selected according to criteria such as floating-point operations (FLOPs), number of parameters, and classification accuracy. The MobileNet-based algorithm achieves a top-1 error of 5.5% in software on a 6-class data set. Furthermore, a hardware simulation of the MobileNet-based algorithm was carried out. The parameters are transformed from floating-point numbers to 8-bit integers, and the outputs of each individual layer are cut to fixed-bit integers to fit the hardware restrictions. A number-handling method was designed to simulate these numeric changes on hardware. With this simulation method, the top-1 error increases to 12.3%, which is acceptable.
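The float-to-8-bit conversion described above can be sketched in plain NumPy as a simple symmetric scale quantization. The per-tensor scheme below is a generic illustration under assumed symmetric ranges, not the exact number-handling method designed in the thesis.

import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single symmetric per-tensor scale."""
    scale = np.max(np.abs(w)) / 127.0 if np.any(w) else 1.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 32).astype(np.float32)     # stand-in for one layer's weights
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"max round-trip error: {err:.4f}")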
3. Elander, Filip. "Semantic segmentation of off-road scenery on embedded hardware using transfer learning". Thesis, KTH, Mekatronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301154.

Abstract:
Real-time semantic scene understanding is a challenging computer vision task for autonomous vehicles. A limited amount of research has been done on forestry and off-road scene understanding, as the industry focuses on urban and on-road applications. Studies have shown that deep convolutional neural network architectures, using parameters trained on large datasets, can be re-trained and customized with smaller off-road datasets using a method called transfer learning, and yield state-of-the-art classification performance. This master's thesis served as an extension of such existing off-road semantic segmentation studies. The thesis focused on detecting and visualizing the general trade-offs between classification performance, classification time, and the network's number of available classes. The results showed that the classification performance declined for every class that was added to the network. Misclassification mainly occurred in the class boundary areas and increased as more classes were added; however, the number of classes did not affect the network's classification time. Further, there was a nonlinear trade-off between classification time and classification performance. The classification performance improved with an increased number of network layers and a larger data type resolution, but greater layer depth increased the number of calculations, and the larger data type resolution required a longer calculation time. The network's classification performance increased by 0.5% when using a 16-bit data type resolution instead of an 8-bit resolution, but its classification time worsened considerably, as it segmented about 20 fewer camera frames per second with the larger data type. Tests also showed that a 101-layer network degraded slightly in classification performance compared to a 50-layer network, which indicates the nonlinearity of the trade-off between classification time and classification performance. Moreover, the class constellations considerably impacted the network's classification performance and continuity. It was essential that a class's content and objects were visually similar and shared the same features; mixing visually ambiguous objects into the same class could drop the inference performance by almost 30%. There are several directions for future work, including writing a new, customized implementation of the ResNet50 network. A customized and pruned network could enhance both the application's classification performance and classification speed. Further, procuring a task-specific forestry dataset and transferring weights pre-trained for autonomous navigation instead of generic object segmentation could lead to even better classification performance.
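The transfer-learning setup the abstract builds on (re-using backbone weights trained on a large generic dataset and fine-tuning a new segmentation head on a small off-road dataset) can be sketched with torchvision as below. The choice of a DeepLabV3/ResNet50 model, the class count of 5, and the frozen-backbone strategy are illustrative assumptions, not the thesis's actual configuration.

import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 5  # assumed number of off-road classes (e.g. trail, vegetation, sky, ...)

# Re-use ("transfer") backbone weights pre-trained on a large generic dataset;
# only the segmentation head is re-initialised for the new classes.
model = deeplabv3_resnet50(weights="DEFAULT")          # torchvision >= 0.13 weights API assumed
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

# Freeze the backbone so only the new head is trained on the small off-road dataset.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

x = torch.randn(2, 3, 384, 384)        # dummy batch of camera frames
out = model(x)["out"]                  # (2, NUM_CLASSES, 384, 384) per-pixel class scores
print(out.shape)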
4. Spång, Anton. "Automatic Image Annotation by Sharing Labels Based on Image Clustering". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210164.

Abstract:
The growth in the size of image collections has made manual annotation unfeasible, leading to the need for accurate and time-efficient image annotation methods. This project evaluates a system for automatic image annotation to see if it is possible to share annotations between images based on unsupervised clustering. The evaluation of the system included experiments with different algorithms and different unlabeled data sets. The system is also compared to an award-winning convolutional neural network model, used as a baseline, to see if the system's precision and/or recall could exceed the baseline model's. The results of the experiments conducted in this work showed that precision and recall could be increased on the data used in this thesis: an increase of 0.094 in precision and 0.049 in recall on average for the system compared to the baseline.
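A minimal sketch of the underlying idea (cluster unlabeled images by visual similarity and let manually labeled images share their annotations with the rest of their cluster) using scikit-learn. Feature extraction is stubbed out with random vectors and all names are illustrative; this is not the evaluated system itself.

import numpy as np
from sklearn.cluster import KMeans

# Assume each image has been reduced to a feature vector (e.g. colour
# histograms or CNN embeddings); random vectors stand in for them here.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 128))          # 200 images, 128-d features
labels = {3: "cat", 57: "dog", 120: "car"}      # a few manually annotated images

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

# Share each known label with every image in the same cluster.
shared = {}
for idx, tag in labels.items():
    cluster = kmeans.labels_[idx]
    for img in np.where(kmeans.labels_ == cluster)[0]:
        shared.setdefault(int(img), set()).add(tag)

print(f"{len(shared)} images received at least one shared label")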
5

Engström, Messén Matilda, und Elvira Moser. „Pre-planning of Individualized Ankle Implants Based on Computed Tomography - Automated Segmentation and Optimization of Acquisition Parameters“. Thesis, KTH, Fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297674.

Abstract:
The structure of the ankle joint complex creates an ideal balance between mobility and stability, which enables gait. If a lesion emerges in the ankle joint complex, the anatomical structure is altered, which may disturb mobility and stability and cause intense pain. A lesion in the articular cartilage on the talus bone, or in the subchondral bone of the talar dome, is referred to as an Osteochondral Lesion of the Talus (OLT). Replacing the damaged cartilage or bone with an implant is one of the methods that can be applied to treat OLTs. Episurf Medical develops and produces patient-specific implants (Episealers) along with the necessary associated surgical instruments by, inter alia, creating a corresponding 3D model of the ankle (talus, tibia, and fibula bones) based on either a Magnetic Resonance Imaging (MRI) scan or a Computed Tomography (CT) scan. Presently, the 3D models based on MRI scans can be created automatically, but the 3D models based on CT scans must be created manually, which can be very time-consuming. In this thesis project, a U-net based Convolutional Neural Network (CNN) was trained to automatically segment 3D models of ankles based on CT images. Furthermore, in order to optimize the quality of the incoming CT images, the project also included an evaluation of the specified parameters in the Episurf CT talus protocol that is sent out to the clinics. The performance of the CNN was evaluated using the Dice Coefficient (DC) with five-fold cross-validation. The CNN achieved a mean DC of 0.978±0.009 for the talus bone, 0.779±0.174 for the tibial bone, and 0.938±0.091 for the fibula bone. The values for the talus and fibula bones were satisfactory and comparable to results presented in previous research; however, due to background artefacts in the images, the DC achieved by the network for the segmentation of the tibial bone was lower than the results presented in previous research. To correct this, a noise-reducing filter will be implemented.
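The evaluation procedure mentioned above, the Dice coefficient with five-fold cross-validation, can be sketched as follows. The mask shapes, the dummy predictions, and the fold handling are generic illustrations, not Episurf's actual pipeline.

import numpy as np
from sklearn.model_selection import KFold

def dice(pred, truth, eps=1e-7):
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return (2.0 * np.logical_and(pred, truth).sum() + eps) / (pred.sum() + truth.sum() + eps)

# Five-fold cross-validation over a set of scans (indices stand in for CT volumes).
scan_ids = np.arange(25)
scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(scan_ids):
    # ... train the segmentation network on the scans in train_idx here ...
    # Dummy masks stand in for network output vs. ground truth on val_idx:
    pred = np.random.rand(64, 64, 64) > 0.5
    truth = np.random.rand(64, 64, 64) > 0.5
    scores.append(dice(pred, truth))

print(f"mean Dice: {np.mean(scores):.3f} ± {np.std(scores):.3f}")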
6. Stjärnholm, Sigfrid. "Ghosts of Our Past: Neutrino Direction Reconstruction Using Deep Neural Networks". Thesis, Uppsala universitet, Högenergifysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448765.

Abstract:
Neutrinos are the perfect cosmic messengers when it comes to investigating the most violent and mysterious astronomical and cosmological events in the Universe. The interaction probability of neutrinos is small, and the flux of high-energy neutrinos decreases quickly with increasing energy. In order to find high-energy neutrinos, large bodies of matter need to be instrumented. A proposed detector station design called ARIANNA is designed to detect neutrino interactions in the Antarctic ice by measuring radio waves that are created due to the Askaryan effect. In this thesis, we present a method based on state-of-the-art machine learning techniques to reconstruct the direction of the incoming neutrino, based on the radio emission that it produces. We trained a neural network with simulated data, created with the NuRadioMC framework, and optimized it to make the best possible predictions. The number of training events used was on the order of 10⁶. Using two different emission models, we found that the network was able to learn and generalize on the neutrino events with good precision, resulting in a resolution of 4-5°. The model could also make good predictions on a dataset even if it was trained with another emission model. The results produced are promising, especially because classical techniques have not been able to reproduce the same results without prior knowledge of where the neutrino interaction took place. The developed neural network can also be used to assess the performance of other proposed detector designs, to quickly and reliably indicate which design might yield the most value to the scientific community.
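The quoted 4-5° resolution refers to the angle between the reconstructed and true neutrino directions. A minimal sketch of how such a space-angle error can be computed from unit direction vectors is shown below; the dummy vectors and noise level stand in for network predictions and are not taken from the thesis.

import numpy as np

def space_angle_deg(pred, true):
    """Angle (degrees) between predicted and true direction vectors."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    true = true / np.linalg.norm(true, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * true, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

rng = np.random.default_rng(1)
true_dirs = rng.normal(size=(1000, 3))
pred_dirs = true_dirs + 0.07 * rng.normal(size=(1000, 3))   # small reconstruction error
errors = space_angle_deg(pred_dirs, true_dirs)

# One common way to quote a resolution is the 68th percentile of the error distribution.
print(f"68% containment: {np.percentile(errors, 68):.2f} deg")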
7. Reiche Myrgård, Martin. "Acceleration of deep convolutional neural networks on multiprocessor system-on-chip". Thesis, Uppsala universitet, Avdelningen för datorteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385904.

Abstract:
In this master's thesis, some of the most promising existing frameworks and implementations of deep convolutional neural networks on multiprocessor system-on-chips (MPSoCs) are researched and evaluated. The thesis' starting point was a previous thesis, conducted in the spring of 2018, which evaluated possible deep learning models and frameworks for object detection in infrared images. In order to fit an existing deep convolutional neural network (DCNN) on an MPSoC it needs modifications. Most DCNNs are trained on graphics processing units (GPUs) with a bit width of 32 bits. This is not optimal for a platform with hard memory constraints such as an MPSoC, which means the bit width needs to be reduced. The optimal bit width depends on the network structure and the requirements in terms of throughput and accuracy, although the accuracy of most currently available object detection networks drops significantly when reduced below a width of 6 bits. After reducing the bit width, the network needs to be quantized and pruned for better memory usage. After quantization it can be implemented using one of many existing frameworks. This thesis focuses on Xilinx CHaiDNN and DNNWeaver V2, though it also touches on Xilinx reVISION, HLS4ML, and DNNWeaver V1. In conclusion, the implementation of two network models on the Xilinx Zynq UltraScale+ ZCU102 using CHaiDNN was evaluated. Conversion of existing networks was done and quantization was tested, though not fully working. The result was an implementation two to six times more power-efficient than GPU inference.
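Alongside the bit-width reduction, the abstract mentions pruning for better memory usage. A generic magnitude-pruning sketch in NumPy is shown below; the 50% sparsity target is an assumption for illustration, not a value from the thesis.

import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)  # cut-off for dropped weights
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(128, 128).astype(np.float32)   # stand-in for one layer's weights
pruned, mask = magnitude_prune(w, sparsity=0.5)
print(f"non-zero weights kept: {mask.mean():.0%}")  # roughly 50%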
8. Jangblad, Markus. "Object Detection in Infrared Images using Deep Convolutional Neural Networks". Thesis, Uppsala universitet, Avdelningen för systemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-355221.

Abstract:
In this master's thesis about object detection (OD) using deep convolutional neural networks (DCNNs), OD is tested when applied to infrared (IR) images. The goal is to use both long-wave infrared (LWIR) and short-wave infrared (SWIR) images taken from an airplane in order to train a DCNN to detect runways, Precision Approach Path Indicator (PAPI) lights, and approach lights. The motivation for detecting these objects in IR images is that IR light transmits better than visible light under certain weather conditions, for example fog; such a system could help the pilot detect the runway in bad weather. The RetinaNet model architecture was used and modified in different ways to find the best-performing model. The models contain parameters that are found during the training process, but some parameters, called hyperparameters, need to be determined in advance. A way to automatically find good values of these hyperparameters was also tested. In the hyperparameter optimization, Bayesian optimization proved to create a model with performance equal to the best performance achieved by the author using manual hyperparameter tuning. The OD system was implemented using Keras with a TensorFlow backend and achieved a high performance (mAP = 0.9245) on the test data. The system manages to detect the wanted objects in the images but is expected to perform worse in a general situation, since the training data and test data are very similar. In order to further develop this system and to improve performance under general conditions, more data is needed from other airfields and under different weather conditions.
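The Bayesian hyperparameter search mentioned above can be sketched with scikit-optimize's gp_minimize. The search space and the dummy objective are illustrative assumptions; in the thesis the objective would be the detection performance (e.g. 1 - mAP) of a trained RetinaNet model on a validation set.

from skopt import gp_minimize
from skopt.space import Real, Integer

# Hypothetical search space: learning rate and batch size.
space = [Real(1e-5, 1e-2, prior="log-uniform", name="lr"),
         Integer(2, 16, name="batch_size")]

def objective(params):
    lr, batch_size = params
    # In practice: train the detector with these hyperparameters and
    # return a validation loss such as (1 - mAP).
    # Dummy smooth function so the sketch runs standalone:
    return (lr - 1e-3) ** 2 * 1e5 + abs(batch_size - 8) / 100.0

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best hyperparameters:", result.x, "objective:", result.fun)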
9

Airola, Rasmus, und Kristoffer Hager. „Image Classification, Deep Learning and Convolutional Neural Networks : A Comparative Study of Machine Learning Frameworks“. Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-55129.

Abstract:
The use of machine learning, and specifically neural networks, is a growing trend in software development, and has grown immensely in the last couple of years in the light of an increasing need to handle big data and large information flows. Machine learning has a broad area of application, such as human-computer interaction, predicting stock prices, real-time translation, and self-driving vehicles. Large companies such as Microsoft and Google have already implemented machine learning in some of their commercial products, such as their search engines and their intelligent personal assistants Cortana and Google Assistant. The main goal of this project was to evaluate the two deep learning frameworks Google TensorFlow and Microsoft CNTK, primarily based on their performance in terms of neural network training time. We chose to use the third-party API Keras instead of TensorFlow's own API when working with TensorFlow. CNTK was found to perform better in terms of training time compared to TensorFlow with Keras as the frontend. Even though CNTK performed better on the benchmarking tests, we found Keras with TensorFlow as the backend to be much easier and more intuitive to work with. In addition, CNTK's underlying implementation of the machine learning algorithms and functions differs from that of the literature and of other frameworks. Therefore, if we had to choose a framework to continue working in, we would choose Keras with TensorFlow as the backend, even though the performance is lower compared to CNTK.
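A minimal sketch of the kind of training-time benchmark used in such a comparison, here shown only for Keras on the TensorFlow backend. The MNIST data, model size, and single epoch are arbitrary choices for illustration; the thesis ran equivalent models in both CNTK and TensorFlow and compared the timings.

import time
import tensorflow as tf

# Small CNN on MNIST, timed per epoch as a crude training-speed benchmark.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

start = time.perf_counter()
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
print(f"one training epoch took {time.perf_counter() - start:.1f} s")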
10

Gustavsson, Robin, und Johan Jakobsson. „Lung-segmentering : Förbehandling av medicinsk data vid predicering med konvolutionella neurala nätverk“. Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-14380.

Abstract:
In 2017, the Swedish National Board of Health and Welfare (Socialstyrelsen) reported that lung cancer was the most common cancer-related cause of death among women in Sweden and the second most common among men. One way to find out whether a patient has lung cancer is for a doctor to study a computed tomography scan of the patient's lungs. This introduces the chance of human error and could lead to fatal consequences. To prevent such mistakes, it is possible to use computers and advanced algorithms, training a network model to detect details and deviations in the scans; this technique is called deep structural learning. It is both time-consuming and highly challenging to create such a model, which underlines the importance of proper training, and many studies cover this subject. What these studies fail to emphasize is the significance of the preprocessing technique called lung segmentation. We therefore investigated how the accuracy and loss of a convolutional network model are affected when lung segmentation is applied to the model's training and test data. In this study, a number of models were trained and evaluated on data with and without lung segmentation applied. The final conclusion of this report is that the technique counteracts overfitting of a model, and we believe that this study can ease further research within the same area.
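A minimal sketch of the kind of lung-segmentation preprocessing discussed above: threshold a CT volume at an air/tissue boundary, discard background air connected to the volume border, and keep the two largest remaining air-filled regions. The -320 HU threshold and the dummy volume are assumptions for illustration, not the exact method used in the study.

import numpy as np
from scipy import ndimage

def lung_mask(ct_hu, threshold=-320):
    """Rough lung mask from a CT volume in Hounsfield units."""
    air = ct_hu < threshold
    labeled, n = ndimage.label(air)
    # Components touching the volume border are background air around the body.
    border_labels = np.unique(np.concatenate([
        labeled[0].ravel(), labeled[-1].ravel(),
        labeled[:, 0].ravel(), labeled[:, -1].ravel(),
        labeled[:, :, 0].ravel(), labeled[:, :, -1].ravel()]))
    sizes = ndimage.sum(np.ones_like(labeled), labeled, index=range(1, n + 1))
    keep = [i + 1 for i in np.argsort(sizes)[::-1] if (i + 1) not in border_labels][:2]
    return np.isin(labeled, keep)

ct = np.full((64, 128, 128), 40, dtype=np.int16)   # dummy soft-tissue volume
ct[20:40, 30:60, 20:60] = -800                     # fake left "lung"
ct[20:40, 30:60, 70:110] = -800                    # fake right "lung"
mask = lung_mask(ct)
print(f"lung voxels: {mask.sum()}")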