Dissertations / Theses on the topic 'Deep Learning, Database'
Consult the top 22 dissertations / theses for your research on the topic 'Deep Learning, Database.'
Khaghani, Farnaz. "A Deep Learning Approach to Predict Accident Occurrence Based on Traffic Dynamics." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98801.
M.S.
Rapid traffic accident detection/prediction is essential for scaling down non-recurrent congestion caused by traffic accidents, avoiding secondary accidents, and accelerating emergency system responses. In this study, we propose a framework that uses large-scale historical traffic speed and traffic flow data, along with relevant weather information, to obtain robust traffic patterns. The predicted traffic patterns can be coupled with real traffic data to detect the anomalous behavior that often results in traffic incidents on roadways. Our framework consists of two major steps. First, we estimate the speed at each point based on the historical speed and flow values of locations before and after that point on the roadway. Second, we compare the estimated values with the actual ones and flag those that differ significantly as anomalies. The anomaly points are the potential locations and times at which an accident occurs and changes the normal behavior of the roadway. Our study shows the potential of the approach to detect accidents, with promising performance in identifying the accident occurrence at a time close to the actual time of occurrence.
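A minimal sketch of the residual-based anomaly idea described in this abstract: estimate each detector's expected speed from its neighbours' historical values and flag live readings that deviate strongly. The neighbour-averaging estimator, the deviation threshold, and all variable names are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def expected_speeds(hist_speeds: np.ndarray) -> np.ndarray:
    """Expected speed at each detector, taken as the mean of its upstream and
    downstream neighbours' historical averages (ends are replicated).

    hist_speeds: array of shape (n_days, n_detectors).
    """
    per_detector_mean = hist_speeds.mean(axis=0)
    padded = np.pad(per_detector_mean, 1, mode="edge")
    return 0.5 * (padded[:-2] + padded[2:])

def flag_anomalies(observed: np.ndarray, hist_speeds: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Boolean mask of detectors whose live speed deviates from the
    neighbour-based expectation by more than k historical standard deviations."""
    deviation = np.abs(observed - expected_speeds(hist_speeds))
    return deviation > k * hist_speeds.std(axis=0)

# Toy usage: 30 historical days for 8 detectors, then one live snapshot with a slowdown.
rng = np.random.default_rng(0)
history = rng.normal(100.0, 5.0, size=(30, 8))
live = history.mean(axis=0).copy()
live[4] -= 40.0                          # sudden drop at detector 4: a candidate incident
print(flag_anomalies(live, history))     # expect True at index 4 only
```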
Jiang, Haotian. "WEARABLE COMPUTING TECHNOLOGIES FOR DISTRIBUTED LEARNING." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1571072941323463.
Chillet, Alice. "Sensitive devices Identification through learning of radio-frequency fingerprint." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS051.
Identifying so-called sensitive devices is subject to various security or energy-consumption constraints that make conventional identification methods unsuitable. To meet these constraints, it is possible to exploit intrinsic imperfections in a device's transmission chain to identify it. These imperfections alter the transmitted signal, creating an inherently unique and non-reproducible signature known as the Radio Frequency (RF) fingerprint. To identify a device from its RF fingerprint, one can use imperfection-estimation methods to extract a signature usable by a classifier, or use learning methods such as neural networks. However, the ability of a neural network to recognize devices in a particular context depends heavily on the training database. This thesis proposes a virtual database generator based on RF transmission and imperfection models. These virtual databases allow us to better understand the ins and outs of RF identification and to propose solutions that make identification more robust. We then address the complexity of the identification solution in two ways. The first relies on intricate programmable graphs, reinforcement learning models based on genetic evolution techniques that are less complex than neural networks. The second applies pruning to neural networks from the literature to reduce their complexity.
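As a rough illustration of building a "virtual" training database from imperfection models, the generator below applies textbook transmitter impairments (IQ gain/phase imbalance and a carrier frequency offset) to clean QPSK frames, one impairment set per virtual device. The impairment models, parameter ranges and function names are assumptions for illustration only, not the thesis's generator.

```python
import numpy as np

def qpsk_frame(n_symbols: int, rng: np.random.Generator) -> np.ndarray:
    """Random unit-power QPSK symbols."""
    bits = rng.integers(0, 2, size=(n_symbols, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

def apply_impairments(x: np.ndarray, gain_imb_db: float, phase_imb_deg: float,
                      cfo_norm: float) -> np.ndarray:
    """Apply a simple transmitter 'fingerprint': IQ gain/phase imbalance
    (y = alpha*x + beta*conj(x)) followed by a normalized frequency offset."""
    g = 10 ** (gain_imb_db / 20.0)
    phi = np.deg2rad(phase_imb_deg)
    alpha = 0.5 * (1 + g * np.exp(1j * phi))
    beta = 0.5 * (1 - g * np.exp(1j * phi))
    y = alpha * x + beta * np.conj(x)
    n = np.arange(len(x))
    return y * np.exp(2j * np.pi * cfo_norm * n)   # CFO in cycles per sample

def make_virtual_database(n_devices: int, frames_per_device: int, n_symbols: int = 256,
                          snr_db: float = 20.0, seed: int = 0):
    """Build (signals, labels): each virtual device gets its own random impairment set."""
    rng = np.random.default_rng(seed)
    signals, labels = [], []
    for dev in range(n_devices):
        gain_imb = rng.uniform(-1.0, 1.0)      # dB
        phase_imb = rng.uniform(-5.0, 5.0)     # degrees
        cfo = rng.uniform(-1e-4, 1e-4)         # cycles per sample
        for _ in range(frames_per_device):
            y = apply_impairments(qpsk_frame(n_symbols, rng), gain_imb, phase_imb, cfo)
            noise = rng.standard_normal(n_symbols) + 1j * rng.standard_normal(n_symbols)
            signals.append(y + noise * np.sqrt(0.5 * 10 ** (-snr_db / 10.0)))
            labels.append(dev)
    return np.array(signals), np.array(labels)

X, y = make_virtual_database(n_devices=4, frames_per_device=10)
print(X.shape, y.shape)   # (40, 256) complex frames, 40 device labels
```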
Tamascelli, Nicola. "A Machine Learning Approach to Predict Chattering Alarms." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
McCullen, Jeffrey Reynolds. "Predicting the Effects of Sedative Infusion on Acute Traumatic Brain Injury Patients." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/105140.
Master of Science
Patients with Traumatic Brain Injury (TBI) often require sedative agents to facilitate intubation and prevent further brain injury by reducing anxiety and decreasing the level of consciousness. It is important for clinicians to choose the sedative that is most conducive to optimizing patient outcomes, and the purpose of our research is to provide guidance to aid this decision. Additionally, we compare different modeling approaches to provide insights into their relative strengths and weaknesses. To achieve this goal, we investigated whether exposure to particular sedatives (fentanyl, propofol, versed, ativan, and precedex) was associated with different hospital discharge locations for patients with TBI. From best to worst, these discharge locations are home, rehabilitation, nursing home, remains hospitalized, and death. Our results show that versed was associated with better discharge locations and ativan with worse discharge locations. The fact that versed is often used for alternative purposes may account for its association with better discharge locations. Further research is needed to investigate this, as well as the possible negative effects of using ativan to facilitate intubation. We also found that other variables influencing discharge disposition are age, the Northeast region, and variables pertaining to the clinical state of the patient (severity-of-illness metrics, etc.). By comparing the different modeling approaches, we found that the new deep learning methods were difficult to interpret but provided a slight improvement in performance after optimization. Traditional methods such as linear regression allowed us to interpret the model output and make the aforementioned clinical insights. However, generalized additive models (GAMs) are often more practical because they can better accommodate other class distributions and domains.
Mondani, Lorenzo. "Analisi dati inquinamento atmosferico mediante machine learning." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16168/.
Barbieri, Edoardo. "Analisi dell'efficienza di System on Chip su applicazioni parallele." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16759/.
Tallman, Jake T. "SOARNET, Deep Learning Thermal Detection For Free Flight." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2339.
Falade, Joannes Chiderlos. "Identification rapide d'empreintes digitales, robuste à la dissimulation d'identité." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMC231.
Biometrics are increasingly used for identification because of the close link between a person and their identifier (such as a fingerprint). This thesis focuses on identifying individuals from their fingerprints. The fingerprint is a biometric modality widely used for its efficiency, simplicity and low acquisition cost. Fingerprint comparison algorithms are mature, and a similarity score between a reference template (enrolled on an electronic passport or in a database) and an acquired template can be obtained in less than 500 ms. However, checking an individual's identity against an entire population in a very short time (a few seconds) is a major challenge, given the size of the biometric database (containing a set of individuals on the order of a country's population). The first part of this thesis therefore concerns the identification of individuals using fingerprints, with N at the scale of a million, representing, for example, the population of a country. We use classification and indexing methods to structure the biometric database and speed up the identification process. We implemented four identification methods selected from the state of the art, conducted a comparative study, and proposed improvements to these methods. We also proposed a new fingerprint-indexing solution for the identification task that improves on existing results. A second aspect of this thesis concerns security. A person may want to conceal their identity and therefore do everything possible to defeat identification. With this in mind, an individual may provide a poor-quality fingerprint (a partial fingerprint, low contrast obtained by pressing lightly on the sensor...) or an altered fingerprint (an impression intentionally damaged, removed with acid, scarified...). The second part of this thesis therefore aims to detect dead fingers and spoofed fingers (silicone, 3D fingerprints, latent fingerprints) used by malicious people to attack the system. In general, these methods use machine learning and deep learning techniques. We proposed a new presentation attack detection solution based on statistical descriptors of the fingerprint, and we also built three presentation attack detection workflows for fake fingerprints using deep learning. Of these three deep solutions, two come from the state of the art and the third is an improvement that we propose. Our solutions are tested on the LivDet competition databases for presentation attack detection.
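A schematic illustration of why indexing speeds up identification against a large gallery: fixed-length descriptors are indexed once, and at query time only a short list of nearest candidates is passed to the expensive fine matcher. The random 64-dimensional descriptors and the k-NN index below are stand-ins; the thesis's actual fingerprint indexing methods are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Gallery: N enrolled fingerprints, each summarized by a fixed-length descriptor.
n_enrolled, dim = 100_000, 64
gallery = rng.standard_normal((n_enrolled, dim)).astype(np.float32)

# Build the index once; queries then compare against a handful of candidates
# instead of the whole database.
index = NearestNeighbors(n_neighbors=20).fit(gallery)

# A probe fingerprint: here a noisy copy of an enrolled descriptor.
true_id = 12_345
probe = gallery[true_id] + 0.05 * rng.standard_normal(dim).astype(np.float32)

distances, candidates = index.kneighbors(probe.reshape(1, -1))
print("true identity in the candidate short-list:", true_id in candidates[0])
```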
Frizzi, Sebastien. "Apprentissage profond en traitement d'images : application pour la détection de fumée et feu." Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL0007.
Researchers have found a strong correlation between hot summers and the frequency and intensity of forest fires. Global warming due to greenhouse gases such as carbon dioxide is increasing the temperature in some parts of the world. Fires release large amounts of greenhouse gases, causing an increase in the earth's average temperature, which in turn causes an increase in forest fires. Fires destroy millions of hectares of forest, ecosystems sheltering numerous species, and have a significant cost for our societies. Preventing and controlling fires must be a priority to stop this vicious circle. In this context, smoke detection is very important because it is the first clue of an incipient fire. Fire, and especially smoke, are difficult objects to detect in visible images because of their complexity in terms of shape, color and texture. However, deep learning coupled with video surveillance can achieve this goal. Convolutional neural network (CNN) architectures are able to detect smoke and fire in RGB images with very good accuracy, and these structures can segment smoke as well as fire in real time. The richness of the deep network's training database is a very important element for good generalization. This manuscript presents different deep architectures based on convolutional networks to detect and localize smoke and fire in video images in the visible domain.
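A minimal sketch of a fully convolutional network producing per-pixel smoke/fire/background predictions, in the spirit of the segmentation networks mentioned above. The tiny encoder-decoder, class count and dummy data are illustrative stand-ins, not the architectures studied in the thesis.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder producing per-pixel logits for 3 classes:
    background, smoke, fire. Purely illustrative."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # H/2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # H/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # back to H/2
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),        # H, per-pixel logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# One dummy training step on random data, just to show the wiring.
model = TinySegNet()
images = torch.randn(2, 3, 64, 64)            # batch of RGB frames
masks = torch.randint(0, 3, (2, 64, 64))      # per-pixel class labels
loss = nn.CrossEntropyLoss()(model(images), masks)
loss.backward()
print(model(images).shape)                    # torch.Size([2, 3, 64, 64])
```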
Janbain, Imad. "Apprentissage Profond dans l'Hydrologie de l'Estuaire de la Seine : Reconstruction des Données Historiques et Prévision Hydraulique." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMR033.
This PhD thesis explores the application of deep learning (DL) algorithms to hydrological challenges in the Seine River basin, France's second longest river. The Seine's intricate hydraulic regime, shaped by variable rainfall, tributaries, human interventions, and tidal fluctuations, presents an ideal scenario for advanced computational techniques. DL models, particularly recurrent neural networks and attention mechanisms, were chosen for their ability to capture long-term temporal dependencies in time series data, their performance relative to traditional machine learning (ML) models, and their reduced need for manual calibration compared with physically based models. The research focuses on developing custom methodologies to enhance DL efficiency and optimize its application to specific challenges within the Seine River basin. Key challenges include addressing complex interactions within the study area, predicting extreme flood events, managing data limitations, and reconstructing missing historical databases crucial for analyzing water-level fluctuations in response to variables such as climatic changes. The objective is to uncover insights, bridge data gaps, and enhance flood prediction accuracy, particularly for extreme events, thereby advancing smarter water management solutions. Detailed across four articles, our contributions showcase the effectiveness of DL in various hydrological challenges and applications: filling gaps in hourly water-level records that may span several months, projecting water quality parameters 15 years into the past, analyzing station interactions, and predicting extreme flood events on both large (up to 7 days ahead in daily data) and small scales (up to 24 hours ahead in hourly data). Proposed techniques such as the Mini-Look-Back decomposition approach, automated historical reconstruction strategies, custom loss functions, and extensive feature engineering highlight the versatility and efficacy of DL models in overcoming data limitations and outperforming traditional methods. The research emphasizes interpretability alongside prediction accuracy, providing insights into the complex dynamics of hydrological systems. These findings underscore the potential of DL and the developed methodologies in hydrological applications, while suggesting broader applicability across various fields dealing with time series data.
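A bare-bones sketch of the kind of recurrent model used for water-level time series: sliding windows of past hourly levels are mapped to the next value with an LSTM. The window length, layer sizes and synthetic tide-like series are illustrative assumptions; the thesis's Mini-Look-Back decomposition, custom losses and feature engineering are not reproduced here.

```python
import torch
import torch.nn as nn

class LevelLSTM(nn.Module):
    """Map a window of past water levels to the next level."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last hidden state

def make_windows(series: torch.Tensor, window: int):
    """Turn a 1-D series into (inputs, targets) sliding windows."""
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    ys = series[window:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)

# Synthetic 'hourly water level': a tide-like sinusoid plus noise.
t = torch.arange(2000, dtype=torch.float32)
series = torch.sin(2 * torch.pi * t / 12.4) + 0.1 * torch.randn_like(t)
X, y = make_windows(series, window=48)

model = LevelLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(3):                    # a few epochs just to show the loop
    opt.zero_grad()
    loss = nn.MSELoss()(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse={loss.item():.4f}")
```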
Leclerc, Sarah Marie-Solveig. "Automatisation de la segmentation sémantique de structures cardiaques en imagerie ultrasonore par apprentissage supervisé." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI121.
The analysis of medical images plays a critical role in cardiology. Ultrasound imaging, as a real-time, low-cost, bedside modality, is nowadays the most commonly used imaging modality to monitor patient status and perform clinical cardiac diagnosis. However, the semantic segmentation (i.e. the accurate delineation and identification) of heart structures is a difficult task due to the low quality of ultrasound images, characterized in particular by the lack of clear boundaries. To compensate for missing information, the best-performing methods before this thesis relied on the integration of prior information on cardiac shape or motion, which in turn reduced their adaptability. Furthermore, such approaches require manual identification of key points to be adapted to a given image, which makes the full process difficult to reproduce. In this thesis, we propose several original fully automatic algorithms for the semantic segmentation of echocardiographic images based on supervised learning approaches, where the resolution of the problem is automatically set up using data previously analyzed by trained cardiologists. Through the design of a dedicated dataset and evaluation platform, we demonstrate the clinical applicability of fully automatic supervised learning methods, in particular deep learning methods, as well as the possibility of improving robustness by incorporating an automatic prior detection of regions of interest into the full process.
Salem, Tawfiq. "Learning to Map the Visual and Auditory World." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/86.
Dahmane, Khouloud. "Analyse d'images par méthode de Deep Learning appliquée au contexte routier en conditions météorologiques dégradées." Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC020.
Nowadays, vision systems are increasingly used in the road context. They ensure safety and facilitate mobility. These vision systems are generally affected by degraded weather conditions, such as heavy fog or strong rain, phenomena that limit visibility and thus reduce image quality. In order to optimize the performance of vision systems, it is necessary to have a reliable detection system for these adverse weather conditions. There are meteorological sensors dedicated to physical measurement, but they are expensive. Since cameras are already installed along the road, they can simultaneously perform two functions: image acquisition for surveillance applications and physical measurement of weather conditions in place of dedicated sensors. Following the great success of convolutional neural networks (CNN) in classification and image recognition, we used a deep learning method to study the problem of meteorological classification. The objective of our study is first to develop a weather classifier that discriminates between "normal" conditions, fog and rain; in a second step, once the class is known, we seek to develop a model for measuring meteorological visibility. The use of CNNs requires training and test databases. For this, two databases were used: the "Cerema-AWP database" (https://ceremadlcfmds.wixsite.com/cerema-databases) and the "Cerema-AWH database", which has been acquired since 2017 at the Fageole site on the A75 highway. Each image in the two databases is labeled automatically thanks to meteorological data collected on site to characterize various levels of precipitation for rain and fog. The Cerema-AWH database, which was set up as part of our work, contains five sub-databases: normal daytime conditions, heavy fog, light fog, heavy rain and light rain. Rainfall intensities range from 0 mm/h to 70 mm/h and fog visibilities range from 50 m to 1800 m. Among the known neural networks that have demonstrated their performance in recognition and classification, we can cite LeNet, ResNet-152, Inception-v4 and DenseNet-121; we applied these networks in our adverse weather classification system. We begin by studying the use of convolutional neural networks, the nature of the input data, and the optimal hyper-parameters needed to achieve the best results. An analysis of the different components of a neural network is carried out by constructing an instrumental neural network architecture. The conclusions drawn from this analysis show that deep neural networks must be used: this type of network is able to classify the five meteorological classes of the Cerema-AWH database with a classification score of 83%, and three meteorological classes with a score of 99%. Then, an analysis of the input and output data was made to study the impact of scene changes, the input data and the number of meteorological classes on the classification result. Finally, a database transfer method is developed: we study the portability of our adverse weather classification system from one site to another, obtaining a classification score of 63% when transferring between a public database and the Cerema-AWH database. After classification, the second step of our study is to measure the meteorological visibility of the fog. For this, we use a neural network that outputs continuous values. Two fog variants were tested: light and heavy fog combined, and heavy fog (road fog) only. The evaluation uses the correlation coefficient R² between the real and predicted values. We compare this coefficient with the correlation coefficient between the two sensors used to measure weather visibility on site. Among the results obtained, and more specifically for road fog, the correlation coefficient reaches 0.74, close to the physical sensors' value (0.76).
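Below is a compact sketch of the transfer-learning setup such a weather classifier typically uses: a pretrained backbone with its final layer replaced for the weather classes. ResNet-18 is a lightweight stand-in for the larger networks named in the abstract (ResNet-152, Inception-v4, DenseNet-121), and the class list, tensors and single training step are illustrative assumptions rather than the thesis's pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["normal", "fog", "rain"]          # assumed label set

def build_weather_classifier(n_classes: int = len(CLASSES)) -> nn.Module:
    """ResNet-18 backbone with its final layer replaced for weather classes.
    The DEFAULT weights download ImageNet-pretrained parameters."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, n_classes)
    return net

model = build_weather_classifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random tensors standing in for camera frames.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```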
Yang, Lixuan. "Structuring of image databases for the suggestion of products for online advertising." Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1102/document.
The topic of the thesis is the extraction and segmentation of clothing items from still images using techniques from computer vision, machine learning and image description, with a view to suggesting, non-intrusively, similar items from a database of retail products to users. We first propose a dedicated object extractor for dress segmentation that combines local information with prior learning. A person detector is applied to localize sites in the image that are likely to contain the object. Then, an intra-image two-stage learning process is developed to roughly separate foreground pixels from the background. Finally, the object is finely segmented by employing an active contour algorithm that takes into account the previous segmentation and injects specific knowledge about local curvature into the energy function. We then propose a new framework for extracting general deformable clothing items using a three-stage global-local fitting procedure. A set of templates initiates the object extraction process by a global alignment of the model, followed by a local search minimizing a measure of misfit with respect to the potential boundaries in the neighborhood. The results provided by each template are aggregated, with a global fitting criterion, to obtain the final segmentation. In our latest work, we extend the output of a fully convolutional neural network to infer context from local units (superpixels). To achieve this, we optimize an energy function that combines the large-scale structure of the image with the local low-level visual descriptions of superpixels, over the space of all possible pixel labellings. In addition, we introduce a novel dataset called RichPicture, consisting of 1,000 images for clothing extraction from fashion images. The methods are validated on the public database and compare favorably to the other methods according to all the performance measures considered.
Stephanos, Dembe. "Machine Learning Approaches to Dribble Hand-off Action Classification with SportVU NBA Player Coordinate Data." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/etd/3908.
Yang, Lixuan. "Structuring of image databases for the suggestion of products for online advertising." Electronic Thesis or Diss., Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1102.
The topic of the thesis is the extraction and segmentation of clothing items from still images using techniques from computer vision, machine learning and image description, with a view to suggesting, non-intrusively, similar items from a database of retail products to users. We first propose a dedicated object extractor for dress segmentation that combines local information with prior learning. A person detector is applied to localize sites in the image that are likely to contain the object. Then, an intra-image two-stage learning process is developed to roughly separate foreground pixels from the background. Finally, the object is finely segmented by employing an active contour algorithm that takes into account the previous segmentation and injects specific knowledge about local curvature into the energy function. We then propose a new framework for extracting general deformable clothing items using a three-stage global-local fitting procedure. A set of templates initiates the object extraction process by a global alignment of the model, followed by a local search minimizing a measure of misfit with respect to the potential boundaries in the neighborhood. The results provided by each template are aggregated, with a global fitting criterion, to obtain the final segmentation. In our latest work, we extend the output of a fully convolutional neural network to infer context from local units (superpixels). To achieve this, we optimize an energy function that combines the large-scale structure of the image with the local low-level visual descriptions of superpixels, over the space of all possible pixel labellings. In addition, we introduce a novel dataset called RichPicture, consisting of 1,000 images for clothing extraction from fashion images. The methods are validated on the public database and compare favorably to the other methods according to all the performance measures considered.
SUN, HAO-SYUAN, and 孫晧烜. "Keyword Extraction from Law Database Using Deep Learning." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/v3j5b2.
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
105
In recent years, international exchanges and trade cooperation have become frequent, and legal practice among nations has increased as well. To ensure consistency across legal documents, constructing a standard translation dictionary as a translation reference is necessary. However, there are a large number of laws and regulations covering a wide range of areas, so manually marking all legal keywords would be highly inefficient. In this thesis, we propose an automatic Chinese legal keyword extraction algorithm based on deep learning technology, specifically a back-propagation neural network (BPNN). The system extracts from legal documents the legal keywords that a legal expert would identify, and consists of two parts: candidate keyword generation and keyword identification. First, the keyword candidate set is generated using the word segmentation and combination method proposed in this study. Compared with word segmentation without combination, this method effectively improves coverage of the actual legal keyword set; compared with traditional n-gram combinations of Chinese words, it significantly reduces the number of keyword candidates and the time spent on subsequent classification. To identify legal keywords with the BPNN, specific features are first defined based on the characteristics of legal keywords identified by experts to improve the performance of the BPNN. A real-world data set is then used to train the BPNN. Experimental results show the effectiveness of the proposed approach, with overall average accuracy, precision, recall and F-measure values of 92.6%, 89.2%, 88.2% and 88.2%, respectively. All of these measures are significantly improved compared with our previous work.
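A toy illustration of the two-part pipeline described above: generate candidate terms by combining segmented tokens, featurize them, and classify them with a back-propagation network. Scikit-learn's MLPClassifier is used here as a generic BPNN stand-in, and the segmentation, features and tiny labelled data are invented for illustration; they are not the thesis's actual feature set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def candidate_terms(tokens, max_len=3):
    """Combine adjacent word-segmented tokens into candidate terms (up to max_len tokens)."""
    cands = []
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            cands.append("".join(tokens[i:j]))
    return cands

def featurize(term, doc_freq):
    """Tiny hand-made feature vector: term length, document frequency,
    and whether the term ends with a typical legal suffix."""
    legal_suffixes = ("法", "條例", "規則")      # illustrative suffixes only
    return [len(term), doc_freq.get(term, 0), int(term.endswith(legal_suffixes))]

# Toy labelled data: candidate terms marked as legal keywords (1) or not (0).
doc_freq = {"著作權法": 5, "商標法": 4, "契約": 3, "當事人": 2}
terms = ["著作權法", "商標法", "契約", "當事人", "本案", "第一項", "下列情形"]
labels = [1, 1, 1, 0, 0, 0, 0]
X = np.array([featurize(t, doc_freq) for t in terms], dtype=float)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, labels)

# Classify candidates generated from a newly segmented sentence.
new_cands = candidate_terms(["著作", "權法", "之", "當事人"])
new_X = np.array([featurize(c, doc_freq) for c in new_cands], dtype=float)
print(list(zip(new_cands, clf.predict(new_X))))
```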
Al-Waisy, Alaa S., Rami S. R. Qahwaji, Stanley S. Ipson, and Shumoos Al-Fahdawi. "A multimodal deep learning framework using local feature representations for face recognition." 2017. http://hdl.handle.net/10454/13122.
The most recent face recognition systems are mainly dependent on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffer from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet-Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework, termed the multimodal deep face recognition (MDFR) framework, is proposed to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that the representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet-Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
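As a small illustration of one ingredient named above, the snippet below estimates the box-counting fractal dimension of a binarized image patch. The thresholding, box sizes and the random patch are illustrative assumptions; the paper's Curvelet transform stage and the DBN are not reproduced here.

```python
import numpy as np

def box_counting_dimension(binary: np.ndarray, box_sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the box-counting (fractal) dimension of a 2-D binary image:
    count occupied boxes at several scales and fit log(count) vs log(1/size)."""
    counts = []
    for s in box_sizes:
        h, w = binary.shape
        # Trim so the image tiles exactly into s x s boxes, then count non-empty boxes.
        trimmed = binary[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(max(boxes.any(axis=(1, 3)).sum(), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy usage: a random 'edge map' patch standing in for a face-image descriptor input.
rng = np.random.default_rng(0)
patch = rng.random((128, 128)) > 0.7          # binarize with an arbitrary threshold
print(f"estimated box-counting dimension: {box_counting_dimension(patch):.2f}")
```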
Chen, Han-Ting, and 陳漢庭. "Indoor Spatial and Image Information Inquiry through a 3D Modeler for a Deep Learning Indoor Positioning/Mapping Database." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/y9tp9j.
National Chung Hsing University
Department of Electrical Engineering
107
To address the problem of getting lost indoors in an unfamiliar building, our team proposes using photos taken by a mobile phone, together with its inertial information, to position the user and sketch the indoor floor plan. The deep learning algorithm in our team's research requires a large amount of indoor information from different rooms, and such real-world data are not easy to obtain. Therefore, this thesis uses virtual data generated by SketchUp as an aid, focusing on producing that data automatically with SketchUp and Python. The thesis also trains a deep learning algorithm on different data sets (virtual cats/dogs and real cats/dogs); the results show that transfer learning can improve accuracy.
Syue, Jhih-Chen, and 薛至辰. "Indoor Spatial and Image Information Inquiry through a Mobile Platform for a Deep Learning Indoor Positioning/Mapping Database." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/89qkmn.
Full textChen, Sin You, and 陳信佑. "Construction of interactive database for biological big data and deep learning analytics platform using protist proteomes and MASS spectrums as examples." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107CGU05392015%22.&searchmode=basic.