Theses on the topic "Unsupervised deep neural networks"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Unsupervised deep neural networks".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses from a wide variety of disciplines and organize your bibliography correctly.
Donati, Lorenzo. "Domain Adaptation through Deep Neural Networks for Health Informatics". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14888/.
Ahn, Euijoon. "Unsupervised Deep Feature Learning for Medical Image Analysis". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23002.
Cherti, Mehdi. "Deep generative neural networks for novelty generation : a foundational framework, metrics and experiments". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS029/document.
In recent years, significant advances made in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good quality images of known classes): any generated object that is a priori unknown is considered as a failure mode (Salimans et al., 2016) or as spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies. The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to re-represent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation. We propose several other alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. Experiments show that sparsity, noise level, and restricting the capacity of the net eliminate novelty, and that models that are better at recognizing novelty are also good at generating novelty.
Kilinc, Ismail Ozsel. "Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings". Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7415.
McClintick, Kyle W. "Training Data Generation Framework For Machine-Learning Based Classifiers". Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1276.
Boschini, Matteo. "Unsupervised Learning of Scene Flow". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16226/.
Texto completoKalinicheva, Ekaterina. "Unsupervised satellite image time series analysis using deep learning techniques". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS335.
Texto completoThis thesis presents a set of unsupervised algorithms for satellite image time series (SITS) analysis. Our methods exploit machine learning algorithms and, in particular, neural networks to detect different spatio-temporal entities and their eventual changes in the time.In our thesis, we aim to identify three different types of temporal behavior: no change areas, seasonal changes (vegetation and other phenomena that have seasonal recurrence) and non-trivial changes (permanent changes such as constructions or demolishment, crop rotation, etc). Therefore, we propose two frameworks: one for detection and clustering of non-trivial changes and another for clustering of “stable” areas (seasonal changes and no change areas). The first framework is composed of two steps which are bi-temporal change detection and the interpretation of detected changes in a multi-temporal context with graph-based approaches. The bi-temporal change detection is performed for each pair of consecutive images of the SITS and is based on feature translation with autoencoders (AEs). At the next step, the changes from different timestamps that belong to the same geographic area form evolution change graphs. The graphs are then clustered using a recurrent neural networks AE model to identify different types of change behavior. For the second framework, we propose an approach for object-based SITS clustering. First, we encode SITS with a multi-view 3D convolutional AE in a single image. Second, we perform a two steps SITS segmentation using the encoded SITS and original images. Finally, the obtained segments are clustered exploiting their encoded descriptors
Yuan, Xiao. "Graph neural networks for spatial gene expression analysis of the developing human heart". Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-427330.
VENTURA, FRANCESCO. "Explaining black-box deep neural models' predictions, behaviors, and performances through the unsupervised mining of their inner knowledge". Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912972.
Texto completoLi, Yingzhen. "Approximate inference : new visions". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277549.
Texto completoOquab, Maxime. "Convolutional neural networks : towards less supervision for visual recognition". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE061.
Convolutional Neural Networks are flexible learning algorithms for computer vision that scale particularly well with the amount of data that is provided for training them. Although these methods had successful applications already in the ’90s, they were not used in visual recognition pipelines because of their lesser performance on realistic natural images. It is only after the amount of data and the computational power both reached a critical point that these algorithms revealed their potential during the ImageNet challenge of 2012, leading to a paradigm shift in visual recognition. The first contribution of this thesis is a transfer learning setup with a Convolutional Neural Network for image classification. Using a pre-training procedure, we show that image representations learned in a network generalize to other recognition tasks, and their performance scales up with the amount of data used in pre-training. The second contribution of this thesis is a weakly supervised setup for image classification that can predict the location of objects in complex cluttered scenes, based on a dataset indicating only the presence or absence of objects in training images. The third contribution of this thesis aims at finding possible paths for progress in unsupervised learning with neural networks. We study the recent trend of Generative Adversarial Networks and propose two-sample tests for evaluating models. We investigate possible links with concepts related to causality, and propose a two-sample test method for the task of causal discovery. Finally, building on a recent connection with optimal transport, we investigate what these generative algorithms are learning from unlabeled data.
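As a hedged illustration of the transfer-learning setup in the first contribution (a generic torchvision recipe under assumed package versions, not the thesis' exact pre-training procedure; the 20-class head is an arbitrary example):

import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on a large source dataset (here ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in backbone.parameters():
    param.requires_grad = False                       # reuse the learned image representation
backbone.fc = nn.Linear(backbone.fc.in_features, 20)  # new classification head for the target task
# Only the new head is trained on the target data; the point made in the abstract
# is that the transferred representation improves as the pre-training data grows.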
Ackerman, Wesley. "Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains". BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8684.
Belharbi, Soufiane. "Neural networks regularization through representation learning". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR10/document.
Neural network models and deep models are among the leading state-of-the-art models in machine learning. They have been applied in many different domains. The most successful deep neural models are the ones with many layers, which greatly increases their number of parameters. Training such models requires a large number of training samples, which is not always available. One of the fundamental issues in neural networks is overfitting, which is the issue tackled in this thesis. This problem often occurs when the training of large models is performed using few training samples. Many approaches have been proposed to prevent the network from overfitting and improve its generalization performance, such as data augmentation, early stopping, parameter sharing, unsupervised learning, dropout, batch normalization, etc. In this thesis, we tackle the neural network overfitting issue from a representation learning perspective by considering the situation where few training samples are available, which is the case in many real-world applications. We propose three contributions. The first one, presented in chapter 2, is dedicated to dealing with structured output problems to perform multivariate regression when the output variable y contains structural dependencies between its components. Our proposal aims mainly at exploiting these dependencies by learning them in an unsupervised way. Validated on a facial landmark detection problem, learning the structure of the output data has been shown to improve the network's generalization and speed up its training. The second contribution, described in chapter 3, deals with the classification task, where we propose to exploit prior knowledge about the internal representation of the hidden layers in neural networks. This prior is based on the idea that samples within the same class should have the same internal representation. We formulate this prior as a penalty that we add to the training cost to be minimized. Empirical experiments over MNIST and its variants showed an improvement of the network's generalization when using only few training samples. Our last contribution, presented in chapter 4, shows the interest of transfer learning in applications where only few samples are available. The idea consists in re-using the filters of pre-trained convolutional networks that have been trained on large datasets such as ImageNet. Such pre-trained filters are plugged into a new convolutional network with new dense layers. Then, the whole network is trained on a new task. In this contribution, we provide an automatic system based on such a learning scheme with an application to the medical domain. In this application, the task consists in localizing the third lumbar vertebra in a 3D CT scan. A pre-processing of the 3D CT scan to obtain a 2D representation and a post-processing to refine the decision are included in the proposed system. This work has been done in collaboration with the clinic "Rouen Henri Becquerel Center", which provided us with data.
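The class-level prior of chapter 3 can be pictured as an extra penalty added to the training cost; the snippet below is a simplified Python/PyTorch reading of that idea, not the thesis' exact formulation:

import torch

def same_class_penalty(hidden, labels):
    # Mean pairwise squared distance between hidden representations of samples
    # sharing a label; keeping it small pushes samples of the same class
    # towards the same internal representation.
    total, groups = hidden.new_tensor(0.0), 0
    for c in labels.unique():
        h = hidden[labels == c]
        if h.size(0) > 1:
            total = total + torch.pdist(h).pow(2).mean()
            groups += 1
    return total / max(groups, 1)

# loss = cross_entropy(logits, labels) + reg_weight * same_class_penalty(hidden, labels)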
Dekhtiar, Jonathan. "Deep Learning and unsupervised learning to automate visual inspection in the manufacturing industry". Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2513.
Although studied since 1970, automatic visual inspection on production lines still struggles to be applied on a large scale and at low cost. The methods used depend greatly on the availability of domain experts. This inevitably leads to increased costs and reduced flexibility in the methods used. Since 2012, advances in the field of Deep Learning have enabled many advances in this direction, particularly thanks to convolutional neural networks that have achieved near-human performance in many areas associated with visual perception (e.g. object recognition and detection, etc.). This thesis proposes an unsupervised approach to meet the needs of automatic visual inspection. This method, called AnoAEGAN, combines adversarial learning and the estimation of a probability density function. These two complementary approaches make it possible to jointly estimate the pixel-by-pixel probability of a visual defect on an image. The model is trained from a very limited number of images (i.e. fewer than 1000 images) without using expert knowledge to "label" the data beforehand. This method allows increased flexibility with a limited training time and therefore great versatility, demonstrated on ten different tasks without any modification of the model. This method should reduce development costs and the time required to deploy in production. This method can also be deployed in a complementary way to a supervised approach in order to benefit from the advantages of each approach.
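AnoAEGAN itself combines adversarial training with density estimation; as a rough stand-in only, the sketch below scores pixel-wise defects with a plain convolutional autoencoder trained on defect-free images, a simpler technique than the thesis' model (all sizes are assumptions):

import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

def defect_map(model, image):
    # High per-pixel reconstruction error marks a likely visual defect, since
    # the autoencoder has only ever been trained on defect-free surfaces.
    with torch.no_grad():
        error = (model(image) - image) ** 2
    return error / (error.max() + 1e-8)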
Choi, Jin-Woo. "Action Recognition with Knowledge Transfer". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/101780.
Doctor of Philosophy
Recent progress on deep learning has shown remarkable action recognition performance. The remarkable performance is often achieved by transferring the knowledge learned from existing large-scale data to the small-scale data specific to applications. However, existing action recognition models do not always work well on new tasks and datasets because of the following two problems. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor performance on the new datasets and tasks. ii) Directly testing the model trained on the source data on the target data leads to poor performance, as the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. To tackle the first problem, I propose to learn scene-invariant action representations to mitigate background scene-biased human action recognition models. Specifically, the proposed method learns representations that cannot predict the scene types and the correct actions when there is no evidence. I validate the proposed method's effectiveness by transferring the pre-trained model to multiple action understanding tasks. The results show consistent improvement over the baselines for every task and dataset. To handle the second problem, I formulate human action recognition as an unsupervised learning problem on the target data. In this setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on the action, not the background scene. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I then relax the unsupervised target data setting to a sparsely labeled target data setting. Here, we have many labeled videos as source data and sparsely labeled videos as target data. The setting is practical as sometimes we can afford a little bit of cost for labeling target data. I propose multiple video data augmentation methods to inject color, spatial, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks.
Yogeswaran, Arjun. "Self-Organizing Neural Visual Models to Learn Feature Detectors and Motion Tracking Behaviour by Exposure to Real-World Data". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37096.
Nyamapfene, Abel. "Unsupervised multimodal neural networks". Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/844064/.
Texto completoStella, Federico. "Learning a Local Reference Frame for Point Clouds using Spherical CNNs". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20197/.
Texto completoNair, Karthik. "Optimisation of autoencoders for prediction of SNPs determining phenotypes in wheat". Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-437451.
Texto completoSala, Cardoso Enric. "Advanced energy management strategies for HVAC systems in smart buildings". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668528.
Texto completoL’eficàcia dels sistemes de gestió d’energia per afrontar el consum d’energia en edificis és un tema que ha rebut un interès en augment durant els darrers anys a causa de la creixent demanda global d’energia i del gran percentatge d’energia que n’utilitzen actualment els edificis. L’escala d’aquest sector ha atret l'atenció de nombrosa investigació amb l’objectiu de descobrir possibles vies de millora i materialitzar-les amb l’ajuda de recents avenços tecnològics que es podrien aprofitar per disminuir les necessitats energètiques dels edificis. Concretament, en l’àrea d’instal·lacions de calefacció, ventilació i climatització, la disponibilitat de grans bases de dades històriques als sistemes de gestió d’edificis fa possible l’estudi de com d'eficients són realment aquests sistemes quan s’encarreguen d'assegurar el confort dels seus ocupants. En realitat, informes recents indiquen que hi ha una diferència entre el rendiment operatiu ideal i el rendiment generalment assolit a la pràctica. En conseqüència, aquesta tesi considera la investigació de noves estratègies de gestió de l’energia per a instal·lacions de calefacció, ventilació i climatització en edificis, destinades a reduir la diferència de rendiment mitjançant l’ús de mètodes basats en dades per tal d'augmentar el seu coneixement contextual, permetent als sistemes de gestió dirigir l’operació cap a zones de treball amb un rendiment superior. Això inclou tant l’avanç de metodologies de modelat capaces d’extreure coneixement de bases de dades de comportaments històrics d’edificis a través de la previsió de càrregues de consum i l’estimació del rendiment operatiu dels equips que recolzin la identificació del context operatiu i de les necessitats energètiques d’un edifici, tant com del desenvolupament d’una estratègia d’optimització multi-objectiu generalitzable per tal de minimitzar el consum d’energia mentre es satisfan aquestes necessitats energètiques. Els resultats experimentals obtinguts a partir de la implementació de les metodologies desenvolupades mostren un potencial important per augmentar l'eficiència energètica dels sistemes de climatització, mentre que són prou genèrics com per permetre el seu ús en diferents instal·lacions i suportant equips diversos. En conclusió, durant aquesta tesi es va desenvolupar, implementar i validar un marc d’anàlisi i actuació complet mitjançant una base de dades experimental adquirida en una planta pilot durant el període d’investigació de la tesi. Els resultats obtinguts demostren l’eficàcia de les contribucions de manera individual i, en conjunt, representen una solució idònia per ajudar a augmentar el rendiment de les instal·lacions de climatització sense afectar el confort dels seus ocupants
Macdonald, Donald. "Unsupervised neural networks for visualisation of data". Thesis, University of the West of Scotland, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395687.
Berry, Ian Michael. "Data classification using unsupervised artificial neural networks". Thesis, University of Sussex, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390079.
Harpur, George Francis. "Low entropy coding with unsupervised neural networks". Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627227.
Liu, Qian. "Deep spiking neural networks". Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.
Walcott, Terry Hugh. "Market prediction for SMEs using unsupervised neural networks". Thesis, University of East London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.532991.
Vetcha, Sarat Babu. "Fault diagnosis in pumps by unsupervised neural networks". Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300604.
Texto completoSquadrani, Lorenzo. "Deep neural networks and thermodynamics". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Mancevo del Castillo Ayala, Diego. "Compressing Deep Convolutional Neural Networks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.
Abbasi, Mahdieh. "Toward robust deep neural networks". Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67766.
In this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty through encouraging diversity in an ensemble of CNNs. To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism to predict the adversarial examples with low confidence while keeping the predictive confidence of the clean samples high. In the presence of high entropy in our ensemble, we prove that the predictive confidence can be upper-bounded, leading to a globally fixed threshold over the predictive confidence for identifying adversaries. We analytically justify the role of diversity in our ensemble in mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble to black-box and white-box attacks on several benchmark datasets. The second contribution aims to address the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how to differentiate many available OOD sets w.r.t. a given in-distribution task to select the most appropriate one, which in turn induces a model with a high detection rate of unseen OOD sets? To answer this question, we hypothesize that the "protection" level of in-distribution sub-manifolds by each OOD set can be a good property to differentiate OOD sets. To measure the protection level, we then design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN. In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the ability of an Augmented-CNN (A-CNN) and an explicitly-calibrated CNN to detect a significantly larger portion of unseen OOD samples, if they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set (called A-CNN) can also detect black-box Fast Gradient Sign (FGS) adversarial examples. As the third contribution, we investigate more closely the capacity of the A-CNN for the detection of wider types of black-box adversaries. To increase the capability of the A-CNN to detect a larger number of adversaries, we augment its OOD training set with some inter-class interpolated samples. Then, we demonstrate that the A-CNN trained on the most protective OOD set along with the interpolated samples has a consistent detection rate on all types of unseen adversarial examples, whereas training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate on all types of adversaries, particularly the unseen types. We also visually assess the feature space and the decision boundaries in the input space of a vanilla CNN and its augmented counterpart in the presence of adversaries and clean samples. By a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with small risk rates on both clean samples and unusual ones, e.g. adversarial and OOD samples. The last contribution is to show a use-case of the A-CNN for training a robust object detector on a partially-labeled dataset, particularly a merged dataset.
Merging various datasets from similar contexts but with different sets of Objects of Interest (OoI) is an inexpensive way to craft a large-scale dataset which covers a larger spectrum of OoIs. Moreover, merging datasets allows achieving a unified object detector, instead of having several separate ones, resulting in the reduction of computational and time costs. However, merging datasets, especially from a similar context, causes many missing-label instances. With the goal of training an integrated robust object detector on a partially-labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in the merged datasets. Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels to enhance the performance of YOLO, as the current (to date) state-of-the-art object detector.
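A minimal sketch of the voting idea behind the first contribution (a simplified ensemble-averaging scheme in Python/PyTorch; the thesis' specialists ensemble and its confidence bound are more elaborate):

import torch

def ensemble_predict(members, x, threshold=0.5):
    # Average the softmax outputs of diverse ensemble members. Disagreement
    # between members lowers the maximum averaged probability, so adversarial
    # or unusual inputs can be rejected with one globally fixed confidence threshold.
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in members]).mean(dim=0)
    confidence, prediction = probs.max(dim=1)
    prediction[confidence < threshold] = -1   # -1 marks "rejected as likely adversarial"
    return prediction, confidence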
Caron, Mathilde. "Unsupervised Representation Learning with Clustering in Deep Convolutional Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227926.
This master's thesis deals with the problem of unsupervised learning of visual representations with deep convolutional neural networks (CNNs). This is one of the main current challenges in computer vision: bridging the gap between unsupervised and supervised representation learning. We propose a new and simple way to train CNNs on completely unlabelled datasets. Our method consists in jointly optimising a clustering of the representations and training a CNN using the clusters as supervision. We evaluate the models trained with our method on standard transfer-learning experiments from the literature. We find that our method outperforms all self-supervised and unsupervised state-of-the-art approaches, however sophisticated they are. More importantly, our method outperforms those methods even when the unsupervised training set is not ImageNet but an arbitrary subset of images from Flickr.
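A minimal sketch of the clustering-as-supervision loop described above (assumptions: a small single-machine setup, scikit-learn k-means, a data loader that yields unlabelled image batches in a fixed order, and a classifier head mapping the feature dimension to the k clusters; the full method involves details omitted here):

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def cluster_supervised_epoch(backbone, classifier, loader, optimizer, k=100):
    # 1) Extract features for the whole unlabelled set and cluster them.
    backbone.eval()
    with torch.no_grad():
        features = torch.cat([backbone(images).flatten(1) for images in loader])
    pseudo_labels = torch.as_tensor(
        KMeans(n_clusters=k, n_init=10).fit_predict(features.numpy())).long()
    # 2) Train the network using the cluster assignments as pseudo-labels.
    backbone.train()
    seen = 0
    for images in loader:                     # the loader must not reshuffle between the two passes
        targets = pseudo_labels[seen:seen + images.size(0)]
        seen += images.size(0)
        loss = nn.functional.cross_entropy(classifier(backbone(images).flatten(1)), targets)
        optimizer.zero_grad(); loss.backward(); optimizer.step()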
Manenti, Céline. "Découverte d'unités linguistiques à l'aide de méthodes d'apprentissage non supervisé". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30074.
The discovery of elementary linguistic units (phonemes, words) only from sound recordings is an unresolved problem that arouses strong interest from the automatic speech processing community, as evidenced by the many recent contributions to the state of the art. During this thesis, we focused on using neural networks to address the problem. We approached the problem using neural networks in a supervised, weakly supervised and multilingual manner. We developed automatic phoneme segmentation and phonetic classification tools based on convolutional neural networks. The automatic segmentation tool obtained a 79% F-measure on the BUCKEYE conversational speech corpus. This result is similar to a human annotator according to the inter-annotator agreement provided by the creators of the corpus. In addition, it does not need a lot of data (about ten minutes per speaker and 5 different speakers) to be effective, and it is portable to other languages (especially under-resourced languages such as Xitsonga). The phonetic classification system makes it possible to set the various parameters and hyperparameters that are useful for an unsupervised scenario. In the unsupervised context, neural networks (autoencoders) allowed us to generate new parametric representations, concentrating the information of the input frame and its neighboring frames. We studied their utility for audio compression from the raw signal, for which they were effective (low RMS, even at 99% compression). We also carried out an innovative pre-study on a different use of neural networks, to generate vectors of parameters not from the outputs of the layers but from the values of the weights of the layers. These parameters are designed to mimic Linear Predictive Coefficients (LPC). In the context of the unsupervised discovery of phoneme-like units (called pseudo-phones in this thesis) and the generation of new phonetically discriminative parametric representations, we coupled a neural network with a clustering tool (k-means). The iterative alternation of these two tools allowed the generation of phonetically discriminating parameters for the same speaker: low intra-speaker ABx error rates of 7.3% for English, 8.5% for French and 8.4% for Mandarin were obtained. These results represent an absolute gain of about 4% compared to the baseline (conventional MFCC parameters) and are close to the best current approaches (1% more than the winner of the Zero Resource Speech Challenge 2017). The inter-speaker results vary between 12% and 15% depending on the language, compared to 21% to 25% for MFCCs.
Bishop, Griffin R. "Unsupervised Semantic Segmentation through Cross-Instance Representation Similarity". Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1371.
Längkvist, Martin. "Modeling time-series with deep networks". Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-39415.
Lu, Yifei. "Deep neural networks and fraud detection". Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-331833.
Kalogiras, Vasileios. "Sentiment Classification with Deep Neural Networks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217858.
Sentiment analysis is a subfield of natural language processing (NLP) that attempts to analyze the sentiment of written text. It is a complex problem that entails different challenges. For this reason, it has been studied extensively. In past years, traditional machine learning algorithms or handcrafted methodologies used to provide state-of-the-art results. However, the recent deep learning renaissance shifted interest towards end-to-end deep learning models. On the one hand this resulted in more powerful models, but on the other hand clear mathematical reasoning or intuition behind distinct models is still lacking. As a result, in this thesis, an attempt is made to shed some light on recently proposed deep learning architectures for sentiment classification. A study of their differences is performed, along with empirical results on how changes in the structure or capacity of a model can affect its accuracy and the way it represents and "comprehends" sentences.
Choi, Keunwoo. "Deep neural networks for music tagging". Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/46029.
Yin, Yonghua. "Random neural networks for deep learning". Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/64917.
Zagoruyko, Sergey. "Weight parameterizations in deep neural networks". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1129/document.
Multilayer neural networks were first proposed more than three decades ago, and various architectures and parameterizations have been explored since. Recently, graphics processing units enabled very efficient neural network training, and allowed training much larger networks on larger datasets, dramatically improving performance on various supervised learning tasks. However, the generalization is still far from human level, and it is difficult to understand on what the decisions made are based. To improve generalization and understanding, we revisit the problems of weight parameterizations in deep neural networks. We identify the most important, to our mind, problems in modern architectures: network depth, parameter efficiency, and learning multiple tasks at the same time, and try to address them in this thesis. We start with one of the core problems of computer vision, patch matching, and propose to use convolutional neural networks of various architectures to solve it, instead of manually hand-crafting descriptors. Then, we address the task of object detection, where a network should simultaneously learn to predict both the class of the object and its location. In both tasks we find that the number of parameters in the network is the major factor determining its performance, and we explore this phenomenon in residual networks. Our findings show that their original motivation, training deeper networks for better representations, does not fully hold, and wider networks with fewer layers can be as effective as deeper ones with the same number of parameters. Overall, we present an extensive study on architectures and weight parameterizations, and ways of transferring knowledge between them.
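The "wider rather than deeper" finding can be illustrated with a residual block whose channel count is multiplied by a widening factor (a minimal sketch; the architectures actually studied differ in details such as dropout and stage layout):

import torch.nn as nn

class WideResidualBlock(nn.Module):
    # Capacity is added by widening the block (more channels) rather than by
    # stacking more blocks (more depth).
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False))
    def forward(self, x):
        return x + self.body(x)               # identity shortcut

# With widening factor k = 4, a short stage of 16 * 4 = 64-channel blocks can match
# the parameter count of a much deeper stack of 16-channel blocks.
stage = nn.Sequential(*[WideResidualBlock(16 * 4) for _ in range(3)])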
Ioannou, Yani Andrew. "Structural priors in deep neural networks". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278976.
Billman, Linnar, and Johan Hullberg. "Speech Reading with Deep Neural Networks". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-360022.
Texto completoWang, Shenhao. "Deep neural networks for choice analysis". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129894.
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 117-128).
As deep neural networks (DNNs) outperform classical discrete choice models (DCMs) in many empirical studies, one pressing question is how to reconcile them in the context of choice analysis. So far researchers mainly compare their prediction accuracy, treating them as completely different modeling methods. However, DNNs and classical choice models are closely related and even complementary. This dissertation seeks to lay out a new foundation of using DNNs for choice analysis. It consists of three essays, which respectively tackle the issues of economic interpretation, architectural design, and robustness of DNNs by using classical utility theories. Essay 1 demonstrates that DNNs can provide economic information as complete as the classical DCMs.
The economic information includes choice predictions, choice probabilities, market shares, substitution patterns of alternatives, social welfare, probability derivatives, elasticities, marginal rates of substitution (MRS), and heterogeneous values of time (VOT). Unlike DCMs, DNNs can automatically learn the utility function and reveal behavioral patterns that are not prespecified by modelers. However, the economic information from DNNs can be unreliable because the automatic learning capacity is associated with three challenges: high sensitivity to hyperparameters, model non-identification, and local irregularity. To demonstrate the strength of DNNs as well as the three issues, I conduct an empirical experiment by applying the DNNs to a stated preference survey and discuss successively the full list of economic information extracted from the DNNs. Essay 2 designs a particular DNN architecture with alternative-specific utility functions (ASU-DNN) by using prior behavioral knowledge.
Theoretically, ASU-DNN reduces the estimation error of a fully connected DNN (F-DNN) because of its lighter architecture and sparser connectivity, although the constraint of alternative-specific utility could cause ASU-DNN to exhibit a larger approximation error. Both ASU-DNN and F-DNN can be treated as special cases of DNN architecture design guided by a utility connectivity graph (UCG). Empirically, ASU-DNN has 2-3% higher prediction accuracy than F-DNN. The alternative-specific connectivity constraint, as a domain-knowledge-based regularization method, is more effective than other regularization methods. This essay demonstrates that prior behavioral knowledge can be used to guide the architecture design of DNNs, to function as an effective domain-knowledge-based regularization method, and to improve both the interpretability and predictive power of DNNs in choice analysis.
Essay 3 designs a theory-based residual neural network (TB-ResNet) with a two-stage training procedure, which synthesizes decision-making theories and DNNs in a linear manner. Three instances of TB-ResNets based on choice modeling (CM-ResNets), prospect theory (PT-ResNets), and hyperbolic discounting (HD-ResNets) are designed. Empirically, compared to the decision-making theories, the three instances of TB-ResNets predict significantly better in the out-of-sample test and become more interpretable owing to the rich utility function augmented by DNNs. Compared to the DNNs, the TB-ResNets predict better because the decision-making theories aid in localizing and regularizing the DNN models. TB-ResNets also become more robust than DNNs because the decision-making theories stabilize the local utility function and the input gradients.
This essay demonstrates that it is both feasible and desirable to combine the handcrafted utility theory and automatic utility specification, with joint improvement in prediction, interpretation, and robustness.
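A minimal Python/PyTorch sketch of the alternative-specific utility idea from Essay 2 (a simplified reading of ASU-DNN; layer sizes and the input layout are assumptions):

import torch
import torch.nn as nn

class ASUNet(nn.Module):
    # One small sub-network per alternative computes that alternative's utility
    # from its own attributes only (sparser connectivity than a fully connected
    # DNN); choice probabilities follow from a softmax over the utilities.
    def __init__(self, attrs_per_alt, n_alternatives, hidden=16):
        super().__init__()
        self.utilities = nn.ModuleList([
            nn.Sequential(nn.Linear(attrs_per_alt, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_alternatives)])
    def forward(self, x):                      # x: (batch, n_alternatives, attrs_per_alt)
        v = torch.cat([net(x[:, i]) for i, net in enumerate(self.utilities)], dim=1)
        return torch.softmax(v, dim=1)         # one choice probability per alternative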
by Shenhao Wang.
Ph. D. in Computer and Urban Science, Massachusetts Institute of Technology, Department of Urban Studies and Planning
Sunnegårdh, Christina. "Scar detection using deep neural networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299576.
Object detection is a computer vision method that includes both localisation and classification of objects in images. The number of application areas for the method is constantly growing, and this study investigates the unexplored area of using deep neural networks for scar detection. The study also explores using scar detection as the basis for the binary classification task of determining whether an image contains a visible scar or not. Two pre-trained object detection models, Faster R-CNN and RetinaNet, were trained with different hyperparameters on 1830 manually labelled images. Faster R-CNN Inception ResNet V2 achieved the best results in terms of average precision (AP), closely followed by Faster R-CNN ResNet50 and finally RetinaNet. The results indicate the superiority of Faster R-CNN over RetinaNet, as well as of using Inception ResNet V2 for feature extraction, most likely owing to its use of convolutional filters of several sizes at the same levels of the network. Regarding detection time per image, RetinaNet was fastest, followed by Faster R-CNN ResNet50 and finally Faster R-CNN Inception ResNet V2. For the binary classification task, the models were tested on 200 images, half of which contained clearly visible scars. Faster R-CNN ResNet50 achieved the highest accuracy, followed by Faster R-CNN Inception ResNet V2 and finally RetinaNet. While RetinaNet's accuracy was mainly penalised for overlooking scars in images, Faster R-CNN Inception ResNet V2 detected several actual scars that had not been labelled because of poor image quality. This may, however, be a matter of subjective labelling, with the model penalised for something that in other cases would be considered correct. In summary, this study shows promising results for using object detection to detect scars in images. While the two-stage model Faster R-CNN has the advantage in terms of AP, the one-stage model RetinaNet has the advantage in terms of detection time. Suggestions for future work include placing greater emphasis on data labelling to eliminate potential subjectivity, and including training data containing objects that the models mistook for scars, such as open wounds, knuckles and background objects visually similar to scars.
Landeen, Trevor J. "Association Learning Via Deep Neural Networks". DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7028.
Srivastava, Sanjana. "On foveation of deep neural networks". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123134.
Texto completoThesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 61-63).
The human ability to recognize objects is impaired when the object is not shown in full. "Minimal images" are the smallest regions of an image that remain recognizable for humans. [26] show that a slight modification of the location and size of the visible region of the minimal image produces a sharp drop in human recognition accuracy. In this paper, we demonstrate that such drops in accuracy due to changes of the visible region are a common phenomenon between humans and existing state-of-the-art convolutional neural networks (CNNs), and are much more prominent in CNNs. We found many cases where CNNs classified one region correctly and the other incorrectly, though they only differed by one row or column of pixels, and were often bigger than the average human minimal image size. We show that this phenomenon is independent from previous works that have reported lack of invariance to minor modifications in object location in CNNs. Our results thus reveal a new failure mode of CNNs that also affects humans to a lesser degree. They expose how fragile CNN recognition ability is for natural images even without synthetic adversarial patterns being introduced. This opens potential for CNN robustness in natural images to be brought to the human level by taking inspiration from human robustness methods. One of these is eccentricity dependence, a model of human focus in which attention to the visual input degrades in proportion to distance from the focal point [7]. We demonstrate that applying the "inverted pyramid" eccentricity method, a multi-scale input transformation, makes CNNs more robust to useless background features than a standard raw-image input. Our results also find that using the inverted pyramid method generally reduces useless background pixels, therefore reducing required training data.
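A minimal sketch of the "inverted pyramid" multi-scale input described above (crop sizes, output resolution and the centered focal point are illustrative assumptions):

import torch
import torch.nn.functional as F

def inverted_pyramid(image, crop_sizes=(32, 64, 128, 224), out_size=64):
    # Center crops of increasing size, all resized to the same resolution, so
    # that detail degrades with distance from the focal point (eccentricity).
    channels, height, width = image.shape
    scales = []
    for size in crop_sizes:
        size = min(size, height, width)
        top, left = (height - size) // 2, (width - size) // 2
        crop = image[:, top:top + size, left:left + size].unsqueeze(0)
        scales.append(F.interpolate(crop, size=out_size, mode="bilinear", align_corners=False))
    return torch.cat(scales)                   # (num_scales, channels, out_size, out_size)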
by Sanjana Srivastava.
M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Grechka, Asya. "Image editing with deep neural networks". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS683.pdf.
Image editing has a rich history which dates back two centuries. That said, "classic" image editing requires strong artistic skills as well as considerable time, often on the scale of hours, to modify an image. In recent years, considerable progress has been made in generative modeling, which has allowed realistic and high-quality image synthesis. However, real image editing is still a challenge which requires a balance between novel generation and faithfully preserving parts of the original image. In this thesis, we explore different approaches to edit images, leveraging three families of generative networks: GANs, VAEs and diffusion models. First, we study how to use a GAN to edit a real image. While methods exist to modify generated images, they do not generalize easily to real images. We analyze the reasons for this and propose a solution to better project a real image into the GAN's latent space so as to make it editable. Then, we use variational autoencoders with vector quantization to directly obtain a compact image representation (which we could not obtain with GANs) and optimize the latent vector so as to match a desired text input. We aim to constrain this problem, which could otherwise be vulnerable to adversarial attacks. We propose a method to choose the hyperparameters while simultaneously optimizing the image quality and the fidelity to the original image. We present a robust evaluation protocol and show the interest of our method. Finally, we address the problem of image editing from the point of view of inpainting. Our goal is to synthesize a part of an image while preserving the rest unmodified. For this, we leverage pre-trained diffusion models and build on their classic inpainting method, replacing, at each denoising step, the part which we do not wish to modify with the noisy real image. However, this method leads to a disharmonization between the real and generated parts. We propose an approach based on calculating the gradient of a loss which evaluates the harmonization of the two parts, and we guide the denoising process with this gradient.
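The projection step of the first contribution can be pictured as optimising a latent code until the generator reproduces the target image. The sketch below is a generic inversion loop (the pre-trained `generator` is hypothetical, and the thesis' actual projection method adds further ingredients):

import torch
import torch.nn.functional as F

def project_to_latent(generator, image, latent_dim=512, steps=500, lr=0.05):
    # Optimise z so that generator(z) matches the real image; the recovered
    # code can then be edited in latent space and decoded back to an image.
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(generator(z), image)   # a perceptual term is usually added in practice
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    return z.detach()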
Plumbley, Mark David. "An information-theoretic approach to unsupervised connectionist models". Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387051.
Texto completoAl, Chami Zahi. "Estimation de la qualité des données multimedia en temps réel". Thesis, Pau, 2021. http://www.theses.fr/2021PAUU3066.
Over the past decade, data providers have been generating and streaming a large amount of data, including images, videos, audio, etc. In this thesis, we focus on processing images, since they are the most commonly shared among users on the Internet. In particular, treating images containing faces has received great attention due to its numerous applications, such as entertainment and social media apps. However, several challenges can arise during the processing and transmission phase: firstly, the enormous number of images shared and produced at a rapid pace requires a significant amount of time to be processed and delivered; secondly, images are subject to a wide range of distortions during processing, transmission, or a combination of many factors that could damage the images' content. Two main contributions are developed. First, we introduce a Full-Reference Image Quality Assessment Framework in Real-Time, capable of: 1) preserving the images' content by ensuring that some useful visual information can still be extracted from the output, and 2) providing a way to process the images in real time in order to cope with the huge number of images that are being received at a rapid pace. The framework described here is limited to processing those images that have access to their reference version (a.k.a. Full-Reference). Secondly, we present a No-Reference Image Quality Assessment Framework in Real-Time. It has the following abilities: a) assessing the distorted image without having its distortion-free version, b) preserving the most useful visual information in the images before publishing, and c) processing the images in real time, even though No-Reference image quality assessment models are considered very complex. Our framework offers several advantages over the existing approaches, in particular: i. it locates the distortion in an image in order to directly assess the distorted parts instead of processing the whole image, ii. it has an acceptable trade-off between quality prediction accuracy and execution latency, and iii. it could be used in several applications, especially those that work in real time. The architecture of each framework is presented in the chapters, detailing the modules and components of the framework. Then, a number of simulations are made to show the effectiveness of our approaches in solving our challenges in relation to the existing approaches.
Haddad, Josef, and Carl Piehl. "Unsupervised anomaly detection in time series with recurrent neural networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259655.
Artificial neural networks (ANNs) have been applied to many problems. However, most ANN models do not attempt to mimic the brain in detail. One example of an ANN that is constrained to mimic the brain is Hierarchical Temporal Memory (HTM). This study applies HTM and Long Short-Term Memory (LSTM) to anomaly detection problems in time series in order to investigate their strengths and weaknesses for this problem. The anomalies in this study are limited to point anomalies and the time series are univariate. Already existing implementations that use these networks for unsupervised time-series anomaly detection are used in this study. We mainly use our own synthetic time series to investigate how the networks handle noise and how they handle different properties that a time series can have. Our results show that both networks can handle noise, and the performance difference regarding noise robustness was not large enough to distinguish the models. LSTM performed better than HTM at detecting point anomalies in our synthetic time series that follow a sine curve, but a conclusion regarding which network performs best overall remains open.
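A minimal sketch of prediction-error-based point-anomaly detection with an LSTM on a univariate series (a simplified stand-in for the existing implementations compared in the thesis; the window size and thresholding are assumptions):

import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # predict the next value of the series

def anomaly_scores(model, series, window=50):
    # Absolute one-step prediction error for each point; points whose error
    # exceeds a chosen threshold are reported as point anomalies.
    windows = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    with torch.no_grad():
        predictions = model(windows.unsqueeze(-1)).squeeze(-1)
    return (predictions - series[window:]).abs()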
Galtier, Mathieu. "A mathematical approach to unsupervised learning in recurrent neural networks". Paris, ENMP, 2011. https://pastel.hal.science/pastel-00667368.
In this thesis, we propose to give a mathematical sense to the claim: the neocortex builds itself a model of its environment. We study the neocortex as a network of spiking neurons undergoing slow STDP learning. By considering that the number of neurons is close to infinity, we propose a new mean-field method to find the "smoother" equation describing the firing rate of populations of these neurons. Then, we study the dynamics of this averaged system with learning. By assuming that the modification of the synapses' strength is very slow compared to the activity of the network, it is possible to use tools from temporal averaging theory. These tools show that the connectivity of the network always converges towards a single equilibrium point, which can be computed explicitly. This connectivity gathers the knowledge of the network about the world. Finally, we analyze the equilibrium connectivity and compare it to the inputs. By seeing the inputs as the solution of a dynamical system, we are able to show that the connectivity embeds the entire information about this dynamical system. Indeed, we show that the symmetric part of the connectivity leads to finding the manifold over which the input dynamical system is defined, and that the antisymmetric part of the connectivity corresponds to the vector field of the input dynamical system. In this context, the network acts as a predictor of future events in its environment.
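The decomposition mentioned at the end can be written compactly; the identity below is standard linear algebra, with the interpretation of the two parts taken from the abstract:

W = \underbrace{\tfrac{1}{2}\,(W + W^{\top})}_{\text{symmetric part: manifold of the input dynamics}} + \underbrace{\tfrac{1}{2}\,(W - W^{\top})}_{\text{antisymmetric part: vector field of the input dynamics}}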
Chen, Zhe. "Augmented Context Modelling Neural Networks". Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20654.