Dissertations / Theses on the topic 'Sparse Deep Learning'

Consult the top 29 dissertations / theses for your research on the topic 'Sparse Deep Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tavanaei, Amirhossein. "Spiking Neural Networks and Sparse Deep Learning." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10807940.

Full text
Abstract:

This dissertation proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically spiking convolutional neural networks (CNNs). Training a multi-layer spiking network is difficult because the output spikes are non-differentiable, so the backpropagation method commonly used for non-spiking networks cannot be applied directly. Our methods use novel versions of the brain-like, local learning rule known as spike-timing-dependent plasticity (STDP) that incorporate supervised and unsupervised components. We start from conventional learning methods and convert them into spatio-temporally local rules suited to SNNs.

The training uses two components, one for unsupervised feature extraction and one for supervised classification. The first component consists of new STDP rules for spike-based representation learning that train convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike-pattern classification that approximate gradient descent by combining STDP and anti-STDP rules; specifically, the supervised model approximates gradient descent using temporally local STDP rules. Stacking these components implements a novel sparse, spiking deep learning model. The model is a variant of spiking CNNs built from integrate-and-fire (IF) neurons, with performance comparable to state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our network architecture is the only spiking CNN that provides bio-inspired STDP rules in a hierarchy of feature extraction and classification within an entirely spike-based framework.
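
As a concrete illustration of the kind of rule involved, here is a minimal Python sketch of a pairwise STDP/anti-STDP update. It is not the author's exact formulation; the time constant, learning rates, and weight clipping are illustrative assumptions.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0, anti=False):
    """Pairwise STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise. anti=True flips the rule (anti-STDP),
    which is how a supervised error signal can push weights the other way."""
    dt = t_post - t_pre            # spike-time difference (ms)
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau)
    else:
        dw = -a_minus * np.exp(dt / tau)
    if anti:
        dw = -dw
    return np.clip(w + dw, 0.0, 1.0)   # keep the weight in a bounded range

# Example: a pre-before-post pairing strengthens the synapse.
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)
```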

2

Beretta, Davide. "Experience Replay in Sparse Rewards Problems using Deep Reinforcement Techniques." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17531/.

Full text
Abstract:
This work introduces the reader to Reinforcement Learning, an area of Machine Learning that has seen a great deal of research in recent years. It then presents some modifications to ACER, a well-known and very interesting algorithm that makes use of Experience Replay. The goal is to improve its performance on general problems, and in particular on sparse-reward problems. To assess the proposed ideas, we use Montezuma's Revenge, a game developed for the Atari 2600 that is considered among the hardest to tackle.
3

Benini, Francesco. "Predicting death in games with deep reinforcement learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20755/.

Full text
Abstract:
This work belongs to a branch of machine learning called reinforcement learning. Its goal is to improve on work by our colleague M. Conciatori, focusing on games with very sparse rewards, where the previous solution failed to make progress. Sparse-reward games are those in which the agent must perform a large number of actions before receiving any reward that tells it it is executing the correct sequence of actions. Among games with these characteristics we focus on one, Montezuma's Revenge, which stands out because a large number of actions is required to obtain even the first reward; for this reason virtually all existing algorithms have failed to achieve satisfactory results on it. The idea of continuing M. Conciatori's work arose precisely because Lower Bound DQN only managed to obtain the first reward. Our main goal is therefore to find a solution that achieves good results, and to that end we predict the agent's death, thereby helping it avoid wrong actions and collect larger rewards. In this setting the agent needs more time to explore the environment and learn which behaviours yield a positive return, so we assist it by returning a penalty for whatever harms its progress, assigning a negative reward to all actions that terminate the episode, i.e. cause its death. These negative experiences are stored in a dedicated buffer, called the done buffer, from which they are later sampled to train the network. When the agent finds itself in the same situation again, it knows which action is best avoided, and over time which one to choose.
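
A minimal Python sketch of the done-buffer idea described above; the names and the penalty value are illustrative assumptions, not the thesis' actual code.

```python
import random
from collections import deque

done_buffer = deque(maxlen=10_000)   # holds only episode-terminating transitions
DEATH_PENALTY = -1.0                 # illustrative value

def store_if_death(state, action, next_state, done):
    """Keep 'death' transitions apart, with the reward replaced by a penalty,
    so the network can be trained to avoid episode-ending actions."""
    if done:
        done_buffer.append((state, action, DEATH_PENALTY, next_state, True))

def sample_death_batch(batch_size=32):
    k = min(batch_size, len(done_buffer))
    return random.sample(done_buffer, k) if k else []
```
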
4

Vekhande, Swapnil Sudhir. "Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90182.

Full text
Abstract:
Computed Tomography (CT) finds applications across domains such as medical diagnosis, security screening, and scientific research. In medical imaging, CT allows physicians to diagnose injuries and disease more quickly and accurately than other imaging techniques. However, CT is one of the most significant contributors of radiation dose to the general population, and the radiation required for scanning can lead to cancer; on the other hand, too low a dose sacrifices image quality and can cause misdiagnosis. To reduce the radiation dose, sparse-view CT, which captures a smaller number of projections, is a promising alternative. However, images reconstructed from linearly interpolated views suffer from severe artifacts. Recently, deep learning-based methods have increasingly been used to recover the missing data by learning the nature of the image formation process. Current methods are promising but operate mostly in the image domain, presumably due to the lack of projection data. Another limitation is the use of simulated data with limited sparsity (up to 75%). This research interpolates the missing sparse-view CT data in the sinogram domain using deep learning. To this end, a residual U-Net architecture has been trained with patch-wise projection data to minimize the Euclidean distance between the ground truth and the interpolated sinogram. The trained model can generate the missing projection data even at high sparsity. The results show improvements in SSIM and RMSE of 14% and 52% respectively over linear interpolation-based methods. Thus, experimental sparse-view CT data with 90% sparsity has been successfully interpolated while improving CT image quality.
Master of Science
Computed Tomography is a commonly used imaging technique owing to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer; on the other hand, the radiation dose is critical for image quality and subsequent diagnosis. Thus, image reconstruction from only a small number of projections is an open research problem. Deep learning techniques have already revolutionized various computer vision applications. Here, we use a method that fills in highly sparse missing CT data. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving image quality.
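
A minimal PyTorch-style sketch of the training objective described above, assuming a `model` standing in for the residual U-Net and a regular view-subsampling pattern (both assumptions for illustration).

```python
import torch
import torch.nn.functional as F

def sparse_view_loss(model, full_sino, keep_every=10):
    """full_sino: (B, 1, n_views, n_detectors). Keep every k-th projection view
    (simulating sparse-view acquisition), zero the rest, and train the network
    to recover the full sinogram under a Euclidean (MSE) objective."""
    mask = torch.zeros_like(full_sino)
    mask[:, :, ::keep_every, :] = 1.0      # the measured views
    predicted = model(full_sino * mask)    # 'model' stands in for the residual U-Net
    return F.mse_loss(predicted, full_sino)
```
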
5

Hoori, Ammar O. "MULTI-COLUMN NEURAL NETWORKS AND SPARSE CODING NOVEL TECHNIQUES IN MACHINE LEARNING." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5743.

Full text
Abstract:
Accurate and fast machine learning (ML) algorithms are vital in artificial intelligence (AI) applications. On complex datasets, traditional ML methods such as radial basis function neural networks (RBFN), sparse coding (SC) with dictionary learning, and particle swarm optimization (PSO) can yield poor results, large structures, slow training, and/or slow testing. This dissertation introduces four novel ML techniques: the multi-column RBFN network (MCRN), the projected dictionary learning algorithm (PDL), and the multi-column adaptive and non-adaptive particle swarm optimization techniques (MC-APSO and MC-PSO). These techniques provide efficient alternatives to the traditional methods, demonstrating more accurate results, faster training and testing, and parallelized, structured solutions. MCRN deploys small RBFNs in a parallel structure to speed up both training and testing; each RBFN is trained with a subset of the dataset, and the overall structure provides more accurate results. PDL introduces a conceptual dictionary learning method that updates the dictionary atoms with the reconstructed input blocks, improving the sparsity of the extracted features and hence the image denoising results. MC-PSO and MC-APSO provide faster and more accurate alternatives to the slow PSO and APSO evolutionary techniques, using the multi-column parallelized RBFN structure to improve results and speed on a wide range of classification datasets. The novel techniques are trained and tested on benchmark dataset problems, and the results are compared with state-of-the-art counterparts. In most experiments, the novel techniques show superiority in accuracy and speed, making them good alternatives for solving difficult ML problems.
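
A toy Python sketch of the multi-column idea: several small RBF networks, each fitted on a subset of the data, whose predictions are combined. The network sizes, the least-squares readout, and the simple averaging are illustrative assumptions.

```python
import numpy as np

class SmallRBFN:
    """A tiny RBF network: k Gaussian centres plus a linear least-squares readout."""
    def __init__(self, k=10, gamma=1.0):
        self.k, self.gamma = k, gamma

    def fit(self, X, y):                    # assumes each subset has >= k points
        idx = np.random.choice(len(X), self.k, replace=False)
        self.centres = X[idx]
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centres[None]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def predict(self, X):
        return self._phi(X) @ self.w

def train_multi_column(X, y, n_columns=4):
    """Each column sees a different slice of the data, so the columns can be
    trained (and queried) in parallel."""
    splits = np.array_split(np.random.permutation(len(X)), n_columns)
    return [SmallRBFN().fit(X[s], y[s]) for s in splits]

def predict_multi_column(columns, X):
    return np.mean([c.predict(X) for c in columns], axis=0)  # simple combination
```
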
6

Bonfiglioli, Luca. "Identificazione efficiente di reti neurali sparse basata sulla Lottery Ticket Hypothesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Frankle and Carbin (2018) show that, given a randomly initialized dense network, there exist sparse subnetworks of that network that can reach higher accuracy than the dense network and need fewer training iterations to reach early stopping. Such subnetworks are called winning tickets. Identifying them, however, requires at least one full training run of the dense model, which limits their practical use except as a compression technique. This thesis aims to find a more efficient variant of the magnitude-based pruning methods proposed in the literature, evaluating several heuristic and data-driven methods for obtaining winning tickets without completing the training of the dense network. Comparing against the results of Zhou et al. (2019), we show that a winning ticket's accuracy at initialization is not predictive of the final accuracy reached after training, and that, consequently, optimizing accuracy at initialization does not guarantee equally high accuracy after retraining. We also show the existence of good tickets, an entire spectrum of sparse networks whose performance is comparable, at least along one dimension, to that of winning tickets, and that subnetworks in this category can be identified after only a few training iterations of the initial dense network. These sparse networks are identified in a way similar to You et al. (2020), by predicting the winning ticket before the dense network's training is complete. We show that using heuristics other than magnitude-based pruning for these predictions yields significantly better predictions on the architectures examined, at marginally higher computational cost.
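
For reference, a minimal PyTorch-style sketch of the global magnitude-pruning baseline that such ticket-prediction heuristics start from; the global threshold and the restriction to weight matrices are illustrative choices.

```python
import torch

def magnitude_mask(model, sparsity=0.9):
    """Global magnitude pruning: keep the largest-|w| fraction of weights.
    Applied after only a few epochs of dense training, this is the baseline
    'early ticket prediction' that the heuristics above compete with."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() > 1}
```
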
7

Pawlowski, Filip Igor. "High-performance dense tensor and sparse matrix kernels for machine learning." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.

Full text
Abstract:
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use, improve data locality, and hence improve cache reuse of the kernel operations. We design both sequential and shared-memory parallel algorithms. In the first part of the thesis we focus on dense tensor kernels. Tensor kernels include the tensor--vector multiplication (TVM), tensor--matrix multiplication (TMM), and tensor--tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure and sequential and parallel algorithms for it. We propose a novel data structure which stores the tensor as blocks, ordered using the space-filling curve known as the Morton curve (or Z-curve). The key idea consists of dividing the tensor into blocks small enough to fit in cache, and storing them according to the Morton order, while keeping a simple, multi-dimensional order on the individual elements within them. Thus, high-performance BLAS routines can be used as microkernels for each block. We evaluate our techniques on a set of experiments. The results not only demonstrate superior performance of the proposed approach over state-of-the-art variants by up to 18%, but also show that the proposed approach induces 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure naturally extends to other tensor kernels, yielding up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms which use the proposed data structure. Several alternative parallel algorithms were characterized theoretically and implemented using OpenMP to compare them experimentally. Our results on up to 8-socket systems show near-peak performance for the proposed algorithm for 2, 3, 4, and 5-dimensional tensors. In the second part of the thesis, we explore sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using sparse DNN networks to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix--sparse matrix multiplication (SpGEMM) repeated for each layer in the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained using hypergraph partitioning software. The model-parallel variant uses barriers to synchronize at layers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between the layers and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to a 2x speed-up versus the baseline.
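
A small Python sketch of the Morton (Z-order) indexing underlying the proposed block layout; the bit width and the example grid are illustrative assumptions.

```python
def morton_index(block_coords, bits=10):
    """Interleave the bits of an N-dimensional block coordinate to obtain its
    position on the Morton (Z-order) curve; storing blocks in this order keeps
    spatially close blocks close in memory, which helps cache reuse."""
    idx, ndim = 0, len(block_coords)
    for bit in range(bits):
        for d, c in enumerate(block_coords):
            idx |= ((c >> bit) & 1) << (bit * ndim + d)
    return idx

# Example: order the blocks of a 4x4x4 grid along the Z-curve.
blocks = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
blocks.sort(key=morton_index)
```
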
8

Abbasnejad, Iman. "Learning spatio-temporal features for efficient event detection." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121184/1/Iman_Abbasnejad_Thesis.pdf.

Full text
Abstract:
This thesis addresses the topic of event detection in videos, which is a challenging problem: events to be detected can be complex and correlated, and may require the detection of different objects and human actions. To address these challenges, the thesis develops effective strategies for learning the spatio-temporal features of events. Improved event detection performance is demonstrated on several challenging real-world databases. The outcome of our research will be useful for a number of applications including human-computer interaction, robotics, and video surveillance.
9

Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Full text
Abstract:
In this work we study a depth prediction problem where we provide a narrow field of view depth image and a wide field of view RGB image to a deep network tasked with predicting the depth for the entire RGB image. We show that by providing a narrow field of view depth image, we improve results for the area outside the provided depth compared to an earlier approach utilizing only a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones, and that the accuracy of the model decreases with distance from the provided depth. Further, we investigate several architectures and study the effect of adding noise and lowering the resolution of the provided depth image. Our results show that models provided low-resolution noisy data perform on par with models provided unaltered depth.
10

Moreau, Thomas. "Représentations Convolutives Parcimonieuses -- application aux signaux physiologiques et interprétabilité de l'apprentissage profond." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN054/document.

Full text
Abstract:
Convolutional representations extract recurrent patterns which lead to the discovery of local structures in a set of signals. They are well suited to analyzing physiological signals, which require interpretable representations in order to understand the relevant information. Moreover, these representations can be linked to deep learning models as a way to bring interpretability to their internal representations. In this dissertation, we describe recent advances on both computational and theoretical aspects of these models. First, we show that the Singular Spectrum Analysis can be used to compute convolutional representations. This representation is dense, and we describe an automated procedure to improve its interpretability. Also, we propose an asynchronous algorithm, called DICOD, based on greedy coordinate descent, to solve convolutional sparse coding for long signals. Our algorithm has super-linear acceleration. In a second part, we focus on the link between representations and neural networks. An extra training step for deep learning, called post-training, is introduced to boost the performance of the trained network by making sure the last layer is optimal. Then, we study the mechanisms which allow sparse coding algorithms to be accelerated with neural networks. We show that this is linked to a factorization of the Gram matrix of the dictionary. Finally, we illustrate the relevance of convolutional representations for physiological signals. Convolutional dictionary learning is used to summarize human walk signals, and Singular Spectrum Analysis is used to remove the gaze movement in young infants' oculometric recordings.
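
A minimal PyTorch-style sketch of the post-training step described above; the optimizer, learning rate, and the way the last layer is exposed are illustrative assumptions.

```python
import torch

def post_train(model, last_layer, loader, loss_fn, epochs=5, lr=1e-2):
    """Freeze everything except the last layer, then optimise that layer alone;
    with the features fixed, this final problem is convex for many losses."""
    for p in model.parameters():
        p.requires_grad = False
    for p in last_layer.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(last_layer.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```
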
11

Carvalho, Micael. "Deep representation spaces." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.

Full text
Abstract:
In recent years, Deep Learning techniques have swept the state of the art in many applications of Machine Learning, becoming the new standard approach for them. The architectures issued from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to fully train them from scratch. This thesis' subject of study is the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in dimensionality redundancy and the precision of their features. Our findings reveal a strong degree of robustness, pointing the path to simple and powerful compression schemes. Then, we focus on refining these representations. We choose to adopt a cross-modal multi-task problem, and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated to the same dataset. In order to correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, like ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature space refinement. For the cooking application in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, finding alternative recipes due to dietary restrictions, and menu planning.
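
A minimal PyTorch-style sketch of the sampling idea described above, where only examples with a strictly positive loss contribute to the batch objective; the zero-fallback is an illustrative detail.

```python
import torch

def positive_loss_mean(losses):
    """Average only over examples that still contribute to learning, i.e.
    those with a strictly positive per-example loss (e.g. an unsatisfied
    margin), instead of averaging over the whole batch."""
    mask = losses > 0
    if mask.any():
        return losses[mask].mean()
    return losses.sum() * 0.0   # keeps the graph valid when nothing is active
```
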
12

Cherief-Abdellatif, Badr-Eddine. "Contributions to the theoretical study of variational inference and robustness." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG001.

Full text
Abstract:
This PhD thesis deals with variational inference and robustness. More precisely, it focuses on the statistical properties of variational approximations and the design of efficient algorithms for computing them in an online fashion, and investigates Maximum Mean Discrepancy based estimators as learning rules that are robust to model misspecification. In recent years, variational inference has been extensively studied from the computational viewpoint, but until very recently little attention had been paid in the literature to its theoretical properties. In this thesis, we investigate the consistency of variational approximations in various statistical models and the conditions that ensure it. In particular, we tackle the special cases of mixture models and deep neural networks. We also justify in theory the use of the ELBO maximization strategy, a model selection criterion that is widely used in the Variational Bayes community and is known to work well in practice. Moreover, Bayesian inference provides an attractive online-learning framework for analyzing sequential data, and offers generalization guarantees which hold even under model mismatch and with adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed, but do such methods preserve the generalization properties of Bayesian inference? In this thesis, we show that this is indeed the case for some variational inference algorithms. We propose new online, tempered variational algorithms and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that it should hold more generally and present empirical evidence in support of this. Our work presents theoretical justifications in favor of online algorithms that rely on approximate Bayesian methods. Another point addressed in this thesis is the design of a universal estimation procedure. This question is of major interest, in particular because it leads to robust estimators, a very active topic in statistics and machine learning. We tackle the problem of universal estimation using a minimum distance estimator based on the Maximum Mean Discrepancy. We show that the estimator is robust to both dependence and the presence of outliers in the dataset. We also highlight the connections that may exist with minimum distance estimators using the L2-distance. Finally, we provide a theoretical study of the stochastic gradient descent algorithm used to compute the estimator, and we support our findings with numerical simulations. We also propose a Bayesian version of our estimator, which we study from both theoretical and computational points of view.
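
A small NumPy sketch of the squared MMD with a Gaussian kernel, the quantity that the minimum-distance estimator minimizes. This is the simple biased (V-statistic) form; the bandwidth is an illustrative assumption.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared MMD between samples x and y
    under a Gaussian kernel; the minimum-distance estimator picks the model
    parameters whose simulated sample y minimises this quantity."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```
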
13

Al Chanti, Dawood. "Analyse Automatique des Macro et Micro Expressions Faciales : Détection et Reconnaissance par Machine Learning." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT058.

Full text
Abstract:
Facial expression analysis is an important problem in many biometric tasks, such as face recognition, face animation, affective computing and human computer interfaces. In this thesis, we aim to analyze facial expressions using images and video sequences. We divide the problem into three parts. First, we study macro facial expressions for emotion recognition and propose three different levels of feature representation: low-level features through a Bag of Visual Words model, mid-level features through Sparse Representation, and hierarchical features through a Deep Learning based method. The objective is to find the most effective and efficient representation that contains the distinctive information of expressions and that overcomes various challenges coming from: 1) intrinsic factors such as appearance and expressiveness variability, and 2) extrinsic factors such as illumination, pose, scale and imaging parameters, e.g., resolution, focus, noise. Then, we incorporate the time dimension to extract spatio-temporal features with the objective of describing subtle feature deformations that discriminate ambiguous classes. Second, we direct our research toward transfer learning, where we aim at adapting facial expression category models to new domains and tasks. Thus we study domain adaptation and zero-shot learning to develop a method that solves the two tasks jointly. Our method is suitable for unlabelled target datasets coming from different data distributions than the source domain, and for unlabelled target datasets with different label distributions but sharing the same context as the source domain. Therefore, to permit knowledge transfer between domains and tasks, we use Euclidean learning and Convolutional Neural Networks to design a mapping function that maps the visual information coming from facial expressions into a semantic space coming from a Natural Language model that encodes the visual attribute description or uses the label information. The consistency between the two subspaces is maximized by aligning them using the visual feature distribution. Third, we study micro facial expression detection. We propose an algorithm to spot micro-expression segments, including the onset and offset frames, and to spatially pinpoint in each image the regions involved in the micro-facial muscle movements. The problem is formulated as anomaly detection, since micro-expressions occur infrequently and thus generate few data compared to natural facial behaviours. First, we propose a deep Recurrent Convolutional Auto-Encoder to capture spatial and motion feature changes of natural facial behaviours. Then, a statistical model based on a Gaussian Mixture Model is learned to estimate the probability density function of normal facial behaviours and to associate a discriminating score for spotting micro-expressions. Finally, an adaptive thresholding technique for identifying micro-expressions from natural facial behaviour is proposed. Our algorithms are tested on deliberate and spontaneous facial expression benchmarks.
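
A minimal scikit-learn sketch of the density-based spotting step described above, with placeholder features standing in for the auto-encoder's codes; the number of components and the threshold quantile are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a GMM on features of *normal* facial motion (standing in for the
# recurrent auto-encoder's codes); frames with low likelihood are flagged
# as candidate micro-expressions.
normal_feats = np.random.randn(5000, 32)        # placeholder features
gmm = GaussianMixture(n_components=8).fit(normal_feats)

def anomaly_score(feats):
    return -gmm.score_samples(feats)            # high score = unlikely = anomalous

test_feats = np.random.randn(100, 32)
threshold = np.quantile(anomaly_score(normal_feats), 0.99)
flags = anomaly_score(test_feats) > threshold
```
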
14

Rossini, Eugenio. "Deep Learning and Nonlinear PDEs in High-Dimensional Spaces." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16859/.

Full text
Abstract:
This thesis focuses on the analysis of an algorithm that approximates solutions of high-dimensional PDEs. The algorithm uses techniques recently developed in the field of neural networks (deep learning) to tackle a problem that is otherwise hard to solve. The main difficulty affecting models governed by semilinear parabolic PDEs in dimension 100 is the so-called curse of dimensionality: deterministic algorithms such as Galerkin or finite element methods cannot be used, since their computational cost grows exponentially with the dimension of the problem, while the nonlinearity rules out probabilistic Monte Carlo methods. Introducing machine learning algorithms overcomes these problems easily and with excellent results. The performance of the algorithm is tested on a practical problem in mathematical finance: solving the Black-Scholes equation for the price of European options with default risk.
15

Rastgoufard, Rastin. "Multi-Label Latent Spaces with Semi-Supervised Deep Generative Models." ScholarWorks@UNO, 2018. https://scholarworks.uno.edu/td/2486.

Full text
Abstract:
Expert labeling, tagging, and assessment are far more costly than the processes of collecting raw data. Generative modeling is a very powerful tool to tackle this real-world problem. It is shown here how these models can be used to allow for semi-supervised learning that performs very well in label-deficient conditions. The foundation for the work in this dissertation is built upon visualizing generative models' latent spaces to gain deeper understanding of data, analyze faults, and propose solutions. A number of novel ideas and approaches are presented to improve single-label classification. This dissertation's main focus is on extending semi-supervised Deep Generative Models for solving the multi-label problem by proposing unique mathematical and programming concepts and organization. In all naive mixtures, using multiple labels is detrimental and causes each label's predictions to be worse than models that utilize only a single label. Examining latent spaces reveals that in many cases, large regions in the models generate meaningless results. Enforcing a priori independence is essential, and only when applied can multi-label models outperform the best single-label models. Finally, a novel learning technique called open-book learning is described that is capable of surpassing the state-of-the-art classification performance of generative models for multi-labeled, semi-supervised data sets.
16

Pallotti, Davide. "Integrazione di dati di disparità sparsi in algoritmi per la visione stereo basati su deep-learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16633/.

Full text
Abstract:
Stereo vision consists in extracting depth information about a scene from a left view and a right view. The problem reduces to finding corresponding points in the two images, which, for rectified images, are displaced only horizontally by a distance called disparity. Among traditional stereo algorithms, SGM and its implementation rSGM stand out. SGM minimizes a cost function defined over a cost volume, which measures the similarity of the neighbourhoods of potential corresponding points for many disparity values. The ability of convolutional neural networks (CNNs) to carry out perception tasks has revolutionized the approach to stereo vision. One example of a stereo CNN is GC-Net, well suited to experimentation given its limited number of parameters. GC-Net also produces the disparity map from a cost volume, obtained by combining features extracted from the two views. The goal of this thesis is to integrate externally supplied sparse disparity data into a stereo algorithm, with the aim of improving accuracy. The proposed idea is to use the known data associated with sparse points to modulate the values corresponding to those same points in the cost volume. We first experiment with this approach on GC-Net. Initially we use disparities sampled randomly from the ground truth: this verifies the soundness of the method and simulates the use of a low-resolution depth sensor. We then employ the outputs of SGM and rSGM, again randomly sampled, asking whether this already improves on GC-Net alone. Next, we assess the applicability of the same method to a traditional algorithm, rSGM, using only the ground truth as a source of disparities. Finally, we return to the idea of assisting GC-Net with rSGM, but select only the most promising points according to a confidence measure computed with the CCNN neural network.
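
A minimal NumPy sketch of the modulation idea described above: at pixels with a suggested disparity, the cost volume is amplified around that disparity with a Gaussian. The peak gain k and width c are illustrative assumptions.

```python
import numpy as np

def modulate_cost_volume(volume, sparse_disp, k=10.0, c=1.0):
    """volume: (D, H, W) matching scores; sparse_disp: (H, W) holding a known
    disparity at a few pixels and a negative value elsewhere. At each hinted
    pixel, the slice around the suggested disparity is amplified by a Gaussian."""
    D, _, _ = volume.shape
    d = np.arange(D).reshape(D, 1, 1)
    valid = sparse_disp >= 0
    gauss = k * np.exp(-(d - sparse_disp[None]) ** 2 / (2 * c ** 2))
    return np.where(valid[None], volume * (1.0 + gauss), volume)
```
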
17

Mehr, Éloi. "Unsupervised Learning of 3D Shape Spaces for 3D Modeling." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS566.

Full text
Abstract:
Even though 3D data is becoming increasingly popular, especially with the democratization of virtual and augmented experiences, it remains very difficult to manipulate a 3D shape, even for designers or experts. Given a database containing 3D instances of one or several categories of objects, we want to learn the manifold of plausible shapes in order to develop new intelligent 3D modeling and editing tools. However, this manifold is often much more complex than in the 2D domain. Indeed, 3D surfaces can be represented using various embeddings, and may also exhibit different alignments and topologies. In this thesis we study the manifold of plausible shapes in the light of the aforementioned challenges, by deepening three different points of view. First of all, we consider the manifold as a quotient space, in order to learn the shapes' intrinsic geometry from a dataset where the 3D models are not co-aligned. Then, we assume that the manifold is disconnected, which leads to a new deep learning model that is able to automatically cluster and learn the shapes according to their typology. Finally, we study the conversion of an unstructured 3D input to an exact geometry, represented as a structured tree of continuous solid primitives.
18

Chi, Lu-cheng (紀律呈). "An Improved Deep Reinforcement Learning with Sparse Rewards." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/eq94pr.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
Academic year 107 (2018-19)
In reinforcement learning, how an agent explores an environment with sparse rewards is a long-standing problem. The improved deep reinforcement learning method described in this thesis encourages an agent to explore unvisited environmental states in such environments. In deep reinforcement learning, an agent directly uses an image observation from the environment as input to a neural network. However, some neglected observations from the environment, such as depth, might provide valuable information. The method described in this thesis is based on the Actor-Critic algorithm and uses a convolutional neural network as a hetero-encoder between the image input and the other observations from the environment. In an environment with sparse rewards, we use these neglected observations as target outputs for supervised learning, providing the agent denser training signals through supervised learning to bootstrap reinforcement learning. In addition, we use the supervised learning loss as feedback for the agent's exploration behaviour, called the label reward, to encourage the agent to explore unvisited environmental states. Finally, we construct multiple neural networks with the Asynchronous Advantage Actor-Critic algorithm and learn the policy with multiple agents. The method is compared with other deep reinforcement learning approaches in an environment with sparse rewards and achieves better performance.
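
A minimal PyTorch-style sketch of the label-reward idea; the choice of depth as the auxiliary observation follows the abstract, while the scaling constant is an illustrative assumption.

```python
import torch.nn.functional as F

def label_reward(pred_depth, true_depth, scale=0.1):
    """The auxiliary supervised loss (predicting the neglected depth
    observation from the image) doubles as an exploration signal: a high
    prediction error marks a state as poorly known, so visiting it pays."""
    return scale * F.mse_loss(pred_depth, true_depth).item()

# total_reward = env_reward + label_reward(decoder(features), depth_obs)
```
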
19

Yang, Fu-ming (楊富名). "Sparse Matrix Based Image Enhancement for Target Detection using Deep Learning." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/68a2ej.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 105 (2016-17)
Many papers have developed target-detection algorithms, which can be divided into supervised and unsupervised approaches. This thesis uses a hyper algorithm and a convolutional neural network to conduct target detection. Feature extraction uses a filter to scan the image, and the resulting features are fed into a neural network classifier. A support vector machine can find the hyperplane that separates targets in the image, and linear discriminant analysis can project the data into a low dimension that divides it precisely into two classes. To reduce wrong-target selections, image pre-processing can be used to enhance the target in the image. Robust PCA can divide the image into a low-rank matrix and a sparse matrix; the sparse matrix contains the noise in the image and may represent the target. Therefore, this thesis proposes combining the method with the sparse matrix to improve the outcome. The experiments show that image pre-processing can extract the target and effectively reduce wrong-target detections.
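
A small NumPy sketch of a simplified Robust PCA decomposition of the kind described above (a basic alternating low-rank/soft-threshold scheme, not the exact algorithm used in the thesis; the rank and threshold are illustrative).

```python
import numpy as np

def rpca(M, rank=2, lam=0.1, n_iter=50):
    """Alternate between a rank-truncated estimate L of M - S and a
    soft-thresholded residual S; the sparse part S captures small, bright
    deviations from the background, i.e. candidate targets."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # soft threshold
    return L, S
```
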
20

Peng, Yu-Shao (彭宇劭). "Exploring Sparse Features in Deep Reinforcement Learning towards Fast Disease Diagnosis." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/43q7bz.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 106 (2017-18)
Self-diagnosis of disease has become more and more important in recent years. Whether or not an area has sufficient medical resources, a system for self-diagnosis plays an important role. Although patients can conveniently search for medical information on the internet, search results are often inaccurate. A robust system for self-diagnosis needs to provide accurate predictions and fast diagnosis; however, these two properties conflict with each other. In addition, only a few symptoms per patient appear in a disease, that is, the feature space is sparse. Previous methods do not consider the sparse feature problem and show inferior performance when dealing with a large number of possible diseases. In this work, we formulate the disease diagnosis problem as a sequential decision-making process and propose a reinforcement learning algorithm, RE^2, to improve the performance of self-diagnosis. To overcome the sparse feature problem, we propose a reward shaping technique and a reconstruction technique in RE^2. Reward shaping guides the search towards symptoms that actually appear in the patient, while reconstruction guides the agent to learn correlations between symptoms. Together, they can find symptom queries that yield key positive responses from a patient with high probability. Consequently, the agent obtains much improved diagnoses and state-of-the-art results in different experimental settings.
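
A toy Python sketch of the reward-shaping component; the bonus constant and the set-membership test are illustrative assumptions, and RE^2's actual shaping and reconstruction terms are defined in the thesis.

```python
def shaped_reward(env_reward, queried_symptom, patient_positives, bonus=0.5):
    """Reward shaping for symptom queries: asking about a symptom the patient
    actually has earns an extra positive signal, steering the agent through
    the sparse feature space. (The reconstruction term, which trains the agent
    to predict unobserved symptoms from observed ones, is analogous.)"""
    return env_reward + (bonus if queried_symptom in patient_positives else 0.0)
```
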
APA, Harvard, Vancouver, ISO, and other styles
21

CHUANG, HSIANG-LUNG, and 莊翔隆. "Deep Learning of Approximate Message Passing Algorithm based on Sparse Superposition Code." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9n346y.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Communications Engineering
107
In coding and information theory, developing a computationally efficient, capacity-achieving code has been a crucial issue. This thesis is based on two schemes: Sparse Superposition Codes (also called Sparse Regression Codes) and the Approximate Message Passing (AMP) algorithm. In particular, we improve and optimize the AMP algorithm through machine learning. Under an additive white Gaussian noise (AWGN) channel and a power constraint, Sparse Superposition Codes are not only feasible to encode and decode but can also nearly achieve channel capacity. Their codewords are defined by a Gaussian-distributed matrix divided into several sections; selecting column vectors from every section and taking a linear combination generates a codeword. During iterative decoding, the AMP algorithm performs many matrix multiplications and quantization steps, which closely resembles the operation of a deep learning model and can easily be realized by constructing neural networks. However, the traditional AMP algorithm requires many parameters to be prepared before computation, so it is well suited to improvement by deep learning. Finally, this thesis proposes two different ways to handle different codeword lengths.
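For reference, the sketch below (Python/NumPy) shows one common form of AMP decoding for sparse superposition codes with L sections of M columns and equal power allocation; the residual update with its Onsager correction and the section-wise softmax denoiser follow the standard formulation (e.g., Rush, Greig and Venkataramanan) in simplified form, not the thesis's learned variant.

import numpy as np

def amp_decode(y, A, L, M, P, n_iter=25):
    """Sketch of AMP decoding for a sparse superposition code: beta has L
    sections of M entries with one nonzero of size sqrt(n*P/L) per section."""
    n = A.shape[0]
    beta = np.zeros(L * M)
    z = y.copy()
    c = np.sqrt(n * P / L)                  # nonzero value (equal power)
    for _ in range(n_iter):
        tau2 = np.sum(z ** 2) / n           # empirical effective-noise variance
        s = beta + A.T @ z                  # effective observation
        for l in range(L):                  # section-wise softmax denoiser
            sec = slice(l * M, (l + 1) * M)
            u = s[sec] * c / tau2
            w = np.exp(u - np.max(u))       # stabilized softmax weights
            beta[sec] = c * w / w.sum()
        # residual with the Onsager correction term
        z = y - A @ beta + (z / tau2) * (P - np.sum(beta ** 2) / n)
    return beta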
APA, Harvard, Vancouver, ISO, and other styles
22

Srinivas, Suraj. "Learning Compact Architectures for Deep Neural Networks." Thesis, 2017. http://etd.iisc.ernet.in/2005/3581.

Full text
Abstract:
Deep neural networks with millions of parameters are at the heart of many state-of-the-art computer vision models. However, recent works have shown that models with a much smaller number of parameters can often perform just as well. A smaller model has the advantage of being faster to evaluate and easier to store, both of which are crucial for real-time and embedded applications. While prior work on compressing neural networks has looked at methods based on sparsity, quantization and factorization of neural network layers, we look at the alternate approach of pruning neurons. Training neural networks is often described as a kind of 'black magic', as successful training requires setting the right hyper-parameter values (such as the number of neurons in a layer, the depth of the network, etc.). It is often not clear what these values should be, and these decisions often end up being either ad-hoc or driven through extensive experimentation. It would be desirable to automatically set some of these hyper-parameters for the user so as to minimize trial-and-error. Combining this objective with our earlier preference for smaller models, we ask the following question: for a given task, is it possible to come up with small neural network architectures automatically? In this thesis, we propose methods to achieve this. The work is divided into four parts. First, given a neural network, we look at the problem of identifying important and unimportant neurons. We look at this problem in a data-free setting, i.e., assuming that the data the neural network was trained on is not available. We propose two rules for identifying wasteful neurons and show that these suffice in such a data-free setting. By removing neurons based on these rules, we are able to reduce model size without significantly affecting accuracy. Second, we propose an automated learning procedure to remove neurons during the process of training. We call this procedure 'Architecture-Learning', as it automatically discovers the optimal width and depth of neural networks. We empirically show that this procedure is preferable to trial-and-error-based Bayesian Optimization procedures for selecting neural network architectures. Third, we connect 'Architecture-Learning' to a popular regularizer called 'Dropout', and propose a novel regularizer which we call 'Generalized Dropout'. From a Bayesian viewpoint, this method corresponds to a hierarchical extension of the Dropout algorithm. Empirically, we observe that Generalized Dropout is a more flexible version of Dropout and works in scenarios where Dropout fails. Finally, we apply our procedure for removing neurons to the problem of removing weights in a neural network, and achieve state-of-the-art results in sparsifying neural networks.
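For intuition, data-free neuron pruning is often illustrated by merging similar neurons: if two hidden neurons have nearly identical incoming weights, one can be removed and its outgoing weights folded into the other. The sketch below (Python/NumPy; hypothetical function name, not the thesis's exact saliency rules) shows this for a single hidden layer with incoming weights W_in of shape (H, d) and outgoing weights W_out of shape (k, H).

import numpy as np

def merge_similar_neurons(W_in, W_out, n_remove):
    """Repeatedly find the pair of hidden neurons with the most similar
    incoming weights, delete one, and fold its outgoing weights into the
    survivor -- no training data required."""
    W_in, W_out = W_in.copy(), W_out.copy()
    for _ in range(n_remove):
        H = W_in.shape[0]
        # pairwise squared distances between incoming weight vectors
        d = ((W_in[:, None, :] - W_in[None, :, :]) ** 2).sum(-1)
        d[np.eye(H, dtype=bool)] = np.inf
        i, j = np.unravel_index(np.argmin(d), d.shape)
        W_out[:, i] += W_out[:, j]            # absorb neuron j into neuron i
        W_in = np.delete(W_in, j, axis=0)
        W_out = np.delete(W_out, j, axis=1)
    return W_in, W_out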
APA, Harvard, Vancouver, ISO, and other styles
23

(8892395), Yao Chen. "Inferential GANs and Deep Feature Selection with Applications." Thesis, 2020.

Find full text
Abstract:
Deep neural networks (DNNs) have become popular due to their predictive power and flexibility in model fitting. In unsupervised learning, variational autoencoders (VAEs) and generative adversarial networks (GANs) are the two most popular and successful generative models. How to provide a unifying framework combining the best of VAEs and GANs in a principled way is a challenging task. In supervised learning, the demand for high-dimensional data analysis has grown significantly, especially in the applications of social networking, bioinformatics, and neuroscience. How to simultaneously approximate the true underlying nonlinear system and identify relevant features based on high-dimensional data (typically with the sample size smaller than the dimension, a.k.a. small-n-large-p) is another challenging task.

In this dissertation, we have provided satisfactory answers for these two challenges. In addition, we have illustrated some promising applications using modern machine learning methods.

In the first chapter, we introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs. GANs have been impactful on many problems and applications but suffer from unstable training. The Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the caveats in the min-max two-player training of GANs but has other defects, such as mode collapse and the lack of a metric to detect convergence. The iWGAN model jointly learns an encoder network and a generator network motivated by the iterative primal-dual optimization process. The encoder network maps the observed samples to the latent space and the generator network maps samples from the latent space to the data space. We establish the generalization error bound of iWGANs to theoretically justify their performance. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criterion, has many advantages over other autoencoder GANs. The empirical experiments show that the iWGAN greatly mitigates the symptom of mode collapse, speeds up convergence, and is able to provide a quality-check measurement for each individual sample. We illustrate the ability of iWGANs by obtaining competitive and stable performance against the state of the art on benchmark datasets.
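A generic autoencoder-WGAN objective, sketched below in PyTorch, conveys the flavor of jointly training an encoder E, generator G, and critic D; this is a deliberately simplified assumption-laden sketch, not the exact iWGAN primal-dual objective.

import torch

def autoencoder_wgan_losses(E, G, D, x, z_prior):
    """Generic losses for a fused autoencoder/WGAN (a sketch only):
    a reconstruction term through E and G, plus a Wasserstein critic gap."""
    z = E(x)
    recon = torch.mean((G(z) - x) ** 2)                      # autoencoding term
    critic_loss = torch.mean(D(G(z_prior))) - torch.mean(D(x))  # critic ascends
    gen_loss = recon - torch.mean(D(G(z_prior)))             # generator descends
    return critic_loss, gen_loss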

In the second chapter, we present a general framework for high-dimensional nonlinear variable selection using deep neural networks under the framework of supervised learning. The network architecture includes both a selection layer and approximation layers. The problem can be cast as a sparsity-constrained optimization with a sparse parameter in the selection layer and other parameters in the approximation layers. This problem is challenging due to the sparsity constraint and the nonconvex optimization. We propose a novel algorithm, called Deep Feature Selection, to estimate both the sparse parameter and the other parameters. Theoretically, we establish the convergence of the algorithm and the selection consistency when the objective function has a Generalized Stable Restricted Hessian. This result provides theoretical justification for our method and generalizes known results for high-dimensional linear variable selection. Simulations and real data analysis are conducted to demonstrate the superior performance of our method.
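A minimal sketch of such an architecture (PyTorch; the class and helper names are hypothetical, assuming a scalar response and projected gradient descent for the sparsity constraint): a diagonal selection layer multiplies each input feature by a learnable weight before the approximation layers, and after each gradient step the selection weights are hard-thresholded to keep only k features.

import torch
import torch.nn as nn

class SelectionNet(nn.Module):
    """Selection layer (diagonal weights) followed by an MLP approximator."""
    def __init__(self, p, hidden=64):
        super().__init__()
        self.w = nn.Parameter(torch.ones(p))   # sparse selection parameter
        self.net = nn.Sequential(nn.Linear(p, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x * self.w)

def hard_threshold(w, k):
    # Project back onto the sparsity constraint: keep the k selection
    # weights with the largest magnitude and zero out the rest.
    with torch.no_grad():
        idx = torch.topk(w.abs(), k).indices
        mask = torch.zeros_like(w)
        mask[idx] = 1.0
        w.mul_(mask)

In a training loop, one would call hard_threshold(model.w, k) after each optimizer step so that only k input features remain active.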

In the third chapter, we develop a novel methodology to classify electrocardiograms (ECGs) as normal, atrial fibrillation, or other cardiac dysrhythmias, as defined by the PhysioNet Challenge 2017. More specifically, we use piecewise linear splines for feature extraction and a gradient boosting algorithm for the classifier. In the algorithm, the ECG waveform is fitted by a piecewise linear spline, and morphological features related to the spline coefficients are extracted. XGBoost is used to classify the morphological coefficients together with heart rate variability features. The performance of the algorithm was evaluated on the PhysioNet Challenge database (3,658 ECGs classified by experts). Our algorithm achieves an average F1 score of 81% in 10-fold cross-validation and also achieved an F1 score of 81% on the independent test set, matching the ninth-best score (81%) in the official phase of the PhysioNet Challenge 2017.
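A simplified sketch of this pipeline in Python (the knot placement, feature set, and hyper-parameters below are assumptions, and the heart rate variability features are omitted): a piecewise linear approximation of the waveform is summarized by its values at evenly spaced knots, which serve as morphological features for an XGBoost classifier.

import numpy as np
from xgboost import XGBClassifier

def pl_spline_features(ecg, n_knots=50):
    # A piecewise linear spline on evenly spaced knots is fully determined
    # by its values at the knots, so those values form the feature vector.
    x = np.arange(len(ecg))
    knots = np.linspace(0, len(ecg) - 1, num=n_knots)
    return np.interp(knots, x, ecg)

# `ecgs` and `labels` are hypothetical placeholders for the training
# waveforms and their expert classes (normal / AF / other).
X = np.vstack([pl_spline_features(e) for e in ecgs])
clf = XGBClassifier(n_estimators=300, max_depth=4)
clf.fit(X, labels)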

In the fourth chapter, we introduce a novel region-selection penalty in the framework of image-on-scalar regression to impose sparsity on pixel values and extract active regions simultaneously. This method helps identify regions of interest (ROIs) associated with certain diseases, which has a great impact on public health. Our penalty combines the Smoothly Clipped Absolute Deviation (SCAD) regularization, enforcing sparsity, and the SCAD of total variation (TV) regularization, enforcing spatial contiguity, into one group, which segments contiguous spatial regions against a zero-valued background. An efficient algorithm based on the alternating direction method of multipliers (ADMM) decomposes the non-convex problem into two iterative optimization problems with explicit solutions. Another virtue of the proposed method is a divide-and-conquer learning algorithm, thereby allowing scaling to large images. Several examples are presented, and the experimental results are compared with other state-of-the-art approaches.
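For reference, the SCAD penalty used for both terms has the standard Fan-Li form (quoted here as background; a = 3.7 is the usual default):

\[
p_\lambda(t) \;=\;
\begin{cases}
\lambda\,|t|, & |t| \le \lambda,\\[4pt]
\dfrac{2a\lambda\,|t| - t^2 - \lambda^2}{2(a-1)}, & \lambda < |t| \le a\lambda,\\[6pt]
\dfrac{(a+1)\lambda^2}{2}, & |t| > a\lambda,
\end{cases}
\qquad a > 2,
\]

applied to each coefficient \(\beta_j\) for sparsity and to each difference \(|\beta_j - \beta_k|\) of neighboring pixels for the TV term.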
APA, Harvard, Vancouver, ISO, and other styles
24

"Towards Developing Computer Vision Algorithms and Architectures for Real-world Applications." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.51662.

Full text
Abstract:
Computer vision technology automatically extracts high-level, meaningful information from visual data such as images or videos, and object recognition and detection algorithms are essential in most computer vision applications. In this dissertation, we focus on developing algorithms for real-life computer vision applications, presenting innovative algorithms for object segmentation and feature extraction for object and action recognition in video data, sparse feature selection algorithms for medical image analysis, and automated feature extraction using convolutional neural networks for blood cancer grading. To detect and classify objects in video, the objects have to be separated from the background, and then discriminant features are extracted from the region of interest before being fed to a classifier. Effective object segmentation and feature extraction are often application-specific and pose major challenges for object detection and classification tasks. In this dissertation, we present an effective optical-flow-based ROI generation algorithm for segmenting moving objects in video data, applicable to surveillance and self-driving vehicles. Optical flow can also be used as a feature in human action recognition, and we show that using optical flow features in a pre-trained convolutional neural network improves the performance of human action recognition algorithms. Both algorithms outperformed the state of the art at the time. Medical images and videos pose unique challenges for image understanding, mainly because the tissues and cells are often irregularly shaped, colored, and textured, and hand-selecting the most discriminant features is often difficult; thus, an automated feature selection method is desired. Sparse learning is a technique for extracting the most discriminant and representative features from raw visual data. However, sparse learning with L1 regularization only takes sparsity in the feature dimension into consideration; we improve the algorithm so that it selects feature types as well, entirely removing less important or noisy feature types from the feature set (see the sketch after this abstract). We demonstrate this algorithm by analyzing endoscopy images to detect unhealthy abnormalities in the esophagus and stomach, such as ulcers and cancer. Besides the sparsity constraint, other application-specific constraints and prior knowledge may also need to be incorporated into the loss function of sparse learning to obtain the desired results. We demonstrate how to incorporate a similar-inhibition constraint and gaze and attention priors in sparse dictionary selection for gastroscopic video summarization, enabling intelligent key-frame extraction from gastroscopic video data. With recent advances in multi-layer neural networks, automatic end-to-end feature learning has become feasible. Convolutional neural networks mimic the mammalian visual cortex and can extract the most discriminant features automatically from training samples. We present a convolutional neural network with a hierarchical classifier to grade the severity of follicular lymphoma, a type of blood cancer; it reaches 91% accuracy, on par with analysis by expert pathologists.
Developing real-world computer vision applications is more than just developing core vision algorithms to extract and understand information from visual data; it is also subject to many practical requirements and constraints, such as hardware and computing infrastructure, cost, robustness to lighting changes and deformation, and ease of use and deployment. The general processing pipeline and system architecture for computer vision applications share many similar design principles. We developed common processing components and a generic framework for computer vision applications, and a versatile scale-adaptive template matching algorithm for object detection. We demonstrate the design principles and best practices by developing and deploying a complete computer vision application in real life, a multi-channel water level monitoring system, where the techniques and design methodology can be generalized to other real-life applications. General software engineering principles, such as modularity, abstraction, robustness to requirement changes, and generality, are all demonstrated in this research.
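The feature-type selection mentioned above behaves like a group-sparsity penalty: all coefficients belonging to one feature type are kept or discarded together. A minimal sketch of the corresponding proximal step (Python/NumPy; hypothetical function name, with rows of W grouped by feature type):

import numpy as np

def group_soft_threshold(W, lam):
    """Proximal step for a group-sparsity penalty: each row of W holds the
    coefficients of one feature *type*; rows with small overall energy are
    zeroed entirely, removing that feature type from the model."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale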
Dissertation/Thesis
Doctoral Dissertation Computer Science 2018
APA, Harvard, Vancouver, ISO, and other styles
25

Goodfellow, Ian. "Deep learning of representations and its application to computer vision." Thèse, 2014. http://hdl.handle.net/1866/11674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

"Deep Active Learning Explored Across Diverse Label Spaces." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.49076.

Full text
Abstract:
Deep learning architectures have been widely explored in computer vision and have shown commendable performance in a variety of applications. A fundamental challenge in training deep networks is the requirement for large amounts of labeled training data. While gathering large quantities of unlabeled data is cheap and easy, annotating the data is an expensive process in terms of time, labor and human expertise. Thus, developing algorithms that minimize the human effort in training deep models is of immense practical importance. Active learning algorithms automatically identify salient and exemplar samples from large amounts of unlabeled data and can augment supervised learning models with maximal information, thereby reducing the human annotation effort in training machine learning models. The goal of this dissertation is to fuse ideas from deep learning and active learning and design novel deep active learning algorithms. The proposed learning methodologies explore diverse label spaces to solve different computer vision applications. Three major contributions have emerged from this work: (i) a deep active framework for multi-class image classification, (ii) a deep active model, with and without label correlation, for multi-label image classification, and (iii) a deep active paradigm for regression. Extensive empirical studies on a variety of multi-class, multi-label and regression vision datasets corroborate the potential of the proposed methods for real-world applications. Additional contributions include: (i) a multimodal emotion database consisting of recordings of facial expressions, body gestures, vocal expressions and physiological signals of actors enacting various emotions, (ii) four multimodal deep belief network models and (iii) an in-depth analysis of the effect of transferring multimodal emotion features between source and target networks on classification accuracy and training time. These related contributions help illuminate the challenges involved in training deep learning models and motivate the main goal of this dissertation.
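As a generic example of the selection step in such an active-learning loop (entropy-based uncertainty sampling, sketched in Python; the dissertation's actual acquisition criteria per label space are not reproduced here):

import numpy as np

def select_batch(probs, k):
    """Pick the k unlabeled samples whose predictive distributions have
    the highest entropy; probs has shape (n_samples, n_classes)."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-ent)[:k]   # indices to send for human annotation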
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2018
APA, Harvard, Vancouver, ISO, and other styles
27

(9193706), Michael Siyuan Wang. "Evaluating Tangent Spaces, Distances, and Deep Learning Models to Develop Classifiers for Brain Connectivity Data." Thesis, 2020.

Find full text
Abstract:
A better, more optimized processing pipeline for functional connectivity (FC) data will likely accelerate practical advances within the field of neuroimaging. When using correlation-based measures of FC, researchers have recently employed a few data-driven methods to maximize their predictive power. In this study, we apply several of these post-processing methods to task, twin, and subject identification problems. First, we employ PCA reconstruction of the original dataset, which has been used successfully to maximize subject-level identifiability. We show there is a dataset-dependent optimal PCA reconstruction for task and twin identification. Next, we analyze FCs in their native geometry using tangent space projection with various mean covariance reference matrices. We demonstrate that tangent projection of the original FCs can drastically increase subject and twin identification rates. For example, the identification rate of 106 MZ twin pairs increased from 0.487 with the original FCs to 0.943 after tangent projection with the log-Euclidean reference matrix. We also use Schaefer's variable parcellation sizes to show that increasing parcellation granularity generally increases twin and subject identification rates. Finally, we show that our custom convolutional neural network classifier achieves an average task identification rate of 0.986, surpassing state-of-the-art results. These post-processing methods are promising for future research in functional connectome predictive modeling and, if optimized further, can likely be extended to clinical applications.
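A sketch of the tangent space projection step (Python/SciPy), assuming the connectivity matrices are symmetric positive definite and a reference mean has already been computed; the affine-invariant whitening form is shown, while the log-Euclidean variant mentioned above instead uses logm(C) - logm(ref):

import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def tangent_project(fcs, ref):
    """Project SPD connectivity matrices onto the tangent space at `ref`
    and vectorize the upper triangle of each projected matrix."""
    w = fractional_matrix_power(ref, -0.5)   # whitening by the reference mean
    iu = np.triu_indices(ref.shape[0])
    out = []
    for C in fcs:
        T = logm(w @ C @ w)                  # matrix log of the whitened FC
        out.append(T[iu].real)               # discard numerical imaginary parts
    return np.array(out)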
APA, Harvard, Vancouver, ISO, and other styles
28

Touati, Redha. "Détection de changement en imagerie satellitaire multimodale." Thèse, 2019. http://hdl.handle.net/1866/22662.

Full text
Abstract:
The purpose of this research is to study the detection of temporal changes between two (or more) multimodal satellite images, i.e., between two different imaging modalities acquired by two heterogeneous sensors, giving for the same scene two images encoded differently depending on the nature of the sensor used for each acquisition. The two (or more) multimodal satellite images are acquired and coregistered at two different dates, usually before and after an event. In this study, we propose new models belonging to different categories of multimodal change detection in remote sensing imagery. As a first contribution, we present a new constraint scenario expressed on every pair of pixels existing in the before and after change images. A second contribution of our work is a spatio-temporal textural gradient operator expressed with complementary norms, along with a new filtering strategy for the difference map resulting from this operator. Another contribution consists of constructing an observation field from pairs of pixels and inferring a solution in the maximum a posteriori sense. A fourth contribution consists of building a common feature space for the two heterogeneous images. Our fifth contribution lies in modeling patterns of change as anomalies and analyzing reconstruction errors: we learn an unsupervised model from a training set consisting only of no-change patterns, so that the model reconstructs normal (no-change) patterns with a small reconstruction error. In the sixth contribution, we propose a pairwise learning architecture based on a pseudo-siamese CNN that takes a pair of inputs instead of a single input and consists of two partially uncoupled parallel CNN streams (descriptors) followed by a decision network that includes fusion layers and a loss layer based on the entropy criterion. The proposed models are flexible enough to be used effectively in the monomodal change detection case.
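A minimal PyTorch sketch of the pseudo-siamese idea (layer sizes, single-channel inputs, and the two-class head are assumptions): two unshared convolutional streams, one per modality, whose features are concatenated by a fusion layer and passed to a decision head trained with a cross-entropy loss.

import torch
import torch.nn as nn

class PseudoSiamese(nn.Module):
    """Two unshared CNN streams (one per imaging modality) followed by a
    fusion layer and a change / no-change decision head."""
    def __init__(self):
        super().__init__()
        def stream():
            return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.s1, self.s2 = stream(), stream()   # partially uncoupled streams
        self.decision = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                                      nn.Linear(32, 2))
    def forward(self, x1, x2):
        f = torch.cat([self.s1(x1), self.s2(x2)], dim=1)  # fusion layer
        return self.decision(f)   # logits for nn.CrossEntropyLoss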
APA, Harvard, Vancouver, ISO, and other styles
29

Zumer, Jeremie. "Influencing the Properties of Latent Spaces." Thèse, 2016. http://hdl.handle.net/1866/18767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
