Academic literature on the topic 'Sparse Deep Learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse Deep Learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sparse Deep Learning"

1

Chai, Xintao, Genyang Tang, Kai Lin, Zhe Yan, Hanming Gu, Ronghua Peng, Xiaodong Sun, and Wenjun Cao. "Deep learning for multitrace sparse-spike deconvolution." GEOPHYSICS 86, no. 3 (April 8, 2021): V207–V218. http://dx.doi.org/10.1190/geo2020-0342.1.

Abstract:
Sparse-spike deconvolution (SSD) is an important method for seismic resolution enhancement. With the wavelet given, many trace-by-trace SSD methods have been proposed for extracting an estimate of the reflection-coefficient series from stacked traces. The main drawbacks of trace-by-trace methods are that they neither use the information from the adjacent seismograms nor do they take full advantage of the inherent spatial continuity of the seismic data. Although several multitrace methods have been consequently proposed, these methods generally rely on different assumptions and theories and require different parameter settings for different data applications. Therefore, traditional methods demand intensive human-computer interaction. This requirement undoubtedly does not fit the current dominant trend of intelligent seismic exploration. Therefore, we have developed a deep learning (DL)-based multitrace SSD approach. The approach transforms the input 2D/3D seismic data into the corresponding SSD result by training end-to-end encoder-decoder-style 2D/3D convolutional neural networks (CNN). Our key motivations are that DL is effective for mining complicated relations from data, the 2D/3D CNN can take multitrace information into account naturally, the additional information contributes to the SSD result with better spatial continuity, and parameter tuning is not necessary for CNN predictions. We determine the significance of the learning rate for the training process’s convergence. Benchmarking tests on the field 2D/3D seismic data confirm that the approach yields accurate high-resolution results that are mostly in agreement with the well logs, the DL-based multitrace SSD results generated by the 2D/3D CNNs are better than the trace-by-trace SSD results, and the 3D CNN outperforms the 2D CNN for 3D data application.
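To make the approach concrete, here is a minimal sketch of an encoder-decoder 2D CNN of the kind the abstract describes, written in PyTorch; the layer widths, depths, input sizes, and training details are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class EncoderDecoder2D(nn.Module):
    """Toy encoder-decoder CNN mapping a seismic section to a sparse-spike section."""
    def __init__(self, channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                    # x: (batch, 1, time, traces)
        return self.decoder(self.encoder(x))

model = EncoderDecoder2D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # the abstract highlights the learning rate
x = torch.randn(4, 1, 64, 64)            # stand-in for stacked seismic traces
y = torch.randn(4, 1, 64, 64)            # stand-in for reference reflectivity sections
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
```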
2

Kerrigan, Joshua, Paul La Plante, Saul Kohn, Jonathan C. Pober, James Aguirre, Zara Abdurashidova, Paul Alexander, et al. "Optimizing sparse RFI prediction using deep learning." Monthly Notices of the Royal Astronomical Society 488, no. 2 (July 8, 2019): 2605–15. http://dx.doi.org/10.1093/mnras/stz1865.

Abstract:
Radio frequency interference (RFI) is an ever-present limiting factor among radio telescopes even in the most remote observing locations. When looking to retain the maximum amount of sensitivity and reduce contamination for Epoch of Reionization studies, the identification and removal of RFI is especially important. In addition to improved RFI identification, we must also take into account the computational efficiency of the RFI-identification algorithm as radio interferometer arrays such as the Hydrogen Epoch of Reionization Array (HERA) grow larger in number of receivers. To address this, we present a deep fully convolutional neural network (DFCN) that is comprehensive in its use of interferometric data, where both amplitude and phase information are used jointly for identifying RFI. We train the network using simulated HERA visibilities containing mock RFI, yielding a known 'ground truth' data set for evaluating the accuracy of various RFI algorithms. Evaluation of the DFCN model is performed on observations from the 67-dish build-out, HERA-67, and achieves a data throughput of 1.6 × 10⁵ HERA time-ordered 1024-channel visibilities per hour per GPU. We determine that, relative to an amplitude-only network, including visibility phase adds important adjacent time–frequency context which increases discrimination between RFI and non-RFI. The inclusion of phase when predicting achieves a recall of 0.81, precision of 0.58, and F2 score of 0.75 as applied to our HERA-67 observations.
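A small sketch of the input encoding the abstract emphasizes: complex visibilities split into amplitude and phase channels so a convolutional network can use both jointly. The array shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vis = rng.normal(size=(1024, 512)) + 1j * rng.normal(size=(1024, 512))

amp = np.abs(vis)                     # amplitude channel
phase = np.angle(vis)                 # phase channel, in radians
x = np.stack([amp, phase], axis=0)    # (2, time, frequency) input for a 2D CNN

print(x.shape)                        # (2, 1024, 512)
```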
3

De Cnudde, Sofie, Yanou Ramon, David Martens, and Foster Provost. "Deep Learning on Big, Sparse, Behavioral Data." Big Data 7, no. 4 (December 1, 2019): 286–307. http://dx.doi.org/10.1089/big.2019.0095.

4

Davoudi, Neda, Xosé Luís Deán-Ben, and Daniel Razansky. "Deep learning optoacoustic tomography with sparse data." Nature Machine Intelligence 1, no. 10 (September 16, 2019): 453–60. http://dx.doi.org/10.1038/s42256-019-0095-3.

5

Trampert, Patrick, Sabine Schlabach, Tim Dahmen, and Philipp Slusallek. "Deep Learning for Sparse Scanning Electron Microscopy." Microscopy and Microanalysis 25, S2 (August 2019): 158–59. http://dx.doi.org/10.1017/s1431927619001521.

6

Tanuja, Nukapeyyi. "Medical Image Fusion Using Deep Learning Mechanism." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 128–36. http://dx.doi.org/10.22214/ijraset.2022.39809.

Abstract:
A sparse representation (SR) model named convolutional sparsity based morphological component analysis (CS-MCA) is introduced for pixel-level medical image fusion. The CS-MCA model can achieve multicomponent and global SRs of source images by integrating MCA and convolutional sparse representation (CSR) into a unified optimization framework. In the existing method, the CSRs of each source image's gradient and texture components are obtained by the CS-MCA model using pre-learned dictionaries. Then, for each image component, the sparse coefficients of all the source images are merged, and the fused component is reconstructed using the corresponding dictionary. In the extension mechanism, we use deep-learning-based pyramid decomposition. Deep learning is in high demand nowadays and is used for image classification, object detection, image segmentation, and image restoration. Keywords: CNN, CT, MRI, MCA, CS-MCA.
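For intuition, the coefficient-merging step can be sketched as below; the max-absolute-value fusion rule and the convolutional reconstruction are common choices in CSR-based fusion, assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def fuse_coefficients(coeffs_a, coeffs_b):
    """Per atom and per pixel, keep the coefficient with larger magnitude."""
    return np.where(np.abs(coeffs_a) >= np.abs(coeffs_b), coeffs_a, coeffs_b)

def reconstruct(coeffs, atoms):
    """Sum of each coefficient map convolved with its dictionary atom."""
    return sum(fftconvolve(c, d, mode="same") for c, d in zip(coeffs, atoms))

rng = np.random.default_rng(0)
atoms = rng.normal(size=(8, 7, 7))     # 8 small convolutional atoms (assumed)
ca = rng.normal(size=(8, 64, 64))      # coefficient maps for source image A
cb = rng.normal(size=(8, 64, 64))      # coefficient maps for source image B
fused_component = reconstruct(fuse_coefficients(ca, cb), atoms)
print(fused_component.shape)           # (64, 64)
```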
7

Zhou, Hongpeng, Chahine Ibrahim, Wei Xing Zheng, and Wei Pan. "Sparse Bayesian deep learning for dynamic system identification." Automatica 144 (October 2022): 110489. http://dx.doi.org/10.1016/j.automatica.2022.110489.

8

Li, Xing, and Lei Zhang. "Unbalanced data processing using deep sparse learning technique." Future Generation Computer Systems 125 (December 2021): 480–84. http://dx.doi.org/10.1016/j.future.2021.05.034.

9

Antholzer, Stephan, Markus Haltmeier, and Johannes Schwab. "Deep learning for photoacoustic tomography from sparse data." Inverse Problems in Science and Engineering 27, no. 7 (September 11, 2018): 987–1005. http://dx.doi.org/10.1080/17415977.2018.1518444.

10

Xie, Weicheng, Xi Jia, Linlin Shen, and Meng Yang. "Sparse deep feature learning for facial expression recognition." Pattern Recognition 96 (December 2019): 106966. http://dx.doi.org/10.1016/j.patcog.2019.106966.


Dissertations / Theses on the topic "Sparse Deep Learning"

1

Tavanaei, Amirhossein. "Spiking Neural Networks and Sparse Deep Learning." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10807940.

Abstract:

This document proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically, spiking convolutional neural networks (CNNs). Training a multi-layer spiking network poses difficulties because the output spikes do not have derivatives and the commonly used backpropagation method for non-spiking networks is not easily applied. Our methods use novel versions of the brain-like, local learning rule named spike-timing-dependent plasticity (STDP) that incorporates supervised and unsupervised components. Our method starts with conventional learning methods and converts them to spatio-temporally local rules suited for SNNs.

The training uses two components for unsupervised feature extraction and supervised classification. The first component refers to new STDP rules for spike-based representation learning that trains convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike pattern classification via an approximation to gradient descent by combining the STDP and anti-STDP rules. Specifically, the STDP-based supervised learning model approximates gradient descent by using temporally local STDP rules. Stacking these components implements a novel sparse, spiking deep learning model. Our spiking deep learning model is categorized as a variation of spiking CNNs of integrate-and-fire (IF) neurons with performance comparable with the state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our network architecture is the only spiking CNN which provides bio-inspired STDP rules in a hierarchy of feature extraction and classification in an entirely spike-based framework.
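The local learning rule at the core of this work can be illustrated with a minimal pair-based STDP update; the constants and the exponential window are textbook choices, not the thesis's exact formulation.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate causal pairs, depress anti-causal ones."""
    dt = t_post - t_pre
    if dt >= 0:                                   # pre fires before post
        dw = a_plus * np.exp(-dt / tau)
    else:                                         # post fires before pre
        dw = -a_minus * np.exp(dt / tau)
    return np.clip(w + dw, 0.0, 1.0)              # keep the weight bounded

w = 0.5
w = stdp_update(w, t_pre=12.0, t_post=15.0)       # causal pair strengthens the synapse
print(round(float(w), 4))
```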

2

Beretta, Davide. "Experience Replay in Sparse Rewards Problems using Deep Reinforcement Techniques." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17531/.

Abstract:
This work introduces the reader to Reinforcement Learning, an area of Machine Learning that has seen a great deal of research in recent years. It then presents some modifications to ACER, a well-known and very interesting algorithm that makes use of Experience Replay. The goal is to increase its performance on general problems, and in particular on sparse-reward problems. To verify the merit of the proposed ideas, Montezuma's Revenge is used, a game developed for the Atari 2600 and considered among the hardest to handle.
3

Benini, Francesco. "Predicting death in games with deep reinforcement learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20755/.

Abstract:
The context of this thesis is a branch of machine learning called reinforcement learning. The thesis aims to improve on the work developed by colleague M. Conciatori, focusing on games with very sparse rewards, where the previous solution failed to reach its goals. Sparse-reward games are those in which the agent must perform a large number of actions before obtaining a reward that lets it understand it is executing the correct sequence of actions. Among games with these characteristics, the focus is on one, Montezuma's Revenge, which stands out because a large number of actions is required to obtain the first reward; for this reason, virtually all algorithms developed so far have failed to achieve satisfactory results. The idea of continuing M. Conciatori's work arose precisely from the fact that Lower Bound DQN only managed to obtain the first reward. The main goal is therefore to find a solution that achieves good results, and to this end the thesis predicts the agent's death, thereby helping it avoid wrong actions and collect more reward. In this setting the agent takes longer to explore the environment and learn which behaviors yield a positive return, so the approach assists the agent by returning a penalty for whatever harms its progress, that is, by assigning a sanction to all actions that cause the episode to terminate and hence the agent's death. These negative experiences are stored in a dedicated buffer, called the done buffer, from which they are then sampled to train the network. When the agent finds itself in the same situation again, it will know which action is best avoided and, over time, also which one to choose.
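A minimal sketch of the mechanism the abstract describes, under stated assumptions: terminal transitions receive an extra penalty and are stored in a dedicated done buffer that is sampled alongside the ordinary replay buffer. Names and constants are hypothetical.

```python
import random
from collections import deque

replay_buffer = deque(maxlen=100_000)   # ordinary transitions
done_buffer = deque(maxlen=10_000)      # episode-terminating transitions only
DEATH_PENALTY = -1.0                    # illustrative value

def store(state, action, reward, next_state, done):
    if done:
        done_buffer.append((state, action, reward + DEATH_PENALTY, next_state, done))
    else:
        replay_buffer.append((state, action, reward, next_state, done))

def sample(batch_size=32, done_fraction=0.25):
    """Mix ordinary and terminal experiences in each training batch."""
    n_done = min(int(batch_size * done_fraction), len(done_buffer))
    batch = random.sample(list(done_buffer), n_done)
    batch += random.sample(list(replay_buffer), min(batch_size - n_done, len(replay_buffer)))
    return batch
```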
4

Vekhande, Swapnil Sudhir. "Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90182.

Abstract:
Computed Tomography (CT) finds applications across domains like medical diagnosis, security screening, and scientific research. In medical imaging, CT allows physicians to diagnose injuries and disease more quickly and accurately than other imaging techniques. However, CT is one of the most significant contributors of radiation dose to the general population, and the radiation dose required for scanning could lead to cancer. On the other hand, an overly low radiation dose could sacrifice image quality, causing misdiagnosis. To reduce the radiation dose, sparse-view CT, which involves capturing a smaller number of projections, becomes a promising alternative. However, the image reconstructed from linearly interpolated views possesses severe artifacts. Recently, deep-learning-based methods are increasingly being used to interpret the missing data by learning the nature of the image formation process. The current methods are promising but operate mostly in the image domain, presumably due to lack of projection data. Another limitation is the use of simulated data with less sparsity (up to 75%). This research aims to interpolate the missing sparse-view CT data in the sinogram domain using deep learning. To this end, a residual U-Net architecture has been trained with patch-wise projection data to minimize the Euclidean distance between the ground truth and the interpolated sinogram. The model can generate highly sparse missing projection data. The results show improvement in SSIM and RMSE by 14% and 52%, respectively, with respect to the linear interpolation-based methods. Thus, experimental sparse-view CT data with 90% sparsity has been successfully interpolated while improving CT image quality.
Master of Science
Computed Tomography is a commonly used imaging technique due to the remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer. On the other hand, the radiation dose is critical for the image quality and subsequent diagnosis. Thus, image reconstruction using only a small number of projection data is an open research problem. Deep learning techniques have already revolutionized various Computer Vision applications. Here, we have used a method which fills missing highly sparse CT data. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving the image quality.
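As a rough illustration of the training objective, the sketch below minimizes the Euclidean (L2) distance between an interpolated sinogram patch and the ground truth; the model shown is a stand-in, not the residual U-Net from the thesis, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the residual U-Net
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

sparse_patch = torch.randn(8, 1, 64, 64)     # sinogram patches, missing views zeroed
full_patch = torch.randn(8, 1, 64, 64)       # fully sampled ground-truth patches
opt.zero_grad()
loss = nn.functional.mse_loss(model(sparse_patch), full_patch)  # Euclidean objective
loss.backward()
opt.step()
```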
5

Hoori, Ammar O. "MULTI-COLUMN NEURAL NETWORKS AND SPARSE CODING NOVEL TECHNIQUES IN MACHINE LEARNING." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5743.

Abstract:
Accurate and fast machine learning (ML) algorithms are highly vital in artificial intelligence (AI) applications. In complex dataset problems, traditional ML methods such as radial basis function neural network (RBFN), sparse coding (SC) using dictionary learning, and particle swarm optimization (PSO) provide trivial results, large structure, slow training, and/or slow testing. This dissertation introduces four novel ML techniques: the multi-column RBFN network (MCRN), the projected dictionary learning algorithm (PDL) and the multi-column adaptive and non-adaptive particle swarm optimization techniques (MC-APSO and MC-PSO). These novel techniques provide efficient alternatives for traditional ML techniques. Compared to traditional ML techniques, the novel ML techniques demonstrate more accurate results, faster training and testing timing, and parallelized structured solutions. MCRN deploys small RBFNs in a parallel structure to speed up both training and testing. Each RBFN is trained with a subset of the dataset and the overall structure provides results that are more accurate. PDL introduces a conceptual dictionary learning method in updating the dictionary atoms with the reconstructed input blocks. This method improves the sparsity of extracted features and hence, the image denoising results. MC-PSO and MC-APSO provide fast and more accurate alternatives to the PSO and APSO slow evolutionary techniques. MC-PSO and MC-APSO use multi-column parallelized RBFN structure to improve results and speed with a wide range of classification dataset problems. The novel techniques are trained and tested using benchmark dataset problems and the results are compared with the state-of-the-art counterpart techniques to evaluate their performance. Novel techniques’ results show superiority over techniques in accuracy and speed in most of the experimental results, which make them good alternatives in solving difficult ML problems.
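The multi-column idea can be sketched as several small RBF networks answering in parallel with their outputs combined; the Gaussian RBF layer and the averaging combiner are assumptions for illustration, not the dissertation's exact design.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Gaussian RBF activations followed by a linear readout."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return phi @ weights

rng = np.random.default_rng(0)
columns = [                                        # 5 small parallel columns,
    (rng.normal(size=(10, 4)),                     # each with its own centers,
     np.full(10, 1.0),                             # widths,
     rng.normal(size=(10, 3)))                     # and readout weights
    for _ in range(5)
]
x = rng.normal(size=(8, 4))                        # a batch of 8 inputs
y = np.mean([rbf_forward(x, c, s, w) for c, s, w in columns], axis=0)
print(y.shape)                                     # (8, 3)
```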
6

Bonfiglioli, Luca. "Identificazione efficiente di reti neurali sparse basata sulla Lottery Ticket Hypothesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract:
Frankle and Carbin (2018) show that, given a randomly initialized dense network, there exist sparse subnetworks of it that can achieve higher accuracy than the dense network and require fewer training iterations to reach early stopping. Such subnetworks are called winning tickets. Identifying them, however, requires at least one complete training run of the dense model, which limits their practical use to little more than a compression technique. This thesis aims to find a more efficient variant of the magnitude-based pruning methods proposed in the literature, evaluating several heuristic and data-driven methods for obtaining winning tickets without completing the training of the dense network. In line with the results of Zhou et al. (2019), it is shown that a winning ticket's accuracy at initialization is not predictive of the final accuracy reached after training and that, consequently, optimizing accuracy at initialization does not guarantee equally high accuracy after retraining. The thesis also shows the existence of good tickets, an entire spectrum of sparse networks whose performance is comparable, at least along one dimension, to that of winning tickets, and shows that subnetworks in this category can be identified even after only a few training iterations of the initial dense network. These sparse networks are identified in a manner similar to You et al. (2020), by predicting the winning ticket before training of the dense network has completed. It is shown that using heuristics other than magnitude-based pruning for these predictions yields, at marginally higher computational cost, significantly better predictions on the architectures examined.
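For reference, magnitude-based pruning with weight rewinding, the baseline the thesis builds on, can be sketched as follows; tensor sizes, the sparsity level, and the stand-in for trained weights are illustrative.

```python
import torch

def magnitude_mask(weight, sparsity):
    """Binary mask keeping the (1 - sparsity) fraction of largest-|w| weights."""
    k = int(weight.numel() * (1 - sparsity))           # number of weights to keep
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

torch.manual_seed(0)
w_init = torch.randn(256, 128)                         # weights at initialization
w_trained = w_init + 0.1 * torch.randn(256, 128)       # stand-in for trained weights
mask = magnitude_mask(w_trained, sparsity=0.8)         # keep the top 20% by magnitude
winning_ticket = w_init * mask                         # sparse subnetwork, rewound to init
print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```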
7

Pawlowski, Filip igor. "High-performance dense tensor and sparse matrix kernels for machine learning." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.

Abstract:
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use, to improve data locality, and hence to improve cache reuse of the kernel operations. We design both sequential and shared-memory parallel algorithms. In the first part of the thesis, we focus on dense tensor kernels. Tensor kernels include the tensor-vector multiplication (TVM), tensor-matrix multiplication (TMM), and tensor-tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure and sequential and parallel algorithms for it. We propose a novel data structure which stores the tensor as blocks, which are ordered using the space-filling curve known as the Morton curve (or Z-curve). The key idea consists of dividing the tensor into blocks small enough to fit cache, and storing them according to the Morton order, while keeping a simple, multi-dimensional order on the individual elements within them. Thus, high-performance BLAS routines can be used as microkernels for each block. We evaluate our techniques on a set of experiments. The results not only demonstrate superior performance of the proposed approach over the state-of-the-art variants by up to 18%, but also show that the proposed approach induces 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure naturally extends to other tensor kernels, yielding up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms which use the proposed data structure. Several alternative parallel algorithms were characterized theoretically and implemented using OpenMP to compare them experimentally. Our results on systems with up to 8 sockets show near-peak performance for the proposed algorithm for 2-, 3-, 4-, and 5-dimensional tensors. In the second part of the thesis, we explore the sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using sparse DNN networks to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix-sparse matrix multiplication (SpGEMM) repeated for each layer in the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce the model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained using hypergraph partitioning software. The model-parallel variant uses barriers to synchronize at layers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between the layers, and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to a 2× speed-up over the baseline.
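The inference loop at the heart of the second part can be sketched with SciPy's sparse matrices; the sizes, densities, and the ReLU nonlinearity are illustrative assumptions in the spirit of the IEEE HPEC Graph Challenge setting.

```python
import numpy as np
import scipy.sparse as sp

features = sp.random(1000, 256, density=0.05, format="csr", random_state=0)
layers = [sp.random(256, 256, density=0.02, format="csr", random_state=i + 1)
          for i in range(4)]

y = features
for w in layers:
    y = y @ w                        # SpGEMM: sparse x sparse product per layer
    y.data = np.maximum(y.data, 0.0) # ReLU applied to the stored nonzeros
    y.eliminate_zeros()              # drop explicit zeros to stay sparse

print(y.nnz, "nonzeros in the final feature matrix")
```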
8

Abbasnejad, Iman. "Learning spatio-temporal features for efficient event detection." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121184/1/Iman_Abbasnejad_Thesis.pdf.

Abstract:
This thesis has addressed the topic of event detection in videos, which is a challenging problem as events to be detected, can be complex, correlated, and may require the detection of different objects and human actions. To address these challenges, the thesis has developed effective strategies for learning the spatio-temporal features of events. Improved event detection performance has been demonstrated on several real-world challenging databases. The outcome of our research will be useful for a number of applications including human computer interaction, robotics and video surveillance.
9

Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Abstract:
In this work we study a depth prediction problem where we provide a narrow field of view depth image and a wide field of view RGB image to a deep network tasked with predicting the depth for the entire RGB image. We show that by providing a narrow field of view depth image, we improve results for the area outside the provided depth compared to an earlier approach only utilizing a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones and that the accuracy of the model decreases with the distance from the provided depth. Further, we investigate several architectures as well as study the effect of adding noise and lowering the resolution of the provided depth image. Our results show that models provided with low-resolution, noisy depth data perform on par with the models provided unaltered depth.
10

Moreau, Thomas. "Représentations Convolutives Parcimonieuses -- application aux signaux physiologiques et interpétabilité de l'apprentissage profond." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN054/document.

Abstract:
Convolutional representations extract recurrent patterns which lead to the discovery of local structures in a set of signals. They are well suited to analyzing physiological signals, which require interpretable representations in order to understand the relevant information. Moreover, these representations can be linked to deep learning models, as a way to bring interpretability to their internal representations. In this dissertation, we describe recent advances on both the computational and theoretical aspects of these models. First, we show that Singular Spectrum Analysis can be used to compute convolutional representations. This representation is dense, and we describe an automated procedure to improve its interpretability. We also propose an asynchronous algorithm, called DICOD, based on greedy coordinate descent, to solve convolutional sparse coding for long signals; this algorithm achieves super-linear acceleration. In a second part, we focus on the link between representations and neural networks. An extra training step for deep learning, called post-training, is introduced to boost the performance of the trained network by making sure the last layer is optimal. Then, we study the mechanisms which allow sparse coding algorithms to be accelerated with neural networks, and we show that this is linked to a factorization of the Gram matrix of the dictionary. Finally, we illustrate the relevance of convolutional representations for physiological signals: convolutional dictionary learning is used to summarize human walk signals, and Singular Spectrum Analysis is used to remove gaze movement from young infants' oculometric recordings.
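As background for the sparse-coding acceleration discussed above, here is a minimal ISTA iteration; the Gram matrix DᵀD appearing in the gradient step is the object whose factorization the dissertation analyzes. Dimensions and the penalty are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = rng.normal(size=64)                  # signal to encode
z = np.zeros(128)                        # sparse code estimate

L = np.linalg.norm(D.T @ D, 2)           # Lipschitz constant of the gradient step
lam = 0.1                                # sparsity penalty (illustrative)
for _ in range(50):
    # z <- soft_threshold(z - (1/L) * D^T (D z - x), lam / L)
    z = soft_threshold(z - (D.T @ (D @ z - x)) / L, lam / L)

print(int((z != 0).sum()), "nonzero coefficients")
```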

Books on the topic "Sparse Deep Learning"

1

Huang, Thomas S. Deep Learning Through Sparse and Low-Rank Modeling. Elsevier Science & Technology, 2019.

2

Huang, Thomas S. Deep Learning Through Sparse and Low-Rank Modeling. Elsevier Science & Technology Books, 2019.

3

Deep Learning Through Sparse and Low-Rank Modeling. Elsevier, 2019. http://dx.doi.org/10.1016/c2017-0-00154-4.

4

Mehta, Vaishali, Dolly Sharma, Monika Mangla, Anita Gehlot, Rajesh Singh, and Sergio Márquez Sánchez, eds. Challenges and Opportunities for Deep Learning Applications in Industry 4.0. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/97898150360601220101.

Abstract:
The competence of deep learning for the automation and manufacturing sector has received astonishing attention in recent times. The manufacturing industry has recently experienced a revolutionary advancement despite several issues. One of the limitations for technical progress is the bottleneck encountered due to the enormous increase in data volume for processing, comprising various formats, semantics, qualities and features. Deep learning enables detection of meaningful features that are difficult to perform using traditional methods. The book takes the reader on a technological voyage of the industry 4.0 space. Chapters highlight recent applications of deep learning and the associated challenges and opportunities it presents for automating industrial processes and smart applications. Chapters introduce the reader to a broad range of topics in deep learning and machine learning. Several deep learning techniques used by industrial professionals are covered, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical project methodology. Readers will find information on the value of deep learning in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. The book also discusses prospective research directions that focus on the theory and practical applications of deep learning in industrial automation. Therefore, the book aims to serve as a comprehensive reference guide for industrial consultants interested in industry 4.0, and as a handbook for beginners in data science and advanced computer science courses.
5

Carolini, Gabriella Y. Equity, Evaluation, and International Cooperation. Oxford University PressOxford, 2022. http://dx.doi.org/10.1093/oso/9780192865489.001.0001.

Abstract:
Abstract This book considers whether South–South Cooperation (SSC) is any different from other international partnerships in practice. This question often gets lost in conventional scholarship on SSC and international cooperation, which privileges macro-level narratives of how cooperation mechanisms fit within geopolitical concerns and shape the outcomes of foreign aid. This book instead offers an answer from the ground up. It highlights two main lessons from the close examination of the ecosystem of international cooperation projects in the urban water-and-sanitation sector in Maputo, Mozambique. First, the book shows that macro labels attributed to international cooperation reflect very little about how cooperation projects operate on the ground and the equity consequences of their work. Second, how projects are designed, implemented, and evaluated does matter to the quality of learning that emanates from partnerships. Beyond the geopolitical and technical proximities favored by the SSC discourse, this book argues that what matters in practice is whether hierarchy or heterarchy is institutionalized in the governance of cooperation projects; whether project partners are locally embedded in shared work spaces; and whether practitioners value flexibility and recognize the epistemic value of learning from all partners as peers. A strong evaluation culture within the international development industry, however, still subjugates such equity-based concerns and deep learning in projects to accountability, reinforcing orthodox power asymmetries in cooperation and sustaining epistemic and distributive injustice. This book instead provides a framework for how project evaluations, as a key narrative instrument of development, can instead promote distributive, procedural, and epistemic justice in international cooperation projects.
6

Broom and Fraser’s domestic animal behaviour and welfare. 6th ed. Wallingford: CABI, 2021. http://dx.doi.org/10.1079/9781789249835.0000.

Abstract:
Abstract The 6th edition of this book contains 42 chapters on one biology, ethics, sentience and sustainability; behaviour and welfare concepts; describing, recording and measuring behaviour; learning, cognition and behaviour development; motivation; evolution and optimality; welfare assessment; defence and attack behaviour; finding and acquiring food; body care; locomotion and space occupancy; exploration; spacing behaviour; rest and sleep; general and social behaviour; human-domestic animal interactions; seasonal and reproductive behaviour; sexual behaviour; fetal and parturient behaviour; maternal and neonatal behaviour; juvenile and play behaviour; handling, transport and humane control of domestic animals; stunning and slaughter; welfare and behaviour in relation to disease; different types of abnormal behaviours and the breeding, feeding, housing and welfare of cattle, sheep, goats, pigs, poultry, fishes, deer, camelids, ostriches, furbearing animals, horses, other equids, draught animals, rabbits, dogs, cats and other pets and welfare in a moral world. The book is illustrated with many photographs and includes a much-expanded reference list, an author index and a subject index.
7

Oswald, Laura R. Doing Semiotics. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198822028.001.0001.

Abstract:
Structural semiotics is a hybrid of communication science and anthropology that accounts for the deep cultural codes that structure communication and sociality, endow things with value, move us through constructed space, and moderate our encounters with change. Doing Semiotics: A Research Guide for Marketers at the Edge of Culture shows readers how to leverage these codes to solve business problems, foster innovation, and create meaningful experiences for consumers. In addition to the basic principles and methods of applied semiotics, the book introduces the reader to branding basics, strategic decision-making, and cross-cultural marketing management. The guide can be used to supplement my previous books, Marketing Semiotics (2012) and Creating Value (2015), with practical exercises, examples, extended team projects, and evaluation criteria. The work guides students through the application of learnings to all phases of semiotics-based projects for communications, brand equity management, design strategy, new product development, and public policy management. In addition to grids and tables for sorting data and mapping cultural dimensions of a market, the book includes useful interview protocols for use in focus groups, in-depth interviews, and ethnographic studies. Each chapter also includes expert case studies and essays from the perspectives of Marcel Danesi, Rachel Lawes, Christian Pinson, Laura Santamaria, and Laura Oswald.

Book chapters on the topic "Sparse Deep Learning"

1

Moons, Bert, Daniel Bankman, and Marian Verhelst. "ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing." In Embedded Deep Learning, 115–51. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99223-5_5.

2

Cheng, Xiangyi, Huaping Liu, Xinying Xu, and Fuchun Sun. "Denoising Deep Extreme Learning Machines for Sparse Representation." In Proceedings in Adaptation, Learning and Optimization, 235–47. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28373-9_20.

3

Suk, Heung-Il, and Dinggang Shen. "Deep Ensemble Sparse Regression Network for Alzheimer’s Disease Diagnosis." In Machine Learning in Medical Imaging, 113–21. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47157-0_14.

4

Li, Shicheng, Xiaoguo Yang, Haoming Zhang, Chaoyu Zheng, and Yugen Yi. "DSGRAE: Deep Sparse Graph Regularized Autoencoder for Anomaly Detection." In Machine Learning for Cyber Security, 254–65. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20099-1_21.

5

Wang, Xin, Zhiqiang Hou, Wangsheng Yu, and Zefenfen Jin. "Online Fast Deep Learning Tracker Based on Deep Sparse Neural Networks." In Lecture Notes in Computer Science, 186–98. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71607-7_17.

6

Huang, Junzhou, and Zheng Xu. "Cell Detection with Deep Learning Accelerated by Sparse Kernel." In Deep Learning and Convolutional Neural Networks for Medical Image Computing, 137–57. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1_9.

7

Cheng, Eric-Juwei, Mukesh Prasad, Deepak Puthal, Nabin Sharma, Om Kumar Prasad, Po-Hao Chin, Chin-Teng Lin, and Michael Blumenstein. "Deep Learning Based Face Recognition with Sparse Representation Classification." In Neural Information Processing, 665–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_67.

8

Cao, Michael C., Jonathan Schwartz, Huihuo Zheng, Yi Jiang, Robert Hovden, and Yimo Han. "Atomic Defect Identification with Sparse Sampling and Deep Learning." In Driving Scientific and Engineering Discoveries Through the Integration of Experiment, Big Data, and Modeling and Simulation, 455–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96498-6_28.

9

Fakhfakh, Mohamed, Bassem Bouaziz, Lotfi Chaari, and Faiez Gargouri. "Efficient Bayesian Learning of Sparse Deep Artificial Neural Networks." In Lecture Notes in Computer Science, 78–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01333-1_7.

10

Wang, Bo, Sheng Ma, Yuan Yuan, Yi Dai, Wei Jiang, Xiang Hou, Xiao Yi, and Rui Xu. "SparG: A Sparse GEMM Accelerator for Deep Learning Applications." In Algorithms and Architectures for Parallel Processing, 529–47. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22677-9_28.


Conference papers on the topic "Sparse Deep Learning"

1

Tang, Jianhao, Zhenni Li, Shengli Xie, Shuxue Ding, Shaolong Zheng, and Xueni Chen. "Deep sparse representation via deep dictionary learning for reinforcement learning." In 2022 41st Chinese Control Conference (CCC). IEEE, 2022. http://dx.doi.org/10.23919/ccc55666.2022.9902583.

2

Gale, Trevor, Matei Zaharia, Cliff Young, and Erich Elsen. "Sparse GPU Kernels for Deep Learning." In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2020. http://dx.doi.org/10.1109/sc41405.2020.00021.

3

Sokar, Ghada, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, and Peter Stone. "Dynamic Sparse Training for Deep Reinforcement Learning." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/477.

Abstract:
Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment. This leads to a long training time for dense neural networks to achieve good performance. Hence, prohibitive computation and memory resources are consumed. Recently, learning efficient DRL agents has received increasing attention. Yet, current methods focus on accelerating inference time. In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process. The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training. Experiments on continuous control tasks show that our dynamic sparse agents achieve higher performance than the equivalent dense methods, reduce the parameter count and floating-point operations (FLOPs) by 50%, and have a faster learning speed that enables reaching the performance of dense agents with a 40–50% reduction in the training steps.
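A minimal sketch of one topology update in dynamic sparse training, under stated assumptions: the smallest-magnitude active weights are dropped, and the same number of inactive connections are regrown (randomly here; the paper's exact criteria may differ).

```python
import torch

def drop_and_grow(weight, mask, drop_fraction=0.1):
    """One topology update: prune weakest active weights, regrow elsewhere."""
    active = mask.bool()
    n_drop = int(active.sum().item() * drop_fraction)
    # Drop: deactivate the n_drop smallest-magnitude active weights.
    vals = weight.abs().masked_fill(~active, float("inf")).flatten()
    drop_idx = vals.topk(n_drop, largest=False).indices
    mask.view(-1)[drop_idx] = 0.0
    # Grow: activate n_drop currently inactive connections at random.
    inactive_idx = (mask.view(-1) == 0).nonzero(as_tuple=True)[0]
    grow_idx = inactive_idx[torch.randperm(len(inactive_idx))[:n_drop]]
    mask.view(-1)[grow_idx] = 1.0
    weight.data.view(-1)[grow_idx] = 0.0   # regrown weights start from zero
    weight.data.mul_(mask)                 # keep only the active connections
    return mask

w = torch.randn(128, 128)
m = (torch.rand(128, 128) < 0.1).float()   # ~90% sparse from the start
m = drop_and_grow(w, m)
print(int(m.sum().item()), "active connections")
```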
4

Saponara, Sergio, Abdussalam Elhanashi, and Alessio Gagliardi. "Reconstruct fingerprint images using deep learning and sparse autoencoder algorithms." In Real-Time Image Processing and Deep Learning 2021, edited by Nasser Kehtarnavaz and Matthias F. Carlsohn. SPIE, 2021. http://dx.doi.org/10.1117/12.2585707.

5

Radlak, Krystian, Michal Szczepankiewicz, and Bogdan Smolka. "Defending against sparse adversarial attacks using impulsive noise reduction filters." In Real-Time Image Processing and Deep Learning 2021, edited by Nasser Kehtarnavaz and Matthias F. Carlsohn. SPIE, 2021. http://dx.doi.org/10.1117/12.2587999.

6

He, Yunlong, Koray Kavukcuoglu, Yun Wang, Arthur Szlam, and Yanjun Qi. "Unsupervised Feature Learning by Deep Sparse Coding." In Proceedings of the 2014 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2014. http://dx.doi.org/10.1137/1.9781611973440.103.

7

Liang, Faming. "Consistent Sparse Deep Learning: Theory and Computation." In 3nd International Conference on Statistics: Theory and Applications (ICSTA'21). Avestia Publishing, 2021. http://dx.doi.org/10.11159/icsta21.004.

8

Wen, Weijing, Fan Yang, Yangfeng Su, Dian Zhou, and Xuan Zeng. "Learning Sparse Patterns in Deep Neural Networks." In 2019 IEEE 13th International Conference on ASIC (ASICON). IEEE, 2019. http://dx.doi.org/10.1109/asicon47005.2019.8983429.

9

Xu, Shiyao, Jingfei Jiang, Jinwei Xu, Chaorun Liu, Yuanhong He, Xiaohang Liu, and Lei Gao. "Sparkle: A High Efficient Sparse Matrix Multiplication Accelerator for Deep Learning." In 2022 IEEE 40th International Conference on Computer Design (ICCD). IEEE, 2022. http://dx.doi.org/10.1109/iccd56317.2022.00077.

10

Hamza, Syed A., and Moeness G. Amin. "Learning Sparse Array Capon Beamformer Design Using Deep Learning Approach." In 2020 IEEE Radar Conference (RadarConf20). IEEE, 2020. http://dx.doi.org/10.1109/radarconf2043947.2020.9266359.


Reports on the topic "Sparse Deep Learning"

1

Zhao, Y., C. Liao, and X. Shen. Exploring Deep Learning and Sparse Matrix Format Selection. Office of Scientific and Technical Information (OSTI), March 2018. http://dx.doi.org/10.2172/1426119.
