Academic literature on the topic 'Sparse Deep Learning'
Below are lists of relevant journal articles, books, theses, conference papers, and other scholarly sources on the topic 'Sparse Deep Learning.'
Journal articles on the topic "Sparse Deep Learning"
Chai, Xintao, Genyang Tang, Kai Lin, Zhe Yan, Hanming Gu, Ronghua Peng, Xiaodong Sun, and Wenjun Cao. "Deep learning for multitrace sparse-spike deconvolution." GEOPHYSICS 86, no. 3 (April 8, 2021): V207–V218. http://dx.doi.org/10.1190/geo2020-0342.1.
Kerrigan, Joshua, Paul La Plante, Saul Kohn, Jonathan C. Pober, James Aguirre, Zara Abdurashidova, Paul Alexander, et al. "Optimizing sparse RFI prediction using deep learning." Monthly Notices of the Royal Astronomical Society 488, no. 2 (July 8, 2019): 2605–15. http://dx.doi.org/10.1093/mnras/stz1865.
De Cnudde, Sofie, Yanou Ramon, David Martens, and Foster Provost. "Deep Learning on Big, Sparse, Behavioral Data." Big Data 7, no. 4 (December 1, 2019): 286–307. http://dx.doi.org/10.1089/big.2019.0095.
Davoudi, Neda, Xosé Luís Deán-Ben, and Daniel Razansky. "Deep learning optoacoustic tomography with sparse data." Nature Machine Intelligence 1, no. 10 (September 16, 2019): 453–60. http://dx.doi.org/10.1038/s42256-019-0095-3.
Trampert, Patrick, Sabine Schlabach, Tim Dahmen, and Philipp Slusallek. "Deep Learning for Sparse Scanning Electron Microscopy." Microscopy and Microanalysis 25, S2 (August 2019): 158–59. http://dx.doi.org/10.1017/s1431927619001521.
Tanuja, Nukapeyyi. "Medical Image Fusion Using Deep Learning Mechanism." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 128–36. http://dx.doi.org/10.22214/ijraset.2022.39809.
Zhou, Hongpeng, Chahine Ibrahim, Wei Xing Zheng, and Wei Pan. "Sparse Bayesian deep learning for dynamic system identification." Automatica 144 (October 2022): 110489. http://dx.doi.org/10.1016/j.automatica.2022.110489.
Li, Xing, and Lei Zhang. "Unbalanced data processing using deep sparse learning technique." Future Generation Computer Systems 125 (December 2021): 480–84. http://dx.doi.org/10.1016/j.future.2021.05.034.
Antholzer, Stephan, Markus Haltmeier, and Johannes Schwab. "Deep learning for photoacoustic tomography from sparse data." Inverse Problems in Science and Engineering 27, no. 7 (September 11, 2018): 987–1005. http://dx.doi.org/10.1080/17415977.2018.1518444.
Xie, Weicheng, Xi Jia, Linlin Shen, and Meng Yang. "Sparse deep feature learning for facial expression recognition." Pattern Recognition 96 (December 2019): 106966. http://dx.doi.org/10.1016/j.patcog.2019.106966.
Full textDissertations / Theses on the topic "Sparse Deep Learning"
Tavanaei, Amirhossein. "Spiking Neural Networks and Sparse Deep Learning." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10807940.
This dissertation proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically spiking convolutional neural networks (CNNs). Training a multi-layer spiking network is difficult because the output spikes have no derivatives, so the backpropagation commonly used for non-spiking networks does not apply directly. Our methods use novel versions of the brain-like, local learning rule known as spike-timing-dependent plasticity (STDP) that incorporate supervised and unsupervised components. We start from conventional learning methods and convert them into spatio-temporally local rules suited to SNNs.
The training uses two components: unsupervised feature extraction and supervised classification. The first component consists of new STDP rules for spike-based representation learning that train convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike pattern classification that approximate gradient descent by combining STDP and anti-STDP rules; specifically, the supervised model approximates gradient descent using temporally local STDP rules. Stacking these components yields a novel sparse, spiking deep learning model, a variant of spiking CNNs built from integrate-and-fire (IF) neurons with performance comparable to state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our architecture is the only spiking CNN that provides bio-inspired STDP rules in a hierarchy of feature extraction and classification within an entirely spike-based framework.
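The STDP rules above adjust each synapse from the relative timing of pre- and postsynaptic spikes. As a rough illustration only (not the dissertation's exact supervised/unsupervised rules), a classic pair-based STDP update can be sketched as follows; the function name and constants are hypothetical:

```python
import numpy as np

def stdp_update(w, pre_spike_times, post_spike_time,
                a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate synapses whose presynaptic spike
    precedes the postsynaptic spike, depress those that follow it.
    The update magnitude decays exponentially with the timing gap."""
    w = w.copy()
    for i, t_pre in enumerate(pre_spike_times):
        dt = post_spike_time - t_pre
        if dt > 0:   # pre fires before post -> potentiation
            w[i] += a_plus * np.exp(-dt / tau)
        else:        # pre fires after post -> depression
            w[i] -= a_minus * np.exp(dt / tau)
    return np.clip(w, 0.0, 1.0)  # keep weights in a bounded range

w0 = np.full(3, 0.5)
# Post spike at t=20: the first two presynaptic spikes precede it, the last follows.
w1 = stdp_update(w0, pre_spike_times=[5.0, 15.0, 25.0], post_spike_time=20.0)
```

An anti-STDP rule, as combined with STDP in the supervised component, would simply flip the signs of the two branches.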
Beretta, Davide. "Experience Replay in Sparse Rewards Problems using Deep Reinforcement Techniques." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17531/.
Benini, Francesco. "Predicting death in games with deep reinforcement learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20755/.
Full textVekhande, Swapnil Sudhir. "Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90182.
Full textMaster of Science
Computed tomography (CT) is a widely used imaging technique thanks to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It exposes the subject to X-ray radiation, which carries a cancer risk, yet the radiation dose is critical to image quality and subsequent diagnosis. Reconstructing images from only a small number of projections is therefore an open research problem. Deep learning techniques have already revolutionized various computer vision applications. Here, we use a deep learning-based method to fill in missing data in highly sparse CT sinograms. The results show that it outperforms standard linear interpolation-based methods while improving image quality.
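The linear-interpolation baseline that such learned sinogram inpainting is compared against can be sketched per detector channel; the function name and toy sinogram below are illustrative, not taken from the thesis:

```python
import numpy as np

def interpolate_sinogram(sparse_sino, sparse_angles, full_angles):
    """Fill in missing projection angles by per-detector linear
    interpolation along the angular axis (the standard baseline)."""
    n_det = sparse_sino.shape[1]
    full = np.empty((len(full_angles), n_det))
    for d in range(n_det):  # interpolate each detector column independently
        full[:, d] = np.interp(full_angles, sparse_angles, sparse_sino[:, d])
    return full

sparse_angles = np.linspace(0, np.pi, 16, endpoint=False)  # 16 acquired views
full_angles = np.linspace(0, np.pi, 64, endpoint=False)    # target: 64 views
sino = np.sin(sparse_angles)[:, None] * np.ones((16, 8))   # toy 16x8 sinogram
dense = interpolate_sinogram(sino, sparse_angles, full_angles)
```

A learned interpolator replaces the per-channel `np.interp` with a network trained on full-view sinograms, which is what allows it to exploit structure across detectors and angles.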
Hoori, Ammar O. "Multi-Column Neural Networks and Sparse Coding: Novel Techniques in Machine Learning." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5743.
Full textBonfiglioli, Luca. "Identificazione efficiente di reti neurali sparse basata sulla Lottery Ticket Hypothesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Pawlowski, Filip Igor. "High-performance dense tensor and sparse matrix kernels for machine learning." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.
In this thesis, we develop high-performance algorithms for computations involving dense tensors and sparse matrices. We address kernel operations useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use and improve data locality, and hence cache reuse, of the kernel operations, designing both sequential and shared-memory parallel algorithms. The first part of the thesis focuses on dense tensor kernels: tensor–vector multiplication (TVM), tensor–matrix multiplication (TMM), and tensor–tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and is a building block for many algorithms, so we focus on it and develop a data structure together with sequential and parallel algorithms. The proposed data structure stores the tensor as blocks ordered along the space-filling Morton curve (or Z-curve). The key idea is to divide the tensor into blocks small enough to fit in cache and store them in Morton order, while keeping a simple multi-dimensional order on the individual elements within each block; high-performance BLAS routines can then be used as microkernels for each block. Our experiments not only demonstrate up to 18% higher performance than state-of-the-art variants, but also show that the proposed approach reduces the sample standard deviation of the TVM across the d possible modes by 71%. The data structure extends naturally to other tensor kernels, yielding up to 38% higher performance for the higher-order power method. We also investigate shared-memory parallel TVM algorithms that use the proposed data structure.
Several alternative parallel algorithms were characterized theoretically and implemented with OpenMP for experimental comparison. Our results on systems with up to 8 sockets show near-peak performance of the proposed algorithm for 2-, 3-, 4-, and 5-dimensional tensors. The second part of the thesis explores sparse computations in neural networks, focusing on high-performance sparse deep inference: using sparse DNNs to classify a batch of data elements that form, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix–sparse matrix multiplication (SpGEMM) repeated at each layer of the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained with hypergraph partitioning software; this variant synchronizes at layer boundaries with barriers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between layers and use a weak synchronization module to hide load imbalance and synchronization costs. On the large network data from the IEEE HPEC 2019 Graph Challenge, our techniques achieve up to 2× speed-up over the baseline on shared-memory systems.
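The Morton (Z-curve) ordering of blocks described in this abstract comes down to bit-interleaving the block coordinates; a minimal sketch follows, where `morton_index` is a hypothetical helper rather than code from the thesis:

```python
def morton_index(coords, bits=10):
    """Interleave the bits of each coordinate to get the Z-curve index.
    Nearby multi-dimensional coordinates map to nearby 1-D indices,
    which is what gives the blocked layout its cache friendliness."""
    idx = 0
    d = len(coords)
    for b in range(bits):
        for j, c in enumerate(coords):
            idx |= ((c >> b) & 1) << (b * d + j)
    return idx

# Order the 16 blocks of a conceptual 4x4 blocked matrix along the Z-curve.
blocks = sorted(((i, j) for i in range(4) for j in range(4)),
                key=morton_index)
```

In the thesis's layout, each such block is small enough to fit in cache and is handed to a dense BLAS microkernel, with the Morton order deciding the traversal of blocks.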
Abbasnejad, Iman. "Learning spatio-temporal features for efficient event detection." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121184/1/Iman_Abbasnejad_Thesis.pdf.
Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.
In this work we study a depth estimation problem in which a narrow field-of-view depth image and a wide field-of-view RGB image are provided to a deep network tasked with predicting depth for the entire RGB image. We show that providing the depth image to the network improves results for the region outside the provided depth, compared to an existing method that uses only an RGB image to predict depth. We examine several architectures and depth-image field-of-view sizes, and study the effect of adding noise and lowering the resolution of the depth image. We show that a larger depth-image field of view yields a greater advantage, and that the model's accuracy decreases with distance from the provided depth. Our results also show that models using the noisy, low-resolution depth performed on par with models using the unmodified depth.
Moreau, Thomas. "Représentations Convolutives Parcimonieuses – application aux signaux physiologiques et interprétabilité de l'apprentissage profond" [Sparse convolutional representations: application to physiological signals and interpretability of deep learning]. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN054/document.
Convolutional representations extract recurrent patterns that lead to the discovery of local structures in a set of signals. They are well suited to analyzing physiological signals, which require interpretable representations in order to understand the relevant information. Moreover, these representations can be linked to deep learning models as a way to bring interpretability to their internal representations. In this dissertation, we describe recent advances on both computational and theoretical aspects of these models. First, we show that Singular Spectrum Analysis can be used to compute convolutional representations. This representation is dense, and we describe an automated procedure to improve its interpretability. We also propose an asynchronous algorithm, called DICOD, based on greedy coordinate descent, to solve convolutional sparse coding for long signals; the algorithm achieves super-linear acceleration. In a second part, we focus on the link between representations and neural networks. An extra training step for deep learning, called post-training, is introduced to boost the performance of a trained network by ensuring the last layer is optimal. We then study the mechanisms that allow sparse coding algorithms to be accelerated with neural networks, showing that they are linked to a factorization of the Gram matrix of the dictionary. Finally, we illustrate the relevance of convolutional representations for physiological signals: convolutional dictionary learning is used to summarize human walk signals, and Singular Spectrum Analysis is used to remove gaze movement from young infants' oculometric recordings.
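The sparse coding problem underlying this line of work is classically solved by iterative shrinkage-thresholding (ISTA), the baseline that learned accelerations (and the Gram-matrix factorization mentioned above) speed up. A minimal NumPy sketch, with hypothetical names and parameters, and for the non-convolutional case for brevity:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=100):
    """ISTA: minimize 0.5*||x - D z||^2 + lam*||z||_1 over z."""
    L = np.linalg.norm(D, ord=2) ** 2  # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)       # gradient of the smooth term
        u = z - grad / L               # gradient step
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms
z_true = np.zeros(50)
z_true[[3, 17]] = [1.5, -2.0]          # a 2-sparse ground-truth code
x = D @ z_true
z_hat = ista(D, x, lam=0.05, n_iter=500)
```

Learned variants such as LISTA replace the fixed matrices in the gradient step with trained ones, which is where the Gram matrix `D.T @ D` and its factorization enter the analysis.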
Books on the topic "Sparse Deep Learning"
Huang, Thomas S. Deep Learning Through Sparse and Low-Rank Modeling. Elsevier, 2019. http://dx.doi.org/10.1016/c2017-0-00154-4.
Mehta, Vaishali, Dolly Sharma, Monika Mangla, Anita Gehlot, Rajesh Singh, and Sergio Márquez Sánchez, eds. Challenges and Opportunities for Deep Learning Applications in Industry 4.0. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/97898150360601220101.
Full textBook chapters on the topic "Sparse Deep Learning"
Moons, Bert, Daniel Bankman, and Marian Verhelst. "ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing." In Embedded Deep Learning, 115–51. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99223-5_5.
Cheng, Xiangyi, Huaping Liu, Xinying Xu, and Fuchun Sun. "Denoising Deep Extreme Learning Machines for Sparse Representation." In Proceedings in Adaptation, Learning and Optimization, 235–47. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28373-9_20.
Suk, Heung-Il, and Dinggang Shen. "Deep Ensemble Sparse Regression Network for Alzheimer’s Disease Diagnosis." In Machine Learning in Medical Imaging, 113–21. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47157-0_14.
Li, Shicheng, Xiaoguo Yang, Haoming Zhang, Chaoyu Zheng, and Yugen Yi. "DSGRAE: Deep Sparse Graph Regularized Autoencoder for Anomaly Detection." In Machine Learning for Cyber Security, 254–65. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20099-1_21.
Wang, Xin, Zhiqiang Hou, Wangsheng Yu, and Zefenfen Jin. "Online Fast Deep Learning Tracker Based on Deep Sparse Neural Networks." In Lecture Notes in Computer Science, 186–98. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71607-7_17.
Huang, Junzhou, and Zheng Xu. "Cell Detection with Deep Learning Accelerated by Sparse Kernel." In Deep Learning and Convolutional Neural Networks for Medical Image Computing, 137–57. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1_9.
Cheng, Eric-Juwei, Mukesh Prasad, Deepak Puthal, Nabin Sharma, Om Kumar Prasad, Po-Hao Chin, Chin-Teng Lin, and Michael Blumenstein. "Deep Learning Based Face Recognition with Sparse Representation Classification." In Neural Information Processing, 665–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_67.
Cao, Michael C., Jonathan Schwartz, Huihuo Zheng, Yi Jiang, Robert Hovden, and Yimo Han. "Atomic Defect Identification with Sparse Sampling and Deep Learning." In Driving Scientific and Engineering Discoveries Through the Integration of Experiment, Big Data, and Modeling and Simulation, 455–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96498-6_28.
Fakhfakh, Mohamed, Bassem Bouaziz, Lotfi Chaari, and Faiez Gargouri. "Efficient Bayesian Learning of Sparse Deep Artificial Neural Networks." In Lecture Notes in Computer Science, 78–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01333-1_7.
Wang, Bo, Sheng Ma, Yuan Yuan, Yi Dai, Wei Jiang, Xiang Hou, Xiao Yi, and Rui Xu. "SparG: A Sparse GEMM Accelerator for Deep Learning Applications." In Algorithms and Architectures for Parallel Processing, 529–47. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22677-9_28.
Full textConference papers on the topic "Sparse Deep Learning"
Tang, Jianhao, Zhenni Li, Shengli Xie, Shuxue Ding, Shaolong Zheng, and Xueni Chen. "Deep sparse representation via deep dictionary learning for reinforcement learning." In 2022 41st Chinese Control Conference (CCC). IEEE, 2022. http://dx.doi.org/10.23919/ccc55666.2022.9902583.
Gale, Trevor, Matei Zaharia, Cliff Young, and Erich Elsen. "Sparse GPU Kernels for Deep Learning." In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2020. http://dx.doi.org/10.1109/sc41405.2020.00021.
Sokar, Ghada, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, and Peter Stone. "Dynamic Sparse Training for Deep Reinforcement Learning." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/477.
Saponara, Sergio, Abdussalam Elhanashi, and Alessio Gagliardi. "Reconstruct fingerprint images using deep learning and sparse autoencoder algorithms." In Real-Time Image Processing and Deep Learning 2021, edited by Nasser Kehtarnavaz and Matthias F. Carlsohn. SPIE, 2021. http://dx.doi.org/10.1117/12.2585707.
Radlak, Krystian, Michal Szczepankiewicz, and Bogdan Smolka. "Defending against sparse adversarial attacks using impulsive noise reduction filters." In Real-Time Image Processing and Deep Learning 2021, edited by Nasser Kehtarnavaz and Matthias F. Carlsohn. SPIE, 2021. http://dx.doi.org/10.1117/12.2587999.
He, Yunlong, Koray Kavukcuoglu, Yun Wang, Arthur Szlam, and Yanjun Qi. "Unsupervised Feature Learning by Deep Sparse Coding." In Proceedings of the 2014 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2014. http://dx.doi.org/10.1137/1.9781611973440.103.
Liang, Faming. "Consistent Sparse Deep Learning: Theory and Computation." In 3rd International Conference on Statistics: Theory and Applications (ICSTA'21). Avestia Publishing, 2021. http://dx.doi.org/10.11159/icsta21.004.
Wen, Weijing, Fan Yang, Yangfeng Su, Dian Zhou, and Xuan Zeng. "Learning Sparse Patterns in Deep Neural Networks." In 2019 IEEE 13th International Conference on ASIC (ASICON). IEEE, 2019. http://dx.doi.org/10.1109/asicon47005.2019.8983429.
Xu, Shiyao, Jingfei Jiang, Jinwei Xu, Chaorun Liu, Yuanhong He, Xiaohang Liu, and Lei Gao. "Sparkle: A High Efficient Sparse Matrix Multiplication Accelerator for Deep Learning." In 2022 IEEE 40th International Conference on Computer Design (ICCD). IEEE, 2022. http://dx.doi.org/10.1109/iccd56317.2022.00077.
Hamza, Syed A., and Moeness G. Amin. "Learning Sparse Array Capon Beamformer Design Using Deep Learning Approach." In 2020 IEEE Radar Conference (RadarConf20). IEEE, 2020. http://dx.doi.org/10.1109/radarconf2043947.2020.9266359.
Full textReports on the topic "Sparse Deep Learning"
Zhao, Y., C. Liao, and X. Shen. Exploring Deep Learning and Sparse Matrix Format Selection. Office of Scientific and Technical Information (OSTI), March 2018. http://dx.doi.org/10.2172/1426119.