Selection of scientific literature on the topic "Sparse deep neural networks"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Sparse deep neural networks".
Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically according to the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read an online annotation of the work if the relevant parameters are available in the metadata.
Journal articles on the topic "Sparse deep neural networks"
Scardapane, Simone, Danilo Comminiello, Amir Hussain, and Aurelio Uncini. „Group sparse regularization for deep neural networks“. Neurocomputing 241 (June 2017): 81–89. http://dx.doi.org/10.1016/j.neucom.2017.02.029.
Zang, Ke, Wenqi Wu, and Wei Luo. „Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks“. Sensors 21, no. 19 (September 25, 2021): 6410. http://dx.doi.org/10.3390/s21196410.
Wu, Kailun, Yiwen Guo, and Changshui Zhang. „Compressing Deep Neural Networks With Sparse Matrix Factorization“. IEEE Transactions on Neural Networks and Learning Systems 31, no. 10 (October 2020): 3828–38. http://dx.doi.org/10.1109/tnnls.2019.2946636.
Gangopadhyay, Briti, Pallab Dasgupta, and Soumyajit Dey. „Safety Aware Neural Pruning for Deep Reinforcement Learning (Student Abstract)“. Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16212–13. http://dx.doi.org/10.1609/aaai.v37i13.26966.
Petschenig, Horst, and Robert Legenstein. „Quantized rewiring: hardware-aware training of sparse deep neural networks“. Neuromorphic Computing and Engineering 3, no. 2 (May 26, 2023): 024006. http://dx.doi.org/10.1088/2634-4386/accd8f.
Belay, Kaleab. „Gradient and Mangitude Based Pruning for Sparse Deep Neural Networks“. Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13126–27. http://dx.doi.org/10.1609/aaai.v36i11.21699.
Kaur, Mandeep, and Pradip Kumar Yadava. „A Review on Classification of Images with Convolutional Neural Networks“. International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (July 31, 2023): 658–63. http://dx.doi.org/10.22214/ijraset.2023.54704.
Bi, Jia, and Steve R. Gunn. „Sparse Deep Neural Network Optimization for Embedded Intelligence“. International Journal on Artificial Intelligence Tools 29, no. 03n04 (June 2020): 2060002. http://dx.doi.org/10.1142/s0218213020600027.
Gallicchio, Claudio, and Alessio Micheli. „Fast and Deep Graph Neural Networks“. Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3898–905. http://dx.doi.org/10.1609/aaai.v34i04.5803.
Tartaglione, Enzo, Andrea Bragagnolo, Attilio Fiandrotti, and Marco Grangetto. „LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks“. Neural Networks 146 (February 2022): 230–37. http://dx.doi.org/10.1016/j.neunet.2021.11.029.
Dissertations on the topic "Sparse deep neural networks"
Tavanaei, Amirhossein. „Spiking Neural Networks and Sparse Deep Learning“. Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10807940.
This document proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically, spiking convolutional neural networks (CNNs). Training a multi-layer spiking network poses difficulties because the output spikes do not have derivatives and the commonly used backpropagation method for non-spiking networks is not easily applied. Our methods use novel versions of the brain-like, local learning rule named spike-timing-dependent plasticity (STDP) that incorporates supervised and unsupervised components. Our method starts with conventional learning methods and converts them to spatio-temporally local rules suited for SNNs.
The training uses two components for unsupervised feature extraction and supervised classification. The first component consists of new STDP rules for spike-based representation learning that train convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike pattern classification via an approximation to gradient descent obtained by combining STDP and anti-STDP rules. Specifically, the STDP-based supervised learning model approximates gradient descent by using temporally local STDP rules. Stacking these components implements a novel sparse, spiking deep learning model. Our spiking deep learning model is categorized as a variation of spiking CNNs of integrate-and-fire (IF) neurons with performance comparable to the state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our network architecture is the only spiking CNN which provides bio-inspired STDP rules in a hierarchy of feature extraction and classification in an entirely spike-based framework.
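As an annotation to the abstract above: the supervised STDP / anti-STDP idea can be made concrete in a few lines. The following minimal numpy sketch shows a temporally local update in which active inputs of neurons that should have fired are potentiated (STDP) and active inputs of neurons that fired spuriously are depressed (anti-STDP); the binary spike encoding, learning rate, and toy dimensions are illustrative assumptions, not the thesis' actual rules or parameters.

import numpy as np

def supervised_stdp_step(w, pre_spikes, post_spikes, target_spikes, lr=1e-3):
    """One temporally local supervised STDP / anti-STDP update (illustrative sketch).

    w             : (n_post, n_pre) synaptic weights
    pre_spikes    : (n_pre,)  0/1 presynaptic activity in the current time window
    post_spikes   : (n_post,) 0/1 observed postsynaptic spikes
    target_spikes : (n_post,) 0/1 desired postsynaptic spikes
    """
    # Positive error -> neuron should have fired: potentiate its active inputs (STDP).
    # Negative error -> neuron fired spuriously: depress its active inputs (anti-STDP).
    error = target_spikes.astype(float) - post_spikes.astype(float)
    dw = lr * np.outer(error, pre_spikes.astype(float))
    return w + dw

# Toy usage: 3 output neurons, 5 inputs.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 5))
w = supervised_stdp_step(w,
                         pre_spikes=np.array([1, 0, 1, 1, 0]),
                         post_spikes=np.array([0, 1, 0]),
                         target_spikes=np.array([1, 0, 0]))

Because the update only uses pre- and postsynaptic activity within the current window, it stays local in space and time, which is what makes it a plausible spiking proxy for an error-gradient step.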
Le, Quoc Tung. „Algorithmic and theoretical aspects of sparse deep neural networks“. Electronic Thesis or Diss., Lyon, École normale supérieure, 2023. http://www.theses.fr/2023ENSL0105.
Sparse deep neural networks offer a compelling practical opportunity to reduce the cost of training, inference and storage, which are growing exponentially in the state of the art of deep learning. In this work, we introduce an approach to study sparse deep neural networks through the lens of a related problem: sparse matrix factorization, i.e., the problem of approximating a (dense) matrix by the product of (multiple) sparse factors. In particular, we identify and investigate in detail some theoretical and algorithmic aspects of a variant of sparse matrix factorization named fixed support matrix factorization (FSMF), in which the set of non-zero entries of the sparse factors is known. Several fundamental questions about sparse deep neural networks, such as the existence of optimal solutions of the training problem or topological properties of its function space, can be addressed using the results on FSMF. In addition, by applying the results on FSMF, we also study the butterfly parametrization, an approach that consists of replacing (large) weight matrices by products of extremely sparse and structured ones in sparse deep neural networks.
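To make the FSMF problem concrete, here is a minimal numpy sketch that fits two factors with prescribed non-zero patterns by projected gradient descent on the Frobenius error; the masks, step size, iteration count, and random test instance are illustrative assumptions and do not reproduce the algorithms analysed in the thesis.

import numpy as np

def fixed_support_factorization(A, S1, S2, lr=1e-2, n_iter=2000, seed=0):
    """Approximate A by X1 @ X2 where the supports (non-zero patterns) of the
    factors are fixed in advance, via projected gradient descent on
    ||A - X1 X2||_F^2.  Illustrative sketch only.

    A  : (m, n) dense target matrix
    S1 : (m, r) boolean mask of allowed non-zeros in X1
    S2 : (r, n) boolean mask of allowed non-zeros in X2
    """
    rng = np.random.default_rng(seed)
    X1 = rng.normal(scale=0.1, size=S1.shape) * S1
    X2 = rng.normal(scale=0.1, size=S2.shape) * S2
    for _ in range(n_iter):
        R = X1 @ X2 - A          # residual of the current approximation
        G1 = (R @ X2.T) * S1     # gradient w.r.t. X1, projected onto its support
        G2 = (X1.T @ R) * S2     # gradient w.r.t. X2, projected onto its support
        X1 -= lr * G1
        X2 -= lr * G2
    return X1, X2

# Toy example: factor a random 8x8 matrix with random fixed supports.
m, r, n = 8, 4, 8
A = np.random.default_rng(1).normal(size=(m, n))
S1 = np.random.default_rng(2).random((m, r)) < 0.5
S2 = np.random.default_rng(3).random((r, n)) < 0.5
X1, X2 = fixed_support_factorization(A, S1, S2)
print(np.linalg.norm(A - X1 @ X2) / np.linalg.norm(A))

The projection step (multiplying the gradients by the masks) is what keeps the iterates inside the fixed support, which is the defining constraint of FSMF.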
Hoori, Ammar O. „MULTI-COLUMN NEURAL NETWORKS AND SPARSE CODING NOVEL TECHNIQUES IN MACHINE LEARNING“. VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5743.
Der volle Inhalt der QuelleVekhande, Swapnil Sudhir. „Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction“. Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90182.
Master of Science
Computed Tomography is a commonly used imaging technique due to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer. On the other hand, the radiation dose is critical for the image quality and subsequent diagnosis. Thus, image reconstruction using only a small number of projections is an open research problem. Deep learning techniques have already revolutionized various Computer Vision applications. Here, we have used a method that fills in the missing data of highly sparse-view CT acquisitions. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving the image quality.
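As an annotation: the linear interpolation baseline that the abstract compares against is easy to make concrete. The sketch below upsamples a sparse-view sinogram along the angular axis with plain linear interpolation; the angle grids, array shapes, and random data are illustrative assumptions, and the deep network that outperforms this baseline is not reproduced here.

import numpy as np

def interpolate_sinogram_linear(sparse_sino, sparse_angles, full_angles):
    """Baseline: fill in missing projection angles of a sparse-view sinogram by
    linear interpolation along the angular axis.

    sparse_sino   : (n_sparse_angles, n_detectors) measured projections
    sparse_angles : (n_sparse_angles,) acquisition angles in degrees (increasing)
    full_angles   : (n_full_angles,)  target dense angle grid
    """
    n_det = sparse_sino.shape[1]
    dense = np.empty((len(full_angles), n_det))
    for d in range(n_det):  # interpolate each detector column independently
        dense[:, d] = np.interp(full_angles, sparse_angles, sparse_sino[:, d])
    return dense

# Toy usage: 20 measured views upsampled to 180 views, 64 detector bins.
sparse_angles = np.linspace(0, 179, 20)
full_angles = np.arange(180.0)
sino = np.random.default_rng(0).random((20, 64))
dense_sino = interpolate_sinogram_linear(sino, sparse_angles, full_angles)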
Carvalho, Micael. „Deep representation spaces“. Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.
In recent years, Deep Learning techniques have swept the state of the art of many applications of Machine Learning, becoming the new standard approach for them. The architectures issued from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to fully train them from scratch. This thesis' subject of study is the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in dimensionality, redundancy and precision of their features. Our findings reveal a strong degree of robustness, pointing the way to simple and powerful compression schemes. Then, we focus on refining these representations. We choose to adopt a cross-modal multi-task problem, and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated to the same dataset. In order to correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, like ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature space refinement. For the cooking application, in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, finding alternative recipes due to dietary restrictions, and menu planning.
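As an annotation to the abstract above, the positive-loss sampling idea can be sketched very simply: per-task losses are averaged only over the examples whose loss is strictly positive. The task names, the margin-style retrieval loss, and the batch values below are illustrative assumptions and do not correspond to the thesis' actual loss function.

import numpy as np

def masked_multitask_loss(losses_per_task):
    """Combine per-example, per-task losses while keeping only the examples
    that still contribute to learning, i.e. those with a strictly positive loss.

    losses_per_task : dict mapping task name -> (batch,) array of per-example losses
    """
    total = 0.0
    for task, losses in losses_per_task.items():
        active = losses > 0                        # examples that still contribute
        if np.any(active):
            total = total + losses[active].mean()  # average over active examples only
    return total

# Toy usage with a hinge-style retrieval loss and a classification loss.
rng = np.random.default_rng(0)
retrieval = np.maximum(0.0, 0.2 + rng.normal(size=8))   # many terms are exactly zero
classification = rng.random(8)
print(masked_multitask_loss({"retrieval": retrieval, "classification": classification}))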
Pawlowski, Filip Igor. „High-performance dense tensor and sparse matrix kernels for machine learning“. Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use, to improve data locality and hence to improve cache reuse of the kernel operations. We design both sequential and shared-memory parallel algorithms. In the first part of the thesis we focus on dense tensor kernels. Tensor kernels include the tensor--vector multiplication (TVM), tensor--matrix multiplication (TMM), and tensor--tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure and sequential and parallel algorithms for it. We propose a novel data structure which stores the tensor as blocks, which are ordered using the space-filling curve known as the Morton curve (or Z-curve). The key idea consists of dividing the tensor into blocks small enough to fit the cache, and storing them according to the Morton order, while keeping a simple, multi-dimensional order on the individual elements within them. Thus, high-performance BLAS routines can be used as microkernels for each block. We evaluate our techniques in a set of experiments. The results not only demonstrate superior performance of the proposed approach over the state-of-the-art variants by up to 18%, but also show that the proposed approach induces 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure naturally extends to other tensor kernels by demonstrating that it yields up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms which use the proposed data structure. Several alternative parallel algorithms were characterized theoretically and implemented using OpenMP to compare them experimentally. Our results on up to 8-socket systems show near-peak performance for the proposed algorithm for 2, 3, 4, and 5-dimensional tensors. In the second part of the thesis, we explore the sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using sparse DNN networks to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix--sparse matrix multiplication (SpGEMM) repeated for each layer in the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained using hypergraph partitioning software. The model-parallel variant uses barriers to synchronize at layers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between the layers and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to 2x speed-up versus the baseline.
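As an annotation: the sparse inference loop at the heart of the second part can be sketched compactly. The code below pushes a batch of sparse features through a stack of sparse layers as repeated SpGEMMs followed by a ReLU, in the style of the IEEE HPEC Graph Challenge benchmark; the scalar per-layer bias, matrix sizes, densities, and the purely sequential scipy implementation are simplifying assumptions, and none of the thesis' tiled or model-parallel variants are shown.

import numpy as np
import scipy.sparse as sp

def sparse_dnn_inference(X, weights, biases):
    """Sparse deep inference as repeated sparse-sparse matrix products (SpGEMM):
    Y <- ReLU(Y @ W_l + b_l), with the bias applied to stored non-zeros and
    exact zeros pruned after each layer to keep the feature matrix sparse.

    X       : (batch, n_in) sparse CSR feature matrix
    weights : list of sparse CSR layer matrices
    biases  : list of per-layer scalar biases
    """
    Y = X.tocsr()
    for W, b in zip(weights, biases):
        Y = Y @ W                                 # SpGEMM for this layer
        Y.data = np.maximum(Y.data + b, 0.0)      # bias + ReLU on stored entries
        Y.eliminate_zeros()                       # drop entries zeroed by the ReLU
    return Y

# Toy usage: 4 random sparse layers of width 256.
X = sp.random(32, 256, density=0.05, format="csr", random_state=0)
weights = [sp.random(256, 256, density=0.02, format="csr", random_state=i + 1)
           for i in range(4)]
Y = sparse_dnn_inference(X, weights, biases=[-0.1] * 4)
print(Y.shape, Y.nnz)

Pruning exact zeros after every layer is what keeps each SpGEMM cheap, which is why the parallelization and tiling of this product dominates the inference cost discussed in the thesis.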
Thom, Markus [author]. „Sparse neural networks / Markus Thom“. Ulm : Universität Ulm. Fakultät für Ingenieurwissenschaften und Informatik, 2015. http://d-nb.info/1067496319/34.
Liu, Qian. „Deep spiking neural networks“. Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.
Der volle Inhalt der QuelleSquadrani, Lorenzo. „Deep neural networks and thermodynamics“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Mancevo del Castillo Ayala, Diego. „Compressing Deep Convolutional Neural Networks“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.
Books on the topic "Sparse deep neural networks"
Renzetti, N. A., and Jet Propulsion Laboratory (U.S.), eds. The Deep Space Network as an instrument for radio science research: Power system stability applications of artificial neural networks. Pasadena, Calif.: National Aeronautics and Space Administration, Jet Propulsion Laboratory, California Institute of Technology, 1993.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.
Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.
Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.
Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.
Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.
Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.
Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.
Lu, Le, Yefeng Zheng, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1.
Book chapters on the topic "Sparse deep neural networks"
Moons, Bert, Daniel Bankman, and Marian Verhelst. „ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing“. In Embedded Deep Learning, 115–51. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99223-5_5.
Wang, Xin, Zhiqiang Hou, Wangsheng Yu, and Zefenfen Jin. „Online Fast Deep Learning Tracker Based on Deep Sparse Neural Networks“. In Lecture Notes in Computer Science, 186–98. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71607-7_17.
Huang, Zehao, and Naiyan Wang. „Data-Driven Sparse Structure Selection for Deep Neural Networks“. In Computer Vision – ECCV 2018, 317–34. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01270-0_19.
Fakhfakh, Mohamed, Bassem Bouaziz, Lotfi Chaari, and Faiez Gargouri. „Efficient Bayesian Learning of Sparse Deep Artificial Neural Networks“. In Lecture Notes in Computer Science, 78–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01333-1_7.
Dey, Sourya, Yinan Shao, Keith M. Chugg, and Peter A. Beerel. „Accelerating Training of Deep Neural Networks via Sparse Edge Processing“. In Artificial Neural Networks and Machine Learning – ICANN 2017, 273–80. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68600-4_32.
Huang, Junzhou, and Zheng Xu. „Cell Detection with Deep Learning Accelerated by Sparse Kernel“. In Deep Learning and Convolutional Neural Networks for Medical Image Computing, 137–57. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1_9.
Matsumoto, Wataru, Manabu Hagiwara, Petros T. Boufounos, Kunihiko Fukushima, Toshisada Mariyama, and Zhao Xiongxin. „A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices“. In Neural Information Processing, 397–404. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46681-1_48.
Xu, Ting, Bo Zhang, Baoju Zhang, Taekon Kim, and Yi Wang. „Sparse Deep Neural Network Based Directional Modulation Design“. In Lecture Notes in Electrical Engineering, 503–11. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-7545-7_51.
Dai, Qionghai, and Yue Gao. „Neural Networks on Hypergraph“. In Artificial Intelligence: Foundations, Theory, and Algorithms, 121–43. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_7.
Marinò, Giosuè Cataldo, Gregorio Ghidoli, Marco Frasca, and Dario Malchiodi. „Reproducing the Sparse Huffman Address Map Compression for Deep Neural Networks“. In Reproducible Research in Pattern Recognition, 161–66. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76423-4_12.
Conference papers on the topic "Sparse deep neural networks"
Keyvanrad, Mohammad Ali, and Mohammad Mehdi Homayounpour. „Normal sparse Deep Belief Network“. In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280688.
Huang, Sitao, Carl Pearson, Rakesh Nagi, Jinjun Xiong, Deming Chen, and Wen-mei Hwu. „Accelerating Sparse Deep Neural Networks on FPGAs“. In 2019 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2019. http://dx.doi.org/10.1109/hpec.2019.8916419.
Obmann, Daniel, Johannes Schwab, and Markus Haltmeier. „Sparse synthesis regularization with deep neural networks“. In 2019 13th International conference on Sampling Theory and Applications (SampTA). IEEE, 2019. http://dx.doi.org/10.1109/sampta45681.2019.9030953.
Wen, Weijing, Fan Yang, Yangfeng Su, Dian Zhou, and Xuan Zeng. „Learning Sparse Patterns in Deep Neural Networks“. In 2019 IEEE 13th International Conference on ASIC (ASICON). IEEE, 2019. http://dx.doi.org/10.1109/asicon47005.2019.8983429.
Bi, Jia, and Steve R. Gunn. „Sparse Deep Neural Networks for Embedded Intelligence“. In 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2018. http://dx.doi.org/10.1109/ictai.2018.00016.
Jing, How, and Yu Tsao. „Sparse maximum entropy deep belief nets“. In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706749.
Xu, Lie, Chiu-Sing Choy, and Yi-Wen Li. „Deep sparse rectifier neural networks for speech denoising“. In 2016 IEEE International Workshop on Acoustic Signal Enhancement (IWAENC). IEEE, 2016. http://dx.doi.org/10.1109/iwaenc.2016.7602891.
Toth, Laszlo. „Phone recognition with deep sparse rectifier neural networks“. In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639016.
Pironkov, Gueorgui, Stephane Dupont, and Thierry Dutoit. „Investigating sparse deep neural networks for speech recognition“. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, 2015. http://dx.doi.org/10.1109/asru.2015.7404784.
Mitsuno, Kakeru, Junichi Miyao, and Takio Kurita. „Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks“. In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207531.
Reports of organizations on the topic "Sparse deep neural networks"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.
Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.
Landon, Nicholas. A survey of repair strategies for deep neural networks. Ames (Iowa): Iowa State University, August 2022. http://dx.doi.org/10.31274/cc-20240624-93.
Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.
Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.
Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.
Ellis, Austin, Lenz Fielder, Gabriel Popoola, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-Temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2021. http://dx.doi.org/10.2172/1817970.
Chronopoulos, Ilias, Katerina Chrysikou, George Kapetanios, James Mitchell, and Aristeidis Raftapostolos. Deep Neural Network Estimation in Panel Data Models. Federal Reserve Bank of Cleveland, July 2023. http://dx.doi.org/10.26509/frbc-wp-202315.