Academic literature on the topic 'Deep learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep learning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
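The workflow above amounts to rendering the same bibliographic metadata in different citation styles. A minimal illustrative sketch of that idea, using simplified field names and formatting rules that are assumptions for illustration, not the site's actual implementation:

```python
# Render one bibliographic record in two citation styles.
# The record fields and the style templates below are simplified assumptions.

def format_apa(rec):
    """APA-like: Author (Year). Title. Journal, Volume(Issue), Pages."""
    return (f"{rec['author']} ({rec['year']}). {rec['title']}. "
            f"{rec['journal']}, {rec['volume']}({rec['issue']}), {rec['pages']}.")

def format_mla(rec):
    """MLA-like: Author. "Title." Journal, vol. V, no. N, Year, pp. Pages."""
    return (f"{rec['author']}. \"{rec['title']}.\" {rec['journal']}, "
            f"vol. {rec['volume']}, no. {rec['issue']}, {rec['year']}, "
            f"pp. {rec['pages']}.")

# Metadata of the first journal article in the list below.
record = {
    "author": "Wang, Yipu, and Stuart Perrin",
    "year": 2024,
    "title": "Deep Chinese Teaching and Learning Model Based on Deep Learning",
    "journal": "International Journal of Languages, Literature and Linguistics",
    "volume": 10, "issue": 1, "pages": "32-35",
}

print(format_apa(record))
print(format_mla(record))
```

The same record is rendered twice; only the template changes, which is why one 'Add to bibliography' button can serve every style.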
Journal articles on the topic "Deep learning"
Wang, Yipu, and Stuart Perrin. "Deep Chinese Teaching and Learning Model Based on Deep Learning." International Journal of Languages, Literature and Linguistics 10, no. 1 (2024): 32–35. http://dx.doi.org/10.18178/ijlll.2024.10.1.479.
Chagas, Edgar Thiago De Oliveira. "Deep Learning e suas aplicações na atualidade." Revista Científica Multidisciplinar Núcleo do Conhecimento 04, no. 05 (May 8, 2019): 05–26. http://dx.doi.org/10.32749/nucleodoconhecimento.com.br/administracao/deep-learning.
Jaiswal, Tarun, and Sushma Jaiswal. "Deep Learning in Medicine." International Journal of Trend in Scientific Research and Development 3, no. 4 (June 30, 2019): 212–17. http://dx.doi.org/10.31142/ijtsrd23641.
Jaiswal, Tarun, and Sushma Jaiswal. "Deep Learning Based Pain Treatment." International Journal of Trend in Scientific Research and Development 3, no. 4 (June 30, 2019): 193–211. http://dx.doi.org/10.31142/ijtsrd23639.
Athani Samarth Kumar, Abusufiyan. "Cryptocurrency Prediction using Deep Learning." International Journal of Science and Research (IJSR) 12, no. 3 (March 5, 2023): 1253–57. http://dx.doi.org/10.21275/sr23319215511.
Bhadiyadra, Yash. "Object Detection with Deep Learning." International Journal of Science and Research (IJSR) 12, no. 7 (July 5, 2023): 1300–1304. http://dx.doi.org/10.21275/mr23717204529.
P C, Haris, and Srikanth V. "Smart Eye Using Deep Learning." International Journal of Research Publication and Reviews 5, no. 3 (March 2, 2024): 467–70. http://dx.doi.org/10.55248/gengpi.5.0324.0615.
Chagas, Edgar Thiago De Oliveira. "Deep Learning and its applications today." Revista Científica Multidisciplinar Núcleo do Conhecimento 04, no. 05 (May 8, 2019): 05–26. http://dx.doi.org/10.32749/nucleodoconhecimento.com.br/business-administration/deep-learning-2.
Zitar, Raed Abu, Ammar EL-Hassan, and Oraib AL-Sahlee. "Deep Learning Recommendation System for Course Learning Outcomes Assessment." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 1491–78. http://dx.doi.org/10.5373/jardcs/v11sp10/20192993.
Akgül, İsmail, and Yıldız Aydın. "Object Recognition with Deep Learning and Machine Learning Methods." NWSA Academic Journals 17, no. 4 (October 29, 2022): 54–61. http://dx.doi.org/10.12739/nwsa.2022.17.4.2a0189.
Dissertations / Theses on the topic "Deep learning"
Dufourq, Emmanuel. "Evolutionary deep learning." Doctoral thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/30357.
He, Fengxiang. "Theoretical Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25674.
Fraccaroli, Michele. "Explainable Deep Learning." Doctoral thesis, Università degli studi di Ferrara, 2023. https://hdl.handle.net/11392/2503729.
Full textThe great success that Machine and Deep Learning has achieved in areas that are strategic for our society such as industry, defence, medicine, etc., has led more and more realities to invest and explore the use of this technology. Machine Learning and Deep Learning algorithms and learned models can now be found in almost every area of our lives. From phones to smart home appliances, to the cars we drive. So it can be said that this pervasive technology is now in touch with our lives, and therefore we have to deal with it. This is why eXplainable Artificial Intelligence or XAI was born, one of the research trends that are currently in vogue in the field of Deep Learning and Artificial Intelligence. The idea behind this line of research is to make and/or design the new Deep Learning algorithms so that they are interpretable and comprehensible to humans. This necessity is due precisely to the fact that neural networks, the mathematical model underlying Deep Learning, act like a black box, making the internal reasoning they carry out to reach a decision incomprehensible and untrustable to humans. As we are delegating more and more important decisions to these mathematical models, it is very important to be able to understand the motivations that lead these models to make certain decisions. This is because we have integrated them into the most delicate processes of our society, such as medical diagnosis, autonomous driving or legal processes. The work presented in this thesis consists in studying and testing Deep Learning algorithms integrated with symbolic Artificial Intelligence techniques. This integration has a twofold purpose: to make the models more powerful, enabling them to carry out reasoning or constraining their behaviour in complex situations, and to make them interpretable. 
The thesis focuses on two macro topics: the explanations obtained through neuro-symbolic integration and the exploitation of explanations to make the Deep Learning algorithms more capable or intelligent. The neuro-symbolic integration was addressed twice, by experimenting with the integration of symbolic algorithms with neural networks. A first approach was to create a system to guide the training of the networks themselves in order to find the best combination of hyper-parameters to automate the design of these networks. This is done by integrating neural networks with Probabilistic Logic Programming (PLP). This integration makes it possible to exploit probabilistic rules tuned by the behaviour of the networks during the training phase or inherited from the experience of experts in the field. These rules are triggered when a problem occurs during network training. This generates an explanation of what was done to improve the training once a particular issue was identified. A second approach was to make probabilistic logic systems cooperate with neural networks for medical diagnosis on heterogeneous data sources. The second topic addressed in this thesis concerns the exploitation of explanations. In particular, the explanations one can obtain from neural networks are used in order to create attention modules that help in constraining and improving the performance of neural networks. All works developed during the PhD and described in this thesis have led to the publications listed in Chapter 14.2.
Halle, Alex, and Alexander Hasse. "Topologieoptimierung mittels Deep Learning." Technische Universität Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A34343.
Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.
Full textRecent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transiting from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performances for object and scene images. The learned dictionaries are diverse and non-redundant. 
The speed of inference is also high. From this, a further optimization is performed for the subsequent pooling step. This is done by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area
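The differentiable pooling mentioned in the abstract above can be illustrated with a small sketch: a scalar parameter smoothly interpolates between average and max pooling, and because the operation is differentiable in that parameter, it can be tuned by error backpropagation. This is a generic softmax-weighted parameterization chosen for illustration, not necessarily the thesis's actual formulation:

```python
import numpy as np

def soft_pool(x, beta):
    """Differentiable pooling over a vector of descriptor responses x.

    beta = 0   -> plain average pooling
    beta large -> approaches max pooling
    The softmax weights vary smoothly with beta, so the gradient of the
    output with respect to beta exists and beta can be learned by
    gradient descent alongside the network weights.
    """
    w = np.exp(beta * x)   # unnormalized softmax weights over the responses
    w /= w.sum()           # normalize to a convex combination
    return float(np.dot(w, x))

responses = np.array([0.1, 0.4, 0.9, 0.2])
print(soft_pool(responses, beta=0.0))    # the mean of the responses, 0.4
print(soft_pool(responses, beta=50.0))   # close to the max response, 0.9
```

A single learned `beta` per pooling region lets backpropagation decide, per dictionary codeword, where on the average-to-max spectrum the pooling should sit.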
Geirsson, Gunnlaugur. "Deep learning exotic derivatives." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-430410.
Wülfing, Jan [Verfasser], and Martin [Akademischer Betreuer] Riedmiller. "Stable deep reinforcement learning." Freiburg: Universität, 2019. http://d-nb.info/1204826188/34.
White, Martin. "Deep Learning Software Repositories." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1516639667.
Sun, Haozhe. "Modularity in deep learning." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG090.
Full textThis Ph.D. thesis is dedicated to enhancing the efficiency of Deep Learning by leveraging the principle of modularity. It contains several main contributions: a literature survey on modularity in Deep Learning; the introduction of OmniPrint and Meta-Album, tools that facilitate the investigation of data modularity; case studies examining the effects of episodic few-shot learning, an instance of data modularity; a modular evaluation mechanism named LTU for assessing privacy risks; and the method RRR for reusing pre-trained modular models to create more compact versions. Modularity, which involves decomposing an entity into sub-entities, is a prevalent concept across various disciplines. This thesis examines modularity across three axes of Deep Learning: data, task, and model. OmniPrint and Meta-Album assist in benchmarking modular models and exploring data modularity's impacts. LTU ensures the reliability of the privacy assessment. RRR significantly enhances the utilization efficiency of pre-trained modular models. Collectively, this thesis bridges the modularity principle with Deep Learning and underscores its advantages in selected fields of Deep Learning, contributing to more resource-efficient Artificial Intelligence
Arnold, Ludovic. "Learning Deep Representations: Toward a better understanding of the deep learning paradigm." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00842447.
Books on the topic "Deep learning"
Saefken, Benjamin, Alexander Silbersdorff, and Christoph Weisser, eds. Learning deep. Göttingen: Göttingen University Press, 2020. http://dx.doi.org/10.17875/gup2020-1338.
Bishop, Christopher M., and Hugh Bishop. Deep Learning. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-45468-4.
Kruse, René-Marcel, Benjamin Säfken, Alexander Silbersdorff, and Christoph Weisser, eds. Learning Deep Textwork. Göttingen: Göttingen University Press, 2021. http://dx.doi.org/10.17875/gup2021-1608.
Rodriguez, Andres. Deep Learning Systems. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-031-01769-8.
Fergus, Paul, and Carl Chalmers. Applied Deep Learning. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04420-5.
Calin, Ovidiu. Deep Learning Architectures. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3.
El-Amir, Hisham, and Mahmoud Hamdy. Deep Learning Pipeline. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5349-6.
Matsushita, Kayo, ed. Deep Active Learning. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-5660-4.
Michelucci, Umberto. Applied Deep Learning. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3790-8.
Moons, Bert, Daniel Bankman, and Marian Verhelst. Embedded Deep Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-99223-5.
Book chapters on the topic "Deep learning"
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Recurrent Neural Networks." In Deep Learning, 157–83. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-7.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Generative Models." In Deep Learning, 209–25. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-9.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Intel OpenVino: A Must-Know Deep Learning Toolkit." In Deep Learning, 245–60. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-11.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Machine Learning: The Fundamentals." In Deep Learning, 29–64. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-3.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Interview Questions and Answers." In Deep Learning, 261–88. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-12.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "The Deep Learning Framework." In Deep Learning, 65–79. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-4.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "CNN Architectures: An Evolution." In Deep Learning, 121–55. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-6.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Introduction to Deep Learning." In Deep Learning, 1–11. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-1.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Autoencoders." In Deep Learning, 185–207. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-8.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "The Tools and the Prerequisites." In Deep Learning, 13–28. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-2.
Conference papers on the topic "Deep learning"
"DEEP-ML 2019 Program Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00007.
"DEEP-ML 2019 Organizing Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00006.
"Keynote Abstracts." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00008.
"[Title page i]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00001.
"[Title page iii]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00002.
"[Copyright notice]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00003.
"Table of contents." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00004.
"Message from the DEEP-ML 2019 Chairs." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00005.
Kaskavalci, Halil Can, and Sezer Goren. "A Deep Learning Based Distributed Smart Surveillance Architecture using Edge and Cloud Computing." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00009.
Lee, Kyu Beom, and Hyu Soung Shin. "An Application of a Deep Learning Algorithm for Automatic Detection of Unexpected Accidents Under Bad CCTV Monitoring Conditions in Tunnels." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00010.
Reports on the topic "Deep learning"
Catanach, Thomas, and Jed Duersch. Efficient Generalizable Deep Learning. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1760400.
Dell, Melissa. Deep Learning for Economists. Cambridge, MA: National Bureau of Economic Research, August 2024. http://dx.doi.org/10.3386/w32768.
Groh, Micah. NOvA Reconstruction using Deep Learning. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1462092.
Geiss, Andrew, Joseph Hardin, Sam Silva, William Jr., Adam Varble, and Jiwen Fan. Deep Learning for Ensemble Forecasting. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769692.
Harris, James, Shannon Kinkead, Dylan Fox, and Yang Ho. Continual Learning for Pattern Recognizers using Neurogenesis Deep Learning. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1855019.
Draelos, Timothy John, Nadine E. Miner, Christopher C. Lamb, Craig Michael Vineyard, Kristofor David Carlson, Conrad D. James, and James Bradley Aimone. Neurogenesis Deep Learning: Extending deep networks to accommodate new classes. Office of Scientific and Technical Information (OSTI), December 2016. http://dx.doi.org/10.2172/1505351.
Balaji, Praveen. Detecting Stellar Streams through Deep Learning. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1637622.
Li, Li. Deep Learning for Hydro-Biogeochemistry Processes. Office of Scientific and Technical Information (OSTI), March 2021. http://dx.doi.org/10.2172/1769693.
Eydenberg, Michael, Lisa Batsch-Smith, Charles Bice, Logan Blakely, Michael Bynum, Fani Boukouvala, Anya Castillo, et al. Resilience Enhancements through Deep Learning Yields. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1890044.
Singh, Rahul. Reconstructing material microstructures using deep learning. Ames (Iowa): Iowa State University, January 2019. http://dx.doi.org/10.31274/cc-20240624-1194.