Academic literature on the topic 'Deep learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Deep learning"

1

Wang, Yipu, and Stuart Perrin. "Deep Chinese Teaching and Learning Model Based on Deep Learning." International Journal of Languages, Literature and Linguistics 10, no. 1 (2024): 32–35. http://dx.doi.org/10.18178/ijlll.2024.10.1.479.

Full text
Abstract:
Deep learning is a more situational and reflective way of learning that integrates complex knowledge and skills into intuitive thinking. As a language that closely combines sound, form, and meaning, Chinese can benefit from teaching and learning approached from the perspective of deep learning, which helps break through the limitations of the current teaching model that focuses only on certain language knowledge or cultural behaviors. This paper combines deep learning with international Chinese education, creates a deep Chinese teaching and learning model comprising "four stages and ten steps", and carries out practical application and a test of teaching effectiveness. The results show that the deep Chinese teaching and learning model is conducive to improving students' discourse presentation ability and comprehensive skills, cultivating learners' autonomous learning ability and intercultural communication competence, and strengthening the integration of language teaching and cultural teaching. At the same time, the model also has some limitations and needs further adjustment and optimization.
APA, Harvard, Vancouver, ISO, and other styles
2

Chagas, Edgar Thiago De Oliveira. "Deep Learning e suas aplicações na atualidade." Revista Científica Multidisciplinar Núcleo do Conhecimento 04, no. 05 (May 8, 2019): 05–26. http://dx.doi.org/10.32749/nucleodoconhecimento.com.br/administracao/deep-learning.

Full text
3

Jaiswal, Tarun, and Sushma Jaiswal. "Deep Learning in Medicine." International Journal of Trend in Scientific Research and Development 3, no. 4 (June 30, 2019): 212–17. http://dx.doi.org/10.31142/ijtsrd23641.

Full text
4

Jaiswal, Tarun, and Sushma Jaiswal. "Deep Learning Based Pain Treatment." International Journal of Trend in Scientific Research and Development 3, no. 4 (June 30, 2019): 193–211. http://dx.doi.org/10.31142/ijtsrd23639.

Full text
5

Athani Samarth Kumar, Abusufiyan. "Cryptocurrency Prediction using Deep Learning." International Journal of Science and Research (IJSR) 12, no. 3 (March 5, 2023): 1253–57. http://dx.doi.org/10.21275/sr23319215511.

Full text
6

Bhadiyadra, Yash. "Object Detection with Deep Learning." International Journal of Science and Research (IJSR) 12, no. 7 (July 5, 2023): 1300–1304. http://dx.doi.org/10.21275/mr23717204529.

Full text
7

P C, Haris, and Srikanth V. "Smart Eye Using Deep Learning." International Journal of Research Publication and Reviews 5, no. 3 (March 2, 2024): 467–70. http://dx.doi.org/10.55248/gengpi.5.0324.0615.

Full text
8

Chagas, Edgar Thiago De Oliveira. "Deep Learning and its applications today." Revista Científica Multidisciplinar Núcleo do Conhecimento 04, no. 05 (May 8, 2019): 05–26. http://dx.doi.org/10.32749/nucleodoconhecimento.com.br/business-administration/deep-learning-2.

Full text
9

Zitar, Raed Abu, Ammar EL-Hassan, and Oraib AL-Sahlee. "Deep Learning Recommendation System for Course Learning Outcomes Assessment." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 1491–78. http://dx.doi.org/10.5373/jardcs/v11sp10/20192993.

Full text
10

Akgül, İsmail, and Yıldız Aydın. "Object Recognition with Deep Learning and Machine Learning Methods." NWSA Academic Journals 17, no. 4 (October 29, 2022): 54–61. http://dx.doi.org/10.12739/nwsa.2022.17.4.2a0189.

Full text

Dissertations / Theses on the topic "Deep learning"

1

Dufourq, Emmanuel. "Evolutionary deep learning." Doctoral thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/30357.

Full text
Abstract:
The primary objective of this thesis is to investigate whether evolutionary concepts can improve the performance, speed and convenience of algorithms in various active areas of machine learning research. Deep neural networks are exhibiting an explosion in the number of parameters that need to be trained, as well as the number of permutations of possible network architectures and hyper-parameters. There is little guidance on how to choose these, and brute-force experimentation is prohibitively time-consuming. We show that evolutionary algorithms can help tame this explosion of freedom, by developing an algorithm that robustly evolves near-optimal deep neural network architectures and hyper-parameters across a wide range of image and sentiment classification problems. We further develop an algorithm that automatically determines whether a given data science problem is of classification or regression type, successfully choosing the correct problem type with more than 95% accuracy. Together these algorithms show that a great deal of the current "art" in the design of deep learning networks, and in the job of the data scientist, can be automated. Having discussed the general problem of optimising deep learning networks, the thesis moves on to a specific application: the automated extraction of human sentiment from text and images of human faces. Our results reveal that our approach is able to outperform several public and/or commercial text sentiment analysis algorithms using an evolutionary algorithm that learned to encode and extend sentiment lexicons. A second analysis looked at using evolutionary algorithms to estimate text sentiment while simultaneously compressing text data. An extensive analysis of twelve sentiment datasets reveals that accurate compression is possible with only 3.3% loss in classification accuracy even at 75% compression of text size, which is useful in environments where data volumes are a problem.
Finally, the thesis presents improvements to automated sentiment analysis of human faces to identify emotion, an area where there has been a tremendous amount of progress using convolutional neural networks. We provide a comprehensive critique of past work, highlight recommendations and list some open, unanswered questions in facial expression recognition using convolutional neural networks. One serious challenge when implementing such networks for facial expression recognition is the large number of trainable parameters which results in long training times. We propose a novel method based on evolutionary algorithms, to reduce the number of trainable parameters whilst simultaneously retaining classification performance, and in some cases achieving superior performance. We are robustly able to reduce the number of parameters on average by 95% with no loss in classification accuracy. Overall our analyses show that evolutionary algorithms are a valuable addition to machine learning in the deep learning era: automating, compressing and/or improving results significantly, depending on the desired goal.
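The evolution of architectures and hyper-parameters described in this abstract can be sketched in a few lines. The search space, the synthetic fitness stand-in, and the population settings below are illustrative assumptions, not the algorithm developed in the thesis; in practice, the fitness function would train a network and return validation accuracy.

```python
# Minimal sketch of evolutionary hyper-parameter search (illustrative only).
import random

random.seed(0)

# Hypothetical search space: learning rate, hidden units, dropout.
def random_genome():
    return {
        "lr": 10 ** random.uniform(-4, -1),
        "hidden": random.choice([32, 64, 128, 256]),
        "dropout": random.uniform(0.0, 0.5),
    }

def mutate(genome):
    # Re-sample one gene at random.
    child = dict(genome)
    key = random.choice(list(child))
    if key == "lr":
        child["lr"] = 10 ** random.uniform(-4, -1)
    elif key == "hidden":
        child["hidden"] = random.choice([32, 64, 128, 256])
    else:
        child["dropout"] = random.uniform(0.0, 0.5)
    return child

# Stand-in fitness with a known optimum; a real run would train and
# validate a network here.
def fitness(g):
    return -(abs(g["lr"] - 0.01) * 10
             + abs(g["hidden"] - 128) / 128
             + abs(g["dropout"] - 0.2))

population = [random_genome() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                  # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(population, key=fitness)
print(best)
```

Real systems of this kind add crossover and evolve structural genes (layer counts, activation types) as well, but the select-mutate-replace loop above is the core of the approach.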
2

He, Fengxiang. "Theoretical Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25674.

Full text
Abstract:
Deep learning has long been criticised as a black-box model for lacking sound theoretical explanation. During the PhD course, I explore and establish theoretical foundations for deep learning. In this thesis, I present my contributions positioned upon existing literature: (1) analysing the generalizability of the neural networks with residual connections via complexity and capacity-based hypothesis complexity measures; (2) modeling stochastic gradient descent (SGD) by stochastic differential equations (SDEs) and their dynamics, and further characterizing the generalizability of deep learning; (3) understanding the geometrical structures of the loss landscape that drives the trajectories of the dynamic systems, which sheds light in reconciling the over-representation and excellent generalizability of deep learning; and (4) discovering the interplay between generalization, privacy preservation, and adversarial robustness, which have seen rising concerns in deep learning deployment.
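The SDE modelling of SGD mentioned in contribution (2) is commonly formalised as follows; this is the standard continuous-time approximation from the literature, not necessarily the thesis's exact formulation:

```latex
% SGD with learning rate \eta and minibatch gradient-noise
% covariance \Sigma(\theta), viewed as an Itô SDE:
\mathrm{d}\theta_t = -\nabla L(\theta_t)\,\mathrm{d}t
  + \sqrt{\eta}\,\Sigma(\theta_t)^{1/2}\,\mathrm{d}W_t
```

Analysing the dynamics of this SDE (its drift toward flat minima and its stationary distribution) is what links the optimiser's trajectory to generalization bounds.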
3

Fraccaroli, Michele. "Explainable Deep Learning." Doctoral thesis, Università degli studi di Ferrara, 2023. https://hdl.handle.net/11392/2503729.

Full text
Abstract:
The great success that Machine and Deep Learning have achieved in areas that are strategic for our society, such as industry, defence, medicine, etc., has led more and more organizations to invest in and explore the use of this technology. Machine Learning and Deep Learning algorithms and learned models can now be found in almost every area of our lives, from phones to smart home appliances to the cars we drive. It can therefore be said that this pervasive technology is now in touch with our lives, and we have to deal with it. This is why eXplainable Artificial Intelligence, or XAI, was born: one of the research trends currently in vogue in the fields of Deep Learning and Artificial Intelligence. The idea behind this line of research is to make and/or design new Deep Learning algorithms so that they are interpretable and comprehensible to humans. This necessity is due precisely to the fact that neural networks, the mathematical model underlying Deep Learning, act like a black box, making the internal reasoning they carry out to reach a decision incomprehensible and untrustworthy to humans. As we are delegating more and more important decisions to these mathematical models, it is very important to be able to understand the motivations that lead these models to make certain decisions, because we have integrated them into the most delicate processes of our society, such as medical diagnosis, autonomous driving or legal processes. The work presented in this thesis consists in studying and testing Deep Learning algorithms integrated with symbolic Artificial Intelligence techniques. This integration has a twofold purpose: to make the models more powerful, enabling them to carry out reasoning or constraining their behaviour in complex situations, and to make them interpretable.
The thesis focuses on two macro topics: the explanations obtained through neuro-symbolic integration and the exploitation of explanations to make the Deep Learning algorithms more capable or intelligent. The neuro-symbolic integration was addressed twice, by experimenting with the integration of symbolic algorithms with neural networks. A first approach was to create a system to guide the training of the networks themselves in order to find the best combination of hyper-parameters to automate the design of these networks. This is done by integrating neural networks with Probabilistic Logic Programming (PLP). This integration makes it possible to exploit probabilistic rules tuned by the behaviour of the networks during the training phase or inherited from the experience of experts in the field. These rules are triggered when a problem occurs during network training. This generates an explanation of what was done to improve the training once a particular issue was identified. A second approach was to make probabilistic logic systems cooperate with neural networks for medical diagnosis on heterogeneous data sources. The second topic addressed in this thesis concerns the exploitation of explanations. In particular, the explanations one can obtain from neural networks are used in order to create attention modules that help in constraining and improving the performance of neural networks. All works developed during the PhD and described in this thesis have led to the publications listed in Chapter 14.2.
4

Halle, Alex, and Alexander Hasse. "Topologieoptimierung mittels Deep Learning." Technische Universität Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A34343.

Full text
Abstract:
Topology optimization is the search for an optimal component geometry for a given application. For complex problems, topology optimization can demand considerable time and computing capacity because of the high level of detail involved. These drawbacks of topology optimization are to be reduced by means of deep learning, so that topology optimization can serve the design engineer as an aid that delivers results within seconds. Deep learning is the extension of artificial neural networks, with which patterns or rules of behavior can be learned. The aim is thus to solve topology optimization, hitherto computed numerically, with a deep learning approach. To this end, approaches, a computation scheme, and first conclusions are presented and discussed.
5

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Full text
Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transiting from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performances for object and scene images. The learned dictionaries are diverse and non-redundant. The speed of inference is also high. 
From this, a further optimization is performed for the subsequent pooling step. This is done by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area.
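The differentiable pooling parameterization mentioned in this abstract can be illustrated by a generic construction that interpolates between average and max pooling via a sharpness parameter; this is a common such parameterization, shown here as a sketch, not necessarily the one proposed in the thesis.

```python
# Soft pooling over local descriptor codes: beta = 0 gives average
# pooling, large beta approaches max pooling, and the whole operation
# is differentiable in beta and in the codes (so it can be trained
# by backpropagation).
import numpy as np

def soft_pool(codes, beta):
    """Pool (n_descriptors, dict_size) activations into one vector."""
    w = np.exp(beta * codes)
    w /= w.sum(axis=0, keepdims=True)   # per-dictionary-word weights
    return (w * codes).sum(axis=0)

codes = np.array([[0.1, 0.9],
                  [0.5, 0.2],
                  [0.3, 0.4]])

avg = soft_pool(codes, beta=0.0)        # equals codes.mean(axis=0)
sharp = soft_pool(codes, beta=50.0)     # close to codes.max(axis=0)
print(avg, sharp)
```

Because the pooling operation is smooth, its gradient with respect to the codes and the sharpness can flow through the backpropagation pass, which is the property the thesis exploits.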
6

Geirsson, Gunnlaugur. "Deep learning exotic derivatives." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-430410.

Full text
Abstract:
Monte Carlo methods in derivative pricing are computationally expensive, in particular for evaluating a model's partial derivatives with respect to its inputs. This research proposes the use of deep learning to approximate such valuation models for highly exotic derivatives, using automatic differentiation to evaluate input sensitivities. Deep learning models are trained to approximate Phoenix Autocall valuation using a proprietary model used by Svenska Handelsbanken AB. Models are trained on large datasets of low-accuracy (10^4 simulations) Monte Carlo data, successfully learning the true model with an average error of 0.1% on validation data generated by 10^8 simulations. A specific model parametrisation is proposed for 2-day valuation only, to be recalibrated interday using transfer learning. Automatic differentiation approximates sensitivity to (normalised) underlying asset prices with a mean relative error generally below 1.6%. The overall error when predicting sensitivity to implied volatility is found to lie within 10%–40%. Near-identical results are found by finite difference and automatic differentiation in both cases. Automatic differentiation is not successful at capturing sensitivity to interday contract change in value, though errors of 8%–25% are achieved by finite difference. Model recalibration by transfer learning proves to converge over 15 times faster and with up to 14% lower relative error than training from random initialisation. The results show that deep learning models can efficiently learn Monte Carlo valuation and that they can be quickly recalibrated by transfer learning. The deep learning model gradient computed by automatic differentiation proves a good approximation of the true model sensitivities. Future research proposals include studying optimised recalibration schedules, using training data generated by single Monte Carlo price paths, and studying additional parameters and contracts.
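The cross-check described here, differentiating a learned surrogate and comparing against finite differences, can be illustrated with a toy model. The tiny fixed network below stands in for a trained pricing surrogate (the Handelsbanken model itself is proprietary), and the gradient is computed by hand via the chain rule, exactly what an autodiff framework would return.

```python
# Toy surrogate: one hidden tanh layer mapping normalised market
# inputs to a price. Input sensitivities are computed analytically
# (as autodiff would) and checked against central finite differences.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(8, 3))
b1 = rng.normal(size=8)
w2 = rng.normal(size=8)

def price(x):
    return w2 @ np.tanh(W1 @ x + b1)

def price_grad(x):
    # Chain rule: d/dx [w2 . tanh(W1 x + b1)] = (w2 * (1 - h^2)) W1
    h = np.tanh(W1 @ x + b1)
    return (w2 * (1 - h ** 2)) @ W1

x = np.array([1.0, 0.2, 1.0])       # e.g. normalised spot, vol, maturity
g_exact = price_grad(x)

eps = 1e-5                           # central finite differences
g_fd = np.array([
    (price(x + eps * e) - price(x - eps * e)) / (2 * eps)
    for e in np.eye(3)
])

print(np.max(np.abs(g_exact - g_fd)))   # tiny: finite-difference error only
```

On a real surrogate the same comparison quantifies how well the network's gradient approximates the true model sensitivities, which is the validation the abstract reports.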
7

Wülfing, Jan. "Stable deep reinforcement learning." Supervised by Martin Riedmiller. Freiburg: Universität, 2019. http://d-nb.info/1204826188/34.

Full text
8

White, Martin. "Deep Learning Software Repositories." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1516639667.

Full text
Abstract:
Bridging the abstraction gap between artifacts and concepts is the essence of software engineering (SE) research problems. SE researchers regularly use machine learning to bridge this gap, but there are three fundamental issues with traditional applications of machine learning in SE research. Traditional applications are too reliant on labeled data. They are too reliant on human intuition, and they are not capable of learning expressive yet efficient internal representations. Ultimately, SE research needs approaches that can automatically learn representations of massive, heterogeneous, datasets in situ, apply the learned features to a particular task and possibly transfer knowledge from task to task. Improvements in both computational power and the amount of memory in modern computer architectures have enabled new approaches to canonical machine learning tasks. Specifically, these architectural advances have enabled machines that are capable of learning deep, compositional representations of massive data depots. The rise of deep learning has ushered in tremendous advances in several fields. Given the complexity of software repositories, we presume deep learning has the potential to usher in new analytical frameworks and methodologies for SE research and the practical applications it reaches. This dissertation examines and enables deep learning algorithms in different SE contexts. We demonstrate that deep learners significantly outperform state-of-the-practice software language models at code suggestion on a Java corpus. Further, these deep learners for code suggestion automatically learn how to represent lexical elements. We use these representations to transmute source code into structures for detecting similar code fragments at different levels of granularity—without declaring features for how the source code is to be represented. 
Then we use our learning-based framework for encoding fragments to intelligently select and adapt statements in a codebase for automated program repair. In our work on code suggestion, code clone detection, and automated program repair, everything for representing lexical elements and code fragments is mined from the source code repository. Indeed, our work aims to move SE research from the art of feature engineering to the science of automated discovery.
9

Sun, Haozhe. "Modularity in deep learning." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG090.

Full text
Abstract:
This Ph.D. thesis is dedicated to enhancing the efficiency of Deep Learning by leveraging the principle of modularity. It contains several main contributions: a literature survey on modularity in Deep Learning; the introduction of OmniPrint and Meta-Album, tools that facilitate the investigation of data modularity; case studies examining the effects of episodic few-shot learning, an instance of data modularity; a modular evaluation mechanism named LTU for assessing privacy risks; and the method RRR for reusing pre-trained modular models to create more compact versions. Modularity, which involves decomposing an entity into sub-entities, is a prevalent concept across various disciplines. This thesis examines modularity across three axes of Deep Learning: data, task, and model. OmniPrint and Meta-Album assist in benchmarking modular models and in exploring the impacts of data modularity. LTU ensures the reliability of the privacy assessment. RRR significantly enhances the utilization efficiency of pre-trained modular models. Collectively, this thesis bridges the modularity principle with Deep Learning and underscores its advantages in selected fields of Deep Learning, contributing to more resource-efficient Artificial Intelligence.
10

Arnold, Ludovic. "Learning Deep Representations: Toward a Better Understanding of the Deep Learning Paradigm." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00842447.

Full text
Abstract:
Since 2006, deep learning algorithms which rely on deep architectures with several layers of increasingly complex representations have been able to outperform state-of-the-art methods in several settings. Deep architectures can be very efficient in terms of the number of parameters required to represent complex operations which makes them very appealing to achieve good generalization with small amounts of data. Although training deep architectures has traditionally been considered a difficult problem, a successful approach has been to employ an unsupervised layer-wise pre-training step to initialize deep supervised models. First, unsupervised learning has many benefits w.r.t. generalization because it only relies on unlabeled data which is easily found. Second, the possibility to learn representations layer by layer instead of all layers at once improves generalization further and reduces computational time. However, deep learning is a very recent approach and still poses a lot of theoretical and practical questions concerning the consistency of layer-wise learning with many layers and difficulties such as evaluating performance, performing model selection and optimizing layers. In this thesis we first discuss the limitations of the current variational justification for layer-wise learning which does not generalize well to many layers. We ask if a layer-wise method can ever be truly consistent, i.e. capable of finding an optimal deep model by training one layer at a time without knowledge of the upper layers. We find that layer-wise learning can in fact be consistent and can lead to optimal deep generative models. To do this, we introduce the Best Latent Marginal (BLM) upper bound, a new criterion which represents the maximum log-likelihood of a deep generative model where the upper layers are unspecified. We prove that maximizing this criterion for each layer leads to an optimal deep architecture, provided the rest of the training goes well. 
Although this criterion cannot be computed exactly, we show that it can be maximized effectively by auto-encoders when the encoder part of the model is allowed to be as rich as possible. This gives a new justification for stacking models trained to reproduce their input and yields better results than the state-of-the-art variational approach. Additionally, we give a tractable approximation of the BLM upper-bound and show that it can accurately estimate the final log-likelihood of models. Taking advantage of these theoretical advances, we propose a new method for performing layer-wise model selection in deep architectures, and a new criterion to assess whether adding more layers is warranted. As for the difficulty of training layers, we also study the impact of metrics and parametrization on the commonly used gradient descent procedure for log-likelihood maximization. We show that gradient descent is implicitly linked with the metric of the underlying space and that the Euclidean metric may often be an unsuitable choice as it introduces a dependence on parametrization and can lead to a breach of symmetry. To mitigate this problem, we study the benefits of the natural gradient and show that it can restore symmetry, regrettably at a high computational cost. We thus propose that a centered parametrization may alleviate the problem with almost no computational overhead.
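The greedy layer-wise pre-training strategy this abstract builds on can be illustrated with a small sketch: each layer is trained as a stand-alone autoencoder on the codes produced by the layers below it, and the learned encoders are then stacked. This is an illustrative toy (tied-weight linear autoencoders trained by plain gradient descent on reconstruction error), not the thesis's BLM criterion; the function names and hyperparameters are ours.

```python
import numpy as np

def train_autoencoder(X, hidden_dim, lr=0.02, epochs=300, seed=0):
    """Fit one tied-weight linear autoencoder by gradient descent on
    the mean squared reconstruction error 0.5/n * ||X W W^T - X||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden_dim))
    for _ in range(epochs):
        H = X @ W             # encode
        E = H @ W.T - X       # decode with tied weights; reconstruction error
        # gradient of the loss above with respect to W
        grad = ((X.T @ E + E.T @ X) @ W) / n
        W -= lr * grad
    return W

def pretrain_stack(X, layer_dims):
    """Greedy layer-wise pre-training: train each layer on the codes
    of the layers below it, one layer at a time, then stack encoders."""
    weights, H = [], X
    for h in layer_dims:
        W = train_autoencoder(H, h)
        weights.append(W)
        H = H @ W             # these codes are the next layer's input
    return weights
```

In a real pipeline the stacked weights would then initialize a deep supervised model for fine-tuning; the thesis's contribution is a criterion (the BLM upper bound) for judging each layer without seeing the layers above, which this simple reconstruction objective only approximates.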

Books on the topic "Deep learning"

1

Saefken, Benjamin, Alexander Silbersdorff, and Christoph Weisser, eds. Learning deep. Göttingen: Göttingen University Press, 2020. http://dx.doi.org/10.17875/gup2020-1338.

2

Bishop, Christopher M., and Hugh Bishop. Deep Learning. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-45468-4.

3

Kruse, René-Marcel, Benjamin Säfken, Alexander Silbersdorff, and Christoph Weisser, eds. Learning Deep Textwork. Göttingen: Göttingen University Press, 2021. http://dx.doi.org/10.17875/gup2021-1608.

4

Rodriguez, Andres. Deep Learning Systems. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-031-01769-8.

5

Fergus, Paul, and Carl Chalmers. Applied Deep Learning. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04420-5.

6

Calin, Ovidiu. Deep Learning Architectures. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3.

7

El-Amir, Hisham, and Mahmoud Hamdy. Deep Learning Pipeline. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5349-6.

8

Matsushita, Kayo, ed. Deep Active Learning. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-5660-4.

9

Michelucci, Umberto. Applied Deep Learning. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3790-8.

10

Moons, Bert, Daniel Bankman, and Marian Verhelst. Embedded Deep Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-99223-5.


Book chapters on the topic "Deep learning"

1

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Recurrent Neural Networks." In Deep Learning, 157–83. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-7.

2

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Generative Models." In Deep Learning, 209–25. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-9.

3

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Intel OpenVino: A Must-Know Deep Learning Toolkit." In Deep Learning, 245–60. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-11.

4

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Machine Learning: The Fundamentals." In Deep Learning, 29–64. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-3.

5

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Interview Questions and Answers." In Deep Learning, 261–88. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-12.

6

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "The Deep Learning Framework." In Deep Learning, 65–79. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-4.

7

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "CNN Architectures: An Evolution." In Deep Learning, 121–55. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-6.

8

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Introduction to Deep Learning." In Deep Learning, 1–11. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-1.

9

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Autoencoders." In Deep Learning, 185–207. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-8.

10

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "The Tools and the Prerequisites." In Deep Learning, 13–28. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-2.


Conference papers on the topic "Deep learning"

1

"DEEP-ML 2019 Program Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00007.

2

"DEEP-ML 2019 Organizing Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00006.

3

"Keynote Abstracts." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00008.

4

"[Title page i]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00001.

5

"[Title page iii]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00002.

6

"[Copyright notice]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00003.

7

"Table of contents." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00004.

8

"Message from the DEEP-ML 2019 Chairs." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00005.

9

Kaskavalci, Halil Can, and Sezer Goren. "A Deep Learning Based Distributed Smart Surveillance Architecture using Edge and Cloud Computing." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00009.

10

Lee, Kyu Beom, and Hyu Soung Shin. "An Application of a Deep Learning Algorithm for Automatic Detection of Unexpected Accidents Under Bad CCTV Monitoring Conditions in Tunnels." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00010.


Reports on the topic "Deep learning"

1

Catanach, Thomas, and Jed Duersch. Efficient Generalizable Deep Learning. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1760400.

2

Dell, Melissa. Deep Learning for Economists. Cambridge, MA: National Bureau of Economic Research, August 2024. http://dx.doi.org/10.3386/w32768.

3

Groh, Micah. NOvA Reconstruction using Deep Learning. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1462092.

4

Geiss, Andrew, Joseph Hardin, Sam Silva, William Jr., Adam Varble, and Jiwen Fan. Deep Learning for Ensemble Forecasting. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769692.

5

Harris, James, Shannon Kinkead, Dylan Fox, and Yang Ho. Continual Learning for Pattern Recognizers using Neurogenesis Deep Learning. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1855019.

6

Draelos, Timothy John, Nadine E. Miner, Christopher C. Lamb, Craig Michael Vineyard, Kristofor David Carlson, Conrad D. James, and James Bradley Aimone. Neurogenesis Deep Learning: Extending deep networks to accommodate new classes. Office of Scientific and Technical Information (OSTI), December 2016. http://dx.doi.org/10.2172/1505351.

7

Balaji, Praveen. Detecting Stellar Streams through Deep Learning. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1637622.

8

Li, Li. Deep Learning for Hydro-Biogeochemistry Processes. Office of Scientific and Technical Information (OSTI), March 2021. http://dx.doi.org/10.2172/1769693.

9

Eydenberg, Michael, Lisa Batsch-Smith, Charles Bice, Logan Blakely, Michael Bynum, Fani Boukouvala, Anya Castillo, et al. Resilience Enhancements through Deep Learning Yields. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1890044.

10

Singh, Rahul. Reconstructing material microstructures using deep learning. Ames (Iowa): Iowa State University, January 2019. http://dx.doi.org/10.31274/cc-20240624-1194.
