Dissertations on the topic "Apprentissage profonds"
Cite a source in APA, MLA, Chicago, Harvard and other citation styles
Consult the top 50 dissertations for research on the topic "Apprentissage profonds".
Next to each work in the list of references, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read an online abstract of the work, if the relevant parameters are present in its metadata.
Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.
Franceschi, Jean-Yves. „Apprentissage de représentations et modèles génératifs profonds dans les systèmes dynamiques“. Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS014.
The recent rise of deep learning has been motivated by numerous scientific breakthroughs, particularly regarding representation learning and generative modeling. However, most of these achievements have been obtained on image or text data, whose evolution through time remains challenging for existing methods. Given their importance for autonomous systems to adapt in a constantly evolving environment, these challenges have been actively investigated in a growing body of work. In this thesis, we follow this line of work and study several aspects of temporality and dynamical systems in deep unsupervised representation learning and generative modeling. Firstly, we present a general-purpose deep unsupervised representation learning method for time series tackling scalability and adaptivity issues arising in practical applications. We then further study in a second part representation learning for sequences by focusing on structured and stochastic spatiotemporal data: videos and physical phenomena. We show in this context that performant temporal generative prediction models help to uncover meaningful and disentangled representations, and conversely. We highlight to this end the crucial role of differential equations in the modeling and embedding of these natural sequences within sequential generative models. Finally, we more broadly analyze in a third part a popular class of generative models, generative adversarial networks, under the scope of dynamical systems. We study the evolution of the involved neural networks with respect to their training time by describing it with a differential equation, allowing us to gain a novel understanding of this generative model.
Bietti, Alberto. „Méthodes à noyaux pour les réseaux convolutionnels profonds“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM051.
The increased availability of large amounts of data, from images in social networks, speech waveforms from mobile devices, and large text corpora, to genomic and medical data, has led to a surge of machine learning techniques. Such methods exploit statistical patterns in these large datasets for making accurate predictions on new data. In recent years, deep learning systems have emerged as a remarkably successful class of machine learning algorithms, which rely on gradient-based methods for training multi-layer models that process data in a hierarchical manner. These methods have been particularly successful in tasks where the data consists of natural signals such as images or audio; this includes visual recognition, object detection or segmentation, and speech recognition. For such tasks, deep learning methods often yield the best known empirical performance; yet, the high dimensionality of the data and large number of parameters of these models make them challenging to understand theoretically. Their success is often attributed in part to their ability to exploit useful structure in natural signals, such as local stationarity or invariance, for instance through choices of network architectures with convolution and pooling operations. However, such properties are still poorly understood from a theoretical standpoint, leading to a growing gap between the theory and practice of machine learning. This thesis is aimed towards bridging this gap, by studying spaces of functions which arise from given network architectures, with a focus on the convolutional case. Our study relies on kernel methods, by considering reproducing kernel Hilbert spaces (RKHSs) associated to certain kernels that are constructed hierarchically based on a given architecture. This allows us to precisely study smoothness, invariance, stability to deformations, and approximation properties of functions in the RKHS. These representation properties are also linked with optimization questions when training deep networks with gradient methods in some over-parameterized regimes where such kernels arise. They also suggest new practical regularization strategies for obtaining better generalization performance on small datasets, and state-of-the-art performance for adversarial robustness on image tasks.
Lucas, Thomas. „Modèles génératifs profonds : sur-généralisation et abandon de mode“. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM049.
This dissertation explores the topic of generative modelling of natural images, which is the task of fitting a data generating distribution. Such models can be used to generate artificial data resembling the true data, or to compress images. Latent variable models, which are at the core of our contributions, seek to capture the main factors of variation of an image into a variable that can be manipulated. In particular we build on two successful latent variable generative models, the generative adversarial network (GAN) and the variational autoencoder (VAE). Recently GANs significantly improved the quality of images generated by deep models, obtaining very compelling samples. Unfortunately these models struggle to capture all the modes of the original distribution, i.e. they do not cover the full variability of the dataset. Conversely, likelihood-based models such as VAEs typically cover the full variety of the data well and provide an objective measure of coverage. However these models produce samples of inferior visual quality that are more easily distinguished from real ones. The work presented in this thesis strives for the best of both worlds: to obtain compelling samples while modelling the full support of the distribution. To achieve that, we focus on i) the optimisation problems used and ii) practical model limitations that hinder performance. The first contribution of this manuscript is a deep generative model that encodes global image structure into latent variables, built on the VAE, and autoregressively models low-level detail. We propose a training procedure relying on an auxiliary loss function to control what information is captured by the latent variables and what information is left to an autoregressive decoder. Unlike previous approaches to such hybrid models, ours does not need to restrict the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables. The second contribution builds on the standard GAN model, which trains a discriminator network to provide feedback to a generative network. The discriminator usually assesses the quality of individual samples, which makes it hard to evaluate the variability of the data. Instead we propose to feed the discriminator with batches that mix both true and fake samples, and train it to predict the ratio of true samples in the batch. These batches work as approximations of the distribution of generated images and allow the discriminator to approximate distributional statistics. We introduce an architecture that is well suited to solve this problem efficiently, and show experimentally that our approach reduces mode collapse in GANs on two synthetic datasets, and obtains good results on the CIFAR10 and CelebA datasets. The mutual shortcomings of VAEs and GANs can in principle be addressed by training hybrid models that use both types of objective. In our third contribution, we show that usual parametric assumptions made in VAEs induce a conflict between them, leading to lackluster performance of hybrid models. We propose a solution based on deep invertible transformations, which trains a feature space in which usual assumptions can be made without harm. Our approach provides likelihood computations in image space while being able to take advantage of adversarial training. It obtains GAN-like samples that are competitive with fully adversarial models while improving likelihood scores over existing hybrid models at the time of publication, which is a significant advancement.
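To make the batch-ratio idea concrete, here is a minimal NumPy sketch of mixing real and generated samples and regressing the true-sample ratio. The mean-pooled linear scorer below is a toy stand-in, not the permutation-invariant architecture proposed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_mixed_batch(real, fake, batch_size):
        """Mix real and fake samples into one batch; the target is the true-sample ratio."""
        n_real = rng.integers(0, batch_size + 1)
        idx_r = rng.choice(len(real), n_real, replace=True)
        idx_f = rng.choice(len(fake), batch_size - n_real, replace=True)
        batch = np.concatenate([real[idx_r], fake[idx_f]], axis=0)
        rng.shuffle(batch)                      # order must not reveal the ratio
        return batch, n_real / batch_size

    # Toy data standing in for real images and generator outputs (flattened).
    real_data = rng.normal(1.0, 1.0, size=(1000, 16))
    fake_data = rng.normal(0.0, 1.0, size=(1000, 16))

    # Toy "batch discriminator": mean-pool the batch, then a linear head with a sigmoid.
    w, b = rng.normal(size=16), 0.0
    for step in range(200):
        batch, target_ratio = make_mixed_batch(real_data, fake_data, batch_size=32)
        pooled = batch.mean(axis=0)             # permutation-invariant batch statistic
        pred = 1.0 / (1.0 + np.exp(-(pooled @ w + b)))   # predicted ratio in [0, 1]
        err = pred - target_ratio
        grad = err * pred * (1 - pred)          # squared-error loss through the sigmoid
        w -= 0.1 * grad * pooled
        b -= 0.1 * grad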
Walker, Emmanuelle Le Ray Anne. „Réflexions sur le développement des concepts chez les jeunes sourds profonds“. [S.l.] : [s.n.], 2007. http://castore.univ-nantes.fr/castore/GetOAIRef?idDoc=19576.
Medrouk, Indira Lisa. „Réseaux profonds pour la classification des opinions multilingue“. Electronic Thesis or Diss., Paris 8, 2018. http://www.theses.fr/2018PA080081.
In the era of social networks, where everyone can claim to be a content producer, the growing interest in research and industry is an indisputable fact for the opinion mining domain. This thesis mainly addresses an inherent characteristic of the Web, reflecting its globalized and multilingual character. To address the multilingual opinion mining issue, the proposed model is inspired by the process through which young children acquire simultaneous languages with equal intensity. The corpus-based input is raw, used without any pre-processing, translation, annotation or additional knowledge features. For the machine learning approach, we use two different deep neural networks. The evaluation of the proposed model was carried out on corpora composed of four different languages, namely French, English, Greek and Arabic, to emphasize the ability of a deep learning model to establish the sentiment polarity of reviews and topic classification in a multilingual environment. The various experiments, combining corpus size variations for bilingual and quadrilingual groupings of languages and presented to our models without additional modules, have shown that, much as children's bilingual competence development is linked to the quality and quantity of their immersion in the linguistic context, the network learns better in a rich and varied environment. As part of the opinion classification problem, the second part of the thesis presents a comparative study of two deep network models: convolutional networks and recurrent networks. Our contribution consists in demonstrating their complementarity according to their combinations in a multilingual context.
Blot, Michaël. „Étude de l'apprentissage et de la généralisation des réseaux profonds en classification d'images“. Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS412.
Artificial intelligence has experienced a resurgence in recent years. This is due to the growing ability to collect and store a considerable amount of digitized data. These huge databases allow machine learning algorithms to address certain tasks through supervised learning. Among digitized data, images remain predominant in the modern environment, and huge datasets have been created. Moreover, image classification has allowed the development of previously neglected models: deep neural networks, or deep learning. This family of algorithms shows a great ability to fit datasets perfectly, even very large ones. Their ability to generalize remains largely misunderstood, but convolutional networks are today the undisputed state of the art. From both a research and an application point of view, the demands on deep learning will keep growing, requiring an effort to push the performance of neural networks to the maximum of their capacity. This is the purpose of our research, whose contributions are presented in this thesis. We first looked at the issue of training and considered accelerating it through distributed methods. We then studied network architectures in order to improve them without increasing their complexity. Finally, we paid particular attention to the regularization of network training. We studied a regularization criterion based on information theory that we deployed in two different ways.
Langlois, Julien. „Vision industrielle et réseaux de neurones profonds : application au dévracage de pièces plastiques industrielles“. Thesis, Nantes, 2019. http://www.theses.fr/2019NANT4010/document.
This work presents a pose estimation method from an RGB image of industrial parts placed in a bin. First, neural networks are used to segment a certain number of parts in the scene. After applying an object mask to the original image, a second network infers the local depth of the part. Both the local pixel coordinates of the part and the local depth are used in two networks estimating the orientation of the object as a quaternion and its translation on the Z axis. Finally, a registration module working on the back-projected local depth and the 3D model of the part refines the pose inferred from the previous networks. To deal with the lack of annotated real images in an industrial context, a data generation process is proposed. By using various light parameters, the versatility of the dataset makes it possible to anticipate multiple challenging exploitation scenarios within an industrial environment.
Ogier du Terrail, Jean. „Réseaux de neurones convolutionnels profonds pour la détection de petits véhicules en imagerie aérienne“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC276/document.
The following manuscript is an attempt to tackle the problem of small vehicle detection in vertical aerial imagery through the use of deep learning algorithms. The specificities of the problem allow the use of innovative techniques leveraging the invariance and self-similarities of vehicles (automobiles, planes) seen from the sky. We start with a thorough study of single-shot detectors. Building on that, we examine the effect of adding multiple stages to the detection decision process. Finally, we try to come to grips with the domain adaptation problem in detection through the generation of better-looking synthetic data and its use in the training process of these detectors.
Lathuiliere, Stéphane. „Modèles profonds de régression et applications à la vision par ordinateur pour l'interaction homme-robot“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM026/document.
In order to interact with humans, robots need to perform basic perception tasks such as face detection, human pose estimation or speech recognition. However, in order to have a natural interaction with humans, the robot needs to model high-level concepts such as speech turns, focus of attention or interactions between participants in a conversation. In this manuscript, we follow a top-down approach. On the one hand, we present two high-level methods that model collective human behaviors. We propose a model able to recognize activities that are performed jointly by different groups of people, such as queueing or talking. Our approach handles the general case where several group activities can occur simultaneously and in sequence. On the other hand, we introduce a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy in the context of human-robot interaction. The robot is able to learn to focus its attention on groups of people from its own audio-visual experiences. Second, we study in detail deep learning approaches for regression problems. Regression problems are crucial in the context of human-robot interaction in order to obtain reliable information about head and body poses or the age of the persons facing the robot. Consequently, these contributions are very general and can be applied in many different contexts. First, we propose to couple a Gaussian mixture of linear inverse regressions with a convolutional neural network. Second, we introduce a Gaussian-uniform mixture model in order to make the training algorithm more robust to noisy annotations. Finally, we perform a large-scale study to measure the impact of several architecture choices and extract practical recommendations when using deep learning approaches in regression tasks. For each of these contributions, a strong experimental validation has been performed with real-time experiments on the NAO robot or on large and diverse datasets.
Carbajal, Guillaume. „Apprentissage profond bout-en-bout pour le rehaussement de la parole“. Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0017.
This PhD falls within the development of hands-free telecommunication systems, more specifically smart speakers in domestic environments. The user interacts with another speaker at a far-end point and is typically a few meters away from this kind of system. The microphones are likely to capture sounds of the environment which are added to the user's voice, such as background noise, acoustic echo and reverberation. These types of distortion degrade speech quality, intelligibility and listening comfort for the far-end speaker, and must be reduced. Filtering methods can reduce each of these types of distortion individually. Reducing all of them implies combining the corresponding filtering methods. As these methods interact with each other, which can deteriorate the user's speech, they must be jointly optimized. First of all, we introduce an acoustic echo reduction approach which combines an echo cancellation filter with a residual echo postfilter designed to adapt to the echo cancellation filter. To do so, we propose to estimate the postfilter coefficients using the short-term spectra of multiple known signals, including the output of the echo cancellation filter, as inputs to a neural network. We show that this approach improves the performance and the robustness of the postfilter in terms of echo reduction, while limiting speech degradation, on several scenarios in real conditions. Secondly, we describe a joint approach for multichannel reduction of echo, reverberation and noise. We propose to simultaneously model the target speech and the undesired residual signals after echo cancellation and dereverberation in a probabilistic framework, and to jointly represent their short-term spectra by means of a recurrent neural network. We develop a block-coordinate ascent algorithm to update the echo cancellation and dereverberation filters, as well as the postfilter that reduces the undesired residual signals. We evaluate our approach on real recordings in different conditions. We show that it improves speech quality and the reduction of echo, reverberation and noise compared to a cascade of individual filtering methods and another joint reduction approach. Finally, we present an online version of our approach which is suitable for time-varying acoustic conditions. We evaluate the perceptual quality achieved on real examples where the user moves during the conversation.
Blier, Léonard. „Some Principled Methods for Deep Reinforcement Learning“. Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG040.
This thesis develops and studies some principled methods for deep learning (DL) and deep reinforcement learning (RL). In Part II, we study the efficiency of DL models in the context of the Minimum Description Length principle, which formalizes Occam's razor and holds that a good model of data is a model that is good at losslessly compressing the data, including the cost of describing the model itself. Deep neural networks might seem to go against this principle given the large number of parameters to be encoded. Surprisingly, we demonstrate experimentally the ability of deep neural networks to compress the training data even when accounting for parameter encoding, hence showing that DL approaches are well principled from this information theory viewpoint. In Part III, we tackle two limitations of standard approaches in DL and RL, and develop principled methods, improving robustness empirically. The first one concerns optimisation of deep learning models with SGD, and the cost of finding the optimal learning rate, which prevents using a new method out of the box without hyperparameter tuning. We design a principled optimisation method for DL, 'All Learning Rates At Once' (Alrao): each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude. Perhaps surprisingly, Alrao performs close to SGD with an optimally tuned learning rate, for various architectures and problems. The second one tackles near continuous-time RL environments (such as robotics or control environments): we show that time discretization (the number of actions per second) is a critical factor, and that empirically, Q-learning-based approaches collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performance over a wide range of time discretizations, and confirm this robustness empirically. The main part of this thesis (Part IV) studies the Successor States Operator in RL, and how it can improve the sample efficiency of policy evaluation. In an environment with a very sparse reward, learning the value function is a hard problem. At the beginning of training, no learning will occur until a reward is observed. This highlights the fact that not all the observed information is used. Leveraging this information might lead to better sample efficiency. The Successor States Operator is an object that expresses the value functions of all possible reward functions for a given, fixed policy. Learning the successor states operator can be done without reward signals, and can extract information from every observed transition, illustrating an unsupervised reinforcement learning approach. We offer a formal treatment of these objects in both finite and continuous spaces with function approximators. We present several learning algorithms and associated results. Similarly to the value function, the successor states operator satisfies a Bellman equation. Additionally, it also satisfies two other fixed-point equations: a backward Bellman equation and a Bellman-Newton equation, expressing path compositionality in the Markov process. These new relations allow us to generalize from observed trajectories in several ways, potentially leading to more sample efficiency. Each of these equations leads to corresponding algorithms for any function approximator, such as neural networks. Finally, in Part V, the study of the successor states operator and its algorithms allows us to derive unbiased methods in the setting of multi-goal RL, dealing with the issue of extremely sparse rewards. We additionally show that the popular Hindsight Experience Replay algorithm, known to be biased, is actually unbiased in the large class of deterministic environments.
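A rough NumPy sketch of the 'All Learning Rates At Once' (Alrao) idea described above: each unit keeps its own learning rate, sampled log-uniformly over several orders of magnitude. The toy least-squares model below is a placeholder, and the averaging over several output layers used in the actual method is omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy linear "layer": 8 output units, 4 inputs; a least-squares regression problem.
    W = rng.normal(size=(8, 4))
    X = rng.normal(size=(256, 4))
    Y = X @ rng.normal(size=(4, 8))             # targets from a hidden linear map

    # Alrao-style: each output unit gets its own learning rate, sampled log-uniformly
    # over several orders of magnitude (here 1e-4 to 1e-1).
    unit_lr = 10.0 ** rng.uniform(-4, -1, size=(8, 1))

    for step in range(500):
        pred = X @ W.T                          # (256, 8) predictions
        grad = (pred - Y).T @ X / len(X)        # gradient of the mean squared error w.r.t. W
        W -= unit_lr * grad                     # per-unit learning rates broadcast over rows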
Hardy, Corentin. „Contribution au développement de l’apprentissage profond dans les systèmes distribués“. Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S020/document.
Deep learning enables the development of a growing number of services. However, it requires large training databases and a lot of computing power. In order to reduce the costs of this deep learning, we propose a distributed computing setup to enable collaborative learning. Future users can participate with their devices and their data without moving private data to datacenters. We propose methods to train deep neural networks in this distributed system context.
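As a toy illustration of collaborative learning without centralising raw data, the sketch below averages locally trained model parameters across devices. This plain parameter-averaging round is a generic stand-in, not the specific distributed protocols studied in the thesis.

    import numpy as np

    rng = np.random.default_rng(2)

    def local_step(w, X, y, lr=0.1, epochs=5):
        """A few local gradient steps on a device's private data (linear regression toy)."""
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(X)
            w = w - lr * grad
        return w

    # Three "devices", each holding private data drawn from the same hidden model.
    w_true = np.array([1.0, -2.0, 0.5])
    devices = []
    for _ in range(3):
        X = rng.normal(size=(50, 3))
        devices.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

    w_global = np.zeros(3)
    for round_ in range(20):
        local_models = [local_step(w_global.copy(), X, y) for X, y in devices]
        w_global = np.mean(local_models, axis=0)   # only model parameters are exchanged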
Mercadier, Yves. „Classification automatique de textes par réseaux de neurones profonds : application au domaine de la santé“. Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS068.
This PhD focuses on the analysis of textual data in the health domain and in particular on the supervised multi-class classification of data from the biomedical literature and social media. One of the major difficulties when exploring such data with supervised learning methods is to have a sufficient number of data sets for model training. Indeed, it is generally necessary to manually label the data before performing the learning step. The large size of the data sets makes this labelling task very expensive, so it should be reduced with semi-automatic systems. In this context, active learning, in which the oracle intervenes to choose the best examples to label, is promising. The intuition is as follows: by choosing the examples smartly rather than randomly, the models should improve with less effort for the oracle and therefore at lower cost (i.e. with fewer annotated examples). In this PhD, we will evaluate different active learning approaches combined with recent deep learning models. In addition, when only a small annotated data set is available, one possibility of improvement is to artificially increase the data quantity during the training phase, by automatically creating new data from existing data. More precisely, we inject knowledge by taking into account the invariant properties of the data with respect to certain transformations. The augmented data can thus cover an unexplored input space, avoid overfitting and improve the generalization of the model. In this PhD, we will also propose and evaluate a new approach for textual data augmentation. These two contributions will be evaluated on different textual datasets in the medical domain.
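A generic pool-based active learning loop of the kind discussed above, shown here with margin-based uncertainty sampling and a deliberately simple nearest-mean classifier standing in for the deep models evaluated in the thesis.

    import numpy as np

    rng = np.random.default_rng(3)

    def train(X, y):
        """Placeholder classifier: class-conditional means, returning a predict_proba function."""
        means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        def predict_proba(Q):
            d = np.stack([-np.linalg.norm(Q - m, axis=1) for m in means.values()], axis=1)
            e = np.exp(d - d.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)
        return predict_proba

    # Toy two-class pool: a small labelled seed and a large unlabelled pool.
    X = np.concatenate([rng.normal(-1, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
    y = np.array([0] * 200 + [1] * 200)
    labelled = list(range(5)) + list(range(200, 205))     # seed containing both classes
    pool = [i for i in range(len(X)) if i not in labelled]

    for _ in range(20):                           # 20 oracle queries
        proba = train(X[labelled], y[labelled])(X[pool])
        margins = np.abs(proba[:, 0] - proba[:, 1])
        query = pool[int(np.argmin(margins))]     # least confident example
        labelled.append(query)                    # the oracle provides y[query]
        pool.remove(query)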
Ducoffe, Mélanie. „Active learning et visualisation des données d'apprentissage pour les réseaux de neurones profonds“. Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4115/document.
Our work is presented in three separate parts which can be read independently. Firstly, we propose three active learning heuristics that scale to deep neural networks. We scale query by committee, an ensemble active learning method: we speed up the computation time by sampling a committee of deep networks through dropout applied to the trained model. Another direction is margin-based active learning. We propose to use an adversarial perturbation to measure the distance to the margin. We also establish theoretical bounds on the convergence of our adversarial active learning strategy for linear classifiers. Some inherent properties of adversarial examples open up a promising opportunity to transfer active learning data from one network to another. We also derive an active learning heuristic that scales to both CNNs and RNNs by selecting the unlabeled data that minimize the variational free energy. Secondly, we focus our work on how to speed up the computation of Wasserstein distances. We propose to approximate Wasserstein distances using a Siamese architecture. From another point of view, we demonstrate the submodular properties of Wasserstein medoids and how to apply them in active learning. Finally, we provide new visualization tools for explaining the predictions of CNNs on a text. First, we hijack an active learning strategy to confront the relevance of the sentences selected with active learning against state-of-the-art phraseology techniques. This work helps to understand the hierarchy of the linguistic knowledge acquired during the training of CNNs on NLP tasks. Second, we take advantage of deconvolution networks for image analysis to present a new perspective on text analysis to the linguistic community, which we call Text Deconvolution Saliency.
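To illustrate the margin criterion above, the sketch below scores unlabelled points by the smallest perturbation that flips a linear classifier's decision; for a linear model this distance is exact, whereas the thesis estimates it on deep networks with adversarial attacks.

    import numpy as np

    rng = np.random.default_rng(4)

    # A fixed linear classifier standing in for the trained deep network.
    w, b = np.array([2.0, -1.0, 0.5]), 0.3

    def adversarial_distance(x):
        """Smallest L2 perturbation that flips the sign of w.x + b (exact for a linear model)."""
        return abs(x @ w + b) / np.linalg.norm(w)

    unlabelled = rng.normal(size=(500, 3))
    scores = np.array([adversarial_distance(x) for x in unlabelled])
    to_label = np.argsort(scores)[:10]    # query the 10 points closest to the decision boundary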
Chen, Mickaël. „Learning with weak supervision using deep generative networks“. Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS024.
Many successes of deep learning rely on the availability of massive annotated datasets that can be exploited by supervised algorithms. Obtaining those labels at a large scale, however, can be difficult, or even impossible in many situations. Designing methods that are less dependent on annotations is therefore a major research topic, and many semi-supervised and weakly supervised methods have been proposed. Meanwhile, the recent introduction of deep generative networks provided deep learning methods with the ability to manipulate complex distributions, allowing for breakthroughs in tasks such as image editing and domain adaptation. In this thesis, we explore how these new tools can be useful to further alleviate the need for annotations. Firstly, we tackle the task of performing stochastic predictions. It consists in designing systems for structured prediction that take into account the variability in possible outputs. We propose, in this context, two models. The first one performs predictions on multi-view data with missing views, and the second one predicts possible futures of a video sequence. Then, we study adversarial methods to learn a factorized latent space, in a setting with two explanatory factors where only one of them is annotated. We propose models that aim to uncover semantically consistent latent representations for those factors. One model is applied to the conditional generation of motion capture data, and another one to multi-view data. Finally, we focus on the task of image segmentation, which is of crucial importance in computer vision. Building on previously explored ideas, we propose a model for object segmentation that is entirely unsupervised.
Bisot, Victor. „Apprentissage de représentations pour l'analyse de scènes sonores“. Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0016.
This thesis work focuses on the computational analysis of environmental sound scenes and events. The objective of such tasks is to automatically extract information about the context in which a sound has been recorded. Interest in this area of research has been increasing rapidly in the last few years, leading to a constant growth in the number of works and proposed approaches. We explore and contribute to the main families of approaches to sound scene and event analysis, going from feature engineering to deep learning. Our work is centered on representation learning techniques based on nonnegative matrix factorization (NMF), which are particularly suited to analyse multi-source environments such as acoustic scenes. As a first approach, we propose a combination of image processing features with the goal of confirming that spectrograms contain enough information to discriminate sound scenes and events. From there, we leave the world of feature engineering to move towards automatically learning the features. The first step we take in that direction is to study the usefulness of matrix factorization for unsupervised feature learning techniques, especially by relying on variants of NMF. Several of the compared approaches indeed allow us to outperform feature engineering approaches on such tasks. Next, we propose to improve the learned representations by introducing the TNMF model, a supervised variant of NMF. The proposed TNMF models and algorithms are based on jointly learning nonnegative dictionaries and classifiers by minimising a target classification cost. The last part of our work highlights the links and the compatibility between NMF and certain deep neural network systems by proposing and adapting neural network architectures to the use of NMF as an input representation. The proposed models allow us to reach state-of-the-art performance on scene classification and overlapping event detection tasks. Finally, we explore the possibility of jointly learning NMF and neural network parameters, grouping the different stages of our systems into one optimisation problem.
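One plausible way to write a task-driven (supervised) NMF objective of the kind described above, with the nonnegative dictionary D, activations H and classifier W learned jointly; the exact formulation and regularisers used in the thesis may differ:

    \min_{D \ge 0,\ H \ge 0,\ W} \; \|V - DH\|_F^2 \;+\; \lambda\, \mathcal{L}\!\left(Y, W H\right) \;+\; \mu \|W\|_F^2

where V is the nonnegative time-frequency representation, Y the class labels, \mathcal{L} a classification loss, and \lambda, \mu trade-off weights.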
Cherti, Mehdi. „Deep generative neural networks for novelty generation : a foundational framework, metrics and experiments“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS029/document.
In recent years, significant advances in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good quality images of known classes): any generated object that is a priori unknown is considered as a failure mode (Salimans et al., 2016) or as spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies. The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to re-represent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation. We propose several other alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. Experiments show that sparsity, noise level, and restricting the capacity of the net eliminate novelty, and that models that are better at recognizing novelty are also good at generating novelty.
Cîrstea, Bogdan-Ionut. „Contribution à la reconnaissance de l'écriture manuscrite en utilisant des réseaux de neurones profonds et le calcul quantique“. Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0059.
In this thesis, we provide several contributions from the fields of deep learning and quantum computation to handwriting recognition. We begin by integrating some of the more recent deep learning techniques (such as dropout, batch normalization and different activation functions) into convolutional neural networks and show improved performance on the well-known MNIST dataset. We then propose Tied Spatial Transformer Networks (TSTNs), a variant of Spatial Transformer Networks (STNs) with shared weights, as well as different training variants of the TSTN. We show improved performance on a distorted variant of the MNIST dataset. In another work, we compare the performance of Associative Long Short-Term Memory (ALSTM), a recently introduced recurrent neural network (RNN) architecture, against Long Short-Term Memory (LSTM) on the Arabic handwriting recognition IFN-ENIT dataset. Finally, we propose a neural network architecture, which we name a hybrid classical-quantum neural network, which can integrate and take advantage of quantum computing. While our simulations are performed using classical computation (on a GPU), our results on the Fashion-MNIST dataset suggest that exponential improvements in computational requirements might be achievable, especially for recurrent neural networks trained for sequence classification.
Simonnet, Edwin. „Réseaux de neurones profonds appliqués à la compréhension de la parole“. Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1006/document.
This thesis is part of the emergence of deep learning and focuses on spoken language understanding, understood as the automatic extraction and representation of the meaning conveyed by the words in a spoken utterance. We study a semantic concept tagging task used in a spoken dialogue system and evaluated on the French corpus MEDIA. For the past decade, neural models have emerged in many natural language processing tasks thanks to algorithmic advances and powerful computing tools such as graphics processors. Many obstacles make the understanding task complex, such as the difficult interpretation of automatic speech transcriptions, as many errors are introduced by the automatic recognition process upstream of the comprehension module. We present a state of the art describing spoken language understanding and then supervised machine learning methods to solve it, starting with classical systems and finishing with deep learning techniques. The contributions are then presented along three axes. First, we develop an efficient neural architecture consisting of a bidirectional recurrent encoder-decoder network with an attention mechanism. Then we study the management of automatic recognition errors and solutions to limit their impact on our performance. Finally, we consider a disambiguation of the comprehension task making the systems more efficient.
Nerlikar, Vivek. „Digital Twin in Structural Health Monitoring for Aerospace using Machine Learning“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG080.
Modern engineering systems and structures often utilize a combination of materials such as metals, concrete, and composites, carefully optimized to achieve superior performance in their designated functions while also minimizing overall economic costs. Engineering structures are primarily subjected to dynamic loads during their operational life. Manufacturing issues and/or perpetual dynamic operation often lead to changes in a system that adversely impact its present and/or future performance; these changes can be defined as damage. The identification of damage is a crucial process that ensures the smooth functioning of equipment or structures throughout their life cycle. It alerts the maintenance department to take the necessary measures for repair. Structural Health Monitoring (SHM) is a potential damage identification technique which has attracted increasing attention in the last few decades. It has the capability to overcome the downsides of traditional Non-Destructive Testing (NDT). In this thesis, we use the Ultrasonic Guided Waves (GW) technique for SHM. However, the sensitivity of GW to Environmental and Operational Conditions (EOC) modifies the response signals and masks defect signatures. This makes it difficult to isolate defect signatures using methods such as baseline comparison, where damage-free GW signals are compared with current acquisitions. Baseline-free methods can be an alternative, but they are limited to simple geometries. Moreover, the high sensitivity of GW to EOC and measurement noise poses a challenge in modelling GW through physics-based models. The recent advancements in Machine Learning (ML) have created a new modelling axis, including data-driven modelling and physics-based modelling, often referred to as Scientific ML. Data-driven modelling is extremely helpful to model the phenomena that cannot be explained by physics, allowing for the isolation of subtle defect signatures and the development of robust damage detection procedures. However, ML-based methods require more data to capture all the information and to enhance the generalization capability of ML models. SHM, on the other hand, tends to generate mostly damage-free data, as damage episodes seldom occur. This particular gap can be filled through physics-based modeling. In this approach, the modeling capabilities of physics-based models are combined with measurement data to explain, using ML, phenomena that would otherwise remain unexplained. The primary objective of this thesis is to develop a data-driven damage detection methodology for identifying defects in composite panels. This methodology is designed for monitoring similar structures, such as wind or jet turbine blades, without requiring pristine (damage-free) states of all structures, thereby avoiding the need for direct baseline comparisons. The second goal is to develop a physics-based ML model for integrating physics-based simulations with experimental data within the context of a Digital Twin. The development of this physics-based ML model involves multi-fidelity modeling and surrogate modeling. To validate this model, we utilize an experimental and simulation dataset of an aluminium panel. Furthermore, the developed model is employed to generate realistic GW responses at the required damage size and sensor path. These generated signals are then used to compute a Probability of Detection (POD) curve, assessing the reliability of a GW-based SHM system.
Caubriere, Antoine. „Du signal au concept : réseaux de neurones profonds appliqués à la compréhension de la parole“. Thesis, Le Mans, 2021. https://tel.archives-ouvertes.fr/tel-03177996.
This thesis is part of deep learning applied to spoken language understanding. Until now, this task was performed through a pipeline of components implementing, for example, a speech recognition system, then different natural language processing steps, before involving a language understanding system on the enriched automatic transcriptions. Recently, work in the field of speech recognition has shown that it is possible to produce a sequence of words directly from the acoustic signal. Within the framework of this thesis, the aim is to exploit these advances and extend them to design a system composed of a single neural model fully optimized for the spoken language understanding task, from signal to concept. First, we present a state of the art describing the principles of deep learning, speech recognition, and speech understanding. Then, we describe the contributions made along three main axes. We propose a first system answering the problem posed and apply it to a named entity recognition task. Then, we propose a transfer learning strategy guided by a curriculum learning approach. This strategy relies on the generic knowledge learned to improve the performance of a neural system on a semantic concept extraction task. Then, we perform an analysis of the errors produced by our approach, while studying the functioning of the proposed neural architecture. Finally, we set up a confidence measure to evaluate the reliability of a hypothesis produced by our system.
Mohammad, Noshine. „Exploration des modèles d’apprentissage statistique profonds couplés à la spectrométrie de masse pour améliorer la surveillance épidémiologique des maladies infectieuses“. Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS617.
Matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) mass spectrometry is a rapid and robust diagnostic method for microbiology, enabling microorganism species to be identified on the basis of their protein fingerprint in the mass spectrum. However, the clinical and epidemiological applications of this technology remain limited by the bioinformatics tools available. This thesis focuses on the application of deep statistical learning models to MALDI-TOF mass spectrometry data for the purpose of epidemiological surveillance of infectious diseases. This includes the monitoring of fungal and mycobacterial epidemics in hospitals, as well as the characterisation of Anopheles vectors of malaria. We examined the impact of sample preparation methods and of the computational analysis of mass spectra on improving learning, in order to identify epidemic fungal clones in hospitals and prevent their spread. Our study showed that the convolutional neural network (CNN) has a high potential for identifying the spectra of specific Candida parapsilosis clones, achieving 94% accuracy by optimising essential parameters (culture media, growth time, and the spectra acquisition machine). To detect epidemic Aspergillus flavus clones in multicentre hospital cohorts, the CNN was also able to classify most isolates correctly, achieving accuracy of over 93% for two of the three instruments used. We have also shown that by using optimised deep learning models, such as a CNN and a temporal convolutional network (TCN), we can predict the age of mosquitoes with an average accuracy of two days (best mean absolute error: 1.74 days). This approach will enable us to effectively monitor the age structure of wild Anopheles mosquito populations and target them more effectively with control measures. Finally, we demonstrated the performance of various neural network architectures and mass spectra representation methods, using different cohorts covering various epidemiological issues such as age prediction, identification of closely related species of Anopheles mosquitoes, distinction between closely related subspecies, and detection of resistance in Mycobacterium abscessus. The study showed that, of the different models evaluated, the best performing models, such as TCNs and a recurrent neural network, were able to achieve notable results, reaching an identification accuracy of 93% for closely related Anopheles species and 95% for Mycobacterium abscessus subspecies. In addition, the use of CNNs and TCNs enabled the detection of resistant strains in Mycobacterium abscessus with an accuracy of over 97%. This thesis highlights the use of deep learning in conjunction with MALDI-TOF, a hitherto little explored approach. With the widespread availability of MALDI-TOF instruments and the possibility of coupling analyses to online applications using deep learning, this approach looks promising, opening the way to other epidemiological applications beyond simple species identification, such as detecting epidemiological clusters of drug-resistant microorganisms, monitoring the transmission of bacterial and fungal diseases, and evaluating the effectiveness of targeted vector control interventions.
Dvornik, Mikita. „Learning with Limited Annotated Data for Visual Understanding“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM050.
The ability of deep learning methods to excel in computer vision highly depends on the amount of annotated data available for training. For some tasks, annotation may be too costly and labor intensive, thus becoming the main obstacle to better accuracy. Algorithms that learn from data automatically, without human supervision, perform substantially worse than their fully supervised counterparts. Thus, there is a strong motivation to work on effective methods for learning with limited annotations. This thesis proposes to exploit prior knowledge about the task and develops more effective solutions for scene understanding and few-shot image classification. The main challenges of scene understanding include object detection, semantic and instance segmentation. All these tasks aim at recognizing and localizing objects, at region level or at the more precise pixel level, which makes the annotation process difficult. The first contribution of this manuscript is a Convolutional Neural Network (CNN) that performs both object detection and semantic segmentation. We design a specialized network architecture that is trained to solve both problems in one forward pass and operates in real time. Thanks to the multi-task training procedure, both tasks benefit from each other in terms of accuracy, with no extra labeled data. The second contribution introduces a new technique for data augmentation, i.e., artificially increasing the amount of training data. It aims at creating new scenes by copy-pasting objects from one image to another within a given dataset. Placing an object in the right context was found to be crucial in order to improve scene understanding performance. We propose to model visual context explicitly using a CNN that discovers correlations between object categories and their typical neighborhood, and then proposes realistic locations for augmentation. Overall, pasting objects in "right" locations allows us to improve object detection and segmentation performance, with higher gains in limited annotation scenarios. For some problems, the data is extremely scarce, and an algorithm has to learn new concepts from a handful of examples. Few-shot classification consists of learning a predictive model that is able to effectively adapt to a new class, given only a few annotated samples. While most current methods concentrate on the adaptation mechanism, few works have tackled the problem of scarce training data explicitly. In our third contribution, we show that by addressing the fundamental high-variance issue of few-shot learning classifiers, it is possible to significantly outperform more sophisticated existing techniques. Our approach consists of designing an ensemble of deep networks to leverage the variance of the classifiers, and introducing new strategies to encourage the networks to cooperate while encouraging prediction diversity. By matching different networks' outputs on similar input images, we improve model accuracy and robustness compared to classical ensemble training. Moreover, a single network obtained by distillation shows performance similar to that of the full ensemble and yields state-of-the-art results with no computational overhead at test time.
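A bare-bones sketch of the copy-paste augmentation step described above: a masked object crop is blended into a host image at the highest-scoring candidate location. The context scores are random placeholders here; in the thesis they come from a CNN modelling object/neighbourhood co-occurrence.

    import numpy as np

    rng = np.random.default_rng(5)

    def paste(host, obj, mask, top, left):
        """Blend a masked object crop into the host image at (top, left)."""
        out = host.copy()
        h, w = obj.shape[:2]
        region = out[top:top + h, left:left + w]
        out[top:top + h, left:left + w] = np.where(mask[..., None], obj, region)
        return out

    host = rng.integers(0, 255, size=(128, 128, 3), dtype=np.uint8)
    obj = rng.integers(0, 255, size=(32, 32, 3), dtype=np.uint8)
    mask = np.zeros((32, 32), dtype=bool)
    mask[8:24, 8:24] = True

    # Hypothetical context scores over a coarse grid of candidate locations.
    candidates = [(r, c) for r in range(0, 96, 16) for c in range(0, 96, 16)]
    context_scores = rng.random(len(candidates))
    top, left = candidates[int(np.argmax(context_scores))]
    augmented = paste(host, obj, mask, top, left)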
Goh, Hanlin. „Apprentissage de Représentations Visuelles Profondes“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00948376.
Rossi, Simone. „Improving Scalability and Inference in Probabilistic Deep Models“. Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice to solve machine learning problems or to aid decision-making processes. At the same time, deep learning is generally not equipped with the ability to accurately quantify the uncertainty of its predictions, thus making these models less suitable for risk-critical applications. A possible solution to address this problem is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and requires approximations. Despite the huge advancements in the last few years, there is still a long way to go to make these approaches widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions to improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models where inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters and we show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part of the thesis we also study the over-regularization effect of the variational objective on over-parametrized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this solve the over-regularization effect of VI, but it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While being an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies based (i) on the functional interpretation of neural networks and (ii) on a scalable procedure to perform model selection on the prior hyper-parameters, akin to maximization of the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (Gaussian processes) and we study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by the inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which conventionally would have otherwise been point-estimated.
Hocquet, Guillaume. „Class Incremental Continual Learning in Deep Neural Networks“. Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.
We are interested in the problem of continual learning of artificial neural networks in the case where the data are available for only one class at a time. To address the problem of catastrophic forgetting that restrains learning performance in these conditions, we propose an approach based on the representation of the data of a class by a normal distribution. The transformations associated with these representations are performed using invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that models its features. In this setting, predicting the class of a sample corresponds to identifying the network that best fits the sample. The advantage of such an approach is that once a network is trained, it is no longer necessary to update it later, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments performed on various datasets and show that our approach performs favorably compared to the state of the art. Subsequently, we propose to optimize our approach by reducing its memory footprint through factorization of the network parameters. It is then possible to significantly reduce the storage cost of these networks with a limited performance loss. Finally, we also study strategies to produce efficient feature extractor models for continual learning and we show their relevance compared to the networks traditionally used for continual learning.
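A toy version of the classification rule described above: each class gets its own density model trained only on that class's data, and a sample is assigned to the model under which it is most likely. A diagonal Gaussian stands in for the invertible network below.

    import numpy as np

    rng = np.random.default_rng(6)

    class ClassDensity:
        """One model per class, fitted on that class's data only (diagonal Gaussian toy)."""
        def fit(self, X):
            self.mu, self.var = X.mean(axis=0), X.var(axis=0) + 1e-6
            return self
        def log_prob(self, X):
            return -0.5 * (((X - self.mu) ** 2 / self.var) + np.log(2 * np.pi * self.var)).sum(axis=1)

    # Classes arrive one at a time; each gets an independent model, none is ever revisited.
    models = []
    for c in range(3):
        Xc = rng.normal(loc=2.0 * c, scale=1.0, size=(200, 4))
        models.append(ClassDensity().fit(Xc))

    # Prediction: the class whose model best fits the sample.
    X_test = rng.normal(loc=2.0, scale=1.0, size=(10, 4))
    pred = np.argmax(np.stack([m.log_prob(X_test) for m in models], axis=1), axis=1)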
Sanabria, Rosas Laura Melissa. „Détection et caractérisation des moments saillants pour les résumés automatiques“. Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4104.
Video content is present in an ever-increasing number of fields, both scientific and commercial. Sports, particularly soccer, is one of the industries that has invested the most in the field of video analytics, due to the massive popularity of the game. Although several state-of-the-art methods rely on handcrafted heuristics to generate summaries of soccer games, they have proven that multiple modalities help detect the best actions of the game. On the other hand, the field of general-purpose video summarization has advanced rapidly, offering several deep learning approaches. However, many of them are based on properties that do not hold for sports videos. Video content has been for many years the main source for automatic tasks in soccer, but the data that register all the events happening on the field have lately become very important in sports analytics, since these event data provide richer information and require less processing. Considering that in automatic sports summarization the goal is not only to show the most important actions of the game, but also to evoke as much emotion as human editors do, we propose a method to generate the summary of a soccer match video exploiting the event metadata of the entire match and the content broadcast on TV. We have designed an architecture introducing (1) a Multiple Instance Learning method that takes into account the sequential dependency among events, (2) a hierarchical multimodal attention layer that grasps the importance of each event in an action, and (3) a method to automatically generate multiple summaries of a soccer match by sampling from a ranking distribution, providing multiple candidate summaries which are similar enough but with relevant variability to provide different options to the final user. We also introduce solutions to some additional challenges in the field of sports summarization. Based on the internal signals of an attention model that uses event data as input, we propose a method to analyze the interpretability of our model through a graphical representation of actions, where the x-axis of the graph represents the sequence of events and the y-axis is the weight value learned by the attention layer. This new representation provides the editor with a new tool containing meaningful information to decide whether an action is important. We also propose the use of keyword spotting and boosting techniques to detect every time a player is mentioned by the commentators, as a solution to the missing event data.
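A minimal sketch of the attention weighting used to score events inside an action (a single attention level only; the embeddings and scoring vector are random placeholders rather than the learned ones from the thesis).

    import numpy as np

    rng = np.random.default_rng(7)

    def attention_pool(event_embeddings, v):
        """Softmax-weighted sum of event embeddings; the weights are the interpretability signal."""
        scores = event_embeddings @ v
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ event_embeddings, weights

    events = rng.normal(size=(12, 32))      # 12 events in one action, 32-d embeddings
    v = rng.normal(size=32)                 # a learned scoring vector in the real model
    action_repr, event_weights = attention_pool(events, v)
    # event_weights corresponds to the values plotted on the y-axis of the action graphs.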
Vialatte, Jean-Charles. „Convolution et apprentissage profond sur graphes“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0118/document.
Der volle Inhalt der Quelle: Convolutional neural networks have proven to be the deep learning model that performs best on regularly structured datasets such as images or sounds. However, they cannot be applied to datasets with an irregular structure (e.g. sensor networks, citation networks, MRIs). In this thesis, we develop an algebraic theory of convolutions on irregular domains. We construct a family of convolutions based on group actions (or, more generally, groupoid actions) that act on the vertex domain and whose properties depend on the edges. With the help of these convolutions, we propose extensions of convolutional neural networks to graph domains. Our research leads us to propose a generic formulation of the propagation between layers, which we call the neural contraction. From this formulation, we derive many novel neural network models that can be applied to irregular domains. Through benchmarks and experiments, we show that they attain state-of-the-art performance, and surpass it in some cases.
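For readers unfamiliar with convolutions on irregular domains, the sketch below shows a standard GCN-style propagation rule on a small graph; it is a generic baseline, not the group- or groupoid-action construction proposed in the thesis.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    This is the standard GCN propagation rule, used here only as a point
    of comparison; the thesis builds convolutions from group(oid) actions.
    """
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)            # ReLU

# Toy graph: 4 nodes on a path, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, X, W).shape)  # (4, 2)
```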
Moradi, Fard Maziar. „Apprentissage de représentations de données dans un apprentissage non-supervisé“. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM053.
Der volle Inhalt der Quelle: Due to the great impact of deep learning on a variety of machine learning fields, its ability to improve clustering approaches has recently been investigated. At first, deep learning approaches (mostly autoencoders) were used to reduce the dimensionality of the original space and to remove possible noise (and also to learn new data representations). Clustering approaches that utilize deep learning in this way are called Deep Clustering. This thesis focuses on developing Deep Clustering models that can be used for different types of data (e.g., images, text). We first propose a Deep k-means (DKM) algorithm in which learning data representations (through a deep autoencoder) and cluster representatives (through k-means) is performed jointly. The results of our DKM approach indicate that this framework is able to outperform similar Deep Clustering algorithms. Indeed, our proposed framework is able to smoothly backpropagate the loss function error through all learnable variables. Moreover, we propose two frameworks, named SD2C and PCD2C, which are able to integrate, respectively, seed words and pairwise constraints into end-to-end Deep Clustering frameworks. By utilizing such frameworks, users can see their needs reflected in the clustering. Finally, the results obtained from these frameworks indicate their ability to produce more tailored results.
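The joint objective described for DKM can be sketched as follows in PyTorch: a reconstruction term plus a differentiable soft assignment of embeddings to learned cluster representatives. The exact parameterization and annealing schedule of the published DKM algorithm are not reproduced here.

```python
import torch
import torch.nn as nn

class DeepKMeans(nn.Module):
    """Joint autoencoder + clustering objective, in the spirit of DKM."""

    def __init__(self, in_dim=784, emb_dim=10, n_clusters=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        # Cluster representatives live in the embedding space and are learned.
        self.centroids = nn.Parameter(torch.randn(n_clusters, emb_dim))

    def forward(self, x, alpha=10.0, lam=0.1):
        z = self.encoder(x)
        recon = self.decoder(z)
        recon_loss = ((recon - x) ** 2).mean()
        # Soft assignments: softmax over negative squared distances,
        # so the clustering term stays differentiable end to end.
        dists = torch.cdist(z, self.centroids) ** 2
        assign = torch.softmax(-alpha * dists, dim=1)
        cluster_loss = (assign * dists).sum(dim=1).mean()
        return recon_loss + lam * cluster_loss

model = DeepKMeans()
x = torch.rand(32, 784)
loss = model(x)
loss.backward()  # gradients flow through encoder, decoder and centroids
```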
Balikas, Georgios. „Explorer et apprendre à partir de collections de textes multilingues à l'aide des modèles probabilistes latents et des réseaux profonds“. Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM054/document.
Der volle Inhalt der Quelle: Text is one of the most pervasive and persistent sources of information. Content analysis of text, in its broad sense, refers to methods for studying and retrieving information from documents. Nowadays, with ever-increasing amounts of text becoming available online in several languages and different styles, content analysis of text is of tremendous importance as it enables a variety of applications. To this end, unsupervised representation learning methods such as topic models and word embeddings constitute prominent tools. The goal of this dissertation is to study and address challenging problems in this area, focusing both on the design of novel text mining algorithms and tools, and on studying how these tools can be applied to text collections written in a single language or in several. In the first part of the thesis we focus on topic models and, more precisely, on how to incorporate prior information about text structure into such models. Topic models are built on the bag-of-words premise, and therefore words are exchangeable. While this assumption benefits the calculation of the conditional probabilities, it results in a loss of information. To overcome this limitation we propose two mechanisms that extend topic models by integrating knowledge of text structure. We assume that the documents are partitioned into thematically coherent text segments. The first mechanism assigns the same topic to the words of a segment. The second capitalizes on the properties of copulas, a tool mainly used in the fields of economics and risk management to model the joint probability density of random variables while having access only to their marginals. The second part of the thesis explores bilingual topic models for comparable corpora with explicit document alignments. Typically, a document collection for such models is in the form of comparable document pairs. The documents of a pair are written in different languages and are thematically similar. Unless they are translations of each other, the documents of a pair are only similar to some extent. Meanwhile, representative topic models assume that the documents have identical topic distributions, which is a strong and limiting assumption. To overcome it we propose novel bilingual topic models that incorporate the notion of cross-lingual similarity of the documents that constitute the pairs into their generative and inference processes. Calculating this cross-lingual document similarity is a task in itself, which we propose to address using cross-lingual word embeddings. The last part of the thesis concerns the use of word embeddings and neural networks for three text mining applications. First, we discuss polylingual document classification, where we argue that translations of a document can be used to enrich its representation. Using an autoencoder to obtain these robust document representations, we demonstrate improvements on a multi-class document classification task. Second, we explore multi-task sentiment classification of tweets, arguing that jointly training classification systems on correlated tasks can improve performance. To this end we show how to achieve state-of-the-art performance on a sentiment classification task using recurrent neural networks. The third application we explore is cross-lingual information retrieval: given a document written in one language, the task consists in retrieving the most similar documents from a pool of documents written in another language.
In this line of research, we show that adapting the transportation problem to the task of estimating document distances yields significant improvements.
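A sketch of this idea, in the spirit of Word Mover's Distance: treat each document as a uniform distribution over its word embeddings and compute an optimal-transport cost between them. The POT library is an assumed dependency, and the thesis's specific adaptation of the transportation problem is not reproduced.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed installed)

def document_distance(emb_a, emb_b):
    """Earth mover's distance between two documents seen as uniform
    distributions over their word embeddings (Word Mover's Distance style)."""
    a = np.full(len(emb_a), 1.0 / len(emb_a))         # uniform word weights
    b = np.full(len(emb_b), 1.0 / len(emb_b))
    cost = ot.dist(emb_a, emb_b, metric="euclidean")  # pairwise word costs
    return ot.emd2(a, b, cost)                        # optimal transport cost

# Toy example with random 50-d "word embeddings" for two short documents.
rng = np.random.default_rng(0)
doc_a = rng.normal(size=(5, 50))
doc_b = rng.normal(size=(7, 50))
print(document_distance(doc_a, doc_b))
```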
Katranji, Mehdi. „Apprentissage profond de la mobilité des personnes“. Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCA024.
Der volle Inhalt der Quelle: Knowledge of mobility is a major challenge for mobility-organizing authorities and for urban planning. Because there is no formal definition of human mobility, the term "people's mobility" is used throughout this manuscript. The topic is introduced through a description of the ecosystem, considering its actors and applications. Building a learning model has prerequisites: an understanding of the typologies of the available data sets, with their strengths and weaknesses. This state of the art of mobility knowledge is based on the four-step model, which has existed and been used since 1970, and ends with the renewal of methodologies in recent years. Our models of people's mobility are then presented. Their common point is the emphasis on the individual, unlike traditional approaches that take the locality as a reference. The models we propose are based on the premise that individuals' decisions are driven by their perception of the environment. The manuscript concludes with a study of deep learning methods for restricted Boltzmann machines. After a state of the art of this family of models, we look for strategies to make them viable in real-world applications. This last chapter constitutes our main theoretical contribution, improving the robustness and performance of these models.
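Since the last chapter deals with restricted Boltzmann machines, the following numpy sketch shows one contrastive-divergence (CD-1) update for a binary RBM; it is a textbook version and does not include the robustness or performance improvements proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, lr=0.01):
    """One CD-1 step for a binary RBM on a batch of visible vectors v0."""
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to the visible layer and up again.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_hid)
    # Parameter updates from the difference of correlations.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / batch
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# Toy usage: 6 visible units, 4 hidden units, batch of 8 binary vectors.
W = 0.01 * rng.normal(size=(6, 4))
b_vis, b_hid = np.zeros(6), np.zeros(4)
v0 = (rng.random((8, 6)) < 0.5).astype(float)
W, b_vis, b_hid = cd1_update(v0, W, b_vis, b_hid)
```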
Deschaintre, Valentin. „Acquisition légère de matériaux par apprentissage profond“. Thesis, Université Côte d'Azur (ComUE), 2019. http://theses.univ-cotedazur.fr/2019AZUR4078.
Der volle Inhalt der Quelle: Whether it is used for entertainment or industrial design, computer graphics is ever more present in our everyday life. Yet, reproducing a real scene's appearance in a virtual environment remains a challenging task, requiring long hours from trained artists. A good solution is the acquisition of geometry and materials directly from real-world examples, but this often comes at the cost of complex hardware and calibration processes. In this thesis, we focus on lightweight material appearance capture to simplify and accelerate the acquisition process and to solve industrial challenges such as result image resolution or calibration. Texture, highlights, and shading are some of the many visual cues that allow humans to perceive material appearance in pictures. Designing algorithms able to leverage these cues to recover spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a few images has challenged computer graphics researchers for decades. We explore the use of deep learning to tackle lightweight appearance capture and make sense of these visual cues. Once trained, our networks are capable of recovering per-pixel normals, diffuse albedo, specular albedo and specular roughness from as little as one picture of a flat surface lit by the environment or a hand-held flash. We show how our method improves its predictions with the number of input pictures, reaching high-quality reconstructions with up to 10 images --- a sweet spot between existing single-image and complex multi-image approaches --- and making it possible to capture large-scale, HD materials. We achieve this goal by introducing several innovations in training data acquisition and network design, bringing clear improvements over the state of the art in lightweight material capture.
Paumard, Marie-Morgane. „Résolution automatique de puzzles par apprentissage profond“. Thesis, CY Cergy Paris Université, 2020. http://www.theses.fr/2020CYUN1067.
Der volle Inhalt der Quelle: The objective of this thesis is to develop semantic reassembly methods in the complicated setting of heritage collections, where some blocks are eroded or missing. The reassembly of archaeological remains is an important task for heritage sciences: it improves the understanding and conservation of ancient vestiges and artifacts. However, some sets of fragments cannot be reassembled with techniques based on contour information or visual continuities. It is then necessary to extract semantic information from the fragments and to interpret it. These tasks can be performed automatically thanks to deep learning techniques coupled with a solver, i.e., a constrained decision-making algorithm. This thesis proposes two semantic reassembly methods for 2D fragments with erosion, as well as a new dataset and evaluation metrics. The first method, Deepzzle, combines a neural network with a solver. The neural network is composed of two Siamese convolutional networks trained to predict the relative position of two fragments: a 9-class classification. The solver uses Dijkstra's algorithm to maximize the joint probability. Deepzzle can handle missing and supernumerary fragments, is capable of processing about 15 fragments per puzzle, and performs 25% better than the state of the art. The second method, Alphazzle, is based on AlphaZero and single-player Monte Carlo Tree Search (MCTS). It is an iterative method that uses deep reinforcement learning: at each step, a fragment is placed on the current reassembly. Two neural networks guide MCTS: an action predictor, which uses the fragment and the current reassembly to propose a strategy, and an evaluator, which is trained to predict the quality of the future result from the current reassembly. Alphazzle takes into account the relationships between all fragments and scales to puzzles larger than those solved by Deepzzle. Moreover, Alphazzle is compatible with constraints imposed by a heritage setting: at the end of reassembly, MCTS does not access the reward, unlike AlphaZero. Indeed, the reward, which indicates whether a puzzle is well solved or not, can only be estimated by the algorithm, because only a conservator can be sure of the quality of a reassembly.
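A minimal sketch of the first stage of such a pipeline: two weight-sharing convolutional branches whose concatenated features feed a 9-way classifier over relative positions. Layer sizes are placeholders, and the Dijkstra-based solver is omitted.

```python
import torch
import torch.nn as nn

class RelativePositionNet(nn.Module):
    """Siamese CNN predicting the relative position of fragment B w.r.t. A
    (8 neighbouring positions + 1 'not adjacent' class = 9 classes)."""

    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(   # shared weights for both fragments
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, 9))

    def forward(self, frag_a, frag_b):
        feats = torch.cat([self.branch(frag_a), self.branch(frag_b)], dim=1)
        return self.classifier(feats)  # logits over the 9 relative positions

net = RelativePositionNet()
a = torch.rand(4, 3, 64, 64)   # batch of central fragments
b = torch.rand(4, 3, 64, 64)   # batch of candidate fragments
logits = net(a, b)             # shape (4, 9)
```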
Haykal, Vanessa. „Modélisation des séries temporelles par apprentissage profond“. Thesis, Tours, 2019. http://www.theses.fr/2019TOUR4019.
Der volle Inhalt der Quelle: Time series prediction is a problem that has been addressed for many years. In this thesis, we are interested in methods based on deep learning. It is well known that when the relationships between the data are temporal, they are difficult to analyze and predict accurately, due to non-linear trends and the presence of noise, particularly in financial and electrical series. In this context, we propose a new hybrid noise-reduction architecture that models the recursive error series to improve predictions. The learning process fuses a convolutional neural network (CNN) and a recurrent long short-term memory network (LSTM). This model is distinguished by its ability to capture a variety of hybrid properties globally: it can extract local signal features, learn long-term and non-linear dependencies, and offer high resistance to noise. The second contribution concerns the limitations of global approaches caused by dynamic regime switching in the signal. We present a local, unsupervised modification of our previous architecture that adjusts the results by adapting a Hidden Markov Model (HMM). Finally, we also investigate multi-resolution techniques to improve the performance of the convolutional layers, notably by using the variational mode decomposition method (VMD).
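A minimal PyTorch sketch of the CNN+LSTM combination for one-step-ahead forecasting; it only illustrates the hybrid idea, not the recursive error-series modelling, HMM adaptation or VMD extensions of the thesis.

```python
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    """1D CNN for local feature extraction + LSTM for long-term dependencies."""

    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, seq_len, 1) -> conv expects (batch, channels, seq_len)
        feats = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])   # predict the next value in the series

model = CNNLSTMForecaster()
window = torch.rand(8, 48, 1)          # 8 series, 48 past time steps each
next_value = model(window)             # shape (8, 1)
```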
Sors, Arnaud. „Apprentissage profond pour l'analyse de l'EEG continu“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS006/document.
Der volle Inhalt der Quelle: The objective of this research is to explore and develop machine learning methods for the analysis of continuous electroencephalogram (EEG). Continuous EEG is an interesting modality for the functional evaluation of cerebral state in the intensive care unit and beyond. Today its clinical use remains more limited than it could be, because interpretation is still mostly performed visually by trained experts. In this work we develop automated analysis tools based on deep neural models. The parts of this work revolve around post-anoxic coma prognostication, chosen as the pilot application. A small number of long-duration recordings were performed, and available existing data were gathered from CHU Grenoble. Different components of a semi-supervised architecture that addresses the application are imagined, developed, and validated on surrogate tasks. First, we validate the effectiveness of deep neural networks for EEG analysis from raw samples. For this we choose the supervised task of sleep stage classification from single-channel EEG. We use a convolutional neural network adapted for EEG, and we train and evaluate the system on the SHHS (Sleep Heart Health Study) dataset. This constitutes the first neural sleep scoring system at this scale (5,000 patients). Classification performance reaches or surpasses the state of the art. In real-world use for most clinical applications, the main challenge is the lack of (and difficulty of establishing) suitable annotations on patterns or short EEG segments. Available annotations are high-level (for example, clinical outcome) and therefore few. We investigate how to learn compact EEG representations in an unsupervised/semi-supervised manner. The field of unsupervised learning with deep neural networks is still young. To compare to existing work we start with image data and investigate the use of generative adversarial networks (GANs) for unsupervised adversarial representation learning. The quality and stability of different variants are evaluated. We then apply gradient-penalized Wasserstein GANs to EEG sequence generation. The system is trained on single-channel sequences from post-anoxic coma patients and is able to generate realistic synthetic sequences. We also explore and discuss original ideas for learning representations by matching distributions in the output space of representative networks. Finally, multichannel EEG signals have specificities that should be accounted for in characterization architectures. Each EEG sample is an instantaneous mixture of the activities of a number of sources. Based on this observation, we propose an analysis system made of a spatial analysis subsystem followed by a temporal analysis subsystem. The spatial analysis subsystem is an extension of source separation methods built with a neural architecture with adaptive recombination weights, i.e. weights that are not learned but depend on features of the input. We show that this architecture learns to perform Independent Component Analysis when trained on a measure of non-gaussianity. For temporal analysis, standard (shared) convolutional neural networks applied on separately recomposed channels can be used.
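A toy version of a raw-EEG sleep staging network, assuming a 125 Hz single-channel signal and 30-second epochs; layer sizes are placeholders rather than those of the model trained on SHHS.

```python
import torch
import torch.nn as nn

class SleepStageCNN(nn.Module):
    """Toy 1D CNN mapping a raw single-channel EEG epoch to 5 sleep stages."""

    def __init__(self, n_stages=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.classifier = nn.Linear(64, n_stages)

    def forward(self, x):              # x: (batch, 1, samples)
        return self.classifier(self.features(x))

# One 30 s epoch at an assumed 125 Hz sampling rate = 3750 samples.
epoch = torch.rand(2, 1, 3750)
logits = SleepStageCNN()(epoch)        # shape (2, 5)
```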
Arnez, Yagualca Fabio Alejandro. „Deep neural network uncertainty runtime monitoring for robust and safe AI-based automated navigation“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG100.
Der volle Inhalt der Quelle: Deep Neural Networks (DNNs) have revolutionized various industries in the past decade, such as highly automated vehicles and unmanned aerial vehicles. DNNs achieve notable performance improvements thanks to their effectiveness in processing complex sensory inputs and their powerful representation learning, which outperforms traditional methods across different automation tasks. Despite the impressive performance improvements introduced by DNNs, they still have significant limitations due to their complexity, opacity, and lack of interpretability. More importantly, for the scope of this thesis, DNNs are susceptible to data distribution shifts, confidence representation in DNN predictions is not straightforward, and design-time property specification and verification can become infeasible for large DNNs. While reducing errors from deep learning components is essential for building trustworthy AI-based systems that can be deployed and adopted in society, addressing the aforementioned challenges is crucial as well. This thesis proposes new methods that leverage uncertainty information to overcome these limitations and build trustworthy AI-based systems. The approach is bottom-up, starting from the component-level perspective and then moving to the system-level point of view. The use of uncertainty at the component level is presented for the data distribution shift detection task, enabling the detection of situations that may impact the reliability of a DNN component and, therefore, the behavior of an automated system. Next, the system perspective is introduced by considering a set of components in sequence, where one component consumes the predictions of another to make its own predictions. In this regard, a method to propagate uncertainty is provided so that a downstream component can account for the uncertainty in the predictions of an upstream component. Finally, a framework for dynamic risk management is proposed to cope with the uncertainties that arise along the autonomous navigation system.
Benamar, Alexandra. „Évaluation et adaptation de plongements lexicaux au domaine à travers l'exploitation de connaissances syntaxiques et sémantiques“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG035.
Der volle Inhalt der Quelle: Word embeddings have established themselves as the most popular representation in NLP. To achieve good performance, they require training on large data sets mainly from the general domain and are frequently finetuned for specialty data. However, finetuning is a resource-intensive practice and its effectiveness is controversial. In this thesis, we evaluate the use of word embedding models on specialty corpora and show that proximity between the vocabularies of the training and application data plays a major role in the representation of out-of-vocabulary terms. We observe that this is mainly due to the initial tokenization of words, and we propose a measure to compute the impact of the tokenization of words on their representation. To solve this problem, we propose two methods for injecting linguistic knowledge into representations generated by Transformers: one at the data level and the other at the model level. Our research demonstrates that adding syntactic and semantic context can improve the application of self-supervised models to specialty domains, both for vocabulary representation and for NLP tasks. The proposed methods can be used for any language with linguistic information or external knowledge available. The code used for the experiments has been published to facilitate reproducibility, and measures have been taken to limit the environmental impact by reducing the number of experiments.
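One simple proxy for the tokenization effect discussed here is the average number of subword pieces a pretrained tokenizer produces per domain term; the sketch below uses the Hugging Face tokenizers as an assumed dependency and is not the measure proposed in the thesis.

```python
from transformers import AutoTokenizer  # assumed dependency

def fragmentation(terms, model_name="bert-base-uncased"):
    """Average number of subword pieces per term: higher values mean the
    tokenizer splits the given vocabulary more aggressively."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    pieces = [len(tokenizer.tokenize(term)) for term in terms]
    return sum(pieces) / len(pieces)

# Domain terms (e.g. medical) are typically split into more pieces
# than general-domain words, which degrades their representations.
general_terms = ["house", "water", "market"]
domain_terms = ["electrocardiogram", "angioplasty", "thrombolysis"]
print(fragmentation(general_terms), fragmentation(domain_terms))
```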
Chevalier, Marion. „Résolution variable et information privilégiée pour la reconnaissance d'images“. Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066726/document.
Der volle Inhalt der Quelle: Image classification is of prominent interest for numerous visual recognition tasks, particularly vehicle recognition in airborne systems, where the images have a low resolution because of the large distance between the system and the observed scene. During the training phase, complementary data such as knowledge of the system's position or high-resolution images may be available. In our work, we focus on the task of low-resolution image classification while taking supplementary information into account during the training phase. We first show the interest of deep convolutional networks for low-resolution image recognition, in particular by proposing an architecture learned on the targeted data. We then rely on the framework of learning using privileged information to benefit from the complementary training data, here the high-resolution versions of the images. We propose two novel methods for integrating privileged information into the learning phase of neural networks. Our first model relies on these complementary data to compute an absolute difficulty level, assigning a large weight to the most easily recognized images. Our second model introduces a similarity constraint between the networks learned on each type of data. We experimentally validate our models on several application cases, especially in a fine-grained context and on a dataset containing annotation noise.
Ostertag, Cécilia. „Analyse des pathologies neuro-dégénératives par apprentissage profond“. Thesis, La Rochelle, 2022. http://www.theses.fr/2022LAROS003.
Der volle Inhalt der Quelle: Monitoring and predicting the cognitive state of a subject affected by a neuro-degenerative disorder is crucial to provide appropriate treatment as soon as possible. These patients are therefore followed for several years, as part of longitudinal medical studies. During each visit, a large quantity of data is acquired: risk factors linked to the pathology, medical imagery (MRI or PET scans, for example), cognitive test results, samples of molecules that have been identified as biomarkers, etc. These various modalities give information about the disease's progression; some of them are complementary and others can be redundant. Several deep learning models have been applied to biomedical data, notably for organ segmentation or pathology diagnosis. This PhD focuses on the design of a deep neural network model for cognitive decline prediction, using multimodal data, here both structural brain MRI images and clinical data. In this thesis we propose an architecture made of sub-modules tailored to each modality: a 3D convolutional network for the brain MRI, and fully connected layers for the quantitative and qualitative clinical data. To predict the patient's evolution, this model takes as input data from two medical visits for each patient. These visits are compared using a siamese architecture. After training and validating this model with Alzheimer's disease as our use case, we look into knowledge transfer to other neuro-degenerative pathologies, and we use transfer learning to adapt our model to Parkinson's disease. Finally, we discuss the choices made to take into account the temporal aspect of our problem, both during ground-truth creation using the long-term evolution of a cognitive score, and in the choice of pairs of visits as input instead of longer sequences.
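A minimal sketch of the overall wiring described above: a 3D convolutional branch for the MRI volume, a fully connected branch for clinical variables, and a siamese comparison of two visits. Dimensions and the prediction head are placeholders, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class VisitEncoder(nn.Module):
    """Encodes one visit: a 3D MRI volume plus a clinical feature vector."""

    def __init__(self, n_clinical=20, emb=64):
        super().__init__()
        self.mri = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        self.fuse = nn.Linear(16 + 32, emb)

    def forward(self, volume, clinical):
        return self.fuse(torch.cat([self.mri(volume), self.clinical(clinical)], dim=1))

class DeclinePredictor(nn.Module):
    """Siamese comparison of two visits to predict cognitive evolution."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = VisitEncoder()          # shared between both visits
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, visit1, visit2):
        e1 = self.encoder(*visit1)
        e2 = self.encoder(*visit2)
        return self.head(torch.cat([e1, e2], dim=1))

model = DeclinePredictor()
visit = (torch.rand(2, 1, 32, 32, 32), torch.rand(2, 20))  # (MRI, clinical)
logits = model(visit, visit)                               # shape (2, 3)
```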
Mazari, Ahmed. „Apprentissage profond pour la reconnaissance d’actions en vidéos“. Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS171.
Der volle Inhalt der Quelle: Nowadays, video content is ubiquitous due to the widespread use of the internet, smartphones, and social media. Many daily-life applications such as video surveillance and video captioning, as well as scene understanding, require sophisticated technologies to process video data. It has become crucially important to develop automatic means to analyze and interpret the large amount of available video data. In this thesis, we are interested in video action recognition, i.e. the problem of assigning action categories to sequences of videos. This can be seen as a key ingredient for building the next generation of vision systems. It is tackled with AI frameworks, mainly machine learning and deep ConvNets. Current ConvNets are increasingly deep and data-hungry, which makes their success dependent on the abundance of labeled training data. ConvNets also rely on (max or average) pooling, which reduces the dimensionality of output layers (and hence attenuates their sensitivity to the availability of labeled data); however, this process may dilute the information of upstream convolutional layers and thereby affect the discrimination power of the trained video representations, especially when the learned action categories are fine-grained.
Bertrand, Hadrien. „Optimisation d'hyper-paramètres en apprentissage profond et apprentissage par transfert : applications en imagerie médicale“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT001/document.
Der volle Inhalt der Quelle: In the last few years, deep learning has irrevocably changed the field of computer vision. Faster, giving better results, and requiring a lower degree of expertise to use than traditional computer vision methods, deep learning has become ubiquitous in every imaging application, including medical imaging. At the beginning of this thesis, there was still a strong lack of tools and understanding of how to build efficient neural networks for specific tasks. This thesis therefore first focused on hyper-parameter optimization for deep neural networks, i.e. methods for automatically finding efficient neural networks for specific tasks. The thesis includes a comparison of different methods, a performance improvement of one of these methods, Bayesian optimization, and the proposal of a new hyper-parameter optimization method combining two existing ones: Bayesian optimization and Hyperband. From there, we used these methods for medical imaging applications such as the classification of field-of-view in MRI, and the segmentation of the kidney in 3D ultrasound images across two populations of patients. This last task required the development of a new transfer learning method based on the modification of the source network by adding new geometric and intensity transformation layers. Finally, this thesis loops back to older computer vision methods, and we propose a new segmentation algorithm combining template deformation and deep learning. We show how to use a neural network to predict global and local transformations without requiring the ground truth of these transformations. The method is validated on the task of kidney segmentation in 3D ultrasound images.
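For context, the sketch below shows successive halving, the building block that Hyperband repeats with different budgets; the combination with Bayesian optimization proposed in the thesis is not shown.

```python
import random

def successive_halving(sample_config, train_and_score, n_configs=27, min_budget=1, eta=3):
    """Keep the best 1/eta configurations at each rung, multiplying their budget by eta.

    sample_config: () -> hyper-parameter dict
    train_and_score: (config, budget) -> validation score (higher is better)
    """
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        scores = [(train_and_score(c, budget), c) for c in configs]
        scores.sort(key=lambda s: s[0], reverse=True)
        configs = [c for _, c in scores[: max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

# Toy usage: "training" is a noisy function of two hyper-parameters,
# and the noise shrinks as the budget grows.
def sample_config():
    return {"lr": 10 ** random.uniform(-4, -1), "width": random.choice([32, 64, 128])}

def train_and_score(config, budget):
    return -abs(config["lr"] - 0.01) + 0.001 * config["width"] + random.gauss(0, 0.01 / budget)

best = successive_halving(sample_config, train_and_score)
print(best)
```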
Cohen-Hadria, Alice. „Estimation de descriptions musicales et sonores par apprentissage profond“. Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS607.
Der volle Inhalt der Quelle: In Music Information Retrieval (MIR) and voice processing, the use of machine learning tools has become more and more standard in the last few years. In particular, many state-of-the-art systems now rely on neural networks. In this thesis, we propose a wide overview of four different MIR and voice processing tasks, using systems built with neural networks. More precisely, we use convolutional neural networks, a class of neural networks designed for images. The first task is music structure estimation. For this task, we show how critical the choice of input representation can be when using convolutional neural networks. The second task is singing voice detection. We present how to use a voice detection system to automatically align lyrics and audio tracks. With this alignment mechanism, we have created the largest synchronized audio and lyrics data set, called DALI. Singing voice separation is the third task. For this task, we present a data augmentation strategy, a way to significantly increase the size of a training set. Finally, we tackle voice anonymization. We present an anonymization method that both obfuscates content and masks the speaker's identity, while preserving the acoustic scene.
Trabelsi, Anis. „Robustesse aux attaques en authentification digitale par apprentissage profond“. Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS580.
Der volle Inhalt der Quelle: The identity of people on the Internet is becoming a major security issue. Since the Basel agreements, banking institutions have integrated the verification of people's identity, or Know Your Customer (KYC), into their registration processes. With the dematerialization of banks, this procedure has become e-KYC or remote KYC, which works remotely through the user's smartphone. Similarly, remote identity verification has become the standard for enrollment in electronic signature tools. New regulations are emerging to secure this approach; for example, in France, the PVID framework regulates the remote acquisition of identity documents and people's faces under the eIDAS regulation. This is required because a new type of digital crime is emerging: deep identity theft. With new deep learning tools, impostors can change their appearance to look like someone else in real time. Impostors can then perform all the common actions required in a remote registration without being detected by identity verification algorithms. Today, smartphone applications, as well as tools for a more limited audience, allow impostors to easily transform their appearance in real time. There are even methods to spoof an identity based on a single image of the victim's face. The objective of this thesis is to study the vulnerabilities of remote identity authentication systems to these new attacks, in order to propose deep-learning-based solutions that make the systems more robust.
Oyallon, Edouard. „Analyzing and introducing structures in deep convolutional neural networks“. Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE060.
Der volle Inhalt der Quelle: This thesis studies empirical properties of deep convolutional neural networks, and in particular the Scattering Transform. Indeed, the theoretical analysis of the latter is hard and remains a challenge to this day: successive layers of neurons can produce complex computations, whose nature is still unknown, thanks to learning algorithms whose convergence guarantees are not well understood. However, these neural networks are outstanding tools for tackling a wide variety of difficult tasks, such as image classification or, more formally, statistical prediction. The Scattering Transform is a non-linear mathematical operator whose properties are inspired by convolutional networks. In this work, we apply it to natural images and obtain accuracies competitive with unsupervised architectures. Cascading a supervised neural network after the Scattering Transform makes it possible to compete on ImageNet2012, the largest dataset of labeled images available. An efficient GPU implementation is provided. Then, this thesis focuses on the properties of layers of neurons at various depths. We show that a progressive dimensionality reduction occurs, and we study the numerical properties of the supervised classification when we vary the hyperparameters of the network. Finally, we introduce a new class of convolutional networks, whose linear operators are structured by the symmetry groups of the classification task.
Peiffer, Elsa. „Implications des structures cérébrales profondes dans les apprentissages procéduraux“. Lyon 1, 2000. http://www.theses.fr/2000LYO1T267.
Der volle Inhalt der Quelle
Rosar, Kós Lassance Carlos Eduardo. „Graphs for deep learning representations“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0204.
Der volle Inhalt der Quelle: In recent years, deep learning methods have achieved state-of-the-art performance in a vast range of machine learning tasks, including image classification and multilingual automatic text translation. These architectures are trained to solve machine learning tasks in an end-to-end fashion. In order to reach top-tier performance, they often require a very large number of trainable parameters. This has multiple undesirable consequences, and in order to tackle these issues, it is desirable to be able to open the black boxes of deep learning architectures. Doing so is problematic, however, due to the high dimensionality of the representations and the stochasticity of the training process. In this thesis, we investigate these architectures by introducing a graph formalism based on recent advances in Graph Signal Processing (GSP). Namely, we use graphs to represent the latent spaces of deep neural networks. We show that this graph formalism allows us to answer various questions, including: ensuring generalization abilities, reducing the number of arbitrary choices in the design of the learning process, improving robustness to small perturbations added to the inputs, and reducing computational complexity.
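A small example of the kind of quantity such a graph formalism gives access to: build a k-nearest-neighbour graph over the latent representations of a layer and measure the quadratic Laplacian form of the label signal on it. This is an illustration of the general idea, not a method from the thesis.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def label_smoothness(latents, labels, k=10):
    """Quadratic Laplacian form of the one-hot label signal on a k-NN graph
    built over latent representations; lower values mean a smoother signal."""
    adj = kneighbors_graph(latents, n_neighbors=k, mode="connectivity")
    adj = 0.5 * (adj + adj.T)                     # symmetrize the graph
    degree = np.asarray(adj.sum(axis=1)).ravel()
    laplacian = np.diag(degree) - adj.toarray()
    one_hot = np.eye(labels.max() + 1)[labels]    # (n_samples, n_classes)
    return np.trace(one_hot.T @ laplacian @ one_hot)

# Toy usage: two well-separated classes should give a low smoothness value.
rng = np.random.default_rng(0)
latents = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(5, 1, (100, 16))])
labels = np.array([0] * 100 + [1] * 100)
print(label_smoothness(latents, labels))
```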
Moukari, Michel. „Estimation de profondeur à partir d'images monoculaires par apprentissage profond“. Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC211/document.
Der volle Inhalt der Quelle: Computer vision is a branch of artificial intelligence whose purpose is to enable a machine to analyze, process and understand the content of digital images. Scene understanding in particular is a major issue in computer vision. It involves a semantic and structural characterization of the image, on the one hand to describe its content and, on the other hand, to understand its geometry. However, while real space is three-dimensional, the image representing it is two-dimensional. Part of the 3D information is thus lost during the process of image formation, and it is therefore nontrivial to describe the geometry of a scene from 2D images of it. There are several ways to retrieve the depth information lost in the image. In this thesis we are interested in estimating a depth map given a single image of the scene. In this case, the depth information corresponds, for each pixel, to the distance between the camera and the object represented at this pixel. The automatic estimation of a distance map of the scene from an image is indeed a critical algorithmic building block in a very large number of domains, in particular that of autonomous vehicles (obstacle detection, navigation aids). Although estimating depth from a single image is a difficult and inherently ill-posed problem, we know that humans can appreciate distances with one eye. This capacity is not innate but acquired, and it is made possible mostly by the identification of cues reflecting prior knowledge of the surrounding objects. Moreover, we know that learning algorithms can extract these cues directly from images. We are particularly interested in statistical learning methods based on deep neural networks, which have recently led to major breakthroughs in many fields, and we study the case of monocular depth estimation.
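As an example of a training objective commonly used for this task, here is the scale-invariant log-depth loss of Eigen et al. (2014); the thesis studies several architectures and losses, not necessarily this exact one.

```python
import torch

def scale_invariant_loss(pred_depth, true_depth, lam=0.5, eps=1e-6):
    """Scale-invariant error in log-depth space (Eigen et al., 2014).

    pred_depth, true_depth: positive tensors of shape (batch, H, W).
    """
    d = torch.log(pred_depth + eps) - torch.log(true_depth + eps)
    mse = (d ** 2).flatten(1).mean(dim=1)     # mean squared log error per image
    bias = d.flatten(1).mean(dim=1) ** 2      # squared mean log error per image
    return (mse - lam * bias).mean()

pred = torch.rand(4, 64, 64) + 0.1            # toy positive depth maps
gt = torch.rand(4, 64, 64) + 0.1
print(scale_invariant_loss(pred, gt))
```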
Vielzeuf, Valentin. „Apprentissage neuronal profond pour l'analyse de contenus multimodaux et temporels“. Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC229/document.
Der volle Inhalt der Quelle: Our perception is by nature multimodal, i.e. it appeals to many of our senses. To solve certain tasks, it is therefore relevant to use different modalities, such as sound or image. This thesis focuses on this notion in the context of deep learning. For this, it seeks to answer a particular question: how should the different modalities be merged within a deep neural network? We first propose to study a concrete application problem: the automatic recognition of emotion in audio-visual content. This leads us to different considerations concerning the modeling of emotions and, more particularly, of facial expressions. We thus propose an analysis of the representations of facial expression learned by a deep neural network. In addition, we observe that each multimodal problem appears to require a different fusion strategy. This is why we propose and validate two methods to automatically obtain an efficient fusion architecture for a given multimodal problem. The first is based on a central fusion network and aims to preserve an easy interpretation of the adopted fusion strategy, while the second adapts neural architecture search to the case of multimodal fusion, exploring a greater number of strategies and therefore achieving better performance. Finally, we are interested in a multimodal view of knowledge transfer. Indeed, we detail a non-traditional method to transfer knowledge from several sources, i.e. from several pre-trained models. For that, a more general neural representation is obtained from a single model, which brings together the knowledge contained in the pre-trained models and leads to state-of-the-art performance on a variety of facial analysis tasks.
Antipov, Grigory. „Apprentissage profond pour la description sémantique des traits visuels humains“. Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0071/document.
Der volle Inhalt der Quelle: The recent progress in artificial neural networks (rebranded as deep learning) has significantly boosted the state of the art in numerous domains of computer vision. In this PhD study, we explore how deep learning techniques can help in the analysis of gender and age from a human face. In particular, two complementary problem settings are considered: (1) gender/age prediction from given face images, and (2) synthesis and editing of human faces with the required gender/age attributes. Firstly, we conduct a comprehensive study which results in an empirical formulation of a set of principles for the optimal design and training of gender recognition and age estimation Convolutional Neural Networks (CNNs). As a result, we obtain state-of-the-art CNNs for gender/age prediction on the three most popular benchmarks, and win an international competition on apparent age estimation. On a very challenging internal dataset, our best models reach 98.7% gender classification accuracy and an average age estimation error of 4.26 years. In order to address the problem of synthesis and editing of human faces, we design and train GA-cGAN, the first Generative Adversarial Network (GAN) that can generate synthetic faces of high visual fidelity within required gender and age categories. Moreover, we propose a novel method that allows GA-cGAN to be employed for gender swapping and aging/rejuvenation without losing the original identity in synthetic faces. Finally, in order to show the practical interest of the designed face editing method, we apply it to improve the accuracy of off-the-shelf face verification software in a cross-age evaluation scenario.
Kaabi, Rabeb. „Apprentissage profond et traitement d'images pour la détection de fumée“. Electronic Thesis or Diss., Toulon, 2020. http://www.theses.fr/2020TOUL0017.
Der volle Inhalt der Quelle: This thesis deals with the problem of forest fire detection using image processing and machine learning tools. A forest fire is a fire that spreads over a wooded area. It can be of natural origin (due to lightning or a volcanic eruption) or of human origin. Around the world, the impact of forest fires on entire ecosystems and on many aspects of our daily lives is becoming more and more apparent. Many methods have been shown to be effective in detecting forest fires. The originality of the present work lies in the early detection of fires through the detection of forest smoke and the classification of smoky and non-smoky regions using deep learning and image processing tools. A set of pre-processing techniques helped us build a substantial database, which then allowed us to test the robustness of the proposed model based on a deep belief network and to evaluate its performance using the following metrics: IoU, accuracy, recall, and F1 score. Finally, the proposed algorithm is tested on several images in order to validate its efficiency. The simulations of our algorithm have been compared with state-of-the-art methods (deep CNNs, SVMs, etc.) and have provided very good results. The proposed methods achieve an average classification accuracy of about 96.5% for the early detection of smoke.
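The metrics mentioned above can all be computed from a binary confusion matrix; below is a small numpy sketch for binary smoke/no-smoke masks.

```python
import numpy as np

def binary_metrics(pred, target):
    """IoU, accuracy, recall and F1 score for binary masks (1 = smoke)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    iou = tp / (tp + fp + fn + 1e-9)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn + 1e-9)
    precision = tp / (tp + fp + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return {"IoU": iou, "accuracy": accuracy, "recall": recall, "F1": f1}

# Toy usage with random masks; in practice these would be the predicted
# and ground-truth smoke segmentation masks.
pred = np.random.rand(64, 64) > 0.5
target = np.random.rand(64, 64) > 0.5
print(binary_metrics(pred, target))
```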