Theses / dissertations on the topic "Méthodes d'apprentissage automatique multimodal"
Below are the 45 best works (theses / dissertations) for studies on the topic "Méthodes d'apprentissage automatique multimodal".
Labbé, Etienne. "Description automatique des événements sonores par des méthodes d'apprentissage profond". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES054.
In the audio research field, the majority of machine learning systems focus on recognizing a limited number of sound events. However, when a machine interacts with real data, it must be able to handle much more varied and complex situations. To tackle this problem, annotators use natural language, which allows any sound information to be summarized. Automated Audio Captioning (AAC) was introduced recently to develop systems capable of automatically producing a description of any type of sound in text form. This task concerns all kinds of sound events such as environmental, urban and domestic sounds, sound effects, music or speech. This type of system could be used by people who are deaf or hard of hearing, and could improve the indexing of large audio databases. In the first part of this thesis, we present the state of the art of the AAC task through a global description of public datasets, learning methods, architectures and evaluation metrics. Using this knowledge, we then present the architecture of our first AAC system, which obtains encouraging scores on the main AAC metric, named SPIDEr: 24.7% on the Clotho corpus and 40.1% on the AudioCaps corpus. Subsequently, we explore many aspects of AAC systems in the second part. We first focus on evaluation methods through the study of SPIDEr. For this, we propose a variant called SPIDEr-max, which considers several candidates for each audio file and shows that the SPIDEr metric is very sensitive to the predicted words. Then, we improve our reference system by exploring different architectures and numerous hyper-parameters to exceed the state of the art on AudioCaps (SPIDEr of 49.5%). Next, we explore a multi-task learning method aimed at improving the semantics of the sentences generated by our system. Finally, we build a general and unbiased AAC system called CONETTE, which can generate different types of descriptions that approximate those of the target datasets.
In the third and last part, we propose to study the capabilities of an AAC system to automatically search for audio content in a database. Our approach obtains scores competitive with systems dedicated to this task, while using fewer parameters. We also introduce semi-supervised methods to improve our system using new unlabeled audio data, and we show how pseudo-label generation can impact an AAC model. Finally, we study AAC systems in languages other than English: French, Spanish and German. In addition, we propose a system capable of producing all four languages at the same time, and we compare it with systems specialized in each language.
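As a rough illustration of the SPIDEr-max idea described above (scoring every candidate the decoder produces for a clip and keeping the best), here is a minimal sketch; the scorer interface and all names are hypothetical placeholders, not the thesis code:

```python
def spider_max(candidates_per_audio, references_per_audio, spider_score):
    """For each audio clip, keep the best SPIDEr score over its candidate
    captions, then average over the corpus.

    candidates_per_audio: list of lists of candidate captions (one list per clip)
    references_per_audio: list of lists of reference captions
    spider_score: callable (candidate, references) -> float
    """
    best_scores = []
    for candidates, references in zip(candidates_per_audio, references_per_audio):
        best_scores.append(max(spider_score(c, references) for c in candidates))
    return sum(best_scores) / len(best_scores)
```

With a toy word-overlap scorer in place of the real SPIDEr metric, the function rewards a system as soon as any of its candidates matches the references well, which is the point of the variant.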
Liu, Li. "Modélisation pour la reconnaissance continue de la langue française parlée complétée à l'aide de méthodes avancées d'apprentissage automatique". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT057/document.
This PhD thesis deals with automatic continuous Cued Speech (CS) recognition based on images of subjects, without marking any artificial landmark. In order to realize this objective, we extract high-level features from three information flows (lips, hand positions and shapes) and find an optimal approach to merging them for a robust CS recognition system. We first introduce a novel and powerful deep learning method based on Convolutional Neural Networks (CNNs) for extracting the hand shape/lips features from raw images. Adaptive background mixture models (ABMMs) are also applied to obtain the hand position features for the first time. Meanwhile, based on an advanced machine learning method, Constrained Local Neural Fields (CLNF), we propose the Modified CLNF to extract the inner lips parameters (A and B), as well as another method named the adaptive ellipse model. All these methods make significant contributions to feature extraction in CS. Then, due to the asynchrony of the three feature flows (i.e., lips, hand shape and hand position) in CS, their fusion is a challenging issue. In order to resolve it, we propose several approaches, including feature-level and model-level fusion strategies combined with context-dependent HMMs. To achieve CS recognition, we propose three tandem CNN-HMM architectures with different fusion types. All these architectures are evaluated on the corpus without any artifices, and the CS recognition performance confirms the efficiency of our proposed methods. The result is comparable with the state of the art obtained on corpora with artifices. In parallel, we investigate the temporal organization of hand movements in CS, especially its temporal segmentation, and the evaluations confirm the superior performance of our methods.
In summary, this PhD thesis applies advanced machine learning methods to computer vision, and deep learning methodologies to CS recognition, which makes a significant step toward the general problem of automatically converting CS to sound. Future work will mainly focus on an end-to-end CNN-RNN system which incorporates a language model, and on an attention mechanism for the multi-modal fusion.
Drosouli, Ifigeneia. "Multimodal machine learning methods for pattern analysis in smart cities and transportation". Electronic Thesis or Diss., Limoges, 2024. http://www.theses.fr/2024LIMO0028.
In the context of modern, densely populated urban environments, the effective management of transportation and the structure of Intelligent Transportation Systems (ITSs) are paramount. The public transportation sector is currently undergoing a significant expansion and transformation with the objective of enhancing accessibility, accommodating larger passenger volumes without compromising travel quality, and embracing environmentally conscious and sustainable practices. Technological advancements, particularly in Artificial Intelligence (AI), Big Data Analytics (BDA), and Advanced Sensors (AS), have played a pivotal role in achieving these goals and have contributed to the development, enhancement, and expansion of Intelligent Transportation Systems. This thesis addresses two critical challenges within the realm of smart cities, specifically focusing on the identification of the transportation modes utilized by citizens at any given moment and the estimation and prediction of transportation flow within diverse transportation systems. For the first challenge, two distinct approaches have been developed for Transportation Mode Detection. Firstly, a deep learning approach for the identification of eight transportation modes is proposed, utilizing multimodal sensor data collected from user smartphones. This approach is based on a Long Short-Term Memory (LSTM) network and Bayesian optimization of the model's parameters. Through extensive experimental evaluation, the proposed approach demonstrates remarkably high recognition rates compared to a variety of machine learning approaches, including state-of-the-art methods. The thesis also delves into issues related to feature correlation and the impact of dimensionality reduction. The second approach involves a transformer-based model for transportation mode detection named TMD-BERT.
This model processes the entire sequence of data, comprehends the importance of each part of the input sequence, and assigns weights accordingly using attention mechanisms to grasp global dependencies in the sequence. Experimental evaluations showcase the model's exceptional performance compared to state-of-the-art methods, highlighting its high prediction accuracy. In addressing the challenge of transportation flow estimation, a Spatial-Temporal Graph Convolutional Recurrent Network is proposed. This network learns from both the spatial station network data and time series of historical mobility changes to predict urban metro and bike sharing flow at a future time. The model combines Graph Convolutional Networks (GCN) and Long Short-Term Memory (LSTM) networks to enhance estimation accuracy. Extensive experiments conducted on real-world datasets from the Hangzhou metro system and the New York City bike sharing system validate the effectiveness of the proposed model, showcasing its ability to identify dynamic spatial correlations between stations and make accurate long-term forecasts.
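The graph-convolutional building block that such a spatial-temporal model combines with an LSTM can be sketched as follows. This is the standard renormalized GCN propagation rule, shown as a generic illustration rather than the thesis's exact architecture:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step, X' = ReLU(A_hat @ X @ W), where
    A_hat = D^{-1/2} (A + I) D^{-1/2} is the renormalized adjacency
    of the station graph and X holds per-station features."""
    a_hat = adj + np.eye(adj.shape[0])                   # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU activation
```

In a flow-prediction model of this kind, the output of such a layer (spatially smoothed station features) would be fed, timestep by timestep, into an LSTM that captures the temporal dynamics.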
Jacques, Céline. "Méthodes d'apprentissage automatique pour la transcription automatique de la batterie". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS150.
This thesis focuses on learning methods for automatic drum transcription. They are based on a transcription algorithm using a non-negative decomposition method, NMD. This thesis raises two main issues: the adaptation of the methods to the analyzed signal and the use of deep learning. Taking the information of the analyzed signal into account in the model can be achieved by introducing it during the decomposition steps. A first approach is to reformulate the decomposition step in a probabilistic context to facilitate the introduction of a posteriori information, with methods such as SI-PLCA and statistical NMD. A second approach is to implement an adaptation strategy directly in the NMD: the application of modelable filters to the patterns to model the recording conditions, or the adaptation of the learned patterns directly to the signal by applying strong constraints to preserve their physical meaning. The second issue concerns the selection of the signal segments to be analyzed: it is best to analyze segments where at least one percussive event occurs. An onset detector based on a convolutional neural network (CNN) is adapted to detect only percussive onsets. The results obtained being very promising, the detector is then trained to detect only one instrument, allowing the transcription of the three main drum instruments with three CNNs. Finally, the use of a multi-output CNN is studied to transcribe the drum part with a single network.
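As a simplified stand-in for the non-negative decomposition at the core of this approach, here is plain NMF with Lee-Seung multiplicative updates; the convolutive pattern dimension that NMD adds is omitted, so this is only a sketch of the general idea:

```python
import numpy as np

def nmf(v, rank, n_iter=500, seed=0):
    """Factor a non-negative spectrogram V ~ W @ H with multiplicative
    updates (Euclidean cost). W holds spectral patterns (e.g. one per
    drum instrument), H their activations over time."""
    rng = np.random.default_rng(seed)
    n_freq, n_time = v.shape
    w = rng.random((n_freq, rank)) + 1e-3
    h = rng.random((rank, n_time)) + 1e-3
    for _ in range(n_iter):
        h *= (w.T @ v) / (w.T @ w @ h + 1e-9)   # update activations
        w *= (v @ h.T) / (w @ h @ h.T + 1e-9)   # update patterns
    return w, h
```

Adapting the learned patterns W to the recording conditions, under constraints that preserve their physical meaning, is the kind of extension the thesis investigates on top of this basic scheme.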
Condevaux, Charles. "Méthodes d'apprentissage automatique pour l'analyse de corpus jurisprudentiels". Thesis, Nîmes, 2021. http://www.theses.fr/2021NIME0008.
Judicial decisions contain deterministic information (whose content recurs from one decision to another) and random (probabilistic) information. Both types of information come into play in a judge's decision-making process. The former can reinforce the decision insofar as deterministic information is a recurring and well-known element of case law (i.e., the outcomes of past cases). The latter, related to rare or exceptional features, can make decision-making difficult, since it can modify the case law. The purpose of this thesis is to propose a deep learning model that would highlight these two types of information and study their impact (contribution) on the judge's decision-making process. The objective is to analyze similar decisions in order to highlight random and deterministic information in a body of decisions and quantify their importance in the judgment process.
Théveniaut, Hugo. "Méthodes d'apprentissage automatique et phases quantiques de la matière". Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30228.
My PhD thesis presents three applications of machine learning to condensed matter theory. Firstly, I explain how the problem of detecting phase transitions can be rephrased as an image classification task, paving the way to the automatic mapping of phase diagrams. I tested the reliability of this approach and showed its limits for models exhibiting a many-body localized phase in one and two dimensions. Secondly, I introduce a variational representation of quantum many-body ground states in the form of neural networks and show our results on a constrained model of hardcore bosons in 2D using variational and projection methods. In particular, we confirmed the phase diagram obtained independently earlier and extended its validity to larger system sizes. Moreover, we established the ability of neural-network quantum states to accurately approximate solid and liquid bosonic phases of matter. Finally, I present a new approach to quantum error correction based on the same techniques used to conceive the best Go game engines. We showed that efficient correction strategies can be uncovered with evolutionary optimization algorithms, competitive with gradient-based optimization techniques. In particular, we found that shallow neural networks are competitive with deep neural networks.
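One way to picture the gradient-free evolutionary optimization mentioned above is a basic (1+λ) evolution strategy over the parameters of a small network; the loss function, dimensions and hyper-parameters below are purely illustrative:

```python
import numpy as np

def one_plus_lambda_es(loss, dim, n_gen=100, lam=8, sigma=0.1, seed=0):
    """Minimal (1+lambda) evolution strategy: mutate the parent parameter
    vector with Gaussian noise, evaluate the offspring on a black-box
    loss, and keep the best offspring only if it improves on the parent."""
    rng = np.random.default_rng(seed)
    parent = rng.standard_normal(dim)
    best = loss(parent)
    for _ in range(n_gen):
        offspring = parent + sigma * rng.standard_normal((lam, dim))
        losses = [loss(o) for o in offspring]
        i = int(np.argmin(losses))
        if losses[i] < best:          # elitist selection
            parent, best = offspring[i], losses[i]
    return parent, best
```

In the error-correction setting, `loss` would be replaced by the (negative) decoding success rate of the network on sampled error syndromes, which is exactly the kind of non-differentiable objective where such strategies compete with gradient-based training.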
Qiu, Danny. "Nouvelles méthodes d'apprentissage automatique pour la planification des réseaux mobiles". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS010.
Mobile connectivity is an important driver of our societies, and mobile data consumption has continued to grow steadily worldwide. To avoid global congestion, mobile network operators are bound to evolve their networks. Mobile networks are strengthened through the deployment of new base stations and antennas. As this task is very expensive, great attention is given to identifying cost-effective and competitive deployments. In this context, the objective of this thesis is to use machine learning to improve deployment decisions. The first part of the thesis is dedicated to developing machine learning models to assist in the deployment of base stations in new locations. Assuming that network knowledge for an uncovered area is unavailable, the models are trained solely on urban fabric features. At first, models were simply trained to estimate the major activity class of a base station. Subsequently, this work was extended to predict the typical hourly profile of weekly traffic. Since the training time could be long, several methods for reducing the mobile data have been studied. The second part of the thesis focuses on the deployment of new cells to increase the capacity of existing sites. For this purpose, a cell coverage model was developed by deriving the Voronoi diagram representing the coverage of base stations. The first study examined the spectrum refarming of former generations of mobile technology for the deployment of the newest ones. Models are trained to assist in prioritizing capacity additions on sectors that can benefit from the greatest improvement in resource availability. The second study examined the deployment of a new generation of mobile technology, considering two deployment strategies: driven by profitability or by the improvement of the quality of service. The methods developed in this thesis thus offer ways to train models to predict the connectivity demand of a territory as well as its evolution.
These models could be integrated into a geo-marketing tool and provide useful information for network dimensioning.
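A discrete version of the Voronoi coverage model mentioned above simply assigns every demand point to its nearest base station; this minimal sketch, with hypothetical names, is an illustration of the principle rather than the thesis's model:

```python
import numpy as np

def voronoi_cells(stations, points):
    """Assign each demand point to its nearest base station (Euclidean
    distance), i.e. a discrete Voronoi partition of the territory.

    stations: (n_stations, 2) array of station coordinates
    points:   (n_points, 2) array of demand-point coordinates
    returns:  (n_points,) array of station indices
    """
    # pairwise distances, shape (n_points, n_stations)
    d = np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Summing predicted demand over the points of each cell then gives a per-station load estimate, the quantity a capacity-planning model needs.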
Kopinski, Thomas. "Méthodes d'apprentissage pour l'interaction homme-machine". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY002.
This thesis aims at improving the complex task of hand gesture recognition by utilizing machine learning techniques to learn from features calculated from 3D point cloud data. The main contributions of this work lie in the domains of machine learning and human-machine interaction. Since the goal is to demonstrate that a robust, real-time capable system can be set up which provides a supportive means of interaction, the methods researched have to be light-weight in the sense that descriptivity balances itself with the calculation overhead needed to, in fact, remain real-time capable. To this end, several approaches were tested. Initially, the fusion of multiple ToF sensors to improve the overall recognition rate was researched. It is examined how employing more than one sensor can significantly boost recognition results in especially difficult cases, and a first grasp is obtained of the influence of the descriptors for this task as well as the influence of the choice of parameters on the calculation of the descriptor. The performance of MLPs with standard parameters is compared with the performance of SVMs for which the parameters have been obtained via grid search. Building on these results, the integration of the system into the car interior is shown. It is demonstrated how such a system can easily be integrated into an outdoor environment subject to strongly varying lighting conditions without the need for tedious calibration procedures. Furthermore, the introduction of a modified light-weight version of the descriptor, coupled with an extended database, significantly boosts the frame rate of the whole recognition pipeline.
Lastly, the introduction of confidence measures for the output of the MLPs allows for more stable classification results and gives an insight into the innate challenges of this multiclass problem in general. In order to improve the classification performance of the MLPs without the need for sophisticated algorithm design or extensive parameter search, a simple method is proposed which makes use of the existing recognition routines by exploiting information already present in the output neurons of the MLPs. A simple fusion technique is proposed which combines descriptor features with neuron confidences coming from a previously trained net, and it is shown that improved results can be achieved in nearly all cases, for problem classes and individuals respectively. These findings are analyzed in depth on a more theoretical level by comparing the effectiveness of learning solely on neural activities in the output layer with the previously introduced fusion approach. In order to take temporal information into account, the thesis describes a possible approach to exploiting the fact that data is processed in a sequential manner, so that problem-specific information can be taken into account. This approach classifies a hand pose by fusing descriptor features with neural activities coming from previous time steps and lays the groundwork for the following section, making the transition towards dynamic hand gestures. Furthermore, an infotainment system realized on a mobile device is introduced and coupled with the preprocessing and recognition module, which in turn is integrated into an automotive setting, demonstrating a possible testing environment for a gesture recognition system. In order to extend the developed system to allow for dynamic hand gesture interaction, a simplified approach is proposed.
This approach demonstrates that recognition of dynamic hand gesture sequences can be achieved with the simple definition of a starting and an ending pose, provided the underlying recognition module works with sufficient accuracy; it even allows relaxed restrictions when defining the parameters of such a sequence.
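The start/end-pose idea can be pictured as a small state machine running over the per-frame output of the static pose classifier; the pose labels below are hypothetical, and the sketch assumes the classifier already produces one label per frame:

```python
def segment_dynamic_gestures(pose_stream, start_pose, end_pose):
    """Cut a stream of per-frame pose labels into dynamic gesture
    sequences: recording starts when the start pose is recognized and
    stops when the end pose is recognized."""
    sequences, current, recording = [], [], False
    for pose in pose_stream:
        if not recording and pose == start_pose:
            recording = True          # start pose seen: open a sequence
            current = []
        elif recording and pose == end_pose:
            sequences.append(current) # end pose seen: close the sequence
            recording = False
        elif recording:
            current.append(pose)      # frames between the two delimiters
    return sequences
```

The robustness of this scheme rests entirely on the accuracy of the underlying static pose recognizer, which is exactly the dependency the abstract points out.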
Saldana, Miranda Diego. "Méthodes d'apprentissage automatique pour l'aide à la formulation : Carburants Alternatifs pour l'Aéronautique". Paris 6, 2013. http://www.theses.fr/2013PA066346.
Alternative fuels and biofuels are a viable and attractive answer to problems associated with the current widespread use of conventional fuels in vehicles. One interesting aspect of alternative fuels is that the range of possible chemical compounds is large, due to their diverse biological origins. This opens up the possibility of creating "designer fuels", whose chemical compositions are tailored to the specifications of the fuel being replaced. In this regard, it would be interesting to develop accurate predictive methods capable of instantaneously estimating a fuel's physico-chemical properties based solely on its chemical composition and the structures of its components. In this PhD work, we have investigated the application of machine learning methods to estimate properties such as flash point, enthalpy of combustion, melting point, cetane number, density and viscosity for families of compounds and mixtures similar to those found in biofuels: hydrocarbons and oxygenated compounds. During the first part of this work, machine learning models of pure compound properties were developed. During the second part, mixtures were examined and two types of approaches were investigated: (1) the direct application of machine learning methods to mixture property data; (2) the use of the previously developed pure compound property models in combination with theoretically based mixing rules. It was found that machine learning methods, especially support vector machine methods, are an effective way of creating accurate and robust models. It was further found that, in the absence of sufficiently large or representative datasets, the use of mixing rules in combination with machine learning is a viable option. Overall, a number of accurate, robust and fast property estimation methods have been developed as a means to guide the formulation of alternative fuels.
Dupas, Rémy. "Apport des méthodes d'apprentissage symbolique automatique pour l'aide à la maintenance industrielle". Valenciennes, 1990. https://ged.uphf.fr/nuxeo/site/esupversions/7ab53b01-cdfb-4932-ba60-cb5332e3925a.
Texto completo da fonteVu, Hien Duc. "Adaptation des méthodes d'apprentissage automatique pour la détection de défauts d'arc électriques". Electronic Thesis or Diss., Université de Lorraine, 2019. http://docnum.univ-lorraine.fr/ulprive/DDOC_T_2019_0152_VU.pdf.
The detection of electric arcs occurring in an electrical network by machine learning approaches is at the heart of the work presented in this thesis. The problem was first considered as a classification of fixed-size time series with two classes: normal and fault. This first part builds on the literature, where detection algorithms are mainly organized around a transformation of the signals acquired on the network, followed by the extraction of descriptive features and finally a decision step. The multi-criteria approach adopted here aims to address systematic classification errors. A methodology for selecting the best combinations of transformation and descriptors has been proposed using learning solutions. As the development of relevant descriptors is always difficult, different solutions offered by deep learning have also been studied. In a second phase, the study focused on the time-varying aspects of fault detection. Two statistical decision paths have been explored, one based on the sequential probability ratio test (SPRT) and the other on LSTM (Long Short-Term Memory) artificial neural networks. Each of these two methods exploits, in its own way, the temporal evolution of an initial classification output between 0 and 1 (normal, fault). The decision by SPRT uses an integration of the initial classification, while the LSTM learns to classify time-varying data. The results of the LSTM network are very promising, but a few aspects remain to be explored. All of this work is based on experiments with the most complete and broadest possible data on 230 V AC networks in domestic and industrial contexts. The accuracy obtained is close to 100% in the majority of situations.
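Wald's SPRT, which this work applies to the stream of per-window classifier outputs, can be sketched as follows; the per-sample likelihood models are placeholders to be supplied by the application:

```python
import math

def sprt(samples, loglik_fault, loglik_normal, alpha=0.01, beta=0.01):
    """Sequential probability ratio test: accumulate the log-likelihood
    ratio of 'fault' vs 'normal' over successive samples until it
    crosses one of the two thresholds derived from the target error
    rates alpha (false alarm) and beta (missed detection)."""
    upper = math.log((1 - beta) / alpha)   # cross upward: decide 'fault'
    lower = math.log(beta / (1 - alpha))   # cross downward: decide 'normal'
    llr = 0.0
    for i, x in enumerate(samples, start=1):
        llr += loglik_fault(x) - loglik_normal(x)
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(samples)
```

The appeal of the sequential test is that it integrates noisy per-window decisions over time and stops as soon as the accumulated evidence is strong enough, rather than after a fixed number of windows.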
Girod, Thomas. "Un modèle d'apprentissage multimodal pour un substrat distribué d'inspiration corticale". Electronic Thesis or Diss., Nancy 1, 2010. http://www.theses.fr/2010NAN10092.
The field of computational neuroscience is interested in modeling cognitive functions through biologically inspired numerical models. In this thesis, we focus on learning in a multimodal context, i.e., the combination of several sensory/motor modalities. Our model draws inspiration from the cerebral cortex, thought to support multimodal integration in the brain, and models it on a mesoscopic scale with 2D maps of cortical columns and axonal projections between maps. To build our simulations, we propose a library that simplifies the construction and evaluation of mesoscopic models. Our learning model is based on the BCM (Bienenstock-Cooper-Munro) rule, which offers a local, unsupervised, biologically plausible learning algorithm (one unit learns autonomously from its inputs). We adapt this algorithm by introducing the notion of guided learning, a means to bias the convergence to the benefit of a chosen stimulus. Then, we use this mechanism to establish correlated learning between several modalities. Thanks to correlated learning, the acquired selectivities tend to account for the same phenomenon, perceived through different modalities. This is the basis for a coherent, multimodal representation of this phenomenon.
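The BCM rule underlying this model can be sketched in a few lines; the learning rate and time constant below are illustrative values, not those of the thesis:

```python
import numpy as np

def bcm_step(w, x, theta, lr=0.01, tau=100.0):
    """One step of the BCM (Bienenstock-Cooper-Munro) rule: the weight
    change is Hebbian when the post-synaptic activity y exceeds the
    sliding threshold theta and anti-Hebbian below it, while theta
    itself tracks a running average of y**2."""
    y = float(w @ x)                        # post-synaptic activity
    w = w + lr * y * (y - theta) * x        # BCM weight update
    theta = theta + (y * y - theta) / tau   # sliding modification threshold
    return w, theta
```

Guided learning, as described above, would amount to biasing this dynamics (for instance through theta or the activity y) so that convergence favors a chosen stimulus.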
Texto completo da fonteKanj, Sawsan. "Méthodes d'apprentissage pour la classification multi label". Thesis, Compiègne, 2013. http://www.theses.fr/2013COMP2076.
Multi-label classification is an extension of traditional single-label classification, where classes are not mutually exclusive and each example can be assigned several labels simultaneously. It is encountered in various modern applications such as scene classification and video annotation. The main objective of this thesis is the development of new techniques to address the problem of multi-label classification that achieve promising classification performance. The first part of this manuscript studies the problem of multi-label classification in the context of the theory of belief functions. We propose a multi-label learning method that is able to take into account relationships between labels and to classify new instances using the formalism of representation of uncertainty for set-valued variables. The second part deals with the problem of prototype selection in the framework of multi-label learning. We propose an editing algorithm based on the k-nearest neighbor rule in order to purify the training dataset and improve the performance of multi-label classification algorithms. Experimental results on synthetic and real-world datasets show the effectiveness of our approaches.
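A Wilson-style editing pass of the kind described, adapted to multi-label data, might look like the sketch below; the Hamming-loss criterion and its threshold are assumptions made for illustration, not the thesis's exact rule:

```python
import numpy as np

def edit_training_set(features, labels, k=3, max_hamming=0.5):
    """Drop a training instance when the label sets of its k nearest
    neighbours, aggregated by per-label majority vote, disagree with its
    own labels on more than max_hamming of the labels.

    features: (n, d) float array, labels: (n, n_labels) binary array
    returns the list of indices to keep."""
    keep = []
    for i in range(len(features)):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf                                # exclude the point itself
        nn = np.argsort(d)[:k]
        predicted = labels[nn].mean(axis=0) >= 0.5   # per-label majority
        hamming = np.mean(predicted != labels[i].astype(bool))
        if hamming <= max_hamming:
            keep.append(i)
    return keep
```

Instances whose labels clash with their neighbourhood (likely noise) are removed, which tends to improve downstream k-NN-based multi-label classifiers.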
Gayraud, Nathalie. "Méthodes adaptatives d'apprentissage pour des interfaces cerveau-ordinateur basées sur les potentiels évoqués". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4231/document.
Non-invasive Brain-Computer Interfaces (BCIs) allow a user to control a machine using only their brain activity. The BCI system acquires electroencephalographic (EEG) signals, characterized by a low signal-to-noise ratio and an important variability both across sessions and across users. Typically, the BCI system is calibrated before each use, in a process during which the user has to perform a predefined task. This thesis studies the sources of this variability, with the aim of exploring, designing, and implementing zero-calibration methods. We review the variability of event-related potentials (ERPs), focusing mostly on a late component known as the P300. This allows us to quantify the sources of EEG signal variability. Our solution to tackle this variability is to focus on adaptive machine learning methods. We focus on three transfer learning methods: Riemannian geometry, optimal transport, and ensemble learning. We propose a model of the EEG that takes this variability into account. The parameters resulting from our analyses allow us to calibrate this model in a set of simulations, which we use to evaluate the performance of the aforementioned transfer learning methods. These methods are then combined and applied to experimental data. We first propose a classification method based on optimal transport. Then, we introduce a separability marker which we use to combine Riemannian geometry, optimal transport and ensemble learning. Our results demonstrate that the combination of several transfer learning methods produces a classifier that efficiently handles multiple sources of EEG signal variability.
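In the Riemannian-geometry approach mentioned above, EEG trials are typically represented by their channel covariance matrices and compared with the affine-invariant distance on the manifold of symmetric positive-definite matrices; a minimal sketch of that distance:

```python
import numpy as np

def airm_distance(a, b):
    """Affine-invariant Riemannian distance between two SPD covariance
    matrices: d(A, B) = sqrt(sum_i log(lambda_i)^2), where lambda_i are
    the generalized eigenvalues of the pencil (B, A)."""
    eigvals = np.linalg.eigvals(np.linalg.solve(a, b))
    # for SPD inputs the generalized eigenvalues are real and positive
    return float(np.sqrt(np.sum(np.log(eigvals.real) ** 2)))
```

Classifying a trial by the nearest class-mean covariance under this distance is the usual minimum-distance-to-mean scheme in Riemannian BCI pipelines; its invariance to affine transformations of the signals is what helps against session-to-session variability.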
Orcesi, Astrid. "Méthodes d'apprentissage appliquées à l'analyse du comportement humain par vision". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST082.
Texto completo da fonteThe analysis of human behavior by vision is a heavily studied research topic. Indeed, despite the progress brought by deep learning in computer vision, finely understanding what is happening in a scene is a task far from being solved, because it involves a very high semantic level. In this thesis we focus on two applications: the recognition of temporally long activities in videos and the detection of interactions in images. The first contribution of this work is the development of the first database of daily activities with high intra-class variability. The second contribution is a new method for interaction detection in a single shot on the image, which makes it much faster than the state-of-the-art two-step methods that reason over pairs of instances. Finally, the third contribution of this thesis is the constitution of a new interaction dataset, composed of interactions both between people and objects and between people, which did not exist until now and which allows an exhaustive analysis of human interactions. In order to provide baseline results on this new dataset, the previous interaction detection method has been improved with multi-task learning, reaching the best results on the public dataset widely used by the community.
Mohammed, Omar. "Méthodes d'apprentissage approfondi pour l'extraction et le transfert de style". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT035.
Texto completo da fonteOne aspect of a successful human-machine interface (e.g., human-robot interaction, chatbots, speech, handwriting, etc.) is the ability to have a personalized interaction. This affects the overall human experience and allows for a more fluent interaction. At the moment, there is a lot of work that uses machine learning in order to model such interactions. However, these models do not address the issue of personalized behavior: they try to average over the different examples from different people in the training set. Identifying the human styles (persona) opens the possibility of biasing the models' output to take into account the human preference. In this thesis, we focused on the problem of styles in the context of handwriting. Defining and extracting handwriting styles is a challenging problem, since there is no formal definition for those styles (i.e., it is an ill-posed problem). Styles are both social - depending on the writer's training, especially in middle school - and idiosyncratic - depending on the writer's shaping (letter roundness, sharpness, etc.) and force distribution over time. As a consequence, there are no easy/generic metrics to measure the quality of style in a machine behavior. We may want to change the task or adapt to a new person. Collecting data in the human-machine interface domain can be quite expensive and time-consuming. Although most of the time the new task has many things in common with the old task, traditional machine learning techniques fail to take advantage of this commonality, leading to a quick degradation in performance. Thus, one of the objectives of my thesis is to study and evaluate the idea of transferring knowledge about the styles between different tasks, within the machine learning paradigm. The objective of my thesis is to study these problems of styles in the domain of handwriting.
Available to us is the IRONOFF dataset, an online handwriting dataset with 410 writers and ~25K examples of uppercase letters, lowercase letters and digit drawings. For transfer learning, we used an extra dataset, QuickDraw!, a sketch drawing dataset containing ~50 million drawings across 345 categories. The major contributions of my thesis are: 1) Propose a work pipeline to study the problem of styles in handwriting. This involves proposing a methodology, benchmarks and evaluation metrics. We choose the temporal generative models paradigm in deep learning in order to generate drawings, and evaluate their proximity/relevance to the intended/ground-truth drawings. We proposed two metrics to evaluate the curvature and the length of the generated drawings. In order to ground those metrics, we proposed multiple benchmarks - whose relative power we know in advance - and then verified that the metrics actually respect the relative power relationship. 2) Propose a framework to study and extract styles, and verify its advantage against the previously proposed benchmarks. We settled on the idea of using a deep conditioned autoencoder in order to summarize and extract the style information, without the need to focus on the task identity (since it is given as a condition). We validate this framework against the previously proposed benchmarks using our evaluation metrics. We also visualize the extracted styles, leading to some exciting outcomes! 3) Using the proposed framework, propose a way to transfer the information about styles between different tasks, and a protocol to evaluate the quality of transfer. We leveraged the deep conditioned autoencoder used earlier by extracting its encoder part - which we believe holds the relevant information about the styles - and using it in new models trained on new tasks. We extensively test this paradigm over a range of different tasks, on both the IRONOFF and QuickDraw! datasets.
We show that we can successfully transfer style information between different tasks.
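The abstract above mentions two metrics for generated drawings, their length and their curvature, without giving formulas. A minimal sketch with plausible definitions follows; the total-polyline-length and mean-absolute-turning-angle definitions are assumptions, and the thesis may define its metrics differently.

```python
import math

def stroke_length(points):
    """Total polyline length of a drawing given as (x, y) samples."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def mean_curvature(points):
    """Mean absolute turning angle (radians) between consecutive
    segments -- one plausible curvature proxy for a drawn stroke."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
        angles.append(abs(d))
    return sum(angles) / len(angles) if angles else 0.0
```

A generated stroke can then be compared to the ground-truth stroke by the difference of these two scalar summaries.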
Chardin, David. "Étude de différentes méthodes d'apprentissage supervisé pour le développement de tests diagnostiques basés sur des données métabolomiques". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ6004.
Texto completo da fonteMetabolomics is a recent field of research concerning the study of small molecules or « metabolites » in biological samples. The different omics fields: genomics, transcriptomics, proteomics and metabolomics, form a chain in which each link influences the others and is influenced by external factors. Metabolomics represents the last link of this chain, resulting from genetic, pathological, environmental and toxicological factors, and is thus the omics field closest to the biological phenotype. Since metabolomic studies are relatively fast and inexpensive, they could be used in routine medical practice, particularly for diagnostic testing. Metabolomic data most frequently include high numbers of variables. Different machine learning methods are used for the statistical analysis of these high-dimensional datasets. The most frequently used method is PLS-DA (Partial Least Squares Discriminant Analysis). However, this method has some drawbacks, including a risk of false discoveries due to overfitting. In this work, we evaluated new supervised classification methods for clinical applications of metabolomics, particularly for diagnostic testing. We first introduce two new classification methods, created through a collaboration between biologists, physicians and mathematicians: the PD-CR method (Primal Dual for Classification with Rejection) and a supervised autoencoder. We compare these methods to the most frequently used methods in this setting: PLS-DA, Support Vector Machines, Random Forests and neural networks. Hence, we show that these new methods achieve similar or higher performance than the classical methods, while selecting biologically relevant metabolites whose weights in the classification are given in a straightforward and easily interpretable manner.
Moreover, these methods include a probability score for each prediction, which seems particularly relevant for medical applications. We then report the results of a metabolomic study performed on frozen and formalin-fixed glial tumor samples. Using an L1-penalized regression method associated with a bootstrap method, we created two models to classify glial tumors according to their IDH mutational status and their grade. These models were trained on metabolomic data from frozen samples and led to the selection of three metabolites: 2-hydroxyglutarate, aminoadipate and guanidinoacetate. When testing these models on metabolomic data obtained on fixed glial tumor samples, they yielded good classification results: IDH mutational status prediction with a sensitivity of 70.6% and a specificity of 80.4%, and grade prediction with a sensitivity of 75% and a specificity of 74.5%. Hence, we have shown that performing a metabolomic analysis on fixed samples is possible and can lead to promising results. Targeted analysis on new tumor samples could be performed to validate our models and lead to applications in routine practice, complementing pre-existing techniques. Moreover, exploring the biological phenomena underlying the association between glial tumor grade and aminoadipate and guanidinoacetate could lead to a better understanding of these tumors and their carcinogenesis.
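The bootstrap-plus-L1 feature selection described above can be sketched as a generic stability-selection loop. Everything below is an illustrative assumption: the function name, the resampling counts, and especially the simple class-mean-difference scorer, which stands in for the L1-penalized model actually used in the thesis.

```python
import random
from collections import Counter

def stable_features(X, y, n_boot=50, top_k=3, seed=0):
    """Stability-selection sketch: on each bootstrap resample, rank
    features by a class-mean-difference score (a stand-in for an
    L1-penalised model) and record how often each feature lands in
    the top_k. Returns {feature index: selection frequency}."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    counts = Counter()
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]          # bootstrap resample
        xs, ys = [X[i] for i in idx], [y[i] for i in idx]
        def score(j):
            pos = [x[j] for x, c in zip(xs, ys) if c == 1]
            neg = [x[j] for x, c in zip(xs, ys) if c == 0]
            if not pos or not neg:
                return 0.0
            return abs(sum(pos) / len(pos) - sum(neg) / len(neg))
        ranked = sorted(range(d), key=score, reverse=True)
        counts.update(ranked[:top_k])
    return {j: counts[j] / n_boot for j in range(d)}
```

Features selected in nearly every resample (frequency close to 1.0) are the "stable" ones, which is the spirit behind retaining only three metabolites in the study.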
Playe, Benoit. "Méthodes d'apprentissage statistique pour le criblage virtuel de médicament". Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM010/document.
Texto completo da fonteThe rational drug discovery process has had limited success despite all the advances in understanding diseases and technological breakthroughs. Indeed, the process of drug development is currently estimated to require about 1.8 billion US dollars over about 13 years on average. Computational approaches are promising ways to facilitate the tedious task of drug discovery. We focus in this thesis on statistical approaches which virtually screen a large set of compounds against a large set of proteins, which can help to identify drug candidates for known therapeutic targets, anticipate potential side effects, or suggest new therapeutic indications of known drugs. This thesis is conceived following two lines of approach to perform drug virtual screening: data-blinded feature-based approaches (in which molecules and proteins are numerically described based on experts' knowledge), and data-driven feature-based approaches (in which compound and protein numerical descriptors are learned automatically from the chemical graph and the protein sequence). We discuss these approaches, and also propose applications of virtual screening to guide the drug discovery process.
Toqué, Florian. "Prévision et visualisation de l'affluence dans les transports en commun à l'aide de méthodes d'apprentissage automatique". Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2029.
Texto completo da fonteAs part of the fight against global warming, several countries around the world, including Canada and some European countries such as France, have established measures to reduce greenhouse gas emissions. One of the major areas addressed by these states concerns the transport sector, and more particularly the development of public transport to reduce the use of private cars. To this end, the local authorities concerned aim to establish more accessible, clean and sustainable urban transport systems. In this context, this thesis, co-directed by the University of Paris-Est, the French Institute of Science and Technology for Transport, Development and Networks (IFSTTAR) and Polytechnique Montréal in Canada, focuses on the analysis of urban mobility through research conducted on the forecasting and visualization of public transport ridership using machine learning methods. The main motivations concern the improvement of transport services offered to passengers, such as: better planning of transport supply, improvement of passenger information (e.g., proposed itinerary in the case of an event/incident, information about the crowd in the train at a chosen time, etc.). In order to improve transport operators' knowledge of user travel in urban areas, we take advantage of the development of data science (e.g., data collection, development of machine learning methods). This thesis thus focuses on three main parts: (i) long-term forecasting of passenger demand using event databases, (ii) short-term forecasting of passenger demand and (iii) visualization of passenger demand on public transport. The research is mainly based on the use of ticketing data provided by transport operators and was carried out on three real case studies: the metro and bus network of the city of Rennes, the rail and tramway network of the "La Défense" business district in Paris, France, and the metro network of Montreal, Quebec, in Canada.
Pastor, Philippe. "Étude et application des méthodes d'apprentissage pour la navigation d'un robot en environnement inconnu". Toulouse, ENSAE, 1995. http://www.theses.fr/1995ESAE0013.
Texto completo da fonteSelingue, Maxime. "amélioration de la précision de structures sérielles poly-articulées par des méthodes d'apprentissage automatique économes en données". Electronic Thesis or Diss., Paris, HESAM, 2023. http://www.theses.fr/2023HESAE085.
Texto completo da fonteThe evolution of production methods in the context of Industry 4.0 has led to the use of collaborative and industrial robots for tasks such as drilling, machining, and assembly. These tasks require an accuracy of around a tenth of a millimeter, whereas the precision of these robots is in the range of one to two millimeters. Robotic integrators therefore had to propose calibration methods aimed at establishing a more reliable and representative model of the robot's behavior in the real world. As a result, analytical calibration methods model the defects affecting the accuracy of industrial robots, including geometric defects, joint compliance, transmission errors, and thermal drift. Given the complexity of experimentally identifying the parameters of some of these analytical models, hybrid calibration methods have been developed. These methods combine an analytical model with a machine learning approach whose role is to accurately predict the residual positioning errors (caused by the inaccuracies of the analytical model). These defects can then be compensated for in advance through a compensation algorithm. However, these methods require a significant amount of time and data, and are no longer valid when the robot's payload changes. The objective of this thesis is to improve hybrid calibration methods to make them applicable in industrial contexts. In this regard, several contributions have been made. First, two methods based on neural networks allow the adaptation of the hybrid model to a new payload within a robot's workspace with very little data; they rely respectively on transfer learning and prediction interpolation. Then, a hybrid calibration method using active learning with Gaussian process regression is presented. Through this approach, in an iterative process, the system autonomously decides which relevant data to acquire, enabling calibration that is optimized in terms of data and time.
Giffon, Luc. "Approximations parcimonieuses et méthodes à noyaux pour la compression de modèles d'apprentissage". Electronic Thesis or Diss., Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0354.
Texto completo da fonteThis thesis aims at studying and experimentally validating the benefits, in terms of amount of computation and data needed, that kernel methods and sparse approximation methods can bring to existing machine learning algorithms. In the first part of this thesis, we propose a new type of neural architecture that uses a kernel function to reduce the number of learnable parameters, thus making it robust to overfitting in a regime where few labeled observations are available. In the second part of this thesis, we seek to reduce the complexity of existing machine learning models by including sparse approximations. First, we propose an alternative algorithm to the K-means algorithm which allows speeding up the inference phase by expressing the centroids as a product of sparse matrices. In addition to the convergence guarantees of the proposed algorithm, we provide an experimental validation of both the quality of the centroids thus expressed and their benefit in terms of computational cost. Then, we explore the compression of neural networks by replacing the matrices that constitute their layers with sparse matrix products. Finally, we hijack the Orthogonal Matching Pursuit (OMP) sparse approximation algorithm to make a weighted selection of decision trees from a random forest; we analyze the effect of the weights obtained and propose a non-negative alternative to the method that outperforms all other tree selection techniques considered on a large panel of datasets.
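The tree-selection idea above treats each tree's predictions on a validation set as an "atom" and the forest's target output as the signal to approximate. A minimal sketch follows; it uses plain greedy matching pursuit rather than the full OMP variant of the thesis (no re-fitting of coefficients over the selected set), and the function name and defaults are illustrative assumptions.

```python
def select_trees(tree_preds, target, n_select=2):
    """Greedy matching-pursuit sketch of weighted tree selection:
    tree_preds[j] is tree j's prediction vector on a validation set;
    at each step, pick the tree most correlated with the residual and
    give it the projection coefficient as weight."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    residual = list(target)
    weights = {}
    for _ in range(n_select):
        # atom with the largest normalised correlation to the residual
        best = max(range(len(tree_preds)),
                   key=lambda j: abs(dot(tree_preds[j], residual))
                   / (dot(tree_preds[j], tree_preds[j]) ** 0.5 or 1.0))
        a = tree_preds[best]
        w = dot(a, residual) / dot(a, a)                 # projection coefficient
        weights[best] = weights.get(best, 0.0) + w
        residual = [r - w * x for r, x in zip(residual, a)]
    return weights
```

The returned weights define a small weighted sub-forest; forcing `w` to be non-negative would correspond in spirit to the non-negative alternative mentioned in the abstract.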
Marin, Didier. "Méthodes d'apprentissage pour l'interaction physique homme-robot : application à l'assistance robotisée pour le transfert assis-debout". Paris 6, 2013. http://www.theses.fr/2013PA066293.
Texto completo da fonteSit-to-stand is a task that becomes increasingly difficult with aging. It is however necessary for an autonomous life, since it precedes walking. Physical assistance robotics offers solutions that provide an active assistance in the realization of motor tasks. It gives the possibility to adapt the assistance to the specific needs of each user. Our work proposes and implements a mechanism for automatic adaptation of an assistance robot's behaviour to its user. The provided assistance is evaluated using a comfort criterion which is specific to the task. The adaptation consists in an optimisation of control parameters using Reinforcement Learning methods. This approach is tested on smart walker prototypes, with healthy subjects and patients.
Sokol, Marina. "Méthodes d'apprentissage semi-supervisé basé sur les graphes et détection rapide des nœuds centraux". Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00998394.
Texto completo da fonteBailly, Kévin. "Méthodes d'apprentissage pour l'estimation de la pose de la tête dans des images monoculaires". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00560836.
Texto completo da fonteCappelaere, Charles-Henri. "Estimation du risque de mort subite par arrêt cardiaque a l'aide de méthodes d'apprentissage artificiel". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066014/document.
Texto completo da fonteImplantable cardioverter defibrillators (ICD) have been prescribed for prophylaxis since the early 2000s for patients at high risk of sudden cardiac death (SCD). Unfortunately, most implantations to date appear unnecessary. This result raises an important issue because of the perioperative and postoperative risks. Thus, it is important to improve the selection of the candidates for ICD implantation in primary prevention. Risk stratification for SCD based on Holter recordings has been extensively performed in the past, without resulting in a significant improvement of the selection of candidates for ICD implantation. The present report describes a nonlinear multivariate analysis of Holter recording indices. We computed all the descriptors available in the Holter recordings present in our database. The latter consisted of labelled Holter recordings of patients equipped with an ICD in primary prevention; a fraction of these patients received at least one appropriate therapy from their ICD during a 6-month follow-up. Based on physiological knowledge of arrhythmogenesis, feature selection was performed, and an innovative procedure of classifier design and evaluation was proposed. The classifier is intended to discriminate patients who are really at risk of sudden death from patients for whom ICD implantation does not seem necessary. In addition, we designed an ad hoc classifier that capitalizes on prior knowledge of arrhythmogenesis. We conclude that improving prophylactic ICD-implantation candidate selection by automatic classification from Holter recording features may be possible. Nevertheless, that statement should be supported by the study of a more extensive and appropriate database.
Meneroux, Yann. "Méthodes d'apprentissage statistique pour la détection de la signalisation routière à partir de véhicules traceurs". Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2061.
Texto completo da fonteWith the democratization of connected devices equipped with GPS receivers, large quantities of vehicle trajectories become available, particularly via professional vehicle fleets, mobile navigation and collaborative driving applications. Recently, map inference techniques, aiming at deriving mapping information from these GPS tracks, have tended to complete or even replace traditional techniques. Initially restricted to the construction of road geometry, they are gradually being used to enrich existing networks, and in particular to build a digital database of road signs. Detailed and exhaustive knowledge of the infrastructure is an essential prerequisite in many areas: for network managers and decision-makers, for users with precise calculation of travel times, but also in the context of the autonomous vehicle, with the construction and updating of a high-definition map providing electronic horizons in real time, which can supplement the system in the event of failures of the main sensors. In this context, statistical learning methods (e.g., Bayesian methods, random forests, neural networks) provide an interesting perspective and guarantee the adaptability of the approach to different use cases and to the great variability of the data encountered in practice. In this thesis, we investigate the potential of this class of methods for the automatic detection of traffic signals (mainly traffic lights) from a set of GPS speed profiles. First, we work on an experimental, high-quality dataset, for which we compare the performance of several classifiers using classical image recognition approaches and a functional approach stemming from the field of signal processing, aggregating and decomposing speed profiles on a Haar wavelet basis whose coefficients are used as explanatory variables.
The results obtained show the relevance of the functional approach, particularly when combined with the random forest algorithm, in terms of accuracy and computation time. The approach is then applied to other types of road signs. In a second part, we try to adapt the proposed method to the case of observational data, for which we also try to estimate the position of the traffic lights by regression. The results show the sensitivity of the learning approach to data noise and the difficulty of defining the spatial extent of individual instances on a complex road network. We try to solve this second issue using global image approaches based on segmentation by convolutional neural network, allowing us to avoid the definition of instances. Finally, we experiment with an approach leveraging the spatial autocorrelation of individual instances using the graph topology, by modeling the study area as a conditional Markov field. The results obtained show an improvement compared to the performance obtained with non-structured learning. This thesis work has also led to the development of original methods for pre-processing GPS trajectory data, in particular for filtering, debiasing coordinates and map-matching traces on a reference road network.
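The Haar decomposition of speed profiles described above can be sketched in a few lines: a length-2^k profile is repeatedly split into pairwise averages and details, and the resulting coefficient vector serves as a fixed-length feature vector for a classifier such as a random forest. The normalization (plain averages rather than orthonormal scaling) is one common convention and an assumption here.

```python
def haar_coeffs(signal):
    """Full discrete Haar decomposition of a length-2^k signal:
    returns [overall average] followed by detail coefficients,
    ordered from the coarsest scale to the finest."""
    s = list(signal)
    details = []
    while len(s) > 1:
        avg = [(a + b) / 2 for a, b in zip(s[::2], s[1::2])]
        det = [(a - b) / 2 for a, b in zip(s[::2], s[1::2])]
        details = det + details   # prepend, so coarser scales end up first
        s = avg
    return s + details
```

A flat speed profile yields only the average plus zero details, while braking/accelerating patterns (such as a stop at a traffic light) show up in the detail coefficients, which is what makes them usable as explanatory variables.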
Cappelaere, Charles-Henri. "Estimation du risque de mort subite par arrêt cardiaque a l'aide de méthodes d'apprentissage artificiel". Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066014.
Texto completo da fonteImplantable cardioverter defibrillators (ICD) have been prescribed for prophylaxis since the early 2000s for patients at high risk of sudden cardiac death (SCD). Unfortunately, most implantations to date appear unnecessary. This result raises an important issue because of the perioperative and postoperative risks. Thus, it is important to improve the selection of the candidates for ICD implantation in primary prevention. Risk stratification for SCD based on Holter recordings has been extensively performed in the past, without resulting in a significant improvement of the selection of candidates for ICD implantation. The present report describes a nonlinear multivariate analysis of Holter recording indices. We computed all the descriptors available in the Holter recordings present in our database. The latter consisted of labelled Holter recordings of patients equipped with an ICD in primary prevention; a fraction of these patients received at least one appropriate therapy from their ICD during a 6-month follow-up. Based on physiological knowledge of arrhythmogenesis, feature selection was performed, and an innovative procedure of classifier design and evaluation was proposed. The classifier is intended to discriminate patients who are really at risk of sudden death from patients for whom ICD implantation does not seem necessary. In addition, we designed an ad hoc classifier that capitalizes on prior knowledge of arrhythmogenesis. We conclude that improving prophylactic ICD-implantation candidate selection by automatic classification from Holter recording features may be possible. Nevertheless, that statement should be supported by the study of a more extensive and appropriate database.
Romanelli, Marco. "Méthodes d'apprentissage machine pour la protection de la vie privée : mesure de leakage et design des mécanismes". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX045.
Texto completo da fonteIn recent years, there has been an increasing involvement of artificial intelligence and machine learning (ML) in countless aspects of our daily lives. In this PhD thesis, we study how notions of information theory and ML can be used to better measure and understand the information leaked by data and/or models, and to design solutions to protect the privacy of the shared information. We first explore the application of ML to estimate the information leakage of a system. We consider a black-box scenario where the system's internals are either unknown, or too complicated to analyze, and the only available information are pairs of input-output data samples. Previous works focused on counting frequencies to estimate the input-output conditional probabilities (the frequentist approach); however, this method is not accurate when the domain of possible outputs is large. To overcome this difficulty, the estimation of the Bayes error of the ideal classifier was recently investigated using ML models, and it has been shown to be more accurate thanks to the ability of those models to learn the input-output correspondence. However, the Bayes vulnerability is only suitable to describe one-try attacks. A more general and flexible measure of leakage is the g-vulnerability, which encompasses several different types of adversaries, with different goals and capabilities. We therefore propose a novel ML-based approach, relying on data preprocessing, to perform black-box estimation of the g-vulnerability, formally studying the learnability for all data distributions and evaluating performance in various experimental settings. In the second part of this thesis, we address the problem of obfuscating sensitive information while preserving utility, and we propose a ML approach inspired by the generative adversarial networks paradigm.
The idea is to set up two nets: the generator, which tries to produce an optimal obfuscation mechanism to protect the data, and the classifier, which tries to de-obfuscate the data. By letting the two nets compete against each other, the mechanism improves its degree of protection, until an equilibrium is reached. We apply our method to the case of location privacy, and we perform experiments on synthetic data and on real data from the Gowalla dataset. The performance of the obtained obfuscation mechanism is evaluated in terms of the Bayes error, which represents the strongest possible adversary. Finally, we observe that, in classification problems, we try to predict classes by observing the values of the features that represent the input samples. Classes and feature values can be considered respectively as the secret inputs and the observable outputs of a system. Therefore, measuring the leakage of such a system is a strategy to tell the most and least informative features apart. Information theory is a useful tool for this task, as the prediction power stems from the correlation, i.e., the mutual information, between features and labels. We compare the Shannon-entropy-based mutual information to the Rényi min-entropy-based one, both from the theoretical and the experimental point of view, showing that, in general, the two approaches are incomparable, in the sense that, depending on the considered dataset, sometimes the Shannon-entropy-based method outperforms the Rényi min-entropy-based one and sometimes the opposite occurs.
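The two leakage measures being compared above have standard closed forms when the system is given as a channel matrix: Shannon mutual information I(X;Y), and min-entropy leakage, the log-ratio of posterior to prior Bayes vulnerability. A minimal sketch of both (function names are illustrative):

```python
import math

def shannon_leakage(prior, channel):
    """Mutual information I(X;Y) in bits, for a prior p(x) and a
    channel given as a row-stochastic matrix channel[x][y] = p(y|x)."""
    joint = [[px * pyx for pyx in row] for px, row in zip(prior, channel)]
    p_y = [sum(col) for col in zip(*joint)]
    return sum(pxy * math.log2(pxy / (prior[x] * p_y[y]))
               for x, row in enumerate(joint)
               for y, pxy in enumerate(row) if pxy > 0)

def min_entropy_leakage(prior, channel):
    """Renyi min-entropy leakage: log2 of the ratio between posterior
    and prior Bayes vulnerability (a one-try attacker's success rate)."""
    v_prior = max(prior)
    v_post = sum(max(px * row[y] for px, row in zip(prior, channel))
                 for y in range(len(channel[0])))
    return math.log2(v_post / v_prior)
```

On a noiseless binary channel both measures give 1 bit, and on a constant (useless) channel both give 0; the incomparability the thesis discusses appears on intermediate channels, where the two measures can rank systems differently.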
Buhot, Arnaud. "Etude de propriétés d'apprentissage supervisé et non supervisé par des méthodes de Physique Statistique". Phd thesis, Université Joseph Fourier (Grenoble), 1999. http://tel.archives-ouvertes.fr/tel-00001642.
Texto completo da fonteGalibourg, Antoine. "Estimation de l'âge dentaire chez le sujet vivant : application des méthodes d'apprentissage machine chez les enfants et les jeunes adultes". Electronic Thesis or Diss., Toulouse 3, 2022. http://thesesups.ups-tlse.fr/5355/.
Texto completo da fonteStatement of the problem: In the living individual, the estimation of dental age is a parameter used in dentofacial orthopedics and orthodontics or in pediatrics to locate the individual on their growth curve. In forensic medicine, the estimation of dental age allows inferring the chronological age for a regression or a classification task. There are physical and radiological methods. While the latter are more accurate, there is no universal method. Demirjian created the most widely used radiological method almost 50 years ago, but it is criticized for its accuracy and for using reference tables based on a French-Canadian population sample. Objective: Artificial intelligence, and more particularly machine learning, has allowed the development of various tools with a learning capacity on an annotated database. The objective of this thesis was to compare the performance of different machine learning algorithms, first against two classical methods of dental age estimation, and then between them by adding additional predictors. Material and method: In a first part, the different methods of dental age estimation for living children and young adults are presented. The limitations of these methods are exposed, and possibilities to address them with the use of machine learning are proposed. Using a database of 3605 panoramic radiographs of individuals aged 2 to 24 years (1734 girls and 1871 boys), different machine learning methods were tested to estimate dental age. The accuracies of these methods were compared with each other and with the two classical methods of Demirjian and Willems. This work resulted in an article published in the International Journal of Legal Medicine. In a second part, the different machine learning methods are described and discussed. Then, the results obtained in the article are put in perspective with the publications on the subject in 2021.
Finally, the results of the machine learning methods are put into perspective with regard to their use in dental age estimation. Results: The results show that all machine learning methods have better accuracy than the conventional methods tested for dental age estimation under the conditions of their use. They also show that the use of the maturation stage of third molars, over a range of use extended to 24 years, does not allow the estimation of dental age for legal purposes. Conclusion: Machine learning methods fit into the overall process of automating dental age determination. The specific area of deep learning seems worth investigating for dental age classification tasks.
Cappelaere, Charles-Henri. "Estimation du risque de mort subite par arrêt cardiaque à l'aide de méthodes d'apprentissage artificiel". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://pastel.archives-ouvertes.fr/pastel-00939082.
Zoubeirou, A. Mayaki Mansour. "Méthodes d'apprentissage profond pour la détection d'anomalies et de changement de régimes : application à la maintenance prédictive dans des systèmes embarqués". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4010.
In the context of Industry 4.0 and the Internet of Things (IoT), predictive maintenance has become vital for optimizing the performance and lifespan of electronic devices and equipment. This approach, reliant on extensive data analysis, stands on two pillars: anomaly detection and drift detection. Anomaly detection plays a crucial role in identifying deviations from established norms, thereby flagging potential issues such as equipment malfunctions. Drift detection, on the other hand, monitors changes in data distributions over time. It addresses "concept drift" to ensure the continued relevance of predictive models in evolving industrial systems. This thesis highlights the synergistic relationship between these two techniques, demonstrating their collective impact on proactive maintenance strategies. We address various challenges in predictive maintenance such as data quality, labeling, the complexity of industrial systems, the nuances of drift detection, and the demands of real-time processing. A significant part of this research focuses on how to adapt and use these techniques in the context of embedded systems. The significance of this work extends to cost savings, environmental impact reduction and alignment with the advancements of Industry 4.0, positioning predictive maintenance as a key component in the new era of industrial efficiency and sustainability. This study introduces novel methods employing statistical and machine learning techniques, validated in various industrial settings such as modern manufacturing plants. These methods, both theoretical and applied, effectively address the challenges of predictive maintenance.
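The drift detection the abstract describes can be illustrated with a classic sequential test. The sketch below is a minimal, self-contained Page-Hinkley detector for upward mean shifts in a sensor stream; it is a generic illustration of concept-drift monitoring, not the thesis's actual method, and the parameter values and example stream are invented.

```python
class PageHinkley:
    """Page-Hinkley test: flags an upward shift in the mean of a data stream."""

    def __init__(self, delta=0.005, threshold=5.0):
        self.delta = delta          # tolerated magnitude of small fluctuations
        self.threshold = threshold  # alarm threshold (lambda)
        self.n = 0                  # samples seen so far
        self.mean = 0.0             # running mean of the stream
        self.cum = 0.0              # cumulative deviation m_t
        self.cum_min = 0.0          # running minimum of m_t

    def update(self, x):
        """Feed one observation; return True if drift is detected."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.threshold


# A stable regime followed by a shifted one: the alarm fires shortly after the shift.
detector = PageHinkley()
stream = [0.0] * 50 + [3.0] * 50
alarms = [i for i, x in enumerate(stream) if detector.update(x)]
```

The detector is O(1) per sample in time and memory, which is what makes this family of tests attractive on embedded hardware.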
Glaude, Hadrien. "Méthodes des moments pour l’inférence de systèmes séquentiels linéaires rationnels". Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10107/document.
Learning stochastic models that generate sequences has many applications in natural language processing, speech recognition and bioinformatics. Multiplicity Automata (MA) are graphical latent-variable models that encompass a wide variety of linear systems. In particular, they can model stochastic languages, stochastic processes and controlled processes. Traditional learning algorithms such as Baum-Welch are iterative, slow and may converge to local optima. A recent alternative is to use the Method of Moments (MoM) to design consistent and fast algorithms with pseudo-PAC guarantees. However, MoM-based algorithms have two main disadvantages. First, the PAC guarantees hold only if the size of the learned model matches the size of the target model. Second, although these algorithms learn a function close to the target distribution, most do not ensure that it is a distribution: a model learned from a finite number of examples may return negative values or values that do not sum to one. This thesis addresses both problems. First, we extend the theoretical guarantees to compressed models and propose a regularized spectral algorithm that adjusts the size of the model to the data. Then, an application to electronic warfare is proposed, for sequencing the dwells of a super-heterodyne receiver. Finally, we design new MoM-based learning algorithms that do not suffer from the problem of negative probabilities; for one of them, we prove pseudo-PAC guarantees.
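Central to MoM-based learning of multiplicity automata is the Hankel matrix, whose entry for a prefix u and suffix v is the probability of the string uv, and whose rank bounds the number of states needed to generate the distribution. A minimal sketch of that object (illustrative only: the toy distribution, prefix/suffix sets and the plain Gaussian-elimination rank check are made up for the example; spectral algorithms use an SVD of this matrix in practice):

```python
def matrix_rank(M, tol=1e-10):
    """Rank of a small matrix via Gauss-Jordan elimination (illustration only)."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for col in range(cols):
        pivot = max(range(rank, rows), key=lambda r: abs(M[r][col]), default=None)
        if pivot is None or abs(M[pivot][col]) < tol:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank:
                f = M[r][col] / M[rank][col]
                for c in range(cols):
                    M[r][c] -= f * M[rank][c]
        rank += 1
    return rank


# Toy distribution over strings on the alphabet {a, b} (probabilities sum to 1).
p = {"": 0.1, "a": 0.3, "b": 0.1, "aa": 0.2, "ab": 0.1, "ba": 0.1, "bb": 0.1}
prefixes = ["", "a", "b"]
suffixes = ["", "a", "b"]

# Hankel matrix: H[u][v] = P(uv); its rank estimates the number of states.
H = [[p.get(u + v, 0.0) for v in suffixes] for u in prefixes]
n_states = matrix_rank(H)
```

With sampled data the entries become empirical frequencies and the rank is read off the singular values, which is where the size-adjustment problem the thesis regularizes comes from.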
Roger, Vincent. "Modélisation de l'indice de sévérité du trouble de la parole à l'aide de méthodes d'apprentissage profond : d'une modélisation à partir de quelques exemples à un apprentissage auto-supervisé via une mesure entropique". Thesis, Toulouse 3, 2022. http://www.theses.fr/2022TOU30180.
People with head and neck cancers have speech difficulties after surgery or radiation therapy. It is important for health practitioners to have a measure that reflects the severity of the speech disorder. To produce this measure, a perceptual study is commonly performed by a group of five to six clinical experts, a process that limits the use of this assessment in practice. An automatic measure similar to the severity index would therefore allow better follow-up of patients by making the score easier to obtain. To build such a measure, we relied on a reading task, as classically performed. We used the recordings of the C2SI-RUGBI corpus, which includes more than 100 people and represents about one hour of recording for modeling the severity index. In this PhD work, a review of state-of-the-art methods on speech, emotion and speaker recognition using little data was undertaken. We then attempted to model severity using transfer learning and deep learning. Since the results were not usable, we turned to so-called "few-shot" techniques (learning from only a few examples). After promising first attempts at phoneme recognition, we obtained encouraging results for categorising the severity of patients. Nevertheless, exploiting these results for a medical application would require improvements. We therefore computed projections of the data from our corpus. As some score ranges were separable using acoustic parameters, we proposed a new entropy-based measurement method. It relies on self-supervised speech representations learned on the Librispeech corpus with the PASE+ model, and is inspired by the Inception Score (generally used in image processing to evaluate the quality of images generated by models). Our method produces a score similar to the severity index, with a Spearman correlation of 0.87 on the reading task of the cancer corpus. The advantage of our approach is that it does not require data from the C2SI-RUGBI corpus for training, so the whole corpus can be used to evaluate our system. The quality of these results has allowed us to consider use in a clinical environment through a tablet application: tests are underway at the Larrey Hospital in Toulouse.
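The agreement reported above is a Spearman rank correlation (0.87) between the automatic score and the perceptual severity index. As a reminder of how such an evaluation is computed, here is a minimal pure-Python sketch (the score lists are invented for illustration): rank both sets of scores, averaging ranks over ties, then take Pearson's correlation of the ranks.

```python
def ranks(xs):
    """Ranks of xs (1-based), averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r


def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den


# Invented predicted vs perceptual severity scores: identical orderings give rho = 1.
predicted = [2.1, 4.8, 3.3, 1.0]
perceptual = [2.0, 5.0, 3.5, 1.5]
rho = spearman(predicted, perceptual)
```

Spearman is the natural choice here because only the ordering of patients by severity matters, not the scale of the automatic score.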
Cornec, Matthieu. "Inégalités probabilistes pour l'estimateur de validation croisée dans le cadre de l'apprentissage statistique et Modèles statistiques appliqués à l'économie et à la finance". PhD thesis, Université de Nanterre - Paris X, 2009. http://tel.archives-ouvertes.fr/tel-00530876.
Harrar-Eskinazi, Karine. "Dyslexie développementale et méthodes de remédiation : Conception et évaluation d'un programme d'intervention multimodale et multi-componentielle fondé sur les approches phonologique, visuo-attentionnelle et intermodalitaire". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ2025.
Dyslexia is a neurodevelopmental disorder that affects children's ability to read and spell words accurately and effectively. Most remediation studies rest on the hypothesis that the core deficit in dyslexic children stems from a single underlying cognitive deficit. However, a large number of studies suggest that most dyslexic children exhibit multiple underlying cognitive deficits, simultaneously present to varying degrees, leading to a wide range of clinical disorders in reading and spelling. This etiological multifactoriality in developmental dyslexia produces a semiological heterogeneity that explains clinical variability and complicates diagnosis, which we call nosographic variability. In line with multi-deficit models of developmental dyslexia, we designed and evaluated a remediation study with a multimodal and multi-componential protocol, which aimed at enhancing both the underlying cognitive processes (audio-phonological, visual-attentional and cross-modal) and the reading and spelling procedures, using several training programs and taking into account the child's semiological profile. We assessed the benefits of the protocol through a multicenter, longitudinal, randomized, crossover clinical trial comprising 3 stages over a total duration of 16 months. A total of 94 speech and language therapists and 144 dyslexic readers (aged around 8-13 years) participated in the study. In the first phase, participants were randomly assigned to 2 groups and received weekly speech and language therapy for 2 months without intensive training. In the second phase, in addition to weekly follow-up sessions with the speech therapist, participants received 3 types of intensive computer-based interventions for 2 months each.
The first 2 interventions targeted audio-phonological and visual-attentional processes (their order counterbalanced between the 2 groups) and were followed by a third intervention targeting cross-modal integration processes. The construction of the 3 training programs was based on the scientific literature, the expertise of the speech and language therapists, the patient's complaint (shared care decision) and the environmental context. In the third phase, intensive interventions were discontinued and weekly speech therapy consultations were continued for two months. At the end of the remediation protocol, the multimodal and multi-componential intensive intervention led to significant improvements in reading efficiency (Cohen's d = 2.3), reading comprehension (Cohen's d = 0.9) and spelling (Cohen's d = 0.78) compared with the weekly speech and language therapy of the first phase, regardless of the order of the interventions. Multiple-case analysis revealed that 52% of participants were free of reading disorder. In conclusion, our results show that an intensive intervention based on a multi-componential and multimodal training program produces major benefits in the treatment of developmental dyslexia. These findings support a curative (rather than compensatory) approach to remediation and open up a new avenue for developmental dyslexia treatment.
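The effect sizes reported in this abstract are Cohen's d values. For reference, a small sketch of the standard formula with a pooled standard deviation (the score samples below are invented, not the trial's data):

```python
from statistics import mean, variance


def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    # statistics.variance is the sample variance (denominator n - 1)
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5


# Invented post- vs pre-intervention reading scores
post = [55, 60, 62, 58, 65]
pre = [40, 45, 42, 44, 48]
d = cohens_d(post, pre)
```

By the usual conventions, d around 0.8 is a large effect, which puts the reported values of 0.78 to 2.3 in context.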
Gaguet, Laurent. "Attitudes mentales et planification en intelligence artificielle : modélisation d'un agent rationnel dans un environnement multi-agents". Clermont-Ferrand 2, 2000. http://www.theses.fr/2000CLF20023.
Cléder, Catherine. "Planification didactique et construction de l'objectif d'une session de travail individualisée : modélisation des connaissances et du raisonnement mis en jeu". Clermont-Ferrand 2, 2002. http://www.theses.fr/2002CLF20019.
Pélissier, Chrysta. "Fonctionnalités et méthodologie de conception d'un module de type ressource : application dans un environnement informatique d'aide à l'apprentissage de la lecture". PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2002. http://tel.archives-ouvertes.fr/tel-00661571.
Boussier, Jean-Marie. "Modélisation de comportements dans les systèmes dynamiques : Application à l'organisation et à la régulation de stationnement et de déplacement dans les Systèmes de Trafic Urbain". PhD thesis, Université de La Rochelle, 2007. http://tel.archives-ouvertes.fr/tel-00411272.
Faghihi, Usef. "Méthodes d'apprentissage inspirées de l'humain pour un tuteur cognitif artificiel". Mémoire, 2008. http://www.archipel.uqam.ca/1392/1/M10320.pdf.
Ziri, Oussama. "Classification de courriels au moyen de diverses méthodes d'apprentissage et conception d'un outil de préparation des données textuelles basé sur la programmation modulaire : PDTPM". Mémoire, 2013. http://www.archipel.uqam.ca/5679/1/M12851.pdf.