Academic literature on the topic 'Apprentissage automatique interprétable'
Dissertations / Theses on the topic "Apprentissage automatique interprétable"
Mita, Graziano. "Toward interpretable machine learning, with applications to large-scale industrial systems data." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS112.
The contributions presented in this work are two-fold. We first provide a general overview of explanations and interpretable machine learning, making connections with different fields, including sociology, psychology, and philosophy, and introducing a taxonomy of popular explainability approaches and evaluation methods. We subsequently focus on rule learning, a specific family of transparent models, and propose a novel rule-based classification approach based on monotone Boolean function synthesis: LIBRE. LIBRE is an ensemble method that combines the candidate rules learned by multiple bottom-up learners with a simple union, in order to obtain a final interpretable rule set. Our method overcomes most of the limitations of state-of-the-art competitors: it successfully deals with both balanced and imbalanced datasets, efficiently achieving superior performance and higher interpretability on real datasets. Interpretability of data representations constitutes the second broad contribution of this work. We restrict our attention to disentangled representation learning and, in particular, to VAE-based disentanglement methods that automatically learn representations consisting of semantically meaningful features. Recent contributions have demonstrated that disentanglement is impossible in purely unsupervised settings; nevertheless, incorporating inductive biases on models and data may overcome such limitations. We present a new disentanglement method, IDVAE, with theoretical guarantees on disentanglement, deriving from the use of an optimal exponential factorized prior conditionally dependent on auxiliary variables that complement input observations. We additionally propose a semi-supervised version of our method. Our experimental campaign on well-established datasets in the literature shows that IDVAE often beats its competitors according to several disentanglement metrics.
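The core idea attributed to LIBRE above, combining the candidate rules of several bottom-up learners with a simple union, can be sketched in a few lines. This is an illustrative toy, not the thesis's algorithm: the rule encoding, feature names, and data are invented for the example.

```python
# Illustrative sketch of a rule-union ensemble: each learner proposes
# candidate rules (conjunctions of binary conditions), and the final
# interpretable classifier is the union of all proposed rules.

def rule_matches(rule, x):
    """A rule is a set of (feature, value) conditions, all of which must hold."""
    return all(x[f] == v for f, v in rule)

def union_classifier(rule_sets, x):
    """Predict positive if any rule from any learner fires (a simple union)."""
    rules = {frozenset(r) for rs in rule_sets for r in rs}  # deduplicate rules
    return int(any(rule_matches(r, x) for r in rules))

# Two toy learners over binary features f0..f2.
learner_a = [[("f0", 1), ("f1", 1)]]
learner_b = [[("f2", 1)]]
print(union_classifier([learner_a, learner_b], {"f0": 1, "f1": 1, "f2": 0}))  # 1
print(union_classifier([learner_a, learner_b], {"f0": 0, "f1": 1, "f2": 0}))  # 0
```

The resulting model stays interpretable because each prediction can be traced back to the specific rule that fired.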
Condevaux, Charles. "Méthodes d'apprentissage automatique pour l'analyse de corpus jurisprudentiels." Thesis, Nîmes, 2021. http://www.theses.fr/2021NIME0008.
Judicial decisions contain deterministic information (whose content recurs from one decision to another) and random (probabilistic) information. Both types of information come into play in a judge's decision-making process. The former can reinforce the decision insofar as deterministic information is a recurring and well-known element of case law (i.e., past case outcomes). The latter, related to rare or exceptional characteristics, can make decision-making difficult, since it can modify the case law. The purpose of this thesis is to propose a deep learning model that would highlight these two types of information and study their impact (contribution) on the judge's decision-making process. The objective is to analyze similar decisions in order to highlight random and deterministic information in a body of decisions and quantify their importance in the judgment process.
Guillemé, Maël. "Extraction de connaissances interprétables dans des séries temporelles." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S102.
Energiency is a company that sells a platform that allows manufacturers to analyze their energy consumption data, represented in the form of time series. This platform integrates machine learning models to meet customer needs. The application of such models to time series encounters two problems: on the one hand, some classical machine learning approaches were designed for tabular data and must be adapted to time series; on the other hand, the results of some approaches are difficult for end users to understand. In the first part, we adapt a method to search for occurrences of temporal rules in time series from machines and industrial infrastructures. A temporal rule captures successional relationships between behaviors in time series. In industrial series, due to the presence of many external factors, these regular behaviors can be disrupted. Current methods for searching for the occurrences of a rule use a distance measure to assess the similarity between sub-series. However, these measures are not suitable for assessing the similarity of distorted series such as those found in industrial settings. The first contribution of this thesis is a method for searching for occurrences of temporal rules that captures this variability in industrial time series. For this purpose, the method uses elastic distance measures capable of assessing the similarity between slightly deformed time series. The second part of the thesis is devoted to the interpretability of time series classification methods, i.e., the ability of a classifier to return explanations for its results. These explanations must be understandable by a human. Classification is the task of associating a time series with a category. For an end user inclined to make decisions based on a classifier's results, understanding the rationale behind those results is of great importance; otherwise, it amounts to placing blind confidence in the classifier.
The second contribution of this thesis is an interpretable time series classifier that can directly provide explanations for its results. This classifier uses local information in time series to discriminate between them. The third and last contribution of this thesis is a method for explaining a posteriori any result of any classifier. We carried out a user study to evaluate the interpretability of our method.
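Elastic distance measures of the kind the abstract refers to include dynamic time warping (DTW). The textbook DTW sketch below (illustrative, not the thesis's exact measure) shows why warping absorbs small temporal deformations that a point-wise comparison would penalize.

```python
# Minimal dynamic time warping (DTW): an elastic distance that aligns
# two series by warping the time axis, so a shifted copy scores close.

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignment moves.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A time-shifted copy of a peak is identical under DTW,
# even though a point-wise comparison would find it far.
s = [0, 0, 1, 2, 1, 0]
t = [0, 1, 2, 1, 0, 0]
print(dtw(s, t))  # 0.0: the warp absorbs the shift
```

In practice, industrial applications typically use a windowed or lower-bounded variant of DTW to keep the quadratic cost manageable over long series.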
Guillaume, Serge. "Induction de règles floues interprétables." Toulouse, INSA, 2001. http://www.theses.fr/2001ISAT0021.
This report deals with interpretable fuzzy rule induction from data for human-computer cooperation purposes. A review of fuzzy rule induction methods shows that they can be grouped into three families. Their comparison highlights the fact that interpretability is not guaranteed. The central part of our work is a new fuzzy rule induction method. It aims to fulfill three interpretability conditions: readable fuzzy partitions, a number of rules as small as possible, and incomplete rules. This is achieved through a three-step procedure: generating a family of fuzzy partitions for each input variable, building an accurate fuzzy inference system, and simplifying the rule base. The procedure is based on original concepts such as a distance metric suitable for fuzzy partitioning and the input context defined by a set of rules. We introduce coverage- and heterogeneity-related indices to guide the procedure, complementary to a numerical performance index. The method is first validated using well-known data and then applied to decision making in a complex system. This application aims to extract winemaking rules that enhance the color of red wine.
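A "readable fuzzy partition" of the kind named as an interpretability condition above is commonly a strong partition of triangular membership functions whose degrees sum to one everywhere. The sketch below is a generic illustration with invented breakpoints, not the thesis's construction.

```python
# A strong fuzzy partition of [0, 10] into three labelled sets
# ("low" / "medium" / "high"); memberships sum to 1 at every point,
# which keeps each rule premise readable by a human.

def triangular(x, left, peak, right):
    """Triangular membership: 1 at the peak, 0 outside [left, right]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def partition(x):
    """Membership degrees of x in the three labelled fuzzy sets."""
    low = max(0.0, 1.0 - x / 5.0) if x <= 5 else 0.0
    med = triangular(x, 0, 5, 10)
    high = max(0.0, (x - 5.0) / 5.0) if x >= 5 else 0.0
    return {"low": low, "medium": med, "high": high}

m = partition(2.5)
print(m)                # low 0.5, medium 0.5, high 0.0
print(sum(m.values()))  # 1.0: the partition sums to one
```

Because every input activates at most two adjacent labels, a rule such as "if temperature is low then ..." maps directly onto terms an expert can read.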
Jouffroy, Emma. "Développement de modèles non supervisés pour l'obtention de représentations latentes interprétables d'images." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0050.
The Laser Megajoule (LMJ) is a large research device that simulates pressure and temperature conditions similar to those found in stars. During experiments, diagnostics are guided into an experimental chamber for precise positioning. To minimize the risks associated with human error in such an experimental context, the automation of an anti-collision system is envisaged. This involves the design of machine learning tools offering reliable decision levels based on the interpretation of images from cameras positioned in the chamber. Our research focuses on probabilistic generative neural methods, in particular variational auto-encoders (VAEs). The choice of this class of models is linked to the fact that it potentially enables access to a latent space directly linked to the properties of the objects making up the observed scene. The major challenge is to study the design of deep network models that effectively enable access to such a fully informative and interpretable representation, with a view to system reliability. The probabilistic formalism intrinsic to VAEs allows us, if we can trace back to such a representation, to access an analysis of the uncertainties of the encoded information.
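Two standard ingredients of the VAE formalism invoked above are the reparameterization trick for sampling the latent code and the closed-form KL divergence between the diagonal-Gaussian encoder and a standard-normal prior. The sketch below illustrates only these generic pieces; dimensions and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu = np.zeros(4)
log_var = np.zeros(4)  # unit variance: encoder equals the prior
z = reparameterize(mu, log_var)
print(z.shape)                             # (4,)
print(kl_to_standard_normal(mu, log_var))  # 0.0
```

The same per-dimension KL term is what makes latent uncertainty analysable: each latent coordinate carries its own mean and variance, which is the property the anti-collision application wants to exploit.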
Nicolaï, Alice. "Interpretable representations of human biosignals for individual longitudinal follow-up : application to postural control follow-up in medical consultation." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5224.
Individual longitudinal follow-up, which aims at following the evolution of an individual's state in time, is at the heart of numerous public health issues, particularly in the field of medical prevention. The increasing availability of non-invasive sensors that record various biosignals (e.g., blood glucose, heart rate, eye movements) has encouraged the quantification of human physiology, sensorimotricity, and behavior with the purpose of deriving markers for individual follow-up. This objective raises, however, several challenges related to signal modelling. Indeed, this particular type of data is complex to interpret and, a fortiori, to compare across time. This thesis studies the issue of extracting interpretable representations from biosignals through the problem of balance control follow-up in medical consultation, which has crucial implications for the prevention of falls and frailty in older adults. We focus in particular on the use of force platforms, which are commonly used to record posturography measures and can be easily deployed in the clinical setting thanks to the development of low-cost platforms such as the Wii Balance Board. For this particular application, we investigate the pros and cons of using feature extraction methods or, alternatively, searching for a generative model of the trajectories. Our contributions include, first, the review and study of a wide range of state-of-the-art variables used to assess fall risk in older adults, derived from the center of pressure (CoP) trajectory. This signal is commonly analyzed in the clinical literature to infer information about balance control. Secondly, we develop a new generative model, "Total Recall", based on a previous stochastic model of the CoP, which has been shown to reproduce several characteristics of the trajectories but does not integrate the dynamic between the CoP and the center of mass (CoM), a dynamic considered to be central in postural control.
We also review and compare the main methods of estimating the CoM in quiet standing and conclude that it is possible to obtain an accurate estimation using the Wii Balance Board. The results show the potential relevance of the Total Recall model for the longitudinal follow-up of postural control in a clinical setting. Overall, we highlight the benefit of using generative models, while pointing out the complementarity of feature-based and generative-based approaches. Furthermore, this thesis introduces representations learned on labeled data and tailored to a particular follow-up objective. We propose new classification algorithms that take advantage of a priori knowledge to improve performance while maintaining complete interpretability. Our approach relies on bagging-based algorithms that are intrinsically interpretable, and on a model-space regularization based on medical heuristics. The method is applied to the quantification of fall risk and frailty. This dissertation argues for the importance of researching interpretable methods, designed for specific applications, that incorporate priors based on expert knowledge. This approach shows positive results for the integration of the selected biosignals and statistical learning methods into the longitudinal follow-up of postural control. The results encourage the continuation of this work, the further development of the methods, especially in the context of other types of follow-up such as continuous monitoring, and the extension to the study of new biosignals.
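As a toy illustration of the general idea of bagging intrinsically interpretable base learners (the thesis's actual algorithms and medical heuristics are not reproduced here), one can bag one-feature threshold rules, each of which remains readable on its own. Features, thresholds, and data below are invented.

```python
import random

def fit_stump(data):
    """Pick the (feature, threshold) rule with the best training accuracy."""
    best = None
    for f in range(len(data[0][0])):
        for thr in sorted({x[f] for x, _ in data}):
            acc = sum((x[f] >= thr) == y for x, y in data) / len(data)
            if best is None or acc > best[0]:
                best = (acc, f, thr)
    _, f, thr = best
    return f, thr

def bagged_predict(stumps, x):
    """Majority vote over the interpretable one-feature rules."""
    votes = sum(x[f] >= thr for f, thr in stumps)
    return int(votes * 2 > len(stumps))

random.seed(0)
data = [((0.1, 0.9), 0), ((0.2, 0.8), 0), ((0.9, 0.3), 1), ((0.8, 0.2), 1)]
# Bagging: fit each stump on a bootstrap resample of the training data.
stumps = [fit_stump(random.choices(data, k=len(data))) for _ in range(5)]
print(bagged_predict(stumps, (0.95, 0.1)))  # 1
print(bagged_predict(stumps, (0.05, 0.9)))  # 0
```

Each vote can be stated as a sentence ("feature 0 exceeds 0.8"), which is what preserves interpretability while the ensemble improves robustness; a model-space regularization would additionally constrain which rules are admissible.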
Avalos, Marta. "Modèles additifs parcimonieux." Phd thesis, Université de Technologie de Compiègne, 2004. http://tel.archives-ouvertes.fr/tel-00008802.