Selection of scientific literature on the topic "Apprentissage Automatique Explicable"
Browse the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Apprentissage Automatique Explicable".
Dissertations on the topic "Apprentissage Automatique Explicable"
Mita, Graziano. "Toward interpretable machine learning, with applications to large-scale industrial systems data". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS112.
The contributions presented in this work are two-fold. We first provide a general overview of explanations and interpretable machine learning, making connections with different fields, including sociology, psychology, and philosophy, and introducing a taxonomy of popular explainability approaches and evaluation methods. We subsequently focus on rule learning, a specific family of transparent models, and propose a novel rule-based classification approach based on monotone Boolean function synthesis: LIBRE. LIBRE is an ensemble method that combines the candidate rules learned by multiple bottom-up learners with a simple union, in order to obtain a final interpretable rule set. Our method overcomes most of the limitations of state-of-the-art competitors: it successfully deals with both balanced and imbalanced datasets, efficiently achieving superior performance and higher interpretability on real datasets. Interpretability of data representations constitutes the second broad contribution of this work. We restrict our attention to disentangled representation learning and, in particular, to VAE-based disentanglement methods that automatically learn representations consisting of semantically meaningful features. Recent contributions have demonstrated that disentanglement is impossible in purely unsupervised settings. Nevertheless, incorporating inductive biases on models and data may overcome such limitations. We present a new disentanglement method, IDVAE, with theoretical guarantees on disentanglement, deriving from the employment of an optimal exponential factorized prior conditionally dependent on auxiliary variables complementing input observations. We additionally propose a semi-supervised version of our method. Our experimental campaign on well-established datasets in the literature shows that IDVAE often beats its competitors according to several disentanglement metrics.
Chamma, Ahmad. "Statistical interpretation of high-dimensional complex prediction models for biomedical data". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG028.
Modern large health datasets represent population characteristics in multiple modalities, including brain imaging and socio-demographic data. These large cohorts make it possible to predict and understand individual outcomes, leading to promising results in the epidemiological context of forecasting/predicting the occurrence of diseases, health outcomes, or other events of interest. As data collection expands into different scientific domains, such as brain imaging and genomic analysis, variables are related by complex, possibly non-linear dependencies, along with high degrees of correlation. As a result, popular models such as linear and tree-based techniques are no longer effective in such high-dimensional settings. Powerful non-linear machine learning algorithms, such as Random Forests (RFs) and Deep Neural Networks (DNNs), have become important tools for characterizing inter-individual differences and predicting biomedical outcomes, such as brain age. Explaining the decision process of machine learning algorithms is crucial both to improve the performance of a model and to aid human understanding. This can be achieved by assessing the importance of variables. Traditionally, scientists have favored simple, transparent models such as linear regression, where the importance of variables can be easily measured by coefficients. However, with the use of more advanced methods, direct access to the internal structure has become limited and/or uninterpretable from a human perspective. As a result, these methods are often referred to as "black box" methods. Standard approaches based on Permutation Importance (PI) assess the importance of a variable by measuring the decrease in the loss score when the variable of interest is replaced by its permuted version.
While these approaches increase the transparency of black box models and provide statistical validity, they can produce unreliable importance assessments when variables are correlated. The goal of this work is to overcome the limitations of standard permutation importance by integrating conditional schemes. Therefore, we investigate two model-agnostic frameworks, Conditional Permutation Importance (CPI) and Block-Based Conditional Permutation Importance (BCPI), which effectively account for correlations between covariates and overcome the limitations of PI. We present two new algorithms designed to handle situations with correlated variables, whether grouped or ungrouped. Our theoretical and empirical results show that CPI provides computationally efficient and theoretically sound methods for evaluating individual variables. The CPI framework guarantees type-I error control and produces a concise selection of significant variables in large datasets. BCPI presents a strategy for managing both individual and grouped variables. It integrates statistical clustering and uses prior knowledge of grouping to adapt the DNN architecture using stacking techniques. This framework is robust and maintains type-I error control even in scenarios with highly correlated groups of variables. It performs well on various benchmarks. Empirical evaluations of our methods on several biomedical datasets showed good face validity. Our methods have also been applied to multimodal brain data in addition to socio-demographics, paving the way for new discoveries and advances in the targeted areas. The CPI and BCPI frameworks are proposed as replacements for conventional permutation-based methods. They provide improved interpretability and reliability in estimating variable importance for high-performance machine learning models.
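The standard permutation-importance baseline that this thesis improves upon can be sketched in a few lines; the model, data, and loss below are illustrative toy choices, not the thesis's own code.

```python
import numpy as np

def permutation_importance(model, X, y, loss, n_repeats=5, seed=0):
    """Standard PI: the importance of feature j is the mean increase in the
    loss when column j is randomly permuted, breaking its link with the
    outcome. (CPI instead permutes only the part of feature j that is not
    explained by the other covariates, which is what restores validity
    when features are correlated.)"""
    rng = np.random.default_rng(seed)
    base = loss(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            increases.append(loss(y, model.predict(Xp)) - base)
        importances[j] = float(np.mean(increases))
    return importances

# Toy check with a hypothetical model: the outcome depends only on the first
# feature, so permuting it should raise the loss, while permuting the second
# feature leaves the predictions untouched.
class ToyModel:
    def predict(self, X):
        return 3.0 * X[:, 0]

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0]
mse = lambda y_true, y_pred: float(np.mean((y_true - y_pred) ** 2))
imp = permutation_importance(ToyModel(), X, y, mse)
```

The failure mode motivating CPI does not appear in this toy example: when two features are strongly correlated, marginal permutation produces unrealistic data points and spreads importance across the correlated group, which is exactly what the conditional scheme corrects.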
Afchar, Darius. "Interpretable Music Recommender Systems". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS608.
‘‘Why do they keep recommending me this music track?’’ ‘‘Why did our system recommend these tracks to users?’’ Nowadays, streaming platforms are the most common way to listen to recorded music. Still, music recommendations — at the heart of these platforms — are not an easy feat. Sometimes, both users and engineers may be equally puzzled about the behaviour of a music recommendation system (MRS). MRSs have been successfully employed to help explore catalogues that may be as large as tens of millions of music tracks. Built and optimised for accuracy, real-world MRSs often end up being quite complex. They may further rely on a range of interconnected modules that, for instance, analyse audio signals, retrieve metadata about albums and artists, collect and aggregate user feedback on the music service, and compute item similarities with collaborative filtering. All this complexity hinders the ability to explain recommendations and, more broadly, explain the system. Yet, explanations are essential for users to foster a long-term engagement with a system that they can understand (and forgive), and for system owners to rationalise failures and improve said system. Interpretability may also be needed to check the fairness of a decision or can be framed as a means to control the recommendations better. Moreover, we could also recursively question: Why does an explanation method explain in a certain way? Is this explanation relevant? What could be a better explanation? All these questions relate to the interpretability of MRSs. In the first half of this thesis, we explore the many flavours that interpretability can have in various recommendation tasks.
Indeed, since there is not just one recommendation task but many (e.g., sequential recommendation, playlist continuation, artist similarity), as well as many angles through which music may be represented and processed (e.g., metadata, audio signals, embeddings computed from listening patterns), there are as many settings that require specific adjustments to make explanations relevant. A topic like this one can never be exhaustively addressed. This study was guided along some of the mentioned modalities of musical objects: interpreting implicit user logs, item features, audio signals and similarity embeddings. Our contribution includes several novel methods for eXplainable Artificial Intelligence (XAI) and several theoretical results, shedding new light on our understanding of past methods. Nevertheless, similar to how recommendations may not be interpretable, explanations about them may themselves lack interpretability and justifications. Therefore, in the second half of this thesis, we found it essential to take a step back from the rationale of ML and try to address a (perhaps surprisingly) understudied question in XAI: ‘‘What is interpretability?’’ Introducing concepts from philosophy and social sciences, we stress that there is a misalignment in the way explanations from XAI are generated and unfold versus how humans actually explain. We highlight that current research tends to rely too much on intuitions or hasty reduction of complex realities into convenient mathematical terms, which leads to the canonisation of assumptions into questionable standards (e.g., sparsity entails interpretability). We have treated this part as a comprehensive tutorial addressed to ML researchers to better ground their knowledge of explanations with a precise vocabulary and a broader perspective. We provide practical advice and highlight less popular branches of XAI better aligned with human cognition. 
Of course, we also reflect back and recontextualise our methods proposed in the previous part. Overall, this enables us to formulate some perspective for our field of XAI as a whole, including its more critical and promising next steps as well as its shortcomings to overcome.
Ketata, Firas. "Risk prediction of endocrine diseases using data science and explainable artificial intelligence". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. https://theses.hal.science/tel-04773988.
This thesis aims to predict the risk of endocrine diseases using data science and machine learning, leveraging this risk identification to assist doctors in managing financial resources, personalizing the treatment of carbohydrate anomalies in patients with beta-thalassemia major, and screening for metabolic syndrome in adolescents. An explainability study of the predictions was developed in this thesis to evaluate the reliability of predicting glucose anomalies and to reduce the financial burden associated with screening for metabolic syndrome. Finally, in response to the observed limitations of explainable machine learning, we propose an approach to improve and evaluate this explainability, which we test on several datasets.
El Qadi El Haouari, Ayoub. "An EXplainable Artificial Intelligence Credit Rating System". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS486.
Over the past few years, the trade finance gap has surged to an alarming 1.5 trillion dollars, underscoring a growing crisis in global commerce. This gap is particularly detrimental to small and medium-sized enterprises (SMEs), which often find it difficult to access trade finance. Traditional credit scoring systems, which are the backbone of trade finance, are not always tailored to adequately assess the creditworthiness of SMEs. The term credit scoring stands for the methods and techniques used to evaluate the creditworthiness of individuals or businesses. The score generated is then used by financial institutions to make decisions on loan approvals, interest rates, and credit limits. Credit scoring presents several characteristics that make it a challenging task. First, the lack of explainability in complex machine learning models often results in less acceptance of credit assessments, particularly among stakeholders who require a transparent decision-making process. This opacity can be an obstacle to the widespread adoption of advanced scoring techniques. Another significant challenge is the variability in data availability across countries and the often incomplete financial records of SMEs, which make it difficult to develop universally applicable models. In this thesis, we initially tackled the issue of explainability by employing state-of-the-art techniques in Explainable Artificial Intelligence (XAI). We introduced a novel strategy that involved comparing the explanations generated by machine learning models with the criteria used by credit experts. This comparative analysis revealed a divergence between the model's reasoning and the expert's judgment, underscoring the necessity of incorporating expert criteria into the training phase of the model. The findings suggest that aligning machine-generated explanations with human expertise could be a pivotal step in enhancing the model's acceptance and trustworthiness.
Subsequently, we shifted our focus to address the challenge of sparse or incomplete financial data. We incorporated textual credit assessments into the credit scoring model using cutting-edge Natural Language Processing (NLP) techniques. Our results demonstrated that models trained with both financial data and textual credit assessments outperformed those relying solely on financial data. Moreover, we showed that our approach could effectively generate credit scores using only textual risk assessments, thereby offering a viable solution for scenarios where traditional financial metrics are unavailable or insufficient.
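The fusion of textual assessments with tabular financial data described above can be sketched minimally. The hashed bag-of-words encoder below is a hypothetical stand-in for the NLP techniques the thesis uses, and all function names and dimensions are illustrative assumptions.

```python
import numpy as np

def embed_text(assessment, dim=16):
    """Hypothetical stand-in for an NLP encoder: hash each token into a
    fixed-size bag-of-words vector. A real system would use a pretrained
    language model to embed the credit assessment instead."""
    v = np.zeros(dim)
    for token in assessment.lower().split():
        v[hash(token) % dim] += 1.0
    return v

def build_features(financial_ratios, assessment, dim=16):
    # Concatenate tabular financial features with the text representation;
    # missing financial values are zero-filled, so the downstream scorer can
    # lean on the textual assessment when financial records are incomplete.
    fin = np.nan_to_num(np.asarray(financial_ratios, dtype=float))
    return np.concatenate([fin, embed_text(assessment, dim)])

# One financial ratio is missing (NaN), mirroring the sparse-data scenario.
features = build_features([1.8, float("nan")], "stable cash flow")
```

In the text-only scenario the abstract mentions, the same pipeline would simply pass an empty financial vector and score from the text representation alone.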