Scientific literature on the topic "Explicable Machine Learning"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles
Browse thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Explicable Machine Learning".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.
Journal articles on the topic "Explicable Machine Learning"
Fomicheva, S. G. "Influence of Attack Indicator Ranking on the Quality of Machine Learning Models in Agent-Based Continuous Authentication Systems." T-Comm 17, no. 8 (2023): 45–55. http://dx.doi.org/10.36724/2072-8735-2023-17-8-45-55.
Abrahamsen, Nils-Gunnar Birkeland, Emil Nylén-Forthun, Mats Møller, Petter Eilif de Lange, and Morten Risstad. "Financial Distress Prediction in the Nordics: Early Warnings from Machine Learning Models." Journal of Risk and Financial Management 17, no. 10 (September 27, 2024): 432. http://dx.doi.org/10.3390/jrfm17100432.
Fomicheva, Svetlana, and Sergey Bezzateev. "Modification of the Berlekamp-Massey algorithm for explicable knowledge extraction by SIEM-agents." Journal of Physics: Conference Series 2373, no. 5 (December 1, 2022): 052033. http://dx.doi.org/10.1088/1742-6596/2373/5/052033.
Alharbi, Abdulrahman, Ivan Petrunin, and Dimitrios Panagiotakopoulos. "Assuring Safe and Efficient Operation of UAV Using Explainable Machine Learning." Drones 7, no. 5 (May 19, 2023): 327. http://dx.doi.org/10.3390/drones7050327.
Fujii, Keisuke. "Understanding of social behaviour in human collective motions with non-trivial rule of control." Impact 2019, no. 10 (December 30, 2019): 84–86. http://dx.doi.org/10.21820/23987073.2019.10.84.
Wang, Chen, Lin Liu, Chengcheng Xu, and Weitao Lv. "Predicting Future Driving Risk of Crash-Involved Drivers Based on a Systematic Machine Learning Framework." International Journal of Environmental Research and Public Health 16, no. 3 (January 25, 2019): 334. http://dx.doi.org/10.3390/ijerph16030334.
Valladares-Rodríguez, Sonia, Manuel J. Fernández-Iglesias, Luis E. Anido-Rifón, and Moisés Pacheco-Lorenzo. "Evaluation of the Predictive Ability and User Acceptance of Panoramix 2.0, an AI-Based E-Health Tool for the Detection of Cognitive Impairment." Electronics 11, no. 21 (October 22, 2022): 3424. http://dx.doi.org/10.3390/electronics11213424.
Hermitaño Castro, Juler Anderson. "Aplicación de Machine Learning en la Gestión de Riesgo de Crédito Financiero: Una revisión sistemática." Interfases, no. 015 (August 11, 2022): e5898. http://dx.doi.org/10.26439/interfases2022.n015.5898.
Umar, Muhammad, Ashish Shiwlani, Fiza Saeed, Ahsan Ahmad, Masoomi Hifazat Ali Shah, and Anoosha Tahir. "Role of Deep Learning in Diagnosis, Treatment, and Prognosis of Oncological Conditions." International Journal of Membrane Science and Technology 10, no. 5 (November 15, 2023): 1059–71. http://dx.doi.org/10.15379/ijmst.v10i5.3695.
Valdivieso-Ros, Carmen, Francisco Alonso-Sarria, and Francisco Gomariz-Castillo. "Effect of the Synergetic Use of Sentinel-1, Sentinel-2, LiDAR and Derived Data in Land Cover Classification of a Semiarid Mediterranean Area Using Machine Learning Algorithms." Remote Sensing 15, no. 2 (January 5, 2023): 312. http://dx.doi.org/10.3390/rs15020312.
Texte intégralThèses sur le sujet "Explicable Machine Learning"
Mita, Graziano. "Toward interpretable machine learning, with applications to large-scale industrial systems data." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS112.
The contributions presented in this work are two-fold. We first provide a general overview of explanations and interpretable machine learning, making connections with different fields, including sociology, psychology, and philosophy, and introducing a taxonomy of popular explainability approaches and evaluation methods. We subsequently focus on rule learning, a specific family of transparent models, and propose a novel rule-based classification approach based on monotone Boolean function synthesis: LIBRE. LIBRE is an ensemble method that combines the candidate rules learned by multiple bottom-up learners with a simple union, in order to obtain a final interpretable rule set. Our method overcomes most of the limitations of state-of-the-art competitors: it successfully deals with both balanced and imbalanced datasets, efficiently achieving superior performance and higher interpretability on real datasets. Interpretability of data representations constitutes the second broad contribution of this work. We restrict our attention to disentangled representation learning and, in particular, to VAE-based disentanglement methods that automatically learn representations consisting of semantically meaningful features. Recent contributions have demonstrated that disentanglement is impossible in purely unsupervised settings. Nevertheless, incorporating inductive biases on models and data may overcome such limitations. We present a new disentanglement method, IDVAE, with theoretical guarantees on disentanglement, deriving from the employment of an optimal exponential factorized prior, conditionally dependent on auxiliary variables complementing input observations. We additionally propose a semi-supervised version of our method. Our experimental campaign on well-established datasets in the literature shows that IDVAE often beats its competitors according to several disentanglement metrics.
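As a rough illustration of the "union of candidate rules" idea summarised in this abstract (not the authors' LIBRE implementation, which synthesises monotone Boolean functions; the rules, feature names, and thresholds below are invented), candidate rules from several bottom-up learners can be merged and applied as follows:

```python
# Merge candidate rules from several bottom-up learners with a set union and
# classify a record as positive when any retained rule fires.  Rules are
# hand-written for the example; LIBRE itself learns them from data.
Rule = frozenset  # a rule is a frozenset of (feature, operator, threshold) conditions

_OPS = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}

def rule_fires(rule, record):
    """A conjunctive rule fires when every one of its conditions holds for the record."""
    return all(_OPS[op](record[feat], thr) for feat, op, thr in rule)

def predict(rule_set, record):
    """Disjunction of conjunctive rules: positive if any rule fires."""
    return int(any(rule_fires(r, record) for r in rule_set))

# Candidate rules that two hypothetical bottom-up learners might return.
learner_a = {Rule({("income", "<", 20_000), ("debt_ratio", ">=", 0.6)})}
learner_b = {Rule({("late_payments", ">=", 3)}),
             Rule({("income", "<", 20_000), ("debt_ratio", ">=", 0.6)})}  # same rule again

final_rules = learner_a | learner_b  # the "simple union"; duplicate rules collapse
record = {"income": 15_000, "debt_ratio": 0.7, "late_payments": 1}
print(len(final_rules), "rules kept;", "prediction =", predict(final_rules, record))
```

Because the final model is just a short list of readable if-then rules, it stays interpretable even though it was assembled from several learners.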
El Qadi El Haouari, Ayoub. "An EXplainable Artificial Intelligence Credit Rating System." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS486.
Over the past few years, the trade finance gap has surged to an alarming 1.5 trillion dollars, underscoring a growing crisis in global commerce. This gap is particularly detrimental to small and medium-sized enterprises (SMEs), which often find it difficult to access trade finance. Traditional credit scoring systems, which are the backbone of trade finance, are not always tailored to assess the creditworthiness of SMEs adequately. The term credit scoring stands for the methods and techniques used to evaluate the creditworthiness of individuals or businesses. The score generated is then used by financial institutions to make decisions on loan approvals, interest rates, and credit limits. Credit scoring presents several characteristics that make it a challenging task. First, the lack of explainability in complex machine learning models often results in lower acceptance of credit assessments, particularly among stakeholders who require a transparent decision-making process. This opacity can be an obstacle to the widespread adoption of advanced scoring techniques. Another significant challenge is the variability in data availability across countries and the often incomplete financial records of SMEs, which make it difficult to develop universally applicable models. In this thesis, we initially tackled the issue of explainability by employing state-of-the-art techniques in Explainable Artificial Intelligence (XAI). We introduced a novel strategy that involved comparing the explanations generated by machine learning models with the criteria used by credit experts. This comparative analysis revealed a divergence between the model's reasoning and the experts' judgment, underscoring the necessity of incorporating expert criteria into the training phase of the model. The findings suggest that aligning machine-generated explanations with human expertise could be a pivotal step in enhancing the model's acceptance and trustworthiness. Subsequently, we shifted our focus to the challenge of sparse or incomplete financial data. We incorporated textual credit assessments into the credit scoring model using cutting-edge Natural Language Processing (NLP) techniques. Our results demonstrated that models trained with both financial data and textual credit assessments outperformed those relying solely on financial data. Moreover, we showed that our approach could effectively generate credit scores using only textual risk assessments, thereby offering a viable solution for scenarios where traditional financial metrics are unavailable or insufficient.
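A minimal sketch of the comparison described above, between model explanations and expert criteria, might look as follows. It is not the thesis's actual pipeline: the data are synthetic, the feature names and the expert list are invented, and scikit-learn's permutation importance stands in for whichever XAI explainer the author used (e.g., SHAP).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["debt_ratio", "revenue_growth", "late_payments", "sector_risk", "company_age"]
X = rng.normal(size=(500, len(features)))
# Synthetic target driven mostly by the first three columns.
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Model-agnostic importance: score drop when each feature is permuted.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
model_ranking = [features[i] for i in np.argsort(result.importances_mean)[::-1]]

# Criteria a hypothetical credit analyst says they rely on.
expert_criteria = {"debt_ratio", "late_payments", "liquidity"}

top3 = set(model_ranking[:3])
print("model top-3 features:", model_ranking[:3])
print("overlap with expert criteria:", sorted(top3 & expert_criteria))
```

The size of that overlap (or a rank correlation between the two lists) is one simple way to quantify the divergence between model reasoning and expert judgment that the abstract mentions.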
Chamma, Ahmad. "Statistical interpretation of high-dimensional complex prediction models for biomedical data." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG028.
Modern large health datasets represent population characteristics in multiple modalities, including brain imaging and socio-demographic data. These large cohorts make it possible to predict and understand individual outcomes, leading to promising results in the epidemiological context of forecasting/predicting the occurrence of diseases, health outcomes, or other events of interest. As data collection expands into different scientific domains, such as brain imaging and genomic analysis, variables are related by complex, possibly non-linear dependencies, along with high degrees of correlation. As a result, popular models such as linear and tree-based techniques are no longer effective in such high-dimensional settings. Powerful non-linear machine learning algorithms, such as Random Forests (RFs) and Deep Neural Networks (DNNs), have become important tools for characterizing inter-individual differences and predicting biomedical outcomes, such as brain age. Explaining the decision process of machine learning algorithms is crucial both to improve the performance of a model and to aid human understanding. This can be achieved by assessing the importance of variables. Traditionally, scientists have favored simple, transparent models such as linear regression, where the importance of variables can be easily measured by coefficients. However, with the use of more advanced methods, direct access to the internal structure has become limited and/or uninterpretable from a human perspective. As a result, these methods are often referred to as "black box" methods. Standard approaches based on Permutation Importance (PI) assess the importance of a variable by measuring the decrease in the loss score when the variable of interest is replaced by its permuted version. While these approaches increase the transparency of black-box models and provide statistical validity, they can produce unreliable importance assessments when variables are correlated. The goal of this work is to overcome the limitations of standard permutation importance by integrating conditional schemes. Therefore, we investigate two model-agnostic frameworks, Conditional Permutation Importance (CPI) and Block-Based Conditional Permutation Importance (BCPI), which effectively account for correlations between covariates and overcome the limitations of PI. We present two new algorithms designed to handle situations with correlated variables, whether grouped or ungrouped. Our theoretical and empirical results show that CPI provides computationally efficient and theoretically sound methods for evaluating individual variables. The CPI framework guarantees type-I error control and produces a concise selection of significant variables in large datasets. BCPI presents a strategy for managing both individual and grouped variables. It integrates statistical clustering and uses prior knowledge of grouping to adapt the DNN architecture using stacking techniques. This framework is robust and maintains type-I error control even in scenarios with highly correlated groups of variables, and it performs well on various benchmarks. Empirical evaluations of our methods on several biomedical datasets showed good face validity. Our methods have also been applied to multimodal brain data in addition to socio-demographics, paving the way for new discoveries and advances in the targeted areas. The CPI and BCPI frameworks are proposed as replacements for conventional permutation-based methods; they provide improved interpretability and reliability in estimating variable importance for high-performance machine learning models.
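The following simplified sketch contrasts standard Permutation Importance with a conditional scheme in the spirit of the CPI described above. It is an illustration rather than the authors' exact CPI/BCPI algorithms: the data are synthetic, and a linear model of each covariate given the others stands in for whatever conditional sampler the thesis uses.

```python
# Standard PI permutes a feature outright, which breaks its correlation with the
# other covariates and can inflate the importance of a feature the outcome does
# not actually depend on.  The conditional variant only shuffles the part of the
# feature that cannot be predicted from the remaining covariates (its residual).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.3 * rng.normal(size=n)     # strongly correlated with x1
y = x1 + 0.1 * rng.normal(size=n)             # only x1 truly drives the outcome
X = np.column_stack([x1, x2])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
baseline = r2_score(y_te, model.predict(X_te))

def standard_pi(j):
    Xp = X_te.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])                  # ignores correlation
    return baseline - r2_score(y_te, model.predict(Xp))

def conditional_pi(j):
    others_tr, others_te = np.delete(X_tr, j, axis=1), np.delete(X_te, j, axis=1)
    cond = LinearRegression().fit(others_tr, X_tr[:, j])  # model x_j from the rest
    resid = X_te[:, j] - cond.predict(others_te)
    Xp = X_te.copy()
    Xp[:, j] = cond.predict(others_te) + rng.permutation(resid)  # shuffle residual only
    return baseline - r2_score(y_te, model.predict(Xp))

for j, name in enumerate(["x1", "x2"]):
    print(f"{name}: PI={standard_pi(j):.3f}  CPI={conditional_pi(j):.3f}")
```

On data like this, the irrelevant but correlated feature x2 tends to receive a non-negligible standard PI score while its conditional importance stays close to zero, which is the unreliability the abstract refers to.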
Afchar, Darius. "Interpretable Music Recommender Systems." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS608.
"Why do they keep recommending this music track to me?" "Why did our system recommend these tracks to users?" Nowadays, streaming platforms are the most common way to listen to recorded music. Still, music recommendations, at the heart of these platforms, are not an easy feat. Sometimes, both users and engineers may be equally puzzled about the behaviour of a music recommendation system (MRS). MRSs have been successfully employed to help explore catalogues that may be as large as tens of millions of music tracks. Built and optimised for accuracy, real-world MRSs often end up being quite complex. They may further rely on a range of interconnected modules that, for instance, analyse audio signals, retrieve metadata about albums and artists, collect and aggregate user feedback on the music service, and compute item similarities with collaborative filtering. All this complexity hinders the ability to explain recommendations and, more broadly, to explain the system. Yet, explanations are essential for users to foster a long-term engagement with a system that they can understand (and forgive), and for system owners to rationalise failures and improve said system. Interpretability may also be needed to check the fairness of a decision, or can be framed as a means to better control the recommendations. Moreover, we could also ask recursively: Why does an explanation method explain in a certain way? Is this explanation relevant? What could be a better explanation? All these questions relate to the interpretability of MRSs. In the first half of this thesis, we explore the many flavours that interpretability can take in various recommendation tasks. Indeed, since there is not just one recommendation task but many (e.g., sequential recommendation, playlist continuation, artist similarity), as well as many angles through which music may be represented and processed (e.g., metadata, audio signals, embeddings computed from listening patterns), there are as many settings that require specific adjustments to make explanations relevant. A topic like this one can never be exhaustively addressed. This study was guided by some of the mentioned modalities of musical objects: interpreting implicit user logs, item features, audio signals, and similarity embeddings. Our contribution includes several novel methods for eXplainable Artificial Intelligence (XAI) and several theoretical results, shedding new light on our understanding of past methods. Nevertheless, just as recommendations may not be interpretable, explanations about them may themselves lack interpretability and justification. Therefore, in the second half of this thesis, we found it essential to take a step back from the rationale of ML and try to address a (perhaps surprisingly) understudied question in XAI: "What is interpretability?" Introducing concepts from philosophy and the social sciences, we stress that there is a misalignment between the way explanations from XAI are generated and unfold and the way humans actually explain. We highlight that current research tends to rely too much on intuitions or on hasty reductions of complex realities into convenient mathematical terms, which leads to the canonisation of assumptions into questionable standards (e.g., sparsity entails interpretability). We have treated this part as a comprehensive tutorial addressed to ML researchers, to better ground their knowledge of explanations with a precise vocabulary and a broader perspective. We provide practical advice and highlight less popular branches of XAI that are better aligned with human cognition. Of course, we also reflect back on and recontextualise the methods proposed in the previous part. Overall, this enables us to formulate some perspectives for our field of XAI as a whole, including its more critical and promising next steps as well as the shortcomings to overcome.
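As a toy illustration of one of the interpretability settings mentioned above (explaining an item-similarity recommendation by pointing to the listening history that drove it), a minimal sketch might look as follows; the track names and embeddings are invented, and real MRSs are far more complex.

```python
import numpy as np

rng = np.random.default_rng(7)
catalogue = ["track_A", "track_B", "track_C", "track_D", "track_E"]
emb = {t: rng.normal(size=16) for t in catalogue}  # stand-in similarity embeddings

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

history = ["track_A", "track_B"]  # what the user listened to
candidates = [t for t in catalogue if t not in history]

def score_with_explanation(candidate):
    """Score = sum of similarities to the user's history; the per-track terms are the explanation."""
    contributions = {h: cosine(emb[candidate], emb[h]) for h in history}
    return sum(contributions.values()), contributions

best, (score, contribs) = max(((c, score_with_explanation(c)) for c in candidates),
                              key=lambda pair: pair[1][0])
top_reason = max(contribs, key=contribs.get)
print(f"Recommend {best} (score {score:.2f}) because you listened to {top_reason}")
```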
Ketata, Firas. "Risk prediction of endocrine diseases using data science and explainable artificial intelligence." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. https://theses.hal.science/tel-04773988.
This thesis aims to predict the risk of endocrine diseases using data science and machine learning. The goal is to leverage this risk identification to assist doctors in managing financial resources, personalizing the treatment of carbohydrate anomalies in patients with beta-thalassemia major, and screening for metabolic syndrome in adolescents. An explainability study of the predictions was developed in this thesis to evaluate the reliability of predicting glucose anomalies and to reduce the financial burden associated with screening for metabolic syndrome. Finally, in response to the observed limitations of explainable machine learning, we propose an approach to improve and evaluate this explainability, which we test on several datasets.
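The abstract does not spell out how the explainability itself is evaluated, so the following is only a generic, hedged sketch of one way to probe the reliability of feature-importance explanations: refit the model on bootstrap resamples and measure how stable the top-ranked features are. The data, feature names, and the choice of permutation importance are assumptions made for the illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
features = ["fasting_glucose", "hba1c", "bmi", "ferritin", "age"]
X = rng.normal(size=(400, len(features)))
# Synthetic outcome driven by the first two columns only.
y = (X[:, 0] + 0.8 * X[:, 1] + 0.2 * rng.normal(size=400) > 0).astype(int)

def top_k(seed, k=2):
    """Refit on a bootstrap resample and return the indices of the k most important features."""
    idx = rng.integers(0, len(y), size=len(y))
    model = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X[idx], y[idx])
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=seed)
    return set(np.argsort(imp.importances_mean)[::-1][:k])

sets = [top_k(seed) for seed in range(5)]
# Pairwise Jaccard overlap of the top-2 sets: 1.0 means perfectly stable explanations.
jaccard = [len(a & b) / len(a | b) for i, a in enumerate(sets) for b in sets[i + 1:]]
print("mean top-2 stability (Jaccard):", round(float(np.mean(jaccard)), 2))
print("features ever ranked in a top-2 set:", sorted(features[i] for i in set.union(*sets)))
```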
Book chapters on the topic "Explicable Machine Learning"
Stuhler, Oscar, Dustin S. Stoltz, and John Levi Martin. "Meaning and Machines." In The Oxford Handbook of the Sociology of Machine Learning. Oxford University Press, 2023. http://dx.doi.org/10.1093/oxfordhb/9780197653609.013.9.
Texte intégralActes de conférences sur le sujet "Explicable Machine Learning"
Bramson, Aaron, and Masayoshi Mita. "Explicable Machine Learning Models Using Rich Geospatial Data." In 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), 2381–86. IEEE, 2024. http://dx.doi.org/10.1109/compsac61105.2024.00382.
Sun, Xiangrong, Wei Shu, Yuxin Zhang, Xiansheng Huang, Juxia Liu, Yuan Liu, and Tiejun Yang. "Identification of Alzheimer's Disease Associated Genes through Explicable Deep Learning and Bioinformatic." In 2023 IEEE 4th International Conference on Pattern Recognition and Machine Learning (PRML). IEEE, 2023. http://dx.doi.org/10.1109/prml59573.2023.10348276.