A selection of scholarly literature on the topic "Explicable Machine Learning"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Explicable Machine Learning".

Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Explicable Machine Learning"

1

Fomicheva, S. G. "Influence of Attack Indicator Ranking on the Quality of Machine Learning Models in Agent-Based Continuous Authentication Systems". T-Comm 17, no. 8 (2023): 45–55. http://dx.doi.org/10.36724/2072-8735-2023-17-8-45-55.

Abstract:
Security agents of authentication systems operate automatically and monitor the behavior of subjects, analyzing their dynamics with both traditional (statistical) methods and machine learning methods. The expansion of the cybersecurity-fabric paradigm motivates improvements to adaptive, explicable machine learning methods and models. Purpose: to assess the impact of ranking methods for compromise indicators, attack indicators and other features on the quality of network traffic anomaly detection within a security fabric with continuous authentication. Probabilistic and explicable binary classification methods were used, as well as nonlinear regressors based on decision trees. The results showed that preliminary ranking increases the F1-score and the runtime performance of supervised ML models by an average of 7%. In unsupervised models, preliminary ranking does not significantly affect training time but increases the F1-score by 2–10%, which justifies its use in agent-based continuous authentication systems. Practical relevance: the models developed in this work substantiate the feasibility of mechanisms for preliminary ranking of compromise and attack indicators and for creating prototype patterns of attack indicators in automatic mode. In general, unsupervised models are not as accurate as supervised ones, which motivates improving either explicable unsupervised approaches to anomaly detection or approaches based on reinforcement learning.
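
As a rough illustration of the preliminary ranking idea described above, the sketch below ranks synthetic indicators by mutual information with the label, keeps the ten highest-ranked ones, and compares the F1-score of a classifier trained on all indicators with one trained on the ranked subset. The data, the ranking criterion and the random-forest classifier are assumptions for illustration, not the paper's setup.

```python
# Illustrative only: synthetic indicators, a mutual-information ranking and a
# random forest stand in for the paper's compromise/attack indicators and models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Imbalanced binary problem: 1 = anomalous traffic, 0 = normal.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Baseline: train on all indicators.
baseline = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
f1_all = f1_score(y_te, baseline.predict(X_te))

# Preliminary ranking: keep only the ten indicators most informative about the label.
scores = mutual_info_classif(X_tr, y_tr, random_state=0)
top = np.argsort(scores)[::-1][:10]
ranked = RandomForestClassifier(random_state=0).fit(X_tr[:, top], y_tr)
f1_ranked = f1_score(y_te, ranked.predict(X_te[:, top]))

print(f"F1 with all 40 indicators:        {f1_all:.3f}")
print(f"F1 with top-10 ranked indicators: {f1_ranked:.3f}")
```
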
2

Abrahamsen, Nils-Gunnar Birkeland, Emil Nylén-Forthun, Mats Møller, Petter Eilif de Lange, and Morten Risstad. "Financial Distress Prediction in the Nordics: Early Warnings from Machine Learning Models". Journal of Risk and Financial Management 17, no. 10 (September 27, 2024): 432. http://dx.doi.org/10.3390/jrfm17100432.

Abstract:
This paper proposes an explicable early warning machine learning model for predicting financial distress, which generalizes across listed Nordic corporations. We develop a novel dataset, covering the period from Q1 2001 to Q2 2022, in which we combine idiosyncratic quarterly financial statement data, information from financial markets, and indicators of macroeconomic trends. The preferred LightGBM model, whose features are selected by applying explainable artificial intelligence, outperforms the benchmark models by a notable margin across evaluation metrics. We find that features related to liquidity, solvency, and size are highly important indicators of financial health and thus crucial variables for forecasting financial distress. Furthermore, we show that explicitly accounting for seasonality, in combination with entity, market, and macro information, improves model performance.
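
A minimal sketch of the general workflow described above (train a LightGBM classifier, rank features by mean absolute SHAP contribution, refit on the reduced set) is shown below. The SHAP values come from LightGBM's built-in TreeSHAP output (pred_contrib=True); the synthetic data, hyper-parameters and the number of retained features are illustrative assumptions rather than the paper's configuration.

```python
# Illustrative sketch: LightGBM plus SHAP-based feature selection on toy data.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=30, n_informative=10,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=1)
model.fit(X_tr, y_tr)

# SHAP contributions: one column per feature plus a final bias column.
contrib = model.predict(X_te, pred_contrib=True)[:, :-1]
importance = np.abs(contrib).mean(axis=0)
top = np.argsort(importance)[::-1][:10]
print("Top 10 features by mean |SHAP|:", top)

# Refit on the reduced feature set, mimicking XAI-driven feature selection.
reduced = LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=1)
reduced.fit(X_tr[:, top], y_tr)
print("Accuracy, all features:     ", model.score(X_te, y_te))
print("Accuracy, selected features:", reduced.score(X_te[:, top], y_te))
```
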
3

Fomicheva, Svetlana, and Sergey Bezzateev. "Modification of the Berlekamp-Massey algorithm for explicable knowledge extraction by SIEM-agents". Journal of Physics: Conference Series 2373, no. 5 (December 1, 2022): 052033. http://dx.doi.org/10.1088/1742-6596/2373/5/052033.

Abstract:
The article discusses the problems of applying self-explanatory machine learning models in Security Information and Event Management (SIEM) systems. We prove the possibility of using information processing methods in finite fields for extracting knowledge from security event repositories by mobile agents. Based on the isomorphism of fuzzy production and fuzzy relational knowledge bases, a constructive method for identifying patterns based on the modified Berlekamp-Massey algorithm is proposed. This allows security agents, while solving their typical cryptanalysis tasks, to use existing built-in tools to extract knowledge and detect previously unknown anomalies. Experimental characteristics of the application of the proposed algorithm are given.
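
For context, the classical Berlekamp-Massey algorithm over GF(2) finds the shortest linear-feedback shift register (LFSR) that generates a given binary sequence. The sketch below implements only this textbook version; the paper's modified variant for knowledge extraction by SIEM agents is not reproduced here.

```python
def berlekamp_massey_gf2(s):
    """Length L and connection polynomial C of the shortest LFSR that
    generates the binary sequence s, computed over GF(2)."""
    n = len(s)
    C = [1] + [0] * n      # current connection polynomial C(x)
    B = [1] + [0] * n      # copy of C before the last length change
    L, m = 0, 1            # L = current LFSR length, m = steps since last change
    for i in range(n):
        # Discrepancy between s[i] and the current LFSR's prediction.
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:                  # prediction already correct
            m += 1
        elif 2 * L <= i:            # length must grow: C(x) <- C(x) + x^m * B(x)
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:                       # fix C(x) without changing its length
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]


# The sequence 1,0,1,1,0,1,1,0 satisfies s[i] = s[i-1] XOR s[i-2]:
print(berlekamp_massey_gf2([1, 0, 1, 1, 0, 1, 1, 0]))   # -> (2, [1, 1, 1])
```
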
4

Alharbi, Abdulrahman, Ivan Petrunin, and Dimitrios Panagiotakopoulos. "Assuring Safe and Efficient Operation of UAV Using Explainable Machine Learning". Drones 7, no. 5 (May 19, 2023): 327. http://dx.doi.org/10.3390/drones7050327.

Abstract:
The accurate estimation of airspace capacity in unmanned traffic management (UTM) operations is critical for a safe, efficient, and equitable allocation of airspace system resources. While conventional approaches for assessing airspace complexity certainly exist, these methods fail to capture true airspace capacity, since they fail to address several important variables (such as weather). Meanwhile, existing AI-based decision-support systems evince opacity and inexplicability, and this restricts their practical application. With these challenges in mind, the authors propose a solution tailored to the needs of demand and capacity management (DCM) services. This solution, by deploying a synthesized fuzzy rule-based model and deep learning, addresses the trade-off between explicability and performance and generates an intelligent system that is explicable and reasonably comprehensible. The results show that this advisory system is able to indicate the most appropriate regions for unmanned aerial vehicle (UAV) operations and can increase UTM airspace availability by more than 23%. Moreover, the proposed system demonstrates a maximum capacity gain of 65% and a minimum safety gain of 35%, while possessing an explainability attribute of 70%. This will assist UTM authorities through more effective airspace capacity estimation and the formulation of new operational regulations and performance requirements.
5

Fujii, Keisuke. "Understanding of social behaviour in human collective motions with non-trivial rule of control". Impact 2019, no. 10 (December 30, 2019): 84–86. http://dx.doi.org/10.21820/23987073.2019.10.84.

Abstract:
The coordination and movement of people in large crowds, during sports games or when socialising, seems readily explicable. Sometimes this occurs according to specific rules or instructions, such as in a sport or game; at other times the motivations for movement may be more focused on an individual's needs or fears. Over the last decade, the computational ability to identify and track a given individual in video footage has increased. The conventional methods of gathering and interpreting data in biology rely on fitting statistical results to particular models or hypotheses. However, data from tracking movements in social groups or team sports are so complex that such methods cannot easily analyse the vast amounts of information and highly varied patterns. The author is an expert in human behaviour and machine learning based at the Graduate School of Informatics at Nagoya University. His challenge is to bridge the gap between rule-based theoretical modelling and data-driven modelling. He is employing machine learning techniques to attempt to solve this problem as a visiting scientist at the RIKEN Center for Advanced Intelligence Project.
6

Wang, Chen, Lin Liu, Chengcheng Xu, and Weitao Lv. "Predicting Future Driving Risk of Crash-Involved Drivers Based on a Systematic Machine Learning Framework". International Journal of Environmental Research and Public Health 16, no. 3 (January 25, 2019): 334. http://dx.doi.org/10.3390/ijerph16030334.

Abstract:
The objective of this paper is to predict the future driving risk of crash-involved drivers in Kunshan, China. A systematic machine learning framework is proposed to deal with three critical technical issues: (1) defining driving risk; (2) developing risky driving factors; and (3) developing a reliable and explicable machine learning model. High-risk (HR) and low-risk (LR) drivers were defined under five different scenarios. A number of features were extracted from seven-year crash/violation records. Drivers' two-year prior crash/violation information was used to predict their driving risk in the subsequent two years. Using a one-year rolling time window, prediction models were developed for four consecutive time periods: 2013–2014, 2014–2015, 2015–2016, and 2016–2017. Four tree-based ensemble learning techniques were attempted, including random forest (RF), Adaboost with decision trees, gradient boosting decision tree (GBDT), and extreme gradient boosting decision tree (XGBoost). A temporal transferability test and a follow-up study were applied to validate the trained models. The best scenario for defining driving risk was multi-dimensional, encompassing crash recurrence, severity, and fault commitment. GBDT appeared to be the best model choice across all time periods, with an acceptable average precision (AP) of 0.68 on the most recent datasets (i.e., 2016–2017). Seven of the nine top features were related to risky driving behaviors, which presented non-linear relationships with driving risk. Model transferability held within relatively short time intervals (1–2 years). Appropriate risk definitions, complex violation/crash features, and advanced machine learning techniques need to be considered for the risk prediction task. The proposed machine learning approach is promising, as it could enable safety interventions to be launched more effectively.
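
The rolling-time-window scheme described above can be sketched as follows. Synthetic yearly records and scikit-learn's GradientBoostingClassifier stand in for the study's crash/violation features and tuned GBDT, so this illustrates the evaluation protocol rather than reproducing the study.

```python
# Synthetic yearly records; average precision (AP) is the evaluation metric.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(5000, 6)), columns=[f"x{i}" for i in range(6)])
df["year"] = rng.integers(2013, 2018, size=len(df))
df["high_risk"] = (df["x0"] + 0.5 * df["x1"] + rng.normal(size=len(df)) > 1.5).astype(int)

features = [f"x{i}" for i in range(6)]
for start in range(2013, 2016):              # windows 2013-14, 2014-15, 2015-16
    train = df[df["year"].between(start, start + 1)]
    test = df[df["year"] == start + 2]       # predict risk in the following year
    model = GradientBoostingClassifier(random_state=0)
    model.fit(train[features], train["high_risk"])
    ap = average_precision_score(test["high_risk"],
                                 model.predict_proba(test[features])[:, 1])
    print(f"train {start}-{start + 1} -> test {start + 2}: AP = {ap:.2f}")
```
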
7

Valladares-Rodríguez, Sonia, Manuel J. Fernández-Iglesias, Luis E. Anido-Rifón, and Moisés Pacheco-Lorenzo. "Evaluation of the Predictive Ability and User Acceptance of Panoramix 2.0, an AI-Based E-Health Tool for the Detection of Cognitive Impairment". Electronics 11, no. 21 (October 22, 2022): 3424. http://dx.doi.org/10.3390/electronics11213424.

Abstract:
The high prevalence of Alzheimer-type dementia and the limitations of traditional neuropsychological tests motivate the introduction of new cognitive assessment methods. We discuss the validation of an all-digital, ecological and non-intrusive e-health application for the early detection of cognitive impairment, based on artificial intelligence for patient classification, and more specifically on machine learning algorithms. To evaluate the discrimination power of this application, a cross-sectional pilot study was carried out involving 30 subjects: 10 healthy control subjects (mean age: 75.62 years), 14 individuals with mild cognitive impairment (mean age: 81.24 years) and 6 early-stage Alzheimer's patients (mean age: 80.44 years). The study was carried out in two separate sessions in November 2021 and January 2022. All participants completed the study, and no concerns were raised about the acceptability of the test. Analysis including socio-demographics and game data supports the prediction of participants' cognitive status using machine learning algorithms. According to the performance metrics computed, the best classification results are obtained with a Multilayer Perceptron classifier, Support Vector Machines and Random Forest, respectively, with weighted recall values >= 0.9784 ± 0.0265 and an F1-score of 0.9764 ± 0.0291. Furthermore, thanks to hyper-parameter optimization, false negative rates were dramatically reduced. SHapley Additive exPlanations (SHAP), applied as an eXplainable AI (XAI) method, made it possible to visually and quantitatively evaluate the importance of the different features in the final classification. This is a relevant step towards the use of machine learning and gamification to detect cognitive impairment early. In addition, this tool was designed to support self-administration, which could be a relevant aspect in confinement situations with limited access to health professionals. However, further research is required to identify patterns that may help to predict or estimate future cognitive damage and normative data.
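
The abstract attributes the sharp reduction in false negatives to hyper-parameter optimization; one common way to do this is a cross-validated grid search scored on recall, sketched below. The toy data, the MLP classifier and the parameter grid are assumptions for illustration, not the study's actual configuration.

```python
# Toy example: a cross-validated grid search over an MLP, scored on recall so
# that configurations with fewer false negatives are preferred.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=15, weights=[0.7, 0.3],
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

search = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=3),
    param_grid={"hidden_layer_sizes": [(32,), (64,), (64, 32)],
                "alpha": [1e-4, 1e-3, 1e-2]},
    scoring="recall",   # optimise for sensitivity, i.e. penalise false negatives
    cv=5,
)
search.fit(X_tr, y_tr)
print("Best parameters:", search.best_params_)
print("Recall on held-out data:", round(search.score(X_te, y_te), 3))
```
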
8

Hermitaño Castro, Juler Anderson. "Aplicación de Machine Learning en la Gestión de Riesgo de Crédito Financiero: Una revisión sistemática". Interfases, no. 015 (August 11, 2022): e5898. http://dx.doi.org/10.26439/interfases2022.n015.5898.

Abstract:
Bank risk management can be divided into the following types: credit risk, market risk, operational risk and liquidity risk, with credit risk being the most important type for the financial sector. The aim of this article is to show the advantages and disadvantages of implementing machine learning algorithms in credit risk management and, on this basis, to show which algorithm performs best, while also pointing out the drawbacks they may present. To achieve this, a systematic literature review was carried out using the PICo search strategy, and 12 articles were selected. The results reflect that credit risk is the most relevant risk type. In addition, some machine learning algorithms have already begun to be implemented; however, some have notable disadvantages, such as the inability to explain how the model works, and are considered black boxes. This hinders their adoption, since regulatory bodies require a model to be explainable, interpretable and transparent. In response, hybrid models have been developed that combine algorithms that are not easy to explain with traditional models such as logistic regression. Methods such as SHapley Additive exPlanations (SHAP), which help with the interpretation of these models, are also presented as an alternative.
9

Umar, Muhammad, Ashish Shiwlani, Fiza Saeed, Ahsan Ahmad, Masoomi Hifazat Ali Shah, and Anoosha Tahir. "Role of Deep Learning in Diagnosis, Treatment, and Prognosis of Oncological Conditions". International Journal of Membrane Science and Technology 10, no. 5 (November 15, 2023): 1059–71. http://dx.doi.org/10.15379/ijmst.v10i5.3695.

Abstract:
Deep learning, a branch of artificial intelligence, excavates massive data sets for patterns and predictions using a machine learning method known as artificial neural networks. Research on the potential applications of deep learning in understanding the intricate biology of cancer has intensified due to its increasing adoption across healthcare domains and the accessibility of extensively characterized cancer datasets. Although preliminary findings are encouraging, this is a fast-moving sector where novel insights into deep learning and cancer biology are being discovered. We give a framework for new deep learning methods and their applications in oncology in this review. Our attention was directed towards its applications for DNA methylation, transcriptomic, and genomic data, along with histopathological inferences. We offer insights into how these disparate data sets can be combined for the creation of decision support systems. Specific instances of deep learning applications in cancer prognosis, diagnosis, and therapy planning are presented. Additionally, the present barriers and difficulties of deep learning applications in the field of precision oncology, such as the dearth of phenotypical data and the requirement for more explicable deep learning techniques, have been elaborated. We wrap up by talking about ways to get beyond the existing challenges so that deep learning can be used in healthcare settings in the future.
10

Valdivieso-Ros, Carmen, Francisco Alonso-Sarria, and Francisco Gomariz-Castillo. "Effect of the Synergetic Use of Sentinel-1, Sentinel-2, LiDAR and Derived Data in Land Cover Classification of a Semiarid Mediterranean Area Using Machine Learning Algorithms". Remote Sensing 15, no. 2 (January 5, 2023): 312. http://dx.doi.org/10.3390/rs15020312.

Abstract:
Land cover classification in semiarid areas is a difficult task that has been tackled using different strategies, such as the use of normalized indices, texture metrics, and the combination of images from different dates or different sensors. In this paper we present the results of an experiment using three sensors (Sentinel-1 SAR, Sentinel-2 MSI and LiDAR), four dates and different normalized indices and texture metrics to classify a semiarid area. Three machine learning algorithms were used: Random Forest, Support Vector Machines and Multilayer Perceptron; Maximum Likelihood was used as a baseline classifier. The synergetic use of all these sources resulted in a significant increase in accuracy, with Random Forest being the model reaching the highest accuracy. However, the large number of features (126) calls for feature selection to reduce this figure. After applying the Variance Inflation Factor and Random Forest feature importance, the number of features was reduced to 62. The final overall accuracy obtained was 0.91 ± 0.005 (α = 0.05) and the kappa index 0.898 ± 0.006 (α = 0.05). Most of the observed confusions are easily explicable and do not represent a significant difference in agronomic terms.
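
A rough sketch of this two-stage reduction, first dropping highly collinear features by Variance Inflation Factor (VIF) and then ranking the survivors by Random Forest importance, is given below on synthetic data; the thresholds, feature names and labels are assumptions, not the study's.

```python
# Synthetic "bands" with deliberately collinear copies; VIF removes the
# redundancy, then a random forest ranks what is left.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 1000
base = rng.normal(size=(n, 6))                               # independent bands
noisy_copies = base[:, :3] + 0.05 * rng.normal(size=(n, 3))  # nearly collinear
X = pd.DataFrame(np.hstack([base, noisy_copies]),
                 columns=[f"band_{i}" for i in range(9)])
y = (base[:, 0] + base[:, 1] > 0).astype(int)                # toy land-cover label

# Stage 1: iteratively drop the feature with the highest VIF until all VIFs < 10.
cols = list(X.columns)
while True:
    vifs = [variance_inflation_factor(X[cols].values, i) for i in range(len(cols))]
    worst = int(np.argmax(vifs))
    if vifs[worst] < 10:
        break
    cols.pop(worst)

# Stage 2: rank the remaining features by Random Forest importance.
rf = RandomForestClassifier(n_estimators=300, random_state=2).fit(X[cols], y)
ranking = sorted(zip(cols, rf.feature_importances_), key=lambda kv: -kv[1])
print(f"{len(cols)} features kept after VIF filtering")
print("Most important features:", [name for name, _ in ranking[:3]])
```
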

Dissertations and theses on the topic "Explicable Machine Learning"

1

Mita, Graziano. "Toward interpretable machine learning, with applications to large-scale industrial systems data". Electronic Thesis or Dissertation, Sorbonne université, 2021. http://www.theses.fr/2021SORUS112.

Abstract:
The contributions presented in this work are two-fold. We first provide a general overview of explanations and interpretable machine learning, making connections with different fields, including sociology, psychology, and philosophy, and introducing a taxonomy of popular explainability approaches and evaluation methods. We subsequently focus on rule learning, a specific family of transparent models, and propose a novel rule-based classification approach based on monotone Boolean function synthesis: LIBRE. LIBRE is an ensemble method that combines the candidate rules learned by multiple bottom-up learners with a simple union, in order to obtain a final interpretable rule set. Our method overcomes most of the limitations of state-of-the-art competitors: it successfully deals with both balanced and imbalanced datasets, efficiently achieving superior performance and higher interpretability in real datasets. Interpretability of data representations constitutes the second broad contribution of this work. We restrict our attention to disentangled representation learning and, in particular, to VAE-based disentanglement methods that automatically learn representations consisting of semantically meaningful features. Recent contributions have demonstrated that disentanglement is impossible in purely unsupervised settings. Nevertheless, incorporating inductive biases on models and data may overcome such limitations. We present a new disentanglement method, IDVAE, with theoretical guarantees on disentanglement deriving from the employment of an optimal exponential factorized prior, conditionally dependent on auxiliary variables complementing input observations. We additionally propose a semi-supervised version of our method. Our experimental campaign on well-established datasets in the literature shows that IDVAE often beats its competitors according to several disentanglement metrics.
2

El Qadi El Haouari, Ayoub. "An EXplainable Artificial Intelligence Credit Rating System". Electronic Thesis or Dissertation, Sorbonne université, 2023. http://www.theses.fr/2023SORUS486.

Abstract:
Over the past few years, the trade finance gap has surged to an alarming 1.5 trillion dollars, underscoring a growing crisis in global commerce. This gap is particularly detrimental to small and medium-sized enterprises (SMEs), which often find it difficult to access trade finance. Traditional credit scoring systems, which are the backbone of trade finance, are not always tailored to assess the creditworthiness of SMEs adequately. The term credit scoring stands for the methods and techniques used to evaluate the creditworthiness of individuals or businesses. The score generated is then used by financial institutions to make decisions on loan approvals, interest rates, and credit limits. Credit scoring presents several characteristics that make it a challenging task. First, the lack of explainability in complex machine learning models often results in less acceptance of credit assessments, particularly among stakeholders who require a transparent decision-making process. This opacity can be an obstacle to the widespread adoption of advanced scoring techniques. Another significant challenge is the variability in data availability across countries and the often incomplete financial records of SMEs, which make it difficult to develop universally applicable models. In this thesis, we initially tackled the issue of explainability by employing state-of-the-art techniques in Explainable Artificial Intelligence (XAI). We introduced a novel strategy that involved comparing the explanations generated by machine learning models with the criteria used by credit experts. This comparative analysis revealed a divergence between the model's reasoning and the expert's judgment, underscoring the necessity of incorporating expert criteria into the training phase of the model. The findings suggest that aligning machine-generated explanations with human expertise could be a pivotal step in enhancing the model's acceptance and trustworthiness. Subsequently, we shifted our focus to address the challenge of sparse or incomplete financial data. We incorporated textual credit assessments into the credit scoring model using cutting-edge Natural Language Processing (NLP) techniques. Our results demonstrated that models trained with both financial data and textual credit assessments outperformed those relying solely on financial data. Moreover, we showed that our approach could effectively generate credit scores using only textual risk assessments, thereby offering a viable solution for scenarios where traditional financial metrics are unavailable or insufficient.
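
One simple way to combine textual risk assessments with numeric financials, sketched below with a TF-IDF encoder standing in for the thesis's NLP components, is a single scikit-learn pipeline whose ColumnTransformer concatenates both feature blocks before a scoring model. The toy records, column names and logistic-regression scorer are illustrative assumptions, not the thesis's system.

```python
# Toy pipeline: TF-IDF on the textual assessment plus scaled financial ratios,
# concatenated by a ColumnTransformer and fed to a logistic-regression scorer.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "assessment": [
        "stable cash flow, low leverage, reliable payment history",
        "late payments and rising short-term debt",
        "strong liquidity, diversified client base",
        "covenant breach reported, negative working capital",
        "modest growth, adequate interest coverage",
        "overdue invoices, heavy reliance on a single customer",
    ],
    "current_ratio":  [1.8, 0.9, 2.1, 0.7, 1.4, 0.8],
    "debt_to_equity": [0.5, 2.4, 0.4, 3.1, 1.0, 2.8],
    "default":        [0, 1, 0, 1, 0, 1],
})

features = ["assessment", "current_ratio", "debt_to_equity"]
pre = ColumnTransformer([
    ("text", TfidfVectorizer(), "assessment"),                       # NLP block
    ("num", StandardScaler(), ["current_ratio", "debt_to_equity"]),  # financials
])
scorer = Pipeline([("features", pre), ("model", LogisticRegression())])
scorer.fit(df[features], df["default"])

# The probability of default doubles as a simple credit score.
print(scorer.predict_proba(df[features])[:, 1].round(2))
```
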
3

Chamma, Ahmad. "Statistical interpretation of high-dimensional complex prediction models for biomedical data". Electronic Thesis or Dissertation, université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG028.

Abstract:
Modern large health datasets represent population characteristics in multiple modalities, including brain imaging and socio-demographic data. These large cohorts make it possible to predict and understand individual outcomes, leading to promising results in the epidemiological context of forecasting/predicting the occurrence of diseases, health outcomes, or other events of interest. As data collection expands into different scientific domains, such as brain imaging and genomic analysis, variables are related by complex, possibly non-linear dependencies, along with high degrees of correlation. As a result, popular models such as linear and tree-based techniques are no longer effective in such high-dimensional settings. Powerful non-linear machine learning algorithms, such as Random Forests (RFs) and Deep Neural Networks (DNNs), have become important tools for characterizing inter-individual differences and predicting biomedical outcomes, such as brain age. Explaining the decision process of machine learning algorithms is crucial both to improve the performance of a model and to aid human understanding. This can be achieved by assessing the importance of variables. Traditionally, scientists have favored simple, transparent models such as linear regression, where the importance of variables can be easily measured by coefficients. However, with the use of more advanced methods, direct access to the internal structure has become limited and/or uninterpretable from a human perspective. As a result, these methods are often referred to as "black box" methods. Standard approaches based on Permutation Importance (PI) assess the importance of a variable by measuring the decrease in the loss score when the variable of interest is replaced by its permuted version. While these approaches increase the transparency of black box models and provide statistical validity, they can produce unreliable importance assessments when variables are correlated. The goal of this work is to overcome the limitations of standard permutation importance by integrating conditional schemes. Therefore, we investigate two model-agnostic frameworks, Conditional Permutation Importance (CPI) and Block-Based Conditional Permutation Importance (BCPI), which effectively account for correlations between covariates and overcome the limitations of PI. We present two new algorithms designed to handle situations with correlated variables, whether grouped or ungrouped. Our theoretical and empirical results show that CPI provides computationally efficient and theoretically sound methods for evaluating individual variables. The CPI framework guarantees type-I error control and produces a concise selection of significant variables in large datasets. BCPI presents a strategy for managing both individual and grouped variables. It integrates statistical clustering and uses prior knowledge of grouping to adapt the DNN architecture using stacking techniques. This framework is robust and maintains type-I error control even in scenarios with highly correlated groups of variables. It performs well on various benchmarks. Empirical evaluations of our methods on several biomedical datasets showed good face validity. Our methods have also been applied to multimodal brain data in addition to socio-demographics, paving the way for new discoveries and advances in the targeted areas. The CPI and BCPI frameworks are proposed as replacements for conventional permutation-based methods. They provide improved interpretability and reliability in estimating variable importance for high-performance machine learning models.
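
A simplified sketch of the conditional permutation idea is shown below: instead of permuting a feature directly, only the part of it that cannot be predicted from the other covariates is permuted, which avoids inflating the importance of correlated features. This toy version with a linear conditional model illustrates the principle only; it is not the thesis's CPI implementation, which also provides statistical guarantees.

```python
# Toy comparison of naive permutation importance (PI) and a conditional variant:
# x2 is highly correlated with x1 but has no direct effect on y.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)          # strongly correlated with x1
x3 = rng.normal(size=n)
y = 2 * x1 + x3 + rng.normal(scale=0.5, size=n)   # x2 plays no direct role
X = np.column_stack([x1, x2, x3])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
base = mean_squared_error(y_te, model.predict(X_te))

def naive_pi(j):
    """Increase in test loss when feature j is permuted outright."""
    X_pert = X_te.copy()
    X_pert[:, j] = rng.permutation(X_te[:, j])
    return mean_squared_error(y_te, model.predict(X_pert)) - base

def conditional_pi(j):
    """Permute only the residual of feature j given the other covariates."""
    others = [k for k in range(X_te.shape[1]) if k != j]
    cond = LinearRegression().fit(X_te[:, others], X_te[:, j])
    fitted = cond.predict(X_te[:, others])
    X_pert = X_te.copy()
    X_pert[:, j] = fitted + rng.permutation(X_te[:, j] - fitted)
    return mean_squared_error(y_te, model.predict(X_pert)) - base

for j in range(3):
    print(f"x{j + 1}: naive PI = {naive_pi(j):.3f}   conditional PI = {conditional_pi(j):.3f}")
```
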
4

Afchar, Darius. "Interpretable Music Recommender Systems". Electronic Thesis or Dissertation, Sorbonne université, 2023. http://www.theses.fr/2023SORUS608.

Abstract:
"Why do they keep recommending me this music track?" "Why did our system recommend these tracks to users?" Nowadays, streaming platforms are the most common way to listen to recorded music. Still, music recommendations — at the heart of these platforms — are not an easy feat. Sometimes, both users and engineers may be equally puzzled about the behaviour of a music recommendation system (MRS). MRS have been successfully employed to help explore catalogues that may be as large as tens of millions of music tracks. Built and optimised for accuracy, real-world MRS often end up being quite complex. They may further rely on a range of interconnected modules that, for instance, analyse audio signals, retrieve metadata about albums and artists, collect and aggregate user feedback on the music service, and compute item similarities with collaborative filtering. All this complexity hinders the ability to explain recommendations and, more broadly, explain the system. Yet, explanations are essential for users to foster a long-term engagement with a system that they can understand (and forgive), and for system owners to rationalise failures and improve said system. Interpretability may also be needed to check the fairness of a decision or can be framed as a means to control the recommendations better. Moreover, we could also recursively question: Why does an explanation method explain in a certain way? Is this explanation relevant? What could be a better explanation? All these questions relate to the interpretability of MRSs. In the first half of this thesis, we explore the many flavours that interpretability can have in various recommendation tasks. Indeed, since there is not just one recommendation task but many (e.g., sequential recommendation, playlist continuation, artist similarity), as well as many angles through which music may be represented and processed (e.g., metadata, audio signals, embeddings computed from listening patterns), there are as many settings that require specific adjustments to make explanations relevant. A topic like this one can never be exhaustively addressed. This study was guided along some of the mentioned modalities of musical objects: interpreting implicit user logs, item features, audio signals and similarity embeddings. Our contribution includes several novel methods for eXplainable Artificial Intelligence (XAI) and several theoretical results, shedding new light on our understanding of past methods. Nevertheless, similar to how recommendations may not be interpretable, explanations about them may themselves lack interpretability and justifications. Therefore, in the second half of this thesis, we found it essential to take a step back from the rationale of ML and try to address a (perhaps surprisingly) understudied question in XAI: "What is interpretability?" Introducing concepts from philosophy and social sciences, we stress that there is a misalignment in the way explanations from XAI are generated and unfold versus how humans actually explain. We highlight that current research tends to rely too much on intuitions or hasty reduction of complex realities into convenient mathematical terms, which leads to the canonisation of assumptions into questionable standards (e.g., sparsity entails interpretability). We have treated this part as a comprehensive tutorial addressed to ML researchers to better ground their knowledge of explanations with a precise vocabulary and a broader perspective. We provide practical advice and highlight less popular branches of XAI better aligned with human cognition. Of course, we also reflect back and recontextualise our methods proposed in the previous part. Overall, this enables us to formulate some perspective for our field of XAI as a whole, including its more critical and promising next steps as well as its shortcomings to overcome.
5

Ketata, Firas. "Risk prediction of endocrine diseases using data science and explainable artificial intelligence". Electronic Thesis or Dissertation, Bourgogne Franche-Comté, 2024. https://theses.hal.science/tel-04773988.

Abstract:
This thesis aims to predict the risk of endocrine diseases using data science and machine learning. The goal is to leverage this risk identification to assist doctors in managing financial resources, to personalize the treatment of carbohydrate anomalies in patients with beta-thalassemia major, and to screen for metabolic syndrome in adolescents. An explainability study of the predictions was developed in this thesis to evaluate the reliability of predicting glucose anomalies and to reduce the financial burden associated with screening for metabolic syndrome. Finally, in response to the observed limitations of explainable machine learning, we propose an approach to improve and evaluate this explainability, which we test on several datasets.

Book chapters on the topic "Explicable Machine Learning"

1

Stuhler, Oscar, Dustin S. Stoltz, and John Levi Martin. "Meaning and Machines". In The Oxford Handbook of the Sociology of Machine Learning. Oxford University Press, 2023. http://dx.doi.org/10.1093/oxfordhb/9780197653609.013.9.

Abstract:
Given the non-sentient nature of current machines, it can be puzzling to attempt to use ideas regarding "meaning" to explicate their results, even when these involve language outputs, often considered the acme of meaning creation. This chapter proposes a formal approach to meaning that treats it as a dyad of a relation of reference, allowing a clear translation to the case of data analysis, and considers the ways that machine learning may be used to help human analysts explore the meanings in some set of data. Building on the two ways (formal and informal) that humans learn language, the chapter proposes that there are two promising approaches to using machines: a more formal, grammar-based one, and a more informal, embedding-based one. Each of these has certain advantages and disadvantages, and the chapter suggests ways that analysts can best make use of developing technologies as opposed to letting their theoretical imagination be hijacked by the path of the development of computer science.

Conference papers on the topic "Explicable Machine Learning"

1

Bramson, Aaron, and Masayoshi Mita. "Explicable Machine Learning Models Using Rich Geospatial Data". In 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), 2381–86. IEEE, 2024. http://dx.doi.org/10.1109/compsac61105.2024.00382.

2

Sun, Xiangrong, Wei Shu, Yuxin Zhang, Xiansheng Huang, Juxia Liu, Yuan Liu, and Tiejun Yang. "Identification of Alzheimer’s Disease Associated Genes through Explicable Deep Learning and Bioinformatic". In 2023 IEEE 4th International Conference on Pattern Recognition and Machine Learning (PRML). IEEE, 2023. http://dx.doi.org/10.1109/prml59573.2023.10348276.
