Scientific literature on the topic "XAI Interpretability"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "XAI Interpretability".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "XAI Interpretability"

1

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (April 13, 2022): 3926. http://dx.doi.org/10.3390/app12083926.

Full text
Abstract:
Fake media, generated by methods such as deepfakes, have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability on deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI on image classification, we suggest the use of a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation using attribution scores can be provided.
2

Zerilli, John. "Explaining Machine Learning Decisions." Philosophy of Science 89, no. 1 (January 2022): 1–19. http://dx.doi.org/10.1017/psa.2021.13.

Full text
Abstract:
The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.
3

Veitch, Erik, and Ole Andreas Alsos. "Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles." Journal of Marine Science and Engineering 9, no. 11 (November 6, 2021): 1227. http://dx.doi.org/10.3390/jmse9111227.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) for Autonomous Surface Vehicles (ASVs) addresses developers’ needs for model interpretation, understandability, and trust. As ASVs approach wide-scale deployment, these needs are expanded to include end user interactions in real-world contexts. Despite recent successes of technology-centered XAI for enhancing the explainability of AI techniques to expert users, these approaches do not necessarily carry over to non-expert end users. Passengers, other vessels, and remote operators will have XAI needs distinct from those of expert users targeted in a traditional technology-centered approach. We formulate a concept called ‘human-centered XAI’ to address emerging end user interaction needs for ASVs. To structure the concept, we adopt a model-based reasoning method for concept formation consisting of three processes: analogy, visualization, and mental simulation, drawing from examples of recent ASV research at the Norwegian University of Science and Technology (NTNU). The examples show how current research activities point to novel ways of addressing XAI needs for distinct end user interactions and underpin the human-centered XAI approach. Findings show how representations of (1) usability, (2) trust, and (3) safety make up the main processes in human-centered XAI. The contribution is the formation of human-centered XAI to help advance the research community’s efforts to expand the agenda of interpretability, understandability, and trust to include end user ASV interactions.
4

Dindorf, Carlo, Wolfgang Teufl, Bertram Taetz, Gabriele Bleser, and Michael Fröhlich. "Interpretability of Input Representations for Gait Classification in Patients after Total Hip Arthroplasty." Sensors 20, no. 16 (August 6, 2020): 4385. http://dx.doi.org/10.3390/s20164385.

Full text
Abstract:
Many machine learning models show black box characteristics and, therefore, a lack of transparency, interpretability, and trustworthiness. This strongly limits their practical application in clinical contexts. For overcoming these limitations, Explainable Artificial Intelligence (XAI) has shown promising results. The current study examined the influence of different input representations on a trained model’s accuracy, interpretability, as well as clinical relevancy using XAI methods. The gait of 27 healthy subjects and 20 subjects after total hip arthroplasty (THA) was recorded with an inertial measurement unit (IMU)-based system. Three different input representations were used for classification. Local Interpretable Model-Agnostic Explanations (LIME) was used for model interpretation. The best accuracy was achieved with automatically extracted features (mean accuracy Macc = 100%), followed by features based on simple descriptive statistics (Macc = 97.38%) and waveform data (Macc = 95.88%). Globally seen, sagittal movement of the hip, knee, and pelvis as well as transversal movement of the ankle were especially important for this specific classification task. The current work shows that the type of input representation crucially determines interpretability as well as clinical relevance. A combined approach using different forms of representations seems advantageous. The results might assist physicians and therapists finding and addressing individual pathologic gait patterns.
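For readers unfamiliar with LIME, the local explanation step described in entry 4 can be sketched roughly as follows. This is a minimal illustration on synthetic tabular data, not the authors' code; the feature names, class labels, and random forest classifier are placeholders.

```python
# Minimal LIME sketch on synthetic tabular data; the feature names, class labels,
# and random forest stand in for the study's gait features and trained classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
feature_names = [f"gait_feature_{i}" for i in range(X.shape[1])]  # placeholder names

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["healthy", "THA"],
    discretize_continuous=True,
)
# Explain a single prediction: which features pushed the model towards "THA"?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```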
5

Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.

Full text
Abstract:
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
6

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Full text
Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
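The coupling of a boosted-tree model with Shapley additive explanations that the review describes can be sketched, under assumed synthetic data and a hypothetical feature count, as follows; this is not one of the reviewed models.

```python
# Rough sketch: a gradient-boosted regressor becomes "explainable" once paired
# with SHAP; mean absolute SHAP values give a global importance ranking.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=6, noise=0.1, random_state=0)

model = xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

print(np.abs(shap_values).mean(axis=0))  # global ranking of the six predictors
```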
7

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Full text
Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Explainable Artificial Intelligence (XAI) has successfully incorporated the interpretability and explicability in the complex models. Furthermore, the post hoc XAI model has enabled the interpretability without affecting the performance of the models. This study aimed to propose an Explainable Artificial Intelligence (XAI) model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. In the current study, initially several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and Cat Boost (CB) algorithms, were implemented and found that XGB outperformed the other classifiers. Furthermore, the post hoc XAI global surrogate model (Shapley additive explanations) and local surrogate LIME were used to generate the explanation of the XGB prediction. Two sets of experiments were performed; initially the model was executed using a preprocessed dataset and later with selected features using the Sequential Forward Feature selection algorithm. The results demonstrate that ML algorithms were able to distinguish benign and malicious domains with overall accuracy ranging from 0.8479 to 0.9856. The ensemble classifier XGB achieved the highest result, with an AUC and accuracy of 0.9991 and 0.9856, respectively, before the feature selection algorithm, while there was an AUC of 0.999 and accuracy of 0.9818 after the feature selection algorithm. The proposed model outperformed the benchmark study.
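As a hedged sketch of the kind of pipeline entry 7 evaluates, with synthetic data standing in for the 45,000-domain dataset, training an XGBoost classifier and reporting accuracy and AUC might look like this; SHAP and LIME would then be applied to the fitted model, as in the study.

```python
# Sketch only: synthetic "domain features" replace the real dataset, and the
# hyperparameters are illustrative rather than those tuned in the paper.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.6, 0.4], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

clf = XGBClassifier(n_estimators=300, max_depth=4).fit(X_train, y_train)

proba = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("AUC:", roc_auc_score(y_test, proba))
```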
8

Luo, Ru, Jin Xing, Lifu Chen, Zhouhao Pan, Xingmin Cai, Zengqi Li, Jielan Wang, and Alistair Ford. "Glassboxing Deep Learning to Enhance Aircraft Detection from SAR Imagery." Remote Sensing 13, no. 18 (September 13, 2021): 3650. http://dx.doi.org/10.3390/rs13183650.

Full text
Abstract:
Although deep learning has achieved great success in aircraft detection from SAR imagery, its blackbox behavior has been criticized for low comprehensibility and interpretability. Such challenges have impeded the trustworthiness and wide application of deep learning techniques in SAR image analytics. In this paper, we propose an innovative eXplainable Artificial Intelligence (XAI) framework to glassbox deep neural networks (DNN) by using aircraft detection as a case study. This framework is composed of three parts: hybrid global attribution mapping (HGAM) for backbone network selection, path aggregation network (PANet), and class-specific confidence scores mapping (CCSM) for visualization of the detector. HGAM integrates the local and global XAI techniques to evaluate the effectiveness of DNN feature extraction; PANet provides advanced feature fusion to generate multi-scale prediction feature maps; while CCSM relies on visualization methods to examine the detection performance with given DNN and input SAR images. This framework can select the optimal backbone DNN for aircraft detection and map the detection performance for better understanding of the DNN. We verify its effectiveness with experiments using Gaofen-3 imagery. Our XAI framework offers an explainable approach to design, develop, and deploy DNN for SAR image analytics.
9

Bogdanova, Alina, and Vitaly Romanov. "Explainable source code authorship attribution algorithm." Journal of Physics: Conference Series 2134, no. 1 (December 1, 2021): 012011. http://dx.doi.org/10.1088/1742-6596/2134/1/012011.

Full text
Abstract:
Source Code Authorship Attribution is a problem that has lately been studied more often due to improvements in Deep Learning techniques. Among existing solutions, two common issues are the inability to add new authors without retraining and the lack of interpretability. We address both of these problems. In our experiments, we were able to correctly classify 75% of authors for different programming languages. Additionally, we applied techniques of explainable AI (XAI) and found that our model seems to pay attention to distinctive features of source code.
10

Islam, Mir Riyanul, Mobyen Uddin Ahmed, Shaibal Barua, and Shahina Begum. "A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks." Applied Sciences 12, no. 3 (January 27, 2022): 1353. http://dx.doi.org/10.3390/app12031353.

Full text
Abstract:
Artificial intelligence (AI) and machine learning (ML) have recently been radically improved and are now being employed in almost every application domain to develop automated or semi-automated systems. To facilitate greater human acceptability of these systems, explainable artificial intelligence (XAI) has experienced significant growth over the last couple of years with the development of highly accurate models but with a paucity of explainability and interpretability. The literature shows evidence from numerous studies on the philosophy and methodologies of XAI. Nonetheless, there is an evident scarcity of secondary studies in connection with the application domains and tasks, let alone review studies following prescribed guidelines, that can enable researchers’ understanding of the current trends in XAI, which could lead to future research for domain- and application-specific method development. Therefore, this paper presents a systematic literature review (SLR) on the recent developments of XAI methods and evaluation metrics concerning different application domains and tasks. This study considers 137 articles published in recent years and identified through the prominent bibliographic databases. This systematic synthesis of research articles resulted in several analytical findings: XAI methods are mostly developed for safety-critical domains worldwide, deep learning and ensemble models are being exploited more than other types of AI/ML models, visual explanations are more acceptable to end-users and robust evaluation metrics are being developed to assess the quality of explanations. Research studies have been performed on the addition of explanations to widely used AI/ML models for expert users. However, more attention is required to generate explanations for general users from sensitive domains such as finance and the judicial system.

Theses on the topic "XAI Interpretability"

1

Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.

Full text
Abstract:
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible for their end-users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that aim at approximating the decision function of a black box algorithm. Though several XAI methods have been proposed in recent years, not enough attention was paid to explaining how models change their behaviour in contrast with other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions concerning past outcomes. In several practical situations, human decision-makers deal with more than one machine learning model. Consequently, the importance of understanding how two machine learning models work beyond their prediction performances is growing, to understand their behavior, their differences, and their likeness. To date, interpretable models are synthesised for explaining black boxes and their predictions and can be beneficial for formally representing and measuring the differences in the retrained model's behaviour in dealing with new and different data. Capturing and understanding such differences is crucial, as the need for trust is key in any application to support human-Artificial Intelligence (AI) decision-making processes. This is the idea of ContrXT, a novel approach that (i) traces the decision criteria of a black box classifier by encoding the changes in the decision logic through Binary Decision Diagrams. Then (ii) it provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why -and to what extent- the model has modified its behaviour over time. We implemented and evaluated this approach over several supervised ML models trained on benchmark datasets and a real-life application, showing it is effective in catching majorly changed classes and in explaining their variation through a user study. The approach has been implemented, and it is available to the community both as a python package and through REST API, providing contrastive explanations as a service.
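ContrXT itself encodes changes in decision logic with Binary Decision Diagrams; the toy sketch below only illustrates the underlying model-contrastive question (for which classes does a retrained model change its predictions?) on synthetic data, and is not the thesis implementation.

```python
# Toy illustration of the model-contrastive idea: compare two model versions
# trained on "old" and "shifted" data and report per-class prediction changes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X_old, y_old = make_classification(n_samples=500, n_features=10, n_classes=3,
                                   n_informative=5, random_state=0)
X_new, y_new = make_classification(n_samples=500, n_features=10, n_classes=3,
                                   n_informative=5, random_state=1)  # "shifted" data

model_v1 = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_old, y_old)
model_v2 = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_new, y_new)

# Compare the two versions on the same reference sample.
pred_v1, pred_v2 = model_v1.predict(X_old), model_v2.predict(X_old)
for c in np.unique(y_old):
    mask = pred_v1 == c
    changed = np.mean(pred_v1[mask] != pred_v2[mask]) if mask.any() else 0.0
    print(f"class {c}: {changed:.1%} of previous predictions changed")
```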
2

Matz, Filip, and Yuxiang Luo. "Explaining Automated Decisions in Practice: Insights from the Swedish Credit Scoring Industry." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300897.

Full text
Abstract:
The field of explainable artificial intelligence (XAI) has gained momentum in recent years following the increased use of AI systems across industries leading to bias, discrimination, and data security concerns. Several conceptual frameworks for how to reach AI systems that are fair, transparent, and understandable have been proposed, as well as a number of technical solutions improving some of these aspects in a research context. However, there is still a lack of studies examining the implementation of these concepts and techniques in practice. This research aims to bridge the gap between prominent theory within the area and practical implementation, exploring the implementation and evaluation of XAI models in the Swedish credit scoring industry, and proposes a three-step framework for the implementation of local explanations in practice. The research methods used consisted of a case study with the model development at UC AB as a subject and an experiment evaluating the consumers' levels of trust and system understanding as well as the usefulness, persuasive power, and usability of the explanation for three different explanation prototypes developed. The framework proposed was validated by the case study and highlighted a number of key challenges and trade-offs present when implementing XAI in practice. Moreover, the evaluation of the XAI prototypes showed that the majority of consumers prefer rule-based explanations, but that preferences for explanations are still dependent on the individual consumer. Recommended future research endeavors include studying a long-term XAI project in which the models can be evaluated by the open market and the combination of different XAI methods in reaching a more personalized explanation for the consumer.

Book chapters on the topic "XAI Interpretability"

1

Dinu, Marius-Constantin, Markus Hofmarcher, Vihang P. Patil, Matthias Dorfer, Patrick M. Blies, Johannes Brandstetter, Jose A. Arjona-Medina, and Sepp Hochreiter. "XAI and Strategy Extraction via Reward Redistribution." In xxAI - Beyond Explainable AI, 177–205. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_10.

Full text
Abstract:
In reinforcement learning, an agent interacts with an environment from which it receives rewards, which are then used to learn a task. However, it is often unclear what strategies or concepts the agent has learned to solve the task. Thus, interpretability of the agent’s behavior is an important aspect in practical applications, next to the agent’s performance at the task itself. However, with the increasing complexity of both tasks and agents, interpreting the agent’s behavior becomes much more difficult. Therefore, developing new interpretable RL agents is of high importance. To this end, we propose to use Align-RUDDER as an interpretability method for reinforcement learning. Align-RUDDER is a method based on the recently introduced RUDDER framework, which relies on contribution analysis of an LSTM model, to redistribute rewards to key events. From these key events a strategy can be derived, guiding the agent’s decisions in order to solve a certain task. More importantly, the key events are in general interpretable by humans, and are often sub-tasks; where solving these sub-tasks is crucial for solving the main task. Align-RUDDER enhances the RUDDER framework with methods from multiple sequence alignment (MSA) to identify key events from demonstration trajectories. MSA needs only a few trajectories in order to perform well, and is much better understood than deep learning models such as LSTMs. Consequently, strategies and concepts can be learned from a few expert demonstrations, where the expert can be a human or an agent trained by reinforcement learning. By substituting RUDDER’s LSTM with a profile model that is obtained from MSA of demonstration trajectories, we are able to interpret an agent at three stages: First, by extracting common strategies from demonstration trajectories with MSA. Second, by encoding the most prevalent strategy via the MSA profile model and therefore explaining the expert’s behavior. And third, by allowing the interpretation of an arbitrary agent’s behavior based on its demonstration trajectories.
2

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Full text
Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures (e.g. neural networks) with post-processing to calculate feature importance. In this paper, a comprehensive overview of predictive models with varying intrinsic complexity are measured based on explainability with model-agnostic quantitative evaluation metrics. To this end, explainability is designed as a symbiosis between interpretability and faithfulness and thereby allowing to compare inherently created explanations (e.g. decision tree rules) with post-hoc explainability techniques (e.g. Shapley values) on top of AI models. Moreover, two improved versions of the logistic regression model capable of capturing non-linear interactions and both inherently generating their own explanations are proposed in the OOPPM context. These models are benchmarked with two common state-of-the-art models with post-hoc explanation techniques in the explainability-performance space.
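The chapter's improved logistic regression models are not reproduced here, but the general idea of an inherently interpretable model with explicit interaction terms can be sketched as follows (synthetic data, hypothetical feature names): the learned coefficients double as the explanation, in contrast to post-hoc techniques applied to a black box.

```python
# Rough sketch (not the authors' models): logistic regression over explicit
# pairwise interaction features; the sorted coefficients are the explanation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
model = make_pipeline(poly, StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

names = poly.get_feature_names_out([f"x{i}" for i in range(X.shape[1])])
coefs = model.named_steps["logisticregression"].coef_[0]
# Print the five terms (single features or interactions) with the largest weights.
for name, c in sorted(zip(names, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {c:+.3f}")
```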
3

Virgolin, Marco, Andrea De Lorenzo, Eric Medvet, and Francesca Randone. "Learning a Formula of Interpretability to Learn Interpretable Formulas." In Parallel Problem Solving from Nature – PPSN XVI, 79–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58115-2_6.

Full text
4

Singh, Chandan, Wooseok Ha, and Bin Yu. "Interpreting and Improving Deep-Learning Models with Reality Checks." In xxAI - Beyond Explainable AI, 229–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_12.

Full text
Abstract:
Recent deep-learning models have achieved impressive predictive performance by learning complex functions of many variables, often at the cost of interpretability. This chapter covers recent work aiming to interpret models by attributing importance to features and feature groups for a single prediction. Importantly, the proposed attributions assign importance to interactions between features, in addition to features in isolation. These attributions are shown to yield insights across real-world domains, including bio-imaging, cosmology image and natural-language processing. We then show how these attributions can be used to directly improve the generalization of a neural network or to distill it into a simple model. Throughout the chapter, we emphasize the use of reality checks to scrutinize the proposed interpretation techniques. (Code for all methods in this chapter is available at github.com/csinva and github.com/Yu-Group, implemented in PyTorch [54].)
5

Mittelstadt, Brent. "Interpretability and Transparency in Artificial Intelligence." In The Oxford Handbook of Digital Ethics. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780198857815.013.20.

Full text
Abstract:
Artificial Intelligence (AI) systems are frequently thought of as opaque, meaning their performance or logic is thought to be inaccessible or incomprehensible to human observers. Models can consist of millions of features connected in a complex web of dependent behaviours. Conveying this internal state and dependencies in a humanly comprehensible way is extremely challenging. Explaining the functionality and behaviour of AI systems in a meaningful and useful way to people designing, operating, regulating, or affected by their outputs is a complex technical, philosophical, and ethical project. Despite this complexity, principles citing ‘transparency’ or ‘interpretability’ are commonly found in ethical and regulatory frameworks addressing technology. This chapter provides an overview of these concepts and of methods designed to explain how AI works. After reviewing key concepts and terminology, two sets of methods are examined: (1) interpretability methods designed to explain and approximate AI functionality and behaviour; and (2) transparency frameworks meant to help assess and provide information about the development, governance, and potential impact of training datasets, models, and specific applications. These methods are analysed in the context of prior work on explanations in the philosophy of science. The chapter closes by introducing a framework of criteria to evaluate the quality and utility of methods in explainable AI (XAI) and to clarify the open challenges facing the field.
6

Kavila, Selvani Deepthi, Rajesh Bandaru, Tanishk Venkat Mahesh Babu Gali, and Jana Shafi. "Analysis of Cardiovascular Disease Prediction Using Model-Agnostic Explainable Artificial Intelligence Techniques." In Advances in Medical Technologies and Clinical Practice, 27–54. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3791-9.ch002.

Full text
Abstract:
The heart is mainly responsible for supplying oxygen and nutrients and pumping blood to the entire body. The diseases that affect the heart or capillaries are known as cardiovascular diseases. In predicting cardiovascular diseases, machine learning and neural network models play a vital role and help in reducing human effort. Though the complex algorithms in machine learning and neural networks help in giving accurate results, the interpretability behind the prediction has become difficult. To understand the reason behind the prediction, explainable artificial intelligence (XAI) is introduced. This chapter aims to perform different machine learning and neural network models for predicting cardiovascular diseases. For the interpretation behind the prediction, the authors used explainable artificial intelligence model-agnostic approaches. Based on experimentation results, the artificial neural network (ANN) with multi-level model gives an accuracy of 87%, which is best compared to other models.
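A generic, model-agnostic explanation step in the spirit of this chapter (not its actual code or data) could look like the following sketch: a small neural network classifier trained on synthetic tabular data is ranked for feature influence with permutation importance.

```python
# Hedged sketch: synthetic clinical-style data, a small MLP, and permutation
# importance as the model-agnostic explanation; not the chapter's experiments.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=13, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
ann.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
result = permutation_importance(ann, X_test, y_test, n_repeats=20, random_state=0)
ranking = result.importances_mean.argsort()[::-1]
print("most influential features:", ranking[:5])
```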
7

Daglarli, Evren. "Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models for Cyber-Physical Systems." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 42–67. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5101-1.ch003.

Full text
Abstract:
Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and the cyber-physical systems (CPS), which are important components of Industry 4.0, are increasingly intensified. However, there are important shortcomings that current deep learning models are currently inadequate. These artificial neural network based models are black box models that generalize the data transmitted to it and learn from the data. Therefore, the relational link between input and output is not observable. For these reasons, it is necessary to make serious efforts on the explanability and interpretability of black box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches to cyber-physical systems will have effects on a high level of virtualization and simulation infrastructure, real-time supply chain, cyber factories with smart machines communicating over the internet, maximizing production efficiency, analysis of service quality and competition level.
8

Dağlarli, Evren. "Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models." In Advances and Applications in Deep Learning. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.92172.

Full text
Abstract:
The explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are trying to deal with the subject with different dimensions and interesting results that have come out. However, we are still at the beginning of the way to understand these types of models. The forthcoming years are expected to be years in which the openness of deep learning models is discussed. In classical artificial intelligence approaches, we frequently encounter deep learning methods available today. These deep learning methods can yield highly effective results according to the data set size, data set quality, the methods used in feature extraction, the hyper parameter set used in deep learning models, the activation functions, and the optimization algorithms. However, there are important shortcomings that current deep learning models are currently inadequate. These artificial neural network-based models are black box models that generalize the data transmitted to it and learn from the data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, it is necessary to make serious efforts on the explainability and interpretability of black box models.

Conference papers on the topic "XAI Interpretability"

1

Alibekov, M. R. "Diagnosis of Plant Biotic Stress by Methods of Explainable Artificial Intelligence." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-728-739.

Full text
Abstract:
Methods for digital image preprocessing, which significantly increase the efficiency of ML methods, and also a number of ML methods and models as a basis for constructing simple and efficient XAI networks for diagnosing plant biotic stresses, have been studied. A complex solution has been built, which includes the following stages: automatic segmentation; feature extraction; classification by ML models. The best classifiers and feature vectors are selected. The study was carried out on the open dataset PlantVillage Dataset. The single-layer perceptron (SLP) trained on a full vector of 92 features (20 statistical, 72 textural) became the best according to the F1-score=93% criterion. The training time on a PC with an Intel Core i5-8300H CPU took 189 minutes. According to the criterion "F1-score/number of features", SLP trained on 7 principal components with F1-score=85% also became the best. Training time - 29 minutes. The criterion "F1-score/number+interpretability of features" favors the selected 9 features and the random forest model, F1-score=83%. The research software package is made in a modern version of Python using the OpenCV and deep learning model libraries, and is suitable for use in precision farming.
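A much-simplified sketch of the "features, then PCA, then simple classifier, then F1" comparison reported above, with synthetic stand-ins for the 92 statistical/textural features; the paper's segmentation and feature-extraction stages are omitted, and sklearn's Perceptron is assumed as a stand-in for the SLP.

```python
# Sketch under stated assumptions: synthetic feature vectors replace the real
# PlantVillage features; one of the compared variants (7 principal components).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=92, n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

slp = make_pipeline(StandardScaler(), PCA(n_components=7), Perceptron(max_iter=1000))
slp.fit(X_train, y_train)
print("F1:", f1_score(y_test, slp.predict(X_test)))
```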
2

Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "Explainable AI for Interpretable Credit Scoring." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101516.

Full text
Abstract:
With the ever-growing achievements in Artificial Intelligence (AI) and the recent boosted enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application, such that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has been recently introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations (i.e. global, local feature-based and local instance-based) that are required by different people in different situations. Evaluation through the use of functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple, consistent as well as satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency and trustworthiness.
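One common way to produce the kind of global explanation this paper mentions is a surrogate model; the sketch below (synthetic data rather than HELOC/LC) distils an XGBoost classifier into a shallow decision tree whose rules are directly readable. This illustrates a generic technique, not the paper's 360-degree framework.

```python
# Global surrogate sketch: fit the interpretable tree to the black box's
# predictions (not the ground truth) and report its fidelity to the black box.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

black_box = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to black box:", accuracy_score(black_box.predict(X), surrogate.predict(X)))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(X.shape[1])]))
```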
3

Fryskowska, Anna, Michal Kedzierski, Damian Wierzbicki, Marcin Gorka, and Natalia Berlinska. "Analysis of imagery interpretability of open sources radar satellite imagery." In XII Conference on Reconnaissance and Electronic Warfare Systems, edited by Piotr Kaniewski. SPIE, 2019. http://dx.doi.org/10.1117/12.2525013.

Full text
4

Walczykowski, Piotr, Marcin Gorka, Michal Kedzierski, Aleksandra Sekrecka, and Marcin Walkowiak. "Evaluation of the interpretability of satellite imagery obtained from open sources of information." In XII Conference on Reconnaissance and Electronic Warfare Systems, edited by Piotr Kaniewski. SPIE, 2019. http://dx.doi.org/10.1117/12.2525019.

Full text