Academic literature on the topic 'Explainable AI (XAI)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explainable AI (XAI).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Explainable AI (XAI)"

1

Thrun, Michael C., Alfred Ultsch, and Lutz Breuer. "Explainable AI Framework for Multivariate Hydrochemical Time Series." Machine Learning and Knowledge Extraction 3, no. 1 (February 4, 2021): 170–204. http://dx.doi.org/10.3390/make3010009.

Abstract:
The understanding of water quality and its underlying processes is important for the protection of aquatic environments. With the rare opportunity of access to a domain expert, an explainable AI (XAI) framework is proposed that is applicable to multivariate time series. The XAI provides explanations that are interpretable by domain experts. In three steps, it combines a data-driven choice of a distance measure with supervised decision trees guided by projection-based clustering. The multivariate time series consists of water quality measurements, including nitrate, electrical conductivity, and twelve other environmental parameters. The relationships between water quality and the environmental parameters are investigated by identifying similar days within a cluster and dissimilar days between clusters. The framework, called DDS-XAI, does not depend on prior knowledge about data structure, and its explanations are tendentially contrastive. The relationships in the data can be visualized by a topographic map representing high-dimensional structures. Two state of the art XAIs called eUD3.5 and iterative mistake minimization (IMM) were unable to provide meaningful and relevant explanations from the three multivariate time series data. The DDS-XAI framework can be swiftly applied to new data. Open-source code in R for all steps of the XAI framework is provided and the steps are structured application-oriented.
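A minimal sketch of the cluster-then-explain pattern the abstract describes (a data-driven distance, clustering of similar days, and a supervised decision tree over the cluster labels), assuming a recent scikit-learn and SciPy; the distance choice, parameters, and feature names are illustrative placeholders, not the paper's DDS-XAI pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AgglomerativeClustering
from sklearn.tree import DecisionTreeClassifier, export_text

# Daily water-quality vectors (rows = days, columns = parameters such as nitrate
# or electrical conductivity); random data stands in for real measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(365, 14))
feature_names = [f"param_{i}" for i in range(X.shape[1])]

# Step 1: a data-driven choice of distance (Euclidean here purely for illustration).
D = squareform(pdist(X, metric="euclidean"))

# Step 2: group similar days using the precomputed distance matrix.
clusters = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(D)

# Step 3: a shallow decision tree over the original features gives a readable
# approximation of what separates the clusters.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, clusters)
print(export_text(tree, feature_names=feature_names))
```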
2

Clement, Tobias, Nils Kemmerzell, Mohamed Abdelaal, and Michael Amberg. "XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process." Machine Learning and Knowledge Extraction 5, no. 1 (January 11, 2023): 78–108. http://dx.doi.org/10.3390/make5010006.

Abstract:
Currently, explainability represents a major barrier that Artificial Intelligence (AI) is facing in regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to start with the development of XAI software and to optimally select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process, including requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude with highlighting new directions for future research.
3

Medianovskyi, Kyrylo, and Ahti-Veikko Pietarinen. "On Explainable AI and Abductive Inference." Philosophies 7, no. 2 (March 23, 2022): 35. http://dx.doi.org/10.3390/philosophies7020035.

Abstract:
Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outlines the key predicament in the current inductive paradigm of ML and the associated XAI techniques, and sketches the desiderata for a truly participatory, second-generation XAI, which is endowed with abduction.
4

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychologic requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychologic theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
5

Burkart, Nadia, Danilo Brajovic, and Marco F. Huber. "Explainable AI: introducing trust and comprehensibility to AI engineering." at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 787–92. http://dx.doi.org/10.1515/auto-2022-0013.

Abstract:
Machine learning (ML) rapidly gains increasing interest due to the continuous improvements in performance. ML is used in many different applications to support human users. The representational power of ML models allows solving difficult tasks, while making them impossible to be understood by humans. This provides room for possible errors and limits the full potential of ML, as it cannot be applied in critical environments. In this paper, we propose employing Explainable AI (xAI) for both model and data set refinement, in order to introduce trust and comprehensibility. Model refinement utilizes xAI for providing insights to inner workings of an ML model, for identifying limitations and for deriving potential improvements. Similarly, xAI is used in data set refinement to detect and resolve problems of the training data.
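One simple way to approximate the kind of model refinement the abstract mentions is to inspect which features a trained model actually relies on. A minimal sketch using scikit-learn's permutation importance; the data set, model, and printed top-10 cutoff are illustrative, not taken from the paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Features the model barely uses are candidates for removal or closer data review.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, mean_imp in ranked[:10]:
    print(f"{name:30s} {mean_imp:+.4f}")
```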
6

Rajabi, Enayat, and Somayeh Kafaie. "Knowledge Graphs and Explainable AI in Healthcare." Information 13, no. 10 (September 28, 2022): 459. http://dx.doi.org/10.3390/info13100459.

Abstract:
Building trust and transparency in healthcare can be achieved using eXplainable Artificial Intelligence (XAI), as it facilitates the decision-making process for healthcare professionals. Knowledge graphs can be used in XAI for explainability by structuring information, extracting features and relations, and performing reasoning. This paper highlights the role of knowledge graphs in XAI models in healthcare, considering a state-of-the-art review. Based on our review, knowledge graphs have been used for explainability to detect healthcare misinformation, adverse drug reactions, drug-drug interactions and to reduce the knowledge gap between healthcare experts and AI-based models. We also discuss how to leverage knowledge graphs in pre-model, in-model, and post-model XAI models in healthcare to make them more explainable.
7

Mishra, Sunny, Amit K. Shukla, and Pranab K. Muhuri. "Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient and Explainable Solution." Axioms 11, no. 10 (September 20, 2022): 489. http://dx.doi.org/10.3390/axioms11100489.

Abstract:
An explainable artificial intelligence (XAI) agent is an autonomous agent that uses a fundamental XAI model at its core to perceive its environment and suggests actions to be performed. One of the significant challenges for these XAI agents is performing their operation efficiently, which is governed by the underlying inference and optimization system. Along similar lines, an Explainable Fuzzy AI Challenge (XFC 2022) competition was launched, whose principal objective was to develop a fully autonomous and optimized XAI algorithm that could play the Python arcade game “Asteroid Smasher”. This research first investigates inference models to implement an efficient (XAI) agent using rule-based fuzzy systems. We also discuss the proposed approach (which won the competition) to attain efficiency in the XAI algorithm. We have explored the potential of the widely used Mamdani- and TSK-based fuzzy inference systems and investigated which model might have a more optimized implementation. Even though the TSK-based model outperforms Mamdani in several applications, no empirical evidence suggests this will also be applicable in implementing an XAI agent. The experimentations are then performed to find a better-performing inference system in a fast-paced environment. The thorough analysis recommends more robust and efficient TSK-based XAI agents than Mamdani-based fuzzy inference systems.
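The Mamdani/TSK distinction the abstract builds on can be illustrated in a few lines: a TSK (Takagi-Sugeno-Kang) system combines crisp rule outputs with a weighted average, while a Mamdani system aggregates fuzzy output sets and defuzzifies them. A minimal single-input sketch in plain NumPy; the membership functions and rule consequents are invented for illustration and are not taken from the competition entry:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0, 1)

x = 0.7                           # crisp input, e.g. normalized distance to an asteroid
w_near = tri(x, 0.0, 0.0, 1.0)    # firing strength of rule "IF distance is near ..."
w_far  = tri(x, 0.0, 1.0, 1.0)    # firing strength of rule "IF distance is far ..."

# TSK: each rule has a crisp consequent; the output is a firing-strength-weighted average.
tsk_out = (w_near * 1.0 + w_far * 0.2) / (w_near + w_far + 1e-12)

# Mamdani: each rule clips a fuzzy output set; aggregate the sets and take the centroid.
u = np.linspace(0, 1, 201)        # output universe, e.g. normalized turn rate
agg = np.maximum(np.minimum(w_near, tri(u, 0.5, 1.0, 1.0)),
                 np.minimum(w_far,  tri(u, 0.0, 0.0, 0.5)))
mamdani_out = (u * agg).sum() / (agg.sum() + 1e-12)   # centroid defuzzification

print(f"TSK output: {tsk_out:.3f}, Mamdani output: {mamdani_out:.3f}")
```

The TSK branch needs no defuzzification step, which is one reason such systems are often cheaper to evaluate in a fast-paced environment.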
8

Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.

Abstract:
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
9

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
10

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science and Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
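The simplification techniques the abstract highlights (knowledge distillation and rule extraction) are often approximated by fitting a small, readable surrogate to the predictions of a larger model. A minimal sketch with scikit-learn; the synthetic data and tree depth are illustrative, not from the review:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# "Black-box" model, standing in for e.g. a claims or pricing model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's own predictions (rule extraction).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the extracted rules agree with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(10)]))
```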

Dissertations / Theses on the topic "Explainable AI (XAI)"

1

Vincenzi, Leonardo. "eXplainable Artificial Intelligence User Experience: contesto e stato dell’arte." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23338/.

Abstract:
The rapid growth of the Artificial Intelligence field, combined with its extremely broad application across many domains in recent years, has led to an ever-greater demand for explainability of Machine Learning systems. In response to this need, the field of eXplainable Artificial Intelligence has taken important steps towards creating systems and methods that make intelligent systems increasingly transparent, and increasingly strict regulation of explainability is expected in the near future in order to guarantee greater fairness and safety in the decisions made by AI. To take a further step forward, the recent field of study XAI UX has as its main objective placing users at the center of the design process for AI systems, by combining the explainability techniques offered by eXplainable AI with the study of UX solutions. The new focus on the user and the need to build multidisciplinary teams, which brings greater communication barriers between people, still demand extensive study both by XAI experts and by the HCI community, and these are currently the main difficulties to be resolved. This thesis provides an up-to-date view of the state of the art of XAI UX, introducing its motivations, its context, and the various research areas it encompasses.
2

Corinaldesi, Marianna. "Explainable AI: tassonomia e analisi di modelli spiegabili per il Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract:
The complexity of Deep Learning models has made it possible to achieve astonishing results in terms of accuracy. This complexity comes both from the non-linear, multi-layer structure of deep neural networks and from their large number of learned parameters. However, it makes it very difficult to explain the decision-making process of a neural network, which in some contexts is nonetheless essential. In fact, to open Deep Learning and Machine Learning technologies to critical sectors as well - that is, sectors in which decisions carry significant weight, such as medicine, economics, politics, the judiciary, and so on - model predictions must be backed by an explanation. Explainable AI (XAI) is the field of study that develops methods for explaining the decisions made by a predictive model. This thesis collects, organizes, and examines the main studies by XAI researchers so as to ease entry into this rapidly developing discipline. It explains what XAI is for, who needs it, and when; presents a taxonomy of the methods currently in use; describes and analyzes the limitations of some of the most successful algorithms: techniques based on gradient ascent on the input, Deconvolutional Neural Networks, CAM and Grad-CAM, LIME, and SHAP; briefly discusses evaluation methods for XAI models; compares training based on sampling in the latent space with training based on computing or estimating the likelihood; and points to three notable open-source libraries for programming explainable models.
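As a concrete illustration of one of the algorithms surveyed in the thesis (Grad-CAM), here is a minimal sketch in PyTorch; the backbone, target layer, and random input are placeholders, assuming a recent torchvision:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # untrained backbone, just to show the mechanics
model.eval()

activations, gradients = {}, {}
target_layer = model.layer4[-1]         # last convolutional block
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

x = torch.randn(1, 3, 224, 224)         # placeholder for a preprocessed image
scores = model(x)
cls = scores.argmax(dim=1).item()       # explain the top-scoring class
model.zero_grad()
scores[0, cls].backward()

# Grad-CAM: weight each feature map by its spatially averaged gradient, sum, and ReLU.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]
print(cam.shape)   # (1, 1, 224, 224)
```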
3

Hammarström, Tobias. "Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424779.

Abstract:
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Methods (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes the CAV-sensitivity to introduce and perform a sensitivity magnitude analysis. Both methods proved useful in discerning between the model’s two decision-making strategies based on either the animal or the noise. However, greater insight into the intricacies of said strategies is desired. Additionally, the methods provided a deeper understanding of the model’s learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise regarding the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods on more complex areas with specialized models and tasks, e.g. oral cancer.
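The core ingredient of TCAV used in the thesis, the Concept Activation Vector, is a linear direction separating concept examples from random examples in a layer's activation space. A small sketch of only that fitting step with scikit-learn; the activations are random placeholders, and the full TCAV score would additionally require directional derivatives of the class logit, which are omitted here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder layer activations: rows are examples, columns are hidden units.
concept_acts = rng.normal(loc=0.5, size=(100, 256))   # e.g. images showing the "noise" concept
random_acts  = rng.normal(loc=0.0, size=(100, 256))   # random counterexamples

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * len(concept_acts) + [0] * len(random_acts))

# The CAV is the normal of the linear decision boundary in activation space.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# A crude per-example sensitivity proxy: projection of new activations onto the CAV.
test_acts = rng.normal(size=(10, 256))
print(test_acts @ cav)
```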
4

Gjeka, Mario. "Uno strumento per le spiegazioni di sistemi di Explainable Artificial Intelligence." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract:
The goal of this thesis is to show the importance of explanations in an intelligent system. The need for explainable and transparent artificial intelligence is growing considerably, a need highlighted by companies' efforts to develop transparent and explainable intelligent software systems.

Books on the topic "Explainable AI (XAI)"

1

Sayed-Mouchaweh, Moamar. Explainable AI Within the Digital Transformation and Cyber Physical Systems: XAI Methods and Applications. Springer International Publishing AG, 2022.

2

Sayed-Mouchaweh, Moamar. Explainable AI Within the Digital Transformation and Cyber Physical Systems: XAI Methods and Applications. Springer International Publishing AG, 2021.


Book chapters on the topic "Explainable AI (XAI)"

1

Montavon, Grégoire, Jacob Kauffmann, Wojciech Samek, and Klaus-Robert Müller. "Explaining the Predictions of Unsupervised Learning Models." In xxAI - Beyond Explainable AI, 117–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_7.

Abstract:
Unsupervised learning is a subfield of machine learning that focuses on learning the structure of data without making use of labels. This implies a different set of learning algorithms than those used for supervised learning, and consequently, also prevents a direct transposition of Explainable AI (XAI) methods from the supervised to the less studied unsupervised setting. In this chapter, we review our recently proposed ‘neuralization-propagation’ (NEON) approach for bringing XAI to workhorses of unsupervised learning such as kernel density estimation and k-means clustering. NEON first converts (without retraining) the unsupervised model into a functionally equivalent neural network so that, in a second step, supervised XAI techniques such as layer-wise relevance propagation (LRP) can be used. The approach is showcased on two application examples: (1) analysis of spending behavior in wholesale customer data and (2) analysis of visual features in industrial and scene images.
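To give a flavour of the neuralization idea (not the chapter's actual propagation rules): a k-means assignment can be rewritten as a comparison of functions that are linear in the input, because the difference ||x − μ_j||² − ||x − μ_k||² drops the ||x||² term, and a simple feature attribution can then be read off the linear part. A rough sketch under these assumptions, with gradient-times-input as a stand-in for LRP:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine

X, _ = load_wine(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize features

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
mu = km.cluster_centers_

x = X[0]                                     # one example to explain
k = km.predict(x[None])[0]                   # its assigned cluster

# For a competing cluster j, the discriminant f(x) = ||x - mu_j||^2 - ||x - mu_k||^2
# is linear in x:  f(x) = w . x + b  with  w = 2*(mu_k - mu_j),  b = ||mu_j||^2 - ||mu_k||^2.
j = min((c for c in range(len(mu)) if c != k),
        key=lambda c: np.linalg.norm(x - mu[c]))   # nearest competing cluster
w = 2 * (mu[k] - mu[j])
b = np.sum(mu[j] ** 2) - np.sum(mu[k] ** 2)

# Crude per-feature attribution of the cluster decision.
relevance = w * x
print("margin:", w @ x + b)
print("top features:", np.argsort(-np.abs(relevance))[:5])
```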
2

Holzinger, Andreas, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek. "xxAI - Beyond Explainable Artificial Intelligence." In xxAI - Beyond Explainable AI, 3–10. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_1.

Abstract:
The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult to comprehend by human experts. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations which highlight what input patterns are most influential in activating hidden units, or are most responsible for a model’s decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI in moving beyond explaining classifiers via such methods, to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-and-answering systems, structured explanations). In addition, we also intend to move beyond simply providing model explanations to directly improving the transparency, efficiency and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for what next areas to focus on within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. This volume is an outcome of the ICML 2020 workshop on “XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.”
3

Gianfagna, Leonida, and Antonio Di Cecco. "Model-Agnostic Methods for XAI." In Explainable AI with Python, 81–113. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68640-6_4.

4

Mamalakis, Antonios, Imme Ebert-Uphoff, and Elizabeth A. Barnes. "Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science." In xxAI - Beyond Explainable AI, 315–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_16.

Abstract:
In recent years, artificial intelligence and specifically artificial neural networks (NNs) have shown great success in solving complex, nonlinear problems in earth sciences. Despite their success, the strategies upon which NNs make decisions are hard to decipher, which prevents scientists from interpreting and building trust in the NN predictions; a highly desired and necessary condition for the further use and exploitation of NNs’ potential. Thus, a variety of methods have been recently introduced with the aim of attributing the NN predictions to specific features in the input space and explaining their strategy. The so-called eXplainable Artificial Intelligence (XAI) is already seeing great application in a plethora of fields, offering promising results and insights about the decision strategies of NNs. Here, we provide an overview of the most recent work from our group, applying XAI to meteorology and climate science. Specifically, we present results from satellite applications that include weather phenomena identification and image to image translation, applications to climate prediction at subseasonal to decadal timescales, and detection of forced climatic changes and anthropogenic footprint. We also summarize a recently introduced synthetic benchmark dataset that can be used to improve our understanding of different XAI methods and introduce objectivity into the assessment of their fidelity. With this overview, we aim to illustrate how gaining accurate insights about the NN decision strategy can help climate scientists and meteorologists improve practices in fine-tuning model architectures, calibrating trust in climate and weather prediction and attribution, and learning new science.
5

Dinu, Marius-Constantin, Markus Hofmarcher, Vihang P. Patil, Matthias Dorfer, Patrick M. Blies, Johannes Brandstetter, Jose A. Arjona-Medina, and Sepp Hochreiter. "XAI and Strategy Extraction via Reward Redistribution." In xxAI - Beyond Explainable AI, 177–205. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_10.

Abstract:
In reinforcement learning, an agent interacts with an environment from which it receives rewards, that are then used to learn a task. However, it is often unclear what strategies or concepts the agent has learned to solve the task. Thus, interpretability of the agent’s behavior is an important aspect in practical applications, next to the agent’s performance at the task itself. However, with the increasing complexity of both tasks and agents, interpreting the agent’s behavior becomes much more difficult. Therefore, developing new interpretable RL agents is of high importance. To this end, we propose to use Align-RUDDER as an interpretability method for reinforcement learning. Align-RUDDER is a method based on the recently introduced RUDDER framework, which relies on contribution analysis of an LSTM model, to redistribute rewards to key events. From these key events a strategy can be derived, guiding the agent’s decisions in order to solve a certain task. More importantly, the key events are in general interpretable by humans, and are often sub-tasks; where solving these sub-tasks is crucial for solving the main task. Align-RUDDER enhances the RUDDER framework with methods from multiple sequence alignment (MSA) to identify key events from demonstration trajectories. MSA needs only a few trajectories in order to perform well, and is much better understood than deep learning models such as LSTMs. Consequently, strategies and concepts can be learned from a few expert demonstrations, where the expert can be a human or an agent trained by reinforcement learning. By substituting RUDDER’s LSTM with a profile model that is obtained from MSA of demonstration trajectories, we are able to interpret an agent at three stages: First, by extracting common strategies from demonstration trajectories with MSA. Second, by encoding the most prevalent strategy via the MSA profile model and therefore explaining the expert’s behavior. And third, by allowing the interpretation of an arbitrary agent’s behavior based on its demonstration trajectories.
6

Gianfagna, Leonida, and Antonio Di Cecco. "Making Science with Machine Learning and XAI." In Explainable AI with Python, 143–64. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68640-6_6.

7

Mishra, Pradeepta. "Counterfactual Explanations for XAI Models." In Practical Explainable AI Using Python, 265–78. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7158-2_10.

8

Becking, Daniel, Maximilian Dreyer, Wojciech Samek, Karsten Müller, and Sebastian Lapuschkin. "ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs." In xxAI - Beyond Explainable AI, 271–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_14.

Abstract:
The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations. Such increases in memory and computational demands make deep learning prohibitive for resource-constrained hardware platforms such as mobile devices. Recent efforts aim to reduce these overheads, while preserving model performance as much as possible, and include parameter reduction techniques, parameter quantization, and lossless compression techniques. In this chapter, we develop and describe a novel quantization paradigm for DNNs: Our method leverages concepts of explainable AI (XAI) and concepts of information theory: Instead of assigning weight values based on their distances to the quantization clusters, the assignment function additionally considers weight relevances obtained from Layer-wise Relevance Propagation (LRP) and the information content of the clusters (entropy optimization). The ultimate goal is to preserve the most relevant weights in quantization clusters of highest information content. Experimental results show that this novel Entropy-Constrained and XAI-adjusted Quantization (ECQˣ) method generates ultra low-precision (2–5 bit) and simultaneously sparse neural networks while maintaining or even improving model performance. Due to reduced parameter precision and high number of zero-elements, the rendered networks are highly compressible in terms of file size, up to 103× compared to the full-precision unquantized DNN model. Our approach was evaluated on different types of models and datasets (including Google Speech Commands, CIFAR-10 and Pascal VOC) and compared with previous work.
9

Tsai, Chun-Hua, and John M. Carroll. "Logic and Pragmatics in AI Explanation." In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.

Abstract:
This paper reviews logical approaches and challenges raised for explaining AI. We discuss the issues of presenting explanations as accurate computational models that users cannot understand or use. Then, we introduce pragmatic approaches that consider explanation a sort of speech act that commits to felicity conditions, including intelligibility, trustworthiness, and usefulness to the users. We argue Explainable AI (XAI) is more than a matter of accurate and complete computational explanation, that it requires pragmatics to address the issues it seeks to address. At the end of this paper, we draw a historical analogy to usability. This term was understood logically and pragmatically, but that has evolved empirically through time to become more prosperous and more functional.
10

Holzinger, Andreas, Anna Saranti, Christoph Molnar, Przemyslaw Biecek, and Wojciech Samek. "Explainable AI Methods - A Brief Overview." In xxAI - Beyond Explainable AI, 13–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_2.

Abstract:
Explainable Artificial Intelligence (xAI) is an established field with a vibrant community that has developed a variety of very successful approaches to explain and interpret predictions of complex machine learning models such as deep neural networks. In this article, we briefly introduce a few selected methods and discuss them in a short, clear and concise way. The goal of this article is to give beginners, especially application engineers and data scientists, a quick overview of the state of the art in this current topic. The following 17 methods are covered in this chapter: LIME, Anchors, GraphLIME, LRP, DTD, PDA, TCAV, XGNN, SHAP, ASV, Break-Down, Shapley Flow, Textual Explanations of Visual Models, Integrated Gradients, Causal Models, Meaningful Perturbations, and X-NeSyL.
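For readers who want to try one of the listed methods directly, here is a minimal SHAP example for a tree-based model, assuming the shap package is installed; the regression data set is only a stand-in:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])   # one attribution per feature per sample

# Global view: which features drive the model's predictions the most.
shap.summary_plot(shap_values, X.iloc[:100], show=False)
```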

Conference papers on the topic "Explainable AI (XAI)"

1

Hughes, Rowan, Cameron Edmond, Lindsay Wells, Mashhuda Glencross, Liming Zhu, and Tomasz Bednarz. "eXplainable AI (XAI)." In SA '20: SIGGRAPH Asia 2020. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3415263.3419166.

2

Ignatiev, Alexey. "Towards Trustable Explainable AI." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/726.

Abstract:
Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges being faced by the area of AI these days. Although the majority of approaches to XAI are of heuristic nature, recent work proposed the use of abductive reasoning to computing provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
3

Palacio, Sebastian, Adriano Lucieri, Mohsin Munir, Sheraz Ahmed, Jorn Hees, and Andreas Dengel. "XAI Handbook: Towards a Unified Framework for Explainable AI." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00420.

4

Čyras, Kristijonas, Antonio Rago, Emanuele Albini, Pietro Baroni, and Francesca Toni. "Argumentative XAI: A Survey." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/600.

Abstract:
Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods. We overview the literature focusing on different types of explanation (intrinsic and post-hoc), different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use. We also lay out a roadmap for future work.
5

Byrne, Ruth M. J. "Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/876.

Abstract:
Counterfactuals about what could have happened are increasingly used in an array of Artificial Intelligence (AI) applications, and especially in explainable AI (XAI). Counterfactuals can aid the provision of interpretable models to make the decisions of inscrutable systems intelligible to developers and users. However, not all counterfactuals are equally helpful in assisting human comprehension. Discoveries about the nature of the counterfactuals that humans create are a helpful guide to maximize the effectiveness of counterfactual use in AI.
6

D'Alterio, Pasquale, Jonathan M. Garibaldi, and Robert I. John. "Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI)." In 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2020. http://dx.doi.org/10.1109/fuzz48607.2020.9177671.

7

Nyre-Yu, Megan, Elizabeth Morris, Michael Smith, Blake Moss, and Charles Smutz. "Explainable AI in Cybersecurity Operations: Lessons Learned from xAI Tool Deployment." In Symposium on Usable Security. Reston, VA: Internet Society, 2022. http://dx.doi.org/10.14722/usec.2022.23014.

8

Ahamed, Aadil, Kamran Alipour, Sateesh Kumar, Severine Soltani, and Michael Pazzani. "Improving Explanations of Image Classification with Ensembles of Learners." In 8th International Conference on Artificial Intelligence and Applications (AI 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121801.

Abstract:
In explainable AI (XAI) for deep learning, saliency maps, heatmaps, or attention maps are commonly used to identify important regions for the classification of images of explanations. Recent research has shown that many common XAI methods do not accurately identify the regions that human experts consider important. We propose averaging explanations from ensembles of learners to increase the accuracy of explanations. Our technique is general and can be used with multiple deep learning architectures and multiple XAI algorithms. We show that this method decreases the difference between regions of interest of XAI algorithms and those identified by human experts. Furthermore, we show that human experts prefer the explanations produced by ensembles to those of individual networks.
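A rough sketch of the averaging idea described above, using plain input gradients as the per-model explanation; the paper combines full XAI methods with trained networks, whereas here untrained torchvision models and a random input merely stand in to show the averaging step:

```python
import torch
from torchvision import models

def saliency(model, x, target):
    """Absolute input-gradient map for one model and one target class."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target]
    score.backward()
    s = x.grad.abs().sum(dim=1, keepdim=True)          # aggregate over colour channels
    return (s - s.min()) / (s.max() - s.min() + 1e-8)  # normalize to [0, 1]

# An "ensemble of learners": different architectures (untrained here for brevity).
ensemble = [models.resnet18(weights=None), models.mobilenet_v2(weights=None)]
for m in ensemble:
    m.eval()

x = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed image
target = 0

# Average the normalized explanations across ensemble members.
maps = [saliency(m, x, target) for m in ensemble]
ensemble_map = torch.stack(maps).mean(dim=0)
print(ensemble_map.shape)   # (1, 1, 224, 224)
```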
9

Guerdan, Luke, Alex Raymond, and Hatice Gunes. "Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00423.

10

Yang, Guang, Arvind Rao, Christine Fernandez-Maloigne, Vince Calhoun, and Gloria Menegaz. "Explainable AI (XAI) In Biomedical Signal and Image Processing: Promises and Challenges." In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897629.
