Journal articles on the topic 'Explainable AI (XAI)'


Consult the top 50 journal articles for your research on the topic 'Explainable AI (XAI).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Thrun, Michael C., Alfred Ultsch, and Lutz Breuer. "Explainable AI Framework for Multivariate Hydrochemical Time Series." Machine Learning and Knowledge Extraction 3, no. 1 (February 4, 2021): 170–204. http://dx.doi.org/10.3390/make3010009.

Abstract:
The understanding of water quality and its underlying processes is important for the protection of aquatic environments. With the rare opportunity of access to a domain expert, an explainable AI (XAI) framework is proposed that is applicable to multivariate time series. The XAI provides explanations that are interpretable by domain experts. In three steps, it combines a data-driven choice of a distance measure with supervised decision trees guided by projection-based clustering. The multivariate time series consists of water quality measurements, including nitrate, electrical conductivity, and twelve other environmental parameters. The relationships between water quality and the environmental parameters are investigated by identifying similar days within a cluster and dissimilar days between clusters. The framework, called DDS-XAI, does not depend on prior knowledge about data structure, and its explanations tend to be contrastive. The relationships in the data can be visualized by a topographic map representing high-dimensional structures. Two state-of-the-art XAIs, called eUD3.5 and iterative mistake minimization (IMM), were unable to provide meaningful and relevant explanations for the three multivariate time series datasets. The DDS-XAI framework can be swiftly applied to new data. Open-source code in R for all steps of the XAI framework is provided, and the steps are structured in an application-oriented manner.
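The paper ships its own R implementation; as a rough illustration of the general pattern it describes (cluster the days, then let a supervised decision tree explain the clusters), a Python sketch might look as follows. The KMeans step, the synthetic data, and the parameter names are simplified stand-ins, not the authors' data-driven distance measure or projection-based clustering.

```python
# Minimal sketch (not the authors' R implementation): cluster daily water-quality
# profiles, then fit a shallow decision tree that "explains" cluster membership.
import numpy as np
from sklearn.cluster import KMeans          # stand-in for projection-based clustering
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical data: 365 days x 14 hydrochemical parameters (nitrate, EC, ...)
X = rng.normal(size=(365, 14))
feature_names = [f"param_{i}" for i in range(14)]  # placeholder names

X_std = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

# Supervised tree guided by the clustering: its rules describe which parameter
# ranges separate "similar days" (same cluster) from "dissimilar days".
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_std, clusters)
print(export_text(tree, feature_names=feature_names))
```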
2

Clement, Tobias, Nils Kemmerzell, Mohamed Abdelaal, and Michael Amberg. "XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process." Machine Learning and Knowledge Extraction 5, no. 1 (January 11, 2023): 78–108. http://dx.doi.org/10.3390/make5010006.

Abstract:
Currently, explainability represents a major barrier that Artificial Intelligence (AI) is facing in regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to start with the development of XAI software and to optimally select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process, including requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude with highlighting new directions for future research.
3

Medianovskyi, Kyrylo, and Ahti-Veikko Pietarinen. "On Explainable AI and Abductive Inference." Philosophies 7, no. 2 (March 23, 2022): 35. http://dx.doi.org/10.3390/philosophies7020035.

Abstract:
Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outlines the key predicament in the current inductive paradigm of ML and the associated XAI techniques, and sketches the desiderata for a truly participatory, second-generation XAI, which is endowed with abduction.
4

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
5

Burkart, Nadia, Danilo Brajovic, and Marco F. Huber. "Explainable AI: introducing trust and comprehensibility to AI engineering." at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 787–92. http://dx.doi.org/10.1515/auto-2022-0013.

Abstract:
Machine learning (ML) is rapidly gaining interest due to continuous improvements in performance. ML is used in many different applications to support human users. The representational power of ML models allows them to solve difficult tasks while making them impossible for humans to understand. This leaves room for errors and limits the full potential of ML, as it cannot be applied in critical environments. In this paper, we propose employing Explainable AI (xAI) for both model and data set refinement, in order to introduce trust and comprehensibility. Model refinement utilizes xAI to provide insights into the inner workings of an ML model, to identify limitations, and to derive potential improvements. Similarly, xAI is used in data set refinement to detect and resolve problems in the training data.
6

Rajabi, Enayat, and Somayeh Kafaie. "Knowledge Graphs and Explainable AI in Healthcare." Information 13, no. 10 (September 28, 2022): 459. http://dx.doi.org/10.3390/info13100459.

Abstract:
Building trust and transparency in healthcare can be achieved using eXplainable Artificial Intelligence (XAI), as it facilitates the decision-making process for healthcare professionals. Knowledge graphs can be used in XAI for explainability by structuring information, extracting features and relations, and performing reasoning. This paper highlights the role of knowledge graphs in XAI models in healthcare, considering a state-of-the-art review. Based on our review, knowledge graphs have been used for explainability to detect healthcare misinformation, adverse drug reactions, drug-drug interactions and to reduce the knowledge gap between healthcare experts and AI-based models. We also discuss how to leverage knowledge graphs in pre-model, in-model, and post-model XAI models in healthcare to make them more explainable.
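As a toy illustration of the idea (not taken from the paper), a minimal knowledge graph can back up a model's drug-interaction flag with an explicit reasoning path; the entities and relations below are illustrative examples only.

```python
# Toy illustration: a tiny healthcare knowledge graph used to justify a model
# output with an explicit path, e.g. a drug-drug interaction.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("warfarin", "CYP2C9", relation="metabolized_by")
kg.add_edge("fluconazole", "CYP2C9", relation="inhibits")
kg.add_edge("CYP2C9", "bleeding_risk", relation="modulates")

def explain_interaction(graph, drug_a, drug_b, target):
    """Return shared-node paths that can back up an 'interaction' prediction."""
    reasons = []
    for node in set(graph.successors(drug_a)) & set(graph.successors(drug_b)):
        if nx.has_path(graph, node, target):
            reasons.append((drug_a, node, drug_b, target))
    return reasons

print(explain_interaction(kg, "warfarin", "fluconazole", "bleeding_risk"))
```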
7

Mishra, Sunny, Amit K. Shukla, and Pranab K. Muhuri. "Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient and Explainable Solution." Axioms 11, no. 10 (September 20, 2022): 489. http://dx.doi.org/10.3390/axioms11100489.

Abstract:
An explainable artificial intelligence (XAI) agent is an autonomous agent that uses a fundamental XAI model at its core to perceive its environment and suggests actions to be performed. One of the significant challenges for these XAI agents is performing their operation efficiently, which is governed by the underlying inference and optimization system. Along similar lines, an Explainable Fuzzy AI Challenge (XFC 2022) competition was launched, whose principal objective was to develop a fully autonomous and optimized XAI algorithm that could play the Python arcade game “Asteroid Smasher”. This research first investigates inference models to implement an efficient XAI agent using rule-based fuzzy systems. We also discuss the proposed approach (which won the competition) to attain efficiency in the XAI algorithm. We have explored the potential of the widely used Mamdani- and TSK-based fuzzy inference systems and investigated which model might have a more optimized implementation. Even though the TSK-based model outperforms Mamdani in several applications, no empirical evidence suggests that this also holds when implementing an XAI agent. Experiments are then performed to find the better-performing inference system in a fast-paced environment. The thorough analysis recommends TSK-based XAI agents as more robust and efficient than Mamdani-based fuzzy inference systems.
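For readers unfamiliar with the two inference styles compared here, a zero-order TSK (Sugeno) system reduces to a firing-strength-weighted average of constant rule consequents, which is part of why it tends to be computationally cheaper than Mamdani defuzzification. The sketch below uses invented rules, not the winning agent's rule base.

```python
# Minimal zero-order TSK (Sugeno) inference sketch: the output is a weighted
# average of constant rule consequents, weighted by each rule's firing strength.
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tsk_turn_rate(distance, bearing):
    # Hypothetical rules for an "Asteroid Smasher"-like agent (illustrative only).
    rules = [
        (tri(distance, 0, 10, 60) * tri(bearing, -180, -90, 0), -45.0),  # close & left -> turn right
        (tri(distance, 0, 10, 60) * tri(bearing, 0, 90, 180),    45.0),  # close & right -> turn left
        (tri(distance, 40, 150, 300) * tri(bearing, -45, 0, 45),  0.0),  # far & ahead -> hold course
    ]
    num = sum(w * y for w, y in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(tsk_turn_rate(distance=30.0, bearing=-40.0))  # fires the "close & left" rule
```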
8

Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.

Abstract:
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
9

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
10

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science and Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
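One concrete form of the simplification methods the review highlights is a global surrogate: train a complex model, then fit a shallow decision tree to its predictions and read the tree off as rules. A hedged sketch with synthetic, invented claim features follows.

```python
# Global surrogate sketch: approximate a black-box claims model with a shallow,
# rule-like decision tree trained on the black box's own predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.integers(18, 80, n),        # policyholder_age (synthetic)
    rng.exponential(2000.0, n),     # claim_amount (synthetic)
    rng.integers(0, 5, n),          # prior_claims (synthetic)
])
y = ((X[:, 1] > 3000) & (X[:, 2] >= 2)).astype(int)  # synthetic "refer for review" label

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not on the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=["policyholder_age", "claim_amount", "prior_claims"]))
```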
11

Tosun, Akif B., Filippo Pullara, Michael J. Becich, D. Lansing Taylor, Jeffrey L. Fine, and S. Chakra Chennubhotla. "Explainable AI (xAI) for Anatomic Pathology." Advances in Anatomic Pathology 27, no. 4 (July 2020): 241–50. http://dx.doi.org/10.1097/pap.0000000000000264.

12

Wolf, Christine T., and Kathryn E. Ringland. "Designing accessible, explainable AI (XAI) experiences." ACM SIGACCESS Accessibility and Computing, no. 125 (March 2, 2020): 1. http://dx.doi.org/10.1145/3386296.3386302.

13

Hafermalz, Ella, and Marleen Huysman. "Please Explain: Key Questions for Explainable AI research from an Organizational perspective." Morals & Machines 1, no. 2 (2021): 10–23. http://dx.doi.org/10.5771/2747-5174-2021-2-10.

Abstract:
There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) have different assumptions and aims. Further, the organizational perspective is missing from this discourse. In response we formulate key questions for explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? and 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prompt collaboration across disciplines working on Explainable AI.
14

Connie, Tee, Yee Fan Tan, Michael Kah Ong Goh, Hock Woon Hon, Zulaikha Kadim, and Li Pei Wong. "Explainable health prediction from facial features with transfer learning." Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 2491–503. http://dx.doi.org/10.3233/jifs-211737.

Abstract:
In recent years, Artificial Intelligence (AI) has been widely deployed in the healthcare industry. The new AI technology enables efficient and personalized healthcare systems for the public. In this paper, transfer learning with a pre-trained VGGFace model is applied to identify sick symptoms based on the facial features of a person. As the way the deep learning model arrives at a decision is unknown, this paper investigates the use of Explainable AI (XAI) techniques for soliciting explanations for the predictions made by the model. Various XAI techniques, including Integrated Gradients, Explainable region-based AI (XRAI) and Local Interpretable Model-Agnostic Explanations (LIME), are studied. XAI is crucial to increase the model’s transparency and reliability for practical deployment. Experimental results demonstrate that the attribution methods can give proper explanations for the decisions made by highlighting important attributes in the images. The facial features that account for positive and negative class predictions are highlighted appropriately for effective visualization. XAI can help to increase the accountability and trustworthiness of the healthcare system, as it provides insights for understanding how a conclusion is derived from the AI model.
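Of the attribution techniques named above, Integrated Gradients is the easiest to sketch: it accumulates gradients along a straight path from a baseline image to the input. The snippet below is a generic PyTorch approximation with a stand-in classifier and a black baseline, not the paper's VGGFace pipeline.

```python
# Minimal Integrated Gradients sketch (PyTorch): approximate the path integral of
# gradients from a baseline to the input with a Riemann sum.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # stand-in for the VGGFace-based classifier
image = torch.rand(1, 3, 224, 224)             # hypothetical face image tensor
baseline = torch.zeros_like(image)             # black-image baseline
target_class = 1                               # e.g. a hypothetical "sick" class index
steps = 50

total_grads = torch.zeros_like(image)
for k in range(1, steps + 1):
    alpha = k / steps
    point = baseline + alpha * (image - baseline)
    point.requires_grad_(True)
    score = model(point)[0, target_class]
    grad, = torch.autograd.grad(score, point)
    total_grads += grad

# IG = (input - baseline) * average gradient along the path
attributions = (image - baseline) * total_grads / steps
print(attributions.abs().sum().item())         # crude overall attribution magnitude
```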
15

Combs, Kara, Mary Fendley, and Trevor Bihl. "A Preliminary Look at Heuristic Analysis for Assessing Artificial Intelligence Explainability." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 8 (June 1, 2020): 61–72. http://dx.doi.org/10.37394/232018.2020.8.9.

Abstract:
Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks understanding of algorithmic explainability from a human factors’ perspective. This paper presents a repeatable human factors heuristic analysis for XAI with a demonstration on four decision tree classifier algorithms.
16

Han, Seung-Ho, Min-Su Kwon, and Ho-Jin Choi. "EXplainable AI (XAI) approach to image captioning." Journal of Engineering 2020, no. 13 (July 1, 2020): 589–94. http://dx.doi.org/10.1049/joe.2019.1217.

17

Mohseni, Sina, Niloofar Zarei, and Eric D. Ragan. "A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems." ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (December 31, 2021): 1–45. http://dx.doi.org/10.1145/3387166.

Abstract:
The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods. Our categorization presents the mapping between design goals for different XAI user groups and their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
18

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
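As a small illustration of the coupling the review describes, the sketch below pairs a random forest with LIME on synthetic data; the feature names and target are invented placeholders, not a real hydroclimatic dataset.

```python
# Sketch: coupling a tree-ensemble predictor with LIME, one of the explanatory
# methods named in the review (synthetic data and feature names for illustration).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(2)
feature_names = ["precip_mm", "temp_c", "humidity", "soil_moisture"]   # hypothetical predictors
X = rng.normal(size=(1000, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)   # synthetic streamflow-like target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)
print(explanation.as_list())   # local, per-feature contributions for this one prediction
```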
19

Weitz, Katharina. "Towards Human-Centered AI: Psychological concepts as foundation for empirical XAI research." it - Information Technology 64, no. 1-2 (November 17, 2021): 71–75. http://dx.doi.org/10.1515/itit-2021-0047.

Abstract:
Human-Centered AI is a widely requested goal for AI applications. To reach it, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive these. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.
20

de Lange, Petter Eilif, Borger Melsom, Christian Bakke Vennerød, and Sjur Westgaard. "Explainable AI for Credit Assessment in Banks." Journal of Risk and Financial Management 15, no. 12 (November 28, 2022): 556. http://dx.doi.org/10.3390/jrfm15120556.

Abstract:
Banks’ credit scoring models are required by financial authorities to be explainable. This paper proposes an explainable artificial intelligence (XAI) model for predicting credit default on a unique dataset of unsecured consumer loans provided by a Norwegian bank. We combined a LightGBM model with SHAP, which enables the interpretation of explanatory variables affecting the predictions. The LightGBM model clearly outperforms the bank’s actual credit scoring model (Logistic Regression). We found that the most important explanatory variables for predicting default in the LightGBM model are the volatility of utilized credit balance, remaining credit in percentage of total credit and the duration of the customer relationship. Our main contribution is the implementation of XAI methods in banking, exploring how these methods can be applied to improve the interpretability and reliability of state-of-the-art AI models. We also suggest a method for analyzing the potential economic value of an improved credit scoring model.
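The core recipe stated in the abstract (a LightGBM classifier explained with SHAP) can be sketched in a few lines; the loan features and labels below are synthetic placeholders rather than the bank's data.

```python
# Sketch of the general recipe: a LightGBM default-probability model explained
# with SHAP values (synthetic loan features stand in for the real data).
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(3)
feature_names = ["balance_volatility", "remaining_credit_pct", "relationship_years"]  # illustrative
X = rng.normal(size=(2000, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)  # synthetic default flag

model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # per-observation, per-feature contributions
# Global importance as mean |SHAP| per feature (class-1 contributions for binary output)
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
for name, importance in zip(feature_names, np.abs(vals).mean(axis=0)):
    print(f"{name}: {importance:.4f}")
```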
21

Dikmen, Murat, and Catherine Burns. "Abstraction Hierarchy Based Explainable Artificial Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 319–23. http://dx.doi.org/10.1177/1071181320641073.

Abstract:
This work explores the application of Cognitive Work Analysis (CWA) in the context of Explainable Artificial Intelligence (XAI). We built an AI system using a loan evaluation data set and applied an XAI technique to obtain data-driven explanations for predictions. Using an Abstraction Hierarchy (AH), we generated domain knowledge-based explanations to accompany data-driven explanations. An online experiment was conducted to test the usefulness of AH-based explanations. Participants read financial profiles of loan applicants, the AI system’s loan approval/rejection decisions, and explanations that justify the decisions. The presence or absence of AH-based explanations was manipulated, and participants’ perceptions of the explanation quality were measured. The results showed that providing AH-based explanations helped participants learn about the loan evaluation process and improved the perceived quality of explanations. We conclude that a CWA approach can increase understandability when explaining the decisions made by AI systems.
22

Liao, Q. Vera, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, and Amit Dhurandhar. "Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 10, no. 1 (October 14, 2022): 147–59. http://dx.doi.org/10.1609/hcomp.v10i1.21995.

Abstract:
Recent years have seen a surge of interest in the field of explainable AI (XAI), with a plethora of algorithms proposed in the literature. However, a lack of consensus on how to evaluate XAI hinders the advancement of the field. We highlight that XAI is not a monolithic set of technologies: researchers and practitioners have begun to leverage XAI algorithms to build XAI systems that serve different usage contexts, such as model debugging and decision-support. Algorithmic research of XAI, however, often does not account for these diverse downstream usage contexts, resulting in limited effectiveness or even unintended consequences for actual users, as well as difficulties for practitioners to make technical choices. We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts. Towards this goal, we introduce a perspective of contextualized XAI evaluation by considering the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI. To explore the context dependency of XAI evaluation criteria, we conduct two survey studies, one with XAI topical experts and another with crowd workers. Our results urge for responsible AI research with usage-informed evaluation practices, and provide a nuanced understanding of user requirements for XAI in different usage contexts.
23

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are considered black box models that lack explainability, there is an increasing trend of attempting to create medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. Various XAI-driven healthcare approaches and their performances are discussed in the current study. The toolkits used for local and global post hoc explainability, and the multiple techniques for explainability pertaining to Rationale, Data, and Performance explainability, are also discussed. Methods. The explainability of artificial intelligence models in the healthcare domain is implemented through Local Interpretable Model-Agnostic Explanations and Shapley Additive Explanations for better comprehensibility of the internal working mechanism of the original AI models and of the correlations among the features that influence the model’s decision. Results. Research findings on the current state-of-the-art XAI-based technologies, and on future technologies enabled by XAI, are reported across various implementation aspects, including research challenges and limitations of existing models. The role of XAI in the healthcare domain, ranging from early prediction of future illness to smart diagnosis of disease, is discussed. The metrics considered in evaluating a model’s explainability are presented, along with various explainability tools. Three case studies on the role of XAI in the healthcare domain, with their performances, are incorporated for better comprehensibility. Conclusion. The future perspective of XAI in healthcare will assist in obtaining research insights in the healthcare domain.
24

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract:
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in various areas, all in one place. This study clarified how model interpretability and Explainable AI work together. This paper aims to investigate the Explainable artificial intelligence approaches their applications in multiple domains. In specific, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes on Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to solve the challenges corresponding to various businesses. This article creates a scenario of significance of explainable artificial intelligence in vast number of disciplines.
25

Ahmed, Saad Bin, Roberto Solis-Oba, and Lucian Ilie. "Explainable-AI in Automated Medical Report Generation Using Chest X-ray Images." Applied Sciences 12, no. 22 (November 18, 2022): 11750. http://dx.doi.org/10.3390/app122211750.

Abstract:
The use of machine learning in healthcare has the potential to revolutionize virtually every aspect of the industry. However, the lack of transparency in AI applications may lead to the problem of trustworthiness and reliability of the information provided by these applications. Medical practitioners rely on such systems for clinical decision making, but without adequate explanations, diagnosis made by these systems cannot be completely trusted. Explainability in Artificial Intelligence (XAI) aims to improve our understanding of why a given output has been produced by an AI system. Automated medical report generation is one area that would benefit greatly from XAI. This survey provides an extensive literature review on XAI techniques used in medical image analysis and automated medical report generation. We present a systematic classification of XAI techniques used in this field, highlighting the most important features of each one that could be used by future research to select the most appropriate XAI technique to create understandable and reliable explanations for decisions made by AI systems. In addition to providing an overview of the state of the art in this area, we identify some of the most important issues that need to be addressed and on which research should be focused.
26

Holzinger, Andreas. "Explainable AI and Multi-Modal Causability in Medicine." i-com 19, no. 3 (December 1, 2020): 171–79. http://dx.doi.org/10.1515/icom-2020-0024.

Abstract:
Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex “black boxes”, which makes it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g., to highlight which input parameters are relevant for a result; however, in the medical domain there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability to causability and to allow a domain expert to ask questions to understand why an AI came up with a result, and also to ask “what-if” questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result.
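The "what-if" questions mentioned above are counterfactual queries; in the simplest toy form, one can search for the smallest single-feature change that flips a model's output. The sketch below is purely illustrative and not a clinical method.

```python
# Toy counterfactual ("what-if") search: find the smallest single-feature change
# that flips a classifier's prediction for one record.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)           # synthetic outcome
model = LogisticRegression().fit(X, y)

def single_feature_counterfactual(model, x, candidate_values=np.linspace(-3, 3, 61)):
    """Return (feature index, new value, cost) of the cheapest flip found, or None."""
    original = model.predict(x.reshape(1, -1))[0]
    best = None
    for j in range(x.size):
        for v in candidate_values:
            x_cf = x.copy()
            x_cf[j] = v
            if model.predict(x_cf.reshape(1, -1))[0] != original:
                cost = abs(v - x[j])
                if best is None or cost < best[2]:
                    best = (j, v, cost)
    return best

print(single_feature_counterfactual(model, X[0]))
```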
27

Landgrebe, Jobst. "Certifiable AI." Applied Sciences 12, no. 3 (January 20, 2022): 1050. http://dx.doi.org/10.3390/app12031050.

Abstract:
Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called ‘explainable AI’ (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term ‘explanation’ to mean something else, namely: ‘interpretation’. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.
28

Marques-Silva, Joao, and Alexey Ignatiev. "Delivering Trustworthy AI through Formal XAI." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12342–50. http://dx.doi.org/10.1609/aaai.v36i11.21499.

Abstract:
The deployment of systems of artificial intelligence (AI) in high-risk settings warrants the need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and regulations, but also by recommendations from OECD and UNESCO, among several other examples. One critical premise of trustworthy AI involves the necessity of finding explanations that offer reliable guarantees of soundness. This paper argues that the best known eXplainable AI (XAI) approaches fail to provide sound explanations, or that alternatively find explanations which can exhibit significant redundancy. The solution to these drawbacks are explanation approaches that offer formal guarantees of rigor. These formal explanations are not only sound but guarantee irredundancy. This paper summarizes the recent developments in the emerging discipline of formal XAI. The paper also outlines existing challenges for formal XAI.
29

Kim, Mi-Young, Shahin Atakishiyev, Housam Khalifa Bashier Babiker, Nawshad Farruque, Randy Goebel, Osmar R. Zaïane, Mohammad-Hossein Motallebi, et al. "A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence." Machine Learning and Knowledge Extraction 3, no. 4 (November 18, 2021): 900–921. http://dx.doi.org/10.3390/make3040045.

Abstract:
The rapid growth of research in explainable artificial intelligence (XAI) follows on two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, have created high expectations for industrial, commercial, and social value. Second, the emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver any set of tools and techniques to address the XAI demand. As some surveys of current XAI suggest, there is yet to appear a principled framework that respects the literature of explainability in the history of science and which provides a basis for the development of a framework for transparent XAI. We identify four foundational components, including the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjusting explanations based on knowledge of the explainee, and (4) exploiting the advantage of interactive explanation. With those four components in mind, we intend to provide a strategic inventory of XAI requirements, demonstrate their connection to a basic history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.
30

Lorente, Maria Paz Sesmero, Elena Magán Lopez, Laura Alvarez Florez, Agapito Ledezma Espino, José Antonio Iglesias Martínez, and Araceli Sanchis de Miguel. "Explaining Deep Learning-Based Driver Models." Applied Sciences 11, no. 8 (April 7, 2021): 3321. http://dx.doi.org/10.3390/app11083321.

Abstract:
Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with “black box” AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) appears to be able to address and solve this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing according to the development of AI techniques. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of explainable artificial intelligence techniques in the understanding of some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reasons for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of emotions and distractions of drivers. The results obtained are promising and show the capacity of explainable artificial intelligence techniques in the different tasks of the proposed environments.
31

Calegari, Roberta, Giovanni Ciatto, and Andrea Omicini. "On the integration of symbolic and sub-symbolic techniques for XAI: A survey." Intelligenza Artificiale 14, no. 1 (September 17, 2020): 7–32. http://dx.doi.org/10.3233/ia-190036.

Abstract:
The more intelligent systems based on sub-symbolic techniques pervade our everyday lives, the less human can understand them. This is why symbolic approaches are getting more and more attention in the general effort to make AI interpretable, explainable, and trustable. Understanding the current state of the art of AI techniques integrating symbolic and sub-symbolic approaches is then of paramount importance, nowadays—in particular in the XAI perspective. This is why this paper provides an overview of the main symbolic/sub-symbolic integration techniques, focussing in particular on those targeting explainable AI systems.
32

Ibne Mamun, Tauseef, Lamia Alam, Robert R. Hoffman, and Shane T. Mueller. "Assessing Satisfaction in and Understanding of a Collaborative Explainable AI (Cxai) System through User Studies." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (September 2022): 1270–74. http://dx.doi.org/10.1177/1071181322661428.

Abstract:
Modern artificial intelligence (AI) and machine learning (ML) systems have become more capable and more widely used, but often involve underlying processes their users do not understand and may not trust. Some researchers have addressed this by developing algorithms that help explain the workings of the system using ‘Explainable’ AI algorithms (XAI), but these have not always been successful in improving their understanding. Alternatively, collaborative user-driven explanations may address the needs of users, augmenting or replacing algorithmic explanations. We evaluate one such approach called “collaborative explainable AI” (CXAI). Across two experiments, we examined CXAI to assess whether users’ mental models, performance, and satisfaction improved with access to user-generated explanations. Results showed that collaborative explanations afforded users a better understanding of and satisfaction with the system than users without access to the explanations, suggesting that a CXAI system may provide a useful support that more dominant XAI approaches do not.
33

Sarp, Salih, Murat Kuzlu, Emmanuel Wilson, Umit Cali, and Ozgur Guler. "The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification." Electronics 10, no. 12 (June 11, 2021): 1406. http://dx.doi.org/10.3390/electronics10121406.

Abstract:
Artificial Intelligence (AI) has been among the most emerging research and industrial application fields, especially in the healthcare domain, but operated as a black-box model with a limited understanding of its inner working over the past decades. AI algorithms are, in large part, built on weights calculated as a result of large matrix multiplications. It is typically hard to interpret and debug the computationally intensive processes. Explainable Artificial Intelligence (XAI) aims to solve black-box and hard-to-debug approaches through the use of various techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds through the use of transfer learning and fully connected layers. Classified chronic wound images serve as input to the XAI model for an explanation. Interpretable results can help shed new perspectives to clinicians during the diagnostic phase. The proposed method successfully provides chronic wound classification and its associated explanation to extract additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians. This hybrid approach is shown to aid with the interpretation and understanding of AI decision-making processes.
34

Khrais, Laith T. "Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce." Future Internet 12, no. 12 (December 8, 2020): 226. http://dx.doi.org/10.3390/fi12120226.

Abstract:
The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalization and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems in e-commerce, its ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted the use of word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of the idea of explainability as has been utilized by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggested that, to deploy explainable XAI systems, ML models should be improved, making them interpretable and comprehensible.
35

Veitch, Erik, and Ole Andreas Alsos. "Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles." Journal of Marine Science and Engineering 9, no. 11 (November 6, 2021): 1227. http://dx.doi.org/10.3390/jmse9111227.

Abstract:
Explainable Artificial Intelligence (XAI) for Autonomous Surface Vehicles (ASVs) addresses developers’ needs for model interpretation, understandability, and trust. As ASVs approach wide-scale deployment, these needs are expanded to include end user interactions in real-world contexts. Despite recent successes of technology-centered XAI for enhancing the explainability of AI techniques to expert users, these approaches do not necessarily carry over to non-expert end users. Passengers, other vessels, and remote operators will have XAI needs distinct from those of expert users targeted in a traditional technology-centered approach. We formulate a concept called ‘human-centered XAI’ to address emerging end user interaction needs for ASVs. To structure the concept, we adopt a model-based reasoning method for concept formation consisting of three processes: analogy, visualization, and mental simulation, drawing from examples of recent ASV research at the Norwegian University of Science and Technology (NTNU). The examples show how current research activities point to novel ways of addressing XAI needs for distinct end user interactions and underpin the human-centered XAI approach. Findings show how representations of (1) usability, (2) trust, and (3) safety make up the main processes in human-centered XAI. The contribution is the formation of human-centered XAI to help advance the research community’s efforts to expand the agenda of interpretability, understandability, and trust to include end user ASV interactions.
36

Mankodiya, Harsh, Dhairya Jadav, Rajesh Gupta, Sudeep Tanwar, Wei-Chiang Hong, and Ravi Sharma. "OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles." Applied Sciences 12, no. 11 (May 24, 2022): 5310. http://dx.doi.org/10.3390/app12115310.

Abstract:
In recent years, artificial intelligence (AI) has become one of the most prominent fields in autonomous vehicles (AVs). With the help of AI, the stress levels of drivers have been reduced, as most of the work is executed by the AV itself. With the increasing complexity of models, explainable artificial intelligence (XAI) techniques work as handy tools that allow naive people and developers to understand the intricate workings of deep learning models. These techniques can be paralleled to AI to increase their interpretability. One essential task of AVs is to be able to follow the road. This paper attempts to justify how AVs can detect and segment the road on which they are moving using deep learning (DL) models. We trained and compared three semantic segmentation architectures for the task of pixel-wise road detection. Max IoU scores of 0.9459 and 0.9621 were obtained on the train and test set. Such DL algorithms are called “black box models” as they are hard to interpret due to their highly complex structures. Integrating XAI enables us to interpret and comprehend the predictions of these abstract models. We applied various XAI methods and generated explanations for the proposed segmentation model for road detection in AVs.
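The reported 0.9459 and 0.9621 figures are intersection-over-union (IoU) scores; for a binary road mask the metric is a few array operations. A minimal sketch (the masks and shapes are illustrative, not the paper's pipeline):

```python
# Pixel-wise IoU for a binary road-segmentation mask.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return intersection / union if union > 0 else 1.0   # two empty masks count as a perfect match

pred = np.zeros((4, 4)); pred[1:3, :] = 1      # predicted road pixels (8 pixels)
true = np.zeros((4, 4)); true[1:4, :] = 1      # ground-truth road pixels (12 pixels)
print(iou(pred, true))                          # intersection 8 / union 12 = 0.666...
```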
37

Nwakanma, Cosmas Ifeanyi, Love Allen Chijioke Ahakonye, Judith Nkechinyere Njoku, Jacinta Chioma Odirichukwu, Stanley Adiele Okolie, Chinebuli Uzondu, Christiana Chidimma Ndubuisi Nweke, and Dong-Seong Kim. "Explainable Artificial Intelligence (XAI) for Intrusion Detection and Mitigation in Intelligent Connected Vehicles: A Review." Applied Sciences 13, no. 3 (January 17, 2023): 1252. http://dx.doi.org/10.3390/app13031252.

Abstract:
The potential for an intelligent transportation system (ITS) has been made possible by the growth of the Internet of things (IoT) and artificial intelligence (AI), resulting in the integration of IoT and ITS, known as the Internet of vehicles (IoV). To achieve the goal of automatic driving and efficient mobility, IoV is now combined with modern communication technologies (such as 5G) to achieve intelligent connected vehicles (ICVs). However, IoV is challenged with security risks in the following five (5) domains: ICV security, intelligent device security, service platform security, V2X communication security, and data security. Numerous AI models have been developed to mitigate the impact of intrusion threats on ICVs. On the other hand, the rise in explainable AI (XAI) results from the requirement to inject confidence, transparency, and repeatability into the development of AI for the security of ICVs and to provide a safe ITS. As a result, the scope of this review covered the XAI models used in ICV intrusion detection systems (IDSs), their taxonomies, and outstanding research problems. The results of the study show that XAI, though its application to ICVs is in its infancy, is a promising research direction in the quest to improve the network efficiency of ICVs. The paper further reveals that XAI’s increased transparency will foster its acceptability in the automobile industry.
38

Matin, Sahar S., and Biswajeet Pradhan. "Earthquake-Induced Building-Damage Mapping Using Explainable AI (XAI)." Sensors 21, no. 13 (June 30, 2021): 4489. http://dx.doi.org/10.3390/s21134489.

Abstract:
Building-damage mapping using remote sensing images plays a critical role in providing quick and accurate information for the first responders after major earthquakes. In recent years, there has been an increasing interest in generating post-earthquake building-damage maps automatically using different artificial intelligence (AI)-based frameworks. These frameworks in this domain are promising, yet not reliable for several reasons, including but not limited to the site-specific design of the methods, the lack of transparency in the AI-model, the lack of quality in the labelled image, and the use of irrelevant descriptor features in building the AI-model. Using explainable AI (XAI) can lead us to gain insight into identifying these limitations and therefore, to modify the training dataset and the model accordingly. This paper proposes the use of SHAP (Shapley additive explanation) to interpret the outputs of a multilayer perceptron (MLP)—a machine learning model—and analyse the impact of each feature descriptor included in the model for building-damage assessment to examine the reliability of the model. In this study, a post-event satellite image from the 2018 Palu earthquake was used. The results show that MLP can classify the collapsed and non-collapsed buildings with an overall accuracy of 84% after removing the redundant features. Further, spectral features are found to be more important than texture features in distinguishing the collapsed and non-collapsed buildings. Finally, we argue that constructing an explainable model would help to understand the model’s decision to classify the buildings as collapsed and non-collapsed and open avenues to build a transferable AI model.
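A minimal sketch of the setup described above, with an MLP classifier explained by SHAP's model-agnostic KernelExplainer and features ranked by mean absolute SHAP value; the feature names and data are invented stand-ins for the study's spectral and texture descriptors.

```python
# Sketch: an MLP classifier for collapsed vs. non-collapsed buildings explained
# with SHAP, then features ranked by mean |SHAP| to spot redundant descriptors.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
feature_names = ["red_mean", "nir_mean", "ndvi", "glcm_contrast", "glcm_entropy"]  # illustrative
X = rng.normal(size=(600, 5))
y = (X[:, 1] - X[:, 2] > 0).astype(int)                  # synthetic collapsed / non-collapsed label

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

background = shap.sample(X, 50)                          # small background set keeps KernelSHAP tractable
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X[:25])              # explain a subset; KernelSHAP is slow

vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
ranking = sorted(zip(feature_names, np.abs(vals).mean(axis=0)), key=lambda t: -t[1])
print(ranking)   # low-ranked features are candidates for removal as redundant
```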
39

Mamun, Tauseef Ibne, Kenzie Baker, Hunter Malinowski, Robert R. Hoffman, and Shane T. Mueller. "Assessing Collaborative Explanations of AI using Explanation Goodness Criteria." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 988–93. http://dx.doi.org/10.1177/1071181321651307.

Abstract:
Explainable AI represents an increasingly important category of systems that attempt to support human understanding and trust in machine intelligence and automation. Typical systems rely on algorithms to help understand underlying information about decisions and establish justified trust and reliance. Researchers have proposed using goodness criteria to measure the quality of explanations as a formative evaluation of an XAI system, but these criteria have not been systematically investigated in the literature. To explore this, we present a novel collaborative explanation system (CXAI) and propose several goodness criteria to evaluate the quality of its explanations. Results suggest that the explanations provided by this system are typically correct, informative, written in understandable ways, and focus on explanation of larger scale data patterns than are typically generated by algorithmic XAI systems. Implications for how these criteria may be applied to other XAI systems are discussed.
APA, Harvard, Vancouver, ISO, and other styles
40

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Full text
Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by making the logic more elaborate and building additional knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking user profiles into account, and creating conditions for the purposeful convergence of AI solutions and conclusions toward user-friendly goals.
APA, Harvard, Vancouver, ISO, and other styles
41

Kerr, Alison Duncan, and Kevin Scharp. "The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence." Minds and Machines 32, no. 3 (September 2022): 585–611. http://dx.doi.org/10.1007/s11023-022-09609-7.

Full text
Abstract:
Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.
APA, Harvard, Vancouver, ISO, and other styles
42

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence." Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Full text
Abstract:
Recently, Artificial Intelligence (AI) has emerged with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and the ubiquity of demanding applications in a variety of critical domains such as medicine, defense, justice, and autonomous vehicles, there is an increasing need to associate the results with sound explanations for domain experts. All of these elements have contributed to the rise of Explainable Artificial Intelligence (XAI).
APA, Harvard, Vancouver, ISO, and other styles
43

Islam, Mir Riyanul, Mobyen Uddin Ahmed, Shaibal Barua, and Shahina Begum. "A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks." Applied Sciences 12, no. 3 (January 27, 2022): 1353. http://dx.doi.org/10.3390/app12031353.

Full text
Abstract:
Artificial intelligence (AI) and machine learning (ML) have recently been radically improved and are now being employed in almost every application domain to develop automated or semi-automated systems. To facilitate greater human acceptability of these systems, explainable artificial intelligence (XAI) has experienced significant growth over the last couple of years with the development of highly accurate models but with a paucity of explainability and interpretability. The literature shows evidence from numerous studies on the philosophy and methodologies of XAI. Nonetheless, there is an evident scarcity of secondary studies in connection with the application domains and tasks, let alone review studies following prescribed guidelines, that can enable researchers’ understanding of the current trends in XAI, which could lead to future research for domain- and application-specific method development. Therefore, this paper presents a systematic literature review (SLR) on the recent developments of XAI methods and evaluation metrics concerning different application domains and tasks. This study considers 137 articles published in recent years and identified through the prominent bibliographic databases. This systematic synthesis of research articles resulted in several analytical findings: XAI methods are mostly developed for safety-critical domains worldwide, deep learning and ensemble models are being exploited more than other types of AI/ML models, visual explanations are more acceptable to end-users and robust evaluation metrics are being developed to assess the quality of explanations. Research studies have been performed on the addition of explanations to widely used AI/ML models for expert users. However, more attention is required to generate explanations for general users from sensitive domains such as finance and the judicial system.
APA, Harvard, Vancouver, ISO, and other styles
44

Sim, Taeyong, Seonbin Choi, Yunjae Kim, Su Hyun Youn, Dong-Jin Jang, Sujin Lee, and Chang-Jae Chun. "eXplainable AI (XAI)-Based Input Variable Selection Methodology for Forecasting Energy Consumption." Electronics 11, no. 18 (September 17, 2022): 2947. http://dx.doi.org/10.3390/electronics11182947.

Full text
Abstract:
This research proposes a methodology for the selection of input variables based on eXplainable AI (XAI) for energy consumption prediction. For this purpose, the energy consumption prediction model (R2 = 0.871; MAE = 2.176; MSE = 9.870) was selected by collecting the energy data used in the building of a university in Seoul, Republic of Korea. Applying XAI to the results from the prediction model, input variables were divided into three groups by the expectation of the ranking-score (Fqvar) (10 ≤ Strong, 5 ≤ Ambiguous < 10, and Weak < 5), according to their influence. As a result, the models considering the input variables of the Strong + Ambiguous group (R2 = 0.917; MAE = 1.859; MSE = 6.639) or the Strong group (R2 = 0.916; MAE = 1.816; MSE = 6.663) showed higher prediction results than other cases (p < 0.05 or 0.01). There were no statistically significant results between the Strong group and the Strong + Ambiguous group (R2: p = 0.408; MAE: p = 0.488; MSE: p = 0.478). This means that when considering the input variables of the Strong group (Fqvar: Year = 14.8; E-Diff = 12.8; Hour = 11.0; Temp = 11.0; Surface-Temp = 10.4) determined by the XAI-based methodology, the energy consumption prediction model showed excellent performance. Therefore, the methodology proposed in this study is expected to determine a model that can accurately and efficiently predict energy consumption.
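The selection logic summarised above, ranking input variables by an explanation-based importance score and retraining on the most influential group, can be sketched as follows. The variables, data, model, and grouping threshold are assumptions for illustration; the paper's Fqvar ranking score and Strong/Ambiguous/Weak cut-offs are not reproduced.

```python
# Minimal sketch (placeholder variables and thresholds): XAI-guided input selection
# for an energy-consumption regressor.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
variables = ["year", "hour", "outdoor_temp", "surface_temp", "humidity", "wind_speed"]  # hypothetical
X = rng.normal(size=(1000, len(variables)))
y = 2.0 * X[:, 2] + 1.5 * X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# Global importance per variable: mean absolute SHAP value on the test set.
shap_values = shap.TreeExplainer(model).shap_values(X_te)
scores = np.abs(shap_values).mean(axis=0)

# Keep only the most influential variables (placeholder cut-off standing in for
# the paper's Strong group) and refit.
strong = [i for i, s in enumerate(scores) if s >= np.percentile(scores, 60)]
reduced = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr[:, strong], y_tr)
print("all variables R2:", round(r2_score(y_te, model.predict(X_te)), 3))
print("strong subset R2:", round(r2_score(y_te, reduced.predict(X_te[:, strong])), 3))
```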
APA, Harvard, Vancouver, ISO, and other styles
45

Zerilli, John. "Explaining Machine Learning Decisions." Philosophy of Science 89, no. 1 (January 2022): 1–19. http://dx.doi.org/10.1017/psa.2021.13.

Full text
Abstract:
The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.
APA, Harvard, Vancouver, ISO, and other styles
46

Thrun, Michael C. "Exploiting Distance-Based Structures in Data Using an Explainable AI for Stock Picking." Information 13, no. 2 (January 21, 2022): 51. http://dx.doi.org/10.3390/info13020051.

Full text
Abstract:
In principle, the fundamental data of companies may be used to select stocks with a high probability of either increasing or decreasing price. Many of the commonly known rules or used explanations for such a stock-picking process are too vague to be applied in concrete cases, and at the same time, it is challenging to analyze high-dimensional data with a low number of cases in order to derive data-driven and usable explanations. This work proposes an explainable AI (XAI) approach on the quarterly available fundamental data of companies traded on the German stock market. In the XAI, distance-based structures in data (DSD) that guide decision tree induction are identified. The leaves of the appropriately selected decision tree contain subsets of stocks and provide viable explanations that can be rated by a human. The prediction of the future price trends of specific stocks is made possible using the explanations and a rating. In each quarter, stock picking by DSD-XAI is based on understanding the explanations and has a higher success rate than arbitrary stock picking, a hybrid AI system, and a recent unsupervised decision tree called eUD3.5.
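The two-step idea described above, finding distance-based structures in the data and then inducing a decision tree whose leaves serve as human-readable explanations, can be illustrated with a generic sketch. K-means and synthetic fundamentals stand in for the paper's projection-based clustering and quarterly company data; none of this reproduces the DSD-XAI pipeline or its R implementation.

```python
# Minimal sketch (generic stand-ins): cluster the data first, then fit a shallow
# tree so each leaf becomes a readable rule over the fundamentals.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
fundamentals = ["pe_ratio", "debt_to_equity", "roe", "revenue_growth"]  # hypothetical
X = rng.normal(size=(300, len(fundamentals)))

# Step 1: detect structure in the fundamental data (k-means as a placeholder).
clusters = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(X)

# Step 2: a shallow tree that reproduces the cluster assignment; its leaves are
# the candidate explanations a human analyst would rate.
tree = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X, clusters)
print(export_text(tree, feature_names=fundamentals))
```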
APA, Harvard, Vancouver, ISO, and other styles
47

Bogdanova, Alina, and Vitaly Romanov. "Explainable source code authorship attribution algorithm." Journal of Physics: Conference Series 2134, no. 1 (December 1, 2021): 012011. http://dx.doi.org/10.1088/1742-6596/2134/1/012011.

Full text
Abstract:
Source Code Authorship Attribution is a problem that has lately been studied more often due to improvements in Deep Learning techniques. Among existing solutions, two common issues are the inability to add new authors without retraining and the lack of interpretability. We address both of these problems. In our experiments, we were able to correctly classify 75% of authors for different programming languages. Additionally, we applied techniques of explainable AI (XAI) and found that our model seems to pay attention to distinctive features of source code.
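The kind of attribution-plus-explanation reported in this abstract can be sketched with a deliberately simple, inherently interpretable baseline. Character n-gram TF-IDF features with a logistic regression stand in for the paper's deep model; the snippets and author labels are toy placeholders.

```python
# Minimal sketch (toy data, interpretable baseline rather than the paper's deep model):
# attribute code snippets to authors and inspect which n-grams drive the decision.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

snippets = [
    "for (int i = 0; i < n; ++i) { sum += a[i]; }",
    "int i; for(i=0;i<n;i++){sum+=a[i];}",
    "while (it != v.end()) { total += *it; ++it; }",
    "for(auto x : v){ total += x; }",
]
authors = ["alice", "alice", "bob", "bob"]

vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vec.fit_transform(snippets)
clf = LogisticRegression(max_iter=1000).fit(X, authors)

# "Explanation": the character n-grams whose weights push predictions toward an author.
weights = clf.coef_[0]                      # binary case: one weight vector
terms = np.array(vec.get_feature_names_out())
top = np.argsort(weights)[-5:]
print("n-grams most indicative of", clf.classes_[1], ":", list(terms[top]))
```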
APA, Harvard, Vancouver, ISO, and other styles
48

Lötsch, Jörn, Dario Kringel, and Alfred Ultsch. "Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients." BioMedInformatics 2, no. 1 (December 22, 2021): 1–17. http://dx.doi.org/10.3390/biomedinformatics2010001.

Full text
Abstract:
The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When the diagnosis or selection of a therapy is no longer made solely by the physician, but to a significant extent by a machine using algorithms, decisions become nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. These are a class of very general algorithms (artificial neural networks, classifiers, etc.), which are tuned based on examples to optimize the classification of new, unseen cases. It is pointless to ask for an explanation for a decision. A detailed understanding of the mathematical details of an AI algorithm may be possible for experts in statistics or computer science. However, when it comes to the fate of human beings, this “developer’s explanation” is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that XAIs must be able to explain in detail the decisions made by the AI to the experts in the field.
APA, Harvard, Vancouver, ISO, and other styles
49

Dikshit, Abhirup, and Biswajeet Pradhan. "Interpretable and explainable AI (XAI) model for spatial drought prediction." Science of The Total Environment 801 (December 2021): 149797. http://dx.doi.org/10.1016/j.scitotenv.2021.149797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Abdollahi, Abolfazl, and Biswajeet Pradhan. "Urban Vegetation Mapping from Aerial Imagery Using Explainable AI (XAI)." Sensors 21, no. 14 (July 11, 2021): 4738. http://dx.doi.org/10.3390/s21144738.

Full text
Abstract:
Urban vegetation mapping is critical in many applications, i.e., preserving biodiversity, maintaining ecological balance, and minimizing the urban heat island effect. It is still challenging to extract accurate vegetation covers from aerial imagery using traditional classification approaches, because urban vegetation categories have complex spatial structures and similar spectral properties. Deep neural networks (DNNs) have shown a significant improvement in remote sensing image classification outcomes during the last few years. These methods are promising in this domain, yet unreliable for various reasons, such as the use of irrelevant descriptor features in the building of the models and lack of quality in the labeled image. Explainable AI (XAI) can help us gain insight into these limits and, as a result, adjust the training dataset and model as needed. Thus, in this work, we explain how an explanation model called Shapley additive explanations (SHAP) can be utilized for interpreting the output of the DNN model that is designed for classifying vegetation covers. We want to not only produce high-quality vegetation maps, but also rank the input parameters and select appropriate features for classification. Therefore, we test our method on vegetation mapping from aerial imagery based on spectral and textural features. Texture features can help overcome the limitations of poor spectral resolution in aerial imagery for vegetation mapping. The model was capable of obtaining an overall accuracy (OA) of 94.44% for vegetation cover mapping. The conclusions derived from SHAP plots demonstrate the high contribution of features, such as Hue, Brightness, GLCM_Dissimilarity, GLCM_Homogeneity, and GLCM_Mean to the output of the proposed model for vegetation mapping. Therefore, the study indicates that existing vegetation mapping strategies based only on spectral characteristics are insufficient to appropriately classify vegetation covers.
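The texture descriptors named in the SHAP ranking above (for example, GLCM dissimilarity and homogeneity) can be computed from an image patch as sketched below, assuming a recent scikit-image release that exposes graycomatrix and graycoprops; the patch is synthetic, and the quantisation and offsets are arbitrary illustrative choices.

```python
# Minimal sketch (synthetic patch): deriving GLCM texture descriptors of the kind
# that would be fed to the classifier alongside spectral bands.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for one aerial band

# Grey-level co-occurrence matrix for a single offset and angle, then texture properties.
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
features = {
    "GLCM_Dissimilarity": float(graycoprops(glcm, "dissimilarity")[0, 0]),
    "GLCM_Homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
}
print(features)
```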
APA, Harvard, Vancouver, ISO, and other styles
