Journal articles on the topic 'Explainable Artificial Intelligence (XAI)'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Explainable Artificial Intelligence (XAI).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first year of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
2

Sewada, Ranu, Ashwani Jangid, Piyush Kumar, and Neha Mishra. "Explainable Artificial Intelligence (XAI)." Journal of Nonlinear Analysis and Optimization 13, no. 01 (2023): 41–47. http://dx.doi.org/10.36893/jnao.2022.v13i02.041-047.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical facet in the realm of machine learning and artificial intelligence, responding to the increasing complexity of models, particularly deep neural networks, and the subsequent need for transparent decision-making processes. This research paper delves into the essence of XAI, unraveling its significance across diverse domains such as healthcare, finance, and criminal justice. As a countermeasure to the opacity of intricate models, the paper explores various XAI methods and techniques, including LIME and SHAP, weighing their interpretability against computational efficiency and accuracy. Through an examination of real-world applications, the research elucidates how XAI not only enhances decision-making processes but also influences user trust and acceptance in AI systems. However, the paper also scrutinizes the delicate balance between interpretability and performance, shedding light on instances where the pursuit of accuracy may compromise explainability. Additionally, it navigates through the current challenges and limitations in XAI, the regulatory landscape surrounding AI explainability, and offers insights into future trends and directions, fostering a comprehensive understanding of XAI's present state and future potential.
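
To make the two techniques named in this abstract concrete, here is a minimal, hedged Python sketch of how LIME and SHAP are typically applied to explain a single prediction of a tabular classifier. The dataset, model, and parameters are illustrative stand-ins chosen for this example, not anything taken from the cited paper.

# Illustrative sketch only: explaining one prediction of a tabular classifier
# with LIME (local surrogate) and SHAP (Shapley value attribution).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# LIME: fit a simple local surrogate around one instance and list the top features.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names), class_names=list(data.target_names))
lime_explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_explanation.as_list())

# SHAP: attribute the same prediction to individual features with Shapley values.
shap_explainer = shap.TreeExplainer(model)
print(shap_explainer.shap_values(X_test[:1]))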
3

Gunning, David, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. "XAI—Explainable artificial intelligence." Science Robotics 4, no. 37 (December 18, 2019): eaay7120. http://dx.doi.org/10.1126/scirobotics.aay7120.

4

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science, Business Source Complete, and EconLit. The resulting 103 articles (published between 2000 and 2021), representing the current state of the art of XAI in the insurance literature, are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, namely knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are readily understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
5

Chaudhary, G. "Explainable Artificial Intelligence (xAI): Reflections on Judicial System." Kutafin Law Review 10, no. 4 (January 13, 2024): 872–89. http://dx.doi.org/10.17803/2713-0533.2023.4.26.872-889.

Abstract:
Machine learning algorithms are increasingly being utilized in scenarios such as criminal, administrative, and civil proceedings. However, there is growing concern regarding the lack of transparency and accountability due to the “black box” nature of these algorithms. This makes it challenging for judges to comprehend how decisions or predictions are reached. This paper aims to explore the significance of Explainable AI (xAI) in enhancing transparency and accountability within legal contexts. Additionally, it examines the role that the judicial system can play in developing xAI. The methodology involves a review of existing xAI research and a discussion on how feedback from the judicial system can improve its effectiveness in legal settings. The argument presented is that xAI is crucial in legal contexts because it empowers judges to make informed decisions based on algorithmic outcomes; however, the lack of transparency in decision-making processes can impede judges' ability to do so effectively. Therefore, implementing xAI can contribute to increasing transparency and accountability within this decision-making process. The judicial system has an opportunity to aid in the development of xAI by emulating judicial reasoning, customizing approaches according to specific jurisdictions and audiences, and providing valuable feedback for improving the technology's efficacy. Hence, the primary objective is to emphasize the significance of xAI in enhancing transparency and accountability within legal settings, as well as the potential contribution of the judicial system towards its advancement. Judges could consider asking about the rationale behind algorithmic outcomes. It is advisable for xAI systems to provide a clear account of the steps taken by algorithms to reach their conclusions or predictions. Additionally, it is proposed that public stakeholders have a role in shaping xAI to guarantee ethical and socially responsible technology.
6

Praveenraj, D. David Winster, Melvin Victor, C. Vennila, Ahmed Hussein Alawadi, Pardaeva Diyora, N. Vasudevan, and T. Avudaiappan. "Exploring Explainable Artificial Intelligence for Transparent Decision Making." E3S Web of Conferences 399 (2023): 04030. http://dx.doi.org/10.1051/e3sconf/202339904030.

Abstract:
Artificial intelligence (AI) has become a potent tool in many fields, allowing complicated tasks to be completed with astounding effectiveness. However, as AI systems get more complex, worries about their interpretability and transparency have become increasingly prominent. It is now more important than ever to use Explainable Artificial Intelligence (XAI) methodologies in decision-making processes, where the capacity to comprehend and trust AI-based judgments is crucial. This abstract explores the idea of XAI and how important it is for promoting transparent decision-making. Finally, the development of Explainable Artificial Intelligence (XAI) has proven to be crucial for promoting clear decision-making in AI systems. XAI approaches close the cognitive gap between complicated algorithms and human comprehension by empowering users to comprehend and analyze the inner workings of AI models. XAI equips stakeholders to evaluate and trust AI systems, assuring fairness, accountability, and ethical standards in fields like healthcare and finance where AI-based choices have substantial ramifications. The development of XAI is essential for attaining AI's full potential while retaining transparency and human-centric decision making, despite ongoing hurdles.
7

Javed, Abdul Rehman, Waqas Ahmed, Sharnil Pandya, Praveen Kumar Reddy Maddikunta, Mamoun Alazab, and Thippa Reddy Gadekallu. "A Survey of Explainable Artificial Intelligence for Smart Cities." Electronics 12, no. 4 (February 18, 2023): 1020. http://dx.doi.org/10.3390/electronics12041020.

Abstract:
The emergence of Explainable Artificial Intelligence (XAI) has enhanced the lives of humans and envisioned the concept of smart cities using informed actions, enhanced user interpretations and explanations, and firm decision-making processes. The XAI systems can unbox the potential of black-box AI models and describe them explicitly. The study comprehensively surveys the current and future developments in XAI technologies for smart cities. It also highlights the societal, industrial, and technological trends that initiate the drive towards XAI for smart cities. It presents the key to enabling XAI technologies for smart cities in detail. The paper also discusses the concept of XAI for smart cities, various XAI technology use cases, challenges, applications, possible alternative solutions, and current and future research enhancements. Research projects and activities, including standardization efforts toward developing XAI for smart cities, are outlined in detail. The lessons learned from state-of-the-art research are summarized, and various technical challenges are discussed to shed new light on future research possibilities. The presented study on XAI for smart cities is a first-of-its-kind, rigorous, and detailed study to assist future researchers in implementing XAI-driven systems, architectures, and applications for smart cities.
8

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
9

Lozano-Murcia, Catalina, Francisco P. Romero, Jesus Serrano-Guerrero, Arturo Peralta, and Jose A. Olivas. "Potential Applications of Explainable Artificial Intelligence to Actuarial Problems." Mathematics 12, no. 5 (February 21, 2024): 635. http://dx.doi.org/10.3390/math12050635.

Abstract:
Explainable artificial intelligence (XAI) is a group of techniques and evaluations that allows users to understand artificial intelligence knowledge and increase the reliability of the results produced using artificial intelligence. XAI can assist actuaries in achieving better estimations and decisions. This study reviews the current literature to summarize XAI in common actuarial problems. We propose a research process based on understanding the type of AI used in actuarial practice in the financial industry and insurance pricing and then research XAI implementation. This study systematically reviews the literature on the need for implementation options and the current use of XAI techniques for actuarial problems. The study begins with a contextual introduction outlining the use of artificial intelligence techniques and their potential limitations, followed by the definition of the search equations used in the research process, the analysis of the results, and the identification of the main potential fields for exploitation in actuarial problems, as well as pointers for potential future work in this area.
10

Shukla, Bibhudhendu, Ip-Shing Fan, and Ian Jennions. "Opportunities for Explainable Artificial Intelligence in Aerospace Predictive Maintenance." PHM Society European Conference 5, no. 1 (July 22, 2020): 11. http://dx.doi.org/10.36001/phme.2020.v5i1.1231.

Abstract:
This paper aims to look at the value and the necessity of XAI (Explainable Artificial Intelligence) when using DNNs (Deep Neural Networks) in PM (Predictive Maintenance). The context will be the field of Aerospace IVHM (Integrated Vehicle Health Management) when using DNNs. An XAI (Explainable Artificial Intelligence) system is necessary so that the result of an AI (Artificial Intelligence) solution is clearly explained and understood by a human expert. This would allow the IVHM system to use XAI-based PM to improve the effectiveness of the predictive model. An IVHM system would be able to utilize the information to assess the health of the subsystems, and their effect on the aircraft. Even if the underlying mathematical principles are understood, DNNs lack understandable insight and hence have difficulty in generating the underlying explanatory structures (i.e. they behave as black boxes). This calls for a process, or system, that enables decisions to be explainable, transparent, and understandable. It is argued that research in XAI would generally help to accelerate the implementation of AI/ML (Machine Learning) in the aerospace domain, and specifically help to facilitate compliance, transparency, and trust. This paper covers the following areas: the challenges and benefits of AI-based PM in aerospace; why XAI is required for DNNs in aerospace PM; the evolution of XAI models and industry adoption; a framework for XAI using XPA (Explainability Parameters); and a discussion of future research in adopting XAI and DNNs to improve IVHM.
11

Milad, Akram, and Mohamed Whiba. "Exploring Explainable Artificial Intelligence Technologies: Approaches, Challenges, and Applications." International Science and Technology Journal 34, no. 1 (April 8, 2024): 1–21. http://dx.doi.org/10.62341/amia8430.

Abstract:
This research paper delves into the transformative domain of Explainable Artificial Intelligence (XAI) in response to the evolving complexities of artificial intelligence and machine learning. Navigating through XAI approaches, challenges, applications, and future directions, the paper emphasizes the delicate balance between model accuracy and interpretability. Challenges such as the trade-off between accuracy and interpretability, explaining black-box models, privacy concerns, and ethical considerations are comprehensively addressed. Real-world applications showcase XAI's potential in healthcare, finance, criminal justice, and education. The evaluation of XAI models, exemplified through a Random Forest Classifier in a diabetes dataset, underscores the practical implications. Looking ahead, the paper outlines future directions, emphasizing ensemble explanations, standardized evaluation metrics, and human-centric designs. It concludes by advocating for the widespread adoption of XAI, envisioning a future where AI systems are not only powerful but also transparent, fair, and accountable, fostering trust and understanding in the human-AI interaction. Keywords: Explainable Artificial Intelligence (XAI), machine learning, transparency, accountability, AI models, interpretability, challenges, XAI applications, model evaluation, bias detection, user comprehension, ethical alignment.
12

Dikmen, Murat, and Catherine Burns. "Abstraction Hierarchy Based Explainable Artificial Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 319–23. http://dx.doi.org/10.1177/1071181320641073.

Abstract:
This work explores the application of Cognitive Work Analysis (CWA) in the context of Explainable Artificial Intelligence (XAI). We built an AI system using a loan evaluation data set and applied an XAI technique to obtain data-driven explanations for predictions. Using an Abstraction Hierarchy (AH), we generated domain knowledge-based explanations to accompany data-driven explanations. An online experiment was conducted to test the usefulness of AH-based explanations. Participants read financial profiles of loan applicants, the AI system’s loan approval/rejection decisions, and explanations that justify the decisions. Presence or absence of AH-based explanations was manipulated, and participants’ perceptions of the explanation quality was measured. The results showed that providing AH-based explanations helped participants learn about the loan evaluation process and improved the perceived quality of explanations. We conclude that a CWA approach can increase understandability when explaining the decisions made by AI systems.
13

Qian, Jinzhao, Hailong Li, Junqi Wang, and Lili He. "Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging." Diagnostics 13, no. 9 (April 27, 2023): 1571. http://dx.doi.org/10.3390/diagnostics13091571.

Abstract:
Advances in artificial intelligence (AI), especially deep learning (DL), have facilitated magnetic resonance imaging (MRI) data analysis, enabling AI-assisted medical image diagnoses and prognoses. However, most of the DL models are considered as “black boxes”. There is an unmet need to demystify DL models so domain experts can trust these high-performance DL models. This has resulted in a sub-domain of AI research called explainable artificial intelligence (XAI). In the last decade, many experts have dedicated their efforts to developing novel XAI methods that are competent at visualizing and explaining the logic behind data-driven DL models. However, XAI techniques are still in their infancy for medical MRI image analysis. This study aims to outline the XAI applications that are able to interpret DL models for MRI data analysis. We first introduce several common MRI data modalities. Then, a brief history of DL models is discussed. Next, we highlight XAI frameworks and elaborate on the principles of multiple popular XAI methods. Moreover, studies on XAI applications in MRI image analysis are reviewed across the tissues/organs of the human body. A quantitative analysis is conducted to reveal the insights of MRI researchers on these XAI techniques. Finally, evaluations of XAI methods are discussed. This survey presents recent advances in the XAI domain for explaining the DL models that have been utilized in MRI applications.
14

Miller, Tim, Robert Hoffman, Ofra Amir, and Andreas Holzinger. "Special issue on Explainable Artificial Intelligence (XAI)." Artificial Intelligence 307 (June 2022): 103705. http://dx.doi.org/10.1016/j.artint.2022.103705.

15

Sharma, Neeraj Anand, Rishal Ravikesh Chand, Zain Buksh, A. B. M. Shawkat Ali, Ambreen Hanif, and Amin Beheshti. "Explainable AI Frameworks: Navigating the Present Challenges and Unveiling Innovative Applications." Algorithms 17, no. 6 (May 24, 2024): 227. http://dx.doi.org/10.3390/a17060227.

Abstract:
This study delves into the realm of Explainable Artificial Intelligence (XAI) frameworks, aiming to empower researchers and practitioners with a deeper understanding of these tools. We establish a comprehensive knowledge base by classifying and analyzing prominent XAI solutions based on key attributes like explanation type, model dependence, and use cases. This resource equips users to navigate the diverse XAI landscape and select the most suitable framework for their specific needs. Furthermore, the study proposes a novel framework called XAIE (eXplainable AI Evaluator) for informed decision-making in XAI adoption. This framework empowers users to assess different XAI options based on their application context objectively. This will lead to more responsible AI development by fostering transparency and trust. Finally, the research identifies the limitations and challenges associated with the existing XAI frameworks, paving the way for future advancements. By highlighting these areas, the study guides researchers and developers in enhancing the capabilities of Explainable AI.
16

Liu, Peng, Lizhe Wang, and Jun Li. "Unlocking the Potential of Explainable Artificial Intelligence in Remote Sensing Big Data." Remote Sensing 15, no. 23 (November 22, 2023): 5448. http://dx.doi.org/10.3390/rs15235448.

17

Jung, Jinsun, and Hyeoneui Kim. "Evaluating the Effectiveness of Explainable Artificial Intelligence Approaches (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23528–29. http://dx.doi.org/10.1609/aaai.v38i21.30458.

Abstract:
Explainable Artificial Intelligence (XAI), a promising future technology in the field of healthcare, has attracted significant interest. Despite ongoing efforts in the development of XAI approaches, there has been inadequate evaluation of explanation effectiveness and no standardized framework for the evaluation has been established. This study aims to examine the relationship between subjective interpretability and perceived plausibility for various XAI explanations and to determine the factors affecting users' acceptance of the XAI explanation.
18

Veitch, Erik, and Ole Andreas Alsos. "Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles." Journal of Marine Science and Engineering 9, no. 11 (November 6, 2021): 1227. http://dx.doi.org/10.3390/jmse9111227.

Abstract:
Explainable Artificial Intelligence (XAI) for Autonomous Surface Vehicles (ASVs) addresses developers’ needs for model interpretation, understandability, and trust. As ASVs approach wide-scale deployment, these needs are expanded to include end user interactions in real-world contexts. Despite recent successes of technology-centered XAI for enhancing the explainability of AI techniques to expert users, these approaches do not necessarily carry over to non-expert end users. Passengers, other vessels, and remote operators will have XAI needs distinct from those of expert users targeted in a traditional technology-centered approach. We formulate a concept called ‘human-centered XAI’ to address emerging end user interaction needs for ASVs. To structure the concept, we adopt a model-based reasoning method for concept formation consisting of three processes: analogy, visualization, and mental simulation, drawing from examples of recent ASV research at the Norwegian University of Science and Technology (NTNU). The examples show how current research activities point to novel ways of addressing XAI needs for distinct end user interactions and underpin the human-centered XAI approach. Findings show how representations of (1) usability, (2) trust, and (3) safety make up the main processes in human-centered XAI. The contribution is the formation of human-centered XAI to help advance the research community’s efforts to expand the agenda of interpretability, understandability, and trust to include end user ASV interactions.
19

Dhiman, Pummy, Anupam Bonkra, Amandeep Kaur, Yonis Gulzar, Yasir Hamid, Mohammad Shuaib Mir, Arjumand Bano Soomro, and Osman Elwasila. "Healthcare Trust Evolution with Explainable Artificial Intelligence: Bibliometric Analysis." Information 14, no. 10 (October 3, 2023): 541. http://dx.doi.org/10.3390/info14100541.

Abstract:
Recent developments in IoT, big data, fog and edge networks, and AI technologies have had a profound impact on a number of industries, including medical. The use of AI for therapeutic purposes has been hampered by its inexplicability. Explainable Artificial Intelligence (XAI), a revolutionary movement, has arisen to solve this constraint. By using decision-making and prediction outputs, XAI seeks to improve the explicability of standard AI models. In this study, we examined global developments in empirical XAI research in the medical field. The bibliometric analysis tools VOSviewer and Biblioshiny were used to examine 171 open access publications from the Scopus database (2019–2022). Our findings point to several prospects for growth in this area, notably in areas of medicine like diagnostic imaging. With 109 research articles using XAI for healthcare classification, prediction, and diagnosis, the USA leads the world in research output. With 88 citations, IEEE Access has the greatest number of publications of all the journals. Our extensive survey covers a range of XAI applications in healthcare, such as diagnosis, therapy, prevention, and palliation, and offers helpful insights for researchers who are interested in this field. This report provides a direction for future healthcare industry research endeavors.
20

Shoukat Makubhai, Shahin, Ganesh R. Pathak, and Pankaj R. Chandre. "Predicting lung cancer risk using explainable artificial intelligence." Bulletin of Electrical Engineering and Informatics 13, no. 2 (April 1, 2024): 1276–85. http://dx.doi.org/10.11591/eei.v13i2.6280.

Abstract:
Lung cancer is a lethal disease that claims numerous lives annually, and early detection is essential for improving survival rates. Machine learning has shown promise in predicting lung cancer risk, but the lack of transparency and interpretability in black-box models impedes the understanding of factors that contribute to risk. Explainable artificial intelligence (XAI) can overcome this limitation by providing a clear and understandable approach to machine learning. In this study, we will use a large patient record dataset to train an XAI-based model that considers various patient information, including lifestyle factors, clinical data, and medical history, for predicting lung cancer risk. We will use different XAI techniques, including decision trees, partial dependence plots, and feature importance, to interpret the model’s predictions. These methods will provide healthcare professionals with a transparent and interpretable framework for screening and treatment decisions concerning lung cancer risk.
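
As a purely illustrative companion to this abstract — not the study's code or data — the Python sketch below shows the three interpretation tools the authors name (a decision tree, feature importance, and partial dependence plots) applied to a synthetic stand-in for patient records; the feature names are invented.

# Illustrative sketch on synthetic data: interpretable decision tree rules,
# permutation feature importance, and a partial dependence plot.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(6)]   # hypothetical risk factors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) An inherently interpretable decision tree, printed as readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))

# 2) Feature importance via permutation importance on held-out data.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# 3) Partial dependence of the predicted class probability on the first two features.
PartialDependenceDisplay.from_estimator(forest, X_test, features=[0, 1],
                                        feature_names=feature_names)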
21

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence." Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Abstract:
Recently, Artificial Intelligence (AI) has emerged as a field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and the ubiquity of demanding applications in a variety of critical areas such as medicine, defense, justice and autonomous vehicles, there is an increasing need to provide domain experts with sound explanations of the results. All of these elements have contributed to the emergence of Explainable Artificial Intelligence (XAI).
22

Singh, Dr Shashank, Dr Dhirendra Pratap Singh, and Mr Kaushal Chandra. "Enhancing Transparency and Interpretability in Deep Learning Models: A Comprehensive Study on Explainable AI Techniques." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 02 (February 16, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem28675.

Abstract:
Deep learning models have demonstrated remarkable capabilities across various domains, but their inherent complexity often leads to challenges in understanding and interpreting their decisions. The demand for transparent and interpretable artificial intelligence (AI) systems is particularly crucial in fields such as healthcare, finance, and autonomous systems. This research paper presents a comprehensive study on the application of Explainable AI (XAI) techniques to enhance transparency and interpretability in deep learning models. Keywords: Explainable AI (XAI), artificial intelligence (AI).
23

Hulsen, Tim. "Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare." AI 4, no. 3 (August 10, 2023): 652–66. http://dx.doi.org/10.3390/ai4030034.

Abstract:
Artificial Intelligence (AI) describes computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Examples of AI techniques are machine learning, neural networks, and deep learning. AI can be applied in many different areas, such as econometrics, biometry, e-commerce, and the automotive industry. In recent years, AI has found its way into healthcare as well, helping doctors make better decisions (“clinical decision support”), localizing tumors in magnetic resonance images, reading and analyzing reports written by radiologists and pathologists, and much more. However, AI has one big risk: it can be perceived as a “black box”, limiting trust in its reliability, which is a very big issue in an area in which a decision can mean life or death. As a result, the term Explainable Artificial Intelligence (XAI) has been gaining momentum. XAI tries to ensure that AI algorithms (and the resulting decisions) can be understood by humans. In this narrative review, we will have a look at some central concepts in XAI, describe several challenges around XAI in healthcare, and discuss whether it can really help healthcare to advance, for example, by increasing understanding and trust. Finally, alternatives to increase trust in AI are discussed, as well as future research possibilities in the area of XAI.
24

Peters, Uwe, and Mary Carman. "Cultural Bias in Explainable AI Research: A Systematic Analysis." Journal of Artificial Intelligence Research 79 (March 28, 2024): 971–1000. http://dx.doi.org/10.1613/jair.1.14888.

Abstract:
For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that many popular XAI designs implicitly and problematically assume that Western explanatory needs are shared cross-culturally. Additionally, we systematically reviewed over 200 XAI user studies and found that most studies did not consider relevant cultural variations, sampled only Western populations, but drew conclusions about human-XAI interactions more generally. We also analyzed over 30 literature reviews of XAI studies. Most reviews did not mention cultural differences in explanatory needs or flag overly broad cross-cultural extrapolations of XAI user study results. Combined, our analyses provide evidence of a cultural bias toward Western populations in XAI research, highlighting an important knowledge gap regarding how culturally diverse users may respond to widely used XAI systems that future work can and should address.
25

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract:
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in different areas, all in one place. This study clarifies how model interpretability and Explainable AI work together. This paper aims to investigate Explainable Artificial Intelligence approaches and their applications in multiple domains. Specifically, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to solve the challenges corresponding to various businesses. This article illustrates the significance of explainable artificial intelligence in a vast number of disciplines.
26

Anand Reddy, S. Tharun. "Human-Computer Interaction Techniques for Explainable Artificial Intelligence Systems." Research & Review: Machine Learning and Cloud Computing 3, no. 1 (March 26, 2024): 1–7. http://dx.doi.org/10.46610/rtaia.2024.v03i01.001.

Abstract:
As Artificial Intelligence (AI) systems become more common in our daily lives, the need for transparency in these systems is becoming increasingly important. Ensuring that humans clearly understand how AI systems work and can oversee their functioning is crucial. This is where the concept of Explainable AI (XAI) comes in to make AI systems more transparent and interpretable. However, developing adequate explanations for AI systems is still an open research problem. In this context, Human-Computer Interaction (HCI) is significant in designing interfaces for explainable AI. By integrating HCI principles, we can create systems humans understand and operate more efficiently. This article reviews the HCI techniques that can be used for explainable AI systems. The literature was explored with a focus on papers at the intersection of HCI and XAI. The essential methods identified include interactive visualizations, natural language explanations, conversational agents, mixed-initiative systems, and model introspection methods. Each of these techniques has unique advantages and can be used to provide explanations for different types of AI systems. While Explainable AI presents opportunities to improve system transparency, it also comes with risks, especially if the explanations are not designed carefully. There is a risk of oversimplification, leading to misunderstanding or mistrust of the AI system. It is essential to employ HCI principles and participatory design approaches to ensure that explanations are tailored for diverse users, contexts, and AI applications. By developing human-centred XAI systems, we can ensure that AI systems are transparent, interpretable, and trustworthy. This can be achieved through interdisciplinary collaboration between HCI and AI. The recommendations in this article provide a starting point for designing such systems. In essence, XAI presents a significant opportunity to improve the transparency of AI systems, but it requires careful design and implementation to be effective.
27

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Abstract:
Intrusion detection systems (IDS) are widely utilized in the cyber security field to prevent and mitigate threats, helping to keep threats and vulnerabilities out of computer networks. To develop effective intrusion detection systems, a range of machine learning methods are available. Machine learning ensemble methods have a well-proven track record when it comes to learning. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICIDS-2017 dataset were chosen. The proposed IDS uses machine learning algorithms such as decision trees, random forests, and SVM. After training these models, an ensemble voting classifier was added and achieved an accuracy of 96.25%. Furthermore, the proposed model also incorporates the XAI algorithm LIME for better explainability and understanding of the black-box approach to reliable intrusion detection. Our experimental results confirmed that XAI LIME is more explanation-friendly and more responsive.
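
For orientation only, here is a minimal Python sketch of the general pattern this abstract describes: a soft-voting ensemble of a decision tree, a random forest, and an SVM, with LIME explaining one classified sample. It uses synthetic data as a stand-in for CICIDS-2017 features and is not the authors' implementation.

# Illustrative sketch: voting-classifier IDS plus a LIME explanation of one prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

ids_model = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=42)),
        ("rf", RandomForestClassifier(random_state=42)),
        ("svm", SVC(probability=True, random_state=42)),
    ],
    voting="soft",  # soft voting averages the predicted probabilities
).fit(X_train, y_train)
print("test accuracy:", ids_model.score(X_test, y_test))

# Explain one classified flow: which features pushed it towards 'attack'?
explainer = LimeTabularExplainer(X_train, class_names=["benign", "attack"])
explanation = explainer.explain_instance(X_test[0], ids_model.predict_proba, num_features=5)
print(explanation.as_list())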
28

Hoffmann, Rudolf, and Christoph Reich. "A Systematic Literature Review on Artificial Intelligence and Explainable Artificial Intelligence for Visual Quality Assurance in Manufacturing." Electronics 12, no. 22 (November 8, 2023): 4572. http://dx.doi.org/10.3390/electronics12224572.

Abstract:
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, thereby making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest in visual inspection. Next to AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are interesting methods for achieving quality inspections in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focused on visual quality control (VQC) for defect detection. Research addressing VQA practices, like process optimization, predictive maintenance, or root cause analysis, is rarer. Least often cited are papers that utilize XAI methods. In conclusion, this survey emphasizes the importance and potential of AI and XAI in VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices and decision-making in industries.
29

Shafiabady, Niusha, Nick Hadjinicolaou, Nadeesha Hettikankanamage, Ehsan MohammadiSavadkoohi, Robert M. X. Wu, and James Vakilian. "eXplainable Artificial Intelligence (XAI) for improving organisational regility." PLOS ONE 19, no. 4 (April 24, 2024): e0301429. http://dx.doi.org/10.1371/journal.pone.0301429.

Abstract:
Since the pandemic started, organisations have been actively seeking ways to improve their organisational agility and resilience (regility) and turn to Artificial Intelligence (AI) to gain a deeper understanding and further enhance their agility and regility. Organisations are turning to AI as a critical enabler to achieve these goals. AI empowers organisations by analysing large data sets quickly and accurately, enabling faster decision-making and building agility and resilience. This strategic use of AI gives businesses a competitive advantage and allows them to adapt to rapidly changing environments. Failure to prioritise agility and responsiveness can result in increased costs, missed opportunities, competition and reputational damage, and ultimately, loss of customers, revenue, profitability, and market share. Prioritising can be achieved by utilising eXplainable Artificial Intelligence (XAI) techniques, illuminating how AI models make decisions and making them transparent, interpretable, and understandable. Based on previous research on using AI to predict organisational agility, this study focuses on integrating XAI techniques, such as Shapley Additive Explanations (SHAP), in organisational agility and resilience. By identifying the importance of different features that affect organisational agility prediction, this study aims to demystify the decision-making processes of the prediction model using XAI. This is essential for the ethical deployment of AI, fostering trust and transparency in these systems. Recognising key features in organisational agility prediction can guide companies in determining which areas to concentrate on in order to improve their agility and resilience.
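
As a rough illustration of the SHAP-based analysis this abstract describes, the Python sketch below ranks the features of an agility-style classifier by mean absolute SHAP value. The data, model choice, and feature names are assumptions made for the example, not the study's survey data or model.

# Illustrative sketch: global feature importance from SHAP values for a classifier.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in: each feature plays the role of a survey dimension.
X, y = make_classification(n_samples=400, n_features=8, random_state=1)
feature_names = [f"dimension_{i}" for i in range(8)]

model = GradientBoostingClassifier(random_state=1).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per sample and feature

# Rank features by mean absolute SHAP value (a common global-importance summary).
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")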
30

Páez, Andrés. "The Pragmatic Turn in Explainable Artificial Intelligence (XAI)." Minds and Machines 29, no. 3 (May 29, 2019): 441–59. http://dx.doi.org/10.1007/s11023-019-09502-w.

31

Agarwal, Abhinav. "EXPLORING THE LANDSCAPE OF EXPLAINABLE ARTIFICIAL INTELLIGENCE: BENEFITS, CHALLENGES, AND FUTURE PERSPECTIVES." International Journal of Advanced Research 11, no. 12 (December 31, 2023): 1042–46. http://dx.doi.org/10.21474/ijar01/18074.

Abstract:
This research paper delves into the dynamic realm of Explainable Artificial Intelligence (XAI), scrutinizing its advantages and limitations. XAI emerges as a pivotal facet in the evolution of artificial intelligence (AI) systems, emphasizing transparency to render AI systems comprehensible to humans. The primary objective of XAI is to illuminate the decision-making processes of complex AI models, offering insights into their reasoning mechanisms. Through heightened transparency, XAI aims to enhance human comprehension, instill trust in AI outcomes, and ultimately foster accountability, ethical adherence, and user confidence in AI systems. This paper presents a comprehensive analysis of the benefits of XAI, explores its constraints concerning individual privacy, and discusses the future perspectives of this rapidly evolving field.
32

Combs, Kara, Mary Fendley, and Trevor Bihl. "A Preliminary Look at Heuristic Analysis for Assessing Artificial Intelligence Explainability." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 8 (June 1, 2020): 61–72. http://dx.doi.org/10.37394/232018.2020.8.9.

Abstract:
Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks understanding of algorithmic explainability from a human factors’ perspective. This paper presents a repeatable human factors heuristic analysis for XAI with a demonstration on four decision tree classifier algorithms.
33

Metta, Carlo, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, and Fosca Giannotti. "Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence." Bioengineering 11, no. 4 (April 12, 2024): 369. http://dx.doi.org/10.3390/bioengineering11040369.

Abstract:
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods like LORE enhance physicians’ and patients’ understanding of machine learning models and their outcome. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision making, ensure fairness, and comply with regulatory standards.
34

Naik, Het, Priyanka Goradia, Vomini Desai, Yukta Desai, and Muralikrishna Iyyanki. "Explainable Artificial Intelligence (XAI) for Population Health Management – An Appraisal." European Journal of Electrical Engineering and Computer Science 5, no. 6 (December 23, 2021): 64–76. http://dx.doi.org/10.24018/ejece.2021.5.6.368.

Abstract:
This study explores Explainable Artificial Intelligence (XAI) in general and then discusses its potential use for the Indian healthcare system. It also demonstrates some XAI techniques on a diabetes dataset, with the aim of showing practical implementation and encouraging readers to think about more application areas. However, certain limitations of the technology are highlighted in the discussion, along with the future scope.
35

Gniadek, Thomas, Jason Kang, Talent Theparee, and Jacob Krive. "Framework for Classifying Explainable Artificial Intelligence (XAI) Algorithms in Clinical Medicine." Online Journal of Public Health Informatics 15 (September 1, 2023): e50934. http://dx.doi.org/10.2196/50934.

Abstract:
Artificial intelligence (AI) applied to medicine offers immense promise, in addition to safety and regulatory concerns. Traditional AI produces a core algorithm result, typically without a measure of statistical confidence or an explanation of its biological-theoretical basis. Efforts are underway to develop explainable AI (XAI) algorithms that not only produce a result but also an explanation to support that result. Here we present a framework for classifying XAI algorithms applied to clinical medicine: An algorithm’s clinical scope is defined by whether the core algorithm output leads to observations (eg, tests, imaging, clinical evaluation), interventions (eg, procedures, medications), diagnoses, and prognostication. Explanations are classified by whether they provide empiric statistical information, association with a historical population or populations, or association with an established disease mechanism or mechanisms. XAI implementations can be classified based on whether algorithm training and validation took into account the actions of health care providers in response to the insights and explanations provided or whether training was performed using only the core algorithm output as the end point. Finally, communication modalities used to convey an XAI explanation can be used to classify algorithms and may affect clinical outcomes. This framework can be used when designing, evaluating, and comparing XAI algorithms applied to medicine.
36

Bernardo, Ezekiel, and Rosemary Seva. "Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective." Informatics 10, no. 1 (March 16, 2023): 32. http://dx.doi.org/10.3390/informatics10010032.

Abstract:
Explainable Artificial Intelligence (XAI) has successfully solved the black box paradox of Artificial Intelligence (AI). By providing human-level insights on AI, it allowed users to understand its inner workings even with limited knowledge of the machine learning algorithms it uses. As a result, the field grew, and development flourished. However, concerns have been expressed that the techniques are limited in terms of to whom they are applicable and how their effect can be leveraged. Currently, most XAI techniques have been designed by developers. Though needed and valuable, XAI is more critical for an end-user, considering transparency cleaves on trust and adoption. This study aims to understand and conceptualize an end-user-centric XAI to fill in the lack of end-user understanding. Considering recent findings of related studies, this study focuses on design conceptualization and affective analysis. Data from 202 participants were collected from an online survey to identify the vital XAI design components and testbed experimentation to explore the affective and trust change per design configuration. The results show that affect is a viable trust calibration route for XAI. In terms of design, explanation form, communication style, and presence of supplementary information are the components users look for in an effective XAI. Lastly, anxiety about AI, incidental emotion, perceived AI reliability, and experience using the system are significant moderators of the trust calibration process for an end-user.
37

Jiang, Xuejie, Siti Norlizaiha Harun, and Linyu Liu. "Explainable Artificial Intelligence for Ancient Architecture and Lacquer Art." Buildings 13, no. 5 (May 4, 2023): 1213. http://dx.doi.org/10.3390/buildings13051213.

Abstract:
This research investigates the use of explainable artificial intelligence (XAI) in ancient architecture and lacquer art. The aim is to create accurate and interpretable models to reveal these cultural artefacts’ underlying design principles and techniques. To achieve this, machine learning and data-driven techniques are employed, which provide new insights into their construction and preservation. The study emphasises the importance of transparent and trustworthy AI systems, which can enhance the reliability and credibility of the results. The developed model outperforms CNN-based emotion recognition and random forest models in all four evaluation metrics, achieving an impressive accuracy of 92%. This research demonstrates the potential of XAI to support the study and conservation of ancient architecture and lacquer art, opening up new avenues for interdisciplinary research and collaboration.
38

Melo, Elvis, Ivanovitch Silva, Daniel G. Costa, Carlos M. D. Viegas, and Thiago M. Barros. "On the Use of eXplainable Artificial Intelligence to Evaluate School Dropout." Education Sciences 12, no. 12 (November 22, 2022): 845. http://dx.doi.org/10.3390/educsci12120845.

Abstract:
The school dropout problem has been recurrent in different educational areas, which has reinforced important challenges when pursuing education objectives. In this scenario, technical schools have also suffered from considerable dropout levels, even when considering a still increasing need for professionals in areas associated with computing and engineering. Actually, the dropout phenomenon may not be uniform, and thus identifying the profile of those students has become urgent, highlighting techniques such as eXplainable Artificial Intelligence (XAI) that can ensure more ethical, transparent, and auditable use of educational data. Therefore, this article applies and evaluates XAI methods to predict students in school dropout situations, considering a database of students from the Federal Institute of Rio Grande do Norte (IFRN), a Brazilian technical school. For that, a checklist was created comprising explanatory evaluation metrics according to a broad literature review, resulting in the proposal of a new explainability index to evaluate XAI frameworks. Doing so, we expect to support the adoption of XAI models to better understand school-related data, supporting important research efforts in this area.
39

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputation and for numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper include Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models become XAI models when they are coupled with explanatory methods such as Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that IAI models are capable of unveiling the rationale behind predictions, while XAI models are capable of discovering new knowledge and justifying AI-based results, which is critical for enhanced accountability of AI-driven predictions. The review also elaborates on the importance of domain knowledge and interventional IAI modeling, the potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, the unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
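As a rough illustration of the coupling described above, the sketch below pairs a gradient-boosted tree model with SHAP attributions on synthetic data. The library calls follow the public xgboost and shap APIs, but the feature names, data, and model settings are invented placeholders rather than anything taken from the reviewed studies.

    # Hypothetical sketch: a boosted-tree hydroclimatic regressor explained with SHAP.
    import numpy as np
    import pandas as pd
    import shap
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "precipitation": rng.gamma(2.0, 10.0, 500),   # invented predictor columns
        "temperature": rng.normal(15.0, 8.0, 500),
        "soil_moisture": rng.uniform(0.1, 0.4, 500),
    })
    y = 0.6 * X["precipitation"] + 200 * X["soil_moisture"] + rng.normal(0, 5, 500)

    model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

    # TreeExplainer yields per-feature Shapley contributions for each prediction,
    # turning the boosted-tree black box into an attributable model.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| as global feature importance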
40

Apostolopoulos, Ioannis D., and Peter P. Groumpos. "Fuzzy Cognitive Maps: Their Role in Explainable Artificial Intelligence." Applied Sciences 13, no. 6 (March 7, 2023): 3412. http://dx.doi.org/10.3390/app13063412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Currently, artificial intelligence is facing several problems with its practical implementation in various application domains. The explainability of advanced artificial intelligence algorithms is a topic of paramount importance, and many discussions have been held recently. Pioneering and classical machine learning and deep learning models behave as black boxes, constraining the logical interpretations that the end users desire. Artificial intelligence applications in industry, medicine, agriculture, and social sciences require the users’ trust in the systems. Users are always entitled to know why and how each method has made a decision and which factors play a critical role. Otherwise, they will always be wary of using new techniques. This paper discusses the nature of fuzzy cognitive maps (FCMs), a soft computational method to model human knowledge and provide decisions handling uncertainty. Though FCMs are not new to the field, they are evolving and incorporate recent advancements in artificial intelligence, such as learning algorithms and convolutional neural networks. The nature of FCMs reveals their supremacy in transparency, interpretability, transferability, and other aspects of explainable artificial intelligence (XAI) methods. The present study aims to reveal and defend the explainability properties of FCMs and to highlight their successful implementation in many domains. Subsequently, the present study discusses how FCMs cope with XAI directions and presents critical examples from the literature that demonstrate their superiority. The study results demonstrate that FCMs are both in accordance with the XAI directives and have many successful applications in domains such as medical decision-support systems, precision agriculture, energy savings, environmental monitoring, and policy-making for the public sector.
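For readers unfamiliar with the mechanics behind this transparency claim, the minimal sketch below shows a standard FCM activation update: concept activations spread through a signed weight matrix and are squashed by a sigmoid until they stabilize. The concepts, weights, and lambda value are invented for illustration and are not taken from the cited paper.

    # Minimal, hypothetical fuzzy cognitive map (FCM) iteration.
    import numpy as np

    concepts = ["symptom", "risk_factor", "diagnosis"]
    W = np.array([            # W[i, j] = causal influence of concept i on concept j
        [0.0, 0.0, 0.7],
        [0.0, 0.0, 0.5],
        [0.0, 0.0, 0.0],
    ])

    def sigmoid(x, lam=1.0):
        return 1.0 / (1.0 + np.exp(-lam * x))

    a = np.array([0.8, 0.6, 0.0])      # initial concept activations
    for _ in range(10):                # iterate toward a fixed point
        a = sigmoid(a + a @ W)         # commonly used FCM update rule

    # Each final activation is directly traceable to the causal weights,
    # which is the transparency property the paper argues for.
    print(dict(zip(concepts, np.round(a, 3))))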
41

Kim, Mi-Young, Shahin Atakishiyev, Housam Khalifa Bashier Babiker, Nawshad Farruque, Randy Goebel, Osmar R. Zaïane, Mohammad-Hossein Motallebi, et al. "A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence." Machine Learning and Knowledge Extraction 3, no. 4 (November 18, 2021): 900–921. http://dx.doi.org/10.3390/make3040045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver tools and techniques that address the XAI demand. As some surveys of current XAI suggest, there is yet to appear a principled framework that respects the literature on explainability in the history of science and provides a basis for the development of transparent XAI. We identify four foundational components, including the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjusting explanations based on knowledge of the explainee, and (4) exploiting the advantage of interactive explanation. With those four components in mind, we provide a strategic inventory of XAI requirements, demonstrate their connection to a basic history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.
42

Challa, Narayana. "Demystifying AI: Navigating the Balance between Precision and Comprehensibility with Explainable Artificial Intelligence." International Journal of Computing and Engineering 5, no. 1 (January 5, 2024): 12–17. http://dx.doi.org/10.47941/ijce.1603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Integrating Artificial Intelligence (AI) into daily life has brought transformative changes, ranging from personalized recommendations on streaming platforms to advancements in medical diagnostics. However, concerns about the transparency and interpretability of AI models, particularly deep neural networks, have become prominent. This paper explores the emerging paradigm of Explainable Artificial Intelligence (XAI) as a crucial response to these concerns. Delving into the multifaceted challenges posed by AI complexity, the study emphasizes the critical significance of interpretability. It examines how XAI is fundamentally reshaping the landscape of artificial intelligence, seeking to reconcile precision with the transparency necessary for widespread acceptance.
43

Althoff, Daniel, Helizani Couto Bazame, and Jessica Garcia Nascimento. "Untangling hybrid hydrological models with explainable artificial intelligence." H2Open Journal 4, no. 1 (January 1, 2021): 13–28. http://dx.doi.org/10.2166/h2oj.2021.066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Hydrological models are valuable tools for developing streamflow predictions in unmonitored catchments and for increasing our understanding of hydrological processes. A recent effort has been made in the development of hybrid (conceptual/machine learning) models that can preserve some of the hydrological processes represented by conceptual models and improve streamflow predictions. However, these studies have not explored how the data-driven component of hybrid models resolves runoff routing. In this study, explainable artificial intelligence (XAI) techniques are used to turn a ‘black-box’ model into a ‘glass-box’ model. The hybrid models reduced the root-mean-square error of the simulated streamflow values by approximately 27, 50, and 24% for stations 17120000, 27380000, and 33680000, respectively, relative to the traditional method. XAI techniques helped unveil the importance of accounting for soil moisture in hydrological models. Differing from purely data-driven hydrological models, the inclusion of the production storage in the proposed hybrid model, which is responsible for estimating the water balance, reduced the short- and long-term dependencies of input variables for streamflow prediction. In addition, soil moisture controlled water percolation, which was the main predictor of streamflow; this is because soil moisture governs the underlying mechanisms of groundwater flow into river streams.
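The percentage figures above are relative error reductions; presumably they follow the usual comparison below (the underlying RMSE values themselves are not reported in this abstract):

    \[
    \text{reduction}\,(\%) = \frac{\mathrm{RMSE}_{\text{traditional}} - \mathrm{RMSE}_{\text{hybrid}}}{\mathrm{RMSE}_{\text{traditional}}} \times 100
    \]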
44

Clement, Tobias, Nils Kemmerzell, Mohamed Abdelaal, and Michael Amberg. "XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process." Machine Learning and Knowledge Extraction 5, no. 1 (January 11, 2023): 78–108. http://dx.doi.org/10.3390/make5010006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Currently, explainability represents a major barrier that Artificial Intelligence (AI) faces with regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to start developing XAI software and to select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process: requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude by highlighting new directions for future research.
45

Mishra, Sunny, Amit K. Shukla, and Pranab K. Muhuri. "Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient and Explainable Solution." Axioms 11, no. 10 (September 20, 2022): 489. http://dx.doi.org/10.3390/axioms11100489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An explainable artificial intelligence (XAI) agent is an autonomous agent that uses a fundamental XAI model at its core to perceive its environment and suggest actions to be performed. One of the significant challenges for these XAI agents is performing their operation efficiently, which is governed by the underlying inference and optimization system. Along these lines, the Explainable Fuzzy AI Challenge (XFC 2022) competition was launched, whose principal objective was to develop a fully autonomous and optimized XAI algorithm that could play the Python arcade game “Asteroid Smasher”. This research first investigates inference models to implement an efficient XAI agent using rule-based fuzzy systems. We also discuss the proposed approach (which won the competition) to attain efficiency in the XAI algorithm. We explore the potential of the widely used Mamdani- and TSK-based fuzzy inference systems and investigate which model might have a more optimized implementation. Even though the TSK-based model outperforms Mamdani in several applications, no empirical evidence suggests this will also hold when implementing an XAI agent. Experiments are then performed to find the better-performing inference system in a fast-paced environment. The thorough analysis finds TSK-based XAI agents to be more robust and efficient than Mamdani-based fuzzy inference systems.
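To make the Mamdani/TSK distinction concrete, the sketch below contrasts the two inference styles for a single made-up input: TSK combines crisp rule consequents by a weighted average, while Mamdani clips fuzzy consequent sets, aggregates them, and defuzzifies by centroid. The membership functions and the distance/thrust variables are illustrative assumptions, not the rules of the winning agent.

    # Hypothetical one-input comparison of TSK and Mamdani fuzzy inference.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    x = 50.0                                 # crisp input: distance to asteroid
    near = tri(x, 0.0, 20.0, 60.0)           # firing strength of rule "near"
    far = tri(x, 40.0, 80.0, 120.0)          # firing strength of rule "far"

    # TSK: crisp consequents (full thrust vs. low thrust), weighted-average output.
    tsk_out = (near * 1.0 + far * 0.2) / (near + far)

    # Mamdani: consequent sets clipped by firing strength, aggregated by max,
    # then defuzzified with the centroid of the aggregated set.
    y = np.linspace(0.0, 1.0, 501)           # thrust universe of discourse
    high = tri(y, 0.4, 0.7, 1.0)
    low = tri(y, 0.0, 0.3, 0.6)
    agg = np.maximum(np.minimum(near, high), np.minimum(far, low))
    mamdani_out = np.trapz(agg * y, y) / np.trapz(agg, y)

    print(f"TSK thrust: {tsk_out:.3f}  Mamdani thrust: {mamdani_out:.3f}")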
46

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building additional knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile, and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
47

Rajabi, Enayat, and Somayeh Kafaie. "Knowledge Graphs and Explainable AI in Healthcare." Information 13, no. 10 (September 28, 2022): 459. http://dx.doi.org/10.3390/info13100459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Building trust and transparency in healthcare can be achieved using eXplainable Artificial Intelligence (XAI), as it facilitates the decision-making process for healthcare professionals. Knowledge graphs can be used in XAI for explainability by structuring information, extracting features and relations, and performing reasoning. This paper highlights the role of knowledge graphs in XAI models in healthcare, considering a state-of-the-art review. Based on our review, knowledge graphs have been used for explainability to detect healthcare misinformation, adverse drug reactions, drug-drug interactions and to reduce the knowledge gap between healthcare experts and AI-based models. We also discuss how to leverage knowledge graphs in pre-model, in-model, and post-model XAI models in healthcare to make them more explainable.
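As a toy example of the post-model use described here, the sketch below stores a few invented drug-related facts in a small knowledge graph and surfaces a shared entity as a human-readable hint for a possible drug-drug interaction. The entities, relations, and the networkx-based helper are illustrative assumptions, not content from the reviewed systems.

    # Hypothetical knowledge-graph-backed explanation for a drug-drug interaction.
    import networkx as nx

    kg = nx.DiGraph()
    kg.add_edge("warfarin", "CYP2C9", relation="metabolized_by")
    kg.add_edge("fluconazole", "CYP2C9", relation="inhibits")
    kg.add_edge("warfarin", "bleeding", relation="adverse_effect")

    def explain_interaction(graph, drug_a, drug_b):
        """List entities both drugs connect to, with the connecting relations."""
        shared = set(graph.successors(drug_a)) & set(graph.successors(drug_b))
        return [
            f"{drug_a} --{graph[drug_a][n]['relation']}--> {n} "
            f"<--{graph[drug_b][n]['relation']}-- {drug_b}"
            for n in shared
        ]

    print(explain_interaction(kg, "warfarin", "fluconazole"))
    # e.g. ['warfarin --metabolized_by--> CYP2C9 <--inhibits-- fluconazole']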
48

Thalpage, Nipuna. "Unlocking the Black Box: Explainable Artificial Intelligence (XAI) for Trust and Transparency in AI Systems." Journal of Digital Art & Humanities 4, no. 1 (June 26, 2023): 31–36. http://dx.doi.org/10.33847/2712-8148.4.1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical field in AI research, addressing the lack of transparency and interpretability in complex AI models. This conceptual review explores the significance of XAI in promoting trust and transparency in AI systems. The paper analyzes existing literature on XAI, identifies patterns and gaps, and presents a coherent conceptual framework. Various XAI techniques, such as saliency maps, attention mechanisms, rule-based explanations, and model-agnostic approaches, are discussed to enhance interpretability. The paper highlights the challenges posed by black-box AI models, explores the role of XAI in enhancing trust and transparency, and examines the ethical considerations and responsible deployment of XAI. By promoting transparency and interpretability, this review aims to build trust, encourage accountable AI systems, and contribute to the ongoing discourse on XAI.
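One of the techniques listed above, the saliency map, can be sketched in a few lines: the absolute gradient of a class score with respect to the input marks the pixels that most influence the prediction. The tiny PyTorch model below is a stand-in chosen for brevity and is not a model from the reviewed literature.

    # Hypothetical gradient-saliency sketch for a toy image classifier.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)   # dummy input "image"
    score = model(x)[0].max()                          # top class score
    score.backward()                                   # d(score) / d(input)

    saliency = x.grad.abs().squeeze()                  # 28x28 importance map
    print(saliency.shape, saliency.max().item())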
49

Deshmukh, Araddhana Arvind. "Explainable AI for Adversarial Machine Learning: Enhancing Transparency and Trust in Cyber Security." Journal of Electrical Systems 20, no. 1s (March 28, 2024): 11–27. http://dx.doi.org/10.52783/jes.749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Explainable artificial intelligence (XAI) is essential for improving the interpretability, transparency, and reliability of machine learning models, especially in challenging and important fields like cybersecurity. This abstract addresses approaches, frameworks, and evaluation criteria for putting XAI techniques into practice and comparing them, and offers an overview of the important components of XAI in the context of adversarial machine learning. The discussion covers model-agnosticism, global and local explanation, resistance to adversarial attacks, interpretability, computational efficiency, and scalability. Notably, the proposed SHIME approach performs well across several of these dimensions, making it a promising solution. The conclusion emphasizes the need to weigh XAI solutions carefully against particular application requirements, opening the door for future developments that address evolving challenges at the nexus of cybersecurity and artificial intelligence.
50

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are black-box models that lack explainability, there is an increasing trend of creating medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. The current study discusses various XAI-driven healthcare approaches and their performance, the toolkits used for local and global post hoc explainability, and the techniques pertaining to Rationale, Data, and Performance explainability. Methods. Explainability of AI models in the healthcare domain is implemented through Local Interpretable Model-Agnostic Explanations and Shapley Additive Explanations for better comprehensibility of the internal working mechanisms of the original AI models and the correlations among the features that influence the model's decision. Results. The current state of the art in XAI-based technologies and future directions for XAI are reported, along with research findings on various implementation aspects, including research challenges and the limitations of existing models. The role of XAI in the healthcare domain, ranging from early prediction of future illness to smart disease diagnosis, is discussed. The metrics considered in evaluating a model's explainability are presented, along with various explainability tools. Three case studies on the role of XAI in healthcare, with their performance, are included for better comprehensibility. Conclusion. The future perspective of XAI in healthcare will assist in obtaining research insights in the healthcare domain.
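The LIME usage pattern mentioned in the Methods section can be sketched as follows: a local surrogate model explains one prediction of a clinical classifier. The dataset, feature names, and classifier below are synthetic placeholders, not the case-study data from the article.

    # Hypothetical LIME explanation of a single prediction from a tabular classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(1)
    feature_names = ["age", "blood_pressure", "glucose", "bmi"]
    X = rng.normal(size=(300, 4))
    y = (X[:, 2] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic "disease" label

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)

    explainer = LimeTabularExplainer(
        X, feature_names=feature_names, class_names=["healthy", "disease"],
        mode="classification",
    )
    explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
    print(explanation.as_list())   # per-feature contributions for this one patient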
