
Journal articles on the topic 'XAI Interpretability'


Consult the top journal articles for your research on the topic 'XAI Interpretability.'


1

Thalpage, Nipuna. "Unlocking the Black Box: Explainable Artificial Intelligence (XAI) for Trust and Transparency in AI Systems." Journal of Digital Art & Humanities 4, no. 1 (2023): 31–36. http://dx.doi.org/10.33847/2712-8148.4.1_4.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical field in AI research, addressing the lack of transparency and interpretability in complex AI models. This conceptual review explores the significance of XAI in promoting trust and transparency in AI systems. The paper analyzes existing literature on XAI, identifies patterns and gaps, and presents a coherent conceptual framework. Various XAI techniques, such as saliency maps, attention mechanisms, rule-based explanations, and model-agnostic approaches, are discussed to enhance interpretability. The paper highlights the challeng
2

Verma, Ashish. "Advancements in Explainable AI: Bridging the Gap Between Interpretability and Performance in Machine Learning Models." International Journal of Machine Learning, AI & Data Science Evolution 1, no. 01 (2025): 1–8. https://doi.org/10.63665/ijmlaidse.v1i1.01.

Abstract:
The growing adoption of Artificial Intelligence (AI) and Machine Learning (ML) in critical decision-making areas such as healthcare, finance, and autonomous systems has raised concerns regarding the interpretability of these models. While deep learning and other advanced ML models deliver high accuracy, their "black box" nature makes it difficult to explain their decision-making processes. Explainable AI (XAI) aims to bridge this gap by introducing methods that enhance transparency without significantly compromising performance. This paper explores key advancements in XAI, including model-agno
3

Mohan, Raja Pulicharla. "Explainable AI in the Context of Data Engineering: Unveiling the Black Box in the Pipeline." Explainable AI in the Context of Data Engineering: Unveiling the Black Box in the Pipeline 9, no. 1 (2024): 6. https://doi.org/10.5281/zenodo.10623633.

Abstract:
The burgeoning integration of Artificial Intelligence (AI) into data engineering pipelines has spurred phenomenal advancements in automation, efficiency, and insights. However, the opaqueness of many AI models, often referred to as "black boxes," raises concerns about trust, accountability, and interpretability. Explainable AI (XAI) emerges as a critical bridge between the power of AI and the human stakeholders in data engineering workflows. This paper delves into the symbiotic relationship between XAI and data engineering, exploring how XAI tools and techniques can enhance the transparency, t
4

Milad, Akram, and Mohamed Whiba. "Exploring Explainable Artificial Intelligence Technologies: Approaches, Challenges, and Applications." International Science and Technology Journal 34, no. 1 (2024): 1–21. http://dx.doi.org/10.62341/amia8430.

Abstract:
This research paper delves into the transformative domain of Explainable Artificial Intelligence (XAI) in response to the evolving complexities of artificial intelligence and machine learning. Navigating through XAI approaches, challenges, applications, and future directions, the paper emphasizes the delicate balance between model accuracy and interpretability. Challenges such as the trade-off between accuracy and interpretability, explaining black-box models, privacy concerns, and ethical considerations are comprehensively addressed. Real-world applications showcase XAI's potential in healthc
5

Duggal, Bhanu. "Explainable AI for Fraud Detection in Financial Transactions." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem44356.

Abstract:
Explainable AI (XAI) improves machine learning models’ interpretability, especially for detecting financial fraud. Financial fraud is a growing threat, with criminals using increasingly sophisticated methods to circumvent standard security measures. This research article investigates various XAI strategies for increasing transparency and confidence in fraud detection algorithms. The study examines the efficacy of SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms in providing insight into model predictions. We
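The SHAP workflow this abstract describes can be sketched in a few lines. Below is a minimal, hypothetical example assuming a scikit-learn random forest trained on a synthetic, imbalanced stand-in for transaction data; none of the data, models, or settings are taken from the paper itself.

```python
# Hedged sketch: ranking the features behind fraud-style predictions with SHAP.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic, heavily imbalanced stand-in for a transactions dataset.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.97], random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact Shapley values for tree ensembles
sv = explainer.shap_values(X[:200])
# Older shap versions return a list per class; newer ones return a 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
ranking = np.abs(sv_pos).mean(axis=0).argsort()[::-1]
print("most influential features:", ranking[:5])
```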
6

Ramakrishna, Jeevakala Siva, Sonagiri China Venkateswarlu, Kommu Naveen Kumar, and Parikipandla Shreya. "Development of explainable machine intelligence models for heart sound abnormality detection." Indonesian Journal of Electrical Engineering and Computer Science 36, no. 2 (2024): 846. http://dx.doi.org/10.11591/ijeecs.v36.i2.pp846-853.

Abstract:
Developing explainable machine intelligence (XAI) models for heart sound abnormality detection is a crucial area of research aimed at improving the interpretability and transparency of machine learning algorithms in medical diagnostics. In this study, we propose a framework for building XAI models that can effectively detect abnormalities in heart sounds while providing interpretable explanations for their predictions. We leverage techniques such as SHapley additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) to generate explanations for model predictions, en
7

Ozdemir, Olcar. "Explainable AI (XAI) in Healthcare: Bridging the Gap between Accuracy and Interpretability." Journal of Science, Technology and Engineering Research 1, no. 1 (2024): 32–44. https://doi.org/10.64206/0z78ev10.

Abstract:
Artificial Intelligence (AI) has demonstrated significant potential in revolutionizing healthcare by enhancing diagnostic accuracy, predicting patient outcomes, and optimizing treatment plans. However, the increasing reliance on complex, black-box models has raised critical concerns around transparency, trust, and accountability—particularly in high-stakes medical settings where interpretability is vital for clinical decision-making. This paper explores Explainable AI (XAI) as a solution to bridge the gap between model performance and human interpretability. We review current XAI techniques, i
8

Hutke, Ankush, Kiran Sahu, Ameet Mishra, Aniruddha Sawant, and Ruchitha Gowda. "Predict XAI." International Research Journal of Innovations in Engineering and Technology 09, no. 04 (2025): 172–76. https://doi.org/10.47001/irjiet/2025.904026.

Abstract:
Stroke predictors using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable stroke risk predictions. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) to ensure model transparency and interpretability. This paper presents the methodology, implem
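A rough sketch of the stacking pattern this abstract outlines, using scikit-learn's StackingClassifier; the base learners, meta-learner, and synthetic data are illustrative placeholders rather than the authors' configuration.

```python
# Hedged sketch: stacked ensemble of the model families named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=1)
stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=4)),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(),  # meta-learner combines base outputs
).fit(X, y)

# Because LIME and SHAP's KernelExplainer only need predict_proba, they apply
# to the stacked model exactly as they would to any single classifier.
print(stack.predict_proba(X[:1]))
```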
9

Amirineni, Sreenivasarao. "Enhancing Predictive Analytics in Business Intelligence through Explainable AI: A Case Study in Financial Products." Journal of Artificial Intelligence General Science (JAIGS) 6, no. 1 (2024): 258–88. http://dx.doi.org/10.60087/jaigs.v6i1.251.

Abstract:
Today, when the importance of data-based decision-making is impossible to question, the use of Explainable Artificial Intelligence (XAI) in business intelligence (BI) has inestimable benefits for the financial industry. This paper discusses how XAI influences predictive analytics in BI systems and how it may improve interpretability, and useful suggestions for financial product companies. Thus, within the context of this study, an XAI framework helps the financial institutions to employ higher-performing and more accurate models, like gradient boosting and neural networks, while sustaining int
10

Kamakshi, Vidhya, and Narayanan C. Krishnan. "Explainable Image Classification: The Journey So Far and the Road Ahead." AI 4, no. 3 (2023): 620–51. http://dx.doi.org/10.3390/ai4030033.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths,
11

Morrison, Katelyn, Mayank Jain, Jessica Hammer, and Adam Perer. "Eye into AI: Evaluating the Interpretability of Explainable AI Techniques through a Game with a Purpose." Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (2023): 1–22. http://dx.doi.org/10.1145/3610064.

Abstract:
Recent developments in explainable AI (XAI) aim to improve the transparency of black-box models. However, empirically evaluating the interpretability of these XAI techniques is still an open challenge. The most common evaluation method is algorithmic performance, but such an approach may not accurately represent how interpretable these techniques are to people. A less common but growing evaluation strategy is to leverage crowd-workers to provide feedback on multiple XAI techniques to compare them. However, these tasks often feel like work and may limit participation. We propose a novel, playfu
12

Metta, Carlo, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, and Fosca Giannotti. "Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence." Bioengineering 11, no. 4 (2024): 369. http://dx.doi.org/10.3390/bioengineering11040369.

Abstract:
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By
13

Sewada, Ranu, Ashwani Jangid, Piyush Kumar, and Neha Mishra. "Explainable Artificial Intelligence (XAI)." Journal of Nonlinear Analysis and Optimization 13, no. 01 (2023): 41–47. http://dx.doi.org/10.36893/jnao.2022.v13i02.041-047.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical facet in the realm of machine learning and artificial intelligence, responding to the increasing complexity of models, particularly deep neural networks, and the subsequent need for transparent decision making processes. This research paper delves into the essence of XAI, unraveling its significance across diverse domains such as healthcare, finance, and criminal justice. As a countermeasure to the opacity of intricate models, the paper explores various XAI methods and techniques, including LIME and SHAP, weighing their interp
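For readers unfamiliar with how a LIME explanation of the kind this entry weighs is produced, a minimal sketch follows; the dataset and classifier are arbitrary stand-ins, not drawn from the paper.

```python
# Hedged sketch: one local LIME explanation for a tabular classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 mode="classification")
# LIME fits a sparse linear surrogate in the neighborhood of one instance.
exp = explainer.explain_instance(data.data[0], model.predict_proba,
                                 num_features=5)
print(exp.as_list())  # top features with their local weights
```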
14

K, Kiran. "Crop Recommendation System with XAI." International Journal of Scientific Research in Engineering and Management 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem32331.

Abstract:
Modern agriculture is increasingly leveraging advanced technologies like XAI for improving crop recommendation systems. This paper presents a unique approach that integrates XAI methodologies into crop recommendation frameworks to improve transparency and interpretability. By harnessing machine learning models such as classification & regression techniques and ensemble techniques alongside XAI explanations, our system offers personalized crop suggestions based on environmental and geographic data. Our study contributes to the agricultural sector by enhancing the transparency of crop recomm
15

Imam, Niddal H. "Adversarial Examples on XAI-Enabled DT for Smart Healthcare Systems." Sensors 24, no. 21 (2024): 6891. http://dx.doi.org/10.3390/s24216891.

Abstract:
There have recently been rapid developments in smart healthcare systems, such as precision diagnosis, smart diet management, and drug discovery. These systems require the integration of the Internet of Things (IoT) for data acquisition, Digital Twins (DT) for data representation into a digital replica and Artificial Intelligence (AI) for decision-making. DT is a digital copy or replica of physical entities (e.g., patients), one of the emerging technologies that enable the advancement of smart healthcare systems. AI and Machine Learning (ML) offer great benefits to DT-based smart healthcare sys
16

Ojo, Innocent Paul, and Ashna Tomy. "Explainable AI for credit card fraud detection: Bridging the gap between accuracy and interpretability." World Journal of Advanced Research and Reviews 25, no. 2 (2025): 1246–56. https://doi.org/10.30574/wjarr.2025.25.2.0492.

Abstract:
Credit card fraud poses a persistent threat to the financial sector, demanding robust and transparent detection systems. This study aims to address the balance between accuracy and interpretability in fraud detection models by applying Explainable AI (XAI) techniques. Using a publicly available dataset from Kaggle, we explored multiple machine learning models, including Random Forest and Gradient Boosting, to classify fraudulent transactions. Given the imbalanced nature of the dataset, SMOTE was used for oversampling to ensure model fairness. The XAI techniques SHAP and LIME were employed to p
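The SMOTE step mentioned in this abstract is commonly wired into a pipeline so oversampling touches only the training folds. A hypothetical sketch under that assumption, with synthetic data standing in for the Kaggle dataset:

```python
# Hedged sketch: oversample-then-classify pipeline for imbalanced fraud data.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=12,
                           weights=[0.98], random_state=0)
# imblearn's Pipeline applies SMOTE only while fitting each training fold,
# so validation folds keep their original class imbalance.
pipe = Pipeline([("smote", SMOTE(random_state=0)),
                 ("clf", RandomForestClassifier(n_estimators=200,
                                                random_state=0))])
print(cross_val_score(pipe, X, y, scoring="f1", cv=3))
```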
17

Ganguly, Rita, Dharmpal Singh, and Rajesh Bose. "The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset." Scientific Temper 16, no. 05 (2025): 4165–70. https://doi.org/10.58414/scientifictemper.2025.16.5.01.

Abstract:
The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by enhancing transparency in AI-driven decisions. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models—logistic regression, decision trees, random forests, and deep learning—alongside SHAP and LIME explainability techniques. Data pre-processin
18

Deshmukh, Araddhana Arvind. "Explainable AI for Adversarial Machine Learning: Enhancing Transparency and Trust in Cyber Security." Journal of Electrical Systems 20, no. 1s (2024): 11–27. http://dx.doi.org/10.52783/jes.749.

Abstract:
Explainable artificial intelligence (XAI) is essential for improving machine learning models' interpretability, transparency, and reliability, especially in challenging and important fields like cybersecurity. This abstract addresses approaches, structures, and evaluation criteria for putting XAI techniques into practice and comparing them, as well as offering a thorough understanding of all the important components of XAI in the context of adversarial machine learning. Model-agnosticism, global/local explanation, adversarial assault resistance, interpretability, computing efficiency, and scal
19

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (2022): 3926. http://dx.doi.org/10.3390/app12083926.

Abstract:
Fake media, generated by methods such as deepfakes, have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability on deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI on image clas
20

Jung, Jinsun, and Hyeoneui Kim. "Evaluating the Effectiveness of Explainable Artificial Intelligence Approaches (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23528–29. http://dx.doi.org/10.1609/aaai.v38i21.30458.

Abstract:
Explainable Artificial Intelligence (XAI), a promising future technology in the field of healthcare, has attracted significant interest. Despite ongoing efforts in the development of XAI approaches, there has been inadequate evaluation of explanation effectiveness and no standardized framework for the evaluation has been established. This study aims to examine the relationship between subjective interpretability and perceived plausibility for various XAI explanations and to determine the factors affecting users' acceptance of the XAI explanation.
21

Ibrahim, Riza, and Hilda Azkiyah. "The Role of Explainable Artificial Intelligence (XAI) in Drug Discovery: A Study of Opportunities and Barriers to Implementation." International Journal of Health, Medicine, and Sports 3, no. 2 (2025): 49–53. https://doi.org/10.46336/ijhms.v3i2.216.

Abstract:
Drug discovery is a complex, lengthy, and costly process with a high failure rate, especially during clinical trials. The integration of Artificial Intelligence (AI) has revolutionized various stages of drug discovery by enabling faster and more accurate analysis of biological and chemical data. However, most AI models in this field operate as “black boxes,” where their decision-making processes are opaque and difficult to interpret. This lack of transparency poses significant challenges in terms of trust, validation, and adoption of AI-generated predictions in both clinical and regulatory set
22

Imam, Nasir Musa, Abubakar Ibrahim, and Mohit Tiwari. "Explainable Artificial Intelligence (XAI) Techniques To Enhance Transparency In Deep Learning Models." IOSR Journal of Computer Engineering 26, no. 6 (2024): 29–36. http://dx.doi.org/10.9790/0661-2606012936.

Abstract:
Deep learning has revolutionized many fields but has caused the 'black-box' problem, where model predictions are not interpretable or transparent. Explainable Artificial Intelligence (XAI) attempts to overcome this problem with the help of interpretability and transparency in AI systems. We review important XAI methods, focusing on LIME, SHAP, and saliency maps, that explain the elements behind model predictions. The paper discusses the role of Explainable Artificial Intelligence (XAI) in high-stakes fields such as healthcare, finance, and autonomous systems, emphasizing why trust is important
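Among the techniques this entry reviews, a vanilla gradient saliency map is the simplest to reproduce. A hedged PyTorch sketch, with a randomly initialized network and random input standing in for any trained model and real image:

```python
# Hedged sketch: input-gradient saliency, the simplest saliency-map method.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()          # placeholder, untrained network
x = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(x)[0].max()   # logit of the top-scoring class
score.backward()            # gradient of that logit w.r.t. every input pixel

# Per-pixel importance: take the largest absolute gradient across channels.
saliency = x.grad.abs().max(dim=1).values      # shape (1, 224, 224)
print(saliency.shape)
```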
23

Setia, Jishnu. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predic
24

Devarajan, Harshitha Raghavan. "Explainable AI for Cloud-Based Machine Learning: Interpretable Models and Transparency in Decision Making." Tuijin Jishu/Journal of Propulsion Technology 45, no. 02 (2024): 2886–94. http://dx.doi.org/10.52783/tjjpt.v45.i02.6376.

Abstract:
As machine learning models become increasingly complex and ubiquitous in cloud-based applications, the need for interpretability and transparency in decision making has become paramount. Explainable AI (XAI) techniques aim to provide insights into the inner workings of machine learning models, thereby enhancing their interpretability and facilitating trust among users. In this paper, we delve into the significance of XAI in cloud-based machine learning environments, emphasizing the importance of interpretable models and transparent decision-making processes. [1] XAI epitomizes a paradigm shift
25

Ghorpade, V. S., Pradnya A. Jadhav, R. S. Jadhav, Satish N. Gujar, Wankhede Vishal Ashok, and Shraddha V. Pandit. "Exploring explainable AI in pharmaceutical decision-making: Bridging the gap between black box models and clinical insights." Journal of Statistics and Management Systems 27, no. 2 (2024): 225–36. http://dx.doi.org/10.47974/jsms-1249.

Abstract:
This study explores the crucial nexus between artificial intelligence (AI) and clinical decision-making in pharmaceuticals, highlighting the need to close the growing gap between black box models and clinical insights. The opaqueness of black box models creates questions about regulatory compliance, interpretability, and transparency as AI becomes more and more integrated into clinical applications, drug development, and discovery processes. Acknowledging the importance of Explainable AI (XAI) in this regard, we thoroughly examine XAI methods, focusing on their use in medical environments. The
26

Nkoro, Ebuka Chinaechetam, Judith Nkechinyere Njoku, Cosmas Ifeanyi Nwakanma, Jae-Min Lee, and Dong-Seong Kim. "Zero-Trust Marine Cyberdefense for IoT-Based Communications: An Explainable Approach." Electronics 13, no. 2 (2024): 276. http://dx.doi.org/10.3390/electronics13020276.

Abstract:
Integrating Explainable Artificial Intelligence (XAI) into marine cyberdefense systems can address the lack of trustworthiness and low interpretability inherent in complex black-box Network Intrusion Detection Systems (NIDS) models. XAI has emerged as a pivotal focus in achieving a zero-trust cybersecurity strategy within marine communication networks. This article presents the development of a zero-trust NIDS framework designed to detect contemporary marine cyberattacks, utilizing two modern datasets (2023 Edge-IIoTset and 2023 CICIoT). The zero-trust NIDS model achieves an optimal Matthews C
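The Matthews correlation coefficient the abstract reports as its headline metric is a single call in scikit-learn; the labels below are toy values, not the paper's results.

```python
# Hedged sketch: computing MCC, a balanced metric well suited to rare-attack
# intrusion-detection data where plain accuracy can mislead.
from sklearn.metrics import matthews_corrcoef

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 1, 0, 0, 1, 1]
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1
```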
27

Koli, Sarvesh, Komal Bhat, Prajwal Korade, Deepak Mane, Anand Magar, and Om Khode. "Unlocking Machine Learning Model Decisions: A Comparative Analysis of LIME and SHAP for Enhanced Interpretability." Journal of Electrical Systems 20, no. 2s (2024): 598–613. http://dx.doi.org/10.52783/jes.1480.

Abstract:
XAI is critical for establishing trust and enabling the appropriate development of machine learning models. By offering transparency into how these models make judgements, XAI enables researchers and users to uncover potential biases, admit limits, and eventually enhance the fairness and dependability of AI systems. In this paper, we demonstrate two techniques, LIME and SHAP, used to improve the interpretability of machine learning models. Assessing Explainable AI (XAI) approaches is critical in searching for transparent and interpretable artificial intelligence (AI) models. Explainable AI (X
28

Koli, Sarvesh, Komal Bhat, Prajwal Korade, Deepak Mane, Anand Magar, and Om Khode. "Unlocking Machine Learning Model Decisions: A Comparative Analysis of LIME and SHAP for Enhanced Interpretability." Journal of Electrical Systems 20, no. 2s (2024): 1252–67. http://dx.doi.org/10.52783/jes.1768.

Abstract:
XAI is critical for establishing trust and enabling the appropriate development of machine learning models. By offering transparency into how these models make judgements, XAI enables researchers and users to uncover potential biases, admit limits, and eventually enhance the fairness and dependability of AI systems. In this paper, we demonstrate two techniques, LIME and SHAP, used to improve the interpretability of machine learning models. Assessing Explainable AI (XAI) approaches is critical in searching for transparent and interpretable artificial intelligence (AI) models. Explainable AI (X
29

Lozano-Murcia, Catalina, Francisco P. Romero, Jesus Serrano-Guerrero, and Jose A. Olivas. "A Comparison between Explainable Machine Learning Methods for Classification and Regression Problems in the Actuarial Context." Mathematics 11, no. 14 (2023): 3088. http://dx.doi.org/10.3390/math11143088.

Abstract:
Machine learning, a subfield of artificial intelligence, emphasizes the creation of algorithms capable of learning from data and generating predictions. However, in actuarial science, the interpretability of these models often presents challenges, raising concerns about their accuracy and reliability. Explainable artificial intelligence (XAI) has emerged to address these issues by facilitating the development of accurate and comprehensible models. This paper conducts a comparative analysis of various XAI approaches for tackling distinct data-driven insurance problems. The machine learning meth
30

Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.

Abstract:
Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in critical domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks to enhance the interpretability of ML models, focusing on techniques like feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, this study highli
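Of the techniques this abstract lists, a global surrogate model is the easiest to illustrate: train an interpretable model to mimic the black box and measure how faithfully it agrees. A minimal sketch under those assumptions, with all models and data as placeholders:

```python
# Hedged sketch: a shallow decision tree as a global surrogate for a black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_tr, X_te, y_tr, _ = train_test_split(X, y, random_state=0)

blackbox = GradientBoostingClassifier().fit(X_tr, y_tr)
# The surrogate learns the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X_tr, blackbox.predict(X_tr))
fidelity = accuracy_score(blackbox.predict(X_te), surrogate.predict(X_te))
print(f"surrogate fidelity: {fidelity:.2f}")  # agreement with the black box
```

High fidelity means the readable tree is a trustworthy proxy for explaining the opaque model's global behavior; low fidelity means its explanations should not be trusted.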
31

Bhargava, Kumar, and Kumar Tejaswini. "Explainable AI in Finance and Investment Banking: Techniques, Applications, and Future Directions." Journal of Scientific and Engineering Research 9, no. 5 (2022): 119–24. https://doi.org/10.5281/zenodo.12666879.

Abstract:
The increasing reliance on artificial intelligence (AI) in the finance and investment banking industries underscores the need for clear and comprehensible models. Explainable AI (XAI) fulfills this requirement by making the decision-making processes of complex models transparent and comprehensible to human stakeholders. This paper explores the role of XAI in finance, examining prominent techniques such as LIME and SHAP, and their applications in credit scoring, fraud detection, algorithmic trading, and investment risk management. Additionally, we discuss the challenges and constraints of imple
32

Damaševičius, Robertas. "Explainable Artificial Intelligence Methods for Breast Cancer Recognition." Innovation Discovery 1, no. 3 (2024): 25. http://dx.doi.org/10.53964/id.2024025.

Abstract:
Breast cancer remains a leading cause of cancer-related mortality among women worldwide, necessitating early and accurate detection for effective treatment and improved survival rates. Artificial intelligence (AI) has shown significant potential in enhancing the diagnostic and prognostic capabilities in breast cancer recognition. However, the black-box nature of many AI models poses challenges for their clinical adoption due to the lack of transparency and interpretability. Explainable AI (XAI) methods address these issues by providing human-understandable explanations of AI models’ decision-m
33

Challa, Narayana. "Demystifying AI: Navigating the Balance between Precision and Comprehensibility with Explainable Artificial Intelligence." International Journal of Computing and Engineering 5, no. 1 (2024): 12–17. http://dx.doi.org/10.47941/ijce.1603.

Abstract:
Integrating Artificial Intelligence (AI) into daily life has brought transformative changes, ranging from personalized recommendations on streaming platforms to advancements in medical diagnostics. However, concerns about the transparency and interpretability of AI models, particularly deep neural networks, have become prominent. This paper explores the emerging paradigm of Explainable Artificial Intelligence (XAI) as a crucial response to address these concerns. Delving into the multifaceted challenges posed by AI complexity, the study emphasizes the critical significance of interpretability. It examin
34

Kashyap, Gaurav. "Explainable AI (XAI): Methods and Techniques to Make Deep Learning Models More Interpretable and Their Real-World Implications." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 11, no. 4 (2023): 1–7. https://doi.org/10.5281/zenodo.14382747.

Abstract:
The goal of the developing field of explainable artificial intelligence (XAI) is to make complex AI models, especially deep learning (DL) models, which are frequently criticized for being "black boxes," more interpretable. Understanding how deep learning models make decisions is becoming crucial for accountability, fairness, and trust as deep learning is used more and more in various industries. This paper offers a thorough analysis of the strategies and tactics used to improve the interpretability of deep learning models, including hybrid approaches, post-hoc explanations, and model-specific s
35

Bhatnagar, Shweta, and Rashmi Agrawal. "Understanding explainable artificial intelligence techniques: a comparative analysis for practical application." Bulletin of Electrical Engineering and Informatics 13, no. 6 (2024): 4451–55. http://dx.doi.org/10.11591/eei.v13i6.8378.

Abstract:
Explainable artificial intelligence (XAI) uses artificial intelligence (AI) tools and techniques to build interpretability in black-box algorithms. XAI methods are classified based on their purpose (pre-model, in-model, and post-model), scope (local or global), and usability (model-agnostic and model-specific). XAI methods and techniques were summarized in this paper with real-life examples of XAI applications. Local interpretable model-agnostic explanations (LIME) and shapley additive explanations (SHAP) methods were applied to the moral dataset to compare the performance outcomes of these tw
36

Hamim, Sultanul Arifeen, Mubasshar U. I. Tamim, M. F. Mridha, Mejdl Safran, and Dunren Che. "SmartSkin-XAI: An Interpretable Deep Learning Approach for Enhanced Skin Cancer Diagnosis in Smart Healthcare." Diagnostics 15, no. 1 (2024): 64. https://doi.org/10.3390/diagnostics15010064.

Abstract:
Background: Skin cancer, particularly melanoma, poses significant challenges due to the heterogeneity of skin images and the demand for accurate and interpretable diagnostic systems. Early detection and effective management are crucial for improving patient outcomes. Traditional AI models often struggle with balancing accuracy and interpretability, which are critical for clinical adoption. Methods: The SmartSkin-XAI methodology incorporates a fine-tuned DenseNet121 model combined with XAI techniques to interpret predictions. This approach improves early detection and patient management by offe
37

Hoffmann, Rudolf, and Christoph Reich. "A Systematic Literature Review on Artificial Intelligence and Explainable Artificial Intelligence for Visual Quality Assurance in Manufacturing." Electronics 12, no. 22 (2023): 4572. http://dx.doi.org/10.3390/electronics12224572.

Abstract:
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, thereby making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest in visual inspection. Next to AI methods, the explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are interesting methods for achie
38

Jain, R. "Transparency in AI Decision Making: A Survey of Explainable AI Methods and Applications." Advances in Robotic Technology 2, no. 1 (2024): 1–10. http://dx.doi.org/10.23880/art-16000110.

Abstract:
Artificial Intelligence (AI) systems have become pervasive in numerous facets of modern life, wielding considerable influence in critical decision-making realms such as healthcare, finance, criminal justice, and beyond. Yet, the inherent opacity of many AI models presents significant hurdles concerning trust, accountability, and fairness. To address these challenges, Explainable AI (XAI) has emerged as a pivotal area of research, striving to augment the transparency and interpretability of AI systems. This survey paper serves as a comprehensive exploration of the state-of-the-art in XAI method
39

Wang, Mini Han, Ruoyu Zhou, Zhiyuan Lin, et al. "Can Explainable Artificial Intelligence Optimize the Data Quality of Machine Learning Model? Taking Meibomian Gland Dysfunction Detections as a Case Study." Journal of Physics: Conference Series 2650, no. 1 (2023): 012025. http://dx.doi.org/10.1088/1742-6596/2650/1/012025.

Abstract:
Data quality plays a crucial role in computer-aided diagnosis (CAD) for ophthalmic disease detection. Various methodologies for data enhancement and preprocessing exist, with varying effectiveness and impact on model performance. However, the process of identifying the most effective approach usually involves time-consuming and resource-intensive experiments to determine optimal parameters. To address this issue, this study introduces a novel guidance framework that utilizes Explainable Artificial Intelligence (XAI) to enhance data quality. This method provides evidence of the signifi
40

Jinad, Razaq, ABM Islam, and Narasimha Shashidhar. "Interpretability and Transparency of Machine Learning in File Fragment Analysis with Explainable Artificial Intelligence." Electronics 13, no. 13 (2024): 2438. http://dx.doi.org/10.3390/electronics13132438.

Abstract:
Machine learning models are increasingly being used across diverse fields, including file fragment classification. As these models become more prevalent, it is crucial to understand and interpret their decision-making processes to ensure accountability, transparency, and trust. This research investigates the interpretability of four machine learning models used for file fragment classification through the lens of Explainable Artificial Intelligence (XAI) techniques. Specifically, we employ two prominent XAI methods, Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Ex
41

Zainuddin, Mohd. "The Role of Explainable AI in Enhancing Trust in Machine Learning Models." Open Access Journal of Multidisciplinary Research 1, no. 1 (2025): 1–3. https://doi.org/10.47760/oajmr.2025.v01i01.001.

Abstract:
Artificial Intelligence (AI) has advanced rapidly, but the "black-box" nature of many machine learning models has raised concerns about trust and interpretability. Explainable AI (XAI) is a growing field aimed at making AI decision-making processes understandable to humans. This paper explores the importance of XAI, the current methods used to achieve it, and its future impact across industries.
42

Sunkara, Goutham. "Explainable AI for cyber threat intelligence: Enhancing analyst trust." Open Access Research Journal of Science and Technology 14, no. 2 (2025): 029–40. https://doi.org/10.53022/oarjst.2025.14.2.0091.

Abstract:
With the rise of artificial intelligence (AI) in the ecosystem of contemporary cyber threat intelligence (CTI) platforms, the issue of AI-driven decision interpretability and transparency has become increasingly common. Although there is an increased ability in machine learning models to detect the complex and evolving cyber threats, this usually prevents human trust and restricts perceived actionable insights because they are black boxed and have a challenge of accountability. This paper discusses the use of Explainable Artificial Intelligence (XAI, including SHAP (SHapley Additive exPlanatio
43

Kumar, S. N. Arjun. "Explainable AI in Financial Forecasting Using Time Series Analysis." International Journal for Research in Applied Science and Engineering Technology 13, no. 4 (2025): 7155–59. https://doi.org/10.22214/ijraset.2025.70080.

Abstract:
Financial forecasting is a cornerstone of investment strategy, economic planning, and risk mitigation. With the advent of Artificial Intelligence (AI), models such as Long Short-Term Memory (LSTM) networks and other deep learning techniques have drastically improved forecasting accuracy. However, the lack of transparency in these models has raised concerns, particularly in regulatory and high-stakes environments. Explainable Artificial Intelligence (XAI) addresses this limitation by offering interpretability into model behavior and predictions. This paper investigates the integration of XAI me
44

Ali, Ahmed Hussein, and Marwan Ali Shnan. "Explainable AI: Methods, Challenges, and Future Directions." Applied Data Science and Analysis 2025 (January 15, 2025): 1–2. https://doi.org/10.58496/adsa/2025/001.

Abstract:
As artificial intelligence (AI)[1] systems become increasingly complex and pervasive, the need for transparency and interpretability has become a critical concern. Explainable AI (XAI)[2, 3] seeks to bridge the gap between opaque machine learning models and human users by providing insights into the decision-making processes of AI systems. This editorial explores the various methods employed in XAI, the challenges faced in achieving interpretability, and potential future directions for the field. The rapid adoption of AI in critical domains such as healthcare, finance, and criminal justice has
45

Mkhatshwa, Junior, Tatenda Kavu, and Olawande Daramola. "Analysing the Performance and Interpretability of CNN-Based Architectures for Plant Nutrient Deficiency Identification." Computation 12, no. 6 (2024): 113. http://dx.doi.org/10.3390/computation12060113.

Abstract:
Early detection of plant nutrient deficiency is crucial for agricultural productivity. This study investigated the performance and interpretability of Convolutional Neural Networks (CNNs) for this task. Using the rice and banana datasets, we compared three CNN architectures (CNN, VGG-16, Inception-V3). Inception-V3 achieved the highest accuracy (93% for rice and banana), but simpler models such as VGG-16 might be easier to understand. To address this trade-off, we employed Explainable AI (XAI) techniques (SHAP and Grad-CAM) to gain insights into model decision-making. This study emphasises the
46

Bae, Jae Kwon. "A Study on the Applicability of eXplainable Artificial Intelligence (XAI) Methodology by Industrial District." Academic Society of Global Business Administration 20, no. 2 (2023): 195–208. http://dx.doi.org/10.38115/asgba.2023.20.2.195.

Abstract:
The learning performance of artificial intelligence (AI) technologies such as machine learning and deep learning is approaching or surpassing that of humans, and humans are gaining new insights through hidden patterns and rules discovered by AI, but their delivery and explanatory power is in short supply. As the use of AI technology expands by industry, values such as transparency, fairness, and accountability are continuously required in addition to accuracy. Accordingly, the demand and necessity for eXplainable Artificial Intelligence (XAI) has recently been emphasized. XAI is an analysis mo
47

Kalyanathaya, Krishna P., and Krishna Prasad K. "A novel method for developing explainable machine learning framework using feature neutralization technique." Scientific Temper 15, no. 02 (2024): 2225–30. http://dx.doi.org/10.58414/scientifictemper.2024.15.2.35.

Abstract:
The rapid advancement of artificial intelligence (AI) has led to its widespread adoption across various domains. One of the most important challenges faced by AI adoption is to justify the outcome of the AI model. In response, explainable AI (XAI) has emerged as a critical area of research, aiming to enhance transparency and interpretability in AI systems. However, existing XAI methods face several challenges, such as complexity, difficulty in interpretation, limited applicability, and lack of transparency. In this paper, we discuss current challenges using SHAP and LIME metrics being popula
48

Senjoba, Lesego, Hajime Ikeda, Hisatoshi Toriya, Tsuyoshi Adachi, and Youhei Kawamura. "Enhancing Interpretability in Drill Bit Wear Analysis through Explainable Artificial Intelligence: A Grad-CAM Approach." Applied Sciences 14, no. 9 (2024): 3621. http://dx.doi.org/10.3390/app14093621.

Abstract:
This study introduces a novel method for analyzing vibration data related to drill bit failure. Our approach combines explainable artificial intelligence (XAI) with convolutional neural networks (CNNs). Conventional signal analysis methods, such as fast Fourier transform (FFT) and wavelet transform (WT), require extensive knowledge of drilling equipment specifications, which limits their adaptability to different conditions. In contrast, our method leverages XAI algorithms applied to CNNs to directly identify fault signatures from vibration signals. The signals are transformed into their frequ
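Grad-CAM, the method named in this title, weights a CNN's last convolutional feature maps by the pooled gradients of the target score. A hedged PyTorch sketch with a placeholder backbone and a random input standing in for the paper's vibration-signal models and spectrograms:

```python
# Hedged sketch: Grad-CAM via forward/backward hooks on the last conv layer.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()       # placeholder CNN, not the paper's
acts, grads = {}, {}
layer = model.layer4[-1].conv2               # last conv layer of the backbone

layer.register_forward_hook(lambda m, i, o: acts.update(a=o.detach()))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0].detach()))

x = torch.randn(1, 3, 224, 224)              # stand-in "spectrogram" image
score = model(x)[0].max()                    # top-class logit
score.backward()

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.squeeze().shape)                   # (224, 224) heatmap over the input
```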
49

Rahman, Mashfiquer, Shafiq Ullah, Sharmin Nahar, Mohammad Shahadat Hossain, Mostafizur Rahman, and Mostafijur Rahman. "The Role of Explainable AI in cyber threat intelligence: Enhancing transparency and trust in security systems." World Journal of Advanced Research and Reviews 23, no. 2 (2024): 2897–907. https://doi.org/10.30574/wjarr.2024.23.2.2404.

Abstract:
XAI technology transforms cybersecurity by enabling transparent, secure systems that gain users' trust in AI threat information processes. This research examines how XAI improves cybersecurity systems through CTI by enhancing security models' interpretability and decision-making capabilities based on AI algorithms. The research evaluates how XAI addresses trust problems in typical AI systems because of their "black box" operation. Security frameworks with XAI components enhance user reliability and defensive quality by improving detection methods and response capabilities. Experts have confirm