To see other types of publications on this topic, follow this link: Post-hoc explainabil.

Journal articles on the topic "Post-hoc explainabil"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles


Consult the 50 best journal articles for your research on the topic "Post-hoc explainabil".

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

de-la-Rica-Escudero, Alejandra, Eduardo C. Garrido-Merchán, and María Coronado-Vaca. "Explainable post hoc portfolio management financial policy of a Deep Reinforcement Learning agent." PLOS ONE 20, no. 1 (2025): e0315528. https://doi.org/10.1371/journal.pone.0315528.

Full text
Abstract:
Financial portfolio management investment policies computed quantitatively by modern portfolio theory techniques like the Markowitz model rely on a set of assumptions that are not supported by data in high volatility markets such as the technological sector or cryptocurrencies. Hence, quantitative researchers are looking for alternative models to tackle this problem. Concretely, portfolio management (PM) is a problem that has been successfully addressed recently by Deep Reinforcement Learning (DRL) approaches. In particular, DRL algorithms train an agent by estimating the distribution of the e
2

Viswan, Vimbi, Shaffi Noushath, and Mahmud Mufti. "Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection." Brain Informatics 11 (April 5, 2024): A10. https://doi.org/10.1186/s40708-024-00222-1.

Full text
Abstract:
Explainable artificial intelligence (XAI) has gained much interest in recent years for its ability to explain the complex decision-making process of machine learning (ML) and deep learning (DL) models. The Local Interpretable Model-agnostic Explanations (LIME) and Shaply Additive exPlanation (SHAP) frameworks have grown as popular interpretive tools for ML and DL models. This article provides a systematic review of the application of LIME and SHAP in interpreting the detection of Alzheimer’s disease (AD). Adhering to PRISMA and Kitchenham’s guidelines, we identified 23 relevant art
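As a quick orientation to the two frameworks this review surveys, here is a minimal sketch of applying SHAP and LIME post hoc to a generic tabular classifier. It is illustrative only and assumes the shap and lime packages plus a scikit-learn random forest on the built-in breast-cancer dataset, not the Alzheimer's detection models covered by the paper.

```python
# Minimal post-hoc attribution sketch with SHAP and LIME on a toy tabular
# classifier (illustrative only; not the data or models of the cited review).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])            # per-class attributions

# LIME: fits a local surrogate model around a single instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), class_names=list(data.target_names))
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())                             # top weighted features
```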
3

Alvanpour, Aneseh, Cagla Acun, Kyle Spurlock, et al. "Comparative Analysis of Post Hoc Explainable Methods for Robotic Grasp Failure Prediction." Electronics 14, no. 9 (2025): 1868. https://doi.org/10.3390/electronics14091868.

Full text
Abstract:
In human–robot collaborative environments, predicting and explaining robotic grasp failures is crucial for effective operation. While machine learning models can predict failures accurately, they often lack transparency, limiting their utility in critical applications. This paper presents a comparative analysis of three post hoc explanation methods—Tree-SHAP, LIME, and TreeInterpreter—for explaining grasp failure predictions from white-box and black-box models. Using a simulated robotic grasping dataset, we evaluate these methods based on their agreement in identifying important features, simi
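Because the comparison above hinges on how strongly different attribution methods agree on important features, the sketch below shows two common agreement measures, Spearman rank correlation and top-k overlap, computed on hypothetical attribution vectors rather than the paper's actual outputs.

```python
# Minimal sketch of agreement metrics between two attribution vectors
# (hypothetical numbers, not the evaluation protocol of the cited paper).
import numpy as np
from scipy.stats import spearmanr

shap_attr = np.array([0.42, 0.10, 0.31, 0.05, 0.12])   # e.g. Tree-SHAP magnitudes
lime_attr = np.array([0.38, 0.15, 0.25, 0.02, 0.20])   # e.g. LIME weight magnitudes

rho, _ = spearmanr(shap_attr, lime_attr)                # rank correlation
k = 3
top_shap = set(np.argsort(-shap_attr)[:k])              # indices of top-k features
top_lime = set(np.argsort(-lime_attr)[:k])
jaccard = len(top_shap & top_lime) / len(top_shap | top_lime)
print(f"Spearman rho = {rho:.2f}, top-{k} Jaccard overlap = {jaccard:.2f}")
```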
4

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Full text
Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relations
5

Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.

Full text
Abstract:
The massive usage of Internet services nowadays has led to a drastic increase in cyberattacks, including sophisticated techniques, so that Intrusion Detection Systems (IDSs) need to use AI technologies to enhance their effectiveness. However, this has resulted in a lack of interpretability and explainability from different applications that use AI predictions, making it hard to understand by cybersecurity operators why decisions were made. To address this, the concept of Explainable AI (XAI) has been introduced to make the AI’s decisions more understandable at both global and local levels. Thi
6

Metsch, Jacqueline Michelle, and Anne-Christin Hauschild. "BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data." Computers in Biology and Medicine 191 (June 2025): 110124. https://doi.org/10.1016/j.compbiomed.2025.110124.

Full text
7

Arjunan, Gopalakrishnan. "Implementing Explainable AI in Healthcare: Techniques for Interpretable Machine Learning Models in Clinical Decision-Making." International Journal of Scientific Research and Management (IJSRM) 9, no. 05 (2021): 597–603. http://dx.doi.org/10.18535/ijsrm/v9i05.ec03.

Full text
Abstract:
The integration of explainable artificial intelligence (XAI) in healthcare is revolutionizing clinical decision-making by providing clarity around complex machine learning (ML) models. As AI becomes increasingly critical in medical fields—ranging from diagnostics to treatment personalization—the interpretability of these models is crucial for fostering trust, transparency, and accountability among healthcare providers and patients. Traditional "black-box" models, such as deep neural networks, often achieve high accuracy but lack transparency, creating challenges in highly regulated, high-stake
8

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predic
9

Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.

Full text
Abstract:
Abstract Medical imaging plays a pivotal role in disease detection and intervention. The black-box nature of deep learning models, such as YOLOv8, creates challenges in interpreting their decisions. This paper presents a toolset to enhance interpretability in AI based diagnostics by integrating Explainable AI (XAI) techniques with YOLOv8. This paper explores implementation of post hoc methods, including Grad-CAM and Eigen CAM, to assist end users in understanding the decision making of the model. This comprehensive evaluation utilises CT-Datasets, demonstrating the efficacy of YOLOv8 for objec
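To make the post hoc CAM idea concrete, the sketch below computes a Grad-CAM heatmap with forward and backward hooks on a plain torchvision ResNet-18 and random input; it is a stand-in under stated assumptions, not the YOLOv8 + Eigen-CAM toolset described in the paper.

```python
# Minimal Grad-CAM sketch on a generic CNN (a torchvision ResNet-18 with random
# input), not the YOLOv8 pipeline of the cited paper.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()          # feature maps of the target layer

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()    # gradients w.r.t. those maps

target_layer = model.layer4[-1]                  # last residual block
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                  # stand-in for a CT slice
scores = model(x)
scores[0, scores.argmax()].backward()            # gradient of the top-scoring class

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # GAP over H and W
cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted feature sum
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
```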
10

Yang, Huijin, Seon Ha Baek, and Sejoong Kim. "Explainable Prediction of Overcorrection in Severe Hyponatremia: A Post Hoc Analysis of the SALSA Trial." Journal of the American Society of Nephrology 32, no. 10S (2021): 377. http://dx.doi.org/10.1681/asn.20213210s1377b.

Full text
11

Zhang, Xiaopu, Wubing Miao, and Guodong Liu. "Explainable Data Mining Framework of Identifying Root Causes of Rocket Engine Anomalies Based on Knowledge and Physics-Informed Feature Selection." Machines 13, no. 8 (2025): 640. https://doi.org/10.3390/machines13080640.

Full text
Abstract:
Liquid rocket engines occasionally experience abnormal phenomena with unclear mechanisms, causing difficulty in design improvements. To address the above issue, a data mining method that combines ante hoc explainability, post hoc explainability, and prediction accuracy is proposed. For ante hoc explainability, a feature selection method driven by data, models, and domain knowledge is established. Global sensitivity analysis of a physical model combined with expert knowledge and data correlation is utilized to establish the correlations between different types of parameters. Then a two-stage op
12

Yan, Fei, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao. "An Explainable Brain Tumor Detection Framework for MRI Analysis." Applied Sciences 13, no. 6 (2023): 3438. http://dx.doi.org/10.3390/app13063438.

Full text
Abstract:
Explainability in medical images analysis plays an important role in the accurate diagnosis and treatment of tumors, which can help medical professionals better understand the images analysis results based on deep models. This paper proposes an explainable brain tumor detection framework that can complete the tasks of segmentation, classification, and explainability. The re-parameterization method is applied to our classification network, and the effect of explainable heatmaps is improved by modifying the network architecture. Our classification model also has the advantage of post-hoc explain
13

Ayaz, Hamail, Esra Sümer-Arpak, Esin Ozturk-Isik, et al. "Post-hoc eXplainable AI methods for analyzing medical images of gliomas — A review for clinical applications." Computers in Biology and Medicine 196 (September 2025): 110649. https://doi.org/10.1016/j.compbiomed.2025.110649.

Full text
14

Amarasinghe, Kasun, Kit T. Rodolfa, Sérgio Jesus, et al. "On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (2024): 20921–29. http://dx.doi.org/10.1609/aaai.v38i19.30082.

Full text
Abstract:
Most existing evaluations of explainable machine learning (ML) methods rely on simplifying assumptions or proxies that do not reflect real-world use cases; the handful of more robust evaluations on real-world settings have shortcomings in their design, generally leading to overestimation of methods' real-world utility. In this work, we seek to address this by conducting a study that evaluates post-hoc explainable ML methods in a setting consistent with the application context and provide a template for future evaluation studies. We modify and improve a prior study on e-commerce fraud detection
15

Boppiniti, Sai Teja. "A Survey on Explainable AI: Techniques and Challenges." International Journal of Innovations in Engineering Research and Technology 7, no. 3 (2020): 57–66. http://dx.doi.org/10.26662/ijiert.v7i3.pp57-66.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) is a rapidly evolving field aimed at making AI systems more interpretable and transparent to human users. As AI technologies become increasingly integrated into critical sectors such as healthcare, finance, and autonomous systems, the need for explanations behind AI decisions has grown significantly. This survey provides a comprehensive review of XAI techniques, categorizing them into post-hoc and intrinsic methods, and examines their application in various domains. Additionally, the paper explores the major challenges in achieving explainability, incl
16

Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in critical domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks to enhance the interpretability of ML models, focusing on techniques like feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, this study highli
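One of the techniques named above, the surrogate model, is easy to demonstrate: fit an interpretable decision tree to a black-box model's predictions and report how faithfully it mimics them. The sketch below is a generic illustration on synthetic data assuming scikit-learn; it is not drawn from the cited study.

```python
# Minimal global-surrogate sketch: a shallow decision tree mimics a black-box
# classifier on synthetic data (illustrative assumption, not the cited study).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))                    # human-readable decision rules
```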
17

Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (2021): 3137. http://dx.doi.org/10.3390/math9233137.

Full text
Abstract:
Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS cla
18

Yu, Jinqiang, Alexey Ignatiev, Peter J. Stuckey, Nina Narodytska, and Joao Marques-Silva. "Eliminating the Impossible, Whatever Remains Must Be True: On Extracting and Applying Background Knowledge in the Context of Formal Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 4123–31. http://dx.doi.org/10.1609/aaai.v37i4.25528.

Full text
Abstract:
The rise of AI methods to make predictions and decisions has led to a pressing need for more explainable artificial intelligence (XAI) methods. One common approach for XAI is to produce a post-hoc explanation, explaining why a black box ML model made a certain prediction. Formal approaches to post-hoc explanations provide succinct reasons for why a prediction was made, as well as why not another prediction was made. But these approaches assume that features are independent and uniformly distributed. While this means that “why” explanations are correct, they may be longer than required. It also
19

Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.

Full text
Abstract:
Explainable AI has received significant attention in recent years. Machine learning models often operate as black boxes, lacking explainability and transparency while supporting decision-making processes. Local post-hoc explainability queries attempt to answer why individual inputs are classified in a certain way by a given model. While there has been important work on counterfactual explanations, less attention has been devoted to semifactual ones. In this paper, we focus on local post-hoc explainability queries within the semifactual `even-if' thinking and their computational complexity amon
20

Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "EXPLAIN IT TO ME – FACING REMOTE SENSING CHALLENGES IN THE BIO- AND GEOSCIENCES WITH EXPLAINABLE MACHINE LEARNING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.

Full text
Abstract:
Abstract. For some time now, machine learning methods have been indispensable in many application areas. Especially with the recent development of efficient neural networks, these methods are increasingly used in the sciences to obtain scientific outcomes from observational or simulated data. Besides a high accuracy, a desired goal is to learn explainable models. In order to reach this goal and obtain explanation, knowledge from the respective domain is necessary, which can be integrated into the model or applied post-hoc. We discuss explainable machine learning approaches which are used to ta
21

Boggess, Kayla. "Explanations for Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29245–46. https://doi.org/10.1609/aaai.v39i28.35200.

Full text
Abstract:
Explainable reinforcement learning (xRL) provides explanations for ``black-box" decision making systems. However, most work in xRL is based on single-agent settings instead of the more complex multi-agent reinforcement learning (MARL). Several different types of post-hoc explanations must be provided to increase understanding of both centralized and decentralized MARL systems. For centralized MARL, this research develops methods to generate global policy summaries, query-based explanations, and temporal explanations. For decentralized MARL, this research develops global policy summaries and qu
22

Nogueira, Caio, Luís Fernandes, João N. D. Fernandes, and Jaime S. Cardoso. "Explaining Bounding Boxes in Deep Object Detectors Using Post Hoc Methods for Autonomous Driving Systems." Sensors 24, no. 2 (2024): 516. http://dx.doi.org/10.3390/s24020516.

Full text
Abstract:
Deep learning has rapidly increased in popularity, leading to the development of perception solutions for autonomous driving. The latter field leverages techniques developed for computer vision in other domains for accomplishing perception tasks such as object detection. However, the black-box nature of deep neural models and the complexity of the autonomous driving context motivates the study of explainability in these models that perform perception tasks. Moreover, this work explores explainable AI techniques for the object detection task in the context of autonomous driving. An extensive an
23

O'Loughlin, Ryan J., Dan Li, Richard Neale, and Travis A. O'Brien. "Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling." Geoscientific Model Development 18, no. 3 (2025): 787–802. https://doi.org/10.5194/gmd-18-787-2025.

Full text
Abstract:
Abstract. AI models are criticized as being black boxes, potentially subjecting climate science to greater uncertainty. Explainable artificial intelligence (XAI) has been proposed to probe AI models and increase trust. In this review and perspective paper, we suggest that, in addition to using XAI methods, AI researchers in climate science can learn from past successes in the development of physics-based dynamical climate models. Dynamical models are complex but have gained trust because their successes and failures can sometimes be attributed to specific components or sub-models, such as when
24

Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (2020): 137. http://dx.doi.org/10.3390/info11030137.

Full text
Abstract:
This manuscript outlines a viable approach for training and evaluating machine learning systems for high-stakes, human-centered, or regulated applications using common Python programming tools. The accuracy and intrinsic interpretability of two types of constrained models, monotonic gradient boosting machines and explainable neural networks, a deep learning architecture well-suited for structured data, are assessed on simulated data and publicly available mortgage data. For maximum transparency and the potential generation of personalized adverse action notices, the constrained models are anal
25

Gadzinski, Gregory, and Alessio Castello. "Combining white box models, black box machines and human interventions for interpretable decision strategies." Judgment and Decision Making 17, no. 3 (2022): 598–627. http://dx.doi.org/10.1017/s1930297500003594.

Full text
Abstract:
Granting a short-term loan is a critical decision. A great deal of research has concerned the prediction of credit default, notably through Machine Learning (ML) algorithms. However, given that their black-box nature has sometimes led to unwanted outcomes, comprehensibility in ML guided decision-making strategies has become more important. In many domains, transparency and accountability are no longer optional. In this article, instead of opposing white-box against black-box models, we use a multi-step procedure that combines the Fast and Frugal Tree (FFT) methodology of Martignon et a
26

Wyatt, Lucie S., Lennard M. van Karnenbeek, Mark Wijkhuizen, Freija Geldof, and Behdad Dashtbozorg. "Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review." Applied Sciences 14, no. 18 (2024): 8108. http://dx.doi.org/10.3390/app14188108.

Full text
Abstract:
This review provides an overview of explainable AI (XAI) methods for oncological ultrasound image analysis and compares their performance evaluations. A systematic search of Medline Embase and Scopus between 25 March and 14 April 2024 identified 17 studies describing 14 XAI methods, including visualization, semantics, example-based, and hybrid functions. These methods primarily provided specific, local, and post hoc explanations. Performance evaluations focused on AI model performance, with limited assessment of explainability impact. Standardized evaluations incorporating clinical end-users a
27

Baek, Myunghun, Taeho An, Seungkuk Kuk, and Kyongtae Park. "14‐3: Drop Resistance Optimization Through Post‐Hoc Analysis of Chemically Strengthened Glass." SID Symposium Digest of Technical Papers 54, no. 1 (2023): 174–77. http://dx.doi.org/10.1002/sdtp.16517.

Full text
Abstract:
In the market of the mobile cover glass, the development of chemically strengthened glass is focused on the drop resistance improvement. The cover glass which protects the display panel is a typical brittle material that the micro cracks tends to be occurred underneath a glass surface by physical impacts. The micro cracks tends to be propagated by tensile stress and it is known as a general procedures on glass breakage. In purpose of the cover glass strength improvement, compressive stress is applied to the glass surface using chemical strengthening method by ion exchange. However, since the c
28

Pillai, Vinayak. "Enhancing the transparency of data and ML models using explainable AI (XAI)." World Journal of Advanced Engineering Technology and Sciences 13, no. 1 (2024): 397–406. http://dx.doi.org/10.30574/wjaets.2024.13.1.0428.

Full text
Abstract:
To this end, this paper focuses on the increasing demand for the explainability of Machine Learning (ML) models especially in environments where these models are employed to make critical decisions such as in healthcare, finance, and law. Although the typical ML models are considered opaque, XAI provides a set of ways and means to propose making these models more transparent and, thus, easier to explain. This paper describes and analyzes the model-agnostic approach, method of intrinsic explanation, post-hoc explanation, and visualization instruments and demonstrates the use of XAI in various f
29

Kadioglu, Serdar, Elton Yechao Zhu, Gili Rosenberg, et al. "BoolXAI: Explainable AI Using Expressive Boolean Formulas." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 28900–28906. https://doi.org/10.1609/aaai.v39i28.35157.

Full text
Abstract:
In this tool paper, we design, develop, and release BoolXAI, an interpretable machine learning classification approach for Explainable AI (XAI) based on expressive Boolean formulas. The Boolean formula defines a logical rule with tunable complexity according to which input data are classified. Beyond the classical conjunction and disjunction, BoolXAI offers expressive operators such as AtLeast, AtMost, and Choose and their parameterization. This provides higher expressiveness compared to rigid rules- and tree-based approaches. We show how to train BoolXAI classifiers effectively using native l
30

Shen, Yifan, Li Liu, Zhihao Tang, et al. "Explainable Survival Analysis with Convolution-Involved Vision Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2207–15. http://dx.doi.org/10.1609/aaai.v36i2.20118.

Full text
Abstract:
Image-based survival prediction models can facilitate doctors in diagnosing and treating cancer patients. With the advance of digital pathology technologies, the big whole slide images (WSIs) provide increasing resolution and more details for diagnosis. However, the gigabyte-size WSIs would make most models computationally infeasible. To this end, instead of using the complete WSIs, most of existing models only use a pre-selected subset of key patches or patch clusters as input, which might fail to completely capture the patient's tumor morphology. In this work, we aim to develop a novel survi
31

Gunasekara, Sachini, and Mirka Saarela. "Explainable AI in Education: Techniques and Qualitative Assessment." Applied Sciences 15, no. 3 (2025): 1239. https://doi.org/10.3390/app15031239.

Full text
Abstract:
Many of the articles on AI in education compare the performance and fairness of different models, but few specifically focus on quantitatively analyzing their explainability. To bridge this gap, we analyzed key evaluation metrics for two machine learning models—ANN and DT—with a focus on their performance and explainability in predicting student outcomes using the OULAD. The methodology involved evaluating the DT, an intrinsically explainable model, against the more complex ANN, which requires post hoc explainability techniques. The results show that, although the feature-based and structured
32

Grozdanovski, Ljupcho. "THE EXPLANATIONS ONE NEEDS FOR THE EXPLANATIONS ONE GIVES—THE NECESSITY OF EXPLAINABLE AI (XAI) FOR CAUSAL EXPLANATIONS OF AI-RELATED HARM: DECONSTRUCTING THE ‘REFUGE OF IGNORANCE’ IN THE EU’S AI LIABILITY REGULATION." International Journal of Law, Ethics, and Technology 2024, no. 2 (2024): 155–262. http://dx.doi.org/10.55574/tqcg5204.

Full text
Abstract:
This paper examines how explanations related to the adverse outcomes of Artificial Intelligence (AI) contribute to the development of causal evidentiary explanations in disputes surrounding AI liability. The study employs a dual approach: first, it analyzes the emerging global caselaw in the field of AI liability, seeking to discern prevailing trends regarding the evidence and explanations considered essential for the fair resolution of disputes. Against the backdrop of those trends, the paper evaluates the upcoming legislation in the European Union (EU) concerning AI liability, namely the AI
33

Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (2024): 1797. http://dx.doi.org/10.3390/en17081797.

Full text
Abstract:
The prediction of building energy consumption is beneficial to utility companies, users, and facility managers to reduce energy waste. However, due to various drawbacks of prediction algorithms, such as, non-transparent output, ad hoc explanation by post hoc tools, low accuracy, and the inability to deal with data uncertainties, such prediction has limited applicability in this domain. As a result, domain knowledge-based explainability with high accuracy is critical for making energy predictions trustworthy. Motivated by this, we propose an advanced explainable Belief Rule-Based Expert System
34

Bourgais, Mathieu, Franco Giustozzi, Laurent Vercouter, and Cecilia Zanni-Merk. "Towards the use of post-hoc explainable methods to define and detect semantic situations of importance in medical data." Procedia Computer Science 225 (2023): 2312–21. http://dx.doi.org/10.1016/j.procs.2023.10.222.

Full text
35

Zdravkovic, Milan. "On the global feature importance for interpretable and trustworthy heat demand forecasting." Thermal Science, no. 00 (2025): 48. https://doi.org/10.2298/tsci241223048z.

Full text
Abstract:
The paper introduces the Explainable AI methodology to assess the global feature importance of the Machine Learning models used for heat demand forecasting in intelligent control of District Heating Systems (DHS), with motivation to facilitate their interpretability and trustworthiness, hence addressing the challenges related to adherence to communal standards, customer satisfaction and liability risks. Methodology involves generation of global feature importance insights by using four different approaches, namely intrinsic (ante-hoc) interpretability of Gradient Boosting method and selected
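For readers who want a concrete starting point, the sketch below computes one model-agnostic global feature-importance measure, permutation importance, with scikit-learn on synthetic regression data; it only illustrates the general idea and does not reproduce the four approaches or the district-heating data used in the paper.

```python
# Minimal sketch of post-hoc global feature importance via permutation
# importance on synthetic data (not the district-heating dataset of the paper).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=8, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on held-out data and measure the drop in R^2.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```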
36

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, et al. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (2022): 7375. http://dx.doi.org/10.3390/su14127375.

Full text
Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Ex
37

Mikołajczyk, Agnieszka, Michał Grochowski, and Arkadiusz Kwasigroch. "Towards Explainable Classifiers Using the Counterfactual Approach - Global Explanations for Discovering Bias in Data." Journal of Artificial Intelligence and Soft Computing Research 11, no. 1 (2021): 51–67. http://dx.doi.org/10.2478/jaiscr-2021-0004.

Full text
Abstract:
The paper proposes summarized attribution-based post-hoc explanations for the detection and identification of bias in data. A global explanation is proposed, and a step-by-step framework on how to detect and test bias is introduced. Since removing unwanted bias is often a complicated and tremendous task, it is automatically inserted, instead. Then, the bias is evaluated with the proposed counterfactual approach. The obtained results are validated on a sample skin lesion dataset. Using the proposed method, a number of possible bias-causing artifacts are successfully identified and confi
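The counterfactual evaluation mentioned above rests on a simple primitive: perturb an input until the model's decision flips. The sketch below shows that primitive on a toy logistic-regression model with synthetic data; the feature index, step size, and search budget are illustrative assumptions, not the paper's skin-lesion pipeline.

```python
# Minimal counterfactual-search sketch: nudge one feature until the prediction
# flips (toy logistic model on synthetic data, not the cited bias framework).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual(x, feature, step=0.05, max_steps=200):
    """Increase one feature until the predicted class changes, if it ever does."""
    original = model.predict(x.reshape(1, -1))[0]
    cf = x.copy()
    for _ in range(max_steps):
        cf[feature] += step
        if model.predict(cf.reshape(1, -1))[0] != original:
            return cf
    return None                                  # no flip within the budget

x0 = X[0]
cf = counterfactual(x0, feature=2)
if cf is not None:
    print(f"Decision flips when feature 2 moves from {x0[2]:.2f} to {cf[2]:.2f}")
```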
38

Aryal, Saugat. "Semi-factual Explanations in AI." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23379–80. http://dx.doi.org/10.1609/aaai.v38i21.30390.

Full text
Abstract:
Most of the recent works on post-hoc example-based eXplainable AI (XAI) methods revolves around employing counterfactual explanations to provide justification of the predictions made by AI systems. Counterfactuals show what changes to the input-features change the output decision. However, a lesser-known, special-case of the counterfacual is the semi-factual, which provide explanations about what changes to the input-features do not change the output decision. Semi-factuals are potentially as useful as counterfactuals but have received little attention in the XAI literature. My doctoral resear
39

Aramide, Oluwatosin Oladayo. "Explainable AI (XAI) for Network Operations and Troubleshooting." International Journal for Research Publication and Seminar 16, no. 1 (2025): 533–54. https://doi.org/10.36676/jrps.v16.i1.286.

Full text
Abstract:
With the advancement of 5G, Software-Defined Networking (SDN), and Network Function Virtualization (NFV) technics, which enable modern network infrastructures to become more and more complex and dynamic, the rule-based controller-based management systems are becoming insufficient to pinpoint the faults in real-time, to determine the root cause, and to conduct fine-grain optimization according to the needs of users. Artificial Intelligence (AI) especially Machine Learning (ML) has become an efficient means of automating the network. Nonetheless, the secrecy of most AI models simply acts as a se
40

Pendyala, Vishnu S., Neha Bais Thakur, and Radhika Agarwal. "Explainable Use of Foundation Models for Job Hiring." Electronics 14, no. 14 (2025): 2787. https://doi.org/10.3390/electronics14142787.

Full text
Abstract:
Automating candidate shortlisting is a non-trivial task that stands to benefit substantially from advances in artificial intelligence. We evaluate a suite of foundation models such as Llama 2, Llama 3, Mixtral, Gemma-2b, Gemma-7b, Phi-3 Small, Phi-3 Mini, Zephyr, and Mistral-7b for their ability to predict hiring outcomes in both zero-shot and few-shot settings. Using only features extracted from applicants’ submissions, these models, on average, achieved an AUC above 0.5 in zero-shot settings. Providing a few examples similar to the job applicants based on a nearest neighbor search improved t
41

Ranjith Gopalan, Dileesh Onniyil, Ganesh Viswanathan, and Gaurav Samdani. "Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1388–402. https://doi.org/10.30574/wjaets.2025.15.2.0635.

Full text
Abstract:
The rapid advancements in artificial intelligence and machine learning have led to the development of highly sophisticated models capable of superhuman performance in a variety of tasks. However, the increasing complexity of these models has also resulted in them becoming "black boxes", where the internal decision-making process is opaque and difficult to interpret. This lack of transparency and explainability has become a significant barrier to the widespread adoption of these models, particularly in sensitive domains such as healthcare and finance. To address this challenge, the field of Exp
42

한, 애라. "민사소송에서의 AI 알고리즘 심사" [Review of AI Algorithms in Civil Litigation]. Korea Association of the Law of Civil Procedure 27, no. 1 (2023): 185–233. http://dx.doi.org/10.30639/cp.2023.2.27.1.185.

Full text
Abstract:
Automated decision-making by AI algorithms is increasingly likely to cause civil liability. However, AI algorithms based on machine learning techniques are less explainable due to technical inscrutability arising from the nature of the learning methods itself, legal opacity due to protection of trade secrets or intellectual property rights, or incomprehensibility of the general public or judges due to the complexity and counterintuitiveness of algorithms, thus making its judicial review difficult. When the mechanism of an AI algorithm is at issue in a lawsuit, the question is how an opaque AI
43

Knapič, Samanta, Avleen Malhi, Rohit Saluja, and Kary Främling. "Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 740–70. http://dx.doi.org/10.3390/make3030037.

Full text
Abstract:
In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, c
44

Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.

Full text
Abstract:
Sarcasm detection in dialogues has been gaining popularity among natural language processing (NLP) researchers with the increased use of conversational threads on social media. Capturing the knowledge of the domain of discourse, context propagation during the course of dialogue, and situational context and tone of the speaker are some important features to train the machine learning models for detecting sarcasm in real time. As situational comedies vibrantly represent human mannerism and behaviour in everyday real-life situations, this research demonstrates the use of an ensemble supervised le
45

Li, Lu, Jiale Liu, Xingyu Ji, Maojun Wang, and Zeyu Zhang. "Self-Explainable Graph Transformer for Link Sign Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12084–92. https://doi.org/10.1609/aaai.v39i11.33316.

Full text
Abstract:
Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing complex patterns in real-world situations where positive and negative links coexist. However, SGNN models suffer from poor explainability, which limit their adoptions in critical scenarios that require understanding the rationale behind predictions. To the best of our knowledge, there is currently no research work on the explainability of the SGNN models. Our goal is to address the explainability of decision-making for the downstream task of link sign prediction specific to signed graph neural networks. Since pos
46

Madhukar E. "Multi-Level Feature Selection and Transfer Learning Framework for Scalable and Explainable Machine Learning Systems in Real-Time Applications." Journal of Information Systems Engineering and Management 10, no. 46s (2025): 1091–101. https://doi.org/10.52783/jisem.v10i46s.9242.

Full text
Abstract:
Rapid advances in data-intensive real-time applications (e.g., IoT monitoring, autonomous systems) have heightened the need for machine learning (ML) solutions that are both scalable and explainable. Real-time systems demand low-latency inference on streaming data while ensuring model interpretability for trust and compliance. In this work, we propose a novel multi-level feature selection and transfer learning framework designed to address these challenges. Our framework integrates filter, wrapper, and embedded feature selection stages to reduce dimensionality and improve model efficiency, fol
47

Methuku, Vijayalaxmi, Sharath Chandra Kondaparthy, and Direesh Reddy Aunugu. "Explainability and Transparency in Artificial Intelligence: Ethical Imperatives and Practical Challenges." International Journal of Electrical, Electronics and Computers 8, no. 3 (2023): 7–12. https://doi.org/10.22161/eec.84.2.

Full text
Abstract:
Artificial Intelligence (AI) is increasingly embedded in high-stakes domains such as healthcare, finance, and law enforcement, where opaque decision-making raises significant ethical concerns. Among the core challenges in AI ethics are explainability and transparency—key to fostering trust, accountability, and fairness in algorithmic systems. This review explores the ethical foundations of explainable AI (XAI), surveys leading technical approaches such as model-agnostic interpretability techniques and post-hoc explanation methods and examines their inherent limitations and trade-offs. A real-w
48

Gaurav, Kashyap. "Explainable AI (XAI): Methods and Techniques to Make Deep Learning Models More Interpretable and Their Real-World Implications." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 11, no. 4 (2023): 1–7. https://doi.org/10.5281/zenodo.14382747.

Full text
Abstract:
The goal of the developing field of explainable artificial intelligence (XAI) is to make complex AI models, especially deep learning (DL) models, which are frequently criticized for being "black boxes" more interpretable. Understanding how deep learning models make decisions is becoming crucial for accountability, fairness, and trust as deep learning is used more and more in various industries. This paper offers a thorough analysis of the strategies and tactics used to improve the interpretability of deep learning models, including hybrid approaches, post-hoc explanations, and model-specific s
49

Abdelaal, Yasmin, Michaël Aupetit, Abdelkader Baggag, and Dena Al-Thani. "Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review." Journal of Medical Internet Research 26 (December 24, 2024): e53863. https://doi.org/10.2196/53863.

Full text
Abstract:
Background Wearable technologies have become increasingly prominent in health care. However, intricate machine learning and deep learning algorithms often lead to the development of “black box” models, which lack transparency and comprehensibility for medical professionals and end users. In this context, the integration of explainable artificial intelligence (XAI) has emerged as a crucial solution. By providing insights into the inner workings of complex algorithms, XAI aims to foster trust and empower stakeholders to use wearable technologies responsibly. Objective This paper aims to review t
50

Jeon, Minseok, Jihyeok Park, and Hakjoo Oh. "PL4XGL: A Programming Language Approach to Explainable Graph Learning." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 2148–73. http://dx.doi.org/10.1145/3656464.

Full text
Abstract:
In this article, we present a new, language-based approach to explainable graph learning. Though graph neural networks (GNNs) have shown impressive performance in various graph learning tasks, they have severe limitations in explainability, hindering their use in decision-critical applications. To address these limitations, several GNN explanation techniques have been proposed using a post-hoc explanation approach providing subgraphs as explanations for classification results. Unfortunately, however, they have two fundamental drawbacks in terms of 1) additional explanation costs and 2) the correc