Academic literature on the topic 'Post-hoc explainability'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Post-hoc explainability.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Post-hoc explainability"

1

de-la-Rica-Escudero, Alejandra, Eduardo C. Garrido-Merchán, and María Coronado-Vaca. "Explainable post hoc portfolio management financial policy of a Deep Reinforcement Learning agent." PLOS ONE 20, no. 1 (2025): e0315528. https://doi.org/10.1371/journal.pone.0315528.

Abstract:
Financial portfolio management investment policies computed quantitatively by modern portfolio theory techniques like the Markowitz model rely on a set of assumptions that are not supported by data in high volatility markets such as the technological sector or cryptocurrencies. Hence, quantitative researchers are looking for alternative models to tackle this problem. Concretely, portfolio management (PM) is a problem that has been successfully addressed recently by Deep Reinforcement Learning (DRL) approaches. In particular, DRL algorithms train an agent by estimating the distribution of the e…
2

Viswan, Vimbi, Shaffi Noushath, and Mahmud Mufti. "Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection." Brain Informatics 11 (April 5, 2024): A10. https://doi.org/10.1186/s40708-024-00222-1.

Abstract:
Explainable artificial intelligence (XAI) has gained much interest in recent years for its ability to explain the complex decision-making process of machine learning (ML) and deep learning (DL) models. The Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) frameworks have grown into popular interpretive tools for ML and DL models. This article provides a systematic review of the application of LIME and SHAP in interpreting the detection of Alzheimer’s disease (AD). Adhering to PRISMA and Kitchenham’s guidelines, we identified 23 relevant art…
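
For orientation, here is a minimal sketch of how the two frameworks this review surveys are typically invoked on a tabular classifier. The dataset, model, and parameter choices below are illustrative placeholders, not taken from the reviewed studies:

```python
# Hypothetical example: applying SHAP and LIME to a tabular classifier.
# Dataset, model, and parameters are illustrative assumptions only.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: additive feature attributions; TreeExplainer is exact for tree ensembles.
# The layout of the result (per-class list vs. 3-D array) varies across shap releases.
shap_values = shap.TreeExplainer(model).shap_values(X[:1])

# LIME: fits a sparse local surrogate model around the single instance to explain.
lime_explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), mode="classification"
)
explanation = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top-5 local feature contributions
```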
3

Alvanpour, Aneseh, Cagla Acun, Kyle Spurlock, et al. "Comparative Analysis of Post Hoc Explainable Methods for Robotic Grasp Failure Prediction." Electronics 14, no. 9 (2025): 1868. https://doi.org/10.3390/electronics14091868.

Abstract:
In human–robot collaborative environments, predicting and explaining robotic grasp failures is crucial for effective operation. While machine learning models can predict failures accurately, they often lack transparency, limiting their utility in critical applications. This paper presents a comparative analysis of three post hoc explanation methods—Tree-SHAP, LIME, and TreeInterpreter—for explaining grasp failure predictions from white-box and black-box models. Using a simulated robotic grasping dataset, we evaluate these methods based on their agreement in identifying important features, simi…
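
A rough sketch of the kind of agreement analysis the paper describes: rank-correlating per-feature attributions from Tree-SHAP and TreeInterpreter for one instance. The model, data, and the Spearman criterion here are assumptions for illustration, not the paper's robotic grasping setup:

```python
# Hypothetical agreement check between two post hoc explainers on one instance.
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
x = X[:1]

sv = shap.TreeExplainer(model).shap_values(x)
# Depending on the shap release, `sv` is a per-class list or one 3-D array.
shap_attr = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]

# TreeInterpreter decomposes the prediction into bias + per-feature contributions.
_, _, contributions = ti.predict(model, x)
ti_attr = contributions[0][:, 1]  # contributions toward class 1

rho, _ = spearmanr(shap_attr, ti_attr)
print(f"Spearman rank agreement between Tree-SHAP and TreeInterpreter: {rho:.2f}")
```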
4

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relations…
5

Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.

Abstract:
The massive usage of Internet services nowadays has led to a drastic increase in cyberattacks, including sophisticated techniques, so that Intrusion Detection Systems (IDSs) need to use AI technologies to enhance their effectiveness. However, this has resulted in a lack of interpretability and explainability in the different applications that use AI predictions, making it hard for cybersecurity operators to understand why decisions were made. To address this, the concept of Explainable AI (XAI) has been introduced to make the AI’s decisions more understandable at both global and local levels. Thi…
6

Metsch, Jacqueline Michelle, and Anne-Christin Hauschild. "BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data." Computers in Biology and Medicine 191 (June 2025): 110124. https://doi.org/10.1016/j.compbiomed.2025.110124.

7

Arjunan, Gopalakrishnan. "Implementing Explainable AI in Healthcare: Techniques for Interpretable Machine Learning Models in Clinical Decision-Making." International Journal of Scientific Research and Management (IJSRM) 9, no. 05 (2021): 597–603. http://dx.doi.org/10.18535/ijsrm/v9i05.ec03.

Abstract:
The integration of explainable artificial intelligence (XAI) in healthcare is revolutionizing clinical decision-making by providing clarity around complex machine learning (ML) models. As AI becomes increasingly critical in medical fields—ranging from diagnostics to treatment personalization—the interpretability of these models is crucial for fostering trust, transparency, and accountability among healthcare providers and patients. Traditional "black-box" models, such as deep neural networks, often achieve high accuracy but lack transparency, creating challenges in highly regulated, high-stake…
8

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predic…
9

Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.

Abstract:
Medical imaging plays a pivotal role in disease detection and intervention. The black-box nature of deep learning models, such as YOLOv8, creates challenges in interpreting their decisions. This paper presents a toolset to enhance interpretability in AI-based diagnostics by integrating Explainable AI (XAI) techniques with YOLOv8. It explores the implementation of post hoc methods, including Grad-CAM and Eigen-CAM, to assist end users in understanding the decision making of the model. This comprehensive evaluation utilises CT datasets, demonstrating the efficacy of YOLOv8 for objec…
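
As background for the post hoc methods named here, a minimal Grad-CAM sketch on a generic CNN; this is not the paper's YOLOv8 pipeline, and the backbone, target layer, and random input are placeholder assumptions:

```python
# Generic Grad-CAM sketch: pool the gradients of the target score over the last
# conv layer's activations, weight the activations by them, and ReLU the sum.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # randomly initialised stand-in network
store = {}
layer = model.layer4[-1]  # last conv block, an illustrative layer choice
layer.register_forward_hook(lambda m, i, o: store.update(act=o))
layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

img = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed scan image
model(img)[0].max().backward()     # backprop the top-class score

weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
cam = F.relu((weights * store["act"]).sum(dim=1))       # coarse saliency map
cam = F.interpolate(cam.unsqueeze(1), size=img.shape[-2:], mode="bilinear")
```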
10

Yang, Huijin, Seon Ha Baek, and Sejoong Kim. "Explainable Prediction of Overcorrection in Severe Hyponatremia: A Post Hoc Analysis of the SALSA Trial." Journal of the American Society of Nephrology 32, no. 10S (2021): 377. http://dx.doi.org/10.1681/asn.20213210s1377b.


Dissertations / Theses on the topic "Post-hoc explainability"

1

Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.

Abstract:
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors, becoming more complex and less comprehensible to end users. An essential step in eXplainable Artificial Intelligence (XAI) research is the creation of interpretable models that aim to approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their…
2

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." PhD thesis, Institut Polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Abstract:
Today's artificial intelligence (AI) models have proven their worth in solving a variety of tasks, such as classification, regression, natural language processing (NLP), and image processing. The resources at our disposal today allow us to train very complex AI models to solve problems in almost every domain: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in these models has grown as well…

Book chapters on the topic "Post-hoc explainability"

1

Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.

2

Deshpande, Saurabh, Rahee Walambe, Ketan Kotecha, and Marina Marjanović Jakovljević. "Post-hoc Explainable Reinforcement Learning Using Probabilistic Graphical Models." In Communications in Computer and Information Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95502-1_28.

3

Cánovas-Segura, Bernardo, Antonio Morales, Antonio López Martínez-Carrasco, et al. "Exploring Antimicrobial Resistance Prediction Using Post-hoc Interpretable Methods." In Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37446-4_8.

4

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structu…
5

Agiollo, Andrea, Luciano Cavalcante Siebert, Pradeep Kumar Murukannaiah, and Andrea Omicini. "The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing." In Explainable and Transparent AI and Multi-Agent Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40878-6_6.

6

Neubig, Stefan, Daria Cappey, Nicolas Gehring, Linus Göhl, Andreas Hein, and Helmut Krcmar. "Visualizing Explainable Touristic Recommendations: An Interactive Approach." In Information and Communication Technologies in Tourism 2024. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_37.

Abstract:
Personalized recommendations have played a vital role in tourism, serving various purposes, ranging from an improved visitor experience to addressing sustainability issues. However, research shows that recommendations are more likely to be accepted by visitors if they are comprehensible and appeal to the visitors’ common sense. This highlights the importance of explainable recommendations that, according to a previously specified goal, explain an algorithm’s inference process, generate trust among visitors, or educate visitors by making them aware of sustainability practices. Based on…
7

Mota, Bruno, Pedro Faria, Juan Corchado, and Carlos Ramos. "Explainable Artificial Intelligence Applied to Predictive Maintenance: Comparison of Post-Hoc Explainability Techniques." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63803-9_19.

8

Oliveira, Pedro, Francisco Franco, Afonso Bessa, Dalila Durães, and Paulo Novais. "Employing Explainable AI Techniques for Air Pollution: An Ante-Hoc and Post-Hoc Approach in Dioxide Nitrogen Forecasting." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-77731-8_30.

9

Pandey, Chetraj, Rafal A. Angryk, Manolis K. Georgoulis, and Berkay Aydin. "Explainable Deep Learning-Based Solar Flare Prediction with Post Hoc Attention for Operational Forecasting." In Discovery Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45275-8_38.

10

Nizam, Tasleem, Sherin Zafar, Siddhartha Sankar Biswas, and Imran Hussain. "Investigating the Quality of Explainable Artificial Intelligence: A Survey on Various Techniques of Post hoc." In Intelligent Strategies for ICT. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1260-1_13.


Conference papers on the topic "Post-hoc explainability"

1

Xu, Kerui, Jun Xu, Sheng Gao, Si Li, Jun Guo, and Ji-Rong Wen. "A Tag-Based Post-Hoc Framework for Explainable Conversational Recommendation." In ICTIR '22: The 2022 ACM SIGIR International Conference on the Theory of Information Retrieval. ACM, 2022. http://dx.doi.org/10.1145/3539813.3545120.

2

Deb, Kiron, Xuan Zhang, and Kevin Duh. "Post-Hoc Interpretation of Transformer Hyperparameters with Explainable Boosting Machines." In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.blackboxnlp-1.5.

3

Senevirathna, Thulitha, Bartlomiej Siniarski, Madhusanka Liyanage, and Shen Wang. "Deceiving Post-Hoc Explainable AI (XAI) Methods in Network Intrusion Detection." In 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC). IEEE, 2024. http://dx.doi.org/10.1109/ccnc51664.2024.10454633.

4

Kenny, Eoin M., Eoin Delaney, and Mark T. Keane. "Advancing Post-Hoc Case-Based Explanation with Feature Highlighting." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/48.

Abstract:
Explainable AI (XAI) has been proposed as a valuable tool to assist in downstream tasks involving human-AI collaboration. Perhaps the most psychologically valid XAI techniques are case-based approaches which display "whole" exemplars to explain the predictions of black-box AI systems. However, for such post-hoc XAI methods dealing with images, there has been no attempt to improve their scope by using multiple clear feature "parts" of the images to explain the predictions while linking back to relevant cases in the training data, thus allowing for more comprehensive explanations that are faithf…
5

Demir, Caglar, and Axel-Cyrille Ngonga Ngomo. "Neuro-Symbolic Class Expression Learning." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/403.

Abstract:
Models computed using deep learning have been effectively applied to tackle various problems in many disciplines. Yet, the predictions of these models are often at most post-hoc and locally explainable. In contrast, class expressions in description logics are ante-hoc and globally explainable. Although state-of-the-art symbolic machine learning approaches are being successfully applied to learn class expressions, their application at large scale has been hindered by their impractical runtimes. Arguably, the reliance on myopic heuristic functions contributes to this limitation. We propose a nov…
6

Čyras, Kristijonas, Antonio Rago, Emanuele Albini, Pietro Baroni, and Francesca Toni. "Argumentative XAI: A Survey." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/600.

Abstract:
Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods. We overview the literature focusing on diffe…
7

Aryal, Saugat, and Mark T. Keane. "Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/732.

Abstract:
Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told “if you asked for a loan with a shorter term, it would have been approved”). Counterfactuals explain what changes to the input-features of an AI system change the output-decision. However, there is a sub-type of counterfactual, semi-factuals, that have received less attention in AI (though the Cognitive Sciences have studied them more). This paper surveys semi-factual explanation, summarising historical and recent w…
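
To make the survey's central distinction concrete, a toy sketch contrasting a counterfactual (the smallest single-feature change that flips the decision) with a semi-factual (a large change that leaves the decision unchanged). The classifier, data, and search grid are illustrative assumptions, not the paper's benchmarks:

```python
# Toy contrast of counterfactual vs. semi-factual ("even if") explanations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

x = X[0]
base = clf.predict(x.reshape(1, -1))[0]

best_cf, best_sf = None, None  # (feature index, delta)
for j in range(x.size):
    for delta in np.linspace(-3.0, 3.0, 61):
        x_new = x.copy()
        x_new[j] += delta
        flipped = clf.predict(x_new.reshape(1, -1))[0] != base
        if flipped and (best_cf is None or abs(delta) < abs(best_cf[1])):
            best_cf = (j, delta)   # smallest change that flips the decision
        if not flipped and (best_sf is None or abs(delta) > abs(best_sf[1])):
            best_sf = (j, delta)   # largest change that still preserves it

if best_cf:
    print(f"Counterfactual: shifting feature {best_cf[0]} by {best_cf[1]:+.2f} flips the prediction")
print(f"Semi-factual: even if feature {best_sf[0]} shifts by {best_sf[1]:+.2f}, the prediction holds")
```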
8

Thendral Surendranath, Ephina. "Explainable Hybrid Machine Learning Technique for Healthcare Service Utilization." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004837.

Abstract:
In the era of data, predictive and prescriptive analytics in healthcare is enabled by machine learning (ML) algorithms. The varied healthcare entities pose challenges in the inclusion of ML predictive models in the rule-based claims processing system. The hybrid ML algorithm proposed in this research article is for handling huge volumes of data in predicting a member’s utilization of Medicaid home healthcare service. The member’s demographic features, health details and enrolment details are generally considered for building the utilization model though health details may not be available for…
9

Sattarzadeh, Sam, Mahesh Sudhakar, and Konstantinos N. Plataniotis. "SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00462.

10

Morais, Lucas Rabelo de Araujo, Gabriel Arnaud de Melo Fragoso, Teresa Bernarda Ludermir, and Claudio Luis Alves Monteiro. "Explainable AI For the Brazilian Stock Market Index: A Post-Hoc Approach to Deep Learning Models in Time-Series Forecasting." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/eniac.2024.244444.

Abstract:
Time-series forecasting is challenging when data lacks clear trends or seasonality, making traditional statistical models less effective. Deep Learning models, like Neural Networks, excel at capturing non-linear patterns and offer a promising alternative. The Bovespa Index (Ibovespa), a key indicator of Brazil’s stock market, is volatile, leading to potential investor losses due to inaccurate forecasts and limited market insight. Neural Networks can enhance forecast accuracy, but reduce model explainability. This study aims to use Deep Learning to forecast the Ibovespa, striving to balance hig…