Academic literature on the topic 'Post-hoc Explainability'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Post-hoc Explainability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Post-hoc Explainability"

1

Zhang, Xiaopu, Wubing Miao, and Guodong Liu. "Explainable Data Mining Framework of Identifying Root Causes of Rocket Engine Anomalies Based on Knowledge and Physics-Informed Feature Selection." Machines 13, no. 8 (2025): 640. https://doi.org/10.3390/machines13080640.

Abstract:
Liquid rocket engines occasionally experience abnormal phenomena with unclear mechanisms, causing difficulty in design improvements. To address the above issue, a data mining method that combines ante hoc explainability, post hoc explainability, and prediction accuracy is proposed. For ante hoc explainability, a feature selection method driven by data, models, and domain knowledge is established. Global sensitivity analysis of a physical model combined with expert knowledge and data correlation is utilized to establish the correlations between different types of parameters. Then a two-stage op
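The framework above couples domain knowledge with data-driven feature screening. As a rough, generic illustration of the data-driven half only (not the authors' pipeline), the sketch below ranks hypothetical engine-telemetry features against an anomaly indicator using Pearson correlation and mutual information; all column names and data are made up for the example.

```python
# Hypothetical illustration of data-driven feature ranking; not the paper's method.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
# Made-up engine telemetry features.
df = pd.DataFrame({
    "chamber_pressure": rng.normal(10.0, 1.0, 500),
    "turbopump_speed": rng.normal(30_000.0, 500.0, 500),
    "mixture_ratio": rng.normal(2.7, 0.05, 500),
})
# Made-up anomaly indicator, driven mostly by chamber pressure.
anomaly = 0.8 * df["chamber_pressure"] + rng.normal(0.0, 0.5, 500)

pearson = df.corrwith(anomaly).abs()  # strength of linear association
mutual_info = pd.Series(mutual_info_regression(df.values, anomaly.values),
                        index=df.columns)  # also captures nonlinear association

ranking = pd.DataFrame({"abs_pearson": pearson, "mutual_info": mutual_info})
print(ranking.sort_values("mutual_info", ascending=False))
```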
2

Acun, Cagla, Ali Ashary, Dan O. Popa, and Olfa Nasraoui. "Optimizing Local Explainability in Robotic Grasp Failure Prediction." Electronics 14, no. 12 (2025): 2363. https://doi.org/10.3390/electronics14122363.

Abstract:
This paper presents a local explainability mechanism for robotic grasp failure prediction that enhances machine learning transparency at the instance level. Building upon pre hoc explainability concepts, we develop a neighborhood-based optimization approach that leverages the Jensen–Shannon divergence to ensure fidelity between predictor and explainer models at a local level. Unlike traditional post hoc methods such as LIME, our local in-training explainability framework directly optimizes the predictor model during training, then fine-tunes the pre-trained explainer for each test instance wit
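The abstract above measures local fidelity between the predictor and the explainer with the Jensen–Shannon divergence. A minimal sketch of such a fidelity check, using SciPy and hypothetical class-probability outputs for a single instance, could look like this:

```python
# Minimal sketch: Jensen-Shannon divergence as a local fidelity measure between a
# predictor's and an explainer's class-probability outputs (hypothetical values).
import numpy as np
from scipy.spatial.distance import jensenshannon

predictor_probs = np.array([0.70, 0.20, 0.10])  # black-box model output for one instance
explainer_probs = np.array([0.65, 0.25, 0.10])  # surrogate/explainer output for the same instance

# scipy returns the JS *distance* (square root of the divergence); base-2 here.
js_distance = jensenshannon(predictor_probs, explainer_probs, base=2)
js_divergence = js_distance ** 2
print(f"JS divergence: {js_divergence:.4f}  (0 = perfect local fidelity)")
```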
3

Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.

Abstract:
Explainable AI has received significant attention in recent years. Machine learning models often operate as black boxes, lacking explainability and transparency while supporting decision-making processes. Local post-hoc explainability queries attempt to answer why individual inputs are classified in a certain way by a given model. While there has been important work on counterfactual explanations, less attention has been devoted to semifactual ones. In this paper, we focus on local post-hoc explainability queries within the semifactual `even-if' thinking and their computational complexity amon
4

Mochaourab, Rami, Arun Venkitaraman, Isak Samsten, Panagiotis Papapetrou, and Cristian R. Rojas. "Post Hoc Explainability for Time Series Classification: Toward a signal processing perspective." IEEE Signal Processing Magazine 39, no. 4 (2022): 119–29. http://dx.doi.org/10.1109/msp.2022.3155955.

5

Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (2021): 3137. http://dx.doi.org/10.3390/math9233137.

Abstract:
Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS cla
6

Lee, Gin Chong, and Chu Kiong Loo. "On the Post Hoc Explainability of Optimized Self-Organizing Reservoir Network for Action Recognition." Sensors 22, no. 5 (2022): 1905. http://dx.doi.org/10.3390/s22051905.

Abstract:
This work proposes a novel unsupervised self-organizing network, called the Self-Organizing Convolutional Echo State Network (SO-ConvESN), for learning node centroids and interconnectivity maps compatible with the deterministic initialization of Echo State Network (ESN) input and reservoir weights, in the context of human action recognition (HAR). To ensure stability and echo state property in the reservoir, Recurrent Plots (RPs) and Recurrence Quantification Analysis (RQA) techniques are exploited for explainability and characterization of the reservoir dynamics and hence tuning ESN hyperpara
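The abstract above uses Recurrence Plots and Recurrence Quantification Analysis to characterize reservoir dynamics. A minimal recurrence-plot sketch for a generic 1-D signal (hypothetical data and threshold, not the paper's pipeline) is shown below; basic RQA measures such as the recurrence rate are simple statistics over this binary matrix.

```python
# Minimal recurrence-plot sketch for a hypothetical 1-D signal.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 300)
signal = np.sin(t) + 0.1 * rng.normal(size=t.size)   # hypothetical state trace

# Pairwise distances between time points; a recurrence occurs when distance < threshold.
dist = np.abs(signal[:, None] - signal[None, :])
threshold = 0.2 * dist.std()                          # assumed threshold choice
recurrence_plot = (dist < threshold).astype(int)      # binary recurrence matrix

# One basic RQA measure: recurrence rate (fraction of recurrent point pairs).
recurrence_rate = recurrence_plot.mean()
print(f"Recurrence rate: {recurrence_rate:.3f}")
```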
7

Hildt, Elisabeth. "What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach." Bioengineering 12, no. 4 (2025): 375. https://doi.org/10.3390/bioengineering12040375.

Abstract:
This article reflects on explainability in the context of medical artificial intelligence (AI) applications, focusing on AI-based clinical decision support systems (CDSS). After introducing the concept of explainability in AI and providing a short overview of AI-based clinical decision support systems (CDSSs) and the role of explainability in CDSSs, four use cases of AI-based CDSSs will be presented. The examples were chosen to highlight different types of AI-based CDSSs as well as different types of explanations: a machine learning (ML) tool that lacks explainability; an approach with post ho
8

Boya Marqas, Ridwan, Saman M. Almufti, and Rezhna Azad Yusif. "Unveiling explainability in artificial intelligence: a step towards transparent AI." International Journal of Scientific World 11, no. 1 (2025): 13–20. https://doi.org/10.14419/f2agrs86.

Abstract:
Explainability in artificial intelligence (AI) is an essential factor for building transparent, trustworthy, and ethical systems, particularly in high-stakes domains such as healthcare, finance, justice, and autonomous systems. This study examines the foundations of AI explainability, its critical role in fostering trust, and the current methodologies used to interpret AI models, such as post-hoc techniques, intrinsically interpretable models, and hybrid approaches. Despite these advancements, challenges persist, including trade-offs between accuracy and interpretability, scalability, et
9

Maddala, Suresh Kumar. "Understanding Explainability in Enterprise AI Models." International Journal of Management Technology 12, no. 1 (2025): 58–68. https://doi.org/10.37745/ijmt.2013/vol12n25868.

Abstract:
This article examines the critical role of explainability in enterprise AI deployments, where algorithmic transparency has emerged as both a regulatory necessity and a business imperative. As organizations increasingly rely on sophisticated machine learning models for consequential decisions, the "black box" problem threatens stakeholder trust, regulatory compliance, and effective model governance. We explore the multifaceted business case for explainable AI across regulated industries, analyze the spectrum of interpretability techniques—from inherently transparent models to post-hoc explanati
10

Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (2024): 1797. http://dx.doi.org/10.3390/en17081797.

Abstract:
The prediction of building energy consumption is beneficial to utility companies, users, and facility managers to reduce energy waste. However, due to various drawbacks of prediction algorithms, such as non-transparent output, ad hoc explanation by post hoc tools, low accuracy, and the inability to deal with data uncertainties, such prediction has limited applicability in this domain. As a result, domain knowledge-based explainability with high accuracy is critical for making energy predictions trustworthy. Motivated by this, we propose an advanced explainable Belief Rule-Based Expert System

Dissertations / Theses on the topic "Post-hoc Explainability"

1

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Abstract:
This thesis is situated in the field of explainable AI (XAI, eXplainable AI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point of interest. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the comprehensibility of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism
2

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Abstract:
Current artificial intelligence (AI) models have proven themselves in solving various tasks, such as classification, regression, natural language processing (NLP), and image processing. The resources available today allow us to train very complex AI models to solve different problems in almost every field: medicine, finance, justice, transport, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in these models has also grown.
3

Ayad, Célia. "Towards Reliable Post Hoc Explanations for Machine Learning on Tabular Data and their Applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX082.

Abstract:
As machine learning continues to demonstrate strong predictive capabilities, it has become a highly valuable tool in several scientific and industrial domains. However, as ML models evolve to achieve greater accuracy, they also become increasingly complex and require more parameters. Being able to understand their internal complexities and to establish trust in the predictions of these machine learning models has therefore become essential in various critical domains, notably healthcare and finance.
4

Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.

Abstract:
Deep neural networks are the most widely used examples of sophisticated mapping functions from feature space to class labels. In the recent years, several high impact decisions in domains such as finance, healthcare, law and autonomous driving, are made with deep models. In these tasks, the model decisions lack interpretability, and pose difficulties in making the models accountable. Hence, there is a strong demand for developing explainable approaches which can elicit how the deep neural architecture, despite the astounding performance improvements observed in all fields, including computer v

Book chapters on the topic "Post-hoc Explainability"

1

Cinquini, Martina, Fosca Giannotti, Riccardo Guidotti, and Andrea Mattei. "Handling Missing Values in Local Post-hoc Explainability." In Communications in Computer and Information Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0_14.

2

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structu
3

Ming, Ho Kah, Vong Wan Tze, Brian Loh Chung Shiong, and Patrick Hang Hui Then. "Evaluation of Post-Hoc Explainability Methods for Glaucoma Classification Using Fundus Images." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-92605-1_40.

4

Mota, Bruno, Pedro Faria, Juan Corchado, and Carlos Ramos. "Explainable Artificial Intelligence Applied to Predictive Maintenance: Comparison of Post-Hoc Explainability Techniques." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63803-9_19.

5

Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.

Abstract:
Many methods have been developed to understand complex predictive models and high expectations are placed on post-hoc model explainability. It turns out that such explanations are not robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which are among the most popular methods of explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability became a must-have t
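The chapter above shows that Partial Dependence explanations can be manipulated by poisoning the data they are computed on. The sketch below computes a one-dimensional PD profile by hand for a hypothetical model and dataset; it makes the vulnerability plain, since every PD value is just an average of model predictions over the (possibly poisoned) background sample.

```python
# Minimal, generic partial-dependence profile computed by hand (hypothetical data and model).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 400)
model = GradientBoostingRegressor().fit(X, y)

def partial_dependence_1d(model, X_background, feature, grid):
    """PD(v) = mean prediction with `feature` forced to v while the other features
    keep their observed values; poisoning X_background therefore moves the curve."""
    profile = []
    for value in grid:
        X_mod = X_background.copy()
        X_mod[:, feature] = value
        profile.append(model.predict(X_mod).mean())
    return np.array(profile)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
print(np.round(partial_dependence_1d(model, X, feature=0, grid=grid), 3))
```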
6

Neubig, Stefan, Daria Cappey, Nicolas Gehring, Linus Göhl, Andreas Hein, and Helmut Krcmar. "Visualizing Explainable Touristic Recommendations: An Interactive Approach." In Information and Communication Technologies in Tourism 2024. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_37.

Abstract:
Personalized recommendations have played a vital role in tourism, serving various purposes, ranging from an improved visitor experience to addressing sustainability issues. However, research shows that recommendations are more likely to be accepted by visitors if they are comprehensible and appeal to the visitors’ common sense. This highlights the importance of explainable recommendations that, according to a previously specified goal, explain an algorithm’s inference process, generate trust among visitors, or educate visitors by making them aware of sustainability practices. Based on
7

Mikriukov, Georgii, Gesina Schwalbe, Christian Hellert, and Korinna Bade. "Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability." In Communications in Computer and Information Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0_26.

Abstract:
Analysis of how semantic concepts are represented within Convolutional Neural Networks (CNNs) is a widely used approach in Explainable Artificial Intelligence (XAI) for interpreting CNNs. A motivation is the need for transparency in safety-critical AI-based systems, as mandated in various domains like automated driving. However, to use the concept representations for safety-relevant purposes, like inspection or error retrieval, these must be of high quality and, in particular, stable. This paper focuses on two stability goals when working with concept representations in computer vision
8

Barzas, Konstantinos, Shereen Fouad, Gainer Jasa, and Gabriel Landini. "An Explainable Deep Learning Framework for Mandibular Canal Segmentation from Cone Beam Computed Tomography Volumes." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82768-6_1.

Abstract:
Cone Beam Computed Tomography (CBCT) is an indispensable imaging modality in oral radiology, offering comprehensive dental anatomical information. Accurate detection of the mandibular canal (MC), a crucial anatomical structure in the lower jaw, within CBCT volumes is essential to support clinical dentistry workflows, including diagnosis, preoperative treatment planning, and postoperative evaluation. In this study, we present a deep learning-based (DL) approach for MC segmentation using 3D U-Net and 3D Attention U-Net networks. We collected a unique dataset of CBCT scans from 20 anonym
9

De Santis, Antonio, Riccardo Campi, Matteo Bianchi, Andrea Tocchetti, and Marco Brambilla. "Foundational approaches to post-hoc explainability for image classification." In Bi-directionality in Human-AI Collaborative Systems. Elsevier, 2025. https://doi.org/10.1016/b978-0-44-340553-2.00008-3.

10

Priya Dharshini, K. R., and D. Sathiyaraj. "Artificial Intelligence and Machine Learning for Predictive Maintenance in Solar Energy Systems." In Solar Energy Systems and Smart Electrical Grids for Sustainable Renewable Energy. RADemics Research Institute, 2025. https://doi.org/10.71443/9789349552517-14.

Abstract:
Predictive maintenance powered by Artificial Intelligence (AI) and Machine Learning (ML) is transforming the management and operational efficiency of renewable energy systems, particularly in solar energy installations. As AI-driven solutions become more integrated into predictive maintenance frameworks, the need for transparency, explainability, and regulatory compliance becomes paramount. This chapter explores the fundamental role of Explainable AI (XAI) in enhancing the reliability, transparency, and accountability of predictive maintenance models, with a focus on solar energy systems. Key

Conference papers on the topic "Post-hoc Explainability"

1

Ducange, Pietro, Francesco Marcelloni, Alessandro Renda, and Fabrizio Ruffini. "Consistent Post-Hoc Explainability in Federated Learning through Federated Fuzzy Clustering." In 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2024. http://dx.doi.org/10.1109/fuzz-ieee60900.2024.10611761.

2

Kumar, Harsh, Kamalnath M. S, Ashwanth Ram A. S, and Jiji C. V. "Explainability to Image Captioning Models: An Improved Post-hoc Approach through Grad-CAM." In 2025 International Conference on Innovation in Computing and Engineering (ICE). IEEE, 2025. https://doi.org/10.1109/ice63309.2025.10984143.

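The paper above builds its post-hoc approach on Grad-CAM. As a generic reference point rather than the authors' captioning setup, here is a minimal Grad-CAM sketch for an image classifier using a torchvision ResNet backbone; the uninitialized weights and random input tensor are placeholders, and in practice one would load pretrained weights and a real preprocessed image.

```python
# Minimal, generic Grad-CAM sketch for a CNN classifier (placeholder weights and input).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)  # placeholder image tensor

# Run the backbone manually so we keep a handle on the last conv feature maps.
feats = model.maxpool(model.relu(model.bn1(model.conv1(x))))
for layer in (model.layer1, model.layer2, model.layer3, model.layer4):
    feats = layer(feats)
feats.retain_grad()  # we need d(score)/d(feature maps)

logits = model.fc(torch.flatten(model.avgpool(feats), 1))
score = logits[0, logits.argmax()]  # score of the top predicted class
score.backward()

# Grad-CAM: channel weights = global-average-pooled gradients; ReLU of the weighted sum.
weights = feats.grad.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap over the input
```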
3

Narkhede, Jeet. "Comparative Evaluation of Post-Hoc Explainability Methods in AI: LIME, SHAP, and Grad-CAM." In 2024 4th International Conference on Sustainable Expert Systems (ICSES). IEEE, 2024. https://doi.org/10.1109/icses63445.2024.10762963.

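The comparison above covers LIME, SHAP, and Grad-CAM. For the two tabular methods, a minimal usage sketch on a hypothetical random-forest classifier might look like the following; the dataset and parameters are placeholders, and the calls follow the public lime and shap packages as we understand them.

```python
# Minimal LIME and SHAP usage sketch on a hypothetical tabular classifier.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME: local surrogate fitted around one instance.
lime_explainer = LimeTabularExplainer(X, feature_names=list(data.feature_names),
                                      class_names=list(data.target_names),
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())        # top local feature contributions

# SHAP: Shapley-value attributions from a tree-specific explainer.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:10])
print(np.shape(shap_values))     # attribution layout varies by shap version
```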
4

Zhou, Tongyu, Haoyu Sheng, and Iris Howley. "Assessing Post-hoc Explainability of the BKT Algorithm." In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. ACM, 2020. http://dx.doi.org/10.1145/3375627.3375856.

5

Dhaini, Mahdi, Ege Erdogan, Nils Feldhus, and Gjergji Kasneci. "Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods." In FAccT '25: The 2025 ACM Conference on Fairness, Accountability, and Transparency. ACM, 2025. https://doi.org/10.1145/3715275.3732192.

6

Saini, Aditya, and Ranjitha Prasad. "Select Wisely and Explain: Active Learning and Probabilistic Local Post-hoc Explainability." In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. ACM, 2022. http://dx.doi.org/10.1145/3514094.3534191.

7

Bianchi, Matteo, Antonio De Santis, Andrea Tocchetti, and Marco Brambilla. "Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/411.

Abstract:
Transparency and explainability in image classification are essential for establishing trust in machine learning models and detecting biases and errors. State-of-the-art explainability methods generate saliency maps to show where a specific class is identified, without providing a detailed explanation of the model's decision process. Striving to address such a need, we introduce a post-hoc method that explains the entire feature extraction process of a Convolutional Neural Network. These explanations include a layer-wise representation of the features the model extracts from the input. Such fe
8

Kokkotis, Christos, Serafeim Moustakidis, Elpiniki Papageorgiou, Giannis Giakas, and Dimitrios Tsaopoulos. "A Machine Learning workflow for Diagnosis of Knee Osteoarthritis with a focus on post-hoc explainability." In 2020 11th International Conference on Information, Intelligence, Systems and Applications (IISA). IEEE, 2020. http://dx.doi.org/10.1109/iisa50023.2020.9284354.

9

Karimzadeh, Mohammad, Aleksandar Vakanski, Min Xian, and Boyu Zhang. "Post-Hoc Explainability of BI-RADS Descriptors in a Multi-Task Framework for Breast Cancer Detection and Segmentation." In 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2023. http://dx.doi.org/10.1109/mlsp55844.2023.10286006.

10

Morais, Lucas Rabelo de Araujo, Gabriel Arnaud de Melo Fragoso, Teresa Bernarda Ludermir, and Claudio Luis Alves Monteiro. "Explainable AI For the Brazilian Stock Market Index: A Post-Hoc Approach to Deep Learning Models in Time-Series Forecasting." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/eniac.2024.244444.

Abstract:
Time-series forecasting is challenging when data lacks clear trends or seasonality, making traditional statistical models less effective. Deep Learning models, like Neural Networks, excel at capturing non-linear patterns and offer a promising alternative. The Bovespa Index (Ibovespa), a key indicator of Brazil’s stock market, is volatile, leading to potential investor losses due to inaccurate forecasts and limited market insight. Neural Networks can enhance forecast accuracy, but reduce model explainability. This study aims to use Deep Learning to forecast the Ibovespa, striving to balance hig