Academic literature on the topic 'Post-hoc interpretability'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Post-hoc interpretability.'
Journal articles on the topic "Post-hoc interpretability"
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability." Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review." Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning." Entropy 25, no. 12 (November 22, 2023): 1574. http://dx.doi.org/10.3390/e25121574.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, Chrysoula Garefa, Lukas S. Keller, Reto Boehm, Domenico Ciancone, et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”." Imaging 14, no. 2 (December 23, 2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study." PHM Society European Conference 5, no. 1 (July 22, 2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, Himar Fabelo, Inger Torhild Gram, Maja-Lisa Løchen, Conceição Granja, and Cristina Soguero-Ruiz. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors." Applied Sciences 13, no. 7 (March 23, 2023): 4119. http://dx.doi.org/10.3390/app13074119.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Dissertations / Theses on the topic "Post-hoc interpretability"
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction a trained decision model makes for a specific data point. To make explanations more interpretable, the thesis studies the integration of user knowledge into these methods, with the goal of improving understandability by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, in particular counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The question of aggregating classical quality constraints with knowledge-compatibility constraints is also studied, and we propose Gödel's integral as the aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
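As an illustrative aside, the general idea of constraining a counterfactual search with user knowledge can be sketched in a few lines of Python. This is a naive perturbation-based search in which user knowledge is reduced to a list of immutable features; it is not the KICE, rKICE, or KISM algorithm from the thesis, and the function name, sampling scheme, and parameters are assumptions of this sketch.

```python
# Naive counterfactual search with a user-knowledge constraint (immutable
# features). Illustrative only: NOT the KICE algorithm from the thesis.
import numpy as np

def constrained_counterfactual(predict, x, immutable, n_samples=5000, scale=1.0, seed=0):
    """Return the perturbed instance closest to x (L1 distance) whose predicted
    class differs from that of x, changing only features the user allows."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    original_class = predict(x[None, :])[0]
    mutable = np.array([i for i in range(x.size) if i not in set(immutable)], dtype=int)

    # Sample candidates by perturbing only the mutable features around x.
    candidates = np.tile(x, (n_samples, 1))
    candidates[:, mutable] += rng.normal(0.0, scale, size=(n_samples, mutable.size))

    # Keep candidates whose prediction flips, then pick the closest one.
    flipped = candidates[predict(candidates) != original_class]
    if flipped.shape[0] == 0:
        return None  # no counterfactual found under these constraints
    return flipped[np.argmin(np.abs(flipped - x).sum(axis=1))]
```

Here `predict` is any fitted classifier's prediction function (for instance scikit-learn's `.predict`), and `immutable` stands in for the user's knowledge about which features must not change; a full method would also enforce plausibility and sparsity criteria.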
Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible to their end-users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, little attention has been paid to explaining how models change their behaviour with respect to other versions (e.g., after retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions relative to past outcomes. In many practical situations, human decision-makers deal with more than one machine learning model, so it is increasingly important to understand how two models work beyond their predictive performance: their behaviour, their differences, and their similarities. Interpretable models synthesised to explain black boxes and their predictions can also formally represent and measure the differences in a retrained model's behaviour on new and different data. Capturing and understanding such differences is crucial, as trust is key in any application supporting human-Artificial Intelligence (AI) decision-making. This is the idea behind ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time. We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and on a real-life application, showing through a user study that it is effective in catching the classes whose treatment changed most and in explaining their variation. The approach is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
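To illustrate the model-contrastive idea in code (a rough sketch, not the ContrXT implementation: the Binary Decision Diagram encoding and natural-language generation are omitted, and the surrogate-tree approximation, function names, and parameters are assumptions of this sketch), two versions of a black-box classifier can each be distilled into a shallow decision tree whose per-class rules are then diffed:

```python
# Rough model-contrastive sketch: distil each model version into a shallow
# surrogate tree and diff the per-class decision rules. NOT ContrXT itself
# (no Binary Decision Diagrams, no natural-language generation).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def surrogate_rules(black_box_predict, X, max_depth=3):
    """Return, per predicted class, the set of root-to-leaf rule paths of a
    surrogate tree fitted to the black box's predictions on X."""
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(X, black_box_predict(X))
    t, rules = tree.tree_, {}

    def walk(node, path):
        if t.children_left[node] == -1:  # leaf: store the path under its class
            cls = tree.classes_[np.argmax(t.value[node])]
            rules.setdefault(cls, set()).add(frozenset(path))
            return
        cond = (int(t.feature[node]), round(float(t.threshold[node]), 3))
        walk(t.children_left[node], path + [cond + ("<=",)])
        walk(t.children_right[node], path + [cond + (">",)])

    walk(0, [])
    return rules

def contrast(rules_old, rules_new):
    """Print, per class, how many surrogate decision rules appeared or disappeared."""
    for cls in sorted(set(rules_old) | set(rules_new), key=str):
        added = rules_new.get(cls, set()) - rules_old.get(cls, set())
        removed = rules_old.get(cls, set()) - rules_new.get(cls, set())
        if added or removed:
            print(f"class {cls}: {len(added)} rules added, {len(removed)} rules removed")
```

Calling `contrast(surrogate_rules(old_model.predict, X), surrogate_rules(new_model.predict, X))` highlights which classes the retrained model treats differently, which is the kind of change a contrastive explanation is meant to surface.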
Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.
This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is, the generation of explanations for a single prediction of a trained classifier. We study a fully agnostic context, meaning that the explanation is generated without using any knowledge about either the classifier (treated as a black box) or the data used to train it. We identify several issues that can arise in this context and that may be harmful to interpretability, and we propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
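The local surrogate paradigm mentioned above can be illustrated with a minimal LIME-style sketch (a generic illustration, not one of the thesis's algorithms; the Gaussian sampling, RBF weighting, ridge surrogate, and all names are assumptions of this example):

```python
# Minimal LIME-style local surrogate (generic illustration, not an algorithm
# from the thesis): explain one prediction of a black-box binary classifier
# by a weighted linear model fitted in the neighbourhood of the instance.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_proba, x, n_samples=2000, scale=0.3, seed=0):
    """Return per-feature weights approximating the black box around x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)

    # Sample a neighbourhood around x and query the black box on it.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    p = predict_proba(Z)[:, 1]  # probability of the positive class

    # Weight samples by proximity to x, then fit the linear surrogate.
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)
    return surrogate.coef_  # local feature influences on the prediction
```

The risks studied in the thesis arise precisely in sketches like this one: the sampled neighbourhood may fall out of distribution, the fitted explanation may not correspond to any real instance, and a too-wide `scale` makes the explanation no longer local.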
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven very successful at solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our disposal today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. As complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on the specific research area of Explainable Artificial Intelligence (xAI), which aims to provide approaches for interpreting complex AI models and explaining their decisions. We present two approaches, STACI and BELLA, which address classification and regression tasks, respectively, for tabular data. Both are deterministic, model-agnostic, post-hoc approaches, which means they can be applied to any black-box model after its creation. In this way, interpretability adds value without compromising the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
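The underlying post-hoc surrogate idea can be illustrated with a plain global surrogate (a minimal sketch of the model-agnostic setting, not the STACI or BELLA algorithms; the function name and tree depth are assumptions): fit an interpretable tree to the black box's own outputs and measure how faithfully it reproduces them.

```python
# Plain global-surrogate sketch for tabular data (illustration of the
# model-agnostic post-hoc setting; NOT the STACI or BELLA algorithms).
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor

def fit_global_surrogate(black_box_predict, X, max_depth=4):
    """Fit an interpretable tree to the black box's outputs on X (regression
    targets or predicted probabilities) and report fidelity to the black box."""
    y_bb = black_box_predict(X)
    surrogate = DecisionTreeRegressor(max_depth=max_depth, random_state=0).fit(X, y_bb)
    fidelity = r2_score(y_bb, surrogate.predict(X))  # 1.0 = perfect mimicry
    return surrogate, fidelity
```

A fidelity close to 1 indicates the simple surrogate tracks the black box closely enough to serve as an explanation of it; a low fidelity warns that the explanation should not be trusted.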
Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.
Book chapters on the topic "Post-hoc interpretability"
Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 167–216. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.
Greenwell, Brandon M. "Peeking inside the “black box”: post-hoc interpretability." In Tree-Based Methods for Statistical Learning in R, 203–28. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003089032-6.
Santos, Flávio Arthur Oliveira, Cleber Zanchettin, José Vitor Santos Silva, Leonardo Nogueira Matos, and Paulo Novais. "A Hybrid Post Hoc Interpretability Approach for Deep Neural Networks." In Lecture Notes in Computer Science, 600–610. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86271-8_50.
Ann Jo, Ashly, and Ebin Deni Raj. "Post hoc Interpretability: Review on New Frontiers of Interpretable AI." In Lecture Notes in Networks and Systems, 261–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2_23.
Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. "Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability." In Machine Learning and Knowledge Discovery in Databases, 193–204. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_17.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Turbé, Hugues, Mina Bjelogrlic, Mehdi Namdar, Christophe Gaudet-Blavignac, Jamil Zaghir, Jean-Philippe Goldman, Belinda Lokaj, and Christian Lovis. "A Lightweight and Interpretable Model to Classify Bundle Branch Blocks from ECG Signals." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220393.
Dumka, Ankur, Vaibhav Chaudhari, Anil Kumar Bisht, Ruchira Rawat, and Arnav Pandey. "Methods, Techniques, and Application of Explainable Artificial Intelligence." In Advances in Environmental Engineering and Green Technologies, 337–54. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2351-9.ch017.
Li, Yaoman, and Irwin King. "Neural Architecture Search for Explainable Networks." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230423.
Conference papers on the topic "Post-hoc interpretability"
Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. "The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/388.
Vieira, Carla Piazzon, and Luciano Antonio Digiampietri. "Machine Learning post-hoc interpretability: a systematic mapping study." In SBSI: XVIII Brazilian Symposium on Information Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3535511.3535512.
Attanasio, Giuseppe, Debora Nozza, Eliana Pastor, and Dirk Hovy. "Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection." In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlppower-1.11.
Sujana, D. Swainson, and D. Peter Augustine. "Explaining Autism Diagnosis Model Through Local Interpretability Techniques – A Post-hoc Approach." In 2023 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI). IEEE, 2023. http://dx.doi.org/10.1109/icdsaai59313.2023.10452575.
Gkoumas, Dimitris, Qiuchi Li, Yijun Yu, and Dawei Song. "An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/239.