Scientific literature on the topic "Post-hoc interpretability"
Journal articles on the topic "Post-hoc interpretability"
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability." Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review." Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning." Entropy 25, no. 12 (November 22, 2023): 1574. http://dx.doi.org/10.3390/e25121574.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, Chrysoula Garefa, Lukas S. Keller, Reto Boehm, Domenico Ciancone, et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for 'free'." Imaging 14, no. 2 (December 23, 2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study." PHM Society European Conference 5, no. 1 (July 22, 2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, Himar Fabelo, Inger Torhild Gram, Maja-Lisa Løchen, Conceição Granja, and Cristina Soguero-Ruiz. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors." Applied Sciences 13, no. 7 (March 23, 2023): 4119. http://dx.doi.org/10.3390/app13074119.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Dissertations on the topic "Post-hoc interpretability"
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, aiming to improve the understandability of explanations by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several algorithms: KICE (Knowledge Integration in Counterfactual Explanation), rKICE for its variant including knowledge expressed by rules, and KISM (Knowledge Integration in Surrogate Models). The issue of aggregating classical quality and knowledge-compatibility constraints is also studied, and we propose to use the Gödel integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible for their end-users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that aim at approximating the decision function of a black-box algorithm. Though several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their behaviour in contrast with other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions with respect to past outcomes. In several practical situations, human decision-makers deal with more than one machine learning model, so the importance of understanding how two machine learning models work beyond their prediction performances is growing, in order to understand their behaviour, their differences, and their similarities. To date, interpretable models are synthesised for explaining black boxes and their predictions, and they can be beneficial for formally representing and measuring the differences in a retrained model's behaviour in dealing with new and different data. Capturing and understanding such differences is crucial, as the need for trust is key in any application supporting human-Artificial Intelligence (AI) decision-making processes. This is the idea of ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time.
We implemented and evaluated this approach over several supervised ML models trained on benchmark datasets and a real-life application, showing through a user study that it is effective in catching the classes whose behaviour changed most and in explaining their variation. The approach has been implemented and is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification 'boites noires'." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.
This thesis focuses on the field of XAI (eXplainable AI), and more specifically on the local post-hoc interpretability paradigm, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic setting, meaning that the explanation is generated without using any knowledge about the classifier (treated as a black box) or the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful for interpretability. We propose to study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
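To make the local post-hoc setting discussed in this abstract concrete: a counterfactual explanation can be found by searching for the smallest perturbation of the input that flips the black box's prediction, using only query access to the classifier. The sketch below is a minimal growing-radius random search; the function names and parameters are illustrative, not the thesis's own algorithms.

```python
import numpy as np

def find_counterfactual(predict, x, radius_step=0.1, n_samples=1000,
                        max_radius=10.0, seed=0):
    """Search for a nearby point whose predicted class differs from predict(x).

    Draws candidates uniformly in balls of growing radius around x and
    returns the class-flipping candidate closest to x, or None if the
    search exhausts max_radius.
    """
    rng = np.random.default_rng(seed)
    y0 = predict(x)
    radius = radius_step
    while radius <= max_radius:
        # Sample candidates inside a ball of the current radius around x.
        directions = rng.normal(size=(n_samples, x.shape[0]))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(0.0, radius, size=(n_samples, 1))
        candidates = x + directions * radii
        flipped = np.array([predict(c) != y0 for c in candidates])
        if flipped.any():
            hits = candidates[flipped]
            dists = np.linalg.norm(hits - x, axis=1)
            return hits[np.argmin(dists)]  # nearest class-flipping point
        radius += radius_step
    return None

# Toy black box, queried only through its predictions.
black_box = lambda p: int(p[0] + p[1] > 1.0)

x = np.array([0.2, 0.3])               # predicted class 0
cf = find_counterfactual(black_box, x)  # a nearby point of class 1
print(cf)
```

Because the search only ever calls `predict`, it is fully agnostic in the sense used above; the abstract's "out of distribution" and "unjustified" risks arise precisely because nothing constrains `cf` to lie near the training data.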
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources we have at hand today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on the specific area of research, namely Explainable Artificial Intelligence (XAI), that aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
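The model-agnostic, post-hoc surrogate idea that recurs in these abstracts can be shown in a few lines: fit an interpretable model not to the ground-truth labels but to the black box's own predictions, then read the explanation off the surrogate and report its fidelity (agreement with the black box). The sketch below uses a one-split decision stump as the surrogate; the data, rule, and function names are placeholders, not the STACI or BELLA algorithms themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))

# The "black box": any opaque decision function; a stand-in rule here.
def black_box(X):
    return ((X[:, 0] + 0.2 * X[:, 1] ** 2) > 0.4).astype(int)

def fit_stump_surrogate(X, y_bb):
    """Fit a one-split decision stump to the black box's predictions.

    Scans candidate thresholds on each feature and returns
    (feature index, threshold, fidelity) for the single axis-aligned
    rule that best mimics the black box on X.
    """
    best = (0, 0.0, 0.0)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.05, 0.95, 50)):
            pred = (X[:, j] > t).astype(int)
            fid = (pred == y_bb).mean()  # agreement with the black box
            if fid > best[2]:
                best = (j, t, fid)
    return best

y_bb = black_box(X)             # surrogate is trained on these, not on labels
feature, threshold, fidelity = fit_stump_surrogate(X, y_bb)
print(f"surrogate rule: x{feature} > {threshold:.2f} (fidelity {fidelity:.2f})")
```

Fidelity, not accuracy, is the quality measure here: the surrogate explains the black box, so it is scored against the black box's outputs. A richer interpretable family (rules, shallow trees, sparse linear models) trades some simplicity for higher fidelity.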
Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.
Book chapters on the topic "Post-hoc interpretability"
Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 167–216. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.
Greenwell, Brandon M. "Peeking inside the 'black box': post-hoc interpretability." In Tree-Based Methods for Statistical Learning in R, 203–28. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003089032-6.
Santos, Flávio Arthur Oliveira, Cleber Zanchettin, José Vitor Santos Silva, Leonardo Nogueira Matos, and Paulo Novais. "A Hybrid Post Hoc Interpretability Approach for Deep Neural Networks." In Lecture Notes in Computer Science, 600–610. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86271-8_50.
Ann Jo, Ashly, and Ebin Deni Raj. "Post hoc Interpretability: Review on New Frontiers of Interpretable AI." In Lecture Notes in Networks and Systems, 261–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2_23.
Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. "Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability." In Machine Learning and Knowledge Discovery in Databases, 193–204. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_17.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Turbé, Hugues, Mina Bjelogrlic, Mehdi Namdar, Christophe Gaudet-Blavignac, Jamil Zaghir, Jean-Philippe Goldman, Belinda Lokaj, and Christian Lovis. "A Lightweight and Interpretable Model to Classify Bundle Branch Blocks from ECG Signals." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220393.
Dumka, Ankur, Vaibhav Chaudhari, Anil Kumar Bisht, Ruchira Rawat, and Arnav Pandey. "Methods, Techniques, and Application of Explainable Artificial Intelligence." In Advances in Environmental Engineering and Green Technologies, 337–54. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2351-9.ch017.
Li, Yaoman, and Irwin King. "Neural Architecture Search for Explainable Networks." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230423.
Texte intégralActes de conférences sur le sujet "Post-hoc interpretability"
Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. "The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/388.
Vieira, Carla Piazzon, and Luciano Antonio Digiampietri. "Machine Learning post-hoc interpretability: a systematic mapping study." In SBSI: XVIII Brazilian Symposium on Information Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3535511.3535512.
Attanasio, Giuseppe, Debora Nozza, Eliana Pastor, and Dirk Hovy. "Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection." In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlppower-1.11.
Sujana, D. Swainson, and D. Peter Augustine. "Explaining Autism Diagnosis Model Through Local Interpretability Techniques – A Post-hoc Approach." In 2023 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI). IEEE, 2023. http://dx.doi.org/10.1109/icdsaai59313.2023.10452575.
Gkoumas, Dimitris, Qiuchi Li, Yijun Yu, and Dawei Song. "An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/239.