Academic literature on the topic "Post-hoc interpretability"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Post-hoc interpretability".
Journal articles on the topic "Post-hoc interpretability"
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability". Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review". Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing". Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning". Entropy 25, no. 12 (November 22, 2023): 1574. http://dx.doi.org/10.3390/e25121574.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, Chrysoula Garefa, Lukas S. Keller, Reto Boehm, Domenico Ciancone et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”". Imaging 14, no. 2 (December 23, 2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study". PHM Society European Conference 5, no. 1 (July 22, 2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, Himar Fabelo, Inger Torhild Gram, Maja-Lisa Løchen, Conceição Granja, and Cristina Soguero-Ruiz. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors". Applied Sciences 13, no. 7 (March 23, 2023): 4119. http://dx.doi.org/10.3390/app13074119.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Texto completoTesis sobre el tema "Post-hoc interpretability"
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE for its variant including knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality and knowledge compatibility constraints is also studied, and we propose to use Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
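As a rough illustration of the kind of objective this abstract describes, the sketch below searches for a counterfactual example with a simple random-perturbation scheme and adds a penalty for violating a hypothetical user rule that one feature must stay fixed. The search strategy, the `knowledge_penalty` function, and all parameter values are illustrative assumptions; this is not the KICE, rKICE, or KISM algorithm from the thesis.

```python
# Minimal sketch: counterfactual search with a knowledge-compatibility penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def knowledge_penalty(x_cf, x_orig, immutable=(0,)):
    # Hypothetical user knowledge: feature 0 should not be changed.
    return sum(abs(x_cf[i] - x_orig[i]) for i in immutable)

def counterfactual(x, target, n_samples=5000, lam=10.0):
    # Sample candidate perturbations, keep those classified as the target class,
    # and return the one minimising distance plus the knowledge penalty.
    candidates = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    preds = clf.predict(candidates)
    valid = candidates[preds == target]
    if len(valid) == 0:
        return None
    costs = np.linalg.norm(valid - x, axis=1) + lam * np.array(
        [knowledge_penalty(c, x) for c in valid])
    return valid[np.argmin(costs)]

x0 = X[0]
cf = counterfactual(x0, target=1 - clf.predict([x0])[0])
print("original prediction:", clf.predict([x0])[0])
print("counterfactual prediction:", None if cf is None else clf.predict([cf])[0])
```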
SEVESO, ANDREA. "Symbolic Reasoning for Contrastive Explanations". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible for their end-users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that aim at approximating the decision function of a black-box algorithm. Though several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their behaviour in contrast with other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions with respect to past outcomes. In several practical situations, human decision-makers deal with more than one machine learning model. Consequently, the importance of understanding how two machine learning models work beyond their prediction performances is growing, in order to understand their behaviour, their differences, and their similarities. To date, interpretable models are synthesised for explaining black boxes and their predictions, and they can be beneficial for formally representing and measuring the differences in the retrained model's behaviour when dealing with new and different data. Capturing and understanding such differences is crucial, as the need for trust is key in any application to support human-Artificial Intelligence (AI) decision-making processes. This is the idea of ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time. We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and a real-life application, showing through a user study that it is effective in catching majorly changed classes and in explaining their variation. The approach has been implemented and is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
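As a loose illustration of the model-contrastive setting described above, the sketch below trains the same classifier on two different data splits and reports, per class, how often the retrained model disagrees with the original one. It does not reproduce ContrXT's Binary Decision Diagram encoding or its natural-language explanations; the dataset, models, and disagreement measure are assumptions chosen for brevity.

```python
# Minimal sketch: per-class disagreement between an original and a retrained model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.5, random_state=0)

model_t1 = LogisticRegression(max_iter=2000).fit(X_old, y_old)  # original model
model_t2 = LogisticRegression(max_iter=2000).fit(X_new, y_new)  # retrained model

preds_t1 = model_t1.predict(X)
preds_t2 = model_t2.predict(X)

# For each class, measure how often the retrained model overturns the old prediction.
for cls in np.unique(y):
    mask = preds_t1 == cls
    changed = np.mean(preds_t1[mask] != preds_t2[mask]) if mask.any() else 0.0
    print(f"class {cls}: {changed:.1%} of former predictions changed after retraining")
```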
Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.
This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic context, meaning that the explanation is generated without using any knowledge about the classifier (treated as a black box) or the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful to interpretability. We propose to study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
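One of the risks listed above, explanations that cannot be associated with any ground-truth instance, can be illustrated with a crude proxy check: measure how far a candidate counterfactual lies from the nearest training point. Both the line-search counterfactual and the nearest-neighbour check below are illustrative assumptions, not the formal criteria or algorithms proposed in the thesis.

```python
# Minimal sketch: does a naive counterfactual land near any ground-truth instance?
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
clf = SVC().fit(X, y)

x = X[0]
orig_pred = clf.predict([x])[0]

# Naive counterfactual: move along a fixed direction until the prediction flips.
direction = np.array([1.0, 0.0])
cf = x.copy()
for _ in range(200):
    cf = cf + 0.05 * direction
    if clf.predict([cf])[0] != orig_pred:
        break

# Proxy check: distance from the counterfactual to its nearest training instance.
nn = NearestNeighbors(n_neighbors=1).fit(X)
dist, _ = nn.kneighbors([cf])
print(f"counterfactual {cf}, distance to nearest training instance: {dist[0][0]:.3f}")
```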
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have been proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources that we have at hand today allow us to train very complex AI models to solve different problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), that aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
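In the same spirit as the surrogate-based post-hoc approaches this abstract mentions, the sketch below fits an interpretable linear model on a black-box regressor's own predictions in the neighbourhood of a single instance. The neighbourhood construction and model choices are assumptions made for illustration; this is not the actual STACI or BELLA procedure.

```python
# Minimal sketch: local linear surrogate fitted on a black-box regressor's outputs.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=1000, n_features=6, noise=5.0, random_state=0)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

x0 = X[0]
# Take the training points closest to x0 as the local neighbourhood.
dists = np.linalg.norm(X - x0, axis=1)
neighbourhood = X[np.argsort(dists)[:100]]

# The surrogate is trained on the black box's predictions, so its coefficients
# approximate the model's local behaviour rather than the raw data.
surrogate = LinearRegression().fit(neighbourhood, black_box.predict(neighbourhood))
print("local prediction (black box):", black_box.predict([x0])[0])
print("local prediction (surrogate):", surrogate.predict([x0])[0])
print("local feature attributions:", np.round(surrogate.coef_, 2))
```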
Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications". Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.
Book chapters on the topic "Post-hoc interpretability"
Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations". In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 167–216. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.
Greenwell, Brandon M. "Peeking inside the “black box”: post-hoc interpretability". In Tree-Based Methods for Statistical Learning in R, 203–28. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003089032-6.
Santos, Flávio Arthur Oliveira, Cleber Zanchettin, José Vitor Santos Silva, Leonardo Nogueira Matos, and Paulo Novais. "A Hybrid Post Hoc Interpretability Approach for Deep Neural Networks". In Lecture Notes in Computer Science, 600–610. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86271-8_50.
Ann Jo, Ashly, and Ebin Deni Raj. "Post hoc Interpretability: Review on New Frontiers of Interpretable AI". In Lecture Notes in Networks and Systems, 261–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2_23.
Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. "Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability". In Machine Learning and Knowledge Discovery in Databases, 193–204. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_17.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring". In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Turbé, Hugues, Mina Bjelogrlic, Mehdi Namdar, Christophe Gaudet-Blavignac, Jamil Zaghir, Jean-Philippe Goldman, Belinda Lokaj, and Christian Lovis. "A Lightweight and Interpretable Model to Classify Bundle Branch Blocks from ECG Signals". In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220393.
Dumka, Ankur, Vaibhav Chaudhari, Anil Kumar Bisht, Ruchira Rawat, and Arnav Pandey. "Methods, Techniques, and Application of Explainable Artificial Intelligence". In Advances in Environmental Engineering and Green Technologies, 337–54. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2351-9.ch017.
Li, Yaoman, and Irwin King. "Neural Architecture Search for Explainable Networks". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230423.
Conference papers on the topic "Post-hoc interpretability"
Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. "The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/388.
Vieira, Carla Piazzon, and Luciano Antonio Digiampietri. "Machine Learning post-hoc interpretability: a systematic mapping study". In SBSI: XVIII Brazilian Symposium on Information Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3535511.3535512.
Attanasio, Giuseppe, Debora Nozza, Eliana Pastor, and Dirk Hovy. "Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection". In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlppower-1.11.
Sujana, D. Swainson, and D. Peter Augustine. "Explaining Autism Diagnosis Model Through Local Interpretability Techniques – A Post-hoc Approach". In 2023 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI). IEEE, 2023. http://dx.doi.org/10.1109/icdsaai59313.2023.10452575.
Gkoumas, Dimitris, Qiuchi Li, Yijun Yu, and Dawei Song. "An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/239.