A selection of scholarly literature on the topic "Post-hoc interpretability"
Create a reference in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Browse the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Post-hoc interpretability".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and a bibliographic reference to the selected work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.
Journal articles on the topic "Post-hoc interpretability"
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability". Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review". Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing". Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning". Entropy 25, no. 12 (November 22, 2023): 1574. http://dx.doi.org/10.3390/e25121574.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, Chrysoula Garefa, Lukas S. Keller, Reto Boehm, Domenico Ciancone, et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for 'free'". Imaging 14, no. 2 (December 23, 2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study". PHM Society European Conference 5, no. 1 (July 22, 2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, Himar Fabelo, Inger Torhild Gram, Maja-Lisa Løchen, Conceição Granja, and Cristina Soguero-Ruiz. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors". Applied Sciences 13, no. 7 (March 23, 2023): 4119. http://dx.doi.org/10.3390/app13074119.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Dissertations on the topic "Post-hoc interpretability"
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods and thus aims to improve the understandability of explanations by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints with knowledge-compatibility constraints is also studied, and we propose to use Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users, and the notion of diversity in explanations.
Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible to their end users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that approximate the decision function of a black-box algorithm. Though several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their behaviour in contrast to other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions with respect to past outcomes. In several practical situations, human decision-makers deal with more than one machine learning model, so it is increasingly important to understand how two machine learning models work beyond their prediction performance: their behaviour, their differences, and their likeness. To date, interpretable models are synthesised for explaining black boxes and their predictions; they can also be useful for formally representing and measuring the differences in a retrained model's behaviour when dealing with new and different data. Capturing and understanding such differences is crucial, as trust is key in any application supporting human-Artificial Intelligence (AI) decision-making. This is the idea behind ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time.
We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and on a real-life application, showing through a user study that it is effective at catching majorly changed classes and explaining their variation. The approach is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.
This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic context, meaning that the explanation is generated without using any knowledge about the classifier (treated as a black box) or the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful to interpretability. We study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
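To make the paradigm studied in these theses concrete, here is a minimal, purely illustrative sketch of a model-agnostic counterfactual search: sample perturbations at growing distances from the instance and keep the closest one the black box classifies differently. The classifier, feature ranges, and search parameters are all assumptions for the toy example, not the methods proposed in the works above:

```python
import numpy as np

def naive_counterfactual(predict, x, low, high, n_samples=5000, seed=0):
    """Random growing-sphere-style search: sample points at increasing
    radii around x and return the closest candidate whose predicted
    class differs from that of x. Illustrative only."""
    rng = np.random.default_rng(seed)
    target = predict(x)
    for radius in np.linspace(0.05, 1.0, 20):        # grow the search sphere
        noise = rng.uniform(-radius, radius, size=(n_samples, x.size))
        candidates = np.clip(x + noise, low, high)   # stay in the feature domain
        mask = np.array([predict(c) for c in candidates]) != target
        flipped = candidates[mask]
        if flipped.size:                             # smallest radius with a flip
            dists = np.linalg.norm(flipped - x, axis=1)
            return flipped[dists.argmin()]
    return None

# Toy black box: classifies points by whether x0 + x1 > 1.
black_box = lambda p: int(p[0] + p[1] > 1.0)
cf = naive_counterfactual(black_box, np.array([0.3, 0.3]), low=0.0, high=1.0)
```

Note that this sketch never inspects the model's internals or its training data, which is exactly the fully agnostic setting the thesis describes; it is also prone to the out-of-distribution and unjustified-explanation risks that the thesis analyzes.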
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven very successful at solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our disposal today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, Explainable Artificial Intelligence (xAI), that aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both are deterministic, model-agnostic, post-hoc approaches, meaning they can be applied to any black-box model after its creation. In this way, interpretability adds value without compromising the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
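The post-hoc surrogate idea that recurs throughout these dissertations can be sketched generically (using scikit-learn; the models and dataset here are placeholders for illustration, not the STACI or BELLA methods themselves): train an interpretable model to mimic the black box's predictions, then measure how faithfully it does so.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Black box: any opaque model, trained as usual on the true labels.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc surrogate: a shallow tree trained on the black box's
# *predictions* (not the true labels), so its rules describe the
# black box rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
```

Because the surrogate is fit after the black box is built and touches it only through its predictions, the black box's accuracy is untouched, which is the "added value without compromising performance" trade-off described in the abstract above.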
Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications". Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.
Book chapters on the topic "Post-hoc interpretability"
Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations". In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 167–216. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.
Greenwell, Brandon M. "Peeking inside the 'black box': post-hoc interpretability". In Tree-Based Methods for Statistical Learning in R, 203–28. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003089032-6.
Santos, Flávio Arthur Oliveira, Cleber Zanchettin, José Vitor Santos Silva, Leonardo Nogueira Matos, and Paulo Novais. "A Hybrid Post Hoc Interpretability Approach for Deep Neural Networks". In Lecture Notes in Computer Science, 600–610. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86271-8_50.
Ann Jo, Ashly, and Ebin Deni Raj. "Post hoc Interpretability: Review on New Frontiers of Interpretable AI". In Lecture Notes in Networks and Systems, 261–76. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2_23.
Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. "Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability". In Machine Learning and Knowledge Discovery in Databases, 193–204. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_17.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring". In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Turbé, Hugues, Mina Bjelogrlic, Mehdi Namdar, Christophe Gaudet-Blavignac, Jamil Zaghir, Jean-Philippe Goldman, Belinda Lokaj, and Christian Lovis. "A Lightweight and Interpretable Model to Classify Bundle Branch Blocks from ECG Signals". In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220393.
Dumka, Ankur, Vaibhav Chaudhari, Anil Kumar Bisht, Ruchira Rawat, and Arnav Pandey. "Methods, Techniques, and Application of Explainable Artificial Intelligence". In Advances in Environmental Engineering and Green Technologies, 337–54. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2351-9.ch017.
Li, Yaoman, and Irwin King. "Neural Architecture Search for Explainable Networks". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230423.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Post-hoc interpretability"
Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard und Marcin Detyniecki. „The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations“. In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/388.
Der volle Inhalt der QuelleVieira, Carla Piazzon, und Luciano Antonio Digiampietri. „Machine Learning post-hoc interpretability: a systematic mapping study“. In SBSI: XVIII Brazilian Symposium on Information Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3535511.3535512.
Der volle Inhalt der QuelleAttanasio, Giuseppe, Debora Nozza, Eliana Pastor und Dirk Hovy. „Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection“. In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlppower-1.11.
Der volle Inhalt der QuelleSujana, D. Swainson, und D. Peter Augustine. „Explaining Autism Diagnosis Model Through Local Interpretability Techniques – A Post-hoc Approach“. In 2023 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI). IEEE, 2023. http://dx.doi.org/10.1109/icdsaai59313.2023.10452575.
Der volle Inhalt der QuelleGkoumas, Dimitris, Qiuchi Li, Yijun Yu und Dawei Song. „An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis“. In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/239.