Academic literature on the topic 'Counterfactual Explanation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Counterfactual Explanation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Counterfactual Explanation"
VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics." Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626729.
Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, and Lambert Mathias. "Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9837–45. http://dx.doi.org/10.1609/aaai.v37i8.26174.
Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations." Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.
Si, Michelle, and Jian Pei. "Counterfactual Explanation of Shapley Value in Data Coalitions." Proceedings of the VLDB Endowment 17, no. 11 (July 2024): 3332–45. http://dx.doi.org/10.14778/3681954.3682004.
Baron, Sam. "Counterfactual Scheming." Mind 129, no. 514 (April 1, 2019): 535–62. http://dx.doi.org/10.1093/mind/fzz008.
Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics." Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.
Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.
Dohrn, Daniel. "Counterfactual Narrative Explanation." Journal of Aesthetics and Art Criticism 67, no. 1 (February 2009): 37–47. http://dx.doi.org/10.1111/j.1540-6245.2008.01333.x.
Leofante, Francesco, and Nico Potyka. "Promoting Counterfactual Robustness through Diversity." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21322–30. http://dx.doi.org/10.1609/aaai.v38i19.30127.
He, Ming, Boyang An, Jiwen Wang, and Hao Wen. "CETD: Counterfactual Explanations by Considering Temporal Dependencies in Sequential Recommendation." Applied Sciences 13, no. 20 (October 11, 2023): 11176. http://dx.doi.org/10.3390/app132011176.
Dissertations / Theses on the topic "Counterfactual Explanation"
Broadbent, Alex. "A reverse counterfactual analysis of causation." Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/226170.
Full textJeanneret, Sanmiguel Guillaume. "Towards explainable and interpretable deep neural networks." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.
Full textDeep neural architectures have demonstrated outstanding results in a variety of computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of Explanable AI has emerged to understand what these models are learning as well as to uncover their sources of error. In this thesis, we explore the world of explainable algorithms to uncover the biases and variables used by these parametric models in the context of image classification. To this end, we divide this thesis into four parts. The first three chapters proposes several methods to generate counterfactual explanations. In the first chapter, we proposed to incorporate diffusion models to generate these explanations. Next, we link the research areas of adversarial attacks and counterfactuals. The next chapter proposes a new pipeline to generate counterfactuals in a fully black-box mode, \ie, using only the input and the prediction without accessing the model. The final part of this thesis is related to the creation of interpretable by-design methods. More specifically, we investigate how to extend vision transformers into interpretable architectures. Our proposed methods have shown promising results and have made a step forward in the knowledge frontier of current XAI literature
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, aiming to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant including knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality and knowledge-compatibility constraints is also studied, and we propose using Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
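The general idea of adding a knowledge-compatibility criterion to a counterfactual objective can be sketched as a penalized search. This is a deliberately simplified illustration of the principle, not the KICE algorithm itself; the toy model, the penalty, and all parameter values are invented:

```python
import random

def knowledge_aware_cf(predict, x, target, knowledge_penalty, lam=1.0,
                       n_samples=3000, step=0.5, seed=0):
    """Among sampled candidates that reach the target class, keep the one
    minimizing distance-to-x plus a penalty for violating user knowledge."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_samples):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        if predict(cand) != target:
            continue
        dist = sum(abs(a - b) for a, b in zip(cand, x))
        cost = dist + lam * knowledge_penalty(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Toy model: class 1 iff feature 0 exceeds 1.  User knowledge: feature 1
# should not move (say, it encodes an immutable attribute), expressed as
# a penalty on its displacement.
x0 = [0.7, 0.3]
predict = lambda v: int(v[0] > 1.0)
immutable = lambda v: abs(v[1] - x0[1])
cf = knowledge_aware_cf(predict, x0, target=1,
                        knowledge_penalty=immutable, lam=10.0)
assert predict(cf) == 1     # counterfactual reaches the target class
```

Raising `lam` trades proximity for knowledge compatibility, which is the aggregation question the thesis studies with more principled operators than a weighted sum.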
Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.
Decision support systems based on combinatorial optimization find application in various professional domains. However, the decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance to accept system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue in the case of the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions. First, we propose a framework that models the process of explaining solutions to the end-users of a WSRP-solving system while addressing a wide range of topics. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they essentially amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs. Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information and populating explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users. Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as a basis for a graphical-user-interface prototype which aims to demonstrate the practical applicability and potential benefits of our approach.
Kuo, Chia-Yu, and 郭家諭. "Explainable Risk Prediction System for Child Abuse Event by Individual Feature Attribution and Counterfactual Explanation." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yp2nr3.
National Chiao Tung University (國立交通大學)
Institute of Statistics (統計學研究所)
Academic year 107 (2018–2019, ROC calendar)
There is always a trade-off between performance and interpretability. Complex models, such as ensembles, can achieve outstanding prediction accuracy; however, they are not easy to interpret. Understanding why a model made a prediction helps us trust the black-box model and also helps users make decisions. This work uses techniques from explainable machine learning to develop an appropriate model for empirical data with high predictive accuracy and good interpretability. In this study, we use data provided by the Taipei City Center for Prevention of Domestic Violence and Sexual Assault (臺北市家庭暴力暨性侵害防治中心) to develop a risk prediction model that predicts the probability of a recurrence of violence in the same case before the case is resolved. This prediction model can also provide individual feature explanations and counterfactual explanations to help social workers conduct interventions for violence prevention.
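The pairing of individual feature attribution with counterfactual explanation described in this abstract can be illustrated on a deliberately simple linear risk score. The features, weights, and threshold below are invented for illustration only and bear no relation to the thesis's actual model or data:

```python
# Hypothetical linear risk model: the score is a weighted sum of case
# features, and the case is flagged when the score crosses a threshold.
weights = {"prior_reports": 0.6, "severity": 0.3, "support": -0.4}
threshold = 1.0

def risk_score(case):
    return sum(weights[f] * v for f, v in case.items())

def feature_attribution(case):
    """Per-feature contribution to the score (weight times value)."""
    return {f: weights[f] * v for f, v in case.items()}

def counterfactual(case, feature):
    """Smallest change to one feature that moves the score down to the
    decision boundary, i.e. 'what would need to differ to not be flagged'."""
    others = sum(weights[f] * v for f, v in case.items() if f != feature)
    needed = (threshold - others) / weights[feature]
    return {**case, feature: needed}

case = {"prior_reports": 2.0, "severity": 1.0, "support": 0.5}
assert risk_score(case) > threshold            # flagged as high risk
attribution = feature_attribution(case)        # why: per-feature contributions
cf = counterfactual(case, "support")           # what level of support flips it
assert risk_score(cf) <= threshold + 1e-9
```

In a linear model the attribution and the counterfactual are trivially readable from the weights; the point of the thesis is obtaining both kinds of explanation for a genuinely complex black-box predictor.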
Books on the topic "Counterfactual Explanation"
Reutlinger, Alexander. Extending the Counterfactual Theory of Explanation. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0005.
Brady, Henry E. Causation and Explanation in Social Science. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0010.
Gerstenberg, Tobias, and Joshua B. Tenenbaum. Intuitive Theories. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.28.
McGregor, Rafe. A Criminology Of Narrative Fiction. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529208054.001.0001.
Kutach, Douglas. The Asymmetry of Influence. Edited by Craig Callender. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199298204.003.0009.
Silberstein, Michael, W. M. Stuckey, and Timothy McDevitt. Relational Blockworld and Quantum Mechanics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198807087.003.0005.
St John, Taylor. Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198789918.003.0001.
Stock, Kathleen. Fiction, Belief, and ‘Imaginative Resistance’. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198798347.003.0005.
Healey, Richard. Causation and Locality. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198714057.003.0010.
Berezin, Mabel. Events as Templates of Possibility: An Analytic Typology of Political Facts. Edited by Jeffrey C. Alexander, Ronald N. Jacobs, and Philip Smith. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780195377767.013.23.
Book chapters on the topic "Counterfactual Explanation"
Virmajoki, Veli. "A Counterfactual Account of Historiographical Explanation." In Causal Explanation in Historiography, 67–95. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45929-0_5.
Willig, Moritz, Matej Zečević, and Kristian Kersting. "“Do Not Disturb My Circles!” Identifying the Type of Counterfactual at Hand (Short Paper)." In Robust Argumentation Machines, 266–75. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63536-6_16.
Gerber, Doris. "Counterfactual Causality and Historical Explanations." In Explanation in Action Theory and Historiography, 167–78. Routledge Studies in Contemporary Philosophy 121. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429506048-9.
Cheng, He, Depeng Xu, Shuhan Yuan, and Xintao Wu. "Achieving Counterfactual Explanation for Sequence Anomaly Detection." In Lecture Notes in Computer Science, 19–35. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70371-3_2.
Singh, Vandita, Kristijonas Cyras, and Rafia Inam. "Explainability Metrics and Properties for Counterfactual Explanation Methods." In Explainable and Transparent AI and Multi-Agent Systems, 155–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15565-9_10.
Stepin, Ilia, Alejandro Catala, Martin Pereira-Fariña, and Jose M. Alonso. "Factual and Counterfactual Explanation of Fuzzy Information Granules." In Studies in Computational Intelligence, 153–85. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-64949-4_6.
Li, Peiyu, Omar Bahri, Soukaïna Filali Boubrahimi, and Shah Muhammad Hamdi. "Attention-Based Counterfactual Explanation for Multivariate Time Series." In Big Data Analytics and Knowledge Discovery, 287–93. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39831-5_26.
Ji, Jiemin, Donghai Guan, Weiwei Yuan, and Yuwen Deng. "Unified Counterfactual Explanation Framework for Black-Box Models." In PRICAI 2023: Trends in Artificial Intelligence, 422–33. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7025-4_36.
Burkart, Nadia, Maximilian Franz, and Marco F. Huber. "Explanation Framework for Intrusion Detection." In Machine Learning for Cyber Physical Systems, 83–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_9.
Peng, Bo, Siwei Lyu, Wei Wang, and Jing Dong. "Counterfactual Image Enhancement for Explanation of Face Swap Deepfakes." In Pattern Recognition and Computer Vision, 492–508. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_40.
Conference papers on the topic "Counterfactual Explanation"
Liu, Diwen, and Xiaodong Yue. "Counterfactual-Driven Model Explanation Evaluation Method." In 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), 471–75. IEEE, 2024. http://dx.doi.org/10.1109/cisce62493.2024.10653060.
Fan, Zhengyang, Wanru Li, Kathryn Blackmond Laskey, and Kuo-Chu Chang. "Towards Personalized Anti-Phishing: Counterfactual Explanation Approach - Extended Abstract." In 2024 IEEE 11th International Conference on Data Science and Advanced Analytics (DSAA), 1–2. IEEE, 2024. http://dx.doi.org/10.1109/dsaa61799.2024.10722801.
Yin, Xiang, Nico Potyka, and Francesca Toni. "CE-QArg: Counterfactual Explanations for Quantitative Bipolar Argumentation Frameworks." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 697–707. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/66.
Alfano, Gianvincenzo, Sergio Greco, Francesco Parisi, and Irina Trubitsyna. "Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 14–26. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/2.
Theobald, Claire, Frédéric Pennerath, Brieuc Conan-Guez, Miguel Couceiro, and Amedeo Napoli. "Clarity: a Deep Ensemble for Visual Counterfactual Explanations." In ESANN 2024, 655–60. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2024. http://dx.doi.org/10.14428/esann/2024.es2024-188.
Molhoek, M., and J. Van Laanen. "Secure Counterfactual Explanations in a Two-party Setting." In 2024 27th International Conference on Information Fusion (FUSION), 1–10. IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706413.
Leofante, Francesco, Elena Botoeva, and Vineet Rajani. "Counterfactual Explanations and Model Multiplicity: a Relational Verification View." In 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/78.
Yin, Xudong, and Yao Yang. "CMACE: CMAES-based Counterfactual Explanations for Black-box Models." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/60.
Bhan, Milan, Jean-noel Vittaut, Nicolas Chesneau, and Marie-jeanne Lesot. "Enhancing textual counterfactual explanation intelligibility through Counterfactual Feature Importance." In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.trustnlp-1.19.
Aryal, Saugat, and Mark T. Keane. "Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/732.
Reports on the topic "Counterfactual Explanation"
Gandelman, Néstor. A Comparison of Saving Rates: Micro Evidence from Seventeen Latin American and Caribbean Countries. Inter-American Development Bank, July 2015. http://dx.doi.org/10.18235/0011701.