Table of Contents
Selection of scholarly literature on the topic "Counterfactual Explanation"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Counterfactual Explanation".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.
Journal articles on the topic "Counterfactual Explanation"
VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics." Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626729.
Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, and Lambert Mathias. "Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9837–45. http://dx.doi.org/10.1609/aaai.v37i8.26174.
Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations." Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.
Si, Michelle, and Jian Pei. "Counterfactual Explanation of Shapley Value in Data Coalitions." Proceedings of the VLDB Endowment 17, no. 11 (July 2024): 3332–45. http://dx.doi.org/10.14778/3681954.3682004.
Baron, Sam. "Counterfactual Scheming." Mind 129, no. 514 (April 1, 2019): 535–62. http://dx.doi.org/10.1093/mind/fzz008.
Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics." Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.
Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.
Dohrn, Daniel. "Counterfactual Narrative Explanation." Journal of Aesthetics and Art Criticism 67, no. 1 (February 2009): 37–47. http://dx.doi.org/10.1111/j.1540-6245.2008.01333.x.
Leofante, Francesco, and Nico Potyka. "Promoting Counterfactual Robustness through Diversity." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21322–30. http://dx.doi.org/10.1609/aaai.v38i19.30127.
He, Ming, Boyang An, Jiwen Wang, and Hao Wen. "CETD: Counterfactual Explanations by Considering Temporal Dependencies in Sequential Recommendation." Applied Sciences 13, no. 20 (October 11, 2023): 11176. http://dx.doi.org/10.3390/app132011176.
Dissertations on the topic "Counterfactual Explanation"
Broadbent, Alex. „A reverse counterfactual analysis of causation“. Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/226170.
Jeanneret, Sanmiguel Guillaume. "Towards explainable and interpretable deep neural networks." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.
Der volle Inhalt der QuelleDeep neural architectures have demonstrated outstanding results in a variety of computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of Explanable AI has emerged to understand what these models are learning as well as to uncover their sources of error. In this thesis, we explore the world of explainable algorithms to uncover the biases and variables used by these parametric models in the context of image classification. To this end, we divide this thesis into four parts. The first three chapters proposes several methods to generate counterfactual explanations. In the first chapter, we proposed to incorporate diffusion models to generate these explanations. Next, we link the research areas of adversarial attacks and counterfactuals. The next chapter proposes a new pipeline to generate counterfactuals in a fully black-box mode, \ie, using only the input and the prediction without accessing the model. The final part of this thesis is related to the creation of interpretable by-design methods. More specifically, we investigate how to extend vision transformers into interpretable architectures. Our proposed methods have shown promising results and have made a step forward in the knowledge frontier of current XAI literature
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction for a specific data point made by a trained decision model. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant including knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints and knowledge compatibility constraints is also studied, and we propose to use Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
Lerouge, Mathieu. „Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.
Decision support systems based on combinatorial optimization find application in various professional domains. However, decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance in accepting system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue in the case of the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions. First, we propose a framework that models the process for explaining solutions to the end-users of a WSRP-solving system while allowing a wide range of topics to be addressed. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they basically amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs. Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information and populating explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users. Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts.
This system serves as a basis for a graphical-user-interface prototype which aims to demonstrate the practical applicability and potential benefits of our approach.
Kuo, Chia-Yu, and 郭家諭. "Explainable Risk Prediction System for Child Abuse Event by Individual Feature Attribution and Counterfactual Explanation." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yp2nr3.
國立交通大學 (National Chiao Tung University)
統計學研究所 (Institute of Statistics)
107 (ROC academic year)
There is always a trade-off between performance and interpretability. Complex models, such as ensemble learning methods, can achieve outstanding prediction accuracy; however, they are not easy to interpret. Understanding why a model made a prediction helps us trust the black-box model and also helps users make decisions. This work uses techniques from explainable machine learning to develop a model for empirical data with high predictive accuracy and good interpretability. In this study, we use data provided by the Taipei City Center for Prevention of Domestic Violence and Sexual Assault (臺北市家庭暴力暨性侵害防治中心) to develop a risk prediction model that predicts the probability of recurring violence in the same case before the case is resolved. This prediction model can also provide individual feature explanations and counterfactual explanations to help social workers conduct interventions for violence prevention.
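A counterfactual explanation of the kind this abstract describes names a small change to a case's features that would flip the model's prediction. A minimal sketch of that idea follows; the classifier, feature values, and function names here are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

class ToyRiskModel:
    """Hypothetical stand-in for a trained risk classifier.

    Predicts 1 ("high risk") when the weighted feature sum exceeds a
    threshold. All weights and numbers are illustrative only.
    """

    def __init__(self, weights, threshold):
        self.w = np.asarray(weights, dtype=float)
        self.b = float(threshold)

    def predict(self, x):
        return int(np.dot(self.w, x) > self.b)

    def margin(self, x):
        # Signed score: positive means the model leans toward class 1.
        return float(np.dot(self.w, x) - self.b)


def counterfactual(model, x, step=0.1, max_iter=1000):
    """Greedy search for a nearby input that flips the model's prediction.

    Each iteration tries changing one feature by +/- step and keeps the
    change that moves the decision margin most toward the opposite class.
    Returns the counterfactual input, or None if the search stalls.
    """
    cf = np.asarray(x, dtype=float).copy()
    target = 1 - model.predict(cf)
    for _ in range(max_iter):
        if model.predict(cf) == target:
            return cf  # prediction flipped: counterfactual found
        direction = 1.0 if target == 1 else -1.0  # push margin up or down
        best_gain, best_move = 0.0, None
        for i in range(len(cf)):
            for d in (step, -step):
                cand = cf.copy()
                cand[i] += d
                gain = direction * (model.margin(cand) - model.margin(cf))
                if gain > best_gain:
                    best_gain, best_move = gain, (i, d)
        if best_move is None:
            return None  # no single-feature step improves the margin
        cf[best_move[0]] += best_move[1]
    return None
```

Reading off which features changed, and by how much, yields the explanation ("had this feature been 0.7 higher, the predicted class would differ"). Practical counterfactual methods add sparsity, plausibility, and actionability constraints on top of this basic search.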
Books on the topic "Counterfactual Explanation"
Reutlinger, Alexander. Extending the Counterfactual Theory of Explanation. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0005.
Brady, Henry E. Causation and Explanation in Social Science. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0010.
Gerstenberg, Tobias, and Joshua B. Tenenbaum. Intuitive Theories. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.28.
McGregor, Rafe. A Criminology of Narrative Fiction. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529208054.001.0001.
Kutach, Douglas. The Asymmetry of Influence. Edited by Craig Callender. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199298204.003.0009.
Silberstein, Michael, W. M. Stuckey, and Timothy McDevitt. Relational Blockworld and Quantum Mechanics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198807087.003.0005.
St John, Taylor. Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198789918.003.0001.
Stock, Kathleen. Fiction, Belief, and 'Imaginative Resistance'. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198798347.003.0005.
Healey, Richard. Causation and Locality. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198714057.003.0010.
Berezin, Mabel. Events as Templates of Possibility: An Analytic Typology of Political Facts. Edited by Jeffrey C. Alexander, Ronald N. Jacobs, and Philip Smith. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780195377767.013.23.
Book chapters on the topic "Counterfactual Explanation"
Virmajoki, Veli. "A Counterfactual Account of Historiographical Explanation." In Causal Explanation in Historiography, 67–95. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45929-0_5.
Willig, Moritz, Matej Zečević, and Kristian Kersting. "'Do Not Disturb My Circles!' Identifying the Type of Counterfactual at Hand (Short Paper)." In Robust Argumentation Machines, 266–75. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63536-6_16.
Gerber, Doris. "Counterfactual Causality and Historical Explanations." In Explanation in Action Theory and Historiography, 167–78. Routledge Studies in Contemporary Philosophy 121. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429506048-9.
Cheng, He, Depeng Xu, Shuhan Yuan, and Xintao Wu. "Achieving Counterfactual Explanation for Sequence Anomaly Detection." In Lecture Notes in Computer Science, 19–35. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70371-3_2.
Singh, Vandita, Kristijonas Cyras, and Rafia Inam. "Explainability Metrics and Properties for Counterfactual Explanation Methods." In Explainable and Transparent AI and Multi-Agent Systems, 155–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15565-9_10.
Stepin, Ilia, Alejandro Catala, Martin Pereira-Fariña, and Jose M. Alonso. "Factual and Counterfactual Explanation of Fuzzy Information Granules." In Studies in Computational Intelligence, 153–85. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-64949-4_6.
Li, Peiyu, Omar Bahri, Soukaïna Filali Boubrahimi, and Shah Muhammad Hamdi. "Attention-Based Counterfactual Explanation for Multivariate Time Series." In Big Data Analytics and Knowledge Discovery, 287–93. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39831-5_26.
Ji, Jiemin, Donghai Guan, Weiwei Yuan, and Yuwen Deng. "Unified Counterfactual Explanation Framework for Black-Box Models." In PRICAI 2023: Trends in Artificial Intelligence, 422–33. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7025-4_36.
Burkart, Nadia, Maximilian Franz, and Marco F. Huber. "Explanation Framework for Intrusion Detection." In Machine Learning for Cyber Physical Systems, 83–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_9.
Peng, Bo, Siwei Lyu, Wei Wang, and Jing Dong. "Counterfactual Image Enhancement for Explanation of Face Swap Deepfakes." In Pattern Recognition and Computer Vision, 492–508. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_40.
Conference papers on the topic "Counterfactual Explanation"
Liu, Diwen, and Xiaodong Yue. "Counterfactual-Driven Model Explanation Evaluation Method." In 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), 471–75. IEEE, 2024. http://dx.doi.org/10.1109/cisce62493.2024.10653060.
Fan, Zhengyang, Wanru Li, Kathryn Blackmond Laskey, and Kuo-Chu Chang. "Towards Personalized Anti-Phishing: Counterfactual Explanation Approach - Extended Abstract." In 2024 IEEE 11th International Conference on Data Science and Advanced Analytics (DSAA), 1–2. IEEE, 2024. http://dx.doi.org/10.1109/dsaa61799.2024.10722801.
Yin, Xiang, Nico Potyka, and Francesca Toni. "CE-QArg: Counterfactual Explanations for Quantitative Bipolar Argumentation Frameworks." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 697–707. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/66.
Alfano, Gianvincenzo, Sergio Greco, Francesco Parisi, and Irina Trubitsyna. "Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 14–26. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/2.
Theobald, Claire, Frédéric Pennerath, Brieuc Conan-Guez, Miguel Couceiro, and Amedeo Napoli. "Clarity: A Deep Ensemble for Visual Counterfactual Explanations." In ESANN 2024, 655–60. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2024. http://dx.doi.org/10.14428/esann/2024.es2024-188.
Molhoek, M., and J. Van Laanen. "Secure Counterfactual Explanations in a Two-party Setting." In 2024 27th International Conference on Information Fusion (FUSION), 1–10. IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706413.
Leofante, Francesco, Elena Botoeva, and Vineet Rajani. "Counterfactual Explanations and Model Multiplicity: A Relational Verification View." In 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/78.
Yin, Xudong, and Yao Yang. "CMACE: CMAES-based Counterfactual Explanations for Black-box Models." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/60.
Bhan, Milan, Jean-noel Vittaut, Nicolas Chesneau, and Marie-jeanne Lesot. "Enhancing Textual Counterfactual Explanation Intelligibility through Counterfactual Feature Importance." In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.trustnlp-1.19.
Aryal, Saugat, and Mark T. Keane. "Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/732.
Organization reports on the topic "Counterfactual Explanation"
Gandelman, Néstor. A Comparison of Saving Rates: Micro Evidence from Seventeen Latin American and Caribbean Countries. Inter-American Development Bank, July 2015. http://dx.doi.org/10.18235/0011701.