Selected scientific literature on the topic "Counterfactual Explanation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Counterfactual Explanation".
Journal articles on the topic "Counterfactual Explanation"
VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics." Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626729.
Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, and Lambert Mathias. "Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9837–45. http://dx.doi.org/10.1609/aaai.v37i8.26174.
Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations." Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.
Si, Michelle, and Jian Pei. "Counterfactual Explanation of Shapley Value in Data Coalitions." Proceedings of the VLDB Endowment 17, no. 11 (July 2024): 3332–45. http://dx.doi.org/10.14778/3681954.3682004.
Baron, Sam. "Counterfactual Scheming." Mind 129, no. 514 (April 1, 2019): 535–62. http://dx.doi.org/10.1093/mind/fzz008.
Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics." Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.
Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.
Dohrn, Daniel. "Counterfactual Narrative Explanation." Journal of Aesthetics and Art Criticism 67, no. 1 (February 2009): 37–47. http://dx.doi.org/10.1111/j.1540-6245.2008.01333.x.
Leofante, Francesco, and Nico Potyka. "Promoting Counterfactual Robustness through Diversity." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21322–30. http://dx.doi.org/10.1609/aaai.v38i19.30127.
He, Ming, Boyang An, Jiwen Wang, and Hao Wen. "CETD: Counterfactual Explanations by Considering Temporal Dependencies in Sequential Recommendation." Applied Sciences 13, no. 20 (October 11, 2023): 11176. http://dx.doi.org/10.3390/app132011176.
Theses / dissertations on the topic "Counterfactual Explanation"
Broadbent, Alex. "A reverse counterfactual analysis of causation." Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/226170.
Jeanneret Sanmiguel, Guillaume. "Towards explainable and interpretable deep neural networks." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.
Texto completo da fonteDeep neural architectures have demonstrated outstanding results in a variety of computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of Explanable AI has emerged to understand what these models are learning as well as to uncover their sources of error. In this thesis, we explore the world of explainable algorithms to uncover the biases and variables used by these parametric models in the context of image classification. To this end, we divide this thesis into four parts. The first three chapters proposes several methods to generate counterfactual explanations. In the first chapter, we proposed to incorporate diffusion models to generate these explanations. Next, we link the research areas of adversarial attacks and counterfactuals. The next chapter proposes a new pipeline to generate counterfactuals in a fully black-box mode, \ie, using only the input and the prediction without accessing the model. The final part of this thesis is related to the creation of interpretable by-design methods. More specifically, we investigate how to extend vision transformers into interpretable architectures. Our proposed methods have shown promising results and have made a step forward in the knowledge frontier of current XAI literature
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, aiming to improve the understandability of explanations by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, a variant including knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints with knowledge-compatibility constraints is also studied, and we propose using Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
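The general idea of a knowledge-aware counterfactual objective can be sketched as a standard cost (validity plus proximity), extended with a user-knowledge compatibility term. This is in the spirit of, but not identical to, the formalism above; the toy model, weights, candidate set, and "immutable feature" knowledge are all illustrative assumptions:

```python
# Toy binary classifier standing in for the trained decision model.
def predict(x):
    return 1 if x[0] - x[1] > 0.5 else 0

def cf_cost(x, cand, target, immutable, lam=0.5, mu=10.0):
    validity = 0.0 if predict(cand) == target else 1.0    # does the label flip?
    proximity = sum(abs(a - b) for a, b in zip(x, cand))  # stay close to x
    # Knowledge-compatibility criterion: changing features the user marked
    # immutable is heavily penalized.
    knowledge = sum(abs(x[i] - cand[i]) for i in immutable)
    return validity + lam * proximity + mu * knowledge

def best_counterfactual(x, target, immutable, candidates):
    return min(candidates, key=lambda c: cf_cost(x, c, target, immutable))

x = [0.2, 0.0]                     # classified 0; the user wants class 1
cands = [[0.8, 0.0], [0.2, -0.5]]  # both candidates flip the label
# Feature 1 is declared immutable, so the search prefers changing feature 0.
print(best_counterfactual(x, 1, {1}, cands))  # → [0.8, 0.0]
```

Here the terms are combined by a weighted sum for simplicity; the thesis studies more principled aggregation operators for exactly this step.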
Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.
Decision support systems based on combinatorial optimization find application in various professional domains. However, the decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance to accept system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue for the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions. First, we propose a framework that models the process for explaining solutions to the end-users of a WSRP-solving system while allowing a wide range of topics to be addressed. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they essentially amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs. Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information and populating explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users. Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as a basis for a graphical-user-interface prototype which aims at demonstrating the practical applicability and potential benefits of our approach.
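The core reduction in this abstract, "does a feasible, better solution exist in a given neighborhood?", can be sketched on a toy assignment problem. The costs, capacities, and single-reassignment neighborhood below are illustrative assumptions, far simpler than a real WSRP:

```python
def cost(assign, c):
    """Total cost of an assignment {task: worker}."""
    return sum(c[(t, w)] for t, w in assign.items())

def feasible(assign, capacity):
    """Each worker may carry at most capacity[w] tasks."""
    load = {}
    for w in assign.values():
        load[w] = load.get(w, 0) + 1
    return all(n <= capacity[w] for w, n in load.items())

def better_neighbor(assign, workers, c, capacity):
    """Contrastive check: is there a feasible, strictly cheaper solution
    obtained by reassigning a single task? Returns one if it exists."""
    base = cost(assign, c)
    for t in assign:
        for w in workers:
            if w == assign[t]:
                continue
            cand = dict(assign)
            cand[t] = w
            if feasible(cand, capacity) and cost(cand, c) < base:
                return cand
    return None  # current solution is locally optimal in this neighborhood

c = {(1, "a"): 2, (1, "b"): 1, (2, "a"): 1, (2, "b"): 3}
capacity = {"a": 2, "b": 2}
print(better_neighbor({1: "a", 2: "b"}, ["a", "b"], c, capacity))
```

A "why not X?" answer then falls out of the check itself: either a better neighbor is exhibited, or every candidate in the neighborhood is infeasible or more expensive, and the violated constraint or cost gap can be reported in the explanation text.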
Kuo, Chia-Yu (郭家諭). "Explainable Risk Prediction System for Child Abuse Event by Individual Feature Attribution and Counterfactual Explanation." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yp2nr3.
National Chiao Tung University (國立交通大學), Institute of Statistics (統計學研究所), 2019 (ROC year 107).
There is always a trade-off between performance and interpretability. Complex models, such as ensembles, can achieve outstanding prediction accuracy; however, they are not easy to interpret. Understanding why a model made a prediction helps us trust the black-box model and helps users make decisions. This work applies techniques of explainable machine learning to develop a model with both high predictive accuracy and good interpretability for empirical data. In this study, we use data provided by the Taipei City Center for Prevention of Domestic Violence and Sexual Assault (臺北市家庭暴力暨性侵害防治中心) to develop a risk prediction model that predicts the probability of violence recurring in the same case before that case is resolved. The prediction model also provides individual feature explanations and counterfactual explanations to help social workers conduct interventions for violence prevention.
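The two explanation types this abstract pairs, individual feature attribution and a counterfactual, can be sketched together on a made-up linear risk score. The weights, features, and closed-form boundary projection below are illustrative assumptions, not the model trained on the Taipei data:

```python
W, B = (0.8, -0.5, 0.3), -0.6  # hypothetical weights for a linear risk score

def risk(x):
    """1 = high recurrence risk, 0 = low."""
    return 1 if sum(w * xi for w, xi in zip(W, x)) + B > 0 else 0

def feature_attribution(x):
    """Individual contribution of each feature to the linear score."""
    return [w * xi for w, xi in zip(W, x)]

def linear_counterfactual(x, margin=1e-3):
    """Closest input (Euclidean) just past the boundary w.x + b = 0;
    for a linear model this projection is available in closed form."""
    score = sum(w * xi for w, xi in zip(W, x)) + B
    shift = -(score + (margin if score > 0 else -margin)) / sum(w * w for w in W)
    return [xi + shift * w for xi, w in zip(x, W)]

x = (1.0, 0.2, 0.5)
print(risk(x))                   # 1: flagged as high risk
print(feature_attribution(x))    # which features push the score up or down
print(linear_counterfactual(x))  # a nearby case that would be flagged low risk
```

The attribution tells a caseworker which features drive the flag; the counterfactual tells them the smallest change that would remove it. Nonlinear models need search-based counterfactual methods instead of this closed-form projection.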
Books on the topic "Counterfactual Explanation"
Reutlinger, Alexander. Extending the Counterfactual Theory of Explanation. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0005.
Brady, Henry E. Causation and Explanation in Social Science. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0010.
Gerstenberg, Tobias, and Joshua B. Tenenbaum. Intuitive Theories. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.28.
McGregor, Rafe. A Criminology of Narrative Fiction. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529208054.001.0001.
Kutach, Douglas. The Asymmetry of Influence. Edited by Craig Callender. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199298204.003.0009.
Silberstein, Michael, W. M. Stuckey, and Timothy McDevitt. Relational Blockworld and Quantum Mechanics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198807087.003.0005.
St John, Taylor. Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198789918.003.0001.
Stock, Kathleen. Fiction, Belief, and ‘Imaginative Resistance’. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198798347.003.0005.
Healey, Richard. Causation and Locality. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198714057.003.0010.
Berezin, Mabel. Events as Templates of Possibility: An Analytic Typology of Political Facts. Edited by Jeffrey C. Alexander, Ronald N. Jacobs, and Philip Smith. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780195377767.013.23.
Book chapters on the topic "Counterfactual Explanation"
Virmajoki, Veli. "A Counterfactual Account of Historiographical Explanation." In Causal Explanation in Historiography, 67–95. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45929-0_5.
Willig, Moritz, Matej Zečević, and Kristian Kersting. "“Do Not Disturb My Circles!” Identifying the Type of Counterfactual at Hand (Short Paper)." In Robust Argumentation Machines, 266–75. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63536-6_16.
Gerber, Doris. "Counterfactual Causality and Historical Explanations." In Explanation in Action Theory and Historiography, 167–78. Routledge Studies in Contemporary Philosophy 121. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429506048-9.
Cheng, He, Depeng Xu, Shuhan Yuan, and Xintao Wu. "Achieving Counterfactual Explanation for Sequence Anomaly Detection." In Lecture Notes in Computer Science, 19–35. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70371-3_2.
Singh, Vandita, Kristijonas Cyras, and Rafia Inam. "Explainability Metrics and Properties for Counterfactual Explanation Methods." In Explainable and Transparent AI and Multi-Agent Systems, 155–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15565-9_10.
Stepin, Ilia, Alejandro Catala, Martin Pereira-Fariña, and Jose M. Alonso. "Factual and Counterfactual Explanation of Fuzzy Information Granules." In Studies in Computational Intelligence, 153–85. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-64949-4_6.
Li, Peiyu, Omar Bahri, Soukaïna Filali Boubrahimi, and Shah Muhammad Hamdi. "Attention-Based Counterfactual Explanation for Multivariate Time Series." In Big Data Analytics and Knowledge Discovery, 287–93. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39831-5_26.
Ji, Jiemin, Donghai Guan, Weiwei Yuan, and Yuwen Deng. "Unified Counterfactual Explanation Framework for Black-Box Models." In PRICAI 2023: Trends in Artificial Intelligence, 422–33. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7025-4_36.
Burkart, Nadia, Maximilian Franz, and Marco F. Huber. "Explanation Framework for Intrusion Detection." In Machine Learning for Cyber Physical Systems, 83–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_9.
Peng, Bo, Siwei Lyu, Wei Wang, and Jing Dong. "Counterfactual Image Enhancement for Explanation of Face Swap Deepfakes." In Pattern Recognition and Computer Vision, 492–508. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_40.
Conference papers on the topic "Counterfactual Explanation"
Liu, Diwen, and Xiaodong Yue. "Counterfactual-Driven Model Explanation Evaluation Method." In 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), 471–75. IEEE, 2024. http://dx.doi.org/10.1109/cisce62493.2024.10653060.
Fan, Zhengyang, Wanru Li, Kathryn Blackmond Laskey, and Kuo-Chu Chang. "Towards Personalized Anti-Phishing: Counterfactual Explanation Approach - Extended Abstract." In 2024 IEEE 11th International Conference on Data Science and Advanced Analytics (DSAA), 1–2. IEEE, 2024. http://dx.doi.org/10.1109/dsaa61799.2024.10722801.
Texto completo da fonteYin, Xiang, Nico Potyka e Francesca Toni. "CE-QArg: Counterfactual Explanations for Quantitative Bipolar Argumentation Frameworks". In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 697–707. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/66.
Texto completo da fonteAlfano, Gianvincenzo, Sergio Greco, Francesco Parisi e Irina Trubitsyna. "Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation". In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 14–26. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/2.
Theobald, Claire, Frédéric Pennerath, Brieuc Conan-Guez, Miguel Couceiro, and Amedeo Napoli. "Clarity: A Deep Ensemble for Visual Counterfactual Explanations." In ESANN 2024, 655–60. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2024. http://dx.doi.org/10.14428/esann/2024.es2024-188.
Molhoek, M., and J. Van Laanen. "Secure Counterfactual Explanations in a Two-party Setting." In 2024 27th International Conference on Information Fusion (FUSION), 1–10. IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706413.
Leofante, Francesco, Elena Botoeva, and Vineet Rajani. "Counterfactual Explanations and Model Multiplicity: A Relational Verification View." In 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/78.
Yin, Xudong, and Yao Yang. "CMACE: CMAES-based Counterfactual Explanations for Black-box Models." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/60.
Bhan, Milan, Jean-Noel Vittaut, Nicolas Chesneau, and Marie-Jeanne Lesot. "Enhancing Textual Counterfactual Explanation Intelligibility through Counterfactual Feature Importance." In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.trustnlp-1.19.
Aryal, Saugat, and Mark T. Keane. "Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/732.
Texto completo da fonteRelatórios de organizações sobre o assunto "Counterfactual Explanation"
Gandelman, Néstor. A Comparison of Saving Rates: Micro Evidence from Seventeen Latin American and Caribbean Countries. Inter-American Development Bank, July 2015. http://dx.doi.org/10.18235/0011701.