Academic literature on the topic 'Counterfactual Explanation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Counterfactual Explanation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Counterfactual Explanation"

1

VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics." Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626729.

Abstract:
Machine learning systems are deployed in domains such as hiring and healthcare, where undesired classifications can have serious ramifications for the user. Thus, there is a rising demand for explainable AI systems which provide actionable steps for lay users to obtain their desired outcome. To meet this need, we propose FACET, the first explanation analytics system which supports a user in interactively refining counterfactual explanations for decisions made by tree ensembles. As FACET's foundation, we design a novel type of counterfactual explanation called the counterfactual region. Unlike traditional counterfactuals, FACET's regions concisely describe portions of the feature space where the desired outcome is guaranteed, regardless of variations in exact feature values. This property, which we coin explanation robustness, is critical for the practical application of counterfactuals. We develop a rich set of novel explanation analytics queries which empower users to identify personalized counterfactual regions that account for their real-world circumstances. To process these queries, we develop a compact high-dimensional counterfactual region index along with index-aware query processing strategies for near real-time explanation analytics. We evaluate FACET against state-of-the-art explanation techniques on eight public benchmark datasets and demonstrate that FACET generates actionable explanations of similar quality in an order of magnitude less time while providing critical robustness guarantees. Finally, we conduct a preliminary user study which suggests that FACET's regions lead to higher user understanding than traditional counterfactuals.
2

Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, and Lambert Mathias. "Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9837–45. http://dx.doi.org/10.1609/aaai.v37i8.26174.

Abstract:
Evaluating an explanation's faithfulness is desired for many reasons, such as trust, interpretability and diagnosing the sources of a model's errors. In this work, which focuses on the NLI task, we introduce the methodology of Faithfulness-through-Counterfactuals, which first generates a counterfactual hypothesis based on the logical predicates expressed in the explanation, and then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic (i.e., if the new formula is logically satisfiable). In contrast to existing approaches, this does not require any explanations for training a separate verification model. We first validate the efficacy of automatic counterfactual hypothesis generation, leveraging the few-shot priming paradigm. Next, we show that our proposed metric distinguishes between human-model agreement and disagreement on new counterfactual input. In addition, we conduct a sensitivity analysis to validate that our metric is sensitive to unfaithful explanations.
3

Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations." Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.

Abstract:
Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation provide valid and complete explanations, but they have an epistemological problem: they are often too complex for humans to understand and too expensive to compute even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and can compute. We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations for counterfactuals to examine the partiality of counterfactual explanations that can hide biases; we define fair and adequate explanations in such a setting. We provide formal results about the algorithmic complexity of fair and adequate explanations. We then detail two sophisticated counterfactual models, one based on causal graphs, and one based on transport theories. We show transport based models have several theoretical advantages over the competition as explanation frameworks for machine learning algorithms.
4

Si, Michelle, and Jian Pei. "Counterfactual Explanation of Shapley Value in Data Coalitions." Proceedings of the VLDB Endowment 17, no. 11 (July 2024): 3332–45. http://dx.doi.org/10.14778/3681954.3682004.

Abstract:
The Shapley value is widely used for data valuation in data markets. However, explaining the Shapley value of an owner in a data coalition is an unexplored and challenging task. To tackle this, we formulate the problem of finding the counterfactual explanation of Shapley value in data coalitions. Essentially, given two data owners A and B such that A has a higher Shapley value than B, a counterfactual explanation is a smallest subset of data entries in A such that transferring the subset from A to B makes the Shapley value of A less than that of B. We show that counterfactual explanations always exist, but finding an exact counterfactual explanation is NP-hard. Using Monte Carlo estimation to approximate counterfactual explanations directly according to the definition is still very costly, since we have to estimate the Shapley values of owners A and B after each possible subset shift. We develop a series of heuristic techniques to speed up computation by estimating differential Shapley values, computing the power of singular data entries, and shifting subsets greedily, culminating in the SV-Exp algorithm. Our experimental results on real datasets clearly demonstrate the efficiency of our method and the effectiveness of counterfactuals in interpreting the Shapley value of an owner.
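To make the problem definition in this abstract concrete, here is a minimal brute-force sketch (not the paper's SV-Exp algorithm): it computes exact Shapley values for a toy coalition and enumerates subsets of owner A's entries until transferring one of them to B flips their ranking. The coalition value function, the owners' data, and all helper names are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(owners, value):
    """Exact Shapley values for a dict owner -> list of data entries, given value(entries)."""
    names = list(owners)
    n = len(names)
    sv = {name: 0.0 for name in names}
    for name in names:
        others = [o for o in names if o != name]
        for r in range(n):
            for coal in combinations(others, r):
                entries = [e for o in coal for e in owners[o]]
                marginal = value(entries + owners[name]) - value(entries)
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                sv[name] += weight * marginal
    return sv

def counterfactual_transfer(owners, value, src, dst):
    """Smallest subset of src's entries whose transfer to dst flips their Shapley ranking."""
    for size in range(1, len(owners[src]) + 1):
        for subset in combinations(owners[src], size):
            moved = dict(owners)
            moved[src] = [e for e in owners[src] if e not in subset]
            moved[dst] = owners[dst] + list(subset)
            sv = shapley_values(moved, value)
            if sv[src] < sv[dst]:
                return subset
    return None

# Toy coalition: the value of a coalition is the number of distinct entries it contributes.
owners = {"A": [1, 2, 3, 4], "B": [3, 5], "C": [6]}
value = lambda entries: len(set(entries))
print(shapley_values(owners, value))
print(counterfactual_transfer(owners, value, "A", "B"))
```

This enumeration is exponential in the number of entries, which is exactly why the abstract reports NP-hardness and motivates the heuristic SV-Exp algorithm.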
5

Baron, Sam. "Counterfactual Scheming." Mind 129, no. 514 (April 1, 2019): 535–62. http://dx.doi.org/10.1093/mind/fzz008.

Abstract:
Mathematics appears to play a genuine explanatory role in science. But how do mathematical explanations work? Recently, a counterfactual approach to mathematical explanation has been suggested. I argue that such a view fails to differentiate the explanatory uses of mathematics within science from the non-explanatory uses. I go on to offer a solution to this problem by combining elements of the counterfactual theory of explanation with elements of a unification theory of explanation. The result is a theory according to which a counterfactual is explanatory when it is an instance of a generalized counterfactual scheme.
6

Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics." Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.

Abstract:
Our goal in this paper is to extend counterfactual accounts of scientific explanation to mathematics. Our focus, in particular, is on intra-mathematical explanations: explanations of one mathematical fact in terms of another. We offer a basic counterfactual theory of intra-mathematical explanations, before modelling the explanatory structure of a test case using counterfactual machinery. We finish by considering the application of counterpossibles to mathematical explanation, and explore a second test case along these lines.
7

Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.

Abstract:
Instance-based model-agnostic feature importance explanations (LIME, SHAP, L2X) are a popular form of algorithmic transparency. These methods generally return either a weighting or subset of input features as an explanation for the classification of an instance. An alternative literature argues instead that counterfactual instances, which alter the black-box model's classification, provide a more actionable form of explanation. We present Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual explanations. We show that this approach combines the two paradigms, recovering the output of feature-weighting methods in continuous feature spaces, whilst indicating the direction in which the nearest counterfactuals can be found. Our method also provides an implicit confidence estimate in its own explanations, something existing methods lack. Additionally, FIMAP improves upon the speed of sampling-based methods, such as LIME, by an order of magnitude, allowing for explanation deployment in time-critical applications. We extend our approach to categorical features using a partitioned Gumbel layer and demonstrate its efficacy on standard datasets.
8

Dohrn, Daniel. "Counterfactual Narrative Explanation." Journal of Aesthetics and Art Criticism 67, no. 1 (February 2009): 37–47. http://dx.doi.org/10.1111/j.1540-6245.2008.01333.x.

9

Leofante, Francesco, and Nico Potyka. "Promoting Counterfactual Robustness through Diversity." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21322–30. http://dx.doi.org/10.1609/aaai.v38i19.30127.

Abstract:
Counterfactual explanations shed light on the decisions of black-box models by explaining how an input can be altered to obtain a favourable decision from the model (e.g., when a loan application has been rejected). However, as noted recently, counterfactual explainers may lack robustness in the sense that a minor change in the input can cause a major change in the explanation. This can cause confusion on the user side and open the door for adversarial attacks. In this paper, we study some sources of non-robustness. While there are fundamental reasons for why an explainer that returns a single counterfactual cannot be robust in all instances, we show that some interesting robustness guarantees can be given by reporting multiple rather than a single counterfactual. Unfortunately, the number of counterfactuals that need to be reported for the theoretical guarantees to hold can be prohibitively large. We therefore propose an approximation algorithm that uses a diversity criterion to select a feasible number of most relevant explanations and study its robustness empirically. Our experiments indicate that our method improves the state-of-the-art in generating robust explanations, while maintaining other desirable properties and providing competitive computational performance.
10

He, Ming, Boyang An, Jiwen Wang, and Hao Wen. "CETD: Counterfactual Explanations by Considering Temporal Dependencies in Sequential Recommendation." Applied Sciences 13, no. 20 (October 11, 2023): 11176. http://dx.doi.org/10.3390/app132011176.

Abstract:
Providing interpretable explanations can notably enhance users’ confidence and satisfaction with regard to recommender systems. Counterfactual explanations demonstrate remarkable performance in the realm of explainable sequential recommendation. However, current counterfactual explanation models designed for sequential recommendation overlook the temporal dependencies in a user’s past behavior sequence. Furthermore, counterfactual histories should be as similar to the real history as possible to avoid conflicting with the user’s genuine behavioral preferences. This paper presents counterfactual explanations by Considering temporal dependencies (CETD), a counterfactual explanation model that utilizes a variational autoencoder (VAE) for sequential recommendation and takes into account temporal dependencies. To improve explainability, CETD employs a recurrent neural network (RNN) when generating counterfactual histories, thereby capturing both the user’s long-term preferences and short-term behavior in their real behavioral history. Meanwhile, CETD fits the distribution of reconstructed data (i.e., the counterfactual sequences generated by VAE perturbation) in a latent space, and leverages learned variance to decrease the proximity of counterfactual histories by minimizing the distance between the counterfactual sequences and the original sequence. Thorough experiments conducted on two real-world datasets demonstrate that the proposed CETD consistently surpasses current state-of-the-art methods.

Dissertations / Theses on the topic "Counterfactual Explanation"

1

Broadbent, Alex. "A reverse counterfactual analysis of causation." Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/226170.

Abstract:
Lewis's counterfactual analysis of causation starts with the claim that c causes e if ~C > ~E, where c and e are events, C and E are the propositions that c and e respectively occur, ~ is negation and > is the counterfactual conditional. The purpose of my project is to provide a counterfactual analysis of causation which departs significantly from Lewis's starting point, and thus can hope to solve several stubborn problems for that approach. Whereas Lewis starts with a sufficiency claim, my analysis claims that a certain counterfactual is necessary for causation. I say that, if c causes e, then ~E > ~C; I call the latter the Reverse Counterfactual. This will often, perhaps always, be a backtracking counterfactual, so two chapters are devoted to defending a conception of counterfactuals which allows backtracking. Thus prepared, I argue that the Reverse Counterfactual is true of causes, but not of mere conditions for an effect. This provides a neat analysis of the principles governing causal selection, which is extended in a discussion of causal transitivity. Standard counterfactual accounts suffer counterexamples from preemption, but I argue that the Reverse Counterfactual has resources to deal neatly with those too. Finally, I argue that the Reverse Counterfactual, as a necessary condition on causation, is the most we can hope for: in principle, there can be no counterfactual sufficient condition for causation.
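The two conditionals this abstract contrasts can be restated compactly in its own notation (negation and the counterfactual conditional >); the rendering below is only an editorial summary of the claims quoted above, not additional content from the thesis.

```latex
\begin{align*}
  \text{Lewis's starting point (sufficiency):} \quad & (\lnot C > \lnot E) \;\Rightarrow\; c \text{ causes } e \\
  \text{Reverse Counterfactual (necessity):}   \quad & c \text{ causes } e \;\Rightarrow\; (\lnot E > \lnot C)
\end{align*}
```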
2

Jeanneret, Sanmiguel Guillaume. "Towards explainable and interpretable deep neural networks." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.

Abstract:
Deep neural architectures have demonstrated outstanding results in a variety of computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of Explainable AI has emerged to understand what these models are learning and to uncover their sources of error. This thesis explores explainable algorithms to uncover the biases and variables used by these parametric models in the context of image classification. It is divided into four parts. The first three chapters propose several methods to generate counterfactual explanations. The first chapter incorporates diffusion models to generate these explanations. The next links the research areas of adversarial attacks and counterfactuals. The following chapter proposes a new pipeline to generate counterfactuals in a fully black-box mode, i.e., using only the input and the prediction without accessing the model. The final part of the thesis concerns the creation of interpretable-by-design methods; more specifically, it investigates how to extend vision transformers into interpretable architectures. The proposed methods have shown promising results and have advanced the knowledge frontier of the current XAI literature.
3

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Abstract:
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction that a trained decision model makes for a specific data point of interest. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints with knowledge-compatibility constraints is also studied, and we propose to use the Gödel integral as the aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
4

Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.

Abstract:
Decision support systems based on combinatorial optimization find application in various professional domains. However, decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance in accepting system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue in the case of the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions. First, we propose a framework that models the process for explaining solutions to the end-users of a WSRP-solving system while allowing a wide range of topics to be addressed. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario or counterfactual type. From a mathematical point of view, they basically amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question types, this leads to the formulation of one or several decision problems and mathematical programs. Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information and populating explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users. Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as a basis for a graphical-user-interface prototype which aims at demonstrating the practical applicability and potential benefits of our approach.
5

Kuo, Chia-Yu (郭家諭). "Explainable Risk Prediction System for Child Abuse Event by Individual Feature Attribution and Counterfactual Explanation." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yp2nr3.

Abstract:
Master's thesis, Institute of Statistics, National Chiao Tung University, ROC academic year 107.
There is always a trade-off between performance and interpretability. Complex models, such as ensemble learners, can achieve outstanding prediction accuracy, but they are not easy to interpret. Understanding why a model made a prediction helps us trust the black-box model and also helps users make decisions. This work uses techniques from explainable machine learning to develop an appropriate model for empirical data with high predictive accuracy and good interpretability. In this study, we use data provided by the Taipei City Center for Prevention of Domestic Violence and Sexual Assault (臺北市家庭暴力暨性侵害防治中心) to develop a risk prediction model that predicts the probability of violence recurring in the same case before the case is resolved. This prediction model can also provide individual feature explanations and counterfactual explanations to help social workers conduct interventions for violence prevention.

Books on the topic "Counterfactual Explanation"

1

Reutlinger, Alexander. Extending the Counterfactual Theory of Explanation. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0005.

Abstract:
In the recent debate on explanation, philosophers tend to agree that the natural and social sciences provide not only causal but also non-causal explanations. It is a challenging aspect of this agreement that currently dominant causal accounts of explanation fail to cover non-causal types of explanation. So, how shall we react to this challenge? The goal of this chapter is to articulate and extend the counterfactual theory of explanation (CTE). The CTE is a monist account of explanation: monism is the view that there is one single philosophical account capturing both causal and non-causal explanations. According to the CTE, both causal and non-causal explanations are explanatory by virtue of revealing counterfactual dependencies between the explanandum and the explanans.
2

Brady, Henry E. Causation and Explanation in Social Science. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0010.

Abstract:
This article provides an overview of causal thinking by characterizing four approaches to causal inference. It also describes the INUS model. It specifically presents a user-friendly synopsis of philosophical and statistical musings about causation. The four approaches to causality include neo-Humean regularity, counterfactual, manipulation and mechanisms, and capacities. A counterfactual is a statement, typically in the subjunctive mood, in which a false or ‘counter to fact’ premise is followed by some assertion about what would have happened if the premise were true. Three basic questions about causality are then addressed. Moreover, the article gives a review of four approaches to what causality might be. It pays particular attention to a counterfactual definition, mostly amounting to a recipe that is now widely used in statistics. It ends with a discussion of the limitations of the recipe and how far it goes toward solving the epistemological and ontological problems.
3

Gerstenberg, Tobias, and Joshua B. Tenenbaum. Intuitive Theories. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.28.

Abstract:
This chapter first explains what intuitive theories are, how they can be modeled as probabilistic, generative programs, and how intuitive theories support various cognitive functions such as prediction, counterfactual reasoning, and explanation. It focuses on two domains of knowledge: people’s intuitive understanding of physics, and their intuitive understanding of psychology. It shows how causal judgments can be modeled as counterfactual contrasts operating over an intuitive theory of physics, and how explanations of an agent’s behavior are grounded in a rational planning model that is inverted to infer the agent’s beliefs, desires, and abilities. It concludes by highlighting some of the challenges that the intuitive theories framework faces, such as understanding how intuitive theories are learned and developed.
4

McGregor, Rafe. A Criminology Of Narrative Fiction. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529208054.001.0001.

Abstract:
This book answers the question of the usefulness of criminological fiction. Criminological fiction is fiction that can provide an explanation of the causes of crime or social harm and could, in consequence, contribute to the development of crime or social harm reduction policies. The book argues that criminological fiction can provide at least the following three types of criminological knowledge: (1) phenomenological, i.e. representing what certain experiences are like; (2) counterfactual, i.e. representing possible but non-existent situations; and (3) mimetic, i.e. representing everyday reality in detail and with accuracy. The book employs the phenomenological, counterfactual, and mimetic values of fiction to establish a theory of the criminological value of narrative fiction. It begins with a critical analysis of current work in narrative criminology and current criminological work on fiction. It then demonstrates the phenomenological, counterfactual, and mimetic values of narrative fiction using case studies from fictional novels, graphic novels, television series, and feature films. The argument concludes with an explanation of the relationship between the aetiological and pedagogic values of narrative fiction, focusing on cinematic fictions in virtue of the vast audiences they reach courtesy of their place in global popular culture.
5

Kutach, Douglas. The Asymmetry of Influence. Edited by Craig Callender. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199298204.003.0009.

Abstract:
This chapter considers the nature of the causal asymmetry, or even more generally, the asymmetry of influence. Putting aside explanations which would appeal to an asymmetry in time as explaining this asymmetry, it aims to show, using current physical theory and no ad hoc time asymmetric assumptions, why it is that future-directed influence sometimes advances one's goals but backward-directed influence does not. The chapter claims that agency is crucial to the explanation of the influence asymmetry. It provides an exhaustive account of the advancement asymmetry that is connected with fundamental physics, influence, causation, counterfactual dependence, and related notions in palatable ways.
6

Silberstein, Michael, W. M. Stuckey, and Timothy McDevitt. Relational Blockworld and Quantum Mechanics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198807087.003.0005.

Abstract:
The main thread of chapter 4 introduces some of the major mysteries and interpretational issues of quantum mechanics (QM). These mysteries and issues include: quantum superposition, quantum nonlocality, Bell’s inequality, entanglement, delayed choice, the measurement problem, and the lack of counterfactual definiteness. All these mysteries and interpretational issues of QM result from dynamical explanation in the mechanical universe and are dispatched using the authors’ adynamical explanation in the block universe, called Relational Blockworld (RBW). A possible link between RBW and quantum information theory is provided. The metaphysical underpinnings of RBW, such as contextual emergence, spatiotemporal ontological contextuality, and adynamical global constraints, are provided in Philosophy of Physics for Chapter 4. That is also where RBW is situated with respect to retrocausal accounts and it is shown that RBW is a realist, psi-epistemic account of QM. All the relevant formalism for this chapter is provided in Foundational Physics for Chapter 4.
7

St John, Taylor. Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198789918.003.0001.

Abstract:
This chapter sets out the puzzle: what explains the rise of investor–state arbitration? It defines the rise of investor–state arbitration as one process with two phases: the creation of the ICSID Convention and eliciting state consent to ISDS. Conventional theoretical accounts, in which investor lobbying and then intergovernmental bargaining drive the rise of investor–state arbitration, are outlined. These accounts contrast with the book’s explanation, that international officials provided support to one institutional framework, ICSID, which led to its creation over other possibilities. The creation of ICSID kicked off a process of gradual institutional development that led to contemporary investor–state arbitration. The book’s sources include 30,000 pages of primary material and dozens of interviews, and the methods are based on counterfactual reasoning. Finally, the book’s four main findings are introduced.
8

Stock, Kathleen. Fiction, Belief, and ‘Imaginative Resistance’. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198798347.003.0005.

Abstract:
The chapter starts with a focus on the relation between fiction and the inculcation of justified belief via testimony. The claim, relied upon in Chapter 3, that fictions can be sources of testimony and so justified belief, is defended. Then the fact that fictive utterances can, effectively, instruct readers to have beliefs, is implicated in a new explanation of ‘imaginative resistance’. The author suggests that the right account of this phenomenon should cite the reader’s perception of an authorial intention that she believe a counterfactual, which in fact she cannot believe. This view is defended against several rivals, and distinguished from certain other views, including the influential view of Tamar Gendler. Finally there is a consideration of whether one can propositionally imagine what one believes to be conceptually impossible.
9

Healey, Richard. Causation and Locality. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198714057.003.0010.

Abstract:
By moving to the context of relativistic space-time structure, this chapter completes the argument of Chapter 4 that we can use quantum theory locally to explain correlations that violate Bell inequalities with no instantaneous action at a distance. Chance here must be relativized not just to time but to a space-time point, so that an event may have more than one chance at the same time—it may even be certain relative to one space-time point but ‘at the same time’ completely uncertain relative to another. This renders Bell’s principle of Local Causality either inapplicable or intuitively unmotivated. Counterfactual dependence between the outcomes of measurements on systems assigned an entangled state is not causal since neither outcome is subject to intervention: but it may still be appealed to in a non-causal explanation of one in terms of the other.
10

Berezin, Mabel. Events as Templates of Possibility: An Analytic Typology of Political Facts. Edited by Jeffrey C. Alexander, Ronald N. Jacobs, and Philip Smith. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780195377767.013.23.

Abstract:
This article extends the concept of events to bring cultural analysis to bear on political explanation and privileges “thick description” and narrative as methodological tools. Drawing on the views of Emile Durkheim, it argues that events constitute “social facts”—phenomena with sufficient identity and coherence that the social collectivity recognizes them as discrete and important. The article first considers the tension between the political and the cultural using a metaphor from sports and biology that unites agency and nature. It then discusses the intersection of events and experience as an analytic category that incorporates the “counterfactual” turn in historical analysis by drawing on William Sewell’s sociological theory of events. It also argues for the existence of “political facts” and concludes by proposing an analytic typology of political facts based on the classification of events along a temporal or spatial axis.

Book chapters on the topic "Counterfactual Explanation"

1

Virmajoki, Veli. "A Counterfactual Account of Historiographical Explanation." In Causal Explanation in Historiography, 67–95. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45929-0_5.

2

Willig, Moritz, Matej Zečević, and Kristian Kersting. "“Do Not Disturb My Circles!” Identifying the Type of Counterfactual at Hand (Short Paper)." In Robust Argumentation Machines, 266–75. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63536-6_16.

Abstract:
When the phenomena of interest are in need of explanation, we are often in search of the underlying root causes. Causal inference provides tools for identifying these root causes: by performing interventions on suitably chosen variables we can observe down-stream effects in the outcome variable of interest. On the other hand, argumentation, as an approach of attributing observed outcomes to specific factors, naturally lends itself as a tool for determining the most plausible explanation. We can further improve the robustness of such explanations by measuring their likelihood within a mutually agreed-upon causal model. For this, typically one of in-principle two distinct types of counterfactual explanations is used: interventional counterfactuals, which treat changes as deliberate interventions to the causal system, and backtracking counterfactuals, which attribute changes exclusively to exogenous factors. Although both frameworks share the common goal of inferring true causal factors, they fundamentally differ in their conception of counterfactuals. Here, we present the first approach that decides when to expect interventional and when to opt for backtracking counterfactuals.
3

Gerber, Doris. "Counterfactual Causality and Historical Explanations." In Explanation in Action Theory and Historiography, 167–78. Routledge Studies in Contemporary Philosophy 121. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429506048-9.

4

Cheng, He, Depeng Xu, Shuhan Yuan, and Xintao Wu. "Achieving Counterfactual Explanation for Sequence Anomaly Detection." In Lecture Notes in Computer Science, 19–35. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70371-3_2.

5

Singh, Vandita, Kristijonas Cyras, and Rafia Inam. "Explainability Metrics and Properties for Counterfactual Explanation Methods." In Explainable and Transparent AI and Multi-Agent Systems, 155–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15565-9_10.

6

Stepin, Ilia, Alejandro Catala, Martin Pereira-Fariña, and Jose M. Alonso. "Factual and Counterfactual Explanation of Fuzzy Information Granules." In Studies in Computational Intelligence, 153–85. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-64949-4_6.

7

Li, Peiyu, Omar Bahri, Soukaïna Filali Boubrahimi, and Shah Muhammad Hamdi. "Attention-Based Counterfactual Explanation for Multivariate Time Series." In Big Data Analytics and Knowledge Discovery, 287–93. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39831-5_26.

8

Ji, Jiemin, Donghai Guan, Weiwei Yuan, and Yuwen Deng. "Unified Counterfactual Explanation Framework for Black-Box Models." In PRICAI 2023: Trends in Artificial Intelligence, 422–33. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7025-4_36.

9

Burkart, Nadia, Maximilian Franz, and Marco F. Huber. "Explanation Framework for Intrusion Detection." In Machine Learning for Cyber Physical Systems, 83–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_9.

Abstract:
Machine learning and deep learning are widely used in various applications to assist or even replace human reasoning. For instance, a machine learning based intrusion detection system (IDS) monitors a network for malicious activity or specific policy violations. We propose that IDSs should attach a sufficiently understandable report to each alert to allow the operator to review them more efficiently. This work aims at complementing an IDS by means of a framework to create explanations. The explanations support the human operator in understanding alerts and reveal potential false positives. The focus lies on counterfactual instances and explanations based on locally faithful decision-boundaries.
10

Peng, Bo, Siwei Lyu, Wei Wang, and Jing Dong. "Counterfactual Image Enhancement for Explanation of Face Swap Deepfakes." In Pattern Recognition and Computer Vision, 492–508. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_40.


Conference papers on the topic "Counterfactual Explanation"

1

Liu, Diwen, and Xiaodong Yue. "Counterfactual-Driven Model Explanation Evaluation Method." In 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), 471–75. IEEE, 2024. http://dx.doi.org/10.1109/cisce62493.2024.10653060.

2

Fan, Zhengyang, Wanru Li, Kathryn Blackmond Laskey, and Kuo-Chu Chang. "Towards Personalized Anti-Phishing: Counterfactual Explanation Approach - Extended Abstract." In 2024 IEEE 11th International Conference on Data Science and Advanced Analytics (DSAA), 1–2. IEEE, 2024. http://dx.doi.org/10.1109/dsaa61799.2024.10722801.

3

Yin, Xiang, Nico Potyka, and Francesca Toni. "CE-QArg: Counterfactual Explanations for Quantitative Bipolar Argumentation Frameworks." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 697–707. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/66.

Abstract:
There is a growing interest in understanding arguments' strength in Quantitative Bipolar Argumentation Frameworks (QBAFs). Most existing studies focus on attribution-based methods that explain an argument's strength by assigning importance scores to other arguments but fail to explain how to change the current strength to a desired one. To solve this issue, we introduce counterfactual explanations for QBAFs. We discuss problem variants and propose an iterative algorithm named Counterfactual Explanations for Quantitative bipolar Argumentation frameworks (CE-QArg). CE-QArg can identify valid and cost-effective counterfactual explanations based on two core modules, polarity and priority, which help determine the updating direction and magnitude for each argument, respectively. We discuss some formal properties of our counterfactual explanations and empirically evaluate CE-QArg on randomly generated QBAFs.
4

Alfano, Gianvincenzo, Sergio Greco, Francesco Parisi, and Irina Trubitsyna. "Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 14–26. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/2.

Abstract:
Explainable Artificial Intelligence and Formal Argumentation have received significant attention in recent years. Argumentation frameworks are useful for representing knowledge and reasoning on it. Counterfactual and semifactual explanations are interpretability techniques that provide insights into the outcome of a model by generating alternative hypothetical instances. While there has been important work on counterfactual and semifactual explanations for Machine Learning (ML) models, less attention has been devoted to these kinds of problems in argumentation. In this paper, we explore counterfactual and semifactual reasoning in abstract Argumentation Framework. We investigate the computational complexity of counterfactual- and semifactual-based reasoning problems, showing that they are generally harder than classical argumentation problems such as credulous and skeptical acceptance. Finally, we show that counterfactual and semifactual queries can be encoded in weak-constrained Argumentation Framework, and provide a computational strategy through ASP solvers.
5

Theobald, Claire, Frédéric Pennerath, Brieuc Conan-Guez, Miguel Couceiro, and Amedeo Napoli. "Clarity: a Deep Ensemble for Visual Counterfactual Explanations." In ESANN 2024, 655–60. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2024. http://dx.doi.org/10.14428/esann/2024.es2024-188.

6

Molhoek, M., and J. Van Laanen. "Secure Counterfactual Explanations in a Two-party Setting." In 2024 27th International Conference on Information Fusion (FUSION), 1–10. IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706413.

7

Leofante, Francesco, Elena Botoeva, and Vineet Rajani. "Counterfactual Explanations and Model Multiplicity: a Relational Verification View." In 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/78.

Abstract:
We study the interplay between counterfactual explanations and model multiplicity in the context of neural network classifiers. We show that current explanation methods often produce counterfactuals whose validity is not preserved under model multiplicity. We then study the problem of generating counterfactuals that are guaranteed to be robust to model multiplicity, characterise its complexity and propose an approach to solve this problem using ideas from relational verification.
8

Yin, Xudong, and Yao Yang. "CMACE: CMAES-based Counterfactual Explanations for Black-box Models." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/60.

Abstract:
Explainable Artificial Intelligence plays a vital role in machine learning, due to its widespread application in decision-making scenarios, e.g., credit lending. Counterfactual Explanation (CFE) is a new kind of explanatory method that involves asking “what if”, i.e., what would have happened if model inputs had changed slightly. To answer this question, Counterfactual Explanation aims at finding a minimum perturbation of model inputs leading to a different model decision. Compared with model-agnostic approaches, model-specific CFE approaches designed only for a specific type of model usually perform better at finding optimal counterfactual perturbations, owing to access to the inner workings of models. To deal with this dilemma, this work proposes CMAES-based Counterfactual Explanations (CMACE): an effective model-agnostic counterfactual generation approach based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and a warm-starting scheme that provides good initialization of the counterfactual's mean and covariance parameters for CMA-ES by taking advantage of prior information from training samples. CMACE significantly outperforms another state-of-the-art (SOTA) model-agnostic approach (Bayesian Counterfactual Generator, BayCon) in various experimental settings. Extensive experiments also demonstrate that CMACE is superior to a SOTA model-specific approach (Flexible Optimizable Counterfactual Explanations for Tree Ensembles, FOCUS) that is designed for tree-based models using gradient-based optimization.
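As a rough illustration of the black-box counterfactual search idea described in this abstract, the sketch below uses a plain evolution strategy (not full CMA-ES, and not the CMACE or BayCon algorithms) to perturb an input until a toy classifier flips its decision while staying close to the original; the classifier, loss weights and hyperparameters are all illustrative assumptions.

```python
import numpy as np

def black_box(x):
    """Stand-in binary classifier: 'approve' (1.0) iff a weighted score exceeds a threshold."""
    return float(0.6 * x[0] + 0.4 * x[1] > 1.0)

def counterfactual_loss(x, x0, target):
    validity = 0.0 if black_box(x) == target else 10.0  # penalty until the desired class is reached
    proximity = np.linalg.norm(x - x0)                   # stay close to the original input
    return validity + proximity

def es_counterfactual(x0, target, pop=40, parents=10, iters=200, sigma=0.5, seed=0):
    """Simple (mu, lambda) evolution-strategy search for a nearby counterfactual."""
    rng = np.random.default_rng(seed)
    mean = np.array(x0, dtype=float)
    best, best_loss = mean, counterfactual_loss(mean, x0, target)
    for _ in range(iters):
        candidates = mean + sigma * rng.standard_normal((pop, mean.size))
        losses = np.array([counterfactual_loss(c, x0, target) for c in candidates])
        order = np.argsort(losses)
        if losses[order[0]] < best_loss:
            best, best_loss = candidates[order[0]], losses[order[0]]
        mean = candidates[order[:parents]].mean(axis=0)  # move the search toward the best candidates
        sigma *= 0.97                                    # slowly shrink the search radius
    return best

x0 = np.array([0.8, 0.5])            # original instance: score 0.68, classified 0.0 (rejected)
cf = es_counterfactual(x0, target=1.0)
print("counterfactual:", cf, "new prediction:", black_box(cf))
```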
9

Bhan, Milan, Jean-Noel Vittaut, Nicolas Chesneau, and Marie-Jeanne Lesot. "Enhancing textual counterfactual explanation intelligibility through Counterfactual Feature Importance." In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.trustnlp-1.19.

10

Aryal, Saugat, and Mark T. Keane. "Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/732.

Abstract:
Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told “if you asked for a loan with a shorter term, it would have been approved”). Counterfactuals explain what changes to the input-features of an AI system change the output-decision. However, there is a sub-type of counterfactual, semi-factuals, that have received less attention in AI (though the Cognitive Sciences have studied them more). This paper surveys semi-factual explanation, summarising historical and recent work. It defines key desiderata for semi-factual XAI, reporting benchmark tests of historical algorithms (as well as a novel, naïve method) to provide a solid basis for future developments.

Reports on the topic "Counterfactual Explanation"

1

Gandelman, Néstor. A Comparison of Saving Rates: Micro Evidence from Seventeen Latin American and Caribbean Countries. Inter-American Development Bank, July 2015. http://dx.doi.org/10.18235/0011701.

Abstract:
Using micro data on expenditure and income for 17 Latin American and Caribbean (LAC) countries, this paper presents stylized facts on saving behavior by age, education, income and place of residence. Counterfactual saving rates are computed by imposing the saving behavior, the population distribution or the income distribution of two benchmark economies (the United States and Korea). The results suggest that the difference in national saving rates between LAC and the benchmark economies can mainly be attributed to differences in saving behavior of the population and, to a lesser extent, to differences in the distribution of the population by educational levels. Other demographic or income distribution differences are not quantitatively important as explanations of saving rates.