Academic literature on the topic "Counterfactual explanations"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Counterfactual explanations".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Counterfactual explanations"

1

Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, and Lambert Mathias. "Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9837–45. http://dx.doi.org/10.1609/aaai.v37i8.26174.

Abstract
Evaluating an explanation's faithfulness is desired for many reasons such as trust, interpretability and diagnosing the sources of a model's errors. In this work, which focuses on the NLI task, we introduce the methodology of Faithfulness-through-Counterfactuals, which first generates a counterfactual hypothesis based on the logical predicates expressed in the explanation, and then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic (i.e., if the new formula is logically satisfiable). In contrast to existing approaches, this does not require any explanations for training a separate verification model. We first validate the efficacy of automatic counterfactual hypothesis generation, leveraging the few-shot priming paradigm. Next, we show that our proposed metric distinguishes between human-model agreement and disagreement on new counterfactual input. In addition, we conduct a sensitivity analysis to validate that our metric is sensitive to unfaithful explanations.
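To make the satisfiability check concrete, here is a minimal, self-contained sketch, not the paper's implementation: the variable names and the example formula are invented stand-ins for "explanation logic conjoined with the counterfactual prediction", and satisfiability is decided by brute force over truth assignments.

```python
# Toy illustration of a logical-satisfiability check, in the spirit of
# Faithfulness-through-Counterfactuals. The formula is a made-up example.
from itertools import product

def satisfiable(variables, formula):
    """Return True if some truth assignment makes `formula` true."""
    for values in product([False, True], repeat=len(variables)):
        if formula(dict(zip(variables, values))):
            return True
    return False

variables = ["premise_holds", "exception", "entailment"]
# Explanation logic: (premise_holds AND NOT exception) -> entailment,
# conjoined with the model's counterfactual prediction: NOT entailment.
formula = lambda a: ((not (a["premise_holds"] and not a["exception"]))
                     or a["entailment"]) and not a["entailment"]

print(satisfiable(variables, formula))  # True: prediction consistent with the logic
```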
2

Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations". Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.

Abstract
Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation provide valid and complete explanations, but they have an epistemological problem: they are often too complex for humans to understand and too expensive to compute even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and can compute. We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations for counterfactuals to examine the partiality of counterfactual explanations that can hide biases; we define fair and adequate explanations in such a setting. We provide formal results about the algorithmic complexity of fair and adequate explanations. We then detail two sophisticated counterfactual models, one based on causal graphs, and one based on transport theories. We show transport-based models have several theoretical advantages over the competition as explanation frameworks for machine learning algorithms.
3

VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics". Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626729.

Abstract
Machine learning systems are deployed in domains such as hiring and healthcare, where undesired classifications can have serious ramifications for the user. Thus, there is a rising demand for explainable AI systems which provide actionable steps for lay users to obtain their desired outcome. To meet this need, we propose FACET, the first explanation analytics system which supports a user in interactively refining counterfactual explanations for decisions made by tree ensembles. As FACET's foundation, we design a novel type of counterfactual explanation called the counterfactual region. Unlike traditional counterfactuals, FACET's regions concisely describe portions of the feature space where the desired outcome is guaranteed, regardless of variations in exact feature values. This property, which we coin explanation robustness, is critical for the practical application of counterfactuals. We develop a rich set of novel explanation analytics queries which empower users to identify personalized counterfactual regions that account for their real-world circumstances. To process these queries, we develop a compact high-dimensional counterfactual region index along with index-aware query processing strategies for near real-time explanation analytics. We evaluate FACET against state-of-the-art explanation techniques on eight public benchmark datasets and demonstrate that FACET generates actionable explanations of similar quality in an order of magnitude less time while providing critical robustness guarantees. Finally, we conduct a preliminary user study which suggests that FACET's regions lead to higher user understanding than traditional counterfactuals.
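As a toy illustration of the region idea (the interval bounds below are invented, not FACET's output), a counterfactual region can be stored as per-feature intervals, and checking robustness reduces to a containment test:

```python
# Hypothetical counterfactual region: any point inside these per-feature
# intervals is assumed to receive the desired outcome from the ensemble.
region = {"income": (3.0, 5.0), "debt_ratio": (0.0, 0.4)}

def in_region(x, region):
    """True if x lies inside the region, i.e., the desired outcome holds
    even if the exact feature values drift within the bounds."""
    return all(lo <= x[f] <= hi for f, (lo, hi) in region.items())

print(in_region({"income": 3.4, "debt_ratio": 0.2}, region))  # True
print(in_region({"income": 2.9, "debt_ratio": 0.2}, region))  # False
```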
4

Kenny, Eoin M., and Mark T. Keane. "On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11575–85. http://dx.doi.org/10.1609/aaai.v35i13.17377.

Abstract
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs. In response to this disquiet, counterfactual explanations have become very popular in eXplainable AI (XAI) due to their asserted computational, psychological, and legal benefits. In contrast, however, semi-factuals (which appear to be equally useful) have surprisingly received no attention. Most counterfactual methods address tabular rather than image data, partly because the non-discrete nature of images makes good counterfactuals difficult to define; indeed, generating plausible counterfactual images which lie on the data manifold is also problematic. This paper advances a novel method for generating plausible counterfactuals and semi-factuals for black-box CNN classifiers doing computer vision. The present method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all “exceptional” features in a test image to be “normal” from the perspective of the counterfactual class, to generate plausible counterfactual images. Two controlled experiments compare this method to others in the literature, showing that PIECE generates highly plausible counterfactuals (and the best semi-factuals) on several benchmark measures.
5

Leofante, Francesco, and Nico Potyka. "Promoting Counterfactual Robustness through Diversity". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21322–30. http://dx.doi.org/10.1609/aaai.v38i19.30127.

Abstract
Counterfactual explanations shed light on the decisions of black-box models by explaining how an input can be altered to obtain a favourable decision from the model (e.g., when a loan application has been rejected). However, as noted recently, counterfactual explainers may lack robustness in the sense that a minor change in the input can cause a major change in the explanation. This can cause confusion on the user side and open the door for adversarial attacks. In this paper, we study some sources of non-robustness. While there are fundamental reasons for why an explainer that returns a single counterfactual cannot be robust in all instances, we show that some interesting robustness guarantees can be given by reporting multiple rather than a single counterfactual. Unfortunately, the number of counterfactuals that need to be reported for the theoretical guarantees to hold can be prohibitively large. We therefore propose an approximation algorithm that uses a diversity criterion to select a feasible number of most relevant explanations and study its robustness empirically. Our experiments indicate that our method improves the state-of-the-art in generating robust explanations, while maintaining other desirable properties and providing competitive computational performance.
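A minimal sketch of the multiple-counterfactual idea, under the assumption that a pool of valid candidate counterfactuals is already available; a greedy max-min heuristic stands in here for the paper's diversity criterion.

```python
import numpy as np

def select_diverse(candidates, k):
    """Greedily pick k counterfactuals maximizing the minimum distance
    to those already selected, so the reported set stays spread out."""
    candidates = np.asarray(candidates, dtype=float)
    selected = [0]  # seed with the first (e.g., closest) candidate
    while len(selected) < k:
        # distance of every candidate to its nearest already-selected point
        dists = np.min(
            [np.linalg.norm(candidates - candidates[i], axis=1)
             for i in selected], axis=0)
        dists[selected] = -np.inf  # never re-pick a selected candidate
        selected.append(int(np.argmax(dists)))
    return candidates[selected]

pool = [[1.0, 0.0], [1.1, 0.1], [0.0, 1.0], [2.0, 2.0]]
print(select_diverse(pool, k=2))  # a far-apart pair, not near-duplicates
```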
6

Delaney, Eoin, Arjun Pakrashi, Derek Greene, and Mark T. Keane. "Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ (Abstract Reprint)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22696. http://dx.doi.org/10.1609/aaai.v38i20.30596.

Abstract
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
7

Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics". Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.

Abstract
Our goal in this paper is to extend counterfactual accounts of scientific explanation to mathematics. Our focus, in particular, is on intra-mathematical explanations: explanations of one mathematical fact in terms of another. We offer a basic counterfactual theory of intra-mathematical explanations, before modelling the explanatory structure of a test case using counterfactual machinery. We finish by considering the application of counterpossibles to mathematical explanation, and explore a second test case along these lines.
8

Schleich, Maximilian, Zixuan Geng, Yihong Zhang, and Dan Suciu. "GeCo". Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1681–93. http://dx.doi.org/10.14778/3461535.3461555.

Abstract
Machine learning is increasingly applied in high-stakes decision making that directly affects people's lives, and this leads to an increased demand for systems to explain their decisions. Explanations often take the form of counterfactuals, which consist of conveying to the end user what she/he needs to change in order to improve the outcome. Computing counterfactual explanations is challenging, because of the inherent tension between a rich semantics of the domain, and the need for real time response. In this paper we present GeCo, the first system that can compute plausible and feasible counterfactual explanations in real time. At its core, GeCo relies on a genetic algorithm, which is customized to favor searching counterfactual explanations with the smallest number of changes. To achieve real-time performance, we introduce two novel optimizations: Δ-representation of candidate counterfactuals, and partial evaluation of the classifier. We compare GeCo empirically against five other systems described in the literature, and show that it is the only system that can achieve both high quality explanations and real time answers.
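A toy genetic search in this spirit, as a hedged sketch: the classifier, features, and fitness weights below are illustrative assumptions, and GeCo's Δ-representation and partial-evaluation optimizations are omitted entirely.

```python
import random
random.seed(0)

classify = lambda x: x["income"] - 2 * x["debt"] > 1  # stand-in black box
FEATURES = {"income": (0.0, 5.0), "debt": (0.0, 5.0)}

def mutate(x):
    """Randomly nudge one feature within its allowed range."""
    child, f = dict(x), random.choice(list(FEATURES))
    lo, hi = FEATURES[f]
    child[f] = min(hi, max(lo, child[f] + random.gauss(0, 0.5)))
    return child

def fitness(x, original):
    """Reward flipping the decision; penalize edits and distance."""
    changed = sum(x[f] != original[f] for f in FEATURES)
    dist = sum(abs(x[f] - original[f]) for f in FEATURES)
    return 10 * classify(x) - changed - dist

def genetic_counterfactual(original, pop_size=30, generations=200):
    pop = [dict(original) for _ in range(pop_size)]
    for _ in range(generations):
        pop += [mutate(x) for x in pop]                  # offspring
        pop.sort(key=lambda x: fitness(x, original), reverse=True)
        pop = pop[:pop_size]                             # elitist selection
    return pop[0]

x0 = {"income": 2.0, "debt": 1.0}   # currently rejected by the classifier
cf = genetic_counterfactual(x0)
print(cf, classify(cf))             # a small edit that flips the outcome
```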
9

Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations". Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.

Abstract
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
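A minimal sketch of the search pattern described here, with a hypothetical classifier and hand-picked actionability constraints (the feature grids); the actual AMCC method works from high-precision explanations rather than exhaustive enumeration.

```python
from itertools import combinations, product

def achievable_counterfactual(x, predict, grids, max_changes=2):
    """Try candidate edits over actionable features only, smallest number
    of changed features first; return the first edit that flips the
    prediction."""
    base = predict(x)
    for n in range(1, max_changes + 1):
        for feats in combinations(grids, n):
            for values in product(*(grids[f] for f in feats)):
                candidate = dict(x, **dict(zip(feats, values)))
                if predict(candidate) != base:
                    return candidate
    return None  # no achievable counterfactual within the constraints

# Toy risk model and achievable value grids; "age" is not actionable,
# so it never appears in the grids and is never modified.
predict = lambda x: x["bmi"] < 30 and x["exercise"] >= 2
grids = {"exercise": [0, 1, 2, 3], "bmi": [25, 28, 29]}
print(achievable_counterfactual({"bmi": 32, "exercise": 1, "age": 55},
                                predict, grids))
```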
10

Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.

Abstract
Instance-based model-agnostic feature importance explanations (LIME, SHAP, L2X) are a popular form of algorithmic transparency. These methods generally return either a weighting or subset of input features as an explanation for the classification of an instance. An alternative literature argues instead that counterfactual instances, which alter the black-box model's classification, provide a more actionable form of explanation. We present Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual explanations. We show that this approach combines the two paradigms, recovering the output of feature-weighting methods in continuous feature spaces, whilst indicating the direction in which the nearest counterfactuals can be found. Our method also provides an implicit confidence estimate in its own explanations, something existing methods lack. Additionally, FIMAP improves upon the speed of sampling-based methods, such as LIME, by an order of magnitude, allowing for explanation deployment in time-critical applications. We extend our approach to categorical features using a partitioned Gumbel layer and demonstrate its efficacy on standard datasets.
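To illustrate the shared core of the two paradigms, here is a hand-rolled sketch on a fixed logistic model (the weights and inputs are invented; FIMAP itself trains a neural perturbation network rather than running per-instance gradient steps like this):

```python
import numpy as np

w, b = np.array([1.5, -2.0]), 0.5                   # toy logistic classifier
predict = lambda x: 1 / (1 + np.exp(-(x @ w + b)))  # P(class 1 | x)

def minimal_perturbation(x, lr=0.05, steps=2000):
    """Walk x down the class-probability gradient until the decision
    flips; the accumulated delta points toward the nearest counterfactual
    and doubles as a feature-importance direction."""
    x, x0 = x.astype(float), x.astype(float).copy()
    start = predict(x) > 0.5
    for _ in range(steps):
        p = predict(x)
        if (p > 0.5) != start:
            break                                   # crossed the boundary
        grad = p * (1 - p) * w                      # dP/dx for this model
        x = x + (-lr if start else lr) * grad
    return x, x - x0

x_cf, delta = minimal_perturbation(np.array([2.0, 0.0]))
print(x_cf, delta)  # delta's signs and magnitudes rank the features
```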

Theses on the topic "Counterfactual explanations"

1

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Abstract
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE for its variant including knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints and knowledge-compatibility constraints is also studied, and we propose to use Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
2

Broadbent, Alex. "A reverse counterfactual analysis of causation". Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/226170.

Abstract
Lewis's counterfactual analysis of causation starts with the claim that c causes e if ~C > ~E, where c and e are events, C and E are the propositions that c and e respectively occur, ~ is negation and > is the counterfactual conditional. The purpose of my project is to provide a counterfactual analysis of causation which departs significantly from Lewis's starting point, and thus can hope to solve several stubborn problems for that approach. Whereas Lewis starts with a sufficiency claim, my analysis claims that a certain counterfactual is necessary for causation. I say that, if c causes e, then ~E > ~C; I call the latter the Reverse Counterfactual. This will often, perhaps always, be a backtracking counterfactual, so two chapters are devoted to defending a conception of counterfactuals which allows backtracking. Thus prepared, I argue that the Reverse Counterfactual is true of causes, but not of mere conditions for an effect. This provides a neat analysis of the principles governing causal selection, which is extended in a discussion of causal transitivity. Standard counterfactual accounts suffer counterexamples from preemption, but I argue that the Reverse Counterfactual has resources to deal neatly with those too. Finally, I argue that the Reverse Counterfactual, as a necessary condition on causation, is the most we can hope for: in principle, there can be no counterfactual sufficient condition for causation.
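In the abstract's own notation (with > as the counterfactual conditional), the contrast between the two starting points can be restated compactly:

```latex
% Lewis: a counterfactual condition sufficient for causation.
(\lnot C > \lnot E) \;\Rightarrow\; c \text{ causes } e
% Broadbent's Reverse Counterfactual: a necessary condition instead.
c \text{ causes } e \;\Rightarrow\; (\lnot E > \lnot C)
```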
3

Kuo, Chia-Yu, and 郭家諭. "Explainable Risk Prediction System for Child Abuse Event by Individual Feature Attribution and Counterfactual Explanation". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yp2nr3.

Abstract
Master's thesis, National Chiao Tung University, Institute of Statistics (ROC year 107).
There is always a trade-off between performance and interpretability. Complex models, such as ensemble learners, can achieve outstanding prediction accuracy, but they are not easy to interpret. Understanding why a model made a prediction helps us trust the black-box model and also helps users make decisions. This work uses techniques from explainable machine learning to develop a model with high predictive accuracy and good interpretability for empirical data. In this study, we use data provided by the Taipei City Center for Prevention of Domestic Violence and Sexual Assault (臺北市家庭暴力暨性侵害防治中心) to develop a risk prediction model that predicts the probability of a recurring violent incident in the same case before that case is resolved. The prediction model also provides individual feature explanations and counterfactual explanations to help social workers conduct interventions for violence prevention.

Books on the topic "Counterfactual explanations"

1

Reutlinger, Alexander. Extending the Counterfactual Theory of Explanation. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0005.

Abstract
In the recent debate on explanation, philosophers tend to agree that the natural and social sciences provide not only causal but also non-causal explanations. It is a challenging aspect of this agreement that currently dominant causal accounts of explanation fail to cover non-causal types of explanation. So, how shall we react to this challenge? The goal of this chapter is to articulate and to extend the counterfactual theory of explanation (CTE). The CTE is a monist account of explanation. Monism is the view that there is one single philosophical account capturing both causal and non-causal explanations. According to the CTE, both causal and non-causal explanations are explanatory by virtue of revealing counterfactual dependencies between the explanandum and the explanans.
2

Gerstenberg, Tobias, and Joshua B. Tenenbaum. Intuitive Theories. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.28.

Abstract
This chapter first explains what intuitive theories are, how they can be modeled as probabilistic, generative programs, and how intuitive theories support various cognitive functions such as prediction, counterfactual reasoning, and explanation. It focuses on two domains of knowledge: people’s intuitive understanding of physics, and their intuitive understanding of psychology. It shows how causal judgments can be modeled as counterfactual contrasts operating over an intuitive theory of physics, and how explanations of an agent’s behavior are grounded in a rational planning model that is inverted to infer the agent’s beliefs, desires, and abilities. It concludes by highlighting some of the challenges that the intuitive theories framework faces, such as understanding how intuitive theories are learned and developed.
3

Kutach, Douglas. The Asymmetry of Influence. Edited by Craig Callender. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199298204.003.0009.

Abstract
This chapter considers the nature of the causal asymmetry, or even more generally, the asymmetry of influence. Putting aside explanations which would appeal to an asymmetry in time as explaining this asymmetry, it aims to show, using current physical theory and no ad hoc time asymmetric assumptions, why it is that future-directed influence sometimes advances one's goals but backward-directed influence does not. The chapter claims that agency is crucial to the explanation of the influence asymmetry. It provides an exhaustive account of the advancement asymmetry that is connected with fundamental physics, influence, causation, counterfactual dependence, and related notions in palatable ways.
4

French, Steven, and Juha Saatsi. Symmetries and Explanatory Dependencies in Physics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0010.

Abstract
Many important explanations in physics are based on ideas and assumptions about symmetries, but little has been said about the nature of such explanations. This chapter aims to fill this lacuna, arguing that various symmetry explanations can be naturally captured in the spirit of the counterfactual-dependence account of Woodward, liberalized from its causal trappings. From the perspective of this account symmetries explain by providing modal information about an explanatory dependence, by showing how the explanandum would have been different, had the facts about an explanatory symmetry been different. Furthermore, the authors argue that such explanatory dependencies need not be causal.
5

Solstad, Torgrim, and Oliver Bott. Causality and Causal Reasoning in Natural Language. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.32.

Abstract
This chapter provides a combined overview of theoretical and psycholinguistic approaches to causality in language. The chapter’s main phenomenological focus is on causal relations as expressed intra-clausally by verbs (e.g., break, open) and between sentences by discourse markers (e.g., because, therefore). Special attention is given to implicit causality verbs that are argued to trigger expectations of explanations to occur in subsequent discourse. The chapter also discusses linguistic expressions that do not encode causation as such, but that seem to be dependent on a causal model for their adequate evaluation, such as counterfactual conditionals. The discussion of the phenomena is complemented by an overview of important aspects of their cognitive processing as revealed by psycholinguistic experimentation.
6

Brady, Henry E. Causation and Explanation in Social Science. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0010.

Abstract
This article provides an overview of causal thinking by characterizing four approaches to causal inference. It also describes the INUS model. It specifically presents a user-friendly synopsis of philosophical and statistical musings about causation. The four approaches to causality are neo-Humean regularity, counterfactual, manipulation and mechanisms, and capacities. A counterfactual is a statement, typically in the subjunctive mood, in which a false or ‘counter to fact’ premise is followed by some assertion about what would have happened if the premise were true. Three basic questions about causality are then addressed. Moreover, the article reviews four approaches to what causality might be. It focuses on a counterfactual definition, mostly amounting to a recipe that is now widely used in statistics. It ends with a discussion of the limitations of the recipe and how far it goes toward solving the epistemological and ontological problems.
7

McGregor, Rafe. A Criminology Of Narrative Fiction. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529208054.001.0001.

Abstract
This book answers the question of the usefulness of criminological fiction. Criminological fiction is fiction that can provide an explanation of the causes of crime or social harm and could, in consequence, contribute to the development of crime or social harm reduction policies. The book argues that criminological fiction can provide at least the following three types of criminological knowledge: (1) phenomenological, i.e. representing what certain experiences are like; (2) counterfactual, i.e. representing possible but non-existent situations; and (3) mimetic, i.e. representing everyday reality in detail and with accuracy. The book employs the phenomenological, counterfactual, and mimetic values of fiction to establish a theory of the criminological value of narrative fiction. It begins with a critical analysis of current work in narrative criminology and current criminological work on fiction. It then demonstrates the phenomenological, counterfactual, and mimetic values of narrative fiction using case studies from fictional novels, graphic novels, television series, and feature films. The argument concludes with an explanation of the relationship between the aetiological and pedagogic values of narrative fiction, focusing on cinematic fictions in virtue of the vast audiences they reach courtesy of their place in global popular culture.
8

Silberstein, Michael, W. M. Stuckey, and Timothy McDevitt. Relational Blockworld and Quantum Mechanics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198807087.003.0005.

Abstract
The main thread of chapter 4 introduces some of the major mysteries and interpretational issues of quantum mechanics (QM). These mysteries and issues include: quantum superposition, quantum nonlocality, Bell’s inequality, entanglement, delayed choice, the measurement problem, and the lack of counterfactual definiteness. All these mysteries and interpretational issues of QM result from dynamical explanation in the mechanical universe and are dispatched using the authors’ adynamical explanation in the block universe, called Relational Blockworld (RBW). A possible link between RBW and quantum information theory is provided. The metaphysical underpinnings of RBW, such as contextual emergence, spatiotemporal ontological contextuality, and adynamical global constraints, are provided in Philosophy of Physics for Chapter 4. That is also where RBW is situated with respect to retrocausal accounts and it is shown that RBW is a realist, psi-epistemic account of QM. All the relevant formalism for this chapter is provided in Foundational Physics for Chapter 4.
9

St John, Taylor. Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198789918.003.0001.

Abstract
This chapter sets out the puzzle: what explains the rise of investor–state arbitration? It defines the rise of investor–state arbitration as one process with two phases: the creation of the ICSID Convention and eliciting state consent to ISDS. Conventional theoretical accounts, in which investor lobbying and then intergovernmental bargaining drive the rise of investor–state arbitration, are outlined. These accounts contrast with the book’s explanation, that international officials provided support to one institutional framework, ICSID, which led to its creation over other possibilities. The creation of ICSID kicked off a process of gradual institutional development that led to contemporary investor–state arbitration. The book’s sources include 30,000 pages of primary material and dozens of interviews, and the methods are based on counterfactual reasoning. Finally, the book’s four main findings are introduced.
10

Stock, Kathleen. Fiction, Belief, and ‘Imaginative Resistance’. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198798347.003.0005.

Abstract
The chapter starts with a focus on the relation between fiction and the inculcation of justified belief via testimony. The claim, relied upon in Chapter 3, that fictions can be sources of testimony and so justified belief, is defended. Then the fact that fictive utterances can, effectively, instruct readers to have beliefs, is implicated in a new explanation of ‘imaginative resistance’. The author suggests that the right account of this phenomenon should cite the reader’s perception of an authorial intention that she believe a counterfactual, which in fact she cannot believe. This view is defended against several rivals, and distinguished from certain other views, including the influential view of Tamar Gendler. Finally there is a consideration of whether one can propositionally imagine what one believes to be conceptually impossible.

Book chapters on the topic "Counterfactual explanations"

1

Dandl, Susanne, Christoph Molnar, Martin Binder, and Bernd Bischl. "Multi-Objective Counterfactual Explanations". In Parallel Problem Solving from Nature – PPSN XVI, 448–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58112-1_31.

2

Kuratomi, Alejandro, Ioanna Miliou, Zed Lee, Tony Lindgren, and Panagiotis Papapetrou. "JUICE: JUstIfied Counterfactual Explanations". In Discovery Science, 493–508. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18840-4_35.

3

Guyomard, Victor, Françoise Fessant, Thomas Guyet, Tassadit Bouadi, and Alexandre Termier. "Generating Robust Counterfactual Explanations". In Machine Learning and Knowledge Discovery in Databases: Research Track, 394–409. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43418-1_24.

4

Gerber, Doris. "Counterfactual Causality and Historical Explanations". In Explanation in Action Theory and Historiography, 167–78. Routledge Studies in Contemporary Philosophy 121. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429506048-9.

5

Mishra, Pradeepta. "Counterfactual Explanations for XAI Models". In Practical Explainable AI Using Python, 265–78. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7158-2_10.

6

Jeanneret, Guillaume, Loïc Simon, and Frédéric Jurie. "Diffusion Models for Counterfactual Explanations". In Computer Vision – ACCV 2022, 219–37. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26293-7_14.

7

Boumazouza, Ryma, Fahima Cheikh-Alili, Bertrand Mazure, and Karim Tabia. "A Symbolic Approach for Counterfactual Explanations". In Lecture Notes in Computer Science, 270–77. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58449-8_21.

8

Jacob, Paul, Éloi Zablocki, Hédi Ben-Younes, Mickaël Chen, Patrick Pérez, and Matthieu Cord. "STEEX: Steering Counterfactual Explanations with Semantics". In Lecture Notes in Computer Science, 387–403. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19775-8_23.

9

Van Looveren, Arnaud, and Janis Klaise. "Interpretable Counterfactual Explanations Guided by Prototypes". In Machine Learning and Knowledge Discovery in Databases. Research Track, 650–65. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86520-7_40.

10

Li, Peiyu, Soukaïna Filali Boubrahimi, and Shah Muhammad Hamdi. "Motif-Guided Time Series Counterfactual Explanations". In Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges, 203–15. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37731-0_16.


Conference proceedings on the topic "Counterfactual explanations"

1

Leofante, Francesco, Elena Botoeva, and Vineet Rajani. "Counterfactual Explanations and Model Multiplicity: a Relational Verification View". In 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/78.

Abstract
We study the interplay between counterfactual explanations and model multiplicity in the context of neural network classifiers. We show that current explanation methods often produce counterfactuals whose validity is not preserved under model multiplicity. We then study the problem of generating counterfactuals that are guaranteed to be robust to model multiplicity, characterise its complexity and propose an approach to solve this problem using ideas from relational verification.
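A toy illustration of the failure mode (the three stand-in "models" below are hypothetical decision thresholds, not the paper's networks): a counterfactual is robust to model multiplicity only if every equally plausible model accepts it.

```python
# Three equally accurate stand-in models that disagree near the boundary.
models = [lambda x: x > 0.50, lambda x: x > 0.52, lambda x: x > 0.48]

def valid_under_multiplicity(x_cf):
    """A counterfactual is robust only if all plausible models flip."""
    return all(m(x_cf) for m in models)

print(valid_under_multiplicity(0.51))  # False: one model still rejects it
print(valid_under_multiplicity(0.60))  # True: valid for every model
```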
2

Zhao, Wenqi, Satoshi Oyama, and Masahito Kurihara. "Generating Natural Counterfactual Visual Explanations". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/742.

Abstract
Counterfactual explanations help users to understand the behaviors of machine learning models by changing the inputs for the existing outputs. For an image classification task, an example counterfactual visual explanation explains: "for an example that belongs to class A, what changes do we need to make to the input so that the output is more inclined to class B." Our research considers changing the attribute description text of class A on the basis of the attributes of class B and generating counterfactual images on the basis of the modified text. We can use the prediction results of the model on counterfactual images to find the attributes that have the greatest effect when the model is predicting classes A and B. We applied our method to a fine-grained image classification dataset and used the generative adversarial network to generate natural counterfactual visual explanations. To evaluate these explanations, we used them to assist crowdsourcing workers in an image classification task. We found that, within a specific range, they improved classification accuracy.
3

Albini, Emanuele, Jason Long, Danial Dervovic, and Daniele Magazzeni. "Counterfactual Shapley Additive Explanations". In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533168.

4

Jeanneret, Guillaume, Loïc Simon, and Frédéric Jurie. "Adversarial Counterfactual Visual Explanations". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01576.

5

Dervakos, Edmund, Konstantinos Thomas, Giorgos Filandrianos, and Giorgos Stamou. "Choose your Data Wisely: A Framework for Semantic Counterfactuals". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/43.

Abstract
Counterfactual explanations have been argued to be one of the most intuitive forms of explanation. They are typically defined as a minimal set of edits on a given data sample that, when applied, changes the output of a model on that sample. However, a minimal set of edits is not always clear and understandable to an end-user, as it could constitute an adversarial example (which is indistinguishable from the original data sample to an end-user). Instead, there are recent ideas that the notion of minimality in the context of counterfactuals should refer to the semantics of the data sample, and not to the feature space. In this work, we build on these ideas, and propose a framework that provides counterfactual explanations in terms of knowledge graphs. We provide an algorithm for computing such explanations (given some assumptions about the underlying knowledge), and quantitatively evaluate the framework with a user study.
6

Aryal, Saugat, and Mark T. Keane. "Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/732.

Abstract
Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told “if you asked for a loan with a shorter term, it would have been approved”). Counterfactuals explain what changes to the input-features of an AI system change the output-decision. However, there is a sub-type of counterfactual, semi-factuals, that have received less attention in AI (though the Cognitive Sciences have studied them more). This paper surveys semi-factual explanation, summarising historical and recent work. It defines key desiderata for semi-factual XAI, reporting benchmark tests of historical algorithms (as well as a novel, naïve method) to provide a solid basis for future developments.
7

Rodriguez, Pau, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam Laradji, Laurent Charlin, and David Vazquez. "Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00109.

8

Tran, Khanh Hiep, Azin Ghazimatin, and Rishiraj Saha Roy. "Counterfactual Explanations for Neural Recommenders". In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3404835.3463005.

9

Zemni, Mehdi, Mickaël Chen, Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, and Matthieu Cord. "OCTET: Object-aware Counterfactual Explanations". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01446.

10

Artelt, Andre, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, Malte Schilling, and Barbara Hammer. "Evaluating Robustness of Counterfactual Explanations". In 2021 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2021. http://dx.doi.org/10.1109/ssci50451.2021.9660058.


Reports on the topic "Counterfactual explanations"

1

Gandelman, Néstor. A Comparison of Saving Rates: Micro Evidence from Seventeen Latin American and Caribbean Countries. Inter-American Development Bank, July 2015. http://dx.doi.org/10.18235/0011701.

Abstract
Using micro data on expenditure and income for 17 Latin American and Caribbean (LAC) countries, this paper presents stylized facts on saving behavior by age, education, income and place of residence. Counterfactual saving rates are computed by imposing the saving behavior, the population distribution or the income distribution of two benchmark economies (the United States and Korea). The results suggest that the difference in national saving rates between LAC and the benchmark economies can mainly be attributed to differences in saving behavior of the population and, to a lesser extent, to differences in the distribution of the population by educational levels. Other demographic or income distribution differences are not quantitatively important as explanations of saving rates.