
Journal articles on the topic "Scenario and counterfactual explanations"


Browse the top 50 journal articles for research on the topic "Scenario and counterfactual explanations".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its abstract online, provided the relevant parameters are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Labaien Soto, Jokin, Ekhi Zugasti Uriguen, and Xabier De Carlos Garcia. "Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using Autoencoders." Applied Sciences 13, no. 5 (February 24, 2023): 2912. http://dx.doi.org/10.3390/app13052912.

Abstract:
Explainable Artificial Intelligence (XAI) has gained significant attention in recent years due to concerns over the lack of interpretability of Deep Learning models, which hinders their decision-making processes. To address this issue, counterfactual explanations have been proposed to elucidate the reasoning behind a model’s decisions by providing what-if statements as explanations. However, generating counterfactuals traditionally involves solving an optimization problem for each input, making it impractical for real-time feedback. Moreover, counterfactuals must meet specific criteria, including being user-driven, causing minimal changes, and staying within the data distribution. To overcome these challenges, a novel model-agnostic approach called Real-Time Guided Counterfactual Explanations (RTGCEx) is proposed. This approach utilizes autoencoders to generate real-time counterfactual explanations that adhere to these criteria by optimizing a multiobjective loss function. The performance of RTGCEx has been evaluated on two datasets: MNIST and Gearbox, a synthetic time series dataset. The results demonstrate that RTGCEx outperforms traditional methods in terms of speed and efficacy on MNIST, while also effectively identifying and rectifying anomalies in the Gearbox dataset, highlighting its versatility across different scenarios.
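The abstract describes a generator that produces counterfactuals in a single forward pass while balancing validity, proximity, and plausibility. As a minimal, hedged sketch of that kind of multiobjective loss (not the published RTGCEx implementation), one might write the objective in PyTorch as follows; the weighting terms and the classifier and autoencoder modules are assumptions for illustration.

```python
# Hedged sketch: a multiobjective loss of the kind an autoencoder-based
# counterfactual generator might optimize (validity + proximity + plausibility).
# "classifier" and "autoencoder" are assumed, pre-trained torch.nn.Modules.
import torch
import torch.nn.functional as F

def counterfactual_loss(x, x_cf, target_class, classifier, autoencoder,
                        w_valid=1.0, w_prox=0.5, w_plaus=0.5):
    # Validity: push the counterfactual towards the user-chosen target class.
    validity = F.cross_entropy(classifier(x_cf), target_class)
    # Proximity: an L1 penalty keeps changes to the original input small and sparse.
    proximity = torch.norm(x_cf - x, p=1)
    # Plausibility: a reconstruction penalty keeps x_cf close to the data manifold.
    plausibility = F.mse_loss(autoencoder(x_cf), x_cf)
    return w_valid * validity + w_prox * proximity + w_plaus * plausibility
```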
2

Virmajoki, Veli. "Frameworks in Historiography: Explanation, Scenarios, and Futures." Journal of the Philosophy of History 17, no. 2 (July 3, 2023): 288–309. http://dx.doi.org/10.1163/18722636-12341501.

Abstract:
In this paper, I analyze how frameworks shape historiographical explanations. I argue that, in order to identify a sequence of events as relevant to a historical outcome, assumptions about the workings of the relevant domain have to be made. By extending Lakatosian considerations, I argue that these assumptions are provided by a framework that contains a set of factors and intertwined principles that (supposedly) govern how a historical phenomenon works. I connect frameworks with a counterfactual account of historical explanation. Frameworks enable us to explain the past by providing a backbone of explanatory patterns of counterfactual dependency. I conclude by noting that both counterfactual scenarios and scenarios of the future require frameworks and, therefore, historiographical explanation generates a set of possible futures. Analyzing these possible futures enables us to reveal the theoretical commitments of historiography.
3

Crawford, Beverly. "Germany's Future Political Challenges: Imagine that The New Yorker Profiled the German Chancellor in 2015." German Politics and Society 23, no. 4 (December 1, 2005): 69–87. http://dx.doi.org/10.3167/gps.2005.230404.

Abstract:
What follows is a fictitious scenario, a "thought experiment," meant to project a particular future for Germany if certain assumptions hold. Scenarios are hypotheses that rest on a set of assumptions and one or two "wild cards." They can reveal forces of change that might be otherwise hidden, discard those that are not plausible, and describe the future of trends that are relatively certain. Indeed, scenarios create a particular future in the same way that counterfactual methods create a different past. Counterfactual methods predict how events would have unfolded had a few elements of the story been changed, with a focus on varying conditions that seem important and that can be manipulated. For instance, to explore the effects of military factors on the likelihood of war, one might ask: "how would pre 1914 diplomacy have evolved if the leaders of Europe had not believed that conquest was easy?" Or to explore the importance of broad social and political factors in causing Nazi aggression: "How might the 1930s have unfolded had Hitler died in 1932?" The greater the impact of the posited changes, the more important the actual factors that were manipulated. Assuming that the structure of explanation and prediction are the same, scenario writing pursues a similar method. But, instead of seeking alternative explanations for the past, scenarios project relative certainties and then manipulate the important but uncertain factors, to create a plausible story about the future.
4

Nolan, Daniel. "The Possibilities of History." Journal of the Philosophy of History 10, no. 3 (November 17, 2016): 441–56. http://dx.doi.org/10.1163/18722636-12341346.

Abstract:
Several kinds of historical alternatives are distinguished. Different kinds of historical alternatives are valuable to the practice of history for different reasons. Important uses for historical alternatives include representing different sides of historical disputes; distributing chances of different outcomes over alternatives; and offering explanations of why various alternatives did not in fact happen. Consideration of counterfactuals about what would have happened had things been different in particular ways plays particularly useful roles in reasoning about historical analogues of current conditions; reasoning about causal claims; and in evaluating historical explanations. When evaluating the role of alternative histories in historical thinking, we should keep in mind the uses of historical alternatives that go well beyond the long-term and specific scenarios that are the focus of so-called “counterfactual history”.
5

Rahimi, Saeed, Antoni B. Moore, and Peter A. Whigham. "Beyond Objects in Space-Time: Towards a Movement Analysis Framework with ‘How’ and ‘Why’ Elements." ISPRS International Journal of Geo-Information 10, no. 3 (March 22, 2021): 190. http://dx.doi.org/10.3390/ijgi10030190.

Abstract:
Current spatiotemporal data has facilitated movement studies to shift objectives from descriptive models to explanations of the underlying causes of movement. From both a practical and theoretical standpoint, progress in developing approaches for these explanations should be founded on a conceptual model. This paper presents such a model in which three conceptual levels of abstraction are proposed to frame an agent-based representation of movement decision-making processes: ‘attribute,’ ‘actor,’ and ‘autonomous agent’. These in combination with three temporal, spatial, and spatiotemporal general forms of observations distinguish nine (3 × 3) representation typologies of movement data within the agent framework. Thirdly, there are three levels of cognitive reasoning: ‘association,’ ‘intervention,’ and ‘counterfactual’. This makes for 27 possible types of operation embedded in a conceptual cube with the level of abstraction, type of observation, and degree of cognitive reasoning forming the three axes. The conceptual model is an arena where movement queries and the statement of relevant objectives takes place. An example implementation of a tightly constrained spatiotemporal scenario to ground the agent-structure was summarised. The platform has been well-defined so as to accommodate different tools and techniques to drive causal inference in computational movement analysis as an immediate future step.
6

Delaney, Eoin, Arjun Pakrashi, Derek Greene, and Mark T. Keane. "Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ (Abstract Reprint)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22696. http://dx.doi.org/10.1609/aaai.v38i20.30596.

Abstract:
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
7

Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations." Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.

Abstract:
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
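As a rough, hedged illustration of the search problem the abstract describes (and not the authors' AMCC algorithm), one can brute-force the smallest feasible change that flips a model's prediction while restricting candidate values to a domain-specific constraint set; the `predict` callable and the `allowed_changes` constraint set below are assumed placeholders.

```python
# Brute-force sketch of searching for a minimally-contrastive, achievable
# counterfactual: try allowed single-feature changes first, then pairs.
from itertools import combinations, product

def find_achievable_counterfactual(x, predict, allowed_changes, target):
    """x: dict feature -> value; allowed_changes: feature -> list of feasible values;
    predict: callable mapping a feature dict to a class label."""
    features = list(allowed_changes)
    for size in (1, 2):                                      # fewest edits first
        for combo in combinations(features, size):
            for values in product(*(allowed_changes[f] for f in combo)):
                candidate = dict(x)
                candidate.update(zip(combo, values))
                if predict(candidate) == target:             # contrastive and feasible
                    return candidate
    return None                                              # nothing found within 2 edits
```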
8

Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics." Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.

Abstract:
Our goal in this paper is to extend counterfactual accounts of scientific explanation to mathematics. Our focus, in particular, is on intra-mathematical explanations: explanations of one mathematical fact in terms of another. We offer a basic counterfactual theory of intra-mathematical explanations, before modelling the explanatory structure of a test case using counterfactual machinery. We finish by considering the application of counterpossibles to mathematical explanation, and explore a second test case along these lines.
9

Fernández-Loría, Carlos, Foster Provost, and Xintian Han. "Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach." MIS Quarterly 45, no. 3 (September 1, 2022): 1635–60. http://dx.doi.org/10.25300/misq/2022/16749.

Abstract:
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
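To make the definition concrete, the following sketch enumerates feature sets whose replacement by reference values flips a decision and that contain no smaller flipping subset (the irreducibility condition). It is an illustrative brute-force reading of the definition, not the authors' heuristic procedure; the `decide` callable and the reference values `x_ref` are assumed inputs.

```python
# Hedged sketch of the definition (not the paper's heuristic search): an
# explanation is a set of inputs whose change flips the decision (causal) and
# whose proper subsets do not (irreducible). Exponential; illustration only.
from itertools import combinations

def irreducible_explanations(x, x_ref, decide, max_size=3):
    """x, x_ref: dicts feature -> value; decide: callable dict -> decision."""
    base = decide(x)

    def flips(subset):
        probe = {f: (x_ref[f] if f in subset else x[f]) for f in x}
        return decide(probe) != base

    minimal = []
    for size in range(1, max_size + 1):
        for subset in map(frozenset, combinations(x, size)):
            # keep only sets that flip the decision and contain no smaller flipping set
            if flips(subset) and not any(m < subset for m in minimal):
                minimal.append(subset)
    return minimal
```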
10

Prado-Romero, Mario Alfonso, Bardh Prenkaj, and Giovanni Stilo. "Robust Stochastic Graph Generator for Counterfactual Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21518–26. http://dx.doi.org/10.1609/aaai.v38i19.30149.

Abstract:
Counterfactual Explanation (CE) techniques have garnered attention as a means to provide insights to the users engaging with AI systems. While extensively researched in domains such as medical imaging and autonomous vehicles, Graph Counterfactual Explanation (GCE) methods have been comparatively under-explored. GCEs generate a new graph similar to the original one, with a different outcome grounded on the underlying predictive model. Among these GCE techniques, those rooted in generative mechanisms have received relatively limited investigation despite demonstrating impressive accomplishments in other domains, such as artistic styles and natural language modelling. The preference for generative explainers stems from their capacity to generate counterfactual instances during inference, leveraging autonomously acquired perturbations of the input graph. Motivated by the rationales above, our study introduces RSGG-CE, a novel Robust Stochastic Graph Generator for Counterfactual Explanations able to produce counterfactual examples from the learned latent space considering a partially ordered generation sequence. Furthermore, we undertake quantitative and qualitative analyses to compare RSGG-CE's performance against SoA generative explainers, highlighting its increased ability to engendering plausible counterfactual candidates.
11

He, Ming, Boyang An, Jiwen Wang, and Hao Wen. "CETD: Counterfactual Explanations by Considering Temporal Dependencies in Sequential Recommendation." Applied Sciences 13, no. 20 (October 11, 2023): 11176. http://dx.doi.org/10.3390/app132011176.

Abstract:
Providing interpretable explanations can notably enhance users’ confidence and satisfaction with regard to recommender systems. Counterfactual explanations demonstrate remarkable performance in the realm of explainable sequential recommendation. However, current counterfactual explanation models designed for sequential recommendation overlook the temporal dependencies in a user’s past behavior sequence. Furthermore, counterfactual histories should be as similar to the real history as possible to avoid conflicting with the user’s genuine behavioral preferences. This paper presents counterfactual explanations by Considering temporal dependencies (CETD), a counterfactual explanation model that utilizes a variational autoencoder (VAE) for sequential recommendation and takes into account temporal dependencies. To improve explainability, CETD employs a recurrent neural network (RNN) when generating counterfactual histories, thereby capturing both the user’s long-term preferences and short-term behavior in their real behavioral history. Meanwhile, CETD fits the distribution of reconstructed data (i.e., the counterfactual sequences generated by VAE perturbation) in a latent space, and leverages learned variance to decrease the proximity of counterfactual histories by minimizing the distance between the counterfactual sequences and the original sequence. Thorough experiments conducted on two real-world datasets demonstrate that the proposed CETD consistently surpasses current state-of-the-art methods.
12

Lee, Min Hun, and Chong Jun Chew. "Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making." Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (September 28, 2023): 1–22. http://dx.doi.org/10.1145/3610218.

Abstract:
Artificial intelligence (AI) is increasingly being considered to assist human decision-making in high-stake domains (e.g. health). However, researchers have discussed an issue that humans can over-rely on wrong suggestions of the AI model instead of achieving human AI complementary performance. In this work, we utilized salient feature explanations along with what-if, counterfactual explanations to make humans review AI suggestions more analytically to reduce overreliance on AI and explored the effect of these explanations on trust and reliance on AI during clinical decision-making. We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion, and analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations. Our results showed that the AI model with both salient features and counterfactual explanations assisted therapists and laypersons to improve their performance and agreement level on the task when 'right' AI outputs are presented. While both therapists and laypersons over-relied on 'wrong' AI outputs, counterfactual explanations assisted both therapists and laypersons to reduce their over-reliance on 'wrong' AI outputs by 21% compared to salient feature explanations. Specifically, laypersons had higher performance degrades by 18.0 f1-score with salient feature explanations and 14.0 f1-score with counterfactual explanations than therapists with performance degrades of 8.6 and 2.8 f1-scores respectively. Our work discusses the potential of counterfactual explanations to better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs and implications for improving human-AI collaborative decision-making.
13

Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations." Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.

Abstract:
Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation provide valid and complete explanations, but they have an epistemological problem: they are often too complex for humans to understand and too expensive to compute even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and can compute. We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations for counterfactuals to examine the partiality of counterfactual explanations that can hide biases; we define fair and adequate explanations in such a setting. We provide formal results about the algorithmic complexity of fair and adequate explanations. We then detail two sophisticated counterfactual models, one based on causal graphs, and one based on transport theories. We show transport based models have several theoretical advantages over the competition as explanation frameworks for machine learning algorithms.
14

Lucic, Ana, Harrie Oosterhuis, Hinda Haned, and Maarten de Rijke. "FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5313–22. http://dx.doi.org/10.1609/aaai.v36i5.20468.

Abstract:
Model interpretability has become an important problem in machine learning (ML) due to the increased effect algorithmic decisions have on humans. Counterfactual explanations can help users understand not only why ML models make certain decisions, but also how these decisions can be changed. We frame the problem of finding counterfactual explanations as an optimization task and extend previous work that could only be applied to differentiable models. In order to accommodate non-differentiable models such as tree ensembles, we use probabilistic model approximations in the optimization framework. We introduce an approximation technique that is effective for finding counterfactual explanations for predictions of the original model and show that our counterfactual examples are significantly closer to the original instances than those produced by other methods specifically designed for tree ensembles.
15

Cong, Zicun, Lingyang Chu, Yu Yang, and Jian Pei. "Comprehensible counterfactual explanation on Kolmogorov-Smirnov test." Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1583–96. http://dx.doi.org/10.14778/3461535.3461546.

Abstract:
The Kolmogorov-Smirnov (KS) test is popularly used in many applications, such as anomaly detection, astronomy, database security and AI systems. One challenge remained untouched is how we can obtain an explanation on why a test set fails the KS test. In this paper, we tackle the problem of producing counterfactual explanations for test data failing the KS test. Concept-wise, we propose the notion of most comprehensible counterfactual explanations, which accommodates both the KS test data and the user domain knowledge in producing explanations. Computation-wise, we develop an efficient algorithm MOCHE (for MOst CompreHensible Explanation) that avoids enumerating and checking an exponential number of subsets of the test set failing the KS test. MOCHE not only guarantees to produce the most comprehensible counterfactual explanations, but also is orders of magnitudes faster than the baselines. Experiment-wise, we present a systematic empirical study on a series of benchmark real datasets to verify the effectiveness, efficiency and scalability of most comprehensible counterfactual explanations and MOCHE.
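MOCHE itself is not reproduced here; purely as an illustration of what a counterfactual explanation for a failed KS test looks like, the following greedy baseline removes, one point at a time, the test observation whose removal most reduces the KS statistic, until the two-sample test no longer rejects. The significance level and the greedy strategy are assumptions of this sketch.

```python
# Illustrative greedy baseline (not MOCHE): drop the test point that most
# reduces the KS statistic until the two-sample test no longer rejects.
import numpy as np
from scipy.stats import ks_2samp

def greedy_ks_counterfactual(reference, test, alpha=0.05):
    test = list(test)
    removed = []
    while ks_2samp(reference, test).pvalue < alpha and len(test) > 1:
        stats = [ks_2samp(reference, test[:i] + test[i + 1:]).statistic
                 for i in range(len(test))]
        i = int(np.argmin(stats))             # point whose removal helps most
        removed.append(test.pop(i))
    return removed, test                      # removed points "explain" the failure
```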
16

Geng, Zixuan, Maximilian Schleich, and Dan Suciu. "Computing Rule-Based Explanations by Leveraging Counterfactuals." Proceedings of the VLDB Endowment 16, no. 3 (November 2022): 420–32. http://dx.doi.org/10.14778/3570690.3570693.

Abstract:
Sophisticated machine models are increasingly used for high-stakes decisions in everyday life. There is an urgent need to develop effective explanation techniques for such automated decisions. Rule-Based Explanations have been proposed for high-stake decisions like loan applications, because they increase the users' trust in the decision. However, rule-based explanations are very inefficient to compute, and existing systems sacrifice their quality in order to achieve reasonable performance. We propose a novel approach to compute rule-based explanations, by using a different type of explanation, Counterfactual Explanations, for which several efficient systems have already been developed. We prove a Duality Theorem, showing that rule-based and counterfactual-based explanations are dual to each other, then use this observation to develop an efficient algorithm for computing rule-based explanations, which uses the counterfactual-based explanation as an oracle. We conduct extensive experiments showing that our system computes rule-based explanations of higher quality, and with the same or better performance, than two previous systems, MinSetCover and Anchor.
17

Sunstein, Cass R. "Historical Explanations Always Involve Counterfactual History." Journal of the Philosophy of History 10, no. 3 (November 17, 2016): 433–40. http://dx.doi.org/10.1163/18722636-12341345.

Abstract:
Historical explanations are a form of counterfactual history. To offer an explanation of what happened, historians have to identify causes, and whenever they identify causes, they immediately conjure up a counterfactual history, a parallel world. No one doubts that there is a great deal of distance between science fiction novelists and the world’s great historians, but along an important dimension, they are playing the same game.
18

McEleney, Alice, and Ruth M. J. Byrne. "Spontaneous counterfactual thoughts and causal explanations." Thinking & Reasoning 12, no. 2 (May 2006): 235–55. http://dx.doi.org/10.1080/13546780500317897.

19

Carreira-Perpiñán, Miguel Á., and Suryabhan Singh Hada. "Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6903–11. http://dx.doi.org/10.1609/aaai.v35i8.16851.

Abstract:
We consider counterfactual explanations, the problem of minimally adjusting features in a source input instance so that it is classified as a target class under a given classifier. This has become a topic of recent interest as a way to query a trained model and suggest possible actions to overturn its decision. Mathematically, the problem is formally equivalent to that of finding adversarial examples, which also has attracted significant attention recently. Most work on either counterfactual explanations or adversarial examples has focused on differentiable classifiers, such as neural nets. We focus on classification trees, both axis-aligned and oblique (having hyperplane splits). Although here the counterfactual optimization problem is nonconvex and nondifferentiable, we show that an exact solution can be computed very efficiently, even with high-dimensional feature vectors and with both continuous and categorical features, and demonstrate it in different datasets and settings. The results are particularly relevant for finance, medicine or legal applications, where interpretability and counterfactual explanations are particularly important.
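For the simpler axis-aligned case (a fitted sklearn DecisionTreeClassifier, not the oblique trees that are the paper's main contribution), the exact solution can be sketched directly: enumerate the leaf boxes that predict the target class and project the query point onto each box. The small `eps` margin and the use of clipping as the projection are assumptions of this sketch.

```python
# Hedged sketch for axis-aligned trees only: the closest counterfactual lies in
# some leaf of the target class, and projecting x onto that leaf's box (via
# clipping) gives the closest point inside it. "target" is a class index.
import numpy as np

def tree_counterfactual(clf, x, target, eps=1e-6):
    t = clf.tree_
    best, best_dist = None, np.inf

    def recurse(node, lo, hi):
        nonlocal best, best_dist
        if t.children_left[node] == -1:                    # reached a leaf
            if np.argmax(t.value[node]) == target:
                z = np.clip(x, lo + eps, hi - eps)         # project x into the leaf box
                d = np.linalg.norm(z - x)
                if d < best_dist:
                    best, best_dist = z, d
            return
        f, thr = t.feature[node], t.threshold[node]
        lo_left, hi_left = lo.copy(), hi.copy()
        hi_left[f] = min(hi_left[f], thr)                  # left branch: x[f] <= thr
        lo_right, hi_right = lo.copy(), hi.copy()
        lo_right[f] = max(lo_right[f], thr)                # right branch: x[f] > thr
        recurse(t.children_left[node], lo_left, hi_left)
        recurse(t.children_right[node], lo_right, hi_right)

    recurse(0, np.full(x.shape[0], -1e9), np.full(x.shape[0], 1e9))
    return best
```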
20

Baron, Sam. "Counterfactual Scheming." Mind 129, no. 514 (April 1, 2019): 535–62. http://dx.doi.org/10.1093/mind/fzz008.

Abstract:
Mathematics appears to play a genuine explanatory role in science. But how do mathematical explanations work? Recently, a counterfactual approach to mathematical explanation has been suggested. I argue that such a view fails to differentiate the explanatory uses of mathematics within science from the non-explanatory uses. I go on to offer a solution to this problem by combining elements of the counterfactual theory of explanation with elements of a unification theory of explanation. The result is a theory according to which a counterfactual is explanatory when it is an instance of a generalized counterfactual scheme.
21

Schleich, Maximilian, Zixuan Geng, Yihong Zhang, and Dan Suciu. "GeCo." Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1681–93. http://dx.doi.org/10.14778/3461535.3461555.

Abstract:
Machine learning is increasingly applied in high-stakes decision making that directly affects people's lives, and this leads to an increased demand for systems to explain their decisions. Explanations often take the form of counterfactuals, which consist of conveying to the end user what she/he needs to change in order to improve the outcome. Computing counterfactual explanations is challenging, because of the inherent tension between a rich semantics of the domain, and the need for real time response. In this paper we present GeCo, the first system that can compute plausible and feasible counterfactual explanations in real time. At its core, GeCo relies on a genetic algorithm, which is customized to favor searching counterfactual explanations with the smallest number of changes. To achieve real-time performance, we introduce two novel optimizations: Δ-representation of candidate counterfactuals, and partial evaluation of the classifier. We compare empirically GeCo against five other systems described in the literature, and show that it is the only system that can achieve both high quality explanations and real time answers.
22

Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, and Lambert Mathias. "Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9837–45. http://dx.doi.org/10.1609/aaai.v37i8.26174.

Abstract:
Evaluating an explanation's faithfulness is desired for many reasons such as trust, interpretability and diagnosing the sources of model's errors. In this work, which focuses on the NLI task, we introduce the methodology of Faithfulness-through-Counterfactuals, which first generates a counterfactual hypothesis based on the logical predicates expressed in the explanation, and then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic (i.e. if the new formula is logically satisfiable). In contrast to existing approaches, this does not require any explanations for training a separate verification model. We first validate the efficacy of automatic counterfactual hypothesis generation, leveraging on the few-shot priming paradigm. Next, we show that our proposed metric distinguishes between human-model agreement and disagreement on new counterfactual input. In addition, we conduct a sensitivity analysis to validate that our metric is sensitive to unfaithful explanations.
23

Leofante, Francesco, and Nico Potyka. "Promoting Counterfactual Robustness through Diversity." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21322–30. http://dx.doi.org/10.1609/aaai.v38i19.30127.

Abstract:
Counterfactual explanations shed light on the decisions of black-box models by explaining how an input can be altered to obtain a favourable decision from the model (e.g., when a loan application has been rejected). However, as noted recently, counterfactual explainers may lack robustness in the sense that a minor change in the input can cause a major change in the explanation. This can cause confusion on the user side and open the door for adversarial attacks. In this paper, we study some sources of non-robustness. While there are fundamental reasons for why an explainer that returns a single counterfactual cannot be robust in all instances, we show that some interesting robustness guarantees can be given by reporting multiple rather than a single counterfactual. Unfortunately, the number of counterfactuals that need to be reported for the theoretical guarantees to hold can be prohibitively large. We therefore propose an approximation algorithm that uses a diversity criterion to select a feasible number of most relevant explanations and study its robustness empirically. Our experiments indicate that our method improves the state-of-the-art in generating robust explanations, while maintaining other desirable properties and providing competitive computational performance.
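The approximation algorithm itself is not spelled out in the abstract; as a hedged sketch of the general idea of reporting a small but diverse set of counterfactuals, a greedy max-min (farthest-point) selection over a pool of already-valid candidates could look as follows. The selection criterion here is an assumption of this illustration, not the paper's diversity criterion.

```python
# Sketch: pick k mutually distant counterfactuals from a candidate pool via
# greedy max-min selection on pairwise Euclidean distances.
import numpy as np

def select_diverse(candidates, k):
    """candidates: (n, d) array of valid counterfactuals; returns k diverse rows."""
    chosen = [0]                                           # start from the first candidate
    while len(chosen) < min(k, len(candidates)):
        dists = np.min(
            np.linalg.norm(candidates[:, None, :] - candidates[chosen][None, :, :], axis=2),
            axis=1)
        dists[chosen] = -np.inf                            # never re-pick selected points
        chosen.append(int(np.argmax(dists)))               # farthest from the current set
    return candidates[chosen]
```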
24

Gulshad, Sadaf, and Arnold Smeulders. "Counterfactual attribute-based visual explanations for classification." International Journal of Multimedia Information Retrieval 10, no. 2 (April 18, 2021): 127–40. http://dx.doi.org/10.1007/s13735-021-00208-3.

Abstract:
In this paper, our aim is to provide human understandable intuitive factual and counterfactual explanations for the decisions of neural networks. Humans tend to reinforce their decisions by providing attributes and counterattributes. Hence, in this work, we utilize attributes as well as examples to provide explanations. In order to provide counterexplanations we make use of directed perturbations to arrive at the counterclass attribute values; in doing so, we explain what is present and what is absent in the original image. We evaluate our method when images are misclassified into closer counterclasses as well as when misclassified into completely different counterclasses. We conducted experiments on both finegrained as well as coarsegrained datasets. We verified our attribute-based explanations method both quantitatively and qualitatively and showed that attributes provide discriminating and human understandable explanations for both standard as well as robust networks.
25

de Oliveira, Raphael Mazzine Barbosa, and David Martens. "A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data." Applied Sciences 11, no. 16 (August 7, 2021): 7274. http://dx.doi.org/10.3390/app11167274.

Abstract:
Counterfactual explanations are viewed as an effective way to explain machine learning predictions. This interest is reflected by a relatively young literature with already dozens of algorithms aiming to generate such explanations. These algorithms are focused on finding how features can be modified to change the output classification. However, this rather general objective can be achieved in different ways, which brings about the need for a methodology to test and benchmark these algorithms. The contributions of this work are manifold: First, a large benchmarking study of 10 algorithmic approaches on 22 tabular datasets is performed, using nine relevant evaluation metrics; second, the introduction of a novel, first of its kind, framework to test counterfactual generation algorithms; third, a set of objective metrics to evaluate and compare counterfactual results; and, finally, insight from the benchmarking results that indicate which approaches obtain the best performance on what type of dataset. This benchmarking study and framework can help practitioners in determining which technique and building blocks most suit their context, and can help researchers in the design and evaluation of current and future counterfactual generation algorithms. Our findings show that, overall, there’s no single best algorithm to generate counterfactual explanations as the performance highly depends on properties related to the dataset, model, score, and factual point specificities.
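The nine evaluation metrics are not listed in the abstract; as a hedged illustration of the kind of objective metrics such a benchmark relies on, three commonly used ones (validity, L1 proximity, sparsity) can be written down directly. These particular definitions are assumptions, not necessarily the metrics used in the study.

```python
# Illustrative counterfactual evaluation metrics (assumed definitions).
import numpy as np

def validity(model, x_cf, target):
    """1.0 if the model actually assigns the target class to the counterfactual."""
    return float(model.predict(x_cf.reshape(1, -1))[0] == target)

def proximity(x, x_cf):
    """L1 distance between the factual point and its counterfactual."""
    return float(np.sum(np.abs(x_cf - x)))

def sparsity(x, x_cf, tol=1e-9):
    """Number of features that were changed to produce the counterfactual."""
    return int(np.sum(np.abs(x_cf - x) > tol))
```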
26

Yang, Fan, Ninghao Liu, Mengnan Du, and Xia Hu. "Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 59–68. http://dx.doi.org/10.1145/3468507.3468517.

Abstract:
With the wide use of deep neural networks (DNN), model interpretability has become a critical concern, since explainable decisions are preferred in high-stake scenarios. Current interpretation techniques mainly focus on the feature attribution perspective, which are limited in indicating why and how particular explanations are related to the prediction. To this end, an intriguing class of explanations, named counterfactuals, has been developed to further explore the "what-if" circumstances for interpretation, and enables the reasoning capability on black-box models. However, generating counterfactuals for raw data instances (i.e., text and image) is still in the early stage due to its challenges on high data dimensionality and unsemantic raw features. In this paper, we design a framework to generate counterfactuals specifically for raw data instances with the proposed Attribute-Informed Perturbation (AIP). By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently. Instead of directly modifying instances in the data space, we iteratively optimize the constructed attribute-informed latent space, where features are more robust and semantic. Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework, and show the superiority over other alternatives. Besides, we also introduce some practical applications based on our framework, indicating its potential beyond the model interpretability aspect.
27

Kenny, Eoin M., and Mark T. Keane. "On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11575–85. http://dx.doi.org/10.1609/aaai.v35i13.17377.

Abstract:
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs. In response to this disquiet, counterfactual explanations have become very popular in eXplainable AI (XAI) due to their asserted computational, psychological, and legal benefits. In contrast however, semi-factuals (which appear to be equally useful) have surprisingly received no attention. Most counterfactual methods address tabular rather than image data, partly because the non-discrete nature of images makes good counterfactuals difficult to define; indeed, generating plausible counterfactual images which lie on the data manifold is also problematic. This paper advances a novel method for generating plausible counterfactuals and semi-factuals for black-box CNN classifiers doing computer vision. The present method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all “exceptional” features in a test image to be “normal” from the perspective of the counterfactual class, to generate plausible counterfactual images. Two controlled experiments compare this method to others in the literature, showing that PIECE generates highly plausible counterfactuals (and the best semi-factuals) on several benchmark measures.
28

Le, Thao, Tim Miller, Ronal Singh, and Liz Sonenberg. "Explaining Model Confidence Using Counterfactuals." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11856–64. http://dx.doi.org/10.1609/aaai.v37i10.26399.

Abstract:
Displaying confidence scores in human-AI interaction has been shown to help build trust between humans and AI systems. However, most existing research uses only the confidence score as a form of communication. As confidence scores are just another model output, users may want to understand why the algorithm is confident to determine whether to accept the confidence score. In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction. We present two methods for understanding model confidence using counterfactual explanation: (1) based on counterfactual examples; and (2) based on visualisation of the counterfactual space. Both increase understanding and trust for study participants over a baseline of no explanation, but qualitative results show that they are used quite differently, leading to recommendations of when to use each one and directions of designing better explanations.
29

Wellawatte, Geemi P., Aditi Seshadri, and Andrew D. White. "Model agnostic generation of counterfactual explanations for molecules." Chemical Science 13, no. 13 (2022): 3697–705. http://dx.doi.org/10.1039/d1sc05259d.

30

R, Jain. "Transparency in AI Decision Making: A Survey of Explainable AI Methods and Applications." Advances in Robotic Technology 2, no. 1 (January 19, 2024): 1–10. http://dx.doi.org/10.23880/art-16000110.

Abstract:
Artificial Intelligence (AI) systems have become pervasive in numerous facets of modern life, wielding considerable influence in critical decision-making realms such as healthcare, finance, criminal justice, and beyond. Yet, the inherent opacity of many AI models presents significant hurdles concerning trust, accountability, and fairness. To address these challenges, Explainable AI (XAI) has emerged as a pivotal area of research, striving to augment the transparency and interpretability of AI systems. This survey paper serves as a comprehensive exploration of the state-of-the-art in XAI methods and their practical applications. We delve into a spectrum of techniques, spanning from model-agnostic approaches to interpretable machine learning models, meticulously scrutinizing their respective strengths, limitations, and real-world implications. The landscape of XAI is rich and varied, with diverse methodologies tailored to address different facets of interpretability. Model-agnostic approaches offer versatility by providing insights into model behavior across various AI architectures. In contrast, interpretable machine learning models prioritize transparency by design, offering inherent understandability at the expense of some predictive performance. Layer-wise Relevance Propagation (LRP) and attention mechanisms delve into the inner workings of neural networks, shedding light on feature importance and decision processes. Additionally, counterfactual explanations open avenues for exploring what-if scenarios, elucidating the causal relationships between input features and model outcomes. In tandem with methodological exploration, this survey scrutinizes the deployment and impact of XAI across multifarious domains. Successful case studies showcase the practical utility of transparent AI in healthcare diagnostics, financial risk assessment, criminal justice systems, and more. By elucidating these use cases, we illuminate the transformative potential of XAI in enhancing decision-making processes while fostering accountability and fairness. Nevertheless, the journey towards fully transparent AI systems is fraught with challenges and opportunities. As we traverse the current landscape of XAI, we identify pressing areas for further research and development. These include refining interpretability metrics, addressing the scalability of XAI techniques to complex models, and navigating the ethical dimensions of transparency in AI decision-making. Through this survey, we endeavor to cultivate a deeper understanding of transparency in AI decision-making, empowering stakeholders to navigate the intricate interplay between accuracy, interpretability, and ethical considerations. By fostering interdisciplinary dialogue and inspiring collaborative innovation, we aspire to catalyze future advancements in Explainable AI, ultimately paving the way towards more accountable and trustworthy AI systems.
31

Carreira-Perpinan, Miguel Á., and Suryabhan Singh Hada. "Very Fast, Approximate Counterfactual Explanations for Decision Forests." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6935–43. http://dx.doi.org/10.1609/aaai.v37i6.25848.

Abstract:
We consider finding a counterfactual explanation for a classification or regression forest, such as a random forest. This requires solving an optimization problem to find the closest input instance to a given instance for which the forest outputs a desired value. Finding an exact solution has a cost that is exponential on the number of leaves in the forest. We propose a simple but very effective approach: we constrain the optimization to input space regions populated by actual data points. The problem reduces to a form of nearest-neighbor search using a certain distance on a certain dataset. This has two advantages: first, the solution can be found very quickly, scaling to large forests and high-dimensional data, and enabling interactive use. Second, the solution found is more likely to be realistic in that it is guided towards high-density areas of input space.
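The core idea, restricting the search to regions populated by actual data points, reduces to a nearest-neighbor query, which can be sketched in a few lines. The brute-force search and the plain Euclidean distance below are simplifications assumed for illustration; the paper uses a specific distance and dataset for the search.

```python
# Minimal sketch of the "restrict to real data points" idea: among training
# instances the forest already classifies as the target class, return the one
# closest to x.
import numpy as np

def nearest_data_counterfactual(forest, X_train, x, target):
    mask = forest.predict(X_train) == target             # candidate region: real points
    candidates = X_train[mask]
    if len(candidates) == 0:
        return None
    d = np.linalg.norm(candidates - x, axis=1)
    return candidates[int(np.argmin(d))]
```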
32

Sokol, Kacper, and Peter Flach. "Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10035–36. http://dx.doi.org/10.1609/aaai.v33i01.330110035.

Abstract:
Explanations in machine learning come in many forms, but a consensus regarding their desired properties is still emerging. In our work we collect and organise these explainability desiderata and discuss how they can be used to systematically evaluate properties and quality of an explainable system using the case of class-contrastive counterfactual statements. This leads us to propose a novel method for explaining predictions of a decision tree with counterfactuals. We show that our model-specific approach exploits all the theoretical advantages of counterfactual explanations, hence improves decision tree interpretability by decoupling the quality of the interpretation from the depth and width of the tree.
33

Amitai, Yotam, Yael Septon, and Ofra Amir. "Explaining Reinforcement Learning Agents through Counterfactual Action Outcomes." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10003–11. http://dx.doi.org/10.1609/aaai.v38i9.28863.

Abstract:
Explainable reinforcement learning (XRL) methods aim to help elucidate agent policies and decision-making processes. The majority of XRL approaches focus on local explanations, seeking to shed light on the reasons an agent acts the way it does at a specific world state. While such explanations are both useful and necessary, they typically do not portray the outcomes of the agent's selected choice of action. In this work, we propose "COViz", a new local explanation method that visually compares the outcome of an agent's chosen action to a counterfactual one. In contrast to most local explanations that provide state-limited observations of the agent's motivation, our method depicts alternative trajectories the agent could have taken from the given state and their outcomes. We evaluated the usefulness of COViz in supporting people's understanding of agents' preferences and compare it with reward decomposition, a local explanation method that describes an agent's expected utility for different actions by decomposing it into meaningful reward types. Furthermore, we examine the complementary benefits of integrating both methods. Our results show that such integration significantly improved participants' performance.
34

VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics." Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626729.

Abstract:
Machine learning systems are deployed in domains such as hiring and healthcare, where undesired classifications can have serious ramifications for the user. Thus, there is a rising demand for explainable AI systems which provide actionable steps for lay users to obtain their desired outcome. To meet this need, we propose FACET, the first explanation analytics system which supports a user in interactively refining counterfactual explanations for decisions made by tree ensembles. As FACET's foundation, we design a novel type of counterfactual explanation called the counterfactual region. Unlike traditional counterfactuals, FACET's regions concisely describe portions of the feature space where the desired outcome is guaranteed, regardless of variations in exact feature values. This property, which we coin explanation robustness, is critical for the practical application of counterfactuals. We develop a rich set of novel explanation analytics queries which empower users to identify personalized counterfactual regions that account for their real-world circumstances. To process these queries, we develop a compact high-dimensional counterfactual region index along with index-aware query processing strategies for near real-time explanation analytics. We evaluate FACET against state-of-the-art explanation techniques on eight public benchmark datasets and demonstrate that FACET generates actionable explanations of similar quality in an order of magnitude less time while providing critical robustness guarantees. Finally, we conduct a preliminary user study which suggests that FACET's regions lead to higher user understanding than traditional counterfactuals.
35

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Abstract:
Natural language explanation in visual question answer (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in the black-box systems. Existing post-hoc methods have achieved significant progress in obtaining a plausible explanation. However, such post-hoc explanations are not always aligned with human logical inference, suffering from the issues on: 1) Deductive unsatisfiability, the generated explanations do not logically lead to the answer; 2) Factual inconsistency, the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) Semantic perturbation insensitivity, the model can not recognize the semantic changes caused by small perturbations. These problems reduce the faithfulness of explanations generated by models. To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples. MCLE extracts discriminative features and aligns the feature spaces from explanations with visual question and answer to generate more consistent explanations. We conduct extensive experiments, ablation analysis, and case study to demonstrate the effectiveness of our method on two VQA-NLE benchmarks.
36

Fernandes, Alison. "Back to the Present: How Not to Use Counterfactuals to Explain Causal Asymmetry." Philosophies 7, no. 2 (April 9, 2022): 43. http://dx.doi.org/10.3390/philosophies7020043.

Abstract:
A plausible thought is that we should evaluate counterfactuals in the actual world by holding the present ‘fixed’; the state of the counterfactual world at the time of the antecedent, outside the area of the antecedent, is required to match that of the actual world. When used to evaluate counterfactuals in the actual world, this requirement may produce reasonable results. However, the requirement is deeply problematic when used in the context of explaining causal asymmetry (why causes come before their effects). The requirement plays a crucial role in certain statistical mechanical explanations of the temporal asymmetry of causation. I will use a case of backwards time travel to show how the requirement enforces certain features of counterfactual structure a priori. For this reason, the requirement cannot be part of a completely general method of evaluating counterfactuals. More importantly, the way the requirement enforces features of counterfactual structure prevents counterfactual structure being derived from more fundamental physical structure—as explanations of causal asymmetry demand. Therefore, the requirement cannot be used when explaining causal asymmetry. To explain causal asymmetry, we need more temporally neutral methods for evaluating counterfactuals—those that produce the right results in cases involving backwards time travel, as well as in the actual world.
37

Akula, Arjun, Shuai Wang, and Song-Chun Zhu. "CoCoX: Generating Conceptual and Counterfactual Explanations via Fault-Lines." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2594–601. http://dx.doi.org/10.1609/aaai.v34i03.5643.

Abstract:
We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In Cognitive Psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class c_pred, our fault-line based explanation identifies the minimal semantic-level features (e.g., stripes on zebra, pointed ears of dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class c_alt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms the state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX
38

Ley, Dan, Umang Bhatt und Adrian Weller. „Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates“. Proceedings of the AAAI Conference on Artificial Intelligence 36, Nr. 7 (28.06.2022): 7390–98. http://dx.doi.org/10.1609/aaai.v36i7.20702.

Annotation:
To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating a single Counterfactual Latent Uncertainty Explanation (CLUE) for a given data point where the model is uncertain. We broaden the exploration to examine δ-CLUE, the set of potential CLUEs within a δ ball of the original input in latent space. We study the diversity of such sets and find that many CLUEs are redundant; as such, we propose DIVerse CLUE (∇-CLUE), a set of CLUEs which each propose a distinct explanation as to how one can decrease the uncertainty associated with an input. We then further propose GLobal AMortised CLUE (GLAM-CLUE), a distinct, novel method which learns amortised mappings that apply to specific groups of uncertain inputs, taking them and efficiently transforming them in a single function call into inputs for which a model will be certain. Our experiments show that δ-CLUE, ∇-CLUE, and GLAM-CLUE all address shortcomings of CLUE and provide beneficial explanations of uncertainty estimates to practitioners.
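The underlying CLUE recipe, searching the latent space of a generative model within a δ-ball of the original encoding for a point whose decoding the classifier is confident about, can be sketched with a random local search. The linear decoder, logistic head, entropy objective, and step sizes below are simplified assumptions rather than the authors' models.

import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(8, 4))          # toy linear "decoder": latent (4) -> input (8)
w, b = rng.normal(size=8), 0.0       # toy logistic classifier on decoded inputs

def predictive_entropy(z):
    p = 1.0 / (1.0 + np.exp(-(w @ (D @ z) + b)))
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def clue_search(z0, delta=1.0, lam=0.5, steps=500):
    """Random local search for a latent point within a delta-ball of z0 that
    lowers predictive entropy while staying close to z0 (a crude stand-in for
    the gradient-based CLUE objective)."""
    best, best_obj = z0.copy(), predictive_entropy(z0)
    for _ in range(steps):
        cand = best + 0.1 * rng.normal(size=z0.shape)
        if np.linalg.norm(cand - z0) > delta:        # stay inside the delta-ball
            continue
        obj = predictive_entropy(cand) + lam * np.linalg.norm(cand - z0)
        if obj < best_obj:
            best, best_obj = cand, obj
    return best

z0 = rng.normal(size=4)
z_clue = clue_search(z0)
print(round(float(predictive_entropy(z0)), 3), "->", round(float(predictive_entropy(z_clue)), 3))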
39

Freiesleben, Timo. „The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples“. Minds and Machines 32, Nr. 1 (30.10.2021): 77–109. http://dx.doi.org/10.1007/s11023-021-09580-9.

Annotation:
Abstract The same method that creates adversarial examples (AEs) to fool image-classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current methods for generating CEs and AEs, and estimate that the fields will merge more and more as the number of common use-cases grows.
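The shared machinery the paper discusses can be made concrete with a single perturbation routine that, depending on how its output is used, reads as an adversarial example (feed the perturbed input back to the model) or as a counterfactual explanation (report the feature changes to the user). The logistic model, step sizes, and helper name below are illustrative assumptions, not the paper's formal framework.

import numpy as np

w, b = np.array([1.5, -2.0, 0.5]), 0.1     # toy logistic model

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def perturb_to_flip(x, step=0.05, max_steps=200):
    """Walk x across the decision boundary with signed (FGSM-like) steps on
    the logit. Small steps give an AE-style input; reporting (x_cf - x) to a
    user gives a CE-style explanation."""
    x_cf = x.copy()
    target = 0.0 if predict(x) >= 0.5 else 1.0
    for _ in range(max_steps):
        if (predict(x_cf) >= 0.5) == (target >= 0.5):
            break
        direction = w * (1.0 if target > 0.5 else -1.0)   # moves the logit towards the target
        x_cf = x_cf + step * np.sign(direction)
    return x_cf

x = np.array([0.2, 0.4, -0.1])
x_cf = perturb_to_flip(x)
print("prediction:", round(float(predict(x)), 3), "->", round(float(predict(x_cf)), 3))
print("feature changes (CE reading):", np.round(x_cf - x, 2))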
40

Guidotti, Riccardo, Anna Monreale, Fosca Giannotti, Dino Pedreschi, Salvatore Ruggieri und Franco Turini. „Factual and Counterfactual Explanations for Black Box Decision Making“. IEEE Intelligent Systems 34, Nr. 6 (01.11.2019): 14–23. http://dx.doi.org/10.1109/mis.2019.2957223.

41

Jiang, Junqi, Francesco Leofante, Antonio Rago und Francesca Toni. „Formalising the Robustness of Counterfactual Explanations for Neural Networks“. Proceedings of the AAAI Conference on Artificial Intelligence 37, Nr. 12 (26.06.2023): 14901–9. http://dx.doi.org/10.1609/aaai.v37i12.26740.

Annotation:
The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts towards solving this problem are heuristic, and the robustness to model changes of the resulting CFXs is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose ∆-robustness, the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks. We introduce an abstraction framework based on interval neural networks to verify the ∆-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the ∆-robustness of a number of CFX generation methods from the literature and show that all of them exhibit significant deficiencies in this regard. Second, we demonstrate how embedding ∆-robustness within existing methods can provide CFXs which are provably robust.
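The interval-network idea can be sketched as follows: propagate interval bounds through a small ReLU network whose weights and biases are allowed to vary by ±δ, and accept a counterfactual as robust only if its class logit stays positive over the whole interval. The two-layer network, the bound rules, and the function names below are a simplified illustration, not the verification framework from the paper.

import numpy as np

def interval_matvec(Wl, Wu, xl, xu):
    """Bounds on W @ x when each weight lies in [Wl, Wu] and each input in [xl, xu]:
    take elementwise min/max over the four corner products, then sum rows."""
    cands = np.stack([Wl * xl, Wl * xu, Wu * xl, Wu * xu])
    return cands.min(axis=0).sum(axis=1), cands.max(axis=0).sum(axis=1)

def delta_robust(x_cf, W1, b1, W2, b2, delta):
    """Check whether a counterfactual keeps a positive output logit for every
    weight setting within +/- delta of the trained parameters (interval bound
    propagation through one ReLU layer)."""
    xl = xu = x_cf
    hl, hu = interval_matvec(W1 - delta, W1 + delta, xl, xu)
    hl, hu = hl + b1 - delta, hu + b1 + delta
    hl, hu = np.maximum(hl, 0.0), np.maximum(hu, 0.0)      # ReLU is monotone, so apply to bounds
    ol, ou = interval_matvec(W2 - delta, W2 + delta, hl, hu)
    ol = ol + b2 - delta
    return bool(ol[0] > 0.0)                               # logit > 0 for all perturbed models

W1, b1 = np.array([[1.0, -0.5], [0.3, 0.8], [-0.2, 0.4]]), np.zeros(3)
W2, b2 = np.array([[0.9, 1.1, -0.3]]), np.zeros(1)
x_cf = np.array([1.2, 0.7])
print(delta_robust(x_cf, W1, b1, W2, b2, delta=0.05))      # True -> counterfactual survives retraining within the interval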
42

Bernstein, Jodi T., Anthea K. Christoforou, Alena (Praneet) Ng, Madyson Weippert, Christine Mulligan, Nadia Flexner und Mary R. L’Abbe. „Canadian Free Sugar Intake and Modelling of a Reformulation Scenario“. Foods 12, Nr. 9 (25.04.2023): 1771. http://dx.doi.org/10.3390/foods12091771.

Annotation:
Recommendations suggest limiting the intake of free sugar to under 10% or 5% of calories in order to reduce the risk of negative health outcomes. This study aimed to examine Canadian free sugar intake and model how intakes change following the implementation of a systematic reformulation of foods and beverages to be 20% lower in free sugar. Additionally, this study aimed to examine how calorie intake might be impacted by this reformulation scenario. Canadians’ free sugar and calorie intakes were determined using free sugar and calorie data from the Food Label Information Program (FLIP) 2017, a Canadian branded food composition database, and applied to foods reported as being consumed in Canadian Community Health Survey—Nutrition (CCHS-Nutrition) 2015. A “counterfactual” scenario was modelled to examine changes in intake following the reformulation of foods to be 20% lower in free sugar. The overall mean free sugar intake was 12.1% of calories and was reduced to align with the intake recommendations at 10% of calories in the “counterfactual” scenario (p < 0.05). Calorie intake was reduced by 3.2% (60 calories) in the “counterfactual” scenario (p < 0.05). Although the overall average intake was aligned with the recommendations, many age/sex groups exceeded the recommended intake, even in the “counterfactual” scenario. The results demonstrate a need to reduce the intake of free sugar in Canada to align with dietary recommendations, potentially through reformulation. The results can be used to inform future program and policy decisions related to achieving the recommended intake levels of free sugar in Canada.
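The reformulation arithmetic behind such a counterfactual scenario can be reproduced in a few lines: cut each food's free sugar by 20%, subtract the corresponding calories at roughly 4 kcal per gram of sugar, and recompute free sugar as a share of energy. The foods and quantities below are invented for illustration, not FLIP or CCHS-Nutrition records.

# Minimal sketch of the "counterfactual" reformulation scenario.
KCAL_PER_G_SUGAR = 4.0

foods = [  # (free sugar in grams, total calories) for one respondent's day
    (22.0, 450.0),   # sweetened cereal
    (30.0, 140.0),   # soft drink
    (10.0, 610.0),   # baked goods
]

def free_sugar_share(items):
    sugar_kcal = sum(g * KCAL_PER_G_SUGAR for g, _ in items)
    total_kcal = sum(kcal for _, kcal in items)
    return 100.0 * sugar_kcal / total_kcal, total_kcal

def reformulate(items, cut=0.20):
    # each food loses 20% of its free sugar and the calories that sugar carried
    return [(g * (1 - cut), kcal - g * cut * KCAL_PER_G_SUGAR) for g, kcal in items]

baseline_pct, baseline_kcal = free_sugar_share(foods)
counterfactual_pct, cf_kcal = free_sugar_share(reformulate(foods))
print(f"free sugar: {baseline_pct:.1f}% -> {counterfactual_pct:.1f}% of calories")
print(f"calories:   {baseline_kcal:.0f} -> {cf_kcal:.0f}")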
43

Chalyi, Serhii, Volodymyr Leshchynskyi und Irina Leshchynska. „COUNTERFACTUAL TEMPORAL MODEL OF CAUSAL RELATIONSHIPS FOR CONSTRUCTING EXPLANATIONS IN INTELLIGENT SYSTEMS“. Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, Nr. 2 (6) (28.12.2021): 41–46. http://dx.doi.org/10.20998/2079-0023.2021.02.07.

Annotation:
The subject of the research is the process of constructing explanations based on causal relationships between the states or actions of an intelligent system. An explanation is knowledge about the sequence of causes and effects that determine the process and result of an intelligent information system. The aim of the work is to develop a counterfactual temporal model of cause-and-effect relationships, as part of an explanation of the functioning of an intelligent system, in order to identify causal dependencies from an analysis of the logs of such a system's behavior. To achieve this goal, the following tasks are solved: determining the temporal properties of the counterfactual description of cause-and-effect relationships between actions or states of an intelligent information system; and developing a temporal model of causal connections that takes into account both the events that actually occur in the intelligent system and the possible occurrence of events that do not affect the current decision. Conclusions. The temporal properties of causal links are structured for pairs of events that occur sequentially in time or that have intermediate events between them. Such relationships are represented as alternative causal relationships using the temporal operators "Next" and "Future", which realizes a counterfactual approach to the representation of causality. A counterfactual temporal model of causal relationships is proposed that determines deterministic causal relationships both for pairs of consecutive events and for pairs of events separated by other events. This establishes the transitivity of such dependencies and thus makes it possible to describe the sequence of causes and effects, as part of an explanation in an intelligent system, with a given degree of detail. The model makes it possible to determine cause-and-effect relationships even when intermediate events that do not affect the final result of the intelligent information system occur between them.
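The "Next"/"Future" distinction can be illustrated over a toy event log: a candidate cause is linked to its effect by "Next" when the effect immediately follows it, and by "Future" when the effect eventually follows with unrelated events in between. The log and helper names below are invented, not the paper's formal temporal model.

log = ["login", "search", "add_to_cart", "recommend", "purchase"]

def next_pairs(events):
    # "Next": the effect occurs in the state immediately after the cause
    return {(a, b) for a, b in zip(events, events[1:])}

def future_pairs(events):
    # "Future": the effect occurs in some later state, possibly after other events
    return {(events[i], events[j])
            for i in range(len(events)) for j in range(i + 1, len(events))}

direct = next_pairs(log)
transitive = future_pairs(log) - direct
print(("add_to_cart", "purchase") in direct)      # False: "recommend" intervenes
print(("add_to_cart", "purchase") in transitive)  # True: Future holds even though Next does not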
44

Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz und Konstantinos Georgatzis. „FIMAP: Feature Importance by Minimal Adversarial Perturbation“. Proceedings of the AAAI Conference on Artificial Intelligence 35, Nr. 13 (18.05.2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.

Annotation:
Instance-based model-agnostic feature importance explanations (LIME, SHAP, L2X) are a popular form of algorithmic transparency. These methods generally return either a weighting or subset of input features as an explanation for the classification of an instance. An alternative literature argues instead that counterfactual instances, which alter the black-box model's classification, provide a more actionable form of explanation. We present Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual explanations. We show that this approach combines the two paradigms, recovering the output of feature-weighting methods in continuous feature spaces, whilst indicating the direction in which the nearest counterfactuals can be found. Our method also provides an implicit confidence estimate in its own explanations, something existing methods lack. Additionally, FIMAP improves upon the speed of sampling-based methods, such as LIME, by an order of magnitude, allowing for explanation deployment in time-critical applications. We extend our approach to categorical features using a partitioned Gumbel layer and demonstrate its efficacy on standard datasets.
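On a linear model, the unification FIMAP describes can be computed in closed form: the minimal perturbation that reaches the decision boundary doubles as a feature attribution (its per-feature magnitude) and as a pointer towards the nearest counterfactual (its sign). The logistic weights, the instance, and the helper below are illustrative assumptions; FIMAP itself learns the perturbation with a neural network.

import numpy as np

w, b = np.array([2.0, -1.0, 0.5]), -0.2    # toy logistic model

def minimal_boundary_shift(x):
    """Smallest L2 perturbation reaching the boundary w.x + b = 0
    (orthogonal projection onto the separating hyperplane)."""
    logit = w @ x + b
    return -logit * w / (w @ w)

x = np.array([0.6, 0.3, -0.4])
shift = minimal_boundary_shift(x)
importance = np.abs(shift) / np.abs(shift).sum()   # normalised per-feature magnitudes
print("counterfactual direction:", np.round(shift, 3))
print("feature importance:      ", np.round(importance, 3))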
45

Aryal, Saugat. „Semi-factual Explanations in AI“. Proceedings of the AAAI Conference on Artificial Intelligence 38, Nr. 21 (24.03.2024): 23379–80. http://dx.doi.org/10.1609/aaai.v38i21.30390.

Annotation:
Most recent work on post-hoc, example-based eXplainable AI (XAI) methods revolves around employing counterfactual explanations to justify the predictions made by AI systems. Counterfactuals show what changes to the input features change the output decision. However, a lesser-known special case of the counterfactual is the semi-factual, which explains what changes to the input features do not change the output decision. Semi-factuals are potentially as useful as counterfactuals but have received little attention in the XAI literature. My doctoral research aims to establish a comprehensive framework for the use of semi-factuals in XAI by developing novel methods for their computation, supported by user tests.
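The counterfactual/semi-factual contrast is easy to make concrete on a one-dimensional decision rule: the counterfactual is the smallest change that flips the decision, the semi-factual the largest change that leaves it unchanged. The credit-style threshold and numbers below are an invented illustration, not taken from the thesis.

# Toy rule: approve if income >= 50 (units are arbitrary).
THRESHOLD = 50.0
approve = lambda income: income >= THRESHOLD

income = 42.0                                   # currently rejected
counterfactual = THRESHOLD - income             # smallest change that flips the decision
semi_factual = income                           # largest decrease that still keeps the decision
print("approved today?", approve(income))
print(f"counterfactual: an increase of {counterfactual:.0f} would get the application approved")
print(f"semi-factual:   even a decrease of up to {semi_factual:.0f} would not change the rejection")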
46

Born, Benjamin, Alexander M. Dietrich und Gernot J. Müller. „The lockdown effect: A counterfactual for Sweden“. PLOS ONE 16, Nr. 4 (08.04.2021): e0249732. http://dx.doi.org/10.1371/journal.pone.0249732.

Annotation:
While most countries imposed a lockdown in response to the first wave of COVID-19 infections, Sweden did not. To quantify the lockdown effect, we approximate a counterfactual lockdown scenario for Sweden through the outcome in a synthetic control unit. We find, first, that a 9-week lockdown in the first half of 2020 would have reduced infections and deaths by about 75% and 38%, respectively. Second, the lockdown effect starts to materialize with a delay of 3–4 weeks only. Third, the actual adjustment of mobility patterns in Sweden suggests there has been substantial voluntary social restraint, although the adjustment was less strong than under the lockdown scenario. Lastly, we find that a lockdown would not have caused much additional output loss.
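The synthetic-control construction behind this counterfactual can be sketched as a constrained least-squares fit: choose nonnegative donor weights summing to one so that the weighted pre-period outcomes track the treated unit, then read the counterfactual off the post-period. The projected-gradient fit and the random series below are illustrative stand-ins, not the paper's donor pool or epidemiological data.

import numpy as np

rng = np.random.default_rng(3)

def project_to_simplex(v):
    """Euclidean projection onto {w >= 0, sum w = 1} (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def fit_weights(treated_pre, donors_pre, steps=2000, lr=0.01):
    """Projected gradient descent on || donors_pre.T @ w - treated_pre ||^2."""
    w = np.full(donors_pre.shape[0], 1.0 / donors_pre.shape[0])
    for _ in range(steps):
        grad = 2 * donors_pre @ (donors_pre.T @ w - treated_pre)
        w = project_to_simplex(w - lr * grad)
    return w

donors_pre = rng.normal(size=(4, 10))                  # 4 donor units, 10 pre-treatment periods
true_w = np.array([0.5, 0.3, 0.2, 0.0])
treated_pre = true_w @ donors_pre + 0.01 * rng.normal(size=10)
w = fit_weights(treated_pre, donors_pre)
print(np.round(w, 2))                                  # should be close to the mixing weights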
47

Virmajoki, Veli. „On the Function and Nature of Historical Counterfactuals. Clarifying Confusions“. Journal of the Philosophy of History, 06.05.2024, 1–25. http://dx.doi.org/10.1163/18722636-12341519.

Annotation:
Abstract In this article, I analyze historical counterfactuals. Historical counterfactuals are conditional statements in which the antecedent refers to some change in the past. We ask what would have happened, had that change occurred. I discuss the nature of such counterfactuals. I then identify important functions that historical counterfactuals have. I point out that they are at the heart of explanations and, therefore, reveal issues related to contingency and actual history. I then discuss counterfactual reasoning in historiography. I argue that the problem of suitable antecedent conditions has been exaggerated, and more serious issues concern the tracking of counterfactual scenarios. Throughout the paper, I argue that the interventionist way of thinking about historical counterfactuals clarifies both historical explanations and the nature of historical counterfactuals and should be adopted as the standard. I conclude by noting that historical counterfactuals may not fundamentally differ from more familiar forms of historiography.
48

Li, Hanzhe, Jingjing Gu, Xinjiang Lu, Dazhong Shen, Yuting Liu, YaNan Deng, Guoliang Shi und Hui Xiong. „Beyond Relevance: Factor-level Causal Explanation for User Travel Decisions with Counterfactual Data Augmentation“. ACM Transactions on Information Systems, 22.03.2024. http://dx.doi.org/10.1145/3653673.

Annotation:
Point-of-Interest (POI) recommendation, an important research topic in urban computing, plays a crucial role in urban construction. However, understanding users' travel decisions and exploring the causality behind POI choices is not easy, owing to the complex and diverse factors that influence urban travel. Moreover, spurious explanations caused by severe data sparsity, i.e., misrepresenting universal relevance as causality, may also hinder us from understanding users' travel decisions. To this end, in this paper, we propose a factor-level causal explanation generation framework based on counterfactual data augmentation for user travel decisions, named Factor-level Causal Explanation for User Travel Decisions (FCE-UTD), which can distinguish between true and false causal factors and generate true causal explanations. Specifically, we first assume that a user decision is composed of several different factors. Then, by preserving the user decision structure with a joint counterfactual contrastive learning paradigm, we learn factor representations and detect the relevant factors. Next, we identify true causal factors by constructing counterfactual decisions with a counterfactual representation generator; in particular, this not only augments the dataset and mitigates sparsity but also helps separate the true causal factors from false ones that may cause spurious explanations. In addition, a causal dependency learner is proposed to identify the causal factors for each decision by learning causal dependency scores. Extensive experiments conducted on three real-world datasets demonstrate the superiority of our approach in terms of check-in rate, fidelity, and downstream tasks under different behavior scenarios. Additional case studies also demonstrate the ability of FCE-UTD to generate causal explanations for POI choices.
49

Mehedi Hasan, Md Golam Moula, und Douglas A. Talbert. „Counterfactual Examples for Data Augmentation: A Case Study“. International FLAIRS Conference Proceedings 34, Nr. 1 (18.04.2021). http://dx.doi.org/10.32473/flairs.v34i1.128503.

Annotation:
Counterfactual explanations are gaining in popularity as a way of explaining machine learning models. Counterfactual examples are generally created to help interpret the decision of a model: if a model makes a certain decision for an instance, the counterfactual examples of that instance reverse that decision. Counterfactual examples can be created by carefully changing particular feature values of the instance. Although counterfactual examples are usually generated to explain the decisions of machine learning models, in this work we explore another potential application: whether counterfactual examples are useful for data augmentation. We demonstrate the efficacy of this approach on the widely used “Adult-Income” dataset. We consider several scenarios in which we do not have enough data and use counterfactual examples to augment the dataset. We compare our approach with a Generative Adversarial Network approach to dataset augmentation. The experimental results show that our proposed approach can be an effective way to augment a dataset.
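A stripped-down version of the augmentation loop looks like this: fit a model, generate one counterfactual per training instance by pushing it just across the current decision boundary with the opposite label, and refit on the enlarged set. The synthetic blobs and the projection-based generator below stand in for the Adult-Income data and the paper's counterfactual method; all names and numbers are illustrative.

import numpy as np

rng = np.random.default_rng(4)

def fit_logistic(X, y, steps=500, lr=0.1):
    """Plain gradient-descent logistic regression (toy trainer)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def counterfactuals(X, y, w, b, margin=0.25):
    """Project each point just past the boundary w.x + b = 0 and flip its label."""
    logits = X @ w + b
    shift = -(logits + np.sign(logits) * margin)[:, None] * w / (w @ w)
    return X + shift, 1 - y

X = np.vstack([rng.normal(-1, 0.7, size=(20, 2)), rng.normal(1, 0.7, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)
w, b = fit_logistic(X, y)
X_cf, y_cf = counterfactuals(X, y, w, b)
X_aug, y_aug = np.vstack([X, X_cf]), np.concatenate([y, y_cf])
w_aug, b_aug = fit_logistic(X_aug, y_aug)
print("original:", X.shape[0], "examples; augmented:", X_aug.shape[0])
print("retrained weights:", np.round(w_aug, 2))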
50

Kuhl, Ulrike, André Artelt und Barbara Hammer. „Let's go to the Alien Zoo: Introducing an experimental framework to study usability of counterfactual explanations for machine learning“. Frontiers in Computer Science 5 (21.03.2023). http://dx.doi.org/10.3389/fcomp.2023.1087929.

Annotation:
Introduction: To foster usefulness and accountability of machine learning (ML), it is essential to explain a model's decisions in addition to evaluating its performance. Accordingly, the field of explainable artificial intelligence (XAI) has resurfaced as a topic of active research, offering approaches to address the “how” and “why” of automated decision-making. Within this domain, counterfactual explanations (CFEs) have gained considerable traction as a psychologically grounded approach to generate post-hoc explanations. To do so, CFEs highlight what changes to a model's input would have changed its prediction in a particular way. However, despite the introduction of numerous CFE approaches, their usability has yet to be thoroughly validated at the human level. Methods: To advance the field of XAI, we introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework. The Alien Zoo provides the means to evaluate usability of CFEs for gaining new knowledge from an automated system, targeting novice users in a domain-general context. As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study. Results: Our results suggest the efficacy of the Alien Zoo framework for empirically investigating aspects of counterfactual explanations in a game-type scenario and a low-knowledge domain. The proof of concept study reveals that users benefit from receiving CFEs compared to no explanation, both in terms of objective performance in the proposed iterative learning task, and subjective usability. Discussion: With this work, we aim to equip research groups and practitioners with the means to easily run controlled and well-powered user studies to complement their otherwise often more technology-oriented work. Thus, in the interest of reproducible research, we provide the entire code, together with the underlying models and user data: https://github.com/ukuhl/IntroAlienZoo.