
Journal articles on the topic "Counterfactual Explanation"


See the top 50 journal articles for research on the topic "Counterfactual Explanation".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever it is available in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics". Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626729.

Abstract:
Machine learning systems are deployed in domains such as hiring and healthcare, where undesired classifications can have serious ramifications for the user. Thus, there is a rising demand for explainable AI systems which provide actionable steps for lay users to obtain their desired outcome. To meet this need, we propose FACET, the first explanation analytics system which supports a user in interactively refining counterfactual explanations for decisions made by tree ensembles. As FACET's foundation, we design a novel type of counterfactual explanation called the counterfactual region. Unlike traditional counterfactuals, FACET's regions concisely describe portions of the feature space where the desired outcome is guaranteed, regardless of variations in exact feature values. This property, which we coin explanation robustness, is critical for the practical application of counterfactuals. We develop a rich set of novel explanation analytics queries which empower users to identify personalized counterfactual regions that account for their real-world circumstances. To process these queries, we develop a compact high-dimensional counterfactual region index along with index-aware query processing strategies for near real-time explanation analytics. We evaluate FACET against state-of-the-art explanation techniques on eight public benchmark datasets and demonstrate that FACET generates actionable explanations of similar quality in an order of magnitude less time while providing critical robustness guarantees. Finally, we conduct a preliminary user study which suggests that FACET's regions lead to higher user understanding than traditional counterfactuals.
2

Sia, Suzanna, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, and Lambert Mathias. "Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9837–45. http://dx.doi.org/10.1609/aaai.v37i8.26174.

Abstract:
Evaluating an explanation's faithfulness is desired for many reasons such as trust, interpretability and diagnosing the sources of model's errors. In this work, which focuses on the NLI task, we introduce the methodology of Faithfulness-through-Counterfactuals, which first generates a counterfactual hypothesis based on the logical predicates expressed in the explanation, and then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic (i.e. if the new formula is logically satisfiable). In contrast to existing approaches, this does not require any explanations for training a separate verification model. We first validate the efficacy of automatic counterfactual hypothesis generation, leveraging on the few-shot priming paradigm. Next, we show that our proposed metric distinguishes between human-model agreement and disagreement on new counterfactual input. In addition, we conduct a sensitivity analysis to validate that our metric is sensitive to unfaithful explanations.
3

Asher, Nicholas, Lucas De Lara, Soumya Paul, and Chris Russell. "Counterfactual Models for Fair and Adequate Explanations". Machine Learning and Knowledge Extraction 4, no. 2 (March 31, 2022): 316–49. http://dx.doi.org/10.3390/make4020014.

Abstract:
Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation provide valid and complete explanations, but they have an epistemological problem: they are often too complex for humans to understand and too expensive to compute even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and can compute. We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations for counterfactuals to examine the partiality of counterfactual explanations that can hide biases; we define fair and adequate explanations in such a setting. We provide formal results about the algorithmic complexity of fair and adequate explanations. We then detail two sophisticated counterfactual models, one based on causal graphs, and one based on transport theories. We show transport based models have several theoretical advantages over the competition as explanation frameworks for machine learning algorithms.
4

Si, Michelle, and Jian Pei. "Counterfactual Explanation of Shapley Value in Data Coalitions". Proceedings of the VLDB Endowment 17, no. 11 (July 2024): 3332–45. http://dx.doi.org/10.14778/3681954.3682004.

Abstract:
The Shapley value is widely used for data valuation in data markets. However, explaining the Shapley value of an owner in a data coalition is an unexplored and challenging task. To tackle this, we formulate the problem of finding the counterfactual explanation of Shapley value in data coalitions. Essentially, given two data owners A and B such that A has a higher Shapley value than B, a counterfactual explanation is a smallest subset of data entries in A such that transferring the subset from A to B makes the Shapley value of A less than that of B. We show that counterfactual explanations always exist, but finding an exact counterfactual explanation is NP-hard. Using Monte Carlo estimation to approximate counterfactual explanations directly according to the definition is still very costly, since we have to estimate the Shapley values of owners A and B after each possible subset shift. We develop a series of heuristic techniques to speed up computation by estimating differential Shapley values, computing the power of singular data entries, and shifting subsets greedily, culminating in the SV-Exp algorithm. Our experimental results on real datasets clearly demonstrate the efficiency of our method and the effectiveness of counterfactuals in interpreting the Shapley value of an owner.
5

Baron, Sam. "Counterfactual Scheming". Mind 129, no. 514 (April 1, 2019): 535–62. http://dx.doi.org/10.1093/mind/fzz008.

Abstract:
Mathematics appears to play a genuine explanatory role in science. But how do mathematical explanations work? Recently, a counterfactual approach to mathematical explanation has been suggested. I argue that such a view fails to differentiate the explanatory uses of mathematics within science from the non-explanatory uses. I go on to offer a solution to this problem by combining elements of the counterfactual theory of explanation with elements of a unification theory of explanation. The result is a theory according to which a counterfactual is explanatory when it is an instance of a generalized counterfactual scheme.
6

Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics". Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.

Abstract:
Our goal in this paper is to extend counterfactual accounts of scientific explanation to mathematics. Our focus, in particular, is on intra-mathematical explanations: explanations of one mathematical fact in terms of another. We offer a basic counterfactual theory of intra-mathematical explanations, before modelling the explanatory structure of a test case using counterfactual machinery. We finish by considering the application of counterpossibles to mathematical explanation, and explore a second test case along these lines.
7

Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.

Abstract:
Instance-based model-agnostic feature importance explanations (LIME, SHAP, L2X) are a popular form of algorithmic transparency. These methods generally return either a weighting or subset of input features as an explanation for the classification of an instance. An alternative literature argues instead that counterfactual instances, which alter the black-box model's classification, provide a more actionable form of explanation. We present Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual explanations. We show that this approach combines the two paradigms, recovering the output of feature-weighting methods in continuous feature spaces, whilst indicating the direction in which the nearest counterfactuals can be found. Our method also provides an implicit confidence estimate in its own explanations, something existing methods lack. Additionally, FIMAP improves upon the speed of sampling-based methods, such as LIME, by an order of magnitude, allowing for explanation deployment in time-critical applications. We extend our approach to categorical features using a partitioned Gumbel layer and demonstrate its efficacy on standard datasets.
8

Dohrn, Daniel. "Counterfactual Narrative Explanation". Journal of Aesthetics and Art Criticism 67, no. 1 (February 2009): 37–47. http://dx.doi.org/10.1111/j.1540-6245.2008.01333.x.

9

Leofante, Francesco, and Nico Potyka. "Promoting Counterfactual Robustness through Diversity". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21322–30. http://dx.doi.org/10.1609/aaai.v38i19.30127.

Abstract:
Counterfactual explanations shed light on the decisions of black-box models by explaining how an input can be altered to obtain a favourable decision from the model (e.g., when a loan application has been rejected). However, as noted recently, counterfactual explainers may lack robustness in the sense that a minor change in the input can cause a major change in the explanation. This can cause confusion on the user side and open the door for adversarial attacks. In this paper, we study some sources of non-robustness. While there are fundamental reasons for why an explainer that returns a single counterfactual cannot be robust in all instances, we show that some interesting robustness guarantees can be given by reporting multiple rather than a single counterfactual. Unfortunately, the number of counterfactuals that need to be reported for the theoretical guarantees to hold can be prohibitively large. We therefore propose an approximation algorithm that uses a diversity criterion to select a feasible number of most relevant explanations and study its robustness empirically. Our experiments indicate that our method improves the state-of-the-art in generating robust explanations, while maintaining other desirable properties and providing competitive computational performance.
10

He, Ming, Boyang An, Jiwen Wang, and Hao Wen. "CETD: Counterfactual Explanations by Considering Temporal Dependencies in Sequential Recommendation". Applied Sciences 13, no. 20 (October 11, 2023): 11176. http://dx.doi.org/10.3390/app132011176.

Abstract:
Providing interpretable explanations can notably enhance users’ confidence and satisfaction with regard to recommender systems. Counterfactual explanations demonstrate remarkable performance in the realm of explainable sequential recommendation. However, current counterfactual explanation models designed for sequential recommendation overlook the temporal dependencies in a user’s past behavior sequence. Furthermore, counterfactual histories should be as similar to the real history as possible to avoid conflicting with the user’s genuine behavioral preferences. This paper presents counterfactual explanations by Considering temporal dependencies (CETD), a counterfactual explanation model that utilizes a variational autoencoder (VAE) for sequential recommendation and takes into account temporal dependencies. To improve explainability, CETD employs a recurrent neural network (RNN) when generating counterfactual histories, thereby capturing both the user’s long-term preferences and short-term behavior in their real behavioral history. Meanwhile, CETD fits the distribution of reconstructed data (i.e., the counterfactual sequences generated by VAE perturbation) in a latent space, and leverages learned variance to decrease the proximity of counterfactual histories by minimizing the distance between the counterfactual sequences and the original sequence. Thorough experiments conducted on two real-world datasets demonstrate that the proposed CETD consistently surpasses current state-of-the-art methods.
11

Delaney, Eoin, Arjun Pakrashi, Derek Greene, and Mark T. Keane. "Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ (Abstract Reprint)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22696. http://dx.doi.org/10.1609/aaai.v38i20.30596.

Abstract:
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
12

An, Shuai, and Yang Cao. "Counterfactual Explanation at Will, with Zero Privacy Leakage". Proceedings of the ACM on Management of Data 2, no. 3 (May 29, 2024): 1–29. http://dx.doi.org/10.1145/3654933.

Abstract:
While counterfactuals have been extensively studied as an intuitive explanation of model predictions, they still have limited adoption in practice due to two obstacles: (a) They rely on excessive access to the model for explanation that the model owner may not provide; and (b) counterfactuals carry information that adversarial users can exploit to launch model extraction attacks. To address the challenges, we propose CPC, a data-driven approach to counterfactual. CPC works at the client side and gives full control and right-to-explain to model users, even when model owners opt not to. Moreover, CPC warrants that adversarial users cannot exploit counterfactuals to extract models. We formulate properties and fundamental problems underlying CPC, study their complexity and develop effective algorithms. Using real-world datasets and user study, we verify that CPC does prevent adversaries from exploiting counterfactuals for model extraction attacks, and is orders of magnitude faster than existing explainers, while maintaining comparable and often higher quality.
13

Park, Rinseo, and Young Min Baek. "Talking about what would happen versus what happened: Tracking Congressional speeches during COVID-19". Journal of Social and Political Psychology 9, no. 2 (December 1, 2021): 608–22. http://dx.doi.org/10.5964/jspp.6153.

Abstract:
In counterfactual thinking, an imagined alternative to the reality that comprises an antecedent and a consequent is widely adopted in political discourse to justify past behaviors (i.e., counterfactual explanation) or to depict a better future (i.e., prefactual). However, they have not been properly addressed in political communication literature. Our study examines how politicians used counterfactual expressions for explanation of the past or preparation of the future during COVID-19, one of the most severe public health crises. All Congressional speeches of the Senate and House in the 116th Congress (2019-2020) were retrieved, and counterfactual expressions were identified along with time-focusing in each speech, using recent advances in natural language processing (NLP) techniques. The results show that counterfactuals were more practiced among Democrats in the Senate and Republicans in the House. With the spread of the pandemic, the use of counterfactuals decreased, maintaining a partisan gap in the House. However, it was nearly stable, with no party differences in the Senate. Implications of our findings are discussed, regarding party polarization, institutional constraints, and the quality of Congressional deliberation. Limitations and suggestions for future research are also provided.
14

Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations". Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.

Abstract:
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
15

Geng, Zixuan, Maximilian Schleich, and Dan Suciu. "Computing Rule-Based Explanations by Leveraging Counterfactuals". Proceedings of the VLDB Endowment 16, no. 3 (November 2022): 420–32. http://dx.doi.org/10.14778/3570690.3570693.

Abstract:
Sophisticated machine models are increasingly used for high-stakes decisions in everyday life. There is an urgent need to develop effective explanation techniques for such automated decisions. Rule-Based Explanations have been proposed for high-stake decisions like loan applications, because they increase the users' trust in the decision. However, rule-based explanations are very inefficient to compute, and existing systems sacrifice their quality in order to achieve reasonable performance. We propose a novel approach to compute rule-based explanations, by using a different type of explanation, Counterfactual Explanations, for which several efficient systems have already been developed. We prove a Duality Theorem, showing that rule-based and counterfactual-based explanations are dual to each other, then use this observation to develop an efficient algorithm for computing rule-based explanations, which uses the counterfactual-based explanation as an oracle. We conduct extensive experiments showing that our system computes rule-based explanations of higher quality, and with the same or better performance, than two previous systems, MinSetCover and Anchor.
16

VanNostrand, Peter M., Dennis M. Hofmann, Lei Ma, Belisha Genin, Randy Huang, and Elke A. Rundensteiner. "Counterfactual Explanation Analytics: Empowering Lay Users to Take Action Against Consequential Automated Decisions". Proceedings of the VLDB Endowment 17, no. 12 (August 2024): 4349–52. http://dx.doi.org/10.14778/3685800.3685872.

Abstract:
Machine learning is routinely used to automate consequential decisions about users in domains such as finance and healthcare, raising concerns of transparency and recourse for negative outcomes. Existing Explainable AI techniques generate a static counterfactual point explanation which recommends changes to a user's instance to obtain a positive outcome. Unfortunately, these recommendations are often difficult or impossible for users to realistically enact. To overcome this, we present FACET, the first interactive robust explanation system which generates personalized counterfactual region explanations. FACET's expressive explanation analytics empower users to explore and compare multiple counterfactual options and develop a personalized actionable plan for obtaining their desired outcome. Visitors to the demonstration will interact with FACET via a new web dashboard for explanations of a loan approval scenario. In doing so, visitors will experience how lay users can easily leverage powerful explanation analytics through visual interactions and displays without the need for a strong technical background.
17

Le, Thao, Tim Miller, Ronal Singh, and Liz Sonenberg. "Explaining Model Confidence Using Counterfactuals". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11856–64. http://dx.doi.org/10.1609/aaai.v37i10.26399.

Abstract:
Displaying confidence scores in human-AI interaction has been shown to help build trust between humans and AI systems. However, most existing research uses only the confidence score as a form of communication. As confidence scores are just another model output, users may want to understand why the algorithm is confident to determine whether to accept the confidence score. In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction. We present two methods for understanding model confidence using counterfactual explanation: (1) based on counterfactual examples; and (2) based on visualisation of the counterfactual space. Both increase understanding and trust for study participants over a baseline of no explanation, but qualitative results show that they are used quite differently, leading to recommendations of when to use each one and directions of designing better explanations.
18

Kasfir, Nelson. "Applying a Counterfactual: Would 1966 Ugandan University Students Be Surprised by Ugandan Governance Today?" African Studies Review 59, no. 3 (December 2016): 139–53. http://dx.doi.org/10.1017/asr.2016.80.

Abstract:
Joel Barkan surveyed Ugandan university students’ attitudes in 1966 and worked on Ugandan national governance three decades later. These two inquiries facilitate an unusual counterfactual analysis. Counterfactuals typically test historical explanation by manipulating an antecedent to estimate change to a known outcome. But a counterfactual can be constructed to examine how an antecedent would react to a later activity. By extrapolating from 1966 students’ responses to Barkan’s survey and their expected knowledge of political events, we can estimate their likely attitudes to later governance. Applying this unconventional counterfactual helps establish how far prior perception of politics illuminates later governmental practice.
19

Cong, Zicun, Lingyang Chu, Yu Yang, and Jian Pei. "Comprehensible counterfactual explanation on Kolmogorov-Smirnov test". Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1583–96. http://dx.doi.org/10.14778/3461535.3461546.

Abstract:
The Kolmogorov-Smirnov (KS) test is popularly used in many applications, such as anomaly detection, astronomy, database security and AI systems. One challenge remained untouched is how we can obtain an explanation on why a test set fails the KS test. In this paper, we tackle the problem of producing counterfactual explanations for test data failing the KS test. Concept-wise, we propose the notion of most comprehensible counterfactual explanations, which accommodates both the KS test data and the user domain knowledge in producing explanations. Computation-wise, we develop an efficient algorithm MOCHE (for MOst CompreHensible Explanation) that avoids enumerating and checking an exponential number of subsets of the test set failing the KS test. MOCHE not only guarantees to produce the most comprehensible counterfactual explanations, but also is orders of magnitudes faster than the baselines. Experiment-wise, we present a systematic empirical study on a series of benchmark real datasets to verify the effectiveness, efficiency and scalability of most comprehensible counterfactual explanations and MOCHE.
20

Prado-Romero, Mario Alfonso, Bardh Prenkaj, and Giovanni Stilo. "Robust Stochastic Graph Generator for Counterfactual Explanations". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21518–26. http://dx.doi.org/10.1609/aaai.v38i19.30149.

Abstract:
Counterfactual Explanation (CE) techniques have garnered attention as a means to provide insights to the users engaging with AI systems. While extensively researched in domains such as medical imaging and autonomous vehicles, Graph Counterfactual Explanation (GCE) methods have been comparatively under-explored. GCEs generate a new graph similar to the original one, with a different outcome grounded on the underlying predictive model. Among these GCE techniques, those rooted in generative mechanisms have received relatively limited investigation despite demonstrating impressive accomplishments in other domains, such as artistic styles and natural language modelling. The preference for generative explainers stems from their capacity to generate counterfactual instances during inference, leveraging autonomously acquired perturbations of the input graph. Motivated by the rationales above, our study introduces RSGG-CE, a novel Robust Stochastic Graph Generator for Counterfactual Explanations able to produce counterfactual examples from the learned latent space considering a partially ordered generation sequence. Furthermore, we undertake quantitative and qualitative analyses to compare RSGG-CE's performance against SoA generative explainers, highlighting its increased ability to engendering plausible counterfactual candidates.
21

Virmajoki, Veli. "Frameworks in Historiography: Explanation, Scenarios, and Futures". Journal of the Philosophy of History 17, no. 2 (July 3, 2023): 288–309. http://dx.doi.org/10.1163/18722636-12341501.

Abstract:
In this paper, I analyze how frameworks shape historiographical explanations. I argue that, in order to identify a sequence of events as relevant to a historical outcome, assumptions about the workings of the relevant domain have to be made. By extending Lakatosian considerations, I argue that these assumptions are provided by a framework that contains a set of factors and intertwined principles that (supposedly) govern how a historical phenomenon works. I connect frameworks with a counterfactual account of historical explanation. Frameworks enable us to explain the past by providing a backbone of explanatory patterns of counterfactual dependency. I conclude by noting that both counterfactual scenarios and scenarios of the future require frameworks and, therefore, historiographical explanation generates a set of possible futures. Analyzing these possible futures enables us to reveal the theoretical commitments of historiography.
22

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Abstract:
Natural language explanation in visual question answer (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in the black-box systems. Existing post-hoc methods have achieved significant progress in obtaining a plausible explanation. However, such post-hoc explanations are not always aligned with human logical inference, suffering from the issues on: 1) Deductive unsatisfiability, the generated explanations do not logically lead to the answer; 2) Factual inconsistency, the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) Semantic perturbation insensitivity, the model can not recognize the semantic changes caused by small perturbations. These problems reduce the faithfulness of explanations generated by models. To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples. MCLE extracts discriminative features and aligns the feature spaces from explanations with visual question and answer to generate more consistent explanations. We conduct extensive experiments, ablation analysis, and case study to demonstrate the effectiveness of our method on two VQA-NLE benchmarks.
23

Bakaiev, Mykola. "A Methodological Inquiry on Compatibility of Droysen’s Understanding and Weber’s Counterfactuals". NaUKMA Research Papers in Philosophy and Religious Studies, no. 9-10 (January 20, 2023): 127–36. http://dx.doi.org/10.18523/2617-1678.2022.9-10.127-136.

Abstract:
Gustav Droysen introduced understanding as the method of history. Max Weber analyzed what-if statements or counterfactuals as a form of causal explanation. Both scholars had a common interest in understanding and explanation. However, Droysen’s explanation was defined as method of natural sciences and served no use in history, while Weber’s understanding was focused on social reality rather than historical one. Still, precisely Weber’s idea of difference-making counterfactuals was later reinterpreted as defining for historical counterfactuals. In this paper, I determine what their methodologies say about understanding and counterfactuals, whether their views are compatible and whether historical research can benefit from combination of understanding and counterfactuals. To do this, I reconstruct Gustav Droysen’s views on understanding in the first part. Understanding here is a method that allows us to grasp events that are distant in time as contemporary ones through historical material and criticism. In the second part I review the tradition of counterfactuals of analytic philosophers (from Roderick Chisholm and Nelson Goodman to Julian Reiss) and Max Weber. Counterfactuals are conditional statements that contradict existing historical facts by changing or removing the causes of certain events, so that they can demonstrate the significance of these causes for historical events in case the counterfactual causes make a difference for the events. In the third part of the paper, I argue for compatibility between the methodologies, maintaining that understanding and counterfactuals can be beneficial for historical research in the following way: counterfactuals pinpoint the causes and main figures of historical events; knowledge about the figures improves our understanding of them; this understanding helps to see more counterfactual possibilities that can bring to light new causes, deepening our view of history.
24

Sunstein, Cass R. "Historical Explanations Always Involve Counterfactual History". Journal of the Philosophy of History 10, no. 3 (November 17, 2016): 433–40. http://dx.doi.org/10.1163/18722636-12341345.

Abstract:
Historical explanations are a form of counterfactual history. To offer an explanation of what happened, historians have to identify causes, and whenever they identify causes, they immediately conjure up a counterfactual history, a parallel world. No one doubts that there is a great deal of distance between science fiction novelists and the world’s great historians, but along an important dimension, they are playing the same game.
25

Zelenkov, Yuri, and Elizaveta Lashkevich. "Counterfactual explanations based on synthetic data generation". Business Informatics 18, no. 3 (September 30, 2024): 24–40. http://dx.doi.org/10.17323/2587-814x.2024.3.24.40.

Abstract:
A counterfactual explanation is the generation for a particular sample of a set of instances that belong to the opposite class but are as close as possible in the feature space to the factual being explained. Existing algorithms that solve this problem are usually based on complicated models that require a large amount of training data and significant computational cost. We suggest here a method that involves two stages. First, a synthetic set of potential counterfactuals is generated based on simple statistical models (Gaussian copula, sequential model based on conditional distributions, Bayesian network, etc.), and second, instances satisfying constraints on probability, proximity, diversity, etc. are selected. Such an approach enables us to make the process transparent, manageable and to reuse the generative models. Experiments on three public datasets have demonstrated that the proposed method provides results at least comparable to known algorithms of counterfactual explanations, and superior to them in some cases, especially on low-sized datasets. The most effective generation model is a Bayesian network in this case.
26

Ruben, David-Hillel. "A Counterfactual Theory of Causal Explanation". Noûs 28, no. 4 (December 1994): 465. http://dx.doi.org/10.2307/2215475.

27

Rips, Lance J., and Brian J. Edwards. "Inference and Explanation in Counterfactual Reasoning". Cognitive Science 37, no. 6 (January 31, 2013): 1107–35. http://dx.doi.org/10.1111/cogs.12024.

28

Mediavilla-Relaño, Javier, and Marcelino Lázaro. "COCOA: Cost-Optimized COunterfactuAl explanation method". Information Sciences 670 (June 2024): 120616. http://dx.doi.org/10.1016/j.ins.2024.120616.

29

Amitai, Yotam, Yael Septon, and Ofra Amir. "Explaining Reinforcement Learning Agents through Counterfactual Action Outcomes". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10003–11. http://dx.doi.org/10.1609/aaai.v38i9.28863.

Abstract:
Explainable reinforcement learning (XRL) methods aim to help elucidate agent policies and decision-making processes. The majority of XRL approaches focus on local explanations, seeking to shed light on the reasons an agent acts the way it does at a specific world state. While such explanations are both useful and necessary, they typically do not portray the outcomes of the agent's selected choice of action. In this work, we propose "COViz", a new local explanation method that visually compares the outcome of an agent's chosen action to a counterfactual one. In contrast to most local explanations that provide state-limited observations of the agent's motivation, our method depicts alternative trajectories the agent could have taken from the given state and their outcomes. We evaluated the usefulness of COViz in supporting people's understanding of agents' preferences and compare it with reward decomposition, a local explanation method that describes an agent's expected utility for different actions by decomposing it into meaningful reward types. Furthermore, we examine the complementary benefits of integrating both methods. Our results show that such integration significantly improved participants' performance.
30

Snider, Todd, and Adam Bjorndahl. "Informative counterfactuals". Semantics and Linguistic Theory 25 (October 29, 2015): 1. http://dx.doi.org/10.3765/salt.v25i0.3077.

Abstract:
A single counterfactual conditional can have a multitude of interpretations that differ, intuitively, in the connection between antecedent and consequent. Using structural equation models (SEMs) to represent event dependencies, we illustrate various types of explanation compatible with a given counterfactual. We then formalize in the SEM framework the notion of an acceptable explanation, identifying the class of event dependencies compatible with a given counterfactual. Finally, by incorporating SEMs into possible worlds, we provide an update semantics with the enriched structure necessary for the evaluation of counterfactual conditionals.
31

Admassu, Tsehay. "Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection". Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 48–59. http://dx.doi.org/10.46604/peti.2023.10101.

Abstract:
This study aims to investigate the effectiveness of local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP are evaluated by analyzing the diagnostic results of the XGBoost model and the stability and quality of counterfactual explanations. Firstly, 1025 heart disease samples are collected from the University of California Irvine. Then, the performance of LIME and SHAP is compared by using the XGBoost model with various measures, such as consistency and proximity. Finally, Python 3.7 programming language with Jupyter Notebook integrated development environment is used for simulation. The simulation result shows that the XGBoost model achieves 99.79% accuracy, indicating that the counterfactual explanation of the XGBoost model describes the smallest changes in the feature values for changing the diagnosis outcome to the predefined output.
32

Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable Reinforcement Learning through a Causal Lens". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2493–500. http://dx.doi.org/10.1609/aaai.v34i03.5631.

Abstract:
Prominent theories in cognitive science propose that humans understand and represent the knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events and use these to explain why new events happen by referring to counterfactuals — things that did not happen. In this paper, we use causal models to derive causal explanations of the behaviour of model-free reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We computationally evaluate the model in 6 domains and measure performance and task prediction accuracy. We report on a study with 120 participants who observe agents playing a real-time strategy game (Starcraft II) and then receive explanations of the agents' behaviour. We investigate: 1) participants' understanding gained by explanations through task prediction; 2) explanation satisfaction and 3) trust. Our results show that causal model explanations perform better on these measures compared to two other baseline explanation models.
33

Li, Xueyan, Yahui Zhao, Huili Wang, and Xue Zhang. "Embodied Meaning in Comprehending Abstract Chinese Counterfactuals". Chinese Journal of Applied Linguistics 47, no. 3 (September 1, 2024): 414–32. http://dx.doi.org/10.1515/cjal-2024-0303.

Abstract:
Embodied cognition theories propose that language comprehension triggers a sensorimotor system in the brain. However, most previous research has paid much attention to concrete and factual sentences, and little emphasis has been put on the research of abstract and counterfactual sentences. The primary challenges for embodied theories lie in elucidating the meanings of abstract and counterfactual sentences. The most prevalent explanation is that abstract and counterfactual sentences are grounded in the activation of a sensorimotor system, in exactly the same way as concrete and factual ones. The present research employed a dual-task experimental paradigm to investigate whether the embodied meaning is activated in comprehending action-related abstract Chinese counterfactual sentences through the presence or absence of action-sentence compatibility effect (ACE). Participants were instructed to read and listen to the action-related abstract Chinese factual or counterfactual sentences describing an abstract transfer word towards or away from them, and then move their fingers towards or away from them to press the buttons in the same direction as the motion cue of the transfer verb. The action-sentence compatibility effect was observed in both abstract factual and counterfactual sentences, in line with the embodied cognition theories, which indicated that the embodied meanings were activated in both action-related abstract factuals and counterfactuals.
34

Fernández-Loría, Carlos, Foster Provost, and Xintian Han. "Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach". MIS Quarterly 45, no. 3 (September 1, 2022): 1635–60. http://dx.doi.org/10.25300/misq/2022/16749.

Abstract:
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
35

Kovalev, Maxim, Lev Utkin, Frank Coolen, and Andrei Konstantinov. "Counterfactual Explanation of Machine Learning Survival Models". Informatica 32, no. 4 (2021): 817–47. http://dx.doi.org/10.15388/21-infor468.

36

Zahedi, Zahra, Sailik Sengupta, and Subbarao Kambhampati. "‘Why Didn’t You Allocate This Task to Them?’ Negotiation-Aware Task Allocation and Contrastive Explanation Generation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10243–51. http://dx.doi.org/10.1609/aaai.v38i9.28890.

Abstract:
In this work, we design an Artificially Intelligent Task Allocator (AITA) that proposes a task allocation for a team of humans. A key property of this allocation is that when an agent with imperfect knowledge (about their teammate's costs and/or the team's performance metric) contests the allocation with a counterfactual, a contrastive explanation can always be provided to showcase why the proposed allocation is better than the proposed counterfactual. For this, we consider a negotiation process that produces a negotiation-aware task allocation and, when contested, leverages a negotiation tree to provide a contrastive explanation. With human subject studies, we show that the proposed allocation indeed appears fair to a majority of participants and, when not, the explanations generated are judged as convincing and easy to comprehend.
37

Chalyi, Serhii, and Volodymyr Leshchynskyi. "Probabilistic Counterfactual Causal Model for a Single Input Variable in Explainability Task". Advanced Information Systems 7, no. 3 (September 20, 2023): 54–59. http://dx.doi.org/10.20998/2522-9052.2023.3.08.

Abstract:
The subject of research in this article is the process of constructing explanations in intelligent systems represented as black boxes. The aim is to develop a counterfactual causal model between the values of an input variable and the output of an artificial intelligence system, considering possible alternatives for different input variable values, as well as the probabilities of these alternatives. The goal is to explain the actual outcome of the system's operation to the user, along with potential changes in this outcome according to the user's requirements based on changes in the input variable value. The intelligent system is considered as a "black box." Therefore, this causal relationship is formed using possibility theory, which allows accounting for the uncertainty arising due to the incompleteness of information about changes in the states of the intelligent system in the decision-making process. The tasks involve: structuring the properties of a counterfactual explanation in the form of a causal dependency; formulating the task of building a potential counterfactual causal model for explanation; developing a possible counterfactual causal model. The employed approaches include: the set-theoretic approach, used to describe the components of the explanation construction process in intelligent systems; the logical approach, providing the representation of causal dependencies between input data and the system's decision. The following results were obtained. The structuring of counterfactual causal dependency was executed. A comprehensive task of constructing a counterfactual causal dependency was formulated as a set of subtasks aimed at establishing connections between causes and consequences based on minimizing discrepancies in input data values and deviations in the decisions of the intelligent system under conditions of incomplete information regarding the functioning process of the system. A potential counterfactual causal model for a single input variable was developed. Conclusions. The scientific novelty of the obtained results lies in the proposal of a potential counterfactual causal model for a single input variable. This model defines a set of alternative connections between the values of the input variable and the obtained result based on estimates of the possibility and necessity of using these variables to obtain a decision from the intelligent system. The model enables the formation of a set of dependencies that explain to the user the importance of input data values for achieving an acceptable decision for the user.
38

Ley, Dan, Umang Bhatt, and Adrian Weller. "Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7390–98. http://dx.doi.org/10.1609/aaai.v36i7.20702.

Abstract:
To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating a single Counterfactual Latent Uncertainty Explanation (CLUE) for a given data point where the model is uncertain. We broaden the exploration to examine δ-CLUE, the set of potential CLUEs within a δ ball of the original input in latent space. We study the diversity of such sets and find that many CLUEs are redundant; as such, we propose DIVerse CLUE (∇-CLUE), a set of CLUEs which each propose a distinct explanation as to how one can decrease the uncertainty associated with an input. We then further propose GLobal AMortised CLUE (GLAM-CLUE), a distinct, novel method which learns amortised mappings that apply to specific groups of uncertain inputs, taking them and efficiently transforming them in a single function call into inputs for which a model will be certain. Our experiments show that δ-CLUE, ∇-CLUE, and GLAM-CLUE all address shortcomings of CLUE and provide beneficial explanations of uncertainty estimates to practitioners.
39

Reutlinger, Alexander. "Does the counterfactual theory of explanation apply to non-causal explanations in metaphysics?" European Journal for Philosophy of Science 7, no. 2 (August 19, 2016): 239–56. http://dx.doi.org/10.1007/s13194-016-0155-z.

40

Kanamori, Kentaro, Takuya Takagi, Ken Kobayashi, Yuichi Ike, Kento Uemura, and Hiroki Arimura. "Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11564–74. http://dx.doi.org/10.1609/aaai.v35i13.17376.

Abstract:
Post-hoc explanation methods for machine learning models have been widely used to support decision-making. One of the popular methods is Counterfactual Explanation (CE), also known as Actionable Recourse, which provides a user with a perturbation vector of features that alters the prediction result. Given a perturbation vector, a user can interpret it as an "action" for obtaining one's desired decision result. In practice, however, showing only a perturbation vector is often insufficient for users to execute the action. The reason is that if there is an asymmetric interaction among features, such as causality, the total cost of the action is expected to depend on the order of changing features. Therefore, practical CE methods are required to provide an appropriate order of changing features in addition to a perturbation vector. For this purpose, we propose a new framework called Ordered Counterfactual Explanation (OrdCE). We introduce a new objective function that evaluates a pair of an action and an order based on feature interaction. To extract an optimal pair, we propose a mixed-integer linear optimization approach with our objective function. Numerical experiments on real datasets demonstrated the effectiveness of our OrdCE in comparison with unordered CE methods.
41

Dehghani, Morteza, Rumen Iliev, and Stefan Kaufmann. "Causal Explanation and Fact Mutability in Counterfactual Reasoning". Mind & Language 27, no. 1 (January 12, 2012): 55–85. http://dx.doi.org/10.1111/j.1468-0017.2011.01435.x.

42

Chalyi, Serhii, Volodymyr Leshchynskyi, and Irina Leshchynska. "Counterfactual Temporal Model of Causal Relationships for Constructing Explanations in Intelligent Systems". Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (6) (December 28, 2021): 41–46. http://dx.doi.org/10.20998/2079-0023.2021.02.07.

Abstract:
The subject of the research is the processes of constructing explanations based on causal relationships between states or actions of an intellectual system. An explanation is knowledge about the sequence of causes and effects that determine the process and result of an intelligent information system. The aim of the work is to develop a counterfactual temporal model of cause-and-effect relationships as part of an explanation of the process of functioning of an intelligent system in order to ensure the identification of causal dependencies based on the analysis of the logs of the behavior of such a system. To achieve the stated goals, the following tasks are solved: determination of the temporal properties of the counterfactual description of cause-and-effect relationships between actions or states of an intelligent information system; development of a temporal model of causal connections, taking into account both the facts of occurrence of events in the intellectual system, and the possibility of occurrence of events that do not affect the formation of the current decision. Conclusions. The structuring of the temporal properties of causal links for pairs of events that occur sequentially in time or have intermediate events is performed. Such relationships are represented by alternative causal relationships using the temporal operators "Next" and "Future", which allows realizing a counterfactual approach to the representation of causality. A counterfactual temporal model of causal relationships is proposed, which determines deterministic causal relationships for pairs of consecutive events and pairs of events between which there are other events, which determines the transitivity property of such dependencies and, accordingly, creates conditions for describing the sequence of causes and effects as part of the explanation in an intelligent system with a given degree of detail. The model provides the ability to determine cause-and-effect relationships, between which there are intermediate events that do not affect the final result of the intelligent information system.
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Thagard, Paul. "Naturalizing Logic: How Knowledge of Mechanisms Enhances Inductive Inference". Philosophies 6, n.º 2 (21 de junho de 2021): 52. http://dx.doi.org/10.3390/philosophies6020052.

Texto completo da fonte
Resumo:
This paper naturalizes inductive inference by showing how scientific knowledge of real mechanisms provides large benefits to it. I show how knowledge about mechanisms contributes to generalization, inference to the best explanation, causal inference, and reasoning with probabilities. Generalization from some A are B to all A are B is more plausible when a mechanism connects A to B. Inference to the best explanation is strengthened when the explanations are mechanistic and when explanatory hypotheses are themselves mechanistically explained. Causal inference in medical explanation, counterfactual reasoning, and analogy also benefit from mechanistic connections. Mechanisms also help with problems concerning the interpretation, availability, and computation of probabilities.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Srinivasan, Abishek, Varun Singapura Ravi, Juan Carlos Andresen e Anders Holst. "Counterfactual Explanation for Auto-Encoder Based Time-Series Anomaly Detection". PHM Society European Conference 8, n.º 1 (27 de junho de 2024): 9. http://dx.doi.org/10.36001/phme.2024.v8i1.4087.

Texto completo da fonte
Resumo:
The complexity of modern electro-mechanical systems requires sophisticated diagnostic methods, such as anomaly detection, capable of detecting deviations. Conventional anomaly detection approaches based on signal processing and statistical modelling often struggle to handle the intricacies of complex systems, particularly multi-variate signals. In contrast, neural network-based anomaly detection methods, especially Auto-Encoders, have emerged as a compelling alternative with remarkable performance. However, Auto-Encoders are opaque in their decision-making, which hinders their practical implementation at scale. Addressing this opacity is essential for making anomaly detection models interpretable and trustworthy. In this work, we address this challenge by employing a feature selector to select features and counterfactual explanations to give context to the model output. We tested this approach on the SKAB benchmark dataset and an industrial time-series dataset. The gradient-based counterfactual explanation approach was evaluated via validity, sparsity and distance measures. Our experimental findings illustrate that the proposed counterfactual approach offers meaningful and valuable insights into the model's decision-making process by explaining fewer signals than conventional approaches. These insights enhance the trustworthiness and interpretability of anomaly detection models.
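A minimal sketch of a gradient-based counterfactual search for an auto-encoder anomaly detector follows; it assumes a trained PyTorch model, an anomaly threshold on reconstruction error, and the hyperparameter values shown, and is an illustration rather than the authors' implementation.

import torch

def gradient_counterfactual(model, x, threshold, sparsity_weight=0.1,
                            lr=0.01, steps=500):
    # Start the counterfactual at the anomalous input and optimize it directly.
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Reconstruction error of the auto-encoder on the candidate counterfactual.
        recon_error = torch.mean((model(x_cf) - x_cf) ** 2)
        # Push the error below the anomaly threshold while keeping the change
        # sparse, so that only a few signals need to be explained.
        loss = torch.relu(recon_error - threshold) \
               + sparsity_weight * torch.norm(x_cf - x, p=1)
        loss.backward()
        optimizer.step()
    return x_cf.detach()

Validity (is the counterfactual classified as normal?), sparsity (how many signals changed?), and distance (how far did they move?) can then be measured by comparing x_cf with the original x.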
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Contreras, Antonio, e Juan Antonio García-Madruga. "The relationship of counterfactual reasoning and false belief understanding: the role of prediction and explanation tasks". Psicológica Journal 41, n.º 2 (1 de julho de 2020): 127–61. http://dx.doi.org/10.2478/psicolj-2020-0007.

Texto completo da fonte
Resumo:
The relation between the prediction and explanation versions of the false belief task (FBT) and counterfactual reasoning (CFR) was explored. Fifty-eight 3- to 5-year-olds received a prediction or an explanation FBT, a belief attribution task, and counterfactual questions of increasing difficulty. Linguistic comprehension was also controlled. CFR strongly predicted FBT performance in the explanation version but not in the prediction one. Additionally, results in the explanation version indicate that CFR underlies achievements prior to the understanding of the representational mind and stimulates the explicitness of the mental domain. This study identifies the conditions under which CFR becomes a fundamental cognitive tool for social cognition. The results contribute to the dialog between the two major theoretical approaches: theory-theory and simulation theory.
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

AlJalaud, Ebtisam, e Manar Hosny. "Enhancing Explainable Artificial Intelligence: Using Adaptive Feature Weight Genetic Explanation (AFWGE) with Pearson Correlation to Identify Crucial Feature Groups". Mathematics 12, n.º 23 (27 de novembro de 2024): 3727. http://dx.doi.org/10.3390/math12233727.

Texto completo da fonte
Resumo:
The ‘black box’ nature of machine learning (ML) approaches makes it challenging to understand how most artificial intelligence (AI) models make decisions. Explainable AI (XAI) aims to provide analytical techniques for understanding the behavior of ML models. XAI utilizes counterfactual explanations that indicate how variations in input features lead to different outputs. However, existing methods should also highlight the importance of features, so that explanations become more actionable, help identify the key drivers behind model decisions, and thus support more reliable interpretations. The method we propose uses feature weights obtained through adaptive feature weight genetic explanation (AFWGE), together with the Pearson correlation coefficient (PCC), to determine the most crucial group of features. The proposed method was tested on four real datasets with nine different classifiers and evaluated against a nonweighted counterfactual explanation method (CERTIFAI) and the correlation of the original feature values. The results show significant improvements in accuracy, precision, recall, and F1 score for most datasets and classifiers, indicating that the feature weights selected via AFWGE with the PCC outperform CERTIFAI and the original data values in determining the most important group of features. Focusing on important feature groups clarifies the behavior of AI models and enhances decision making, resulting in more reliable AI systems.
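For illustration, the sketch below shows one way Pearson correlation can weight feature changes in a counterfactual distance; the function names and the use of |PCC| against model scores are assumptions, and AFWGE's genetic search for the weights is not reproduced here.

import numpy as np

def pcc_feature_weights(X, y_scores):
    # |Pearson correlation| between each feature column and the model's scores
    # over a reference set; larger values suggest more influential features.
    weights = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y_scores)[0, 1]
        weights.append(abs(r))
    return np.array(weights)

def weighted_counterfactual_distance(x, x_cf, weights):
    # Weighted L1 distance: changes to highly correlated features count more,
    # so counterfactuals that rely on them are ranked as more decisive.
    return float(np.sum(weights * np.abs(x_cf - x)))

Under this weighting, groups of features with the largest weighted changes between x and x_cf can be reported as the crucial feature group behind the model's decision.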
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Kanamori, Kentaro, Takuya Takagi, Ken Kobayashi e Hiroki Arimura. "Distribution-Aware Counterfactual Explanation by Mixed-Integer Linear Optimization". Transactions of the Japanese Society for Artificial Intelligence 36, n.º 6 (1 de novembro de 2021): C—L44_1–12. http://dx.doi.org/10.1527/tjsai.36-6_c-l44.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Mahoney, James, e Rodrigo Barrenechea. "The logic of counterfactual analysis in case-study explanation". British Journal of Sociology 70, n.º 1 (19 de dezembro de 2017): 306–38. http://dx.doi.org/10.1111/1468-4446.12340.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Bours, Martijn J. L. "A nontechnical explanation of the counterfactual definition of confounding". Journal of Clinical Epidemiology 121 (maio de 2020): 91–100. http://dx.doi.org/10.1016/j.jclinepi.2020.01.021.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Zhang, Songming, Xiaofeng Chen, Shiping Wen e Zhongshan Li. "Density-based reliable and robust explainer for counterfactual explanation". Expert Systems with Applications 226 (setembro de 2023): 120214. http://dx.doi.org/10.1016/j.eswa.2023.120214.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.