A selection of scholarly literature on the topic "Scenario and counterfactual explanations"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Scenario and counterfactual explanations."

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Scenario and counterfactual explanations"

1

Labaien Soto, Jokin, Ekhi Zugasti Uriguen, and Xabier De Carlos Garcia. "Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using Autoencoders." Applied Sciences 13, no. 5 (February 24, 2023): 2912. http://dx.doi.org/10.3390/app13052912.

Full text of the source
Abstract:
Explainable Artificial Intelligence (XAI) has gained significant attention in recent years due to concerns over the lack of interpretability of Deep Learning models, which hinders their decision-making processes. To address this issue, counterfactual explanations have been proposed to elucidate the reasoning behind a model’s decisions by providing what-if statements as explanations. However, generating counterfactuals traditionally involves solving an optimization problem for each input, making it impractical for real-time feedback. Moreover, counterfactuals must meet specific criteria, including being user-driven, causing minimal changes, and staying within the data distribution. To overcome these challenges, a novel model-agnostic approach called Real-Time Guided Counterfactual Explanations (RTGCEx) is proposed. This approach utilizes autoencoders to generate real-time counterfactual explanations that adhere to these criteria by optimizing a multiobjective loss function. The performance of RTGCEx has been evaluated on two datasets: MNIST and Gearbox, a synthetic time series dataset. The results demonstrate that RTGCEx outperforms traditional methods in terms of speed and efficacy on MNIST, while also effectively identifying and rectifying anomalies in the Gearbox dataset, highlighting its versatility across different scenarios.
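The per-input optimization that RTGCEx is designed to avoid can be illustrated with a deliberately tiny sketch (ours, not the paper's method): for a linear classifier, the smallest prediction-flipping perturbation lies along the weight vector, and a naive search simply steps along it until the class changes. All weights and inputs below are illustrative.

```python
# Minimal sketch (not the paper's RTGCEx): the classical per-instance
# optimization that autoencoder-based methods amortize away.
# We search for the smallest perturbation of x that flips a linear
# classifier's prediction, by stepping along the weight vector.

def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b >= 0, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def counterfactual(w, b, x, step=0.01, max_iter=10_000):
    """Move x along +/- w (the direction of fastest score change)
    until the predicted class flips; return the counterfactual."""
    y0 = predict(w, b, x)
    direction = [-wi if y0 == 1 else wi for wi in w]
    cf = list(x)
    for _ in range(max_iter):
        if predict(w, b, cf) != y0:
            return cf
        cf = [ci + step * di for ci, di in zip(cf, direction)]
    raise RuntimeError("no counterfactual found within max_iter")

w, b = [1.0, -2.0], 0.5
x = [1.0, 1.0]                              # classified as 0 (score = -0.5)
cf = counterfactual(w, b, x)
print(predict(w, b, x), predict(w, b, cf))  # 0 1
```

Running such a loop for every incoming instance is what makes classical counterfactual generation too slow for real-time feedback; RTGCEx instead trains a generator once so that inference is a single forward pass.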
APA, Harvard, Vancouver, ISO, and other styles
2

Virmajoki, Veli. "Frameworks in Historiography: Explanation, Scenarios, and Futures." Journal of the Philosophy of History 17, no. 2 (July 3, 2023): 288–309. http://dx.doi.org/10.1163/18722636-12341501.

Full text of the source
Abstract:
In this paper, I analyze how frameworks shape historiographical explanations. I argue that, in order to identify a sequence of events as relevant to a historical outcome, assumptions about the workings of the relevant domain have to be made. By extending Lakatosian considerations, I argue that these assumptions are provided by a framework that contains a set of factors and intertwined principles that (supposedly) govern how a historical phenomenon works. I connect frameworks with a counterfactual account of historical explanation. Frameworks enable us to explain the past by providing a backbone of explanatory patterns of counterfactual dependency. I conclude by noting that both counterfactual scenarios and scenarios of the future require frameworks and, therefore, historiographical explanation generates a set of possible futures. Analyzing these possible futures enables us to reveal the theoretical commitments of historiography.
APA, Harvard, Vancouver, ISO, and other styles
3

Crawford, Beverly. "Germany's Future Political Challenges: Imagine that The New Yorker Profiled the German Chancellor in 2015." German Politics and Society 23, no. 4 (December 1, 2005): 69–87. http://dx.doi.org/10.3167/gps.2005.230404.

Full text of the source
Abstract:
What follows is a fictitious scenario, a "thought experiment," meant to project a particular future for Germany if certain assumptions hold. Scenarios are hypotheses that rest on a set of assumptions and one or two "wild cards." They can reveal forces of change that might be otherwise hidden, discard those that are not plausible, and describe the future of trends that are relatively certain. Indeed, scenarios create a particular future in the same way that counterfactual methods create a different past. Counterfactual methods predict how events would have unfolded had a few elements of the story been changed, with a focus on varying conditions that seem important and that can be manipulated. For instance, to explore the effects of military factors on the likelihood of war, one might ask: "how would pre-1914 diplomacy have evolved if the leaders of Europe had not believed that conquest was easy?" Or to explore the importance of broad social and political factors in causing Nazi aggression: "How might the 1930s have unfolded had Hitler died in 1932?" The greater the impact of the posited changes, the more important the actual factors that were manipulated. Assuming that the structure of explanation and prediction are the same, scenario writing pursues a similar method. But, instead of seeking alternative explanations for the past, scenarios project relative certainties and then manipulate the important but uncertain factors, to create a plausible story about the future.
APA, Harvard, Vancouver, ISO, and other styles
4

Nolan, Daniel. "The Possibilities of History." Journal of the Philosophy of History 10, no. 3 (November 17, 2016): 441–56. http://dx.doi.org/10.1163/18722636-12341346.

Full text of the source
Abstract:
Several kinds of historical alternatives are distinguished. Different kinds of historical alternatives are valuable to the practice of history for different reasons. Important uses for historical alternatives include representing different sides of historical disputes; distributing chances of different outcomes over alternatives; and offering explanations of why various alternatives did not in fact happen. Consideration of counterfactuals about what would have happened had things been different in particular ways plays particularly useful roles in reasoning about historical analogues of current conditions; reasoning about causal claims; and in evaluating historical explanations. When evaluating the role of alternative histories in historical thinking, we should keep in mind the uses of historical alternatives that go well beyond the long-term and specific scenarios that are the focus of so-called “counterfactual history”.
APA, Harvard, Vancouver, ISO, and other styles
5

Rahimi, Saeed, Antoni B. Moore, and Peter A. Whigham. "Beyond Objects in Space-Time: Towards a Movement Analysis Framework with ‘How’ and ‘Why’ Elements." ISPRS International Journal of Geo-Information 10, no. 3 (March 22, 2021): 190. http://dx.doi.org/10.3390/ijgi10030190.

Full text of the source
Abstract:
Today's spatiotemporal data have enabled movement studies to shift their objectives from descriptive models to explanations of the underlying causes of movement. From both a practical and theoretical standpoint, progress in developing approaches for these explanations should be founded on a conceptual model. This paper presents such a model, in which three conceptual levels of abstraction are proposed to frame an agent-based representation of movement decision-making processes: 'attribute,' 'actor,' and 'autonomous agent.' These, in combination with three general forms of observation (temporal, spatial, and spatiotemporal), distinguish nine (3 × 3) representation typologies of movement data within the agent framework. The model adds three levels of cognitive reasoning: 'association,' 'intervention,' and 'counterfactual.' This makes for 27 possible types of operation embedded in a conceptual cube, with the level of abstraction, type of observation, and degree of cognitive reasoning forming the three axes. The conceptual model is an arena where movement queries and the statement of relevant objectives take place. An example implementation of a tightly constrained spatiotemporal scenario is summarised to ground the agent structure. The platform is defined so as to accommodate the different tools and techniques that will drive causal inference in computational movement analysis as an immediate next step.
APA, Harvard, Vancouver, ISO, and other styles
6

Delaney, Eoin, Arjun Pakrashi, Derek Greene, and Mark T. Keane. "Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ (Abstract Reprint)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22696. http://dx.doi.org/10.1609/aaai.v38i20.30596.

Full text of the source
Abstract:
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
APA, Harvard, Vancouver, ISO, and other styles
7

Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations." Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.

Full text of the source
Abstract:
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
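The idea of achievable, minimally-contrastive counterfactuals can be sketched in a few lines (our own toy version, not the authors' AMCC algorithm): restrict the search to domain-feasible edits and keep the smallest one that flips the model's output. The loan-style model, feature names, and feasible deltas below are all illustrative assumptions.

```python
# Hedged sketch of the idea behind AMCC (not the authors' algorithm):
# search only over domain-feasible single-feature edits and keep the
# smallest one that changes a black-box model's output.

def model(x):
    """Toy black-box: 'loan approved' iff income - 2*debt >= 10."""
    return x["income"] - 2 * x["debt"] >= 10

# Domain constraints: which features may change, and in which direction.
FEASIBLE = {
    "income": [+1, +2, +5],   # income can only increase
    "debt":   [-1, -2, -5],   # debt can only decrease
    # "age" is immutable, so it is absent here
}

def amcc_style_counterfactual(x):
    """Return the minimally-contrastive feasible edit that flips model(x)."""
    y0 = model(x)
    best = None
    for feat, deltas in FEASIBLE.items():
        for d in deltas:
            cand = dict(x, **{feat: x[feat] + d})
            if model(cand) != y0 and (best is None or abs(d) < abs(best[1])):
                best = (feat, d)
    return best

applicant = {"income": 10, "debt": 2, "age": 40}  # rejected: 10 - 4 = 6 < 10
print(amcc_style_counterfactual(applicant))       # ('debt', -2)
```

The constraint table is what makes the advice achievable: "reduce debt by 2" is actionable, whereas an unconstrained search might propose editing an immutable feature such as age.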
APA, Harvard, Vancouver, ISO, and other styles
8

Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics." Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.

Full text of the source
Abstract:
Our goal in this paper is to extend counterfactual accounts of scientific explanation to mathematics. Our focus, in particular, is on intra-mathematical explanations: explanations of one mathematical fact in terms of another. We offer a basic counterfactual theory of intra-mathematical explanations, before modelling the explanatory structure of a test case using counterfactual machinery. We finish by considering the application of counterpossibles to mathematical explanation, and explore a second test case along these lines.
APA, Harvard, Vancouver, ISO, and other styles
9

Fernández-Loría, Carlos, Foster Provost, and Xintian Han. "Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach." MIS Quarterly 45, no. 3 (September 1, 2022): 1635–60. http://dx.doi.org/10.25300/misq/2022/16749.

Full text of the source
Abstract:
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
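The paper's definition, a set of inputs that flips the decision when changed and is irreducible (no proper subset flips it), admits a direct brute-force rendering for small inputs. The sketch below is our own illustration under that assumption; the toy decision rule and counterfactual values are invented.

```python
# Sketch of the definition above (our own brute force, feasible only for
# small inputs): an explanation is a set of inputs that (a) flips the
# decision when all are changed and (b) is irreducible - no proper
# subset of it flips the decision.
from itertools import combinations

def decision(x):
    """Toy decision model: approve iff at least two flags are favourable."""
    return sum(x.values()) >= 2

def flips(x, subset, alt):
    """Does changing exactly `subset` to its counterfactual values flip it?"""
    changed = dict(x, **{k: alt[k] for k in subset})
    return decision(changed) != decision(x)

def counterfactual_explanations(x, alt):
    """All irreducible input sets whose change flips the decision."""
    feats = sorted(x)
    results = []
    for r in range(1, len(feats) + 1):
        for subset in combinations(feats, r):
            if flips(x, subset, alt) and not any(
                flips(x, sub, alt)
                for rr in range(1, r)
                for sub in combinations(subset, rr)
            ):
                results.append(set(subset))
    return results

x   = {"a": 1, "b": 1, "c": 0}   # decision: True
alt = {"a": 0, "b": 0, "c": 1}   # counterfactual value for each input
print(counterfactual_explanations(x, alt))  # [{'a'}, {'b'}]
```

Note how the irreducibility clause excludes {"a", "b"}: it flips the decision, but so does {"a"} alone, which is exactly the contrast the paper draws with importance weights, where a heavily weighted feature may not belong to any decision-flipping set at all.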
APA, Harvard, Vancouver, ISO, and other styles
10

Prado-Romero, Mario Alfonso, Bardh Prenkaj, and Giovanni Stilo. "Robust Stochastic Graph Generator for Counterfactual Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21518–26. http://dx.doi.org/10.1609/aaai.v38i19.30149.

Full text of the source
Abstract:
Counterfactual Explanation (CE) techniques have garnered attention as a means to provide insights to the users engaging with AI systems. While extensively researched in domains such as medical imaging and autonomous vehicles, Graph Counterfactual Explanation (GCE) methods have been comparatively under-explored. GCEs generate a new graph similar to the original one, with a different outcome grounded on the underlying predictive model. Among these GCE techniques, those rooted in generative mechanisms have received relatively limited investigation despite demonstrating impressive accomplishments in other domains, such as artistic styles and natural language modelling. The preference for generative explainers stems from their capacity to generate counterfactual instances during inference, leveraging autonomously acquired perturbations of the input graph. Motivated by the rationales above, our study introduces RSGG-CE, a novel Robust Stochastic Graph Generator for Counterfactual Explanations able to produce counterfactual examples from the learned latent space considering a partially ordered generation sequence. Furthermore, we undertake quantitative and qualitative analyses to compare RSGG-CE's performance against state-of-the-art generative explainers, highlighting its increased ability to engender plausible counterfactual candidates.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Scenario and counterfactual explanations"

1

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Full text of the source
Abstract:
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction that a trained decision model makes for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed as rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints with knowledge compatibility is also studied, and we propose to use Gödel's integral as the aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users, and the notion of diversity in explanations.
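The flavour of min-based aggregation in the Gödel tradition can be illustrated with a toy example. This is a loose sketch, not the thesis's Gödel-integral operator: we use the simpler Gödel t-norm (minimum), and the criteria names and scores are invented.

```python
# Loose illustration (not the thesis's exact operator): min-style
# aggregation in the Godel spirit. A candidate counterfactual gets
# several scores in [0, 1] (proximity, sparsity, compatibility with
# user knowledge); a Godel-style conjunction keeps the worst score,
# so one badly violated criterion cannot be averaged away.

def godel_and(scores):
    """Godel t-norm: the conjunction of [0, 1] degrees is their minimum."""
    return min(scores)

candidates = {
    "cf1": [0.9, 0.8, 0.1],   # great proximity, but violates user knowledge
    "cf2": [0.7, 0.6, 0.7],   # decent on every criterion
}
best = max(candidates, key=lambda c: godel_and(candidates[c]))
print(best)  # cf2
```

Under an arithmetic mean, cf1 (0.60) would beat cf2 (0.67 vs 0.60 reversed: cf2 wins here too, but only narrowly); the min-based conjunction makes the knowledge-violating candidate lose decisively, which is the behaviour one wants when compatibility with user knowledge is a hard-ish requirement.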
APA, Harvard, Vancouver, ISO, and other styles
2

Ball, Russell Andrew. "An investigation of biased depictions of normality in counterfactual scenario studies." 2004. http://hdl.handle.net/1828/546.

Full text of the source
Abstract:
Counterfactual research and Norm Theory (Kahneman & Miller, 1986) predict that abnormal antecedents will be more mutable than normal antecedents. Individuals who behaved abnormally prior to accidental or criminal victimization (e.g., choosing a different route home) are usually awarded higher compensation than those victimized in more routine circumstances. Abnormality is said to provoke more available alternatives, and is cited as a positive correlate of affect (the emotional amplification hypothesis). Enhanced affective response is said to be responsible for greater compensation to victims and more severe punishment of offenders. This thesis challenged the notion that exceptional circumstances always have more available alternatives than routine circumstances, incorporating higher methodological rigor and a more realistic legal context than previous studies. Results indicated that the degree of alternative availability is not so much a function of normality itself but of how normality is conveyed in scenarios. Routine circumstances can be just as mutable as exceptional circumstances. Scenario studies investigating criminal punishment which separated alternative availability and normality provided evidence of a moderating effect of availability, as well as an interaction between victim and offender availability. The findings help to revise assertions made by psychological and legal scholars concerning mutability.
APA, Harvard, Vancouver, ISO, and other styles
3

Charrua, Vera Veríssimo Agostinho Sabino. "Pensamento contrafactual : ator e leitor em cenários negativos e positivos." Master's thesis, 2021. http://hdl.handle.net/10400.12/8098.

Full text of the source
Abstract:
Master's dissertation presented at ISPA – Instituto Universitário de Ciências Psicológicas, Sociais e da Vida, for the degree of Master in the specialty of Clinical Psychology.
Counterfactual thought is a type of spontaneous thought that allows us to search for mental alternatives to our present reality; it occurs frequently in everyday life and is typically expressed through a "what if…" clause (Byrne, 2016). Many studies have suggested that variables such as role (actor vs. reader) (Girotto et al., 2007; Pighin et al., 2011) or scenario/outcome valence (positive vs. negative) (Roese & Olson, 1993a, 1993b) affect certain dimensions of counterfactual thought. In this study, we add another question on this issue: do actors and readers produce the same counterfactual thoughts for positive outcomes as they are known to produce for negative ones? We hypothesised that there would be no differences between roles in a positive scenario, and that the scenario type would affect actors but not readers. The results supported both hypotheses. Additionally, we proposed two further hypotheses: one concerning outcome valence and the structure of counterfactual thought (additive vs. subtractive), where within the actor groups we expected more subtractive counterfactuals in a positive scenario and more additive counterfactuals in a negative scenario, with no differences expected between the two groups of readers; and one concerning outcome valence and causal attribution (internal vs. external), where within the actor groups we expected success to be attributed to internal factors and failure to external factors, with no differences expected between readers regardless of outcome valence. The results for these last two hypotheses did not match our predictions.
APA, Harvard, Vancouver, ISO, and other styles
4

Lizarazo Jimenez, Cristhian. "Identification of Failure-Caused Traffic Conflicts in Tracking Systems: A General Framework." Thesis, 2020.

Find the full text of the source
Abstract:

Proactive evaluation of road safety is one of the most important objectives of transportation engineers. While current practice typically relies on crash-based analysis after the fact to diagnose safety problems and provide corrective countermeasures on roads, surrogate measures of safety are emerging as a complementary evaluation that can allow engineers to proactively respond to safety issues. These surrogate measures attempt to address the primary limitations of crash data, which include underreporting, lack of reliable insight into the events leading to the crash, and long data collection times.

Traffic conflicts are one of the most widely adopted surrogate measures of safety because they meet the following two conditions for crash surrogacy: (1) they are non-crash events that can be physically related in a predictable and reliable way to crashes, and (2) there is a potential for bridging crash frequency and severity with traffic conflicts. However, three primary issues were identified in the literature that need to be resolved for the practical application of conflicts: (1) the lack of consistency in the definition of traffic conflict, (2) the predictive validity from such events, and (3) the adequacy of traffic conflict observations.

Tarko (2018) developed a theoretical framework in response to the first two issues and defined traffic conflicts using counterfactual theory as events where the lack of timely responses from drivers or road users can produce crashes if there is no evasive action. The author further introduced a failure-based definition to emphasize conflicts as an undesirable condition that needs to be corrected to avoid a crash. In this case, the probability of a crash, given failure, depends on the response delay. The distribution of this delay is fitted to data, and the probability is estimated using the fitted distribution. As this formal theory addresses the first two issues, a complete framework for the proper identification of conflicts needs to be investigated in line with the failure mechanism proposed in this theory.

The objective of this dissertation, in response to the third issue, is to provide a generalized framework for proper identification of traffic conflicts by considering the failure-based definition of traffic conflicts. The framework introduced in this dissertation is built upon an empirical evaluation of the methods applied to identify traffic conflicts from naturalistic driving studies and video-based tracking systems. This dissertation aimed to prove the practicality of the framework for proactive safety evaluation using emerging technologies from in-vehicle and roadside instrumentation.

Two conditions must be met to properly claim observed traffic events as traffic conflicts: (1) analysis of longitudinal and lateral acceleration profiles to identify the response due to failure, and (2) estimation of the time-to-collision as the period between the end of the evasion and the hypothetical collision. The hypothetical collision point is identified by extrapolating user behavior in the counterfactual scenario of no evasion.

The results from the SHRP2 study were particularly encouraging, where the appropriate identification of traffic conflicts resulted in the estimation of an expected number of crashes similar to the number reported in the study. The results also met the theoretical postulates including stabilization of the estimated crashes at lower proximity values and Lomax-distributed response delays. In terms of area-wide tracking systems, the framework was successful in identifying and removing failure-free encounters from the In-Depth understanding of accident causation for Vulnerable road users (InDeV) program.

This dissertation also extended the application of traffic conflicts technique by considering estimation of the severity of a hypothetical crash given that a conflict occurs. This component is important in order for conflicts to resemble the practical applications of crashes, including the diagnostics of hazardous locations and evaluating the effectiveness of the countermeasures. Countermeasures should not only reduce the number of conflicts but also the risk of crash given the conflict. Severity analysis identifies the environmental, road, driver, and pre-crash conditions that increase the likelihood of severe impacts. Using dynamic characterization of crash events, this dissertation structured a probability model to evaluate crash reporting and its associated severity. Multinomial logistic models were applied in the estimation; and quasi-complete separation in logistic regression was addressed by providing a Bayesian estimation of these models.
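The counterfactual step described above, extrapolating what would have happened with no evasive action, reduces in the simplest one-dimensional car-following case to a closing-speed calculation. The sketch below is our own simplification, not the dissertation's procedure, and the speeds and gap are invented.

```python
# Minimal sketch of the counterfactual extrapolation (our own 1D
# simplification): assume the follower keeps its pre-evasion speed as
# if no evasive braking had occurred, and compute the time until the
# extrapolated trajectories meet, measured from the end of the evasion.

def time_to_collision(gap, v_follower, v_leader):
    """Time until collision in the no-evasion counterfactual (1D
    car-following). Returns None if the trajectories never meet."""
    closing_speed = v_follower - v_leader
    if closing_speed <= 0:
        return None          # no collision even without evasion
    return gap / closing_speed

# End of evasion: 12 m gap, follower was doing 20 m/s, leader 14 m/s.
ttc = time_to_collision(gap=12.0, v_follower=20.0, v_leader=14.0)
print(ttc)  # 2.0 (seconds)
```

A small time-to-collision marks the event as a genuine failure-caused conflict; a `None` (or very large) value is the kind of failure-free encounter the framework filters out.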

APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Scenario and counterfactual explanations"

1

Reutlinger, Alexander. Extending the Counterfactual Theory of Explanation. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0005.

Full text of the source
Abstract:
In the recent debate on explanation, philosophers tend to agree that the natural and social sciences provide not only causal but also non-causal explanations. It is a challenging aspect of this agreement that currently dominant causal accounts of explanation fail to cover non-causal types of explanation. So, how shall we react to this challenge? The goal of this chapter is to articulate and to extend the counterfactual theory of explanation (CTE). The CTE is a monist account of explanation. Monism is the view that there is one single philosophical account capturing both causal and non-causal explanations. According to the CTE, both causal and non-causal explanations are explanatory by virtue of revealing counterfactual dependencies between the explanandum and the explanans.
APA, Harvard, Vancouver, ISO, and other styles
2

PRO, Certified. Certified Facility Manager Practice Exam: 100 Scenario Based Questions and Answers with Explanations. Independently Published, 2018.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Anup Kumar S., PMP, PgMP. Program Management Professional Question Bank: 440 Advance Scenario Questions and Answers with Explanations. Independently Published, 2018.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Gerstenberg, Tobias, and Joshua B. Tenenbaum. Intuitive Theories. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.28.

Abstract:
This chapter first explains what intuitive theories are, how they can be modeled as probabilistic, generative programs, and how intuitive theories support various cognitive functions such as prediction, counterfactual reasoning, and explanation. It focuses on two domains of knowledge: people’s intuitive understanding of physics, and their intuitive understanding of psychology. It shows how causal judgments can be modeled as counterfactual contrasts operating over an intuitive theory of physics, and how explanations of an agent’s behavior are grounded in a rational planning model that is inverted to infer the agent’s beliefs, desires, and abilities. It concludes by highlighting some of the challenges that the intuitive theories framework faces, such as understanding how intuitive theories are learned and developed.
5

Rubab, Fariha. Microsoft 365 Fundamentals Certification MS-900: Exam Practice Questions Updated 2022 - Accelerate Your Exam Success with Real Questions with Answers and Explanations Scenario Based. Independently Published, 2022.

6

French, Steven, and Juha Saatsi. Symmetries and Explanatory Dependencies in Physics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0010.

Abstract:
Many important explanations in physics are based on ideas and assumptions about symmetries, but little has been said about the nature of such explanations. This chapter aims to fill this lacuna, arguing that various symmetry explanations can be naturally captured in the spirit of the counterfactual-dependence account of Woodward, liberalized from its causal trappings. From the perspective of this account, symmetries explain by providing modal information about an explanatory dependence: by showing how the explanandum would have been different, had the facts about an explanatory symmetry been different. Furthermore, the authors argue that such explanatory dependencies need not be causal.
7

Kutach, Douglas. The Asymmetry of Influence. Edited by Craig Callender. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199298204.003.0009.

Abstract:
This chapter considers the nature of the causal asymmetry, or even more generally, the asymmetry of influence. Putting aside explanations which would appeal to an asymmetry in time as explaining this asymmetry, it aims to show, using current physical theory and no ad hoc time asymmetric assumptions, why it is that future-directed influence sometimes advances one's goals but backward-directed influence does not. The chapter claims that agency is crucial to the explanation of the influence asymmetry. It provides an exhaustive account of the advancement asymmetry that is connected with fundamental physics, influence, causation, counterfactual dependence, and related notions in palatable ways.
8

Hassini, Achref. PMP EXAM PREP QUESTIONS 2021 - 2022 Exam Simulator: 180 Situational, and Scenario-Based Questions l Close to the Real PMP Exam l + Detailed Answers Explanations l Covering the Current PMP Exam. Independently Published, 2021.

9

Solstad, Torgrim, and Oliver Bott. Causality and Causal Reasoning in Natural Language. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.32.

Abstract:
This chapter provides a combined overview of theoretical and psycholinguistic approaches to causality in language. The chapter’s main phenomenological focus is on causal relations as expressed intra-clausally by verbs (e.g., break, open) and between sentences by discourse markers (e.g., because, therefore). Special attention is given to implicit causality verbs that are argued to trigger expectations of explanations to occur in subsequent discourse. The chapter also discusses linguistic expressions that do not encode causation as such, but that seem to be dependent on a causal model for their adequate evaluation, such as counterfactual conditionals. The discussion of the phenomena is complemented by an overview of important aspects of their cognitive processing as revealed by psycholinguistic experimentation.
10

Maron, Martine. Is “no net loss of biodiversity” a good idea? Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198808978.003.0022.

Abstract:
This chapter explores biodiversity offsetting as a tool used to achieve "no net loss" of biodiversity. Unfortunately, no-net-loss offsetting can be—and often is—unintentionally designed in a way that inevitably results in ongoing biodiversity decline. Credit for offset sites is given in proportion to the assumed loss that would happen at those sites if not protected, and this requires clear baselines and good estimates of the risk of loss. This crediting calculation also creates a perverse incentive to overstate—or even genuinely increase—the threat to biodiversity at potential offset sites, in order to generate more offset "credit" that can then be exchanged for damaging actions elsewhere. The phrase "no net loss," when used without an explicit frame of reference and quantified counterfactual scenario, is meaningless and potentially misleading. Conservation scientists have a core role in interpreting, communicating, and improving the robustness of offset policy.

Book chapters on the topic "Scenario and counterfactual explanations"

1

Kuhl, Ulrike, André Artelt, and Barbara Hammer. "For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI." In Communications in Computer and Information Science, 280–300. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44070-0_14.

Abstract:
Counterfactual explanations (CFEs) are a popular approach in explainable artificial intelligence (xAI), highlighting changes to input data necessary for altering a model’s output. A CFE can either describe a scenario that is better than the factual state (upward CFE), or a scenario that is worse than the factual state (downward CFE). However, potential benefits and drawbacks of the directionality of CFEs for user behavior in xAI remain unclear. The current user study (N = 161) compares the impact of CFE directionality on behavior and experience of participants tasked to extract new knowledge from an automated system based on model predictions and CFEs. Results suggest that upward CFEs provide a significant performance advantage over other forms of counterfactual feedback. Moreover, the study highlights potential benefits of mixed CFEs improving user performance compared to downward CFEs or no explanations. In line with the performance results, users’ explicit knowledge of the system is statistically higher after receiving upward CFEs compared to downward comparisons. These findings imply that the alignment between explanation and task at hand, the so-called regulatory fit, may play a crucial role in determining the effectiveness of model explanations, informing future research directions in xAI. To ensure reproducible research, the entire code, underlying models, and user data of this study are openly available: https://github.com/ukuhl/DirectionalAlienZoo
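The upward/downward distinction studied in this chapter can be made concrete with a minimal sketch (the scoring model and feature names below are invented for illustration; this is not the study's code): a CFE is upward when the model scores the counterfactual state better than the factual one, and downward when it scores it worse.

```python
def cfe_direction(model, factual, counterfactual):
    """Classify a counterfactual explanation as 'upward' when it describes a
    state the model scores higher than the factual one, 'downward' when it
    scores lower, and 'neutral' otherwise."""
    delta = model(counterfactual) - model(factual)
    if delta > 0:
        return "upward"
    return "downward" if delta < 0 else "neutral"

# Hypothetical scoring model: higher output means a better outcome.
score = lambda x: 2 * x["feature_a"] + x["feature_b"]

factual = {"feature_a": 1, "feature_b": 3}
print(cfe_direction(score, factual, {"feature_a": 3, "feature_b": 3}))  # upward
print(cfe_direction(score, factual, {"feature_a": 0, "feature_b": 2}))  # downward
```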
2

Holzinger, Andreas, Anna Saranti, Anne-Christin Hauschild, Jacqueline Beinecke, Dominik Heider, Richard Roettger, Heimo Mueller, Jan Baumbach, and Bastian Pfeifer. "Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning." In Lecture Notes in Computer Science, 45–64. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_4.

Abstract:
We explore the integration of domain knowledge graphs into Deep Learning for improved interpretability and explainability using Graph Neural Networks (GNNs). Specifically, a protein-protein interaction (PPI) network is masked over a deep neural network for classification, with patient-specific multi-modal genomic features enriched into the PPI graph’s nodes. Subnetworks that are relevant to the classification (referred to as “disease subnetworks”) are detected using explainable AI. Federated learning is enabled by dividing the knowledge graph into relevant subnetworks, constructing an ensemble classifier, and allowing domain experts to analyze and manipulate detected subnetworks using a developed user interface. Furthermore, the human-in-the-loop principle can be applied with the incorporation of experts, interacting through a sophisticated User Interface (UI) driven by Explainable Artificial Intelligence (xAI) methods, changing the datasets to create counterfactual explanations. The adapted datasets could influence the local model’s characteristics and thereby create a federated version that distils their diverse knowledge in a centralized scenario. This work demonstrates the feasibility of the presented strategies, which were originally envisaged in 2021; most of them have now been materialized into actionable items. In this paper, we report on some lessons learned during this project.
3

Dandl, Susanne, Christoph Molnar, Martin Binder, and Bernd Bischl. "Multi-Objective Counterfactual Explanations." In Parallel Problem Solving from Nature – PPSN XVI, 448–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58112-1_31.

4

Kuratomi, Alejandro, Ioanna Miliou, Zed Lee, Tony Lindgren, and Panagiotis Papapetrou. "JUICE: JUstIfied Counterfactual Explanations." In Discovery Science, 493–508. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18840-4_35.

5

Guyomard, Victor, Françoise Fessant, Thomas Guyet, Tassadit Bouadi, and Alexandre Termier. "Generating Robust Counterfactual Explanations." In Machine Learning and Knowledge Discovery in Databases: Research Track, 394–409. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43418-1_24.

6

Gerber, Doris. "Counterfactual Causality and Historical Explanations." In Explanation in Action Theory and Historiography, 167–78. Routledge Studies in Contemporary Philosophy 121. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429506048-9.

7

Mishra, Pradeepta. "Counterfactual Explanations for XAI Models." In Practical Explainable AI Using Python, 265–78. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7158-2_10.

8

Jeanneret, Guillaume, Loïc Simon, and Frédéric Jurie. "Diffusion Models for Counterfactual Explanations." In Computer Vision – ACCV 2022, 219–37. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26293-7_14.

9

Boumazouza, Ryma, Fahima Cheikh-Alili, Bertrand Mazure, and Karim Tabia. "A Symbolic Approach for Counterfactual Explanations." In Lecture Notes in Computer Science, 270–77. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58449-8_21.

10

Jacob, Paul, Éloi Zablocki, Hédi Ben-Younes, Mickaël Chen, Patrick Pérez, and Matthieu Cord. "STEEX: Steering Counterfactual Explanations with Semantics." In Lecture Notes in Computer Science, 387–403. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19775-8_23.


Conference papers on the topic "Scenario and counterfactual explanations"

1

Sokol, Kacper, and Peter Flach. "Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/865.

Abstract:
The prevalence of automated decision making, influencing important aspects of our lives -- e.g., school admission, job market, insurance and banking -- has resulted in increasing pressure from society and regulators to make this process more transparent and ensure its explainability, accountability and fairness. We demonstrate a prototype voice-enabled device, called Glass-Box, which users can question to understand automated decisions and identify the underlying model's biases and errors. Our system explains algorithmic predictions with class-contrastive counterfactual statements (e.g., ``Had a number of conditions been different:...the prediction would change...''), which show a difference in a particular scenario that causes an algorithm to ``change its mind''. Such explanations do not require any prior technical knowledge to understand, hence are suitable for a lay audience, who interact with the system in a natural way -- through an interactive dialogue. We demonstrate the capabilities of the device by allowing users to impersonate a loan applicant who can question the system to understand the automated decision that he received.
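A brute-force version of such class-contrastive statements can be sketched as follows (the loan rule, feature names, and values are hypothetical, not the actual Glass-Box model): search candidate change sets of increasing size and report the first one that makes the model "change its mind".

```python
from itertools import combinations

def loan_model(applicant):
    # Hypothetical stand-in for the underlying classifier: approve when
    # income clears a threshold and there are no recent defaults.
    if applicant["income"] >= 40000 and applicant["defaults"] == 0:
        return "approved"
    return "rejected"

def counterfactual_statement(applicant, model, candidate_changes):
    """Find a minimal set of feature changes that flips the model's prediction
    and phrase it as a class-contrastive counterfactual statement."""
    original = model(applicant)
    # Try smaller change sets first so the reported explanation stays minimal.
    for size in range(1, len(candidate_changes) + 1):
        for subset in combinations(candidate_changes.items(), size):
            modified = dict(applicant)
            modified.update(dict(subset))
            flipped = model(modified)
            if flipped != original:
                changes = ", ".join(f"{k} been {v}" for k, v in subset)
                return (f"Had {changes}, the prediction would change "
                        f"from {original} to {flipped}.")
    return "No counterfactual found within the candidate changes."

applicant = {"income": 30000, "defaults": 0}
print(counterfactual_statement(applicant, loan_model, {"income": 45000, "defaults": 1}))
# Had income been 45000, the prediction would change from rejected to approved.
```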
2

Albini, Emanuele, Jason Long, Danial Dervovic, and Daniele Magazzeni. "Counterfactual Shapley Additive Explanations." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533168.

3

Jeanneret, Guillaume, Loïc Simon, and Frédéric Jurie. "Adversarial Counterfactual Visual Explanations." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01576.

4

Rodriguez, Pau, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam Laradji, Laurent Charlin, and David Vazquez. "Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00109.

5

Tran, Khanh Hiep, Azin Ghazimatin, and Rishiraj Saha Roy. "Counterfactual Explanations for Neural Recommenders." In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3404835.3463005.

6

Zemni, Mehdi, Mickaël Chen, Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, and Matthieu Cord. "OCTET: Object-aware Counterfactual Explanations." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01446.

7

Zhao, Wenqi, Satoshi Oyama, and Masahito Kurihara. "Generating Natural Counterfactual Visual Explanations." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/742.

Abstract:
Counterfactual explanations help users to understand the behaviors of machine learning models by changing the inputs for the existing outputs. For an image classification task, an example counterfactual visual explanation explains: "for an example that belongs to class A, what changes do we need to make to the input so that the output is more inclined to class B." Our research considers changing the attribute description text of class A on the basis of the attributes of class B and generating counterfactual images on the basis of the modified text. We can use the prediction results of the model on counterfactual images to find the attributes that have the greatest effect when the model is predicting classes A and B. We applied our method to a fine-grained image classification dataset and used the generative adversarial network to generate natural counterfactual visual explanations. To evaluate these explanations, we used them to assist crowdsourcing workers in an image classification task. We found that, within a specific range, they improved classification accuracy.
8

Artelt, Andre, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, Malte Schilling, and Barbara Hammer. "Evaluating Robustness of Counterfactual Explanations." In 2021 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2021. http://dx.doi.org/10.1109/ssci50451.2021.9660058.

9

Han, Chan Sik, and Keon Myung Lee. "Gradient-based Counterfactual Generation for Sparse and Diverse Counterfactual Explanations." In SAC '23: 38th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3555776.3577737.

10

Wang, Pei, and Nuno Vasconcelos. "SCOUT: Self-Aware Discriminant Counterfactual Explanations." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00900.


Reports of organizations on the topic "Scenario and counterfactual explanations"

1

Gandelman, Néstor. A Comparison of Saving Rates: Micro Evidence from Seventeen Latin American and Caribbean Countries. Inter-American Development Bank, July 2015. http://dx.doi.org/10.18235/0011701.

Abstract:
Using micro data on expenditure and income for 17 Latin American and Caribbean (LAC) countries, this paper presents stylized facts on saving behavior by age, education, income and place of residence. Counterfactual saving rates are computed by imposing the saving behavior, the population distribution or the income distribution of two benchmark economies (the United States and Korea). The results suggest that the difference in national saving rates between LAC and the benchmark economies can mainly be attributed to differences in saving behavior of the population and, to a lesser extent, to differences in the distribution of the population by educational levels. Other demographic or income distribution differences are not quantitatively important as explanations of saving rates.
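The counterfactual exercise described here can be sketched with a stylized two-group economy (all numbers below are hypothetical, not from the paper's data): hold the local population and income structure fixed and impose the benchmark economy's group-level saving behavior.

```python
def aggregate_saving_rate(pop_shares, saving_rates, incomes):
    """Income-weighted aggregate saving rate across population groups."""
    total_income = sum(s * y for s, y in zip(pop_shares, incomes))
    total_saving = sum(s * y * r
                       for s, y, r in zip(pop_shares, incomes, saving_rates))
    return total_saving / total_income

# Hypothetical two-group economy (e.g., low- vs. high-education households).
shares = [0.6, 0.4]             # population shares, held fixed
incomes = [100, 200]            # group incomes, held fixed
lac_rates = [0.05, 0.15]        # observed group saving behavior
benchmark_rates = [0.10, 0.25]  # imposed benchmark-economy behavior

actual = aggregate_saving_rate(shares, lac_rates, incomes)
counterfactual = aggregate_saving_rate(shares, benchmark_rates, incomes)
print(f"actual: {actual:.3f}, counterfactual: {counterfactual:.3f}")
```

The gap between the two numbers is the part of the saving-rate difference attributable to behavior rather than to the population or income distribution.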
2

Geloso, Vincent, and Chandler S. Reilly. Did the ‘Quiet Revolution’ Really Change Anything? CIRANO, December 2022. http://dx.doi.org/10.54932/itzr4537.

Abstract:
The year 1960 is often presented as a break year in the economic history of Quebec and Canada. It is used to mark the beginning of the "Quiet Revolution," during which Canada's French-speaking province of Quebec underwent rapid socio-economic change in the form of economic convergence with the rest of Canada and the emergence of a more expansive state. Using synthetic control methods, we analyze whether 1960 is associated with a departure from previous developments. With regard to GDP per capita, GDP per worker, household-size-adjusted income, life expectancy at birth, and enrollment rates in primary and secondary schools, we find that 1960 was not an important date. For most of these measures, the counterfactual scenario is slightly better than the actual data, but not by significant margins. Only with respect to the size of government do we find signs of a break.
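The synthetic control idea used in this report can be sketched in miniature (a toy with two invented donor series, not the authors' estimation): choose convex weights on donor units so the weighted combination tracks the treated unit's pre-treatment path; the weighted donor path after treatment then serves as the counterfactual.

```python
def synthetic_control_weights(treated_pre, donors_pre, step=0.01):
    """Grid-search a convex weight over two donor units so the weighted
    combination best matches the treated unit's pre-treatment outcomes
    (a toy stand-in for the constrained optimization used in practice)."""
    best_w, best_err = 0.0, float("inf")
    w = 0.0
    while w <= 1.0:
        err = sum((t - (w * a + (1 - w) * b)) ** 2
                  for t, (a, b) in zip(treated_pre, zip(*donors_pre)))
        if err < best_err:
            best_w, best_err = w, err
        w = round(w + step, 10)
    return best_w

# Hypothetical pre-1960 outcome paths for Quebec and two donor provinces.
quebec_pre = [1.0, 1.1, 1.2]
donors = [[1.2, 1.3, 1.4], [0.8, 0.9, 1.0]]
w = synthetic_control_weights(quebec_pre, donors)
print(w)  # here an equal mix of the two donors reproduces Quebec's path
```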
3

Juárez, Leticia. Buyer Market Power and Exchange Rate Pass-through. Inter-American Development Bank, August 2023. http://dx.doi.org/10.18235/0005083.

Abstract:
This paper studies the role of buyer market power in determining the response of international prices to exchange rate changes (i.e., exchange rate pass-through). Using a novel dataset of the universe of Colombian export transactions that links Colombian exporters (sellers) to their foreign importers (buyers), I document three facts: i) most Colombian exports are concentrated in a few foreign buyers in each market, ii) the same seller charges different prices to different buyers for the same product and destination, and iii) markets with a higher concentration of sales among buyers display lower exchange rate pass-through. Motivated by these stylized facts, I propose an open economy model of oligopsony, a market with a large number of sellers and a few buyers, that accounts for buyer market power in international markets and its consequences for price determination in international transactions. The model shows that larger foreign buyers pay a marked-down price, i.e., a price below the marginal product value for the buyer. Most importantly, these markdowns are flexible and play a role when adjusting prices to exchange rate shocks. I derive a model-based equation relating pass-through to buyer size and estimate it on the micro transaction-level data for Colombia. I find that after an exchange rate shock, sellers connected to larger buyers face more moderate changes in their prices in the seller currency (i.e., lower exchange rate pass-through) than those connected to small buyers. Pass-through ranges from 1% for firms connected with the largest buyers to 17% for firms connected with the smallest buyers. I use the estimates from the empirical analysis to calibrate the model and propose a counterfactual in which buyer market power is eliminated. Under this scenario, sellers' revenues increase; however, the price in seller currency is more responsive to exchange rate shocks.