Selection of scholarly literature on the topic "Contrastive, scenario and counterfactual explanations"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Contrastive, scenario and counterfactual explanations".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are included in its metadata.

Journal articles on the topic "Contrastive, scenario and counterfactual explanations"

1

Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations." Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.

Abstract:
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
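To give a concrete feel for the kind of search the annotation describes, the following minimal Python sketch looks for a small, feasible set of feature changes that flips a black-box prediction. It is not the authors' AMCC algorithm: the `model.predict` interface, the `feasible_changes` encoding of domain constraints, and the search budget are all illustrative assumptions.

```python
from itertools import combinations, product

def minimally_contrastive_counterfactual(model, x, feasible_changes, max_features=2):
    """Return a small, feasible set of feature changes that flips the model's
    prediction for instance x, or None if none is found within the budget.

    model            -- black-box classifier exposing predict(list_of_instances)
    x                -- dict: feature name -> current value
    feasible_changes -- dict: feature name -> list of allowed alternative values
                        (this is where domain-specific constraints are encoded)
    """
    original = model.predict([x])[0]
    # Try the smallest change sets first, so the first hit is minimally contrastive.
    for k in range(1, max_features + 1):
        for features in combinations(feasible_changes, k):
            for values in product(*(feasible_changes[f] for f in features)):
                candidate = {**x, **dict(zip(features, values))}
                if model.predict([candidate])[0] != original:
                    return dict(zip(features, values))  # an achievable change set
    return None
```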
2

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Abstract:
Natural language explanation in visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in the black-box systems. Existing post-hoc methods have achieved significant progress in obtaining a plausible explanation. However, such post-hoc explanations are not always aligned with human logical inference, suffering from the following issues: 1) Deductive unsatisfiability, the generated explanations do not logically lead to the answer; 2) Factual inconsistency, the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) Semantic perturbation insensitivity, the model cannot recognize the semantic changes caused by small perturbations. These problems reduce the faithfulness of explanations generated by models. To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples. MCLE extracts discriminative features and aligns the feature spaces from explanations with visual question and answer to generate more consistent explanations. We conduct extensive experiments, ablation analysis, and a case study to demonstrate the effectiveness of our method on two VQA-NLE benchmarks.
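MCLE itself is a multi-level architecture trained end to end on semantic-, image-, and instance-level samples; the numpy sketch below only illustrates the generic contrastive objective behind such training, pulling factual samples towards an anchor embedding and pushing counterfactual or perturbed samples away. The cosine/InfoNCE form, the temperature, and all names are illustrative assumptions, not the paper's loss.

```python
import numpy as np

def contrastive_loss(anchor, positives, negatives, temperature=0.1):
    """InfoNCE-style loss for one anchor embedding: similar (factual) samples are
    pulled towards the anchor, dissimilar (counterfactual/perturbed) ones pushed away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos = np.array([cos(anchor, p) for p in positives]) / temperature
    neg = np.array([cos(anchor, n) for n in negatives]) / temperature
    all_logits = np.concatenate([pos, neg])
    # -log( sum(exp(pos)) / sum(exp(all)) )
    return float(np.log(np.exp(all_logits).sum()) - np.log(np.exp(pos).sum()))
```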
3

Sokol, Kacper, and Peter Flach. "Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10035–36. http://dx.doi.org/10.1609/aaai.v33i01.330110035.

Abstract:
Explanations in machine learning come in many forms, but a consensus regarding their desired properties is still emerging. In our work we collect and organise these explainability desiderata and discuss how they can be used to systematically evaluate properties and quality of an explainable system using the case of class-contrastive counterfactual statements. This leads us to propose a novel method for explaining predictions of a decision tree with counterfactuals. We show that our model-specific approach exploits all the theoretical advantages of counterfactual explanations, hence improves decision tree interpretability by decoupling the quality of the interpretation from the depth and width of the tree.
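The annotation does not spell out the algorithm, but the core idea of a model-specific, class-contrastive counterfactual for a decision tree can be sketched on a toy tree encoded as nested dicts. The encoding, the tie-breaking, and the epsilon used to cross a split are illustrative assumptions, not the authors' method.

```python
def leaf_paths(tree, path=()):
    """Enumerate (conditions, class) pairs for every leaf of a toy decision tree
    encoded as {"feature": ..., "threshold": ..., "left": ..., "right": ...};
    leaves are {"class": label}. Conditions are (feature, op, threshold) triples."""
    if "class" in tree:
        yield list(path), tree["class"]
        return
    f, t = tree["feature"], tree["threshold"]
    yield from leaf_paths(tree["left"], path + ((f, "<=", t),))
    yield from leaf_paths(tree["right"], path + ((f, ">", t),))

def tree_counterfactual(tree, x, predicted_class):
    """Smallest set of feature adjustments that moves x into a leaf with a
    different class, found by inspecting the tree structure directly
    (assumes consistent, interval-like paths, as produced by standard inducers)."""
    best = None
    for conditions, label in leaf_paths(tree):
        if label == predicted_class:
            continue
        changes = {}
        for feature, op, threshold in conditions:
            value = changes.get(feature, x[feature])  # respect earlier tweaks on this path
            if op == "<=" and value > threshold:
                changes[feature] = threshold
            elif op == ">" and value <= threshold:
                changes[feature] = threshold + 1e-6
        if best is None or len(changes) < len(best):
            best = changes
    return best

# Toy example: approve iff income > 40 and debt <= 10.
toy_tree = {"feature": "income", "threshold": 40,
            "left": {"class": "reject"},
            "right": {"feature": "debt", "threshold": 10,
                      "left": {"class": "approve"},
                      "right": {"class": "reject"}}}
print(tree_counterfactual(toy_tree, {"income": 35, "debt": 5}, "reject"))
# {'income': 40.000001} -> "had income been above 40, the prediction would have been 'approve'"
```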
4

Kenny, Eoin M., and Mark T. Keane. "On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11575–85. http://dx.doi.org/10.1609/aaai.v35i13.17377.

Abstract:
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs. In response to this disquiet, counterfactual explanations have become very popular in eXplainable AI (XAI) due to their asserted computational, psychological, and legal benefits. In contrast however, semi-factuals (which appear to be equally useful) have surprisingly received no attention. Most counterfactual methods address tabular rather than image data, partly because the non-discrete nature of images makes good counterfactuals difficult to define; indeed, generating plausible counterfactual images which lie on the data manifold is also problematic. This paper advances a novel method for generating plausible counterfactuals and semi-factuals for black-box CNN classifiers doing computer vision. The present method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all “exceptional” features in a test image to be “normal” from the perspective of the counterfactual class, to generate plausible counterfactual images. Two controlled experiments compare this method to others in the literature, showing that PIECE generates highly plausible counterfactuals (and the best semi-factuals) on several benchmark measures.
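PIECE operates on the latent features of a CNN and uses a generative model to render the modified image; the short numpy sketch below is only a loose tabular analogue of the central step: identify features that are exceptional under the counterfactual class and set them to typical values for that class. The quantile threshold and all names are illustrative assumptions.

```python
import numpy as np

def piece_like_counterfactual(x, class_samples, target_class, tail_prob=0.05):
    """Very simplified, tabular analogue of the PIECE idea.

    x             -- 1-D array of feature values for the test instance
    class_samples -- dict: class label -> 2-D array of training instances
    target_class  -- the counterfactual class the explanation should point to
    tail_prob     -- features of x outside the central (1 - 2*tail_prob) interval
                     of the target class are treated as 'exceptional'
    """
    samples = class_samples[target_class]
    lo = np.quantile(samples, tail_prob, axis=0)
    hi = np.quantile(samples, 1 - tail_prob, axis=0)
    typical = np.median(samples, axis=0)

    counterfactual = x.copy()
    exceptional = (x < lo) | (x > hi)            # unusual for the target class
    counterfactual[exceptional] = typical[exceptional]
    return counterfactual, np.flatnonzero(exceptional)  # modified instance + changed indices
```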
5

Zahedi, Zahra, Sailik Sengupta, and Subbarao Kambhampati. "'Why Didn't You Allocate This Task to Them?' Negotiation-Aware Task Allocation and Contrastive Explanation Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10243–51. http://dx.doi.org/10.1609/aaai.v38i9.28890.

Abstract:
In this work, we design an Artificially Intelligent Task Allocator (AITA) that proposes a task allocation for a team of humans. A key property of this allocation is that when an agent with imperfect knowledge (about their teammate's costs and/or the team's performance metric) contests the allocation with a counterfactual, a contrastive explanation can always be provided to showcase why the proposed allocation is better than the proposed counterfactual. For this, we consider a negotiation process that produces a negotiation-aware task allocation and, when contested, leverages a negotiation tree to provide a contrastive explanation. With human subject studies, we show that the proposed allocation indeed appears fair to a majority of participants and, when not, the explanations generated are judged as convincing and easy to comprehend.
6

Niiniluoto, Ilkka. "Explanation by Idealized Theories." Kairos. Journal of Philosophy & Science 20, no. 1 (June 1, 2018): 43–63. http://dx.doi.org/10.2478/kjps-2018-0003.

Abstract:
The use of idealized scientific theories in explanations of empirical facts and regularities is problematic in two ways: they don't satisfy the condition that the explanans is true, and they may fail to entail the explanandum. An attempt to deal with the latter problem was proposed by Hempel and Popper with their notion of approximate explanation. A more systematic perspective on idealized explanations was developed with the method of idealization and concretization by the Poznan school (Nowak, Krajewski) in the 1970s. If idealizational laws are treated as counterfactual conditionals, they can be true or truthlike, and the concretizations of such laws may increase their degree of truthlikeness. By replacing Hempel's truth requirement with the condition that an explanatory theory is truthlike one can distinguish several important types of approximate, corrective, and contrastive explanations by idealized theories. The conclusions have important consequences for the debates about scientific realism and anti-realism.
7

Darwiche, Adnan, and Chunxi Ji. "On the Computation of Necessary and Sufficient Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5582–91. http://dx.doi.org/10.1609/aaai.v36i5.20498.

Abstract:
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically.
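The paper derives closed-form complete reasons and efficient enumeration algorithms; as a purely illustrative aid (and nothing more), the brute-force sketch below contrasts sufficient reasons (prime implicants) with necessary reasons (prime implicates, i.e. contrastive explanations) for a tiny Boolean decision function over binary features. The encoding and the toy function are assumptions made for illustration.

```python
from itertools import combinations, product

def sufficient_reasons(f, x):
    """Minimal subsets of features of x that, kept fixed at x's values, force f's
    decision regardless of the remaining features (abductive / PI explanations)."""
    n, decision = len(x), f(x)
    def forces(S):
        free = [i for i in range(n) if i not in S]
        return all(f([v[free.index(i)] if i in free else x[i] for i in range(n)]) == decision
                   for v in product([0, 1], repeat=len(free)))
    minimal = []
    for k in range(n + 1):
        for S in map(set, combinations(range(n), k)):
            if forces(S) and not any(m <= S for m in minimal):
                minimal.append(S)
    return minimal

def necessary_reasons(f, x):
    """Minimal subsets of features whose change alone can flip f's decision
    (contrastive explanations)."""
    n, decision = len(x), f(x)
    def can_flip(S):
        S = list(S)
        return any(f([v[S.index(i)] if i in S else x[i] for i in range(n)]) != decision
                   for v in product([0, 1], repeat=len(S)))
    minimal = []
    for k in range(1, n + 1):
        for S in map(set, combinations(range(n), k)):
            if can_flip(S) and not any(m <= S for m in minimal):
                minimal.append(S)
    return minimal

# Toy decision over three binary features: approve iff income AND (savings OR guarantor).
f = lambda z: z[0] and (z[1] or z[2])
print(sufficient_reasons(f, [1, 1, 0]))  # [{0, 1}]      -- fixing these forces approval
print(necessary_reasons(f, [1, 1, 0]))   # [{0}, {1}]    -- changing either can flip it
```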
8

Crawford, Beverly. "Germany's Future Political Challenges: Imagine that The New Yorker Profiled the German Chancellor in 2015." German Politics and Society 23, no. 4 (December 1, 2005): 69–87. http://dx.doi.org/10.3167/gps.2005.230404.

Abstract:
What follows is a fictitious scenario, a "thought experiment," meant to project a particular future for Germany if certain assumptions hold. Scenarios are hypotheses that rest on a set of assumptions and one or two "wild cards." They can reveal forces of change that might be otherwise hidden, discard those that are not plausible, and describe the future of trends that are relatively certain. Indeed, scenarios create a particular future in the same way that counterfactual methods create a different past. Counterfactual methods predict how events would have unfolded had a few elements of the story been changed, with a focus on varying conditions that seem important and that can be manipulated. For instance, to explore the effects of military factors on the likelihood of war, one might ask: "How would pre-1914 diplomacy have evolved if the leaders of Europe had not believed that conquest was easy?" Or to explore the importance of broad social and political factors in causing Nazi aggression: "How might the 1930s have unfolded had Hitler died in 1932?" The greater the impact of the posited changes, the more important the actual factors that were manipulated. Assuming that the structure of explanation and prediction are the same, scenario writing pursues a similar method. But, instead of seeking alternative explanations for the past, scenarios project relative certainties and then manipulate the important but uncertain factors, to create a plausible story about the future.
9

Woodcock, Claire, Brent Mittelstadt, Dan Busbridge, and Grant Blank. "The Impact of Explanations on Layperson Trust in Artificial Intelligence–Driven Symptom Checker Apps: Experimental Study." Journal of Medical Internet Research 23, no. 11 (November 3, 2021): e29386. http://dx.doi.org/10.2196/29386.

Abstract:
Background: Artificial intelligence (AI)–driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver health care more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions to encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can follow many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople. Objective: The aim of this study is to ascertain whether explanations provided by a symptom checker affect explanatory trust among laypeople and whether this trust is impacted by their existing knowledge of disease. Methods: A cross-sectional survey of 750 healthy participants was conducted. The participants were shown a video of a chatbot simulation that resulted in the diagnosis of either a migraine or temporal arteritis, chosen for their differing levels of epidemiological prevalence. These diagnoses were accompanied by one of four types of explanations. Each explanation type was selected either because of its current use in symptom checkers or because it was informed by theories of contrastive explanation. Exploratory factor analysis of participants' responses followed by comparison-of-means tests were used to evaluate group differences in trust. Results: Depending on the treatment group, two or three variables were generated, reflecting the prior knowledge and subsequent mental model that the participants held. When varying explanation type by disease, migraine was found to be nonsignificant (P=.65) and temporal arteritis, marginally significant (P=.09). Varying disease by explanation type resulted in statistical significance for input influence (P=.001), social proof (P=.049), and no explanation (P=.006), with counterfactual explanation (P=.053). The results suggest that trust in explanations is significantly affected by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that to be successful, symptom checkers need to tailor explanations to each user's specific question and discount the diseases that they may also be aware of. Conclusions: System builders developing explanations for symptom-checking apps should consider the recipient's knowledge of a disease and tailor explanations to each user's specific need. Effort should be placed on generating explanations that are personalized to each user of a symptom checker to fully discount the diseases that they may be aware of and to close their information gap.
10

Rahimi, Saeed, Antoni B. Moore, and Peter A. Whigham. "Beyond Objects in Space-Time: Towards a Movement Analysis Framework with 'How' and 'Why' Elements." ISPRS International Journal of Geo-Information 10, no. 3 (March 22, 2021): 190. http://dx.doi.org/10.3390/ijgi10030190.

Abstract:
Current spatiotemporal data has facilitated movement studies to shift objectives from descriptive models to explanations of the underlying causes of movement. From both a practical and theoretical standpoint, progress in developing approaches for these explanations should be founded on a conceptual model. This paper presents such a model in which three conceptual levels of abstraction are proposed to frame an agent-based representation of movement decision-making processes: ‘attribute,’ ‘actor,’ and ‘autonomous agent’. These in combination with three temporal, spatial, and spatiotemporal general forms of observations distinguish nine (3 × 3) representation typologies of movement data within the agent framework. Thirdly, there are three levels of cognitive reasoning: ‘association,’ ‘intervention,’ and ‘counterfactual’. This makes for 27 possible types of operation embedded in a conceptual cube with the level of abstraction, type of observation, and degree of cognitive reasoning forming the three axes. The conceptual model is an arena where movement queries and the statement of relevant objectives takes place. An example implementation of a tightly constrained spatiotemporal scenario to ground the agent-structure was summarised. The platform has been well-defined so as to accommodate different tools and techniques to drive causal inference in computational movement analysis as an immediate future step.

Dissertations on the topic "Contrastive, scenario and counterfactual explanations"

1

Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.

Abstract:
Decision support systems based on combinatorial optimization find application in various professional domains. However, the decision-makers who use these systems often lack an understanding of the underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and a reluctance to accept system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue for the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem that couples human resource allocation with routing decisions. First, we propose a framework that models the process of explaining solutions to the end-users of a WSRP-solving system while covering a wide range of topics. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined text templates. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they essentially amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs. Then, we develop a method for generating explanation texts of different types, using a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms that compute the relevant explanatory content and use it to populate explanation text templates. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users. Finally, we introduce a system design that structures the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as the basis for a graphical-user-interface prototype intended to demonstrate the practical applicability and potential benefits of our approach as a whole.
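As a purely illustrative reading of the mathematical question behind such explanations ("is there a feasible, better solution in a neighborhood of the current one?"), the following sketch answers a single contrastive question about a task allocation. The 1-move neighborhood, the function names, and the wording of the answers are simplifying assumptions, not the thesis's method.

```python
def explain_reassignment(assignment, cost, feasible, task, alternative):
    """Answer a contrastive question of the form
    'Why wasn't <task> allocated to <alternative> instead?'
    by checking whether a feasible, no-worse solution exists in the
    1-move neighborhood of the current assignment.

    assignment -- dict: task -> employee (current solution)
    cost       -- function(assignment) -> objective value (lower is better)
    feasible   -- function(assignment) -> bool (skills, time windows, routing, ...)
    """
    candidate = dict(assignment)
    candidate[task] = alternative
    if not feasible(candidate):
        return f"Moving {task} to {alternative} violates a constraint, so the current plan stands."
    delta = cost(candidate) - cost(assignment)
    if delta >= 0:
        return (f"Moving {task} to {alternative} is feasible but worsens the objective by {delta:.2f}; "
                "the current allocation is at least as good.")
    return (f"Moving {task} to {alternative} would improve the objective by {-delta:.2f}; "
            "this answers the question in the user's favour.")
```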

Book chapters on the topic "Contrastive, scenario and counterfactual explanations"

1

McAreavey, Kevin, and Weiru Liu. "Modifications of the Miller Definition of Contrastive (Counterfactual) Explanations." In Lecture Notes in Computer Science, 54–67. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45608-4_5.

2

Liu, Xiaowei, Kevin McAreavey, and Weiru Liu. "Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards." In Communications in Computer and Information Science, 72–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0_4.

3

Främling, Kary. "Counterfactual, Contrastive, and Hierarchical Explanations with Contextual Importance and Utility." In Explainable and Transparent AI and Multi-Agent Systems, 180–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40878-6_16.

4

Kuhl, Ulrike, André Artelt, and Barbara Hammer. "For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI." In Communications in Computer and Information Science, 280–300. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44070-0_14.

Abstract:
Counterfactual explanations (CFEs) are a popular approach in explainable artificial intelligence (xAI), highlighting changes to input data necessary for altering a model's output. A CFE can either describe a scenario that is better than the factual state (upward CFE), or a scenario that is worse than the factual state (downward CFE). However, potential benefits and drawbacks of the directionality of CFEs for user behavior in xAI remain unclear. The current user study (N = 161) compares the impact of CFE directionality on behavior and experience of participants tasked to extract new knowledge from an automated system based on model predictions and CFEs. Results suggest that upward CFEs provide a significant performance advantage over other forms of counterfactual feedback. Moreover, the study highlights potential benefits of mixed CFEs improving user performance compared to downward CFEs or no explanations. In line with the performance results, users' explicit knowledge of the system is statistically higher after receiving upward CFEs compared to downward comparisons. These findings imply that the alignment between explanation and task at hand, the so-called regulatory fit, may play a crucial role in determining the effectiveness of model explanations, informing future research directions in xAI. To ensure reproducible research, the entire code, underlying models and user data of this study are openly available: https://github.com/ukuhl/DirectionalAlienZoo
5

Holzinger, Andreas, Anna Saranti, Anne-Christin Hauschild, Jacqueline Beinecke, Dominik Heider, Richard Roettger, Heimo Mueller, Jan Baumbach, and Bastian Pfeifer. "Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning." In Lecture Notes in Computer Science, 45–64. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_4.

Abstract:
We explore the integration of domain knowledge graphs into Deep Learning for improved interpretability and explainability using Graph Neural Networks (GNNs). Specifically, a protein-protein interaction (PPI) network is masked over a deep neural network for classification, with patient-specific multi-modal genomic features enriched into the PPI graph's nodes. Subnetworks that are relevant to the classification (referred to as "disease subnetworks") are detected using explainable AI. Federated learning is enabled by dividing the knowledge graph into relevant subnetworks, constructing an ensemble classifier, and allowing domain experts to analyze and manipulate detected subnetworks using a developed user interface. Furthermore, the human-in-the-loop principle can be applied with the incorporation of experts, interacting through a sophisticated User Interface (UI) driven by Explainable Artificial Intelligence (xAI) methods, changing the datasets to create counterfactual explanations. The adapted datasets could influence the local model's characteristics and thereby create a federated version that distils their diverse knowledge in a centralized scenario. This work demonstrates the feasibility of the presented strategies, which were originally envisaged in 2021 and most of it has now been materialized into actionable items. In this paper, we report on some lessons learned during this project.

Conference papers on the topic "Contrastive, scenario and counterfactual explanations"

1

Sokol, Kacper, and Peter Flach. "Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/865.

Abstract:
The prevalence of automated decision making, influencing important aspects of our lives (e.g., school admission, job market, insurance and banking), has resulted in increasing pressure from society and regulators to make this process more transparent and ensure its explainability, accountability and fairness. We demonstrate a prototype voice-enabled device, called Glass-Box, which users can question to understand automated decisions and identify the underlying model's biases and errors. Our system explains algorithmic predictions with class-contrastive counterfactual statements (e.g., "Had a number of conditions been different:...the prediction would change..."), which show a difference in a particular scenario that causes an algorithm to "change its mind". Such explanations do not require any prior technical knowledge to understand, hence are suitable for a lay audience, who interact with the system in a natural way, through an interactive dialogue. We demonstrate the capabilities of the device by allowing users to impersonate a loan applicant who can question the system to understand the automated decision that he received.
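The final step of any such prototype is purely textual: turning a found change set into a class-contrastive counterfactual statement. A minimal sketch of that templating step (the wording and the loan example are illustrative assumptions, not the demonstrated system) could look like this:

```python
def contrastive_statement(changes, old_prediction, new_prediction):
    """Render a change set as a class-contrastive counterfactual statement."""
    conditions = " and ".join(f"{feature} been {value}" for feature, value in changes.items())
    return (f"Had {conditions}, the prediction would change "
            f"from '{old_prediction}' to '{new_prediction}'.")

# Example: a rejected loan application.
print(contrastive_statement({"income": 45000, "savings": 10000}, "rejected", "approved"))
# Had income been 45000 and savings been 10000, the prediction would change from 'rejected' to 'approved'.
```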
2

Sokol, Kacper, and Peter Flach. "Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/836.

Abstract:
Machine learning models have become pervasive in our everyday life; they decide on important matters influencing our education, employment and judicial system. Many of these predictive systems are commercial products protected by trade secrets, hence their decision-making is opaque. Therefore, in our research we address interpretability and explainability of predictions made by machine learning models. Our work draws heavily on human explanation research in social sciences: contrastive and exemplar explanations provided through a dialogue. This user-centric design, focusing on a lay audience rather than domain experts, applied to machine learning allows explainees to drive the explanation to suit their needs instead of being served a precooked template.
3

Wang, Xue, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, and Kui Ren. "Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00195.

4

Shi, Yiwei, Kevin McAreavey, and Weiru Liu. "Evaluating contrastive explanations for AI planning with non-experts: a smart home battery scenario." In 2022 27th International Conference on Automation and Computing (ICAC). IEEE, 2022. http://dx.doi.org/10.1109/icac55051.2022.9911125.
