Scientific literature on the topic "Contrastive, scenario and counterfactual explanations"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.


Consult the thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Contrastive, scenario and counterfactual explanations".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Contrastive, scenario and counterfactual explanations"

1

Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations." Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.

Abstract:
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
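
A minimal, hypothetical sketch of the general idea of an achievable, minimally-contrastive counterfactual search, assuming a generic scikit-learn classifier and a hand-written set of actionable features with allowed values; the published AMCC algorithm works on high-precision explanations of the instance rather than the brute-force grid shown here.

```python
# Sketch: search for an achievable, minimally-contrastive counterfactual.
# Hypothetical dataset, model and constraint set; illustrative only.
from itertools import combinations, product

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = X[0]                              # instance of concern
target = 1 - clf.predict([x])[0]      # the contrasting outcome we want

# Domain constraints: which features may change, and to which values.
actionable = {0: [-1.0, 0.0, 1.0],    # feature 0 may be set to these values
              2: [-1.0, 0.0, 1.0]}    # feature 2 likewise; 1 and 3 are immutable

best = None
for k in range(1, len(actionable) + 1):            # prefer the fewest changes
    for feats in combinations(actionable, k):
        for values in product(*(actionable[f] for f in feats)):
            cand = x.copy()
            cand[list(feats)] = values
            if clf.predict([cand])[0] == target:
                best = dict(zip(feats, values))
                break
        if best:
            break
    if best:
        break

print("minimally-contrastive change:", best)
```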
2

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Abstract:
Natural language explanation for visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in black-box systems. Existing post-hoc methods have achieved significant progress in obtaining plausible explanations. However, such post-hoc explanations are not always aligned with human logical inference, suffering from three issues: 1) deductive unsatisfiability, the generated explanations do not logically lead to the answer; 2) factual inconsistency, the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) semantic perturbation insensitivity, the model cannot recognize the semantic changes caused by small perturbations. These problems reduce the faithfulness of explanations generated by models. To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples. MCLE extracts discriminative features and aligns the feature spaces from explanations with visual question and answer to generate more consistent explanations. We conduct extensive experiments, ablation analysis, and a case study to demonstrate the effectiveness of our method on two VQA-NLE benchmarks.
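
The contrastive objective underlying such models can be illustrated with a generic InfoNCE-style loss that pulls an explanation embedding towards a factual (positive) sample and pushes it away from counterfactual (negative) samples; the sketch below is a hypothetical stand-in for MCLE's semantic-, image- and instance-level terms, not the model itself.

```python
# Generic InfoNCE-style contrastive loss on embeddings (hypothetical stand-in).
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (d,) vectors; negatives: (n, d) matrix."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]            # -log p(positive | anchor)

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)                      # e.g. explanation embedding
positive = anchor + 0.05 * rng.normal(size=16)    # factual sample
negatives = rng.normal(size=(8, 16))              # counterfactual samples
print(float(info_nce(anchor, positive, negatives)))
```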
3

Sokol, Kacper, and Peter Flach. "Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10035–36. http://dx.doi.org/10.1609/aaai.v33i01.330110035.

Abstract:
Explanations in machine learning come in many forms, but a consensus regarding their desired properties is still emerging. In our work we collect and organise these explainability desiderata and discuss how they can be used to systematically evaluate properties and quality of an explainable system using the case of class-contrastive counterfactual statements. This leads us to propose a novel method for explaining predictions of a decision tree with counterfactuals. We show that our model-specific approach exploits all the theoretical advantages of counterfactual explanations, hence improves decision tree interpretability by decoupling the quality of the interpretation from the depth and width of the tree.
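
One simple way to obtain a class-contrastive counterfactual from a decision tree is to enumerate root-to-leaf paths that predict the contrast class and pick the path whose conditions the instance violates the least; the sketch below illustrates this idea on a scikit-learn tree and is a simplified reading of the approach, not the authors' algorithm.

```python
# Sketch: counterfactual from a decision tree by finding the leaf of the
# contrast class whose path conditions require the fewest feature changes.
# Illustrative only; not the method from the paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

def leaf_paths(node=0, conds=()):
    """Yield (leaf_class, conditions); each condition is (feature, op, threshold)."""
    if t.children_left[node] == -1:                     # leaf node
        yield int(np.argmax(t.value[node])), conds
        return
    f, thr = t.feature[node], t.threshold[node]
    yield from leaf_paths(t.children_left[node], conds + ((f, "<=", thr),))
    yield from leaf_paths(t.children_right[node], conds + ((f, ">", thr),))

def counterfactual(x, contrast_class):
    best = None
    for cls, conds in leaf_paths():
        if cls != contrast_class:
            continue
        # conditions of this path that x currently violates
        changes = [(f, op, thr) for f, op, thr in conds
                   if (x[f] > thr) == (op == "<=")]
        if best is None or len(changes) < len(best):
            best = changes
    return best

x = X[0]                                 # predicted as class 0
print(counterfactual(x, contrast_class=2))
```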
4

Kenny, Eoin M., and Mark T. Keane. "On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11575–85. http://dx.doi.org/10.1609/aaai.v35i13.17377.

Abstract:
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs. In response to this disquiet, counterfactual explanations have become very popular in eXplainable AI (XAI) due to their asserted computational, psychological, and legal benefits. In contrast however, semi-factuals (which appear to be equally useful) have surprisingly received no attention. Most counterfactual methods address tabular rather than image data, partly because the non-discrete nature of images makes good counterfactuals difficult to define; indeed, generating plausible counterfactual images which lie on the data manifold is also problematic. This paper advances a novel method for generating plausible counterfactuals and semi-factuals for black-box CNN classifiers doing computer vision. The present method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all “exceptional” features in a test image to be “normal” from the perspective of the counterfactual class, to generate plausible counterfactual images. Two controlled experiments compare this method to others in the literature, showing that PIECE generates highly plausible counterfactuals (and the best semi-factuals) on several benchmark measures.
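
PIECE itself works on CNN latent features and uses a generative model to render counterfactual images, but the notion of an "exceptional" feature can be sketched on tabular data as a loose, hypothetical analogue: flag feature values that are improbable under the counterfactual class and replace them with that class's typical value.

```python
# Loose tabular analogue of the PIECE idea: make "exceptional" feature values
# "normal" from the perspective of the counterfactual class. Illustrative only.
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
x, contrast = X[0], 2                    # instance and counterfactual class

Xc = X[y == contrast]
mu, sigma = Xc.mean(axis=0), Xc.std(axis=0)

z = np.abs((x - mu) / sigma)             # how atypical each value is for the class
exceptional = z > 2.0                    # crude "low probability" threshold

x_cf = x.copy()
x_cf[exceptional] = mu[exceptional]      # set exceptional features to the class mean
print("exceptional features:", np.where(exceptional)[0], "counterfactual:", x_cf)
```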
5

Zahedi, Zahra, Sailik Sengupta, and Subbarao Kambhampati. "‘Why Didn’t You Allocate This Task to Them?’ Negotiation-Aware Task Allocation and Contrastive Explanation Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10243–51. http://dx.doi.org/10.1609/aaai.v38i9.28890.

Abstract:
In this work, we design an Artificially Intelligent Task Allocator (AITA) that proposes a task allocation for a team of humans. A key property of this allocation is that when an agent with imperfect knowledge (about their teammate's costs and/or the team's performance metric) contests the allocation with a counterfactual, a contrastive explanation can always be provided to showcase why the proposed allocation is better than the proposed counterfactual. For this, we consider a negotiation process that produces a negotiation-aware task allocation and, when contested, leverages a negotiation tree to provide a contrastive explanation. With human subject studies, we show that the proposed allocation indeed appears fair to a majority of participants and, when not, the explanations generated are judged as convincing and easy to comprehend.
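
The contrastive pattern "why this allocation rather than the counterfactual you proposed?" ultimately rests on a cost comparison; the toy sketch below uses a hypothetical cost matrix and skips the negotiation tree that AITA actually uses to build its explanations.

```python
# Toy contrastive explanation for a task allocation: compare the proposed
# allocation with the user's counterfactual. Hypothetical costs; AITA's real
# explanations are derived from a negotiation tree, not a flat cost matrix.
import numpy as np

cost = np.array([[4.0, 9.0, 7.0],      # cost[i][j]: cost of agent i doing task j
                 [8.0, 3.0, 6.0],
                 [5.0, 8.0, 2.0]])

proposed = {0: 0, 1: 1, 2: 2}          # task -> agent
counterfactual = {0: 0, 1: 2, 2: 1}    # "why didn't you allocate task 1 to agent 2?"

def total(alloc):
    return sum(cost[agent][task] for task, agent in alloc.items())

delta = total(counterfactual) - total(proposed)
print(f"Proposed allocation costs {total(proposed):.1f}; your alternative costs "
      f"{total(counterfactual):.1f}, i.e. {delta:.1f} more, so it was not selected.")
```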
6

Niiniluoto, Ilkka. "Explanation by Idealized Theories." Kairos. Journal of Philosophy & Science 20, no. 1 (June 1, 2018): 43–63. http://dx.doi.org/10.2478/kjps-2018-0003.

Abstract:
The use of idealized scientific theories in explanations of empirical facts and regularities is problematic in two ways: they don’t satisfy the condition that the explanans is true, and they may fail to entail the explanandum. An attempt to deal with the latter problem was proposed by Hempel and Popper with their notion of approximate explanation. A more systematic perspective on idealized explanations was developed with the method of idealization and concretization by the Poznan school (Nowak, Krajewski) in the 1970s. If idealizational laws are treated as counterfactual conditionals, they can be true or truthlike, and the concretizations of such laws may increase their degree of truthlikeness. By replacing Hempel’s truth requirement with the condition that an explanatory theory is truthlike one can distinguish several important types of approximate, corrective, and contrastive explanations by idealized theories. The conclusions have important consequences for the debates about scientific realism and anti-realism.
7

Darwiche, Adnan, and Chunxi Ji. "On the Computation of Necessary and Sufficient Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5582–91. http://dx.doi.org/10.1609/aaai.v36i5.20498.

Abstract:
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically.
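
For a small Boolean decision function, sufficient reasons (prime-implicant style) and contrastive/necessary reasons can be enumerated by brute force; the sketch below uses an illustrative three-variable classifier and ignores the paper's efficient circuit-based computation over the complete reason.

```python
# Brute-force illustration of sufficient reasons and contrastive (necessary)
# reasons for one positive decision of a toy Boolean classifier.
from itertools import combinations, product

VARS = ("a", "b", "c")
def f(a, b, c):                      # toy decision function
    return (a and b) or c

instance = {"a": 1, "b": 1, "c": 0}  # decided positively: f(1, 1, 0) = 1

def entails(fixed):
    """Do all completions agreeing with `fixed` keep the decision positive?"""
    free = [v for v in VARS if v not in fixed]
    return all(f(**{**fixed, **dict(zip(free, bits))})
               for bits in product([0, 1], repeat=len(free)))

def minimal(sets):
    return [s for s in sets if not any(t < s for t in sets)]

subsets = [frozenset(c) for r in range(1, 4) for c in combinations(VARS, r)]

# Sufficient reasons: minimal sets of instance literals that entail the decision.
sufficient = minimal([s for s in subsets
                      if entails({v: instance[v] for v in s})])
# Contrastive (necessary) reasons: minimal sets whose alteration can flip it.
contrastive = minimal([s for s in subsets
                       if any(not f(**{**instance, **dict(zip(s, bits))})
                              for bits in product([0, 1], repeat=len(s)))])

print("sufficient reasons:", [set(s) for s in sufficient])     # [{'a', 'b'}]
print("contrastive reasons:", [set(s) for s in contrastive])   # [{'a'}, {'b'}]
```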
8

Crawford, Beverly. "Germany's Future Political Challenges: Imagine that The New Yorker Profiled the German Chancellor in 2015." German Politics and Society 23, no. 4 (December 1, 2005): 69–87. http://dx.doi.org/10.3167/gps.2005.230404.

Abstract:
What follows is a fictitious scenario, a "thought experiment," meant to project a particular future for Germany if certain assumptions hold. Scenarios are hypotheses that rest on a set of assumptions and one or two "wild cards." They can reveal forces of change that might be otherwise hidden, discard those that are not plausible, and describe the future of trends that are relatively certain. Indeed, scenarios create a particular future in the same way that counterfactual methods create a different past. Counterfactual methods predict how events would have unfolded had a few elements of the story been changed, with a focus on varying conditions that seem important and that can be manipulated. For instance, to explore the effects of military factors on the likelihood of war, one might ask: "how would pre 1914 diplomacy have evolved if the leaders of Europe had not believed that conquest was easy?" Or to explore the importance of broad social and political factors in causing Nazi aggression: "How might the 1930s have unfolded had Hitler died in 1932?" The greater the impact of the posited changes, the more important the actual factors that were manipulated. Assuming that the structure of explanation and prediction are the same, scenario writing pursues a similar method. But, instead of seeking alternative explanations for the past, scenarios project relative certainties and then manipulate the important but uncertain factors, to create a plausible story about the future.
9

Woodcock, Claire, Brent Mittelstadt, Dan Busbridge, and Grant Blank. "The Impact of Explanations on Layperson Trust in Artificial Intelligence–Driven Symptom Checker Apps: Experimental Study." Journal of Medical Internet Research 23, no. 11 (November 3, 2021): e29386. http://dx.doi.org/10.2196/29386.

Abstract:
Background: Artificial intelligence (AI)–driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver health care more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions to encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can follow many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople.
Objective: The aim of this study is to ascertain whether explanations provided by a symptom checker affect explanatory trust among laypeople and whether this trust is impacted by their existing knowledge of disease.
Methods: A cross-sectional survey of 750 healthy participants was conducted. The participants were shown a video of a chatbot simulation that resulted in the diagnosis of either a migraine or temporal arteritis, chosen for their differing levels of epidemiological prevalence. These diagnoses were accompanied by one of four types of explanations. Each explanation type was selected either because of its current use in symptom checkers or because it was informed by theories of contrastive explanation. Exploratory factor analysis of participants’ responses followed by comparison-of-means tests were used to evaluate group differences in trust.
Results: Depending on the treatment group, two or three variables were generated, reflecting the prior knowledge and subsequent mental model that the participants held. When varying explanation type by disease, migraine was found to be nonsignificant (P=.65) and temporal arteritis, marginally significant (P=.09). Varying disease by explanation type resulted in statistical significance for input influence (P=.001), social proof (P=.049), and no explanation (P=.006), with counterfactual explanation (P=.053). The results suggest that trust in explanations is significantly affected by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that to be successful, symptom checkers need to tailor explanations to each user’s specific question and discount the diseases that they may also be aware of.
Conclusions: System builders developing explanations for symptom-checking apps should consider the recipient’s knowledge of a disease and tailor explanations to each user’s specific need. Effort should be placed on generating explanations that are personalized to each user of a symptom checker to fully discount the diseases that they may be aware of and to close their information gap.
10

Rahimi, Saeed, Antoni B. Moore, and Peter A. Whigham. "Beyond Objects in Space-Time: Towards a Movement Analysis Framework with ‘How’ and ‘Why’ Elements." ISPRS International Journal of Geo-Information 10, no. 3 (March 22, 2021): 190. http://dx.doi.org/10.3390/ijgi10030190.

Abstract:
Current spatiotemporal data has facilitated movement studies to shift objectives from descriptive models to explanations of the underlying causes of movement. From both a practical and theoretical standpoint, progress in developing approaches for these explanations should be founded on a conceptual model. This paper presents such a model in which three conceptual levels of abstraction are proposed to frame an agent-based representation of movement decision-making processes: ‘attribute,’ ‘actor,’ and ‘autonomous agent’. These in combination with three temporal, spatial, and spatiotemporal general forms of observations distinguish nine (3 × 3) representation typologies of movement data within the agent framework. Thirdly, there are three levels of cognitive reasoning: ‘association,’ ‘intervention,’ and ‘counterfactual’. This makes for 27 possible types of operation embedded in a conceptual cube with the level of abstraction, type of observation, and degree of cognitive reasoning forming the three axes. The conceptual model is an arena where movement queries and the statement of relevant objectives takes place. An example implementation of a tightly constrained spatiotemporal scenario to ground the agent-structure was summarised. The platform has been well-defined so as to accommodate different tools and techniques to drive causal inference in computational movement analysis as an immediate future step.
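
The 3 × 3 × 3 conceptual cube is the Cartesian product of the three axes listed in the abstract; a short enumeration makes the 27 operation types explicit (labels taken from the abstract, code purely illustrative).

```python
# Enumerate the 27 cells of the conceptual cube described in the abstract.
from itertools import product

abstraction = ["attribute", "actor", "autonomous agent"]
observation = ["temporal", "spatial", "spatiotemporal"]
reasoning = ["association", "intervention", "counterfactual"]

cube = list(product(abstraction, observation, reasoning))
print(len(cube))       # 27
print(cube[0])         # ('attribute', 'temporal', 'association')
```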

Theses on the topic "Contrastive, scenario and counterfactual explanations"

1

Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic thesis or dissertation, Université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.

Abstract:
Decision support systems based on combinatorial optimization find application in various professional domains. However, the decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance to accept system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue in the case of the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions. First, we propose a framework that models the process of explaining solutions to the end-users of a WSRP-solving system while allowing a wide range of topics to be addressed. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario or counterfactual type. From a mathematical point of view, they essentially amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question types, this leads to the formulation of one or several decision problems and mathematical programs. Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information and populates explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users. Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as a basis for a graphical-user-interface prototype which aims at demonstrating the practical applicability and potential benefits of our approach as a whole.
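
The core mathematical question behind these explanation types, whether a feasible and better solution exists in a given neighborhood of the current one, can be illustrated with a toy assignment example; the sketch below uses hypothetical travel costs, skill constraints and a simple swap neighborhood, whereas the thesis formulates the check as decision problems and mathematical programs over WSRP solutions.

```python
# Toy version of the underlying question: does a feasible solution that is
# better than the current one exist in a given neighbourhood? Here the
# neighbourhood is "swap the technicians of two visits"; returning None means
# no better feasible neighbour exists, which is itself the contrastive answer.
from itertools import combinations

travel_cost = {("t1", "v1"): 2, ("t1", "v2"): 9, ("t1", "v3"): 4,
               ("t2", "v1"): 7, ("t2", "v2"): 3, ("t2", "v3"): 8}
skills = {"t1": {"electric"}, "t2": {"electric", "plumbing"}}
required = {"v1": "electric", "v2": "electric", "v3": "plumbing"}

current = {"v1": "t1", "v2": "t2", "v3": "t2"}       # visit -> technician

def feasible(plan):
    return all(required[v] in skills[t] for v, t in plan.items())

def cost(plan):
    return sum(travel_cost[(t, v)] for v, t in plan.items())

def better_neighbour(plan):
    for v1, v2 in combinations(plan, 2):             # swap two assignments
        cand = dict(plan, **{v1: plan[v2], v2: plan[v1]})
        if feasible(cand) and cost(cand) < cost(plan):
            return cand
    return None                                      # current plan is locally best

print(better_neighbour(current))
```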

Book chapters on the topic "Contrastive, scenario and counterfactual explanations"

1

McAreavey, Kevin, and Weiru Liu. "Modifications of the Miller Definition of Contrastive (Counterfactual) Explanations." In Lecture Notes in Computer Science, 54–67. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45608-4_5.

2

Liu, Xiaowei, Kevin McAreavey, and Weiru Liu. "Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards." In Communications in Computer and Information Science, 72–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0_4.

3

Främling, Kary. "Counterfactual, Contrastive, and Hierarchical Explanations with Contextual Importance and Utility." In Explainable and Transparent AI and Multi-Agent Systems, 180–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40878-6_16.

4

Kuhl, Ulrike, André Artelt, and Barbara Hammer. "For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI." In Communications in Computer and Information Science, 280–300. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44070-0_14.

Abstract:
Counterfactual explanations (CFEs) are a popular approach in explainable artificial intelligence (xAI), highlighting changes to input data necessary for altering a model’s output. A CFE can either describe a scenario that is better than the factual state (upward CFE), or a scenario that is worse than the factual state (downward CFE). However, potential benefits and drawbacks of the directionality of CFEs for user behavior in xAI remain unclear. The current user study (N = 161) compares the impact of CFE directionality on behavior and experience of participants tasked to extract new knowledge from an automated system based on model predictions and CFEs. Results suggest that upward CFEs provide a significant performance advantage over other forms of counterfactual feedback. Moreover, the study highlights potential benefits of mixed CFEs improving user performance compared to downward CFEs or no explanations. In line with the performance results, users’ explicit knowledge of the system is statistically higher after receiving upward CFEs compared to downward comparisons. These findings imply that the alignment between explanation and task at hand, the so-called regulatory fit, may play a crucial role in determining the effectiveness of model explanations, informing future research directions in xAI. To ensure reproducible research, the entire code, underlying models and user data of this study are openly available: https://github.com/ukuhl/DirectionalAlienZoo.
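
Directionality reduces to comparing the predicted outcome of the counterfactual scenario with the factual prediction; the small helper below (toy model and features, purely hypothetical) makes the upward/downward distinction concrete.

```python
# Label the directionality of a counterfactual explanation (CFE): "upward" if
# the counterfactual scenario is predicted to be better than the factual state,
# "downward" if worse. Toy model and features; illustrative only.
def cfe_direction(predict, factual, counterfactual):
    y_fact, y_cf = predict(factual), predict(counterfactual)
    if y_cf > y_fact:
        return "upward"
    if y_cf < y_fact:
        return "downward"
    return "neutral"

# Toy regression-style model: higher score = better outcome.
predict = lambda x: 0.6 * x["effort"] + 0.4 * x["experience"]
factual = {"effort": 2, "experience": 5}
print(cfe_direction(predict, factual, {"effort": 4, "experience": 5}))   # upward
print(cfe_direction(predict, factual, {"effort": 1, "experience": 5}))   # downward
```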
5

Holzinger, Andreas, Anna Saranti, Anne-Christin Hauschild, Jacqueline Beinecke, Dominik Heider, Richard Roettger, Heimo Mueller, Jan Baumbach, and Bastian Pfeifer. "Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning." In Lecture Notes in Computer Science, 45–64. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_4.

Abstract:
We explore the integration of domain knowledge graphs into Deep Learning for improved interpretability and explainability using Graph Neural Networks (GNNs). Specifically, a protein-protein interaction (PPI) network is masked over a deep neural network for classification, with patient-specific multi-modal genomic features enriched into the PPI graph’s nodes. Subnetworks that are relevant to the classification (referred to as “disease subnetworks”) are detected using explainable AI. Federated learning is enabled by dividing the knowledge graph into relevant subnetworks, constructing an ensemble classifier, and allowing domain experts to analyze and manipulate detected subnetworks using a developed user interface. Furthermore, the human-in-the-loop principle can be applied with the incorporation of experts, interacting through a sophisticated User Interface (UI) driven by Explainable Artificial Intelligence (xAI) methods, changing the datasets to create counterfactual explanations. The adapted datasets could influence the local model’s characteristics and thereby create a federated version that distils their diverse knowledge in a centralized scenario. This work demonstrates the feasibility of the presented strategies, which were originally envisaged in 2021 and most of it has now been materialized into actionable items. In this paper, we report on some lessons learned during this project.

Conference papers on the topic "Contrastive, scenario and counterfactual explanations"

1

Sokol, Kacper, and Peter Flach. "Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/865.

Abstract:
The prevalence of automated decision making, influencing important aspects of our lives (e.g., school admission, the job market, insurance and banking), has resulted in increasing pressure from society and regulators to make this process more transparent and ensure its explainability, accountability and fairness. We demonstrate a prototype voice-enabled device, called Glass-Box, which users can question to understand automated decisions and identify the underlying model's biases and errors. Our system explains algorithmic predictions with class-contrastive counterfactual statements (e.g., "Had a number of conditions been different: ... the prediction would change ..."), which show a difference in a particular scenario that causes an algorithm to "change its mind". Such explanations do not require any prior technical knowledge to understand, hence are suitable for a lay audience, who interact with the system in a natural way, through an interactive dialogue. We demonstrate the capabilities of the device by allowing users to impersonate a loan applicant who can question the system to understand the automated decision that he received.
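
Statements of this form can be produced for any classifier by searching for feature changes that flip its output; the sketch below uses a hypothetical rule-based loan "model" and option set, and is not the Glass-Box implementation.

```python
# Generate a class-contrastive counterfactual statement of the Glass-Box kind
# for a toy loan decision. Hypothetical rule-based "model" and feature options.
from itertools import product

def model(applicant):                       # stand-in for the underlying classifier
    return "approved" if applicant["income"] >= 30000 and applicant["defaults"] == 0 \
           else "rejected"

applicant = {"income": 25000, "defaults": 1}
options = {"income": [25000, 35000], "defaults": [0, 1]}

decision = model(applicant)
for values in product(*options.values()):
    candidate = dict(zip(options, values))
    changed = {k: v for k, v in candidate.items() if v != applicant[k]}
    if changed and model(candidate) != decision:
        conditions = " and ".join(f"{k} been {v}" for k, v in changed.items())
        print(f"Had {conditions}, the prediction would change "
              f"from '{decision}' to '{model(candidate)}'.")
        break
```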
2

Sokol, Kacper, and Peter Flach. "Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/836.

Abstract:
Machine learning models have become pervasive in our everyday life; they decide on important matters influencing our education, employment and judicial system. Many of these predictive systems are commercial products protected by trade secrets, hence their decision-making is opaque. Therefore, in our research we address interpretability and explainability of predictions made by machine learning models. Our work draws heavily on human explanation research in social sciences: contrastive and exemplar explanations provided through a dialogue. This user-centric design, focusing on a lay audience rather than domain experts, applied to machine learning allows explainees to drive the explanation to suit their needs instead of being served a precooked template.
3

Wang, Xue, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, and Kui Ren. "Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00195.

4

Shi, Yiwei, Kevin McAreavey, and Weiru Liu. "Evaluating contrastive explanations for AI planning with non-experts: a smart home battery scenario." In 2022 27th International Conference on Automation and Computing (ICAC). IEEE, 2022. http://dx.doi.org/10.1109/icac55051.2022.9911125.

