Selected scholarly literature on the topic "Contrastive, scenario and counterfactual explanations"
Below are lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Contrastive, scenario and counterfactual explanations".
Journal articles on the topic "Contrastive, scenario and counterfactual explanations"
Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations." Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.
Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.
Sokol, Kacper, and Peter Flach. "Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10035–36. http://dx.doi.org/10.1609/aaai.v33i01.330110035.
Kenny, Eoin M., and Mark T. Keane. "On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11575–85. http://dx.doi.org/10.1609/aaai.v35i13.17377.
Zahedi, Zahra, Sailik Sengupta, and Subbarao Kambhampati. "'Why Didn't You Allocate This Task to Them?' Negotiation-Aware Task Allocation and Contrastive Explanation Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10243–51. http://dx.doi.org/10.1609/aaai.v38i9.28890.
Niiniluoto, Ilkka. "Explanation by Idealized Theories." Kairos. Journal of Philosophy & Science 20, no. 1 (June 1, 2018): 43–63. http://dx.doi.org/10.2478/kjps-2018-0003.
Darwiche, Adnan, and Chunxi Ji. "On the Computation of Necessary and Sufficient Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5582–91. http://dx.doi.org/10.1609/aaai.v36i5.20498.
Crawford, Beverly. "Germany's Future Political Challenges: Imagine that The New Yorker Profiled the German Chancellor in 2015." German Politics and Society 23, no. 4 (December 1, 2005): 69–87. http://dx.doi.org/10.3167/gps.2005.230404.
Woodcock, Claire, Brent Mittelstadt, Dan Busbridge, and Grant Blank. "The Impact of Explanations on Layperson Trust in Artificial Intelligence–Driven Symptom Checker Apps: Experimental Study." Journal of Medical Internet Research 23, no. 11 (November 3, 2021): e29386. http://dx.doi.org/10.2196/29386.
Rahimi, Saeed, Antoni B. Moore, and Peter A. Whigham. "Beyond Objects in Space-Time: Towards a Movement Analysis Framework with 'How' and 'Why' Elements." ISPRS International Journal of Geo-Information 10, no. 3 (March 22, 2021): 190. http://dx.doi.org/10.3390/ijgi10030190.
Der volle Inhalt der QuelleDissertationen zum Thema "Contrastive, scenario and counterfactual explanations"
Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.
Decision support systems based on combinatorial optimization find application in various professional domains. However, the decision-makers who use these systems often lack an understanding of the underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance to accept system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue for the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions.

First, we propose a framework that models the process of explaining solutions to the end-users of a WSRP-solving system while covering a wide range of topics. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they essentially amount to asking whether a feasible and better solution exists in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs.

Then, we develop a method for generating explanation texts of different types, using a high-level vocabulary adapted to the end-users. The method relies on efficient algorithms for computing and extracting the relevant explanatory information, which is used to populate explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users.

Finally, we introduce a system design that structures the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This design serves as the basis for a graphical-user-interface prototype which aims to demonstrate the practical applicability and potential benefits of our approach.
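To illustrate the central idea of the abstract above — that a contrastive or counterfactual question about a solution reduces to asking whether a feasible, better solution exists in a restricted neighborhood — the following minimal Python sketch answers "why is task t1 done by alice rather than bob?" for a toy assignment instance. It is a hypothetical illustration, not code from the thesis; the instance, cost table, and helper names (is_feasible, contrastive_answer) are all invented for this example.

# Minimal illustrative sketch (hypothetical, not from the thesis): a contrastive
# question about a toy task-assignment solution is answered by checking whether
# any feasible assignment in the neighborhood "t1 is given to bob" is at least
# as good as the current solution.
from itertools import product

TASKS = ["t1", "t2", "t3"]
WORKERS = ["alice", "bob"]

# Hypothetical cost of assigning each task to each worker.
COST = {("t1", "alice"): 2, ("t1", "bob"): 5,
        ("t2", "alice"): 4, ("t2", "bob"): 3,
        ("t3", "alice"): 6, ("t3", "bob"): 4}

def is_feasible(assignment):
    # Toy feasibility rule: no worker is given more than two tasks.
    return all(sum(assignment[t] == w for t in TASKS) <= 2 for w in WORKERS)

def total_cost(assignment):
    return sum(COST[t, assignment[t]] for t in TASKS)

def contrastive_answer(current, task, alt_worker):
    # Decision problem behind the question: does a feasible solution with
    # `task` assigned to `alt_worker` exist that is no worse than `current`?
    candidates = [dict(zip(TASKS, ws)) for ws in product(WORKERS, repeat=len(TASKS))]
    feasible = [c for c in candidates if c[task] == alt_worker and is_feasible(c)]
    best = min(feasible, key=total_cost, default=None)
    if best is not None and total_cost(best) <= total_cost(current):
        return f"Yes: assigning {task} to {alt_worker} allows total cost {total_cost(best)}."
    return (f"No: every feasible solution with {task} -> {alt_worker} costs more "
            f"than the current total of {total_cost(current)}.")

if __name__ == "__main__":
    current = {"t1": "alice", "t2": "bob", "t3": "bob"}  # total cost 2 + 3 + 4 = 9
    print(contrastive_answer(current, "t1", "bob"))

In the thesis itself, such neighborhood queries are cast as decision problems and mathematical programs solved by dedicated algorithms; the brute-force enumeration above only illustrates the mapping from a user question to an optimization query on a toy instance.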
Book chapters on the topic "Contrastive, scenario and counterfactual explanations"
McAreavey, Kevin, and Weiru Liu. "Modifications of the Miller Definition of Contrastive (Counterfactual) Explanations." In Lecture Notes in Computer Science, 54–67. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45608-4_5.
Liu, Xiaowei, Kevin McAreavey, and Weiru Liu. "Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards." In Communications in Computer and Information Science, 72–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0_4.
Främling, Kary. "Counterfactual, Contrastive, and Hierarchical Explanations with Contextual Importance and Utility." In Explainable and Transparent AI and Multi-Agent Systems, 180–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40878-6_16.
Kuhl, Ulrike, André Artelt, and Barbara Hammer. "For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI." In Communications in Computer and Information Science, 280–300. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44070-0_14.
Holzinger, Andreas, Anna Saranti, Anne-Christin Hauschild, Jacqueline Beinecke, Dominik Heider, Richard Roettger, Heimo Mueller, Jan Baumbach, and Bastian Pfeifer. "Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning." In Lecture Notes in Computer Science, 45–64. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_4.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Contrastive, scenario and counterfactual explanations"
Sokol, Kacper, and Peter Flach. "Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/865.
Sokol, Kacper, and Peter Flach. "Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/836.
Wang, Xue, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, and Kui Ren. "Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00195.
Shi, Yiwei, Kevin McAreavey, and Weiru Liu. "Evaluating contrastive explanations for AI planning with non-experts: a smart home battery scenario." In 2022 27th International Conference on Automation and Computing (ICAC). IEEE, 2022. http://dx.doi.org/10.1109/icac55051.2022.9911125.