Academic literature on the topic 'Explication contrefactuelle'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explication contrefactuelle.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Explication contrefactuelle"
Levy, François. "Statistique et contrefactuels : quelques explications." Intellectica. Revue de l'Association pour la Recherche Cognitive 38, no. 1 (2004): 187–91. http://dx.doi.org/10.3406/intel.2004.1712.
Auray, Stéphane, Aurélien Eyquem, Bertrand Garbinti, and Jonathan Goupille-Lebret. "Inégalités de revenu et de patrimoine : modèles, données et perspectives croisées." Revue française d'économie Vol. XXXIX, no. 1 (July 22, 2024): 81–124. http://dx.doi.org/10.3917/rfe.241.0081.
Harvey, Frank P. "President Al Gore and the 2003 Iraq War: A Counterfactual Test of Conventional “W”isdom." Canadian Journal of Political Science 45, no. 1 (March 2012): 1–32. http://dx.doi.org/10.1017/s0008423911000904.
Dissertations / Theses on the topic "Explication contrefactuelle"
Jeanneret, Sanmiguel Guillaume. "Towards explainable and interpretable deep neural networks." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.
Deep neural architectures have demonstrated outstanding results in a variety of computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of Explainable AI has emerged to understand what these models are learning as well as to uncover their sources of error. In this thesis, we explore the world of explainable algorithms to uncover the biases and variables used by these parametric models in the context of image classification. To this end, we divide this thesis into four parts. The first three chapters propose several methods to generate counterfactual explanations. In the first chapter, we propose incorporating diffusion models to generate these explanations. Next, we link the research areas of adversarial attacks and counterfactuals. The following chapter proposes a new pipeline to generate counterfactuals in a fully black-box mode, i.e., using only the input and the prediction without accessing the model. The final part of this thesis concerns the creation of interpretable-by-design methods. More specifically, we investigate how to extend vision transformers into interpretable architectures. Our proposed methods have shown promising results and push forward the knowledge frontier of the current XAI literature.
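The abstract's central notion, a counterfactual explanation, is a minimally modified input that changes the model's prediction. The sketch below illustrates that generic idea with gradient-based search on a toy logistic classifier; it is not the diffusion-based or black-box pipeline developed in the thesis, and the weights, proximity penalty, and step size are illustrative assumptions only.

```python
# Minimal sketch: gradient-based counterfactual search on a toy logistic
# classifier (illustrative only, not the thesis's diffusion-based method).
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1            # hypothetical "trained" weights and bias

def predict_proba(x):
    """Probability of class 1 under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target, lam=0.1, lr=0.5, steps=200):
    """Ascend the target-class likelihood while staying close to the original input."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        if (p > 0.5) == bool(target):       # prediction has flipped to the target class
            break
        grad_score = (target - p) * w       # d log p(target | x_cf) / d x_cf
        grad_prox = -2.0 * lam * (x_cf - x) # L2 penalty keeps x_cf near x
        x_cf = x_cf + lr * (grad_score + grad_prox)
    return x_cf

x = rng.normal(size=5)
target = int(predict_proba(x) <= 0.5)       # aim for the opposite of the current prediction
x_cf = counterfactual(x, target)
print("original score:", predict_proba(x), "counterfactual score:", predict_proba(x_cf))
```

Replacing the toy classifier with a deep network and the simple proximity penalty with a generative prior is, roughly, where diffusion-based counterfactual methods come into play.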
Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.
This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is to say the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic context, meaning that the explanation is generated without using any knowledge of the classifier (treated as a black box) or of the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful to interpretability. We study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
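As an illustration of the second category mentioned in the abstract, the sketch below fits a local surrogate model around a single instance in a fully agnostic setting, querying the classifier only through its predictions. The black_box function, the Gaussian sampling scheme, and the kernel width are hypothetical placeholders, not the criteria or approaches studied in the thesis.

```python
# Minimal sketch: locally weighted linear surrogate of a black-box classifier
# around one instance (illustrative placeholder, not the thesis's method).
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """Stand-in for an opaque classifier: returns a probability-like score."""
    return 1.0 / (1.0 + np.exp(-(X @ np.array([2.0, -1.0, 0.5]) - 0.3)))

def local_surrogate(x, n_samples=500, sigma=0.5, kernel_width=1.0):
    """Fit a proximity-weighted linear model on perturbations drawn around x."""
    Z = x + sigma * rng.normal(size=(n_samples, x.size))   # local perturbations
    y = black_box(Z)                                        # query the black box only
    d = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(d ** 2) / (kernel_width ** 2))       # closer samples weigh more
    A = np.hstack([Z, np.ones((n_samples, 1))])             # add intercept column
    W = np.sqrt(weights)[:, None]                           # weighted least squares
    coefs, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)
    return coefs[:-1], coefs[-1]                            # feature weights, intercept

x = np.array([0.2, 1.0, -0.4])
w_local, b_local = local_surrogate(x)
print("local feature weights:", w_local)
```

The issues listed in the abstract map naturally onto such a procedure: the sampled perturbations may fall out of distribution, and too wide a kernel makes the explanation insufficiently local.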
Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.
Decision support systems based on combinatorial optimization find application in various professional domains. However, decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance in accepting system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue in the case of the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions. First, we propose a framework that models the process for explaining solutions to the end-users of a WSRP-solving system while allowing a wide range of topics to be addressed. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they essentially amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs. Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information, which is used to populate explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users. Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as the basis for a graphical user interface prototype that aims to demonstrate the practical applicability and potential benefits of our approach.
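To make the mathematical reformulation in the abstract concrete, the sketch below answers a question of the kind "does a feasible and better solution exist nearby?" by enumerating a small swap neighborhood of a toy assignment. The schedule encoding, skill constraints, and travel costs are hypothetical placeholders and do not reflect the thesis's actual WSRP model, question templates, or algorithms.

```python
# Minimal sketch: check whether a feasible, strictly better solution exists in a
# small swap neighborhood of the current one (toy placeholder, not the thesis's
# WSRP formulation).
from itertools import combinations

def neighborhood(schedule):
    """Yield schedules obtained by swapping the tasks of two workers."""
    for i, j in combinations(range(len(schedule)), 2):
        neighbor = list(schedule)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        yield tuple(neighbor)

def is_feasible(schedule, skills):
    """Each worker must be qualified for the task assigned to them."""
    return all(task in skills[worker] for worker, task in enumerate(schedule))

def cost(schedule, travel):
    """Total travel cost of the assignment."""
    return sum(travel[worker][task] for worker, task in enumerate(schedule))

def better_feasible_neighbor(schedule, skills, travel):
    """Return an improving feasible neighbor if one exists, otherwise None."""
    current = cost(schedule, travel)
    for neighbor in neighborhood(schedule):
        if is_feasible(neighbor, skills) and cost(neighbor, travel) < current:
            return neighbor
    return None

# toy instance: 3 workers, tasks 0..2, per-worker skill sets and travel costs
skills = [{0, 1}, {0, 1, 2}, {0, 2}]
travel = [[4, 1, 9], [2, 3, 2], [5, 8, 1]]
print(better_feasible_neighbor((0, 1, 2), skills, travel))   # -> (1, 0, 2)
```

In the thesis this kind of question is instead formulated as decision problems and mathematical programs, which scale to realistic WSRP instances; the enumeration above only illustrates what is being asked.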