Selected scientific literature on the topic "Explication contrefactuelle"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Explication contrefactuelle".
Journal articles on the topic "Explication contrefactuelle"
Levy, François. "Statistique et contrefactuels : quelques explications". Intellectica. Revue de l'Association pour la Recherche Cognitive 38, no. 1 (2004): 187–91. http://dx.doi.org/10.3406/intel.2004.1712.
Auray, Stéphane, Aurélien Eyquem, Bertrand Garbinti, and Jonathan Goupille-Lebret. "Inégalités de revenu et de patrimoine : modèles, données et perspectives croisées". Revue française d'économie Vol. XXXIX, no. 1 (July 22, 2024): 81–124. http://dx.doi.org/10.3917/rfe.241.0081.
Harvey, Frank P. "President Al Gore and the 2003 Iraq War: A Counterfactual Test of Conventional “W”isdom". Canadian Journal of Political Science 45, no. 1 (March 2012): 1–32. http://dx.doi.org/10.1017/s0008423911000904.
Theses / dissertations on the topic "Explication contrefactuelle"
Jeanneret Sanmiguel, Guillaume. "Towards explainable and interpretable deep neural networks". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.
Deep neural architectures have demonstrated outstanding results in a variety of computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of Explainable AI has emerged to understand what these models are learning and to uncover their sources of error. In this thesis, we explore the world of explainable algorithms to uncover the biases and variables used by these parametric models in the context of image classification. To this end, we divide this thesis into four parts. The first three chapters propose several methods to generate counterfactual explanations. In the first chapter, we propose to incorporate diffusion models to generate these explanations. Next, we link the research areas of adversarial attacks and counterfactuals. The following chapter proposes a new pipeline to generate counterfactuals in a fully black-box mode, i.e., using only the input and the prediction without accessing the model. The final part of this thesis concerns the creation of interpretable-by-design methods. More specifically, we investigate how to extend vision transformers into interpretable architectures. Our proposed methods have shown promising results and take a step forward in the knowledge frontier of the current XAI literature.
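The link between adversarial attacks and counterfactual explanations mentioned in this abstract can be illustrated with a minimal sketch: starting from the original input, gradient descent pushes the classifier toward a target class while a distance penalty keeps the perturbed input close to the original. The function name, hyperparameters, and loss weighting below are illustrative assumptions, not the thesis's actual method.

```python
import torch

def gradient_counterfactual(model, x, target_class, steps=200, lr=0.05, dist_weight=0.1):
    """Gradient-based counterfactual in the spirit of an adversarial attack.

    model: a classifier in eval mode, mapping a batched input to logits.
    x: a single input with a batch dimension, e.g. shape [1, C, H, W].
    """
    x = x.detach()
    delta = torch.zeros_like(x, requires_grad=True)  # perturbation to optimise
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        logits = model(x + delta)
        if logits.argmax(dim=1).item() == target_class:
            break  # prediction flipped: x + delta is a counterfactual
        # Push towards the target class while staying close to x.
        loss = torch.nn.functional.cross_entropy(logits, target) + dist_weight * delta.norm()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (x + delta).detach()
```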
Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.
This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic context, meaning that the explanation is generated without using any knowledge about the classifier (treated as a black box) or the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful to interpretability. We study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
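The fully agnostic setting described above, in which only the classifier's prediction function can be queried, lends itself to a short illustration: sample candidates at growing distances from the instance to explain and return the closest one whose predicted class differs. The helper name, sampling scheme, and parameters are assumptions made for illustration, not the algorithms developed in the thesis.

```python
import numpy as np

def find_counterfactual(predict, x, step=0.1, n_samples=500, max_radius=10.0, seed=0):
    """Black-box counterfactual search around a single instance.

    predict: maps a 2-D array of inputs to a 1-D array of class labels.
    x: 1-D NumPy array, the instance whose prediction we want to flip.
    """
    rng = np.random.default_rng(seed)
    original_class = predict(x[None, :])[0]
    d = x.shape[0]
    radius = step
    while radius <= max_radius:
        # Sample candidates uniformly in the L2 ball of the current radius.
        directions = rng.normal(size=(n_samples, d))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = radius * rng.uniform(size=(n_samples, 1)) ** (1.0 / d)
        candidates = x + radii * directions
        flipped = candidates[predict(candidates) != original_class]
        if len(flipped) > 0:
            # Return the counterfactual closest to the original instance.
            return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]
        radius += step
    return None  # no counterfactual found within max_radius
```

For example, with a scikit-learn classifier clf, predict can simply be clf.predict.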
Lerouge, Mathieu. "Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST174.
Decision support systems based on combinatorial optimization find application in various professional domains. However, decision-makers who use these systems often lack understanding of their underlying mathematical concepts and algorithmic principles. This knowledge gap can lead to skepticism and reluctance to accept system-generated solutions, thereby eroding trust in the system. This thesis addresses this issue in the case of the Workforce Scheduling and Routing Problem (WSRP), a combinatorial optimization problem involving human resource allocation and routing decisions.

First, we propose a framework that models the process of explaining solutions to the end-users of a WSRP-solving system while allowing a wide range of topics to be addressed. End-users initiate the process by making observations about a solution and formulating questions related to these observations using predefined template texts. These questions may be of contrastive, scenario, or counterfactual type. From a mathematical point of view, they essentially amount to asking whether there exists a feasible and better solution in a given neighborhood of the current solution. Depending on the question type, this leads to the formulation of one or several decision problems and mathematical programs.

Then, we develop a method for generating explanation texts of different types, with a high-level vocabulary adapted to the end-users. Our method relies on efficient algorithms for computing and extracting the relevant explanatory information and uses it to populate explanation template texts. Numerical experiments show that these algorithms have execution times that are mostly compatible with near-real-time use of explanations by end-users.

Finally, we introduce a system design for structuring the interactions between our explanation-generation techniques and the end-users who receive the explanation texts. This system serves as a basis for a graphical-user-interface prototype which aims to demonstrate the practical applicability and potential benefits of our approach.
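At their core, the contrastive, scenario, and counterfactual questions described above reduce to one decision problem: does a feasible, strictly better solution exist in the neighborhood implied by the user's question? The sketch below illustrates that reduction in a generic, enumeration-based form; the function and parameter names are hypothetical, and the thesis formulates these checks as decision problems and mathematical programs rather than as explicit enumeration.

```python
from typing import Callable, Iterable, Optional, Tuple, TypeVar

Solution = TypeVar("Solution")

def answer_counterfactual_question(
    current: Solution,
    neighborhood: Callable[[Solution], Iterable[Solution]],
    is_feasible: Callable[[Solution], bool],
    cost: Callable[[Solution], float],
) -> Tuple[bool, Optional[Solution]]:
    """Check whether a feasible, strictly better solution exists in the
    neighborhood implied by the user's question.

    Returns (True, witness) if such a solution is found; otherwise
    (False, None), in which case the explanation can state that the
    current solution cannot be improved within that neighborhood."""
    current_cost = cost(current)
    for candidate in neighborhood(current):
        if is_feasible(candidate) and cost(candidate) < current_cost:
            return True, candidate
    return False, None
```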