Academic literature on the topic 'IA explicable'
Below are lists of relevant articles, theses, book chapters, and other scholarly sources on the topic 'IA explicable' (explainable AI).
Journal articles on the topic "IA explicable"
Krasnov, Fedor, Irina Smaznevich, and Elena Baskakova. "Optimization approach to the choice of explicable methods for detecting anomalies in homogeneous text collections." Informatics and Automation 20, no. 4 (August 3, 2021): 869–904. http://dx.doi.org/10.15622/ia.20.4.5.
Berger, Alain, and Jean-Pierre Cotton. "Quel avenir pour la modélisation et la structuration dans un projet de management de la connaissance ?" I2D - Information, données & documents 1, no. 1 (July 19, 2023): 88–94. http://dx.doi.org/10.3917/i2d.231.0088.
Vuarin, Louis, Pedro Gomes Lopes, and David Massé. "L’intelligence artificielle peut-elle être une innovation responsable ?" Innovations N° 72, no. 3 (August 29, 2023): 103–47. http://dx.doi.org/10.3917/inno.pr2.0153.
Guzmán Ponce, Angélica, Joana López-Bautista, and Ruben Fernandez-Beltran. "Interpretando Modelos de IA en Cáncer de Mama con SHAP y LIME." Ideas en Ciencias de la Ingeniería 2, no. 2 (July 12, 2024): 15. http://dx.doi.org/10.36677/ideaseningenieria.v2i2.23952.
Pérez-Salgado, Diana, María Sandra Compean-Dardón, and Luis Ortiz-Hernández. "Inseguridad alimentaria y adherencia al tratamiento antirretroviral en personas con VIH de México." Ciência & Saúde Coletiva 22, no. 2 (February 2017): 543–51. http://dx.doi.org/10.1590/1413-81232017222.10792016.
Reus-Smit, Christian, and Ayşe Zarakol. "Polymorphic justice and the crisis of international order." International Affairs 99, no. 1 (January 9, 2023): 1–22. http://dx.doi.org/10.1093/ia/iiac232.
Forster, Timon. "Respected individuals: when state representatives wield outsize influence in international organizations." International Affairs 100, no. 1 (January 8, 2024): 261–81. http://dx.doi.org/10.1093/ia/iiad226.
Sowers, Jeannie, and Erika Weinthal. "Humanitarian challenges and the targeting of civilian infrastructure in the Yemen war." International Affairs 97, no. 1 (January 2021): 157–77. http://dx.doi.org/10.1093/ia/iiaa166.
Cantillo Romero, Janer Rafael, Javier Javier Estrada Romero, and Carlos Henríquez Miranda. "APLICACIÓN DE ALGORITMOS DE APRENDIZAJE AUTOMÁTICO EN GEOCIENCIA: REVISIÓN INTEGRAL Y DESAFÍO FUTURO." REVISTA AMBIENTAL AGUA, AIRE Y SUELO 14, no. 2 (November 30, 2023): 9–18. http://dx.doi.org/10.24054/raaas.v14i2.2783.
Renshaw, Phil St John, Emma Parry, and Michael Dickmann. "International assignments – extending an organizational value framework." Journal of Global Mobility: The Home of Expatriate Management Research 8, no. 2 (June 3, 2020): 141–60. http://dx.doi.org/10.1108/jgm-12-2019-0055.
Full textDissertations / Theses on the topic "IA explicable"
Houzé, Etienne. "A generic and adaptive approach to explainable AI in autonomic systems : the case of the smart home." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT022.
Smart homes are Cyber-Physical Systems where various components cooperate to fulfill high-level goals such as user comfort or safety. These autonomic systems can adapt at runtime without requiring human intervention. This adaptation is hard for the occupant to understand, which can hinder the adoption of smart home systems. Since the mid-2010s, explainable AI has been a topic of interest, aiming to open the black box of complex AI models. The difficulty of explaining autonomic systems does not come from the intrinsic complexity of their components, but rather from their self-adaptation capability, which leads to changes of configuration, logic or goals at runtime. In addition, the diversity of smart home devices makes the task harder. To tackle this challenge, we propose to add an explanatory system to the existing smart home autonomic system, whose task is to observe the various controllers and devices to generate explanations. We define six goals for such a system. 1) To generate contrastive explanations in unexpected or unwanted situations. 2) To generate a shallow reasoning, whose different elements are causally closely related to each other. 3) To be transparent, i.e. to expose its entire reasoning and which components are involved. 4) To be self-aware, integrating its reflective knowledge into the explanation. 5) To be generic and able to adapt to diverse components and system architectures. 6) To preserve privacy and favor locality of reasoning. Our proposed solution is an explanatory system in which a central component, named the "Spotlight", implements an algorithm named D-CAS. This algorithm identifies three elements in an explanatory process: conflict detection via observation interpretation, conflict propagation via abductive inference, and simulation of possible consequences. All three steps are performed locally, by Local Explanatory Components which are sequentially interrogated by the Spotlight. Each Local Component is paired with an autonomic device or controller and acts as an expert in the related knowledge domain. This organization enables the addition of new components, integrating their knowledge into the general system without need for reconfiguration. We illustrate this architecture and algorithm in a proof-of-concept demonstrator that generates explanations in typical use cases. We design Local Explanatory Components to be generic platforms that can be specialized by the addition of modules with predefined interfaces. This modularity enables the integration of various techniques for abduction, interpretation and simulation. Our system aims to handle unusual situations in which data may be scarce, making abduction methods based on past occurrences inoperable. We propose a novel approach: to estimate the memorability of events and use them as relevant hypotheses for a surprising phenomenon. Our high-level approach to explainability aims to be generic and paves the way towards systems integrating more advanced modules, guaranteeing smart home explainability. The overall method can also be used for other Cyber-Physical Systems.
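The abstract sketches a three-step explanatory loop (interpretation, abduction, simulation) coordinated by a central "Spotlight" that interrogates Local Explanatory Components in turn. Purely as a reading aid, here is a minimal Python sketch of how such a loop might be organised; every class, method, and piece of data below is hypothetical and is not taken from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    device: str
    value: object

@dataclass
class Explanation:
    steps: list = field(default_factory=list)

class LocalExplanatoryComponent:
    """Hypothetical expert paired with one device or controller."""
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge  # maps observed symptoms to candidate causes

    def interpret(self, observation):
        # Step 1: conflict detection -- does this observation contradict expectations?
        return observation.value not in self.knowledge.get("expected", [])

    def abduce(self, observation):
        # Step 2: conflict propagation -- propose candidate causes (abductive inference)
        return self.knowledge.get("causes", {}).get(observation.value, [])

    def simulate(self, cause):
        # Step 3: simulate the possible consequences of a candidate cause
        return self.knowledge.get("consequences", {}).get(cause, "unknown effect")

class Spotlight:
    """Central coordinator that interrogates local components sequentially."""
    def __init__(self, components):
        self.components = components

    def explain(self, observation):
        explanation = Explanation()
        for component in self.components:
            if component.interpret(observation):
                for cause in component.abduce(observation):
                    effect = component.simulate(cause)
                    explanation.steps.append(
                        f"{component.name}: '{cause}' would lead to '{effect}'"
                    )
        return explanation

# Hypothetical usage: a thermostat expert explains an unexpected temperature reading.
thermostat = LocalExplanatoryComponent("thermostat", {
    "expected": [20, 21, 22],
    "causes": {30: ["window_open", "heater_override"]},
    "consequences": {"window_open": "heating works harder"},
})
print(Spotlight([thermostat]).explain(Observation("living_room_sensor", 30)).steps)
```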
Ayats, H. Ambre. "Construction de graphes de connaissances à partir de textes avec une intelligence artificielle explicable et centrée-utilisateur·ice." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS095.
With recent advances in artificial intelligence, the question of human control has become central. Today, this involves both research into explainability and designs centered around interaction with the user. Moreover, with the expansion of the semantic web and automatic natural language processing methods, the task of constructing knowledge graphs from texts has become an important issue. This thesis presents a user-centered system for the construction of knowledge graphs from texts, and makes several contributions. First, we introduce a user-centered workflow for the aforementioned task, which progressively automates the user's actions while leaving them fine-grained control over the outcome. Next, we present our contributions in the field of formal concept analysis, used to design an explainable instance-based learning module for relation classification. Finally, we present our contributions in the field of relation extraction, and how these fit into the presented workflow.
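Since the abstract builds its explainable instance-based learning module on formal concept analysis (FCA), a brief reminder of the two FCA derivation operators may help readers unfamiliar with the formalism. The toy context below is invented purely for illustration and does not come from the thesis.

```python
# Toy formal context: objects (sentences) x attributes (lexical/syntactic features).
# The data is invented for illustration; it is not from the thesis.
context = {
    "sentence_1": {"has_verb_found", "subject_is_person"},
    "sentence_2": {"has_verb_found", "subject_is_org"},
    "sentence_3": {"has_verb_located", "subject_is_org"},
}

def common_attributes(objects):
    """Derivation A -> A': attributes shared by every object in A."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def objects_having(attributes):
    """Derivation B -> B': objects possessing every attribute in B."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# A formal concept is a pair (A, B) such that A' == B and B' == A.
extent = {"sentence_1", "sentence_2"}
intent = common_attributes(extent)          # {'has_verb_found'}
closed_extent = objects_having(intent)      # {'sentence_1', 'sentence_2'}
print(intent, closed_extent == extent)      # concept check: True
```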
Afchar, Darius. "Interpretable Music Recommender Systems." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS608.
"Why do they keep recommending me this music track?" "Why did our system recommend these tracks to users?" Nowadays, streaming platforms are the most common way to listen to recorded music. Still, music recommendations, at the heart of these platforms, are not an easy feat. Sometimes, both users and engineers may be equally puzzled about the behaviour of a music recommendation system (MRS). MRS have been successfully employed to help explore catalogues that may be as large as tens of millions of music tracks. Built and optimised for accuracy, real-world MRS often end up being quite complex. They may further rely on a range of interconnected modules that, for instance, analyse audio signals, retrieve metadata about albums and artists, collect and aggregate user feedback on the music service, and compute item similarities with collaborative filtering. All this complexity hinders the ability to explain recommendations and, more broadly, explain the system. Yet, explanations are essential for users to foster a long-term engagement with a system that they can understand (and forgive), and for system owners to rationalise failures and improve said system. Interpretability may also be needed to check the fairness of a decision or can be framed as a means to control the recommendations better. Moreover, we could also recursively question: Why does an explanation method explain in a certain way? Is this explanation relevant? What could be a better explanation? All these questions relate to the interpretability of MRSs. In the first half of this thesis, we explore the many flavours that interpretability can have in various recommendation tasks. Indeed, since there is not just one recommendation task but many (e.g., sequential recommendation, playlist continuation, artist similarity), as well as many angles through which music may be represented and processed (e.g., metadata, audio signals, embeddings computed from listening patterns), there are as many settings that require specific adjustments to make explanations relevant. A topic like this one can never be exhaustively addressed. This study was guided by some of the mentioned modalities of musical objects: interpreting implicit user logs, item features, audio signals and similarity embeddings. Our contributions include several novel methods for eXplainable Artificial Intelligence (XAI) and several theoretical results, shedding new light on our understanding of past methods. Nevertheless, similar to how recommendations may not be interpretable, explanations about them may themselves lack interpretability and justifications. Therefore, in the second half of this thesis, we found it essential to take a step back from the rationale of ML and try to address a (perhaps surprisingly) understudied question in XAI: "What is interpretability?" Introducing concepts from philosophy and the social sciences, we stress that there is a misalignment in the way explanations from XAI are generated and unfold versus how humans actually explain. We highlight that current research tends to rely too much on intuitions or hasty reductions of complex realities into convenient mathematical terms, which leads to the canonisation of assumptions into questionable standards (e.g., sparsity entails interpretability). We have treated this part as a comprehensive tutorial addressed to ML researchers to better ground their knowledge of explanations with a precise vocabulary and a broader perspective. We provide practical advice and highlight less popular branches of XAI better aligned with human cognition. Of course, we also reflect back and recontextualise the methods proposed in the previous part. Overall, this enables us to formulate some perspectives for our field of XAI as a whole, including its more critical and promising next steps as well as its shortcomings to overcome.
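The abstract names item similarities computed with collaborative filtering as one of the interconnected modules whose behaviour explanations must cover. As a hedged illustration only, and not the methods proposed in the thesis, the sketch below shows the simplest neighbourhood-style justification: recommend a track and point to the already-listened track that contributed most to its score. All data and names are invented.

```python
import numpy as np

# Toy implicit-feedback matrix (users x tracks); 1 = listened. Invented for illustration.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
], dtype=float)
track_names = ["track_a", "track_b", "track_c", "track_d", "track_e"]

# Item-item cosine similarity from co-listening patterns (basic collaborative filtering).
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
normalized = interactions / np.where(norms == 0, 1.0, norms)
similarity = normalized.T @ normalized

def recommend_with_reason(user_idx, top_k=1):
    """Score unseen tracks by similarity to the user's history and name the evidence."""
    history = np.flatnonzero(interactions[user_idx])
    scores = similarity[:, history].sum(axis=1)
    scores[history] = -np.inf  # do not re-recommend tracks already listened to
    for item in np.argsort(scores)[::-1][:top_k]:
        evidence = history[np.argmax(similarity[item, history])]
        print(f"Recommend {track_names[item]} because you listened to {track_names[evidence]}")

recommend_with_reason(user_idx=1)
```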
Lambert, Benjamin. "Quantification et caractérisation de l'incertitude de segmentation d'images médicales par des réseaux profonds." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALS011.
In recent years, artificial intelligence algorithms have demonstrated outstanding performance in a wide range of tasks, including the segmentation and classification of medical images. The automatic segmentation of lesions in brain MRIs enables a rapid quantification of disease progression: a count of new lesions, a measure of total lesion volume and a description of lesion shape. This analysis can then be used by the neuroradiologist to adapt therapeutic treatment if necessary, making medical decisions faster and more precise. At present, these algorithms, which are often regarded as black boxes, produce predictions without any information concerning their certainty. This hinders the full adoption of artificial intelligence algorithms in sensitive areas, as they tend to produce errors with high confidence, potentially misleading human decision-makers. Identifying and understanding the causes of these failures is key to maximizing the usefulness of AI algorithms and enabling their acceptance within the medical profession. To achieve this goal, it is important to be able to distinguish between the two main sources of uncertainty: first, aleatoric uncertainty, which corresponds to uncertainty linked to intrinsic image noise and acquisition artifacts; second, epistemic uncertainty, which relates to the lack of knowledge of the model. The joint aim of Pixyl and GIN is to achieve better identification of the sources of uncertainty in deep neural networks, and consequently to develop new methods for estimating this uncertainty in routine, real-time clinical use. In the context of medical image segmentation, uncertainty estimation is relevant at several scales. Firstly, at the voxel scale, uncertainty can be quantified using uncertainty maps. This makes it possible to superimpose the image, its segmentation and the uncertainty map to visualize uncertain areas. Secondly, for pathologies such as Multiple Sclerosis, the radiologist's attention is focused on the lesion rather than the voxel. Structural uncertainty estimation, i.e. at the lesion scale, enables the radiologist to quickly review uncertain lesions that may be false positives. Thirdly, high-level metrics such as volume or number of lesions are commonly extracted from segmentations. Being able to associate predictive intervals with these metrics is important so that the clinician can take this uncertainty into account in their analysis. Finally, uncertainty can be quantified at the scale of the whole image, for example to detect out-of-distribution images that present a significant anomaly that could bias their analysis. In this thesis, the development of uncertainty quantification tools operating at each of these levels is proposed. More generally, the desired and expected methods should enable Pixyl to improve its current models, services and products. For clinical application, inference time is particularly critical: decision support is only useful if it is fast enough to be applied during patient consultation (i.e. in less than 5 minutes). Moreover, innovative solutions will need to maintain a high level of performance even when applied to small image databases, as is generally the case in the medical field.
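The abstract distinguishes aleatoric from epistemic uncertainty and mentions voxel-scale uncertainty maps. For background only, one standard decomposition computes per-voxel predictive entropy and mutual information from repeated stochastic forward passes; the NumPy sketch below illustrates that decomposition under the assumption of Monte Carlo sampling (e.g. dropout or an ensemble) and is not claimed to be the method developed in the thesis.

```python
import numpy as np

def uncertainty_maps(mc_probs, eps=1e-8):
    """Split per-voxel uncertainty from T stochastic forward passes.

    mc_probs: array of shape (T, C, H, W) holding softmax outputs of T
    sampled predictions (e.g. Monte Carlo dropout or an ensemble).
    Returns (total, aleatoric, epistemic) maps of shape (H, W).
    """
    mean_probs = mc_probs.mean(axis=0)                                   # (C, H, W)
    total = -(mean_probs * np.log(mean_probs + eps)).sum(axis=0)         # predictive entropy
    aleatoric = -(mc_probs * np.log(mc_probs + eps)).sum(axis=1).mean(axis=0)
    epistemic = total - aleatoric                                        # mutual information
    return total, aleatoric, epistemic

# Toy example: 8 sampled predictions, 2 classes, a 4x4 "image" (random, for illustration).
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 2, 4, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
total, aleatoric, epistemic = uncertainty_maps(probs)
print(total.shape, float(epistemic.mean()))
```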
Khodji, Hiba. "Apprentissage profond et transfert de connaissances pour la détection d'erreurs dans les séquences biologiques." Electronic Thesis or Diss., Strasbourg, 2023. http://www.theses.fr/2023STRAD058.
The widespread use of high-throughput technologies in the biomedical field, notably the new generation of genome sequencing technologies, is producing massive amounts of data. Multiple Sequence Alignment (MSA) serves as a fundamental tool for the analysis of these data, with applications including genome annotation, protein structure and function prediction, and the study of evolutionary relationships. However, the accuracy of MSAs is often compromised by factors such as unreliable alignment algorithms, inaccurate gene prediction, or incomplete genome sequencing. This thesis addresses the issue of data quality assessment by leveraging deep learning techniques. We propose novel models based on convolutional neural networks for the identification of errors in visual representations of MSAs. Our primary objective is to assist domain experts in their research studies, where the accuracy of MSAs is crucial. We therefore focused on providing reliable explanations for our model predictions by harnessing the potential of explainable artificial intelligence (XAI). In particular, we leveraged visual explanations as the foundation for a transfer learning framework that aims to improve a model's ability to focus on the underlying features of an input. Finally, we proposed novel evaluation metrics designed to assess this ability. Initial findings suggest that our approach achieves a good balance between model complexity, performance, and explainability, and could be leveraged in domains where data availability is limited and the need for comprehensive result explanation is paramount.
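The abstract describes convolutional networks trained on visual representations of multiple sequence alignments. As a purely illustrative assumption about what such an input tensor could look like (the thesis's actual image representation may differ), the sketch below one-hot encodes a toy alignment into a channels-by-rows-by-columns array suitable for a CNN.

```python
import numpy as np

# Toy aligned protein fragment with gaps ('-'); invented purely for illustration.
msa = [
    "MKT-LLA",
    "MKTALLA",
    "MRT-LIA",
]
alphabet = "ACDEFGHIKLMNPQRSTVWY-"   # 20 amino acids + gap
index = {aa: i for i, aa in enumerate(alphabet)}

def msa_to_tensor(sequences):
    """One-hot encode an MSA as (channels, rows, columns) for a CNN.

    One possible visual-style encoding; the thesis's actual image
    representation may differ.
    """
    rows, cols = len(sequences), len(sequences[0])
    tensor = np.zeros((len(alphabet), rows, cols), dtype=np.float32)
    for r, seq in enumerate(sequences):
        for c, residue in enumerate(seq):
            tensor[index[residue], r, c] = 1.0
    return tensor

x = msa_to_tensor(msa)
print(x.shape)        # (21, 3, 7) -- channels x sequences x alignment columns
print(x.sum(axis=0))  # each cell holds exactly one residue or gap -> all ones
```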
Book chapters on the topic "IA explicable"
BAILLARGEAT, Dominique. "Intelligence Artificielle et villes intelligentes." In Algorithmes et Société, 37–46. Editions des archives contemporaines, 2021. http://dx.doi.org/10.17184/eac.4544.