Selected scientific literature on the topic "Intelligence artificielle responsable"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Intelligence artificielle responsable".
Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is available in the metadata.
Journal articles on the topic "Intelligence artificielle responsable"
Barlow, Julie. "Comment les humains battront-ils les machines ?" Gestion Vol. 48, no. 3 (August 31, 2023): 84–88. http://dx.doi.org/10.3917/riges.483.0084.
de Saint-Affrique, Diane. "Intelligence artificielle et médecine : quelles règles éthiques et juridiques pour une IA responsable ?" Médecine & Droit 2022, no. 172 (February 2022): 5–7. http://dx.doi.org/10.1016/j.meddro.2021.09.001.
Boubacar, Soumaila. "Évaluation de la Rationalité de l'Intelligence Artificielle à l'Aide de la Formule de Soumaila : Une Perspective Neuro-mathématique et Éthique". Journal Africain des Cas Cliniques et Revues 8, no. 3 (July 10, 2024): 1–5. http://dx.doi.org/10.70065/24ja83.001l011007.
Droesbeke, Jean-Jacques. "Jean-Paul Almetti, Olivier Coppet, Gilbert Saporta, Manifeste pour une intelligence artificielle comprise et responsable". Statistique et société, no. 10 | 3 (December 1, 2022): 85–87. http://dx.doi.org/10.4000/statsoc.578.
Malafronte, Olivier. "Développement des leaders avec coaching et Intelligence Artificielle : augmentation des compétences". Management & Avenir N° 142, no. 4 (September 20, 2024): 39–59. http://dx.doi.org/10.3917/mav.142.0039.
Marcoux, Audrey, Marie-Hélène Tessier, Frédéric Grondin, Laetitia Reduron, and Philip L. Jackson. "Perspectives fondamentale, clinique et sociétale de l’utilisation des personnages virtuels en santé mentale". Santé mentale au Québec 46, no. 1 (September 21, 2021): 35–70. http://dx.doi.org/10.7202/1081509ar.
Desbois, Dominique. "Manifeste pour une intelligence artificielle comprise et responsable". Terminal, no. 134-135 (November 8, 2022). http://dx.doi.org/10.4000/terminal.8922.
RACOCEANU, Daniel, Mehdi OUNISSI, and Yannick L. KERGOSIEN. "Explicabilité en Intelligence Artificielle ; vers une IA Responsable - Instanciation dans le domaine de la santé". Technologies logicielles Architectures des systèmes, December 2022. http://dx.doi.org/10.51257/a-v1-h5030.
NAULLY, NICOLAS. "KANT ET LA MORALE DES MACHINES : PROGRAMMER LA « BONTÉ » DANS L’ÈRE DES MODÈLES DE LANGAGE". Management & Data Science, 2023. http://dx.doi.org/10.36863/mds.a.25661.
Theses on the topic "Intelligence artificielle responsable"
Belahcene, Khaled. "Explications pour l’agrégation des préférences — une contribution à l’aide à la décision responsable". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC101/document.
We consider providing a decision aiding process with tools aimed at complying with the demands of accountability. Decision makers, seeking support, provide preference information in the form of reference cases that illustrate their views on how to take conflicting points of view into account. The analyst, who provides the support, assumes a generic representation of reasoning with preferences and fits the aggregation procedure to the preference information. We assume a robust elicitation process, in which the recommendations stemming from the fitted procedure can be deduced from dialectical elements. We are therefore interested in solving an inverse problem concerning the model and in deriving explanations that are, if possible, sound, complete, and easy to compute and understand. We address two distinct forms of reasoning: one aimed at comparing pairs of alternatives with an additive value model, the other aimed at sorting alternatives into ordered categories with a noncompensatory model.
Ounissi, Mehdi. "Decoding the Black Box : Enhancing Interpretability and Trust in Artificial Intelligence for Biomedical Imaging - a Step Toward Responsible Artificial Intelligence". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS237.
In an era dominated by AI, its opaque decision-making --known as the "black box" problem-- poses significant challenges, especially in critical areas like biomedical imaging where accuracy and trust are crucial. Our research focuses on enhancing AI interpretability in biomedical applications. We have developed a framework for analyzing biomedical images that quantifies phagocytosis in neurodegenerative diseases using time-lapse phase-contrast video microscopy. Traditional methods often struggle with rapid cellular interactions and with distinguishing cells from backgrounds, which is critical for studying conditions like frontotemporal dementia (FTD). Our scalable, real-time framework features an explainable cell segmentation module that simplifies deep learning algorithms, enhances interpretability, and maintains high performance by incorporating visual explanations and simplifying the model. We also address issues in visual generative models, such as hallucinations in computational pathology, by using a unique encoder for Hematoxylin and Eosin staining coupled with multiple decoders. This method improves the accuracy and reliability of synthetic stain generation, employing innovative loss functions and regularization techniques that enhance performance and enable the precise synthetic stains crucial for pathological analysis. Our methodologies have been validated against several public benchmarks, showing top-tier performance. Notably, our framework distinguished between mutant and control microglial cells in FTD, providing new biological insights into this unproven phenomenon. Additionally, we introduced a cloud-based system that integrates complex models and provides real-time feedback, facilitating broader adoption and iterative improvements through pathologist insights.
The release of novel datasets, including video microscopy on microglial cell phagocytosis and a virtual staining dataset related to pediatric Crohn's disease, along with all source code, underscores our commitment to transparent, open scientific collaboration and advancement. Our research highlights the importance of interpretability in AI, advocating for technology that integrates seamlessly with user needs and ethical standards in healthcare. Enhanced interpretability allows researchers to better understand data and improve tool performance.
Rusu, Anca. "Delving into AI discourse within EU institutional communications : empowering informed decision-making for tomorrow’s tech by fostering responsible communication for emerging technologies". Electronic Thesis or Diss., Université Paris sciences et lettres, 2023. https://basepub.dauphine.fr/discover?query=%222023UPSLD029%22.
The proliferation of emerging technologies, defined as new technologies or new uses of old technologies (for example, artificial intelligence (AI)), presents both opportunities and challenges for society. These technologies promise to revolutionize various sectors, providing new efficiencies, capabilities, and insights, which makes them attractive to develop and use. However, their use also raises significant ethical, environmental, and social concerns. Organizations communicate through various modes, one of which is written discourse. Such discourse encompasses not only the structure of the message but also its content. In other words, the vocabulary (the structure) is used to express a specific point of view (the content). Within technology usage, there is a clear connection between communication and informed decision-making, as information about the technology (its form and substance) is spread through communication, which in turn aids in making well-informed decisions. This thesis adopts a risk governance approach, which involves taking a preventive perspective to minimize (or avoid) potential future risks. This viewpoint acknowledges the importance of people making informed decisions about accepting or acting in light of potential future risks. It should be noted that people's decisions are influenced by what they know about a technology and by their perceptions (what they do not know but believe). Hence, our research aims to explore the theoretical perspectives on organizations' communication responsibilities and the actual practices employed by these entities. This choice stems from the apparent gap in the literature concerning responsible communication and the necessity to examine the topic with an emphasis on practical considerations, in order to further define modes of organizational communication for analysis and proactive action when communicating about emerging technologies such as AI.
When an organization communicates about an emerging technology, elements focusing on the responsibility of sharing information can be found in the literature, but none on the responsibility (seen as ethical behavior) of an organization regarding the impact of what is communicated on the decision-making process. Some responsibility is linked to corporate social responsibility (CSR), but the focus remains on the information. We propose a concept that addresses the intersection of three fields: emerging technologies, organizational communication, and risk governance, namely Responsible Organizational Communication on Emerging Technologies (ROCET), to address responsibility for what is communicated as a form of ethical behavior. We aim to delve into this concept by bridging the divide between theory and practice, examining both simultaneously to garner a comprehensive understanding. This approach helps construct an understanding that meets halfway, building on knowledge accumulated from both areas. Therefore, two analyses are conducted in parallel: a critical literature review around the "responsible communication" concept and a discourse analysis of standalone reports published by governmental bodies regarding the use of a specific emerging technology, namely artificial intelligence (AI). Using a single-case analysis approach, the analysis aims to problematize an organization's communication as public discourse while challenging such constitutions by exploring models of responsible communication. There is a gap in the literature in referring to this term as this research does. The literature focuses either on communication conducted by organizations as part of their corporate responsibility strategy or on a communication theory perspective, concentrating on how to convey a message effectively.
Alternatively, it looks at the matter from the emerging technologies perspective, where the focus is on communicating information about the technology.
Voarino, Nathalie. "Systèmes d’intelligence artificielle et santé : les enjeux d’une innovation responsable". Thèse, 2019. http://hdl.handle.net/1866/23526.
The use of artificial intelligence (AI) systems in health is part of the advent of a new "high definition" medicine that is predictive, preventive, and personalized, benefiting from the unprecedented amount of data available today. At the heart of digital health innovation, the development of AI systems promises to lead to an interconnected and self-learning healthcare system. AI systems could thus help to redefine the classification of diseases, generate new medical knowledge, or predict the health trajectories of individuals for prevention purposes. Today, various applications in healthcare are being considered, ranging from assistance to medical decision-making through expert systems to precision medicine (e.g. pharmacological targeting), as well as individualized prevention through health trajectories developed on the basis of biological markers. However, urgent ethical concerns emerge with the increasing use of algorithms to analyze a growing amount of health-related data (often personal and sensitive) as well as the reduction of human intervention in many automated processes. From the limitations of big data analysis, the need for data sharing, and the 'opacity' of algorithmic decisions stem various ethical concerns relating to the protection of privacy and intimacy, free and informed consent, social justice, dehumanization of care and patients, and/or security. To address these challenges, many initiatives have focused on defining and applying principles for the ethical governance of AI. However, the operationalization of these principles faces various difficulties inherent to applied ethics, which originate either from the scope (universal or plural) of these principles or from the way these principles are put into practice (inductive or deductive methods). These issues can be addressed with context-specific or bottom-up approaches to applied ethics. However, people who embrace these approaches still face several challenges.
From an analysis of citizens' fears and expectations emerging from the discussions that took place during the co-construction of the Montreal Declaration for a Responsible Development of AI, it is possible to get a sense of what these difficulties look like. From this analysis, three main challenges emerge: the incapacitation of health professionals and patients, the many hands problem, and artificial agency. These challenges call for AI systems that empower people and maintain human agency, in order to foster the development of (pragmatic) shared responsibility among the various stakeholders involved in the development of healthcare AI systems. Meeting these challenges is essential in order to adapt existing governance mechanisms and enable the development of responsible digital innovation in healthcare and research that keeps human beings at the center of its development.
Book chapters on the topic "Intelligence artificielle responsable"
Richer, Iris, and Clémence Varin. "19 - Intégrer la diversité culturelle dans les instruments relatifs à l’encadrement de l’IA : vers une technologie culturellement responsable?" In Intelligence artificielle, culture et médias, 405–28. Les Presses de l’Université de Laval, 2024. http://dx.doi.org/10.1515/9782763758787-021.
BOUCAUD, Pascale. "Protection de la liberté et de la fragilité de la personne face au robot". In Intelligence(s) artificielle(s) et Vulnérabilité(s) : kaléidoscope, 137–48. Editions des archives contemporaines, 2020. http://dx.doi.org/10.17184/eac.3642.
Organization reports on the topic "Intelligence artificielle responsable"
Gautrais, Vincent, Anne Tchiniaev, and Émilie Guiraud. Guide des bonnes pratiques en intelligence artificielle : sept principes pour une utilisation responsable des données. Observatoire international sur les impacts sociétaux de l'IA et du numérique, February 2023. http://dx.doi.org/10.61737/tuac9741.
Mörch, Carl-Maria, Pascale Lehoux, Marc-Antoine Dilhac, Catherine Régis, and Xavier Dionne. Recommandations pratiques pour une utilisation responsable de l’intelligence artificielle en santé mentale en contexte de pandémie. Observatoire international sur les impacts sociétaux de l’intelligence artificielle et du numérique, December 2020. http://dx.doi.org/10.61737/mqaf7428.