Contents
A selection of scholarly literature on the topic "Intelligence artificielle responsable"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Intelligence artificielle responsable".
Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style of your choice (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract, whenever the relevant parameters are available in the metadata.
Journal articles on the topic "Intelligence artificielle responsable"
Barlow, Julie. "Comment les humains battront-ils les machines ?" Gestion 48, no. 3 (August 31, 2023): 84–88. http://dx.doi.org/10.3917/riges.483.0084.
de Saint-Affrique, Diane. "Intelligence artificielle et médecine : quelles règles éthiques et juridiques pour une IA responsable ?" Médecine & Droit 2022, no. 172 (February 2022): 5–7. http://dx.doi.org/10.1016/j.meddro.2021.09.001.
Boubacar, Soumaila. "Évaluation de la Rationalité de l'Intelligence Artificielle à l'Aide de la Formule de Soumaila : Une Perspective Neuro-mathématique et Éthique." Journal Africain des Cas Cliniques et Revues 8, no. 3 (July 10, 2024): 1–5. http://dx.doi.org/10.70065/24ja83.001l011007.
Droesbeke, Jean-Jacques. "Jean-Paul Almetti, Olivier Coppet, Gilbert Saporta, Manifeste pour une intelligence artificielle comprise et responsable." Statistique et société, no. 10 | 3 (December 1, 2022): 85–87. http://dx.doi.org/10.4000/statsoc.578.
Malafronte, Olivier. "Développement des leaders avec coaching et Intelligence Artificielle : augmentation des compétences." Management & Avenir 142, no. 4 (September 20, 2024): 39–59. http://dx.doi.org/10.3917/mav.142.0039.
Marcoux, Audrey, Marie-Hélène Tessier, Frédéric Grondin, Laetitia Reduron, and Philip L. Jackson. "Perspectives fondamentale, clinique et sociétale de l'utilisation des personnages virtuels en santé mentale." Santé mentale au Québec 46, no. 1 (September 21, 2021): 35–70. http://dx.doi.org/10.7202/1081509ar.
Desbois, Dominique. "Manifeste pour une intelligence artificielle comprise et responsable." Terminal, no. 134-135 (November 8, 2022). http://dx.doi.org/10.4000/terminal.8922.
Racoceanu, Daniel, Mehdi Ounissi, and Yannick L. Kergosien. "Explicabilité en Intelligence Artificielle ; vers une IA Responsable - Instanciation dans le domaine de la santé." Technologies logicielles Architectures des systèmes, December 2022. http://dx.doi.org/10.51257/a-v1-h5030.
Naully, Nicolas. "Kant et la morale des machines : programmer la « bonté » dans l'ère des modèles de langage." Management & Data Science, 2023. http://dx.doi.org/10.36863/mds.a.25661.
Dissertations on the topic "Intelligence artificielle responsable"
Belahcene, Khaled. "Explications pour l'agrégation des préférences — une contribution à l'aide à la décision responsable." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC101/document.
We consider providing a decision aiding process with tools aimed at complying with the demands of accountability. Decision makers, seeking support, provide preference information in the form of reference cases that illustrate their views on how to take conflicting points of view into account. The analyst, who provides the support, assumes a generic representation of reasoning with preferences and fits the aggregation procedure to the preference information. We assume a robust elicitation process, in which the recommendations stemming from the fitted procedure can be deduced from dialectical elements. We are therefore interested in solving an inverse problem concerning the model, and in deriving explanations that are, if possible, sound, complete, easy to compute, and easy to understand. We address two distinct forms of reasoning: one aimed at comparing pairs of alternatives with an additive value model, the other aimed at sorting alternatives into ordered categories with a noncompensatory model.
Ounissi, Mehdi. "Decoding the Black Box : Enhancing Interpretability and Trust in Artificial Intelligence for Biomedical Imaging - a Step Toward Responsible Artificial Intelligence." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS237.
In an era dominated by AI, its opaque decision-making, known as the "black box" problem, poses significant challenges, especially in critical areas like biomedical imaging where accuracy and trust are crucial. Our research focuses on enhancing AI interpretability in biomedical applications. We have developed a framework for analyzing biomedical images that quantifies phagocytosis in neurodegenerative diseases using time-lapse phase-contrast video microscopy. Traditional methods often struggle with rapid cellular interactions and with distinguishing cells from backgrounds, both critical for studying conditions like frontotemporal dementia (FTD). Our scalable, real-time framework features an explainable cell segmentation module that simplifies deep learning algorithms, enhances interpretability, and maintains high performance by incorporating visual explanations and simplifying the model. We also address issues in visual generative models, such as hallucinations in computational pathology, by using a unique encoder for Hematoxylin and Eosin staining coupled with multiple decoders. This method improves the accuracy and reliability of synthetic stain generation, employing innovative loss functions and regularization techniques that enhance performance and enable the precise synthetic stains crucial for pathological analysis. Our methodologies have been validated against several public benchmarks, showing top-tier performance. Notably, our framework distinguished between mutant and control microglial cells in FTD, providing new biological insights into this unproven phenomenon. Additionally, we introduced a cloud-based system that integrates complex models and provides real-time feedback, facilitating broader adoption and iterative improvements through pathologist insights.
The release of novel datasets, including video microscopy on microglial cell phagocytosis and a virtual staining dataset related to pediatric Crohn's disease, along with all source code, underscores our commitment to transparent, open scientific collaboration and advancement. Our research highlights the importance of interpretability in AI, advocating for technology that integrates seamlessly with user needs and ethical standards in healthcare. Enhanced interpretability allows researchers to better understand data and improve tool performance.
Haidar, Ahmad. "Responsible Artificial Intelligence : Designing Frameworks for Ethical, Sustainable, and Risk-Aware Practices." Electronic Thesis or Diss., université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASI008.pdf.
Artificial Intelligence (AI) is rapidly transforming the world, redefining the relationship between technology and society. This thesis investigates the critical need for responsible and sustainable development, governance, and usage of AI and Generative AI (GAI). The study addresses the ethical risks, regulatory gaps, and challenges associated with AI systems while proposing actionable frameworks for fostering Responsible Artificial Intelligence (RAI) and Responsible Digital Innovation (RDI). The thesis begins with a comprehensive review of 27 global AI ethical declarations to identify dominant principles such as transparency, fairness, accountability, and sustainability. Despite their significance, these principles often lack the tools necessary for practical implementation. To address this gap, the second study presents an integrative framework for RAI based on four dimensions: technical, AI for sustainability, legal, and responsible innovation management. The third part of the thesis focuses on RDI through a qualitative study of 18 interviews with managers from diverse sectors. Five key dimensions are identified: strategy, digital-specific challenges, organizational KPIs, end-user impact, and catalysts. These dimensions enable companies to adopt sustainable and responsible innovation practices while overcoming obstacles to implementation. The fourth study analyzes emerging risks from GAI, such as misinformation, disinformation, bias, privacy breaches, environmental concerns, and job displacement. Using a dataset of 858 incidents, this research employs binary logistic regression to examine the societal impact of these risks. The results highlight the urgent need for stronger regulatory frameworks, corporate digital responsibility, and ethical AI governance.
Thus, this thesis provides critical contributions to the fields of RDI and RAI by evaluating ethical principles, proposing integrative frameworks, and identifying emerging risks. It emphasizes the importance of aligning AI governance with international standards to ensure that AI technologies serve humanity sustainably and equitably.
Rusu, Anca. "Delving into AI discourse within EU institutional communications : empowering informed decision-making for tomorrow's tech by fostering responsible communication for emerging technologies." Electronic Thesis or Diss., Université Paris sciences et lettres, 2023. https://basepub.dauphine.fr/discover?query=%222023UPSLD029%22.
The proliferation of emerging technologies, defined as new technologies or new uses of old technologies (for example, artificial intelligence (AI)), presents both opportunities and challenges for society. These technologies promise to revolutionize various sectors, providing new efficiencies, capabilities, and insights, which makes them attractive to develop and use. However, their use also raises significant ethical, environmental, and social concerns. Organizations communicate through various modes, one of which is written discourse. Such discourse encompasses not only the structure of the message but also its content: the vocabulary (the structure) is used to express a specific point of view (the content). Within technology usage, there is a clear connection between communication and informed decision-making, as information about the technology (its form and substance) is spread through communication, which in turn supports well-informed decisions. This thesis adopts a risk governance approach, taking a preventive perspective to minimize (or avoid) potential future risks. This viewpoint acknowledges the importance of people making informed decisions about accepting or acting in light of potential future risks. Notably, people's decisions are influenced both by what they know about a technology and by their perceptions (what they do not know but believe). Hence, our research aims to explore the theoretical perspectives on organizations' communication responsibilities and the actual practices employed by these entities. This choice stems from the apparent gap in the literature concerning responsible communication and the necessity of examining the topic with an emphasis on practical considerations, in order to further define modes of organizational communication to analyze, and to take proactive action when communicating about emerging technologies such as AI.
When an organization communicates about an emerging technology, elements focusing on the responsibility of sharing information can be found in the literature, but none on the responsibility (seen as an ethical behavior) of an organization regarding the impact of what is communicated on the decision-making process. Some responsibility is linked to corporate social responsibility (CSR), but the focus remains on the information. We propose a concept at the intersection of three fields, emerging technologies, organizational communication, and risk governance: Responsible Organizational Communication on Emerging Technologies (ROCET), which addresses the responsibility for what is communicated as an ethical behavior. We aim to delve into the concept by bridging the divide between theory and practice, examining both simultaneously to garner a comprehensive understanding. This approach will help construct an understanding that meets halfway, building on knowledge accumulated from both areas. Therefore, two analyses are conducted in parallel: a critical literature review of the "responsible communication" concept and a discourse analysis of standalone reports published by governmental bodies regarding the use of a specific emerging technology, namely AI. Using a single-case approach, the analysis aims to problematize an organization's communication as public discourse, while challenging such constitutions by exploring models of responsible communication. There is a gap in the literature in referring to this term as this research does. The literature focuses either on communication conducted by organizations as part of their corporate responsibility strategy, or on a communication theory perspective concentrating on how to convey a message effectively.
Alternatively, it looks at the matter from the emerging technologies perspective, where the focus is on communicating information about the technology.
Voarino, Nathalie. "Systèmes d'intelligence artificielle et santé : les enjeux d'une innovation responsable." Thesis, 2019. http://hdl.handle.net/1866/23526.
The use of artificial intelligence (AI) systems in health is part of the advent of a new "high definition" medicine that is predictive, preventive, and personalized, benefiting from the unprecedented amount of data available today. At the heart of digital health innovation, the development of AI systems promises to lead to an interconnected and self-learning healthcare system. AI systems could thus help to redefine the classification of diseases, generate new medical knowledge, or predict the health trajectories of individuals for prevention purposes. Today, various applications in healthcare are being considered, ranging from assistance with medical decision-making through expert systems to precision medicine (e.g. pharmacological targeting), as well as individualized prevention through health trajectories developed on the basis of biological markers. However, urgent ethical concerns emerge with the increasing use of algorithms to analyze a growing amount of health-related data (often personal and sensitive), as well as the reduction of human intervention in many automated processes. From the limitations of big data analysis, the need for data sharing, and the "opacity" of algorithmic decisions stem various ethical concerns relating to the protection of privacy and intimacy, free and informed consent, social justice, dehumanization of care and patients, and security. To address these challenges, many initiatives have focused on defining and applying principles for an ethical governance of AI. However, the operationalization of these principles faces various difficulties inherent to applied ethics, which originate either from the scope (universal or plural) of these principles or from the way these principles are put into practice (inductive or deductive methods). These issues can be addressed with context-specific or bottom-up approaches to applied ethics. However, people who embrace these approaches still face several challenges.
From an analysis of citizens' fears and expectations emerging from the discussions that took place during the co-construction of the Montreal Declaration for a Responsible Development of AI, it is possible to get a sense of what these difficulties look like. Three main challenges emerge from this analysis: the incapacitation of health professionals and patients, the many hands problem, and artificial agency. These challenges call for AI systems that empower people and that maintain human agency, in order to foster the development of (pragmatic) shared responsibility among the various stakeholders involved in the development of healthcare AI systems. Meeting these challenges is essential in order to adapt existing governance mechanisms and enable the development of responsible digital innovation in healthcare and research that keeps human beings at the center of its development.
Book chapters on the topic "Intelligence artificielle responsable"
Richer, Iris, and Clémence Varin. "19 - Intégrer la diversité culturelle dans les instruments relatifs à l'encadrement de l'IA : vers une technologie culturellement responsable?" In Intelligence artificielle, culture et médias, 405–28. Les Presses de l'Université de Laval, 2024. http://dx.doi.org/10.1515/9782763758787-021.
Boucaud, Pascale. "Protection de la liberté et de la fragilité de la personne face au robot." In Intelligence(s) artificielle(s) et Vulnérabilité(s) : kaléidoscope, 137–48. Editions des archives contemporaines, 2020. http://dx.doi.org/10.17184/eac.3642.
Organizational reports on the topic "Intelligence artificielle responsable"
Gautrais, Vincent, Anne Tchiniaev, and Émilie Guiraud. Guide des bonnes pratiques en intelligence artificielle : sept principes pour une utilisation responsable des données. Observatoire international sur les impacts sociétaux de l'IA et du numérique, February 2023. http://dx.doi.org/10.61737/tuac9741.
Mörch, Carl-Maria, Pascale Lehoux, Marc-Antoine Dilhac, Catherine Régis, and Xavier Dionne. Recommandations pratiques pour une utilisation responsable de l'intelligence artificielle en santé mentale en contexte de pandémie. Observatoire international sur les impacts sociétaux de l'intelligence artificielle et du numérique, December 2020. http://dx.doi.org/10.61737/mqaf7428.