Academic literature on the topic "IA éthique"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Browse thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "IA éthique".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "IA éthique"
El Ganbour, Morad, and Saida Belouali. "Élaboration et validation d’un référentiel éthique de l’intelligence artificielle en Éducation : cas du contexte marocain". Médiations et médiatisations, no. 18 (October 30, 2024): 125–47. http://dx.doi.org/10.52358/mm.vi18.403.
Boubacar, Soumaila. "Évaluation de la Rationalité de l'Intelligence Artificielle à l'Aide de la Formule de Soumaila : Une Perspective Neuro-mathématique et Éthique". Journal Africain des Cas Cliniques et Revues 8, no. 3 (July 10, 2024): 1–5. http://dx.doi.org/10.70065/24ja83.001l011007.
Messaoudi, Aïssa. "Les défis de l’IA dans l’éducation : de la protection des données aux biais algorithmiques". Médiations et médiatisations, no. 18 (October 30, 2024): 148–60. http://dx.doi.org/10.52358/mm.vi18.409.
Goulenok, Cyril, Marc Grassin, Robin Cremer, Julien Duvivier, Caroline Hauw-Berlemont, Mercé Jourdain, Antoine Lafarge et al. "Introduction aux enjeux éthiques de l’Intelligence Artificielle en réanimation". Médecine Intensive Réanimation 33, Hors-série 1 (June 11, 2024): 101–10. http://dx.doi.org/10.37051/mir-00211.
Tremblay, Diane-Gabrielle, Valérie Psyché, and Amina Yagoubi. "mise en œuvre de l’IA dans les organisations est-elle compatible avec une société éthique?" Ad machina, no. 7 (December 18, 2023): 166–87. http://dx.doi.org/10.1522/radm.no7.1663.
Audran, Jacques. "Cinq enjeux d’évaluation face à l’émergence des IA génératives en éducation". Mesure et évaluation en éducation 47, no. 1 (2024): 6–26. http://dx.doi.org/10.7202/1114564ar.
Mansouri, Mohamed. "L’intelligence artificielle et la publicité : quelle éthique ?" Annales des Mines - Enjeux numériques N° 1, no. 1 (January 24, 2018): 53–58. http://dx.doi.org/10.3917/ennu.001.0053.
Mayol, Samuel. "Du Tempo au Big Data : la symphonie du Marketing 6.0". Question(s) de management 47, no. 6 (December 20, 2023): 89–94. http://dx.doi.org/10.3917/qdm.227.0089.
Fodor, Sylvie. "Un cadre juridique pour une intelligence artificielle éthique : le règlement IA (RIA ou AI Act)". I2D - Information, données & documents 242, no. 2 (November 26, 2024): 79–81. https://doi.org/10.3917/i2d.242.0079.
Bellon, Anne, and Julia Velkovska. "L’intelligence artificielle dans l’espace public : du domaine scientifique au problème public". Réseaux N° 240, no. 4 (September 21, 2023): 31–70. http://dx.doi.org/10.3917/res.240.0031.
Dissertations / Theses on the topic "IA éthique"
Hizam, Meriem. "Exploration de la contribution des outils d'intelligence artificielle à la prise de décision en organisation publique et privée : comparaison et caractérisation". Electronic Thesis or Diss., Nantes Université, 2024. http://www.theses.fr/2024NANU3013.
Artificial Intelligence (AI), although firmly rooted in the technological landscape for several decades, derives its remarkable advancements from three fundamental pillars: computing power, algorithmic sophistication, and the exponential abundance of data. However, its rapid rise has often been accompanied by a limited understanding of its true technical potential, inadequate ethical oversight, and a lack of evaluation regarding the relevance of its application. These shortcomings, combined with other factors, partially explain the alarming failure rates of AI projects, which reach 85% according to Gartner. It thus appears that AI is not a universal solution to all organizational and social challenges. Our doctoral thesis aims to explore the strategic and ethical integration of AI to maximize its relevance while minimizing associated risks. Although we are still in the era of narrow AI, due to technical, philosophical, and epistemological challenges hindering the emergence of general AI, our research seeks to promote a more relevant and ethically sound integration of AI systems. Our investigations have enabled us to identify a taxonomy of factors contributing to the failure of AI projects. Two fundamental dimensions have been prioritized for our subsequent work: a) Strategic: a preliminary iteration of the Ad Hoc Augmented Decision Theory, designed to assess the relevance of deploying AI in organizational decision-making processes. b) Ethical: a maturity matrix for evaluating the ethical robustness of AI systems, helping organizations to position themselves and assess their ethical responsibility and compliance. Conducted within the CIFRE framework, this thesis confronts our theoretical models with organizational realities. Our objectives are twofold: to provide organizations with pragmatic tools for informed decision-making and to enrich the scientific debate on the ethical and strategic issues surrounding AI. Finally, our work is based on a three-dimensional logic integrating technical, human-social, and organizational dimensions, to better coordinate human and artificial intelligence for optimized decision-making.
Souverain, Thomas. "Comment l'IA nous classe : expliquer l'IA dans les services financiers et la recherche d'emploi, des solutions techniques aux enjeux éthiques". Electronic Thesis or Diss., Université Paris sciences et lettres, 2024. http://www.theses.fr/2024UPSLE010.
This thesis explores the notion of explainable AI (XAI), involving data science and philosophy. Thanks to industrial partnerships, we focus on two use cases where AI classifications are deployed and impact humans on a daily basis: loan granting (partnership with the company DreamQuark) and warnings on illegal job offers (partnership with the French Employment Agency, France Travail). We begin with a critical lens on the techniques used to explain AI classifications (the what of explanation). Building our own package, we then examine how classification bias can be mitigated in loan lending to obtain a fair outcome (the what for of explanation). Finally, we empirically investigate how users of AI interact with algorithmic recommendations, and how these interactions can be optimized (for whom to explain). We argue that explainability solutions are merely the basis of a description which must incorporate causality, fairness, and intelligibility in order to effectively explain AI.
Taheri, Sojasi Yousef. "Modeling automated legal and ethical compliance for trustworthy AI". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS225.
The advancements in artificial intelligence have led to significant legal and ethical issues related to privacy, bias, accountability, etc. In recent years, many regulations have been put in place to limit or mitigate the risks associated with AI. Compliance with these regulations is necessary for the reliability of AI systems and to ensure that they are being used responsibly. In addition, reliable AI systems should also be ethical, ensuring alignment with ethical norms. Compliance with applicable laws and adherence to ethical principles are essential for most AI applications. We investigate this problem from the point of view of AI agents: in other words, how an agent can ensure the compliance of its actions with legal and ethical norms. We are interested in approaches based on logical reasoning to integrate legal and ethical compliance in the agent's planning process. The specific domain in which we pursue our objective is the processing of personal data, i.e., the agent's actions involve the use and processing of personal data. A regulation that applies in such a domain is the General Data Protection Regulation (GDPR). In addition, processing of personal data may entail certain ethical risks with respect to privacy or bias. We address this issue through a series of contributions presented in this thesis. We start with the issue of GDPR compliance. We adopt Event Calculus with Answer Set Programming (ASP) to model agents' actions and use it for planning and checking compliance with the GDPR. A policy language is used to represent the GDPR obligations and requirements. Then we investigate the issue of ethical compliance. A pluralistic ordinal utility model is proposed that allows one to evaluate actions based on moral values. This model is based on multiple criteria and uses voting systems to aggregate evaluations on an ordinal scale. We then integrate this utility model and the legal compliance framework in a Hierarchical Task Network (HTN) planner. In this contribution, legal norms are treated as hard constraints and ethical norms as soft constraints. Finally, we further explore the possible combinations of legal and ethical compliance with the planning agent and propose a unified framework. This framework captures the interaction and conflicts between legal and ethical norms and is tested in a use case with AI systems managing the delivery of health care items.
Delarue, Simon. "Learning on graphs : from algorithms to socio-technical analyses on AI". Electronic Thesis or Diss., Institut polytechnique de Paris, 2025. http://www.theses.fr/2025IPPAT004.
This thesis addresses the dual challenge of advancing Artificial Intelligence (AI) methods while critically assessing their societal impact. With AI technologies now embedded in high-stakes decision sectors like healthcare and justice, their growing influence demands thorough examination, reflected in emerging international regulations such as the AI Act in Europe. To address these challenges, this work leverages attributed-graph based methods and advocates for a shift from performance-focused AI models to approaches that also prioritise scalability, simplicity, and explainability. The first part of this thesis develops a toolkit of attributed graph-based methods and algorithms aimed at enhancing AI learning techniques. It includes a software contribution that leverages the sparsity of complex networks to reduce computational costs. Additionally, it introduces non-neural graph models for node classification and link prediction tasks, showing how these methods can outperform advanced neural networks while being more computationally efficient. Lastly, it presents a novel pattern mining algorithm that generates concise, human-readable summaries of large networks. Together, these contributions highlight the potential of these approaches to provide efficient and interpretable solutions to AI's technical challenges. The second part adopts an interdisciplinary approach to study AI as a socio-technical system. By framing AI as an ecosystem influenced by various stakeholders and societal concerns, it uses graph-based models to analyse interactions and tensions related to explainability, ethics, and environmental impact. A user study explores the influence of graph-based explanations on user perceptions of AI recommendations, while the building and analysis of a corpus of AI ethics charters and manifestos quantifies the roles of key actors in AI governance. A final study reveals that environmental concerns in AI are primarily framed technically, highlighting the need for a broader approach to the ecological implications of digitalisation.
Mainsant, Marion. "Apprentissage continu sous divers scénarios d'arrivée de données : vers des applications robustes et éthiques de l'apprentissage profond". Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALS045.
The human brain continuously receives information from external stimuli. It then has the ability to adapt to new knowledge while retaining past events. Nowadays, more and more artificial intelligence algorithms aim to learn knowledge in the same way as a human being. They therefore have to be able to adapt to a large variety of data arriving sequentially and available over a limited period of time. However, when a deep learning algorithm learns new data, the new knowledge overwrites the old one in the neural network and the majority of the past information is lost, a phenomenon referred to in the literature as catastrophic forgetting. Numerous methods have been proposed to overcome this issue, but as they were focused on providing the best performance, these studies have moved away from real-life applications, where algorithms need to adapt to changing environments and perform no matter the type of data arrival. In addition, most of the best state-of-the-art methods are replay methods, which retain a small memory of the past and consequently do not preserve data privacy. In this thesis, we propose to explore data arrival scenarios existing in the literature, with the aim of applying them to facial emotion recognition, which is essential for human-robot interactions. To this end, we present Dream Net - Data-Free, a privacy-preserving algorithm able to adapt to a large number of data arrival scenarios without storing any past samples. After demonstrating the robustness of this algorithm compared to existing state-of-the-art methods on standard computer vision databases (Mnist, Cifar-10, Cifar-100 and Imagenet-100), we show that it can also adapt to more complex facial emotion recognition databases. We then propose to embed the algorithm on an Nvidia Jetson Nano card, creating a demonstrator able to learn and predict emotions in real time. Finally, we discuss the relevance of our approach for bias mitigation in artificial intelligence, opening up perspectives towards a more ethical AI.
Book chapters on the topic "IA éthique"
Jonas, Sylvie, and Françoise Lamnabhi-Lagarrigue. "Éthique et responsabilité des systèmes industriels cyber-physiques". In Digitalisation et contrôle des systèmes industriels cyber-physiques, 317–33. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9085.ch16.
"Autres titres parus dans la collection Éthique IA et société". In Données personnelles : reprenons le pouvoir!, 73–74. Les Presses de l’Université de Laval, 2024. http://dx.doi.org/10.1515/9782766303809-017.
Boucaud, Pascale. "Protection de la liberté et de la fragilité de la personne face au robot". In Intelligence(s) artificielle(s) et Vulnérabilité(s) : kaléidoscope, 137–48. Editions des archives contemporaines, 2020. http://dx.doi.org/10.17184/eac.3642.
Baillargeat, Dominique. "Intelligence Artificielle et villes intelligentes". In Algorithmes et Société, 37–46. Editions des archives contemporaines, 2021. http://dx.doi.org/10.17184/eac.4544.
Pichenot, Évelyne. "Société civile européenne et enjeux des algorithmes". In Algorithmes et Société, 141–54. Editions des archives contemporaines, 2021. http://dx.doi.org/10.17184/eac.4621.
Reports on the topic "IA éthique"
Hulin, Anne-Sophie. Enjeux sociétaux de l'IA 101 : un guide pour démystifier les enjeux éthiques et juridiques des systèmes d’IA. Observatoire international sur les impacts sociétaux de l'intelligence artificielle et du numérique, August 2024. http://dx.doi.org/10.61737/nneu6499.
Anne, Abdoulaye, Elisa Gagnon, Esli Osmanlliu, Esma Aïmeur, Florent Michelot, Florie Brangé, Georges-Philippe Gadoury-Sansfaçon et al. Abécédaire de l’IA. Observatoire international sur les impacts sociétaux de l'intelligence artificielle et du numérique, September 2024. http://dx.doi.org/10.61737/bgjn7670.
Plusquellec, Pierrich, Lesly Nzeusseu Kouamou, Alexandre Alle, Cynthia Chassigneux, Antoine Congost, Marie-Pierre Cossette, Abdoulaye Baniré Diallo et al. Action IA : ensemble pour le développement et l’adoption responsable dans l’industrie - Synthèse de la journée. Observatoire international sur les impacts sociétaux de l'IA et du numérique, February 2025. https://doi.org/10.61737/ubll6547.
Langlois, Lyse, Marc-Antoine Dilhac, Jim Dratwa, Thierry Ménissier, Jean-Gabriel Ganascia, Daniel Weinstock, Luc Bégin, and Allison Marchildon. L'éthique au cœur de l'IA. Observatoire international sur les impacts sociétaux de l’intelligence artificielle et du numérique, October 2023. http://dx.doi.org/10.61737/mdhp6080.
Naffi, Nadia, Chris Isaac Larnder, Viviane Vallerand, and Simon Duguay. Éduquer contre la désinformation amplifiée par l’IA et l’hypertrucage : une recension d’initiatives de 2018 à 2024. Observatoire international sur les impacts sociétaux de l'intelligence artificielle et du numérique, October 2024. http://dx.doi.org/10.61737/qchr7582.
Jacob, Steve, and Sébastien Brousseau. L’IA dans le secteur public : cas d’utilisation et enjeux éthiques. Observatoire international sur les impacts sociétaux de l'IA et du numérique, May 2024. http://dx.doi.org/10.61737/fcxm4981.
Langlois, Lyse, Justin Lawarée, and Aude Marie Marcoux. Travaux exploratoires pour le développement d’un outil d’évaluation des impacts sociétaux de l’IA et du numérique. Observatoire international sur les impacts sociétaux de l'IA et du numérique, September 2022. http://dx.doi.org/10.61737/mnnj5598.
Dilhac, Marc-Antoine, Vincent Mai, Carl-Maria Mörch, Pauline Noiseau, and Nathalie Voarino. Penser l’intelligence artificielle responsable : un guide de délibération. Observatoire international sur les impacts sociétaux de l'IA et du numérique, March 2020. http://dx.doi.org/10.61737/nicj7555.
Mörch, Carl-Maria, Pascale Lehoux, Marc-Antoine Dilhac, Catherine Régis, and Xavier Dionne. Recommandations pratiques pour une utilisation responsable de l’intelligence artificielle en santé mentale en contexte de pandémie. Observatoire international sur les impacts sociétaux de l’intelligence artificielle et du numérique, December 2020. http://dx.doi.org/10.61737/mqaf7428.