Selected scientific literature on the topic "Innovation Numérique Responsable"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current relevant articles, books, theses, conference proceedings, and other scientific sources on the topic "Innovation Numérique Responsable".
Next to each source in the list of references, there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
Journal articles on the topic "Innovation Numérique Responsable"
Belhadj Hacen, Abdelhamid, and Denis Legros. "L'enseignement du français et de l'arabe à l'ère de la mondialisation". Diversité 164, no. 1 (2011): 69–72. http://dx.doi.org/10.3406/diver.2011.3405.
El Ganbour, Rachid, Samira Elouelji, Morad El Ganbour, and Kawtar Tahmoun. "L’innovation frugale en éducation : Un système innovant, responsable et inclusif". Médiations et médiatisations, no. 16 (October 30, 2023): 95–112. http://dx.doi.org/10.52358/mm.vi16.363.
Hennebert, Marc-Antonin, Vincent Pasquier, and Christian Lévesque. "Les technologies numériques comme source de revitalisation démocratique : une étude auprès des responsables des communications d’organisations syndicales". Relations industrielles 76, no. 4 (2021): 684. http://dx.doi.org/10.7202/1086006ar.
Cissé, Aminata. "Problématique de la qualité de l’enseignement supérieur : enjeux et stratégies pour l’Afrique". Liens, revue internationale des sciences et technologies de l'éducation 1, no. 5 (December 5, 2023): 169–82. http://dx.doi.org/10.61585/pud-liens-v1n513.
Theses / dissertations on the topic "Innovation Numérique Responsable"
Haidar, Ahmad. "Responsible Artificial Intelligence : Designing Frameworks for Ethical, Sustainable, and Risk-Aware Practices". Electronic Thesis or Diss., université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASI008.pdf.
Artificial Intelligence (AI) is rapidly transforming the world, redefining the relationship between technology and society. This thesis investigates the critical need for responsible and sustainable development, governance, and usage of AI and Generative AI (GAI). The study addresses the ethical risks, regulatory gaps, and challenges associated with AI systems while proposing actionable frameworks for fostering Responsible Artificial Intelligence (RAI) and Responsible Digital Innovation (RDI).

The thesis begins with a comprehensive review of 27 global AI ethical declarations to identify dominant principles such as transparency, fairness, accountability, and sustainability. Despite their significance, these principles often lack the necessary tools for practical implementation. To address this gap, the second study in the research presents an integrative framework for RAI based on four dimensions: technical, AI for sustainability, legal, and responsible innovation management.

The third part of the thesis focuses on RDI through a qualitative study of 18 interviews with managers from diverse sectors. Five key dimensions are identified: strategy, digital-specific challenges, organizational KPIs, end-user impact, and catalysts. These dimensions enable companies to adopt sustainable and responsible innovation practices while overcoming obstacles in implementation.

The fourth study analyzes emerging risks from GAI, such as misinformation, disinformation, bias, privacy breaches, environmental concerns, and job displacement. Using a dataset of 858 incidents, this research employs binary logistic regression to examine the societal impact of these risks. The results highlight the urgent need for stronger regulatory frameworks, corporate digital responsibility, and ethical AI governance. Thus, this thesis provides critical contributions to the fields of RDI and RAI by evaluating ethical principles, proposing integrative frameworks, and identifying emerging risks. It emphasizes the importance of aligning AI governance with international standards to ensure that AI technologies serve humanity sustainably and equitably.
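As an illustration of the analysis mentioned in this abstract, the sketch below fits a binary logistic regression on synthetic data standing in for the 858-incident dataset; the predictor names, outcome coding, and effect sizes are assumptions made for illustration only, not the thesis's actual variables.

    # Illustrative sketch only: synthetic stand-in for the 858-incident dataset
    # described in the abstract. Predictors, outcome coding, and effect sizes
    # are assumptions, not the thesis's actual data or variables.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 858  # number of incidents mentioned in the abstract

    # Hypothetical binary risk indicators per incident
    # (e.g. misinformation, bias, privacy breach).
    X = rng.integers(0, 2, size=(n, 3))

    # Simulated outcome: 1 = severe societal impact (assumed coding).
    logits = -0.5 + X @ np.array([1.2, 0.8, 0.4])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

    # Binary logistic regression relating risk indicators to societal impact.
    model = LogisticRegression().fit(X, y)
    print("Estimated coefficients:", model.coef_)
    print("Intercept:", model.intercept_)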
Voarino, Nathalie. "Systèmes d’intelligence artificielle et santé : les enjeux d’une innovation responsable". Thèse, 2019. http://hdl.handle.net/1866/23526.
The use of artificial intelligence (AI) systems in health is part of the advent of a new "high definition" medicine that is predictive, preventive and personalized, benefiting from the unprecedented amount of data that is today available. At the heart of digital health innovation, the development of AI systems promises to lead to an interconnected and self-learning healthcare system. AI systems could thus help to redefine the classification of diseases, generate new medical knowledge, or predict the health trajectories of individuals for prevention purposes. Today, various applications in healthcare are being considered, ranging from assistance to medical decision-making through expert systems to precision medicine (e.g. pharmacological targeting), as well as individualized prevention through health trajectories developed on the basis of biological markers. However, urgent ethical concerns emerge with the increasing use of algorithms to analyze a growing amount of health-related data (often personal and sensitive) as well as the reduction of human intervention in many automated processes. From the limitations of big data analysis, the need for data sharing, and the ‘opacity’ of algorithmic decisions stem various ethical concerns relating to the protection of privacy and intimacy, free and informed consent, social justice, dehumanization of care and patients, and/or security. To address these challenges, many initiatives have focused on defining and applying principles for an ethical governance of AI. However, the operationalization of these principles faces various difficulties inherent to applied ethics, which originate either from the scope (universal or plural) of these principles or the way these principles are put into practice (inductive or deductive methods). These issues can be addressed with context-specific or bottom-up approaches to applied ethics. However, people who embrace these approaches still face several challenges. From an analysis of citizens' fears and expectations emerging from the discussions that took place during the co-construction of the Montreal Declaration for a Responsible Development of AI, it is possible to get a sense of what these difficulties look like. From this analysis, three main challenges emerge: the incapacitation of health professionals and patients, the many hands problem, and artificial agency. These challenges call for AI systems that empower people and allow human agency to be maintained, in order to foster the development of (pragmatic) shared responsibility among the various stakeholders involved in the development of healthcare AI systems. Meeting these challenges is essential in order to adapt existing governance mechanisms and enable the development of responsible digital innovation in healthcare and research that allows human beings to remain at the center of its development.
Conference papers on the topic "Innovation Numérique Responsable"
Chirilov, Ionela. "Tendances dans le domaine de l'entrepreneuriat social en Europe". In Simpozion stiintific al tinerilor cercetatori, editia 20. Academy of Economic Studies of Moldova, 2023. http://dx.doi.org/10.53486/9789975359023.18.
Reports by organizations on the topic "Innovation Numérique Responsable"
McAdams-Roy, Kassandra, Philippe Després, and Pierre-Luc Déziel. La gouvernance des données dans le domaine de la santé : Pour une fiducie de données au Québec ? Observatoire international sur les impacts sociétaux de l’intelligence artificielle et du numérique, February 2023. http://dx.doi.org/10.61737/nrvw8644.