A selection of scholarly literature on the topic "Ethical AI Principles"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Ethical AI Principles".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Ethical AI Principles"

1

Siau, Keng, and Weiyu Wang. "Artificial Intelligence (AI) Ethics." Journal of Database Management 31, no. 2 (April 2020): 74–87. http://dx.doi.org/10.4018/jdm.2020040105.

Abstract:
Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of "machine ethics" was proposed around 2006, AI ethics is still in the infancy stage. AI ethics is the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How can one adhere to the ethics of AI to build ethical AI?
2

Mittelstadt, Brent. "Principles alone cannot guarantee ethical AI." Nature Machine Intelligence 1, no. 11 (November 2019): 501–7. http://dx.doi.org/10.1038/s42256-019-0114-4.

3

Guillén, Andrea, and Emma Teodoro. "Embedding Ethical Principles into AI Predictive Tools for Migration Management in Humanitarian Action." Social Sciences 12, no. 2 (January 18, 2023): 53. http://dx.doi.org/10.3390/socsci12020053.

Abstract:
AI predictive tools for migration management in the humanitarian field can significantly aid humanitarian actors in augmenting their decision-making capabilities and improving the lives and well-being of migrants. However, the use of AI predictive tools for migration management also poses several risks. Making humanitarian responses more effective using AI predictive tools cannot come at the expense of jeopardizing migrants' rights, needs, and interests. Against this backdrop, embedding AI ethical principles into AI predictive tools for migration management becomes paramount. AI ethical principles must be embedded into the design, development, and deployment stages of these AI predictive tools to mitigate risks. Current guidelines to apply AI ethical frameworks contain high-level ethical principles that are not specified concretely enough to act on. For AI ethical principles to have real impact, they must be translated into low-level technical and organizational measures to be adopted by those designing and developing AI tools. The context-specificity of AI tools implies that different contexts raise different ethical challenges to be considered. Therefore, the problem of how to operationalize AI ethical principles in AI predictive tools for migration management in the humanitarian field remains unresolved. To this end, eight ethical requirements are presented, with their corresponding safeguards to be implemented at the design and development stages of AI predictive tools for humanitarian action, with the aim of operationalizing AI ethical principles and mitigating the inherent risks.
4

Shukla, Shubh. "Principles Governing Ethical Development and Deployment of AI." International Journal of Engineering, Business and Management 8, no. 2 (2024): 26–46. http://dx.doi.org/10.22161/ijebm.8.2.5.

Abstract:
The ethical development and deployment of artificial intelligence (AI) is a rapidly evolving field with significant implications for society. This paper delves into the multifaceted ethical considerations surrounding AI, emphasising the importance of transparency, accountability, and privacy. By conducting a comprehensive review of existing literature and case studies, it highlights key ethical issues such as bias in AI algorithms, privacy concerns, and the societal impact of AI technologies. The study underscores the necessity for robust governance frameworks and international collaboration to address these ethical challenges effectively. It explores the need for ongoing ethical evaluation as AI technologies advance, particularly in autonomous systems. The paper emphasises the importance of integrating ethical principles into AI design from the outset, fostering sustainable practices, and raising awareness through education. Furthermore, the paper examines current regulatory frameworks across various regions, comparing their effectiveness in promoting ethical AI practices. The findings suggest a global consensus on key ethical principles, though their implementation varies widely. By proposing strategies to ensure responsible AI innovation and mitigate risks, this research contributes to the ongoing discourse on the future of AI ethics, aiming to guide the development of AI technologies that uphold human dignity and contribute to the common good.
5

Prathomwong, Piyanat, and Pagorn Singsuriya. "Ethical Framework of Digital Technology, Artificial Intelligence, and Health Equity." Asia Social Issues 15, no. 5 (June 6, 2022): 252136. http://dx.doi.org/10.48048/asi.2022.252136.

Abstract:
The extensive use of digital technology and artificial intelligence (AI) is evident in healthcare. Although one aim of technological development and application is to promote health equity, it can at the same time increase health disparities. An ethical framework is needed to analyze issues arising in the effort to promote health equity through digital technology and AI. Based on an analysis of ethical principles for the promotion of health equity, this research article aims to synthesize an ethical framework for analyzing issues related to the promotion of health equity through digital technology and AI. Results of the study showed a synthesized framework that comprises two main groups of ethical principles: general principles and principles of management. The latter is meant to serve the implementation of the former. The general principles comprise four core principles: Human Dignity, Justice, Non-maleficence, and Beneficence, covering major principles and minor principles. For example, the core principle of Human Dignity includes three major principles (Non-humanization, Privacy, and Autonomy), and two minor principles (Explicability and Transparency). Other core principles have their relevant major and minor principles. The principles of management can be categorized according to their goals to serve different core principles. An illustration of applying the ethical framework is offered through the analysis and categorization of issues solicited from experts in multidisciplinary workshops on digital technology, AI, and health equity.
6

Taddeo, Mariarosaria, David McNeish, Alexander Blanchard, and Elizabeth Edgar. "Ethical Principles for Artificial Intelligence in National Defence." Philosophy & Technology 34, no. 4 (October 13, 2021): 1707–29. http://dx.doi.org/10.1007/s13347-021-00482-3.

Abstract:
Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control and reliable AI systems—and related recommendations to foster ethically sound uses of AI for national defence purposes.
7

Kortz, Mason, Jessica Fjeld, Hannah Hilligoss, and Adam Nagy. "Is Lawful AI Ethical AI?" Morals & Machines 2, no. 1 (2022): 60–65. http://dx.doi.org/10.5771/2747-5174-2022-1-60.

Abstract:
Attempts to impose moral constraints on autonomous, artificial decision-making systems range from “human in the loop” requirements to specialized languages for machine-readable moral rules. Regardless of the approach, though, such proposals all face the challenge that moral standards are not universal. It is tempting to use lawfulness as a proxy for morality; unlike moral rules, laws are usually explicitly defined and recorded – and they are usually at least roughly compatible with local moral norms. However, lawfulness is a highly abstracted and, thus, imperfect substitute for morality, and it should be relied on only with appropriate caution. In this paper, we argue that law-abiding AI systems are a more achievable goal than moral ones. At the same time, we argue that it’s important to understand the multiple layers of abstraction, legal and algorithmic, that underlie even the simplest AI-enabled decisions. The ultimate output of such a system may be far removed from the original intention and may not comport with the moral principles to which it was meant to adhere. Therefore, caution is required lest we develop AI systems that are technically law-abiding but still enable amoral or immoral conduct.
8

Adeyelu, Oluwatobi Opeyemi, Chinonye Esther Ugochukwu, and Mutiu Alade Shonibare. "Ethical Implications of AI in Financial Decision-Making: A Review with Real World Applications." International Journal of Applied Research in Social Sciences 6, no. 4 (April 17, 2024): 608–30. http://dx.doi.org/10.51594/ijarss.v6i4.1033.

Abstract:
This study delves into the ethical implications of Artificial Intelligence (AI) in financial decision-making, exploring the transformative impact of AI technologies on the financial services sector. Through a comprehensive literature review, the research highlights the dual nature of AI's integration into finance, showcasing both its potential to enhance operational efficiency and decision accuracy and the ethical challenges it introduces. These challenges include concerns over data privacy, algorithmic bias, and the potential for systemic risks, underscoring the need for robust ethical frameworks and regulatory standards. The study emphasizes the importance of a multidisciplinary approach to AI development and deployment, advocating for collaboration among technologists, ethicists, policymakers, and end-users to ensure that AI technologies are aligned with societal values and ethical principles. Future directions for research are identified, focusing on the development of adaptive ethical guidelines, methodologies for embedding ethical principles into AI systems, and the investigation of AI's long-term impact on market dynamics and consumer behaviour. This research contributes valuable insights into the ethical integration of AI in finance, offering recommendations for ensuring that AI technologies are utilized in a manner that is both ethically sound and conducive to the advancement of the financial services industry. Keywords: Artificial Intelligence, Financial Decision-Making, Ethical Implications, Algorithmic Bias, Data Privacy, Regulatory Standards, Multidisciplinary Approach.
9

Lazăr (Cățeanu), Alexandra Maria, Angela Repanovici, Daniela Popa, Diana Geanina Ionas, and Ada Ioana Dobrescu. "Ethical Principles in AI Use for Assessment: Exploring Students' Perspectives on Ethical Principles in Academic Publishing." Education Sciences 14, no. 11 (November 12, 2024): 1239. http://dx.doi.org/10.3390/educsci14111239.

Abstract:
Students’ comprehension of ethical principles and their application in the realm of AI technology play a crucial role in shaping the efficacy and morality of assessment procedures. This study seeks to explore students’ viewpoints on ethical principles within the context of AI-driven assessment activities to illuminate their awareness, attitudes, and practices concerning ethical considerations in educational environments. A systematic review of articles on this topic was conducted using scientometric methods within the Web of Science database. This review identified a research gap in the specialized literature regarding studies that delve into students’ opinions. Subsequently, a questionnaire was administered to students at Transilvania University of Brasov as part of the Information Literacy course. Statistical analysis was performed on the obtained results. Ultimately, students expressed a desire for the Information Culture course to incorporate a module focusing on the ethical use of AI in academic publishing.
10

Rossi, Francesca, and Nicholas Mattei. "Building Ethically Bounded AI." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9785–89. http://dx.doi.org/10.1609/aaai.v33i01.33019785.

Abstract:
The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goal we have given them. Thus, a certain level of freedom to choose the best path to the goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our lives, whether AI is autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI's freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven example-based approach for both, or a symbolic rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation, but are tightly interconnected, e.g., the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically-bounded AI, we describe two concrete examples, and we outline some outstanding challenges.

Dissertations on the topic "Ethical AI Principles"

1

Hugosson, Beatrice, Donna Dinh, and Gabriella Esmerson. "Why you should care: Ethical AI principles in a business setting: A study investigating the relevancy of the Ethical framework for AI in the context of the IT and telecom industry in Sweden." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44236.

Abstract:
Background: The development of artificial intelligence (AI) is ever increasing, especially in the telecom and IT industry due to its great potential competitive advantage. However, AI is implemented at a fast pace in society with insufficient consideration for the ethical implications. Luckily, different initiatives and organizations are now launching ethical principles to prevent possible negative effects stemming from AI usage. One example is the Ethical Framework for AI by Floridi et al. (2018), who established five ethical principles for sustainable AI with inspiration from bioethics. Moreover, Sweden as a country is taking AI ethics seriously, since the government is on a mission to be the world leader in harnessing artificial intelligence. Problem: The research in the field of ethical artificial intelligence is increasing but is still in its infancy, where the majority of the academic articles are conceptual papers. Moreover, the few frameworks that exist for responsible AI are not always action-guiding and applicable to all AI applications and contexts. Purpose: This study aims to contribute with empirical evidence within the topic of artificial intelligence ethics and investigate the relevancy of an existing framework, namely the Ethical Framework for AI by Floridi et al. (2018), in the IT and telecom industry in Sweden. Method: A qualitative multiple-case study of ten semi-structured interviews with participants from the companies EVRY and Ericsson. The findings have later been connected to the literature within the field of artificial intelligence and ethics. Results: The most reasonable interpretation from the findings and analysis is that some parts of the framework are relevant, while others are not. Specifically, the principles of autonomy and non-maleficence seem to be applicable, while justice and explicability appear to be only partially supported by the participants, and beneficence is suggested not to be relevant for several reasons.
2

Haidar, Ahmad. "Responsible Artificial Intelligence: Designing Frameworks for Ethical, Sustainable, and Risk-Aware Practices." Electronic thesis or dissertation, Université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASI008.pdf.

Abstract:
Artificial Intelligence (AI) is rapidly transforming the world, redefining the relationship between technology and society. This thesis investigates the critical need for responsible and sustainable development, governance, and usage of AI and Generative AI (GAI). The study addresses the ethical risks, regulatory gaps, and challenges associated with AI systems while proposing actionable frameworks for fostering Responsible Artificial Intelligence (RAI) and Responsible Digital Innovation (RDI). The thesis begins with a comprehensive review of 27 global AI ethical declarations to identify dominant principles such as transparency, fairness, accountability, and sustainability. Despite their significance, these principles often lack the necessary tools for practical implementation. To address this gap, the second study in the research presents an integrative framework for RAI based on four dimensions: technical, AI for sustainability, legal, and responsible innovation management. The third part of the thesis focuses on RDI through a qualitative study of 18 interviews with managers from diverse sectors. Five key dimensions are identified: strategy, digital-specific challenges, organizational KPIs, end-user impact, and catalysts. These dimensions enable companies to adopt sustainable and responsible innovation practices while overcoming obstacles in implementation. The fourth study analyzes emerging risks from GAI, such as misinformation, disinformation, bias, privacy breaches, environmental concerns, and job displacement. Using a dataset of 858 incidents, this research employs binary logistic regression to examine the societal impact of these risks. The results highlight the urgent need for stronger regulatory frameworks, corporate digital responsibility, and ethical AI governance. Thus, this thesis provides critical contributions to the fields of RDI and RAI by evaluating ethical principles, proposing integrative frameworks, and identifying emerging risks. It emphasizes the importance of aligning AI governance with international standards to ensure that AI technologies serve humanity sustainably and equitably.
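The binary logistic regression mentioned in the abstract can be illustrated with a small self-contained sketch. The feature and labels below are synthetic stand-ins chosen for illustration, not the 858-incident dataset analysed in the thesis:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit P(y=1 | x) = sigmoid(w*x + b) on a 1-D feature by batch gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of the log-loss per sample
            grad_w += err * x
            grad_b += err
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Hypothetical feature: a severity score per reported GAI incident;
# label: whether the incident had broad societal impact (1) or not (0).
xs = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 1.8, 2.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(xs, ys)

print(f"P(impact | x=0.2) = {sigmoid(w * 0.2 + b):.3f}")
print(f"P(impact | x=1.9) = {sigmoid(w * 1.9 + b):.3f}")
```

The fitted model assigns a low probability to the low-severity case and a high probability to the high-severity one; the thesis applies the same model family with many real covariates rather than a single toy score.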
3

Beltran, Nicole. "Artificial Intelligence in Lethal Automated Weapon Systems - What's the Problem? Analysing the framing of LAWS in the EU ethics guidelines for trustworthy AI, the European Parliament Resolution on autonomous weapon systems and the CCW GGE guiding principles." Thesis, Uppsala universitet, Teologiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412188.

Abstract:
Lethal automated weapon systems (LAWS) are developed and deployed by a growing number of state and non-state actors, although no international legally binding framework exists as of yet. As a first attempt to regulate LAWS, the UN appointed a group of governmental experts (GGE) to create the guiding principles on the issue of LAWS. A few years later, the EU appointed an expert group to create the Ethics Guidelines for Trustworthy AI, and the European Parliament passed a resolution on the issue of LAWS. This thesis attempts to make visible the underlying norms and discourses that have shaped these guiding principles and guidelines. By scrutinizing the documents through the "What's the problem represented to be?" approach, the discursive practices that enable the framing are illuminated. The obscured problems not spoken of in the EU and UN documents are emphasised, suggesting that both sets of documents oversimplify and downplay the danger of LAWS, leaving issues such as gender repercussions, human dignity, and the dangers of the sophisticated weapons systems themselves largely unproblematised and hidden behind their suggested dichotomised and anthropocentric solutions, which largely result in a simple "add human and stir" kind of solution. The underlying cause of this tendency seems to stem from a general unwillingness of states to regulate, as LAWS are quickly becoming a matter of haves and have-nots and may potentially change warfare as we know it. A case can also be made that AI's "Hollywood problem" influences the framing of LAWS, where the dystopian Terminator-like depiction in popular culture can be seen reflected in international policy papers and statements.
4

Natrup, Simon. "The Landscape of Artificial Intelligence Ethics: Analysis of Developments, Challenges, and Comparison of Different Markets." Master's thesis, 2022. http://hdl.handle.net/10362/134702.

Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management
Artificial Intelligence has become a disruptive force in the everyday lives of billions of people worldwide, and the impact it has will only increase in the future. Be it an algorithm that knows precisely what we want before we are consciously aware of it, or a fully automated and weaponized drone that decides in a fraction of a second whether or not it may carry out a lethal strike: those algorithms are here to stay. Even if the world could come together and ban, e.g., algorithm-based weaponized systems, there would still be many systems that unintentionally harm individuals and whole societies. Therefore, we must think of AI with ethical considerations to mitigate the harm and bias of human design, especially with the data on which the machine consciousness is created. Although it may just be an algorithm for a simple automated task, like visual classification, the outcome can have discriminatory results with long-term consequences. This thesis explores the developments and challenges of Artificial Intelligence Ethics in different markets based on specific factors, aims to answer scientific questions, and seeks to raise new ones for future research. Furthermore, measurements and approaches for mitigating risks that lead to such harmful algorithmic decisions and identifying global differences in this field are the main objectives of this research.

Books on the topic "Ethical AI Principles"

1

Taddeo, Mariarosaria. The Ethics of Artificial Intelligence in Defence. Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780197745441.001.0001.

Abstract:
The volume establishes an ethical framework for the identification, analysis, and resolution of ethical challenges that arise from the uses of artificial intelligence (AI) in defence, ranging from intelligence analysis to cyberwarfare and autonomous weapon systems. It does so with the goal of advancing the relevant debate and informing the ethical governance of AI in defence. Centring on the autonomy and learning capabilities of AI technologies, the work is rooted in AI ethics and Just War Theory. It provides a systemic conceptual analysis of the different uses of AI in defence and their ethical implications, and proposes ethical principles and a methodology for their implementation in practice. It then translates this analysis into actionable recommendations for decision-makers and policymakers to foster ethical governance of AI in the defence sector.
2

Enemark, Christian, ed. Ethics of Drone Strikes. Edinburgh University Press, 2021. http://dx.doi.org/10.3366/edinburgh/9781474483575.001.0001.

Abstract:
This collection of essays explores a variety of ways of thinking ethically about drone violence. The violent use of armed, unmanned aircraft (‘drones’) is increasing worldwide, but uncertainty persists about the moral status of remote-control killing and why it should be restrained. Practitioners, observers and potential victims of such violence often struggle to reconcile it with traditional expectations about the nature of war and the risk to combatants. Addressing the ongoing policy concern that the state use of drone violence is sometimes poorly understood and inadequately governed, the book’s ethical assessments are not restricted to the application of traditional Just War principles. They also consider the ethics of artificial intelligence (AI), virtue ethics, and guiding principles for forceful law-enforcement. The collection brings together nine original contributions by established and emerging scholars, incorporating expertise in military ethics, critical military studies, gender, history, international law and international relations, in order to better assess the multi-faceted relationship between drone violence and justice.
3

DiMatteo, Larry A., Cristina Poncibò, and Michel Cannarsa, eds. The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, 2022. http://dx.doi.org/10.1017/9781009072168.

Abstract:
The technology and application of artificial intelligence (AI) throughout society continues to grow at unprecedented rates, which raises numerous legal questions that to date have been largely unexamined. Although AI now plays a role in almost all areas of society, the need for a better understanding of its impact, from legal and ethical perspectives, is pressing, and regulatory proposals are urgently needed. This book responds to these needs, identifying the issues raised by AI and providing practical recommendations for regulatory, technical, and theoretical frameworks aimed at making AI compatible with existing legal rules, principles, and democratic values. An international roster of authors including professors of specialized areas of law, technologists, and practitioners bring their expertise to the interdisciplinary nature of AI.

Book chapters on the topic "Ethical AI Principles"

1

Nikolinakos, Nikos Th. „Ethical Principles for Trustworthy AI“. In Law, Governance and Technology Series, 101–66. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27953-9_3.

2

Prem, Erich. „Approaches to Ethical AI“. In Introduction to Digital Humanism, 225–39. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45304-5_15.

Annotation:
This chapter provides an overview of existing proposals to address ethical issues of AI systems, with a focus on ethical frameworks. A large number of such frameworks have been proposed with the aim of ensuring the development of AI systems aligned with human values and morals. The frameworks list key ethical values that an AI system should follow. For the most part, they can be regarded as instances of philosophical principlism. This paper provides an overview of such frameworks and their general form and intended way of working. It lists some of the main principles that are proposed in the frameworks and critically assesses the practicality of the various approaches. It also describes current trends, tools, and approaches to ensure the ethicality of AI systems.
3

Thiel, Sonja. „Managing AI“. In Edition Museum, 83–98. Bielefeld, Germany: transcript Verlag, 2023. http://dx.doi.org/10.14361/9783839467107-009.

Annotation:
How can a strategy and ethical guidelines be developed for the use of AI in museums? Based on the Creative User Empowerment project, in which management and ethical issues have been discussed, this paper presents lessons learned as well as guiding principles and questions that can be used as a starting point for the ethics and management of AI solutions in museums. The paper concludes with a proposal for the future role of museums as facilitators of ethical discussions in different areas of AI, based on their core competencies of mediation, education, and reflection in relation to collections.
4

Francés-Gómez, Pedro. „Ethical Principles and Governance for AI“. In The International Library of Ethics, Law and Technology, 191–217. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48135-2_10.

5

Luger, George F. „AI Ethical Issues: From a Social Perspective“. In Artificial Intelligence: Principles and Practice, 573–99. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-57437-5_26.

6

Johnson, Steven G., Gyorgy Simon, and Constantin Aliferis. „Regulatory Aspects and Ethical Legal Societal Implications (ELSI)“. In Health Informatics, 659–92. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-39355-6_16.

Annotation:
This chapter reviews the context of regulating AI/ML models, the risk management principles underlying international regulations of clinical AI/ML, the conditions under which health AI/ML models in the U.S. are regulated by the Food and Drug Administration (FDA), and the FDA’s Good Machine Learning Practice (GMLP) principles. The GMLP principles do not offer specific guidance on execution, so we point the reader to the parts of the book that discuss bringing these principles to practice via concrete best practice recommendations. Intrinsically linked with regulatory aspects are the Ethical, Legal, Social Implications (ELSI) dimensions. The chapter provides an introduction to the nascent field of biomedical AI ethics, covering general AI ELSI studies, AI/ML racial bias, and AI/ML and health equity principles. Contrary to conventional risks/harms (data security and privacy, adherence to model use as stated in consent), ethical AI/ML involves model effectiveness and harms that can exist within the intended scope of consent. On the positive side, in the case of biomedical AI, these risks are in principle measurable and knowable, compared to hard-to-quantify risks/harms due to data breaches. The chapter discusses (and gives illustrative examples of) the importance of causality and equivalence classes for practical detection of racial bias in models. The chapter concludes with a series of recommended best practices for promoting health equity and reducing health disparities via the design and use of health AI/ML.
7

Miller, Gloria J. „Artificial Intelligence Project Success Factors—Beyond the Ethical Principles“. In Lecture Notes in Business Information Processing, 65–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98997-2_4.

Annotation:
The algorithms implemented through artificial intelligence (AI) and big data projects are used in life-and-death situations. Despite research that addresses varying aspects of moral decision-making based upon algorithms, the definition of project success is less clear. Nevertheless, researchers place the burden of responsibility for ethical decisions on the developers of AI systems. This study used a systematic literature review to identify five categories of AI project success factors in 17 groups related to moral decision-making with algorithms. It translates AI ethical principles into practical project deliverables and actions that underpin the success of AI projects. It considers success over time by investigating the development, usage, and consequences of moral decision-making by algorithmic systems. Moreover, the review reveals and defines AI success factors within the project management literature. Project managers and sponsors can use the results during project planning and execution.
8

Hirsch, Dennis, Timothy Bartley, Aravind Chandrasekaran, Davon Norris, Srinivasan Parthasarathy, and Piers Norris Turner. „Drawing Substantive Lines“. In SpringerBriefs in Law, 47–60. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-21491-2_6.

Annotation:
This chapter discusses the benchmarks and standards companies use to distinguish between ethical and unethical uses of advanced analytics and AI. In recent years scholars, governmental bodies, multi-stakeholder groups, industry think tanks, and even individual companies have issued model sets of data ethics and AI ethics principles. These model principles provide an initial reference point for setting substantive standards. However, the breadth and ambiguity of these principles, and the conflicts among them, make it difficult for companies to operationalize them in all-things-considered decisions. In our study, most companies accordingly grounded their data ethics decisions, not on abstract ethical principles, but on intuitive benchmarks such as the Golden Rule or what “feels right.” Such gut-level standards, while potentially useful for approximating public expectations, are difficult to teach or apply consistently. Companies need substantive standards that are more actionable than high-level principles, and more standardized than intuitive judgment calls. They need generalizable policies that draw the line between ethical and unethical applications of advanced analytics and AI. How best to generate such company-specific policies remains an open question. One company said they did this by capturing past data ethics decisions and using them as “precedents” to guide future such decisions.
9

Kousa, Päivi, and Hannele Niemi. „Artificial Intelligence Ethics from the Perspective of Educational Technology Companies and Schools“. In AI in Learning: Designing the Future, 283–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09687-7_17.

Annotation:
This chapter discusses the ethical issues and solutions that educational technology (EdTech) companies and schools consider during their daily work. As an example, two Finnish cases are provided, in which companies and schools were interviewed about the problems they have experienced. The chapter first reviews the regulations and guidelines behind ethical AI. There are a vast number of guidelines, regulations, and principles for ethical AI, but implementation guidelines for how that knowledge should be put into practice are lacking. The problem is acute because, with the quick pace of technological development, schools are in danger of being left behind without sufficient education to manage AI’s possibilities effectively and to cope with its challenges. Issues related to security and trustworthiness are also a growing concern. This chapter does not solve the ethical problems experienced by companies and schools but brings new perspectives on how they appear in the light of ethical principles such as beneficence, non-maleficence, autonomy, justice, and explicability. The aim is not only to continue the discussion in the field but to find ways to reduce the gap between decision-makers, businesses, and schools.
10

Hirsch, Dennis, Timothy Bartley, Aravind Chandrasekaran, Davon Norris, Srinivasan Parthasarathy, and Piers Norris Turner. „Introduction“. In SpringerBriefs in Law, 1–10. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-21491-2_1.

Annotation:
Business use of artificial intelligence (AI) can produce tremendous insights and benefits. But it can also invade privacy, perpetuate bias, and produce other harms that injure people and damage business reputation. To succeed in today’s economy, companies need to implement AI in a responsible and ethical way. The question is: How to do this? This book points the way. The authors interviewed and surveyed AI ethics managers at leading companies. They asked why these experts see AI ethics as important, and how they seek to achieve it. This book conveys the results of that research in a concise, accessible way that readers should be able to apply to their own organizations. Much of the existing writing on AI ethics focuses either on macro-level AI ethics principles, or on micro-level product design and tooling. The interviews showed that companies need a third component: data and AI ethics management. This third component consists of the management structures, processes, training and substantive benchmarks that companies use to operationalize their high-level data and AI ethics principles and to guide and hold accountable their developers. AI ethics management is the connective tissue that makes AI ethics principles real. It is the focus of this book.

Conference papers on the topic "Ethical AI Principles"

1

Pavlova, Yoana P. „AI and Ethical Principles for Sustainable Learning Development“. In 2024 59th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/icest62335.2024.10639732.

2

Leong, Wai Yie, Yuan Zhi Leong, and Wai San Leong. „Evolving Ethics: Adapting Principles to AI-Generated Artistic Landscapes“. In 2024 International Conference on Information Technology Research and Innovation (ICITRI), 242–47. IEEE, 2024. http://dx.doi.org/10.1109/icitri62858.2024.10698905.

3

Cerqueira, José Antonio Siqueira de, and Edna Dias Canedo. „Exploring Ethical Requirements Elicitation for Applications in the Context of AI“. In Congresso Brasileiro de Software: Teoria e Prática. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/cbsoft_estendido.2022.225629.

Annotation:
Ethical concerns arise from the proliferation of Artificial Intelligence (AI) based systems in use. AI ethics has been approached mainly through guidelines and principles, which do not provide enough practical guidance for developers. Hence, we aim to present the RE4AI Ethical Guide and its evaluation. We used the Design Science Research methodology to understand the problem, present the guide, and evaluate it through a survey. The Guide is composed of 26 cards across 11 principles. Our preliminary results reveal that it has the potential to facilitate the elicitation of ethical requirements. Thus, we contribute to bridging the gap between principles and practice by assisting developers to elicit ethical requirements and operationalise ethics in AI.
4

Cerqueira, José Antonio Siqueira de, and Edna Dias Canedo. „Exploring Ethical Requirements Elicitation for Applications in the Context of AI“. In Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação (SBC), 2022. http://dx.doi.org/10.5753/sbsi_estendido.2022.222269.

Annotation:
Ethical concerns arise from the proliferation of Artificial Intelligence (AI) based systems in use. AI ethics has been approached mainly through guidelines and principles, which do not provide enough practical guidance for developers. Hence, we aim to present the RE4AI Ethical Guide and its evaluation. We used the Design Science Research methodology to understand the problem, present the guide, and evaluate it through a focus group. The Guide is composed of 26 cards across 11 principles. We evaluated it with five AI professionals, and our preliminary results reveal that it has the potential to facilitate the elicitation of ethical requirements. Thus, we contribute to bridging the gap between principles and practice by assisting developers to elicit ethical requirements and operationalise ethics in AI.
5

Loreggia, Andrea, Nicholas Mattei, Francesca Rossi, and K. Brent Venable. „Preferences and Ethical Principles in Decision Making“. In AIES '18: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3278721.3278723.

6

Woodgate, Jessica. „Ethical Principles for Reasoning about Value Preferences“. In AIES '23: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3600211.3604728.

7

Danilevskyi, Mykhailo, and Fernando Perez Tellez. „On the compliance with ethical principles in AI“. In HCAIep '23: Human Centered AI Education and Practice Conference 2023. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3633083.3633223.

8

Kutz, Janika, Jens Neuhüttler, Jan Spilski, and Thomas Lachmann. „AI-based Services - Design Principles to Meet the Requirements of a Trustworthy AI“. In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003107.

Annotation:
The development of Human-Centered and Trustworthy AI-based services has recently attracted increased attention in politics and science. Even though technical advances have received much of the attention lately, ethical considerations are becoming more and more important. One of the most valuable publications in this area is the "Ethics Guidelines for Trustworthy AI" of the European Commission (EC). One approach to assist developers in implementing these requirements during the development process is to provide design guidelines. The aim of this paper is to identify which action-oriented design principles can be applied to satisfy the requirements for Trustworthy AI. For this purpose, the design principles published by major providers of commercial AI-based services were contrasted with the seven requirements of the EC. The results indicate that some design principles can be used to meet the requirements of Trustworthy AI. At the same time, however, it becomes clear that work on Ethical AI should be extended to include aspects related to Human-AI Interaction and service process quality.
9

Zhou, Jianlong, Fang Chen, Adam Berry, Mike Reed, Shujia Zhang, and Siobhan Savage. „A Survey on Ethical Principles of AI and Implementations“. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2020. http://dx.doi.org/10.1109/ssci47803.2020.9308437.

10

Vanhée, Loïs, and Melania Borit. „Ethical By Designer - How to Grow Ethical Designers of Artificial Intelligence (Extended Abstract)“. In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/794.

Annotation:
Ethical concerns regarding Artificial Intelligence technology have fueled discussions around the ethics training received by its designers. Training designers for ethical behaviour, understood as habitual application of ethical principles in any situation, can make a significant difference in the practice of research, development, and application of AI systems. Building on interdisciplinary knowledge and practical experience from computer science, moral psychology, and pedagogy, we propose a functional way to provide this training.

Reports of organizations on the topic "Ethical AI Principles"

1

Baker, James E. Ethics and Artificial Intelligence: A Policymaker's Introduction. Center for Security and Emerging Technology, April 2021. http://dx.doi.org/10.51593/20190022.

Annotation:
The law plays a vital role in how artificial intelligence can be developed and used in ethical ways. But the law is not enough when it contains gaps due to lack of a federal nexus, interest, or the political will to legislate. And law may be too much if it imposes regulatory rigidity and burdens when flexibility and innovation are required. Sound ethical codes and principles concerning AI can help fill legal gaps. In this paper, CSET Distinguished Fellow James E. Baker offers a primer on the limits and promise of three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.
2

Atabey, Ayça, Cory Robinson, Anna Lindroos Cermakova, Andra Siibak, and Natalia Ingebretsen Kucirkova. Ethics in EdTech: Consolidating Standards For Responsible Data Handling And Usercentric Design. University of Stavanger, 2024. http://dx.doi.org/10.31265/usps.283.

Annotation:
This report proposes aspirational principles for EdTech providers, emphasizing ethical practices, robust data protection, ownership rights, transparent consent processes, and active user engagement, particularly with children. These measures aim to enhance transparency, accountability, and trust in EdTech platforms. Focusing on the K12 sector, the report systematically reviews and integrates key academic, legal, and technical frameworks to propose ethical benchmarks for the EdTech industry. The benchmarks go beyond quality assurance, highlighting good practices and ethical leadership for the field. The report addresses the need for a new culture in EdTech ethics, one that is collaborative and views EdTech providers as partners in dialogue with researchers and policy-makers to identify constructive solutions and uphold social trust. The outlined benchmarks are intended for national policymakers, international agencies, and certification bodies to consider when developing quality standards for EdTech used in schools. They include AI safeguards and stress the importance of meeting international data protection standards, establishing clear ownership rights, and implementing transparent consent processes to address data control issues, as well as active user engagement for improving data governance practices.
3

Burstein, Jill. Duolingo English Test Responsible AI Standards. Duolingo, March 2023. http://dx.doi.org/10.46999/vcae5025.

Annotation:
Artificial intelligence (AI) is now instantiated in digital learning and assessment platforms. Many sectors, including the tech, government, legal, and military sectors, now use formalized principles to develop responsible AI standards. While there is a substantial literature around responsible AI more generally (e.g., Fjeld et al., 2020; Gianni et al., 2022; and NIST, 2023), traditional validity frameworks (such as Xi, 2010a; Chapelle et al., 2008; Kunnan, 2000; and Kane, 1992) pre-date AI advances and do not provide formal standards for the use of AI in assessment. The AERA/APA/NCME Standards (2014) pre-date modern AI advances and include limited discussion of the use of AI and technology in educational measurement. Some research discusses AI application in terms of validity (such as Huggins-Manley et al., 2022; Williamson et al., 2012; and Xi, 2010b). In earlier work, Aiken and Epstein (2000) discuss ethical considerations for AI in education. More recently, Dignum (2021) proposed a high-level vision for responsible AI for education, and Dieterle et al. (2022) and OECD (2023) discuss guidelines and issues associated with AI in testing. The Duolingo English Test (DET)’s Responsible AI Standards were informed by the ATP (2021) and ITC-ATP (2022) guidelines, which provide comprehensive and relevant guidance on AI and technology use for assessment. New guidelines for responsible AI are continually being developed (Department for Science, Technology & Innovation, 2023).
4

Research Libraries Guiding Principles for Artificial Intelligence. Association of Research Libraries, April 2024. http://dx.doi.org/10.29242/principles.ai2024.

Annotation:
Artificial intelligence (AI) technologies, and in particular, generative AI, have significant potential to improve access to information and advance openness in research outputs. AI also has the potential to disrupt information landscapes and the communities that research libraries support and serve. The increasing availability of AI models sparks many possibilities and raises several ethical, professional, and legal considerations. Articulating a set of research library guiding principles for AI is useful to influence policy and advocate for the responsible development and deployment of AI technologies, promote ethical and transparent practices, and build trust among stakeholders, within research libraries as well as across the research environment. These principles will serve as a foundational framework for the ethical and transparent use of AI and reflect the values we hold in research libraries. ARL will rely on these principles in our policy advocacy and engagement.
5

Ethical impact assessment. A tool of the Recommendation on the Ethics of Artificial Intelligence. UNESCO, 2023. http://dx.doi.org/10.54678/ytsa7796.

Annotation:
The Recommendation on the Ethics of AI provides a framework to ensure that AI developments align with the promotion and protection of human rights and human dignity, environmental sustainability, fairness, inclusion and gender equality. It underscores that these goals and principles should inform technological developments in an ex-ante manner. To support effective implementation, UNESCO developed two instruments, the Readiness Assessment Methodology (RAM) and the Ethical Impact Assessment (EIA). The EIA is proposed to procurers of AI systems, as this is one of the main channels through which algorithms make their way to highly sensitive public domains. But the questions and the structure of the document are designed so the tools can also be used more generally by developers of AI systems, in the public or private sectors, who wish to develop AI ethically and fully comply with international standards such as the Recommendation. The document comprises two main parts that together strike a balance between procedure and substance. In the first part, related to scoping, the goal is to understand the basics of the system, as well as to lay out some preliminary questions, such as whether automation is the best solution for the case at hand. It also raises questions about the project team and whether plans are in place to engage different stakeholders. The second part is dedicated to implementing the principles in the UNESCO Recommendation. UNESCO Catno: 0000386276
6

Readiness assessment methodology. A tool of the Recommendation on the Ethics of Artificial Intelligence. UNESCO, 2023. http://dx.doi.org/10.54678/yhaa4429.

Annotation:
The Readiness assessment methodology (RAM) is a macro level instrument that will help countries understand where they stand on the scale of preparedness to implement AI ethically and responsibly for all their citizens, in so doing highlighting what institutional and regulatory changes are needed. The outputs of the RAM will help UNESCO tailor the capacity building efforts to the needs of specific countries. Capacity here refers to the ability to assess AI systems in line with the Recommendation, the presence of requisite and appropriate human capital, and infrastructure, policies, and regulations to address the challenges brought about by AI technologies and ensure that people and their interests are always at the center of AI development. In November 2021, the 193 Member States of UNESCO signed the Recommendation on the Ethics of Artificial Intelligence, the first global normative instrument in its domain. The Recommendation serves as a comprehensive and actionable framework for the ethical development and use of AI, encompassing the full spectrum of human rights. It does so by maintaining focus on all stages of the AI system lifecycle. Beyond elaborating the values and principles that should guide the ethical design, development and use of AI, the Recommendation lays out the actions required from Member States to ensure the upholding of such values and principles, through advocating for effective regulation and providing recommendations in various essential policy areas, such as gender, the environment, and communication and information. The Recommendation mandated the development of two key tools, the Readiness Assessment Methodology (RAM) and the Ethical Impact Assessment (EIA), which form the core pillars of the implementation. 
These tools both aim to assess and promote the resilience of existing laws, policies and institutions to AI implementation in the country, as well as the alignment of AI systems with the values and principles set out in the Recommendation. The goal of this document is to provide more information on the Readiness Assessment Methodology, lay out its various dimensions, and detail the work plan for the implementing countries, including the type of entities that need to be involved, responsibilities of each entity, and the split of work between UNESCO and the implementing country. UNESCO Catno: 0000385198
