
Journal articles on the topic "Ethical AI Principles"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Ethical AI Principles".

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Siau, Keng, and Weiyu Wang. "Artificial Intelligence (AI) Ethics". Journal of Database Management 31, no. 2 (April 2020): 74–87. http://dx.doi.org/10.4018/jdm.2020040105.

Abstract:
Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low-level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in the infancy stage. AI ethics is the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How to adhere to the ethics of AI to build ethical AI?
2

Mittelstadt, Brent. "Principles alone cannot guarantee ethical AI". Nature Machine Intelligence 1, no. 11 (November 2019): 501–7. http://dx.doi.org/10.1038/s42256-019-0114-4.

3

Guillén, Andrea, and Emma Teodoro. "Embedding Ethical Principles into AI Predictive Tools for Migration Management in Humanitarian Action". Social Sciences 12, no. 2 (January 18, 2023): 53. http://dx.doi.org/10.3390/socsci12020053.

Abstract:
AI predictive tools for migration management in the humanitarian field can significantly aid humanitarian actors in augmenting their decision-making capabilities and improving the lives and well-being of migrants. However, the use of AI predictive tools for migration management also poses several risks. Making humanitarian responses more effective using AI predictive tools cannot come at the expense of jeopardizing migrants’ rights, needs, and interests. Against this backdrop, embedding AI ethical principles into AI predictive tools for migration management becomes paramount. AI ethical principles must be imbued in the design, development, and deployment stages of these AI predictive tools to mitigate risks. Current guidelines to apply AI ethical frameworks contain high-level ethical principles which are not sufficiently specified for achievement. For AI ethical principles to have real impact, they must be translated into low-level technical and organizational measures to be adopted by those designing and developing AI tools. The context-specificity of AI tools implies that different contexts raise different ethical challenges to be considered. Therefore, the problem of how to operationalize AI ethical principles in AI predictive tools for migration management in the humanitarian field remains unresolved. To this end, eight ethical requirements are presented, with their corresponding safeguards to be implemented at the design and development stages of AI predictive tools for humanitarian action, with the aim of operationalizing AI ethical principles and mitigating the inherent risks.
4

Shukla, Shubh. "Principles Governing Ethical Development and Deployment of AI". International Journal of Engineering, Business and Management 8, no. 2 (2024): 26–46. http://dx.doi.org/10.22161/ijebm.8.2.5.

Abstract:
The ethical development and deployment of artificial intelligence (AI) is a rapidly evolving field with significant implications for society. This paper delves into the multifaceted ethical considerations surrounding AI, emphasising the importance of transparency, accountability, and privacy. By conducting a comprehensive review of existing literature and case studies, it highlights key ethical issues such as bias in AI algorithms, privacy concerns, and the societal impact of AI technologies. The study underscores the necessity for robust governance frameworks and international collaboration to address these ethical challenges effectively. It explores the need for ongoing ethical evaluation as AI technologies advance, particularly in autonomous systems. The paper emphasises the importance of integrating ethical principles into AI design from the outset, fostering sustainable practices, and raising awareness through education. Furthermore, the paper examines current regulatory frameworks across various regions, comparing their effectiveness in promoting ethical AI practices. The findings suggest a global consensus on key ethical principles, though their implementation varies widely. By proposing strategies to ensure responsible AI innovation and mitigate risks, this research contributes to the ongoing discourse on the future of AI ethics, aiming to guide the development of AI technologies that uphold human dignity and contribute to the common good.
5

Prathomwong, Piyanat, and Pagorn Singsuriya. "Ethical Framework of Digital Technology, Artificial Intelligence, and Health Equity". Asia Social Issues 15, no. 5 (June 6, 2022): 252136. http://dx.doi.org/10.48048/asi.2022.252136.

Abstract:
Healthcare is evident in the extensive use of digital technology and artificial intelligence (AI). Although one aim of technological development and application is to promote health equity, it can at the same time increase health disparities. An ethical framework is needed to analyze issues arising in the effort to promote health equity through digital technology and AI. Based on an analysis of ethical principles for the promotion of health equity, this research article aims to synthesize an ethical framework for analyzing issues related to the promotion of health equity through digital technology and AI. Results of the study showed a synthesized framework that comprises two main groups of ethical principles: general principles and principles of management. The latter is meant to serve the implementation of the former. The general principles comprise four core principles: Human Dignity, Justice, Non-maleficence, and Beneficence, covering major principles and minor principles. For example, the core principle of Human Dignity includes three major principles (Non-humanization, Privacy, and Autonomy), and two minor principles (Explicability and Transparency). Other core principles have their relevant major and minor principles. The principles of management can be categorized according to their goals to serve different core principles. An illustration of applying the ethical framework is offered through the analysis and categorization of issues solicited from experts in multidisciplinary workshops on digital technology, AI, and health equity.
6

Taddeo, Mariarosaria, David McNeish, Alexander Blanchard, and Elizabeth Edgar. "Ethical Principles for Artificial Intelligence in National Defence". Philosophy & Technology 34, no. 4 (October 13, 2021): 1707–29. http://dx.doi.org/10.1007/s13347-021-00482-3.

Abstract:
Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control and reliable AI systems—and related recommendations to foster ethically sound uses of AI for national defence purposes.
7

Kortz, Mason, Jessica Fjeld, Hannah Hilligoss, and Adam Nagy. "Is Lawful AI Ethical AI?" Morals & Machines 2, no. 1 (2022): 60–65. http://dx.doi.org/10.5771/2747-5174-2022-1-60.

Abstract:
Attempts to impose moral constraints on autonomous, artificial decision-making systems range from “human in the loop” requirements to specialized languages for machine-readable moral rules. Regardless of the approach, though, such proposals all face the challenge that moral standards are not universal. It is tempting to use lawfulness as a proxy for morality; unlike moral rules, laws are usually explicitly defined and recorded – and they are usually at least roughly compatible with local moral norms. However, lawfulness is a highly abstracted and, thus, imperfect substitute for morality, and it should be relied on only with appropriate caution. In this paper, we argue that law-abiding AI systems are a more achievable goal than moral ones. At the same time, we argue that it’s important to understand the multiple layers of abstraction, legal and algorithmic, that underlie even the simplest AI-enabled decisions. The ultimate output of such a system may be far removed from the original intention and may not comport with the moral principles to which it was meant to adhere. Therefore, caution is required lest we develop AI systems that are technically law-abiding but still enable amoral or immoral conduct.
8

Adeyelu, Oluwatobi Opeyemi, Chinonye Esther Ugochukwu, and Mutiu Alade Shonibare. "ETHICAL IMPLICATIONS OF AI IN FINANCIAL DECISION-MAKING: A REVIEW WITH REAL WORLD APPLICATIONS". International Journal of Applied Research in Social Sciences 6, no. 4 (April 17, 2024): 608–30. http://dx.doi.org/10.51594/ijarss.v6i4.1033.

Abstract:
This study delves into the ethical implications of Artificial Intelligence (AI) in financial decision-making, exploring the transformative impact of AI technologies on the financial services sector. Through a comprehensive literature review, the research highlights the dual nature of AI's integration into finance, showcasing both its potential to enhance operational efficiency and decision accuracy and the ethical challenges it introduces. These challenges include concerns over data privacy, algorithmic bias, and the potential for systemic risks, underscoring the need for robust ethical frameworks and regulatory standards. The study emphasizes the importance of a multidisciplinary approach to AI development and deployment, advocating for collaboration among technologists, ethicists, policymakers, and end-users to ensure that AI technologies are aligned with societal values and ethical principles. Future directions for research are identified, focusing on the development of adaptive ethical guidelines, methodologies for embedding ethical principles into AI systems, and the investigation of AI's long-term impact on market dynamics and consumer behaviour. This research contributes valuable insights into the ethical integration of AI in finance, offering recommendations for ensuring that AI technologies are utilized in a manner that is both ethically sound and conducive to the advancement of the financial services industry. Keywords: Artificial Intelligence, Financial Decision-Making, Ethical Implications, Algorithmic Bias, Data Privacy, Regulatory Standards, Multidisciplinary Approach.
9

Lazăr (Cățeanu), Alexandra Maria, Angela Repanovici, Daniela Popa, Diana Geanina Ionas, and Ada Ioana Dobrescu. "Ethical Principles in AI Use for Assessment: Exploring Students’ Perspectives on Ethical Principles in Academic Publishing". Education Sciences 14, no. 11 (November 12, 2024): 1239. http://dx.doi.org/10.3390/educsci14111239.

Abstract:
Students’ comprehension of ethical principles and their application in the realm of AI technology play a crucial role in shaping the efficacy and morality of assessment procedures. This study seeks to explore students’ viewpoints on ethical principles within the context of AI-driven assessment activities to illuminate their awareness, attitudes, and practices concerning ethical considerations in educational environments. A systematic review of articles on this topic was conducted using scientometric methods within the Web of Science database. This review identified a research gap in the specialized literature regarding studies that delve into students’ opinions. Subsequently, a questionnaire was administered to students at Transilvania University of Brasov as part of the Information Literacy course. Statistical analysis was performed on the obtained results. Ultimately, students expressed a desire for the Information Culture course to incorporate a module focusing on the ethical use of AI in academic publishing.
10

Rossi, Francesca, and Nicholas Mattei. "Building Ethically Bounded AI". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9785–89. http://dx.doi.org/10.1609/aaai.v33i01.33019785.

Abstract:
The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goal we have given them. Thus, a certain level of freedom to choose the best path to the goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our life, whether AI is autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware and follow appropriate ethical principles and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI’s freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven example-based approach for both, or a symbolic rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation, but are tightly interconnected, e.g., the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically-bounded AI, we describe two concrete examples, and we outline some outstanding challenges.
11

Leimanis, Anrī, and Karina Palkova. "Ethical Guidelines for Artificial Intelligence in Healthcare from the Sustainable Development Perspective". European Journal of Sustainable Development 10, no. 1 (February 1, 2021): 90. http://dx.doi.org/10.14207/ejsd.2021.v10n1p90.

Abstract:
Use of Artificial Intelligence (AI) in variety of areas has encouraged an extensive global discourse on the underlying ethical principles and values. With the rapid AI development process and its near instant global coverage, the issues of applicable ethical principles and guidelines have become vital. AI promises to deliver a lot of advantages to economic, social and educational fields. Since AI is also increasingly applied in healthcare and medical education areas, ethical application issues are growing ever more important. Ethical and social issues raised by AI in healthcare overlap with those raised by personal data use, function automation, reliance on assistive medical technologies and the so-called ‘telehealth’. Without well-grounded ethical guidelines or even regulatory framework in respect of the AI in healthcare several legal and ethical problems at the implementational level can arise. In order to facilitate further discussion about the ethical principles and responsibilities of educational system in healthcare using AI and to potentially arrive at a consensus concerning safe and desirable uses of AI in healthcare education, this paper performs an evaluation of the self-imposed AI ethical guidelines identifying the common principles and approaches as well as drawbacks limiting the practical and legal application of internal policies. The main aim of the research is to encourage integration of theoretical studies and policy studies on sustainability issues in correlation between healthcare and technologies, the AI ethical perspective.
12

AllahRakha, Naeem. "UNESCO's AI Ethics Principles: Challenges and Opportunities". International Journal of Law and Policy 2, no. 9 (September 30, 2024): 24–36. http://dx.doi.org/10.59022/ijlp.225.

Abstract:
This paper examines UNESCO's Recommendation on the Ethics of Artificial Intelligence, which outlines key principles for ensuring responsible AI development. The aim is to explore the challenges and opportunities in implementing these principles in the current AI landscape through a literature review, comparative analysis of existing frameworks, and case studies. This research identifies key challenges such as cultural variability, regulatory gaps, and the rapid pace of AI innovation. Conversely, it highlights opportunities like establishing global ethical standards, fostering public trust, and promoting responsible AI innovation. The study proposes strategies for overcoming challenges, including clear ethical metrics, international oversight, and ethics education in AI curricula. The findings emphasize the requirement for global cooperation and robust governance mechanisms to ensure ethical AI development. The research concludes that while implementing UNESCO's AI ethics principles is complex, it is crucial for safeguarding human rights and promoting sustainable AI growth worldwide.
13

Dhamo, Ana, Iris Dhamo, and Iris Manastirliu. "Fundamental Rights and New Technologies". Interdisciplinary Journal of Research and Development 10, no. 3 (November 23, 2023): 121. http://dx.doi.org/10.56345/ijrdv10n319.

Abstract:
The integration of artificial intelligence (AI) into various aspects of contemporary society prompts a critical exploration of the ethical considerations that underpin its development and deployment. This comprehensive discussion navigates the multifaceted relationship between AI and ethics, emphasizing principles such as transparency, fairness, accountability, and privacy as foundational pillars. Examining the challenges and opportunities inherent in this intersection, the abstract delves into the need for transparency in algorithmic decision-making, the continuous strive for fairness and mitigation of biases, and the importance of accountability throughout the AI lifecycle. Human-centric design emerges as a guiding principle, ensuring that AI technologies enhance human experiences while respecting individual autonomy. The abstract also underscores the significance of informed consent, collaboration across diverse disciplines, and educational empowerment in fostering ethical AI practices. Finally, it posits that the ethical exploration of AI is an ongoing journey, requiring a commitment to progress ethically, embrace responsibility, and contribute collectively to an AI future aligned with the highest aspirations of humanity. Received: 18 October 2023 / Accepted: 20 November 2023 / Published: 23 November 2023
14

Pierson, Cameron M., and Elisabeth Hildt. "From Principles to Practice: Comparative Analysis of European and United States Ethical AI Frameworks for Assessment and Methodological Application". Proceedings of the Association for Information Science and Technology 60, no. 1 (October 2023): 327–37. http://dx.doi.org/10.1002/pra2.792.

Abstract:
The Z‐Inspection® Process is a form of applied research for the ethical assessment of AI systems. It is quickly establishing itself as a robust method to ethically assess AI in Europe. The process is predicated on the European Union's Ethics Guidelines for Trustworthy AI, outlining ethical principles intended to guide European AI development. In contrast, the United States has only recently released its holistic version of such guidelines, the Blueprint for an AI Bill of Rights. The aim of this paper is to assess the suitability of the Blueprint for an AI Bill of Rights as an ethical framework underpinning the use of the Z‐Inspection® Process in the United States. This paper provides preliminary findings of comparative analysis of European and United States ethical frameworks for responsible AI development. Findings outline primary ethical concepts that are shared between respective frameworks. Findings suggest the US Blueprint is suitable as an ethical framework for the Z‐Inspection® Process. There are notable omissions within the US framework which would require further development for Z‐Inspection® use. Discussion will consider opportunities for adapting Z‐Inspection® to the United States context, including contributions from the information professions and research.
15

García Peñalvo, Francisco José, Marc Alier, Juanan Pereira, and Maria Jose Casany. "Safe, Transparent, and Ethical Artificial Intelligence". IJERI: International Journal of Educational Research and Innovation, no. 22 (December 3, 2024): 1–21. https://doi.org/10.46661/ijeri.11036.

Abstract:
The increasing integration of artificial intelligence (AI) into educational environments necessitates a structured framework to ensure its safe and ethical use. A manifesto outlining seven core principles for safe AI in education has been proposed, emphasizing the protection of student data, alignment with institutional strategies, adherence to didactic practices, minimization of errors, comprehensive user interfaces, human oversight, and ethical transparency. These principles are designed to guide the deployment of AI technologies in educational settings, addressing potential risks such as privacy violations, misuse, and over-reliance on technology. Smart Learning Applications (SLApps) are also introduced, integrating AI into the existing institutional technological ecosystem, with special attention to the learning management systems, enabling secure, role-adaptive, and course-specific learning experiences. While large language models like GPT offer transformative potential in education, they also present challenges related to accuracy, ethical use, and pedagogical alignment. To navigate these complexities, a checklist based on the Safe AI in Education principles is recommended, providing educators and institutions with a framework to evaluate AI tools, ensuring they support academic integrity, enhance learning experiences, and uphold ethical standards.
16

Prasetya, Raynaldi Nugraha, Ari Kusdiyanto, Usman Radiana, and Luhur Wicaksono. "Etika dalam Pengembangan Artificial Intelligence: Tinjauan Pedoman dan Penerapannya" [Ethics in Artificial Intelligence Development: A Review of Guidelines and Their Application]. Juwara: Jurnal Wawasan dan Aksara 4, no. 1 (May 20, 2024): 240–53. https://doi.org/10.58740/juwara.v4i1.271.

Abstract:
Progress in the development of artificial intelligence (AI) has given rise to various discussions regarding the ethical aspects of this technology. Many ethical guidelines are published to ensure AI is developed and used in a responsible manner, with a focus on privacy, fairness, transparency, and security. AI technology is increasingly "disruptive", making these ethical rules very important. This research reviews and compares 22 AI ethical guidelines. Researchers found that while many of the principles overlap, there are gaps in some of the guidelines, particularly regarding social justice and application in practice. This assessment shows that ethical principles are often not fully implemented in the field, even though the guidelines are well developed. Lack of proper implementation could cause serious problems in the future, especially because AI is very influential in various aspects of life such as work, education, and health. Therefore, in-depth evaluation is needed to improve AI ethical approaches. The authors suggest several steps for improvement, including increased transparency and accountability in AI development, as well as more consistent implementation of ethical guidelines. By strengthening these principles, it is hoped that AI can be developed and used more ethically, bringing maximum benefits to society.
17

Syifa, Amanda Fairuz. "Ethics in the Age of AI: Principles and Guidelines for Responsible Implementation in Workplace". International Journal of Advanced Technology and Social Sciences 2, no. 2 (February 29, 2024): 237–42. http://dx.doi.org/10.59890/ijatss.v2i2.1398.

Abstract:
The integration of artificial intelligence (AI) tools into corporate settings has become increasingly prevalent, with organizations leveraging AI to enhance daily operations. However, as reliance on AI grows, ethical considerations surrounding its use become paramount. This paper examines the ethical dimensions of AI implementation, drawing on insights from industry reports and frameworks such as those by McKinsey, Accenture, and IBM. Key challenges include ensuring system integrity, preventing privacy breaches, and addressing biases. To address these challenges, organizations must prioritize data privacy and security, accuracy and reliability of AI predictions, ethical use and inclusivity, and transparency in decision-making processes. Recommendations for organizations include providing education and training on AI ethics, continuously monitoring and improving ethical guidelines, and regularly updating policies to align with technological advancements and corporate values. Further research is needed to explore evolving ethical considerations in AI usage over time.
18

Hinton, Charlene. "The State of Ethical AI in Practice". International Journal of Technoethics 14, no. 1 (April 20, 2023): 1–15. http://dx.doi.org/10.4018/ijt.322017.

Abstract:
Despite the prolific introduction of ethical frameworks, empirical research on AI ethics in the public sector is limited. This empirical research investigates how the ethics of AI is translated into practice and the challenges of its implementation by public service organizations. Using the Value Sensitive Design as a framework of inquiry, semi-structured interviews are conducted with eight public service organizations across the Estonian government that have piloted or developed an AI solution for delivering a public service. Results show that the practical application of AI ethical principles is indirectly considered and demonstrated in different ways in the design and development of the AI. However, translation of these principles varies according to the maturity of the AI and the public servant's level of awareness, knowledge, and competences in AI. Data-related challenges persist as public service organizations work on fine-tuning their AI applications.
19

Vanhée, Loïs, and Melania Borit. "Viewpoint: Ethical By Designer - How to Grow Ethical Designers of Artificial Intelligence". Journal of Artificial Intelligence Research 73 (February 13, 2022): 619–31. http://dx.doi.org/10.1613/jair.1.13135.

Abstract:
Ethical concerns regarding Artificial Intelligence (AI) technology have fueled discussions around the ethics training received by AI designers. We claim that training designers for ethical behaviour, understood as habitual application of ethical principles in any situation, can make a significant difference in the practice of research, development, and application of AI systems. Building on interdisciplinary knowledge and practical experience from computer science, moral psychology and development, and pedagogy, we propose a functional way to provide this training. This article appears in the special track on AI & Society.
20

Bubicz, Marta, e Marcos Ferasso. "Advancing Corporate Social Responsibility in AI-driven Human Resources Management: A Maturity Model Approach". International Conference on AI Research 4, n.º 1 (4 de dezembro de 2024): 82–90. https://doi.org/10.34190/icair.4.1.3016.

Abstract:
The use of Artificial Intelligence (AI) in the corporate environment has been the subject of much current social debate and many scientific studies on reshaping human resource management (HRM) practices. There is a significant research gap in understanding how AI can be ethically and effectively integrated into HRM, particularly in relation to corporate social responsibility (CSR). This study aims to address this gap by proposing a maturity model to assess and guide the responsible implementation of AI in HRM practices. The research goal focuses on how AI can be aligned with CSR principles to ensure ethical, transparent, and socially responsible usage in organizational settings. The methodology includes a comprehensive review of 52 academic papers, employing bibliometrics, network analysis, and thematic content analysis to explore the interplay between AI, CSR, and HRM. These analyses allowed the identification of key ethical concerns and challenges in AI-driven HRM practices, such as bias in AI algorithms, data privacy issues, transparency, and the need for technical proficiency among HR professionals. The findings reveal a five-level AI maturity model, with each stage representing progressive alignment with CSR principles. Organizations at lower maturity levels tend to have ad-hoc AI implementations with minimal CSR focus, while those at higher levels demonstrate full integration of ethical AI practices. Additionally, the study highlights the importance of transparency, accountability, and employee empowerment as critical elements for advancing AI maturity in HRM. This research contributes by offering organizations a practical tool to assess and enhance their AI-driven HRM processes through a CSR lens. It also provides a foundation for future research on strategic policy development, ethical AI governance, and continuous improvement in the integration of AI in HRM.
Scholars are encouraged to explore these areas further, particularly in understanding how AI can foster not only organizational efficiency but also social responsibility and ethical standards.
21

Aparicio-Gómez, Oscar-Yecid, e William-Oswaldo Aparicio-Gómez. "Principios éticos para el uso de la Inteligencia Artificial". Revista Internacional de Desarrollo Humano y Sostenibilidad 1, n.º 1 (1 de janeiro de 2024): 73–87. http://dx.doi.org/10.51660/ridhs11202.

Abstract:
Artificial Intelligence (AI) has emerged as a disruptive technology with the potential to transform multiple aspects of society. However, its development and application pose significant ethical challenges. This article examines the ethical pillars that could guide AI research, development, and implementation. These pillars include fairness, accountability, transparency, privacy, and autonomy. It discusses the importance of each pillar in the context of AI and proposes recommendations for integrating these principles into technological practice and policymaking. These principles not only address current ethical challenges, but also provide guidance for future innovations, ensuring that AI is deployed in ways that promote well-being and respect human dignity. As AI continues to evolve, adherence to these pillars will be essential to building an ethical and equitable technological future.
22

Makridis, Christos, Seth Hurley, Mary Klote e Gil Alterovitz. "Ethical Applications of Artificial Intelligence: Evidence From Health Research on Veterans". JMIR Medical Informatics 9, n.º 6 (2 de junho de 2021): e28921. http://dx.doi.org/10.2196/28921.

Abstract:
Background: Despite widespread agreement that artificial intelligence (AI) offers significant benefits for individuals and society at large, there are also serious challenges to overcome with respect to its governance. Recent policymaking has focused on establishing principles for the trustworthy use of AI. Adhering to these principles is especially important for ensuring that the development and application of AI raises economic and social welfare, including among vulnerable groups and veterans. Objective: We explore the newly developed principles around trustworthy AI and how they can be readily applied at scale to vulnerable groups that are potentially less likely to benefit from technological advances. Methods: Using the US Department of Veterans Affairs as a case study, we explore the principles of trustworthy AI that are of particular interest for vulnerable groups and veterans. Results: We focus on three principles: (1) designing, developing, acquiring, and using AI so that the benefits of its use significantly outweigh the risks and the risks are assessed and managed; (2) ensuring that the application of AI occurs in well-defined domains and is accurate, effective, and fit for the intended purposes; and (3) ensuring that the operations and outcomes of AI applications are sufficiently interpretable and understandable by all subject matter experts, users, and others. Conclusions: These principles and applications apply more generally to vulnerable groups, and adherence to them can allow the VA and other organizations to continue modernizing their technology governance, leveraging the gains of AI while simultaneously managing its risks.
23

MOON, Gieop, Ji Hyun YANG, Yumi SON, Eun Kyung CHOI e Ilhak LEE. "Ethical Principles and Considerations concerning the Use of Artificial Intelligence in Healthcare". Korean Journal of Medical Ethics 26, n.º 2 (junho de 2023): 103–31. http://dx.doi.org/10.35301/ksme.2023.26.2.103.

Abstract:
The use of artificial intelligence (AI) in healthcare settings has become increasingly common. Many hope that AI will remove constraints on human and material resources and bring innovations in diagnosis and treatment. However, the deep learning techniques and the resulting black box problem of AI raise important ethical concerns. To address these concerns, this article explores the relevant ethical domains, issues, and themes in this area and proposes principles to guide the use of AI in healthcare. Three ethical themes are identified: respect for person, accountability, and sustainability, which correspond to the three domains of data acquisition, clinical setting, and social environment. These themes and domains are schematized with detailed explanations of relevant ethical issues, concepts, and applications, such as explainability and accountability. Additionally, it is argued that conflicts between ethical principles should be resolved through deliberative democratic methods and a consensus-building process.
24

Amaka Justina Obinna e Azeez Jason Kess-Momoh. "Developing a conceptual technical framework for ethical AI in procurement with emphasis on legal oversight". GSC Advanced Research and Reviews 19, n.º 1 (30 de abril de 2024): 146–60. http://dx.doi.org/10.30574/gscarr.2024.19.1.0149.

Abstract:
This study presents a conceptual technical framework aimed at promoting ethical AI deployment within the procurement domain, with a particular focus on legal oversight. As the integration of artificial intelligence (AI) technologies in procurement processes becomes increasingly prevalent, concerns surrounding ethical considerations and legal compliance have come to the forefront. The framework outlined in this study offers a structured approach to addressing these challenges, emphasizing the importance of legal oversight in ensuring ethical AI practices. Drawing on existing literature and best practices, the framework outlines key components and principles for guiding the development, implementation, and monitoring of AI systems in procurement contexts. Central to the framework is the recognition of legal requirements and regulatory frameworks governing AI deployment, including data protection laws, liability provisions, and procurement regulations. By incorporating these legal considerations into the design and operation of AI systems, organizations can mitigate risks and ensure compliance with applicable laws. Additionally, the framework emphasizes the need for transparency and accountability in AI procurement processes, advocating for clear documentation, audit trails, and stakeholder engagement mechanisms. Furthermore, the framework outlines strategies for ethical AI design, including the identification and mitigation of algorithmic bias, the promotion of fairness and equity, and the protection of privacy rights. By embedding ethical principles into the development lifecycle of AI systems, organizations can foster trust and confidence among stakeholders while minimizing the potential for harm or discrimination. Overall, the conceptual technical framework presented in this study provides a comprehensive approach to promoting ethical AI in procurement, with a specific emphasis on legal oversight. 
By integrating legal requirements, ethical principles, and technical considerations, organizations can ensure that AI deployment in procurement processes is conducted responsibly, transparently, and in accordance with legal and ethical standards.
25

Sokolchik, V. N., e A. I. Razuvanau. "Ethical principles for using the artificial intelligence in research (based on biomedical research)". Proceedings of the National Academy of Sciences of Belarus, Humanitarian Series 69, n.º 2 (28 de abril de 2024): 95–107. http://dx.doi.org/10.29235/2524-2369-2024-69-2-95-107.

Abstract:
The article addresses a pressing issue in modern science: ethical support for scientific research using Artificial Intelligence (AI). Despite a significant number of foreign and domestic publications about AI, the conceptual framework for the ethics of scientific research using AI remains undeveloped. Drawing on international recommendations and articles, as well as their own research experience and membership in research ethics committees, the authors define and analyze the basic ethical principles for scientific research using AI. The proposed principles are considered in the context of their practical application in biomedicine, including the protection of humankind and nature, maintaining the confidentiality of participants' data, preventing discrimination, protecting against AI errors, respecting informed consent, and observing the norms of "open science" and mutual trust between developers and users. Applying these principles orients scientists, AI developers, ethics committees conducting review, and society as a whole toward the humanization of science and respect for humans and nature, as well as toward public education about AI and the creation of a regulatory framework, ethical recommendations, and codes of ethics for the use of AI in scientific research.
26

Younas, Asifa. "Analyzing the guiding principles of AI ethics: A framing theory perspective on the communication of ethical considerations in artificial intelligence (AI)". BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning 3, n.º 1 (2024): 1–9. http://dx.doi.org/10.54646/bijiam.2024.20.

Abstract:
Various organizations have created AI ethics standards and protocols in an era of rapidly expanding AI, all to ensure ethical AI use for the benefit of society. However, the ethical issues raised by AI’s societal applications in the actual world have generated scholarly debates. Through the prism of framing theory in media and communication, this study examines AI ethics principles from three significant organizations: Microsoft, NIST, and the AI HLEG of the European Commission. Institutional AI ethics communication must be closely examined in this rapidly changing technical environment because of how institutions frame their AI principles.
27

Lababidi, Sami. "A Critical Evaluation of ChatGPT's Adherence to Responsible Artificial Intelligence Principles". Information Sciences with Applications 3 (20 de junho de 2024): 64–73. http://dx.doi.org/10.61356/j.iswa.2024.3303.

Abstract:
The swift evolution of ChatGPT continues to reveal great promise in different fields of life, while occasionally producing ethically questionable impacts. While current research has focused on the benefits that can be gained from ChatGPT, increasing concerns have been raised about the ethical implications that could result from its widespread use. To this end, this study presents an in-depth investigation of the ethical aspects of ChatGPT from the perspective of responsible AI. In particular, a novel theoretical framework is introduced to analyze and interpret ChatGPT through an ethical lens. The framework is based on the concept of responsible AI; it focuses on the variety of scenarios in which ChatGPT can lead to unintentional consequences and advocates alternative paths that researchers and practitioners can follow to mitigate such incidents. This work expands ethical theorization to disclose ideas absent from the existing literature and to suggest premises that may guide the future development and use of ChatGPT and similar language models.
28

Ismiatun, Ismiatun, e Emy Sukartini. "Library Digital Transformation: Legal and Ethical Analysis of Librarians' Use of AI to Improve Scientific Literacy". Knowledge Garden 2, n.º 2 (30 de setembro de 2024): 23–41. http://dx.doi.org/10.21776/ub.knowledgegarden.2024.2.2.26.

Abstract:
The digital transformation in modern libraries employs Artificial Intelligence (AI) to enhance scientific literacy. However, the implementation of AI presents various legal and ethical challenges that must be addressed to ensure its responsible and fair use. This study utilizes a narrative literature review method, analyzing and synthesizing various relevant sources; this method is flexible, covers diverse types of literature, and is useful for developing theoretical or conceptual models. Regulations for AI in Indonesia are still incomplete, although some aspects are covered by existing laws. Key challenges include privacy, bias, discrimination, and AI regulation. Ethical principles such as transparency, honesty, and accountability are crucial in AI utilization, and ethical guidelines and regulations must be implemented to ensure fairness and accountability. Librarians play a vital role in enhancing users' scientific literacy through training and digital resource curation. AI holds significant potential for improving library service efficiency, quality, and scientific literacy, but the attendant legal and ethical challenges must be taken seriously: more specific regulations and clear ethical guidelines are essential to ensure the safe and responsible application of AI. This study provides guidance for libraries in adopting AI technology safely and ethically.
29

Sistla, Swetha. "AI with Integrity: The Necessity of Responsible AI Governance". Journal of Artificial Intelligence & Cloud Computing 3, n.º 5 (31 de outubro de 2024): 1–3. http://dx.doi.org/10.47363/jaicc/2024(3)e180.

Abstract:
Responsible AI Governance has emerged as a critical framework for ensuring the ethical development and deployment of artificial intelligence systems. As AI technologies continue to advance and permeate various sectors of society, the need for robust governance structures becomes increasingly apparent. This document explores the key principles, challenges, and best practices in Responsible AI Governance, highlighting the importance of transparency, accountability, and fairness in AI systems. By examining current initiatives, regulatory landscapes, and industry standards, we aim to provide a comprehensive overview of the strategies organizations can employ to navigate the complex ethical terrain of AI development and implementation.
30

Verma, Saurabh, Pankaj Pali, Mohit Dhanwani e Sarthak Jagwani. "Ethical AI: Developing Frameworks for Responsible Deployment in Autonomous Systems". International Journal of Multidisciplinary Research in Science, Engineering and Technology 6, n.º 04 (25 de abril de 2023): 1–14. http://dx.doi.org/10.15680/ijmrset.2023.0604036.

Abstract:
Ensuring ethical deployment of AI in autonomous systems is crucial to mitigate potential risks and societal impacts. This paper presents a comprehensive framework that integrates ethical principles into the AI development lifecycle. By utilizing adaptive and resilient mechanisms, the framework ensures transparency, accountability, and fairness in AI systems. Experimental results highlight significant improvements in ethical compliance and operational safety compared to traditional AI deployment methods.
31

Möllmann, Nicholas RJ, Milad Mirbabaie e Stefan Stieglitz. "Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations". Health Informatics Journal 27, n.º 4 (outubro de 2021): 146045822110523. http://dx.doi.org/10.1177/14604582211052391.

Abstract:
The application of artificial intelligence (AI) not only yields advantages for healthcare but also raises several ethical questions. Extant research on ethical considerations of AI in digital health is quite sparse, and a holistic overview is lacking. A systematic literature review searching across 853 peer-reviewed journals and conferences yielded 50 relevant articles, categorized under five major ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. The ethical landscape of AI in digital health is portrayed, including a snapshot guiding future development. The status quo highlights potential areas with little empirical but much-needed research. Less explored areas with remaining ethical questions are identified, guiding scholars' efforts by outlining an overview of the ethical principles addressed and the intensity of studies, including correlations. Practitioners gain an understanding of the novel questions AI raises, eventually leading to properly regulated implementations, and further comprehend that society is on its way from supporting technologies to autonomous decision-making systems.
32

Sokolchik, V. N., e A. I. Razuvanov. "Hierarchy of Ethical Principles for the use of Artificial Intelligence in Medicine and Healthcare". Journal of Digital Economy Research 1, n.º 4 (16 de fevereiro de 2024): 48–84. http://dx.doi.org/10.24833/14511791-2023-4-48-84.

Abstract:
The article examines the problem, topical for modern science, of ethical support for the application of artificial intelligence (AI) in medicine and healthcare. Despite a significant number of foreign and domestic publications devoted to AI, the conceptual justification of the ethics of AI application in medicine and healthcare remains poorly developed. Relying on international recommendations and articles, as well as on their own research experience, work in research ethics committees, and the results of a pilot survey of healthcare workers, the authors define and analyze the basic ethical principles of using AI in medicine and healthcare. The proposed principles are considered in the context of their practical application to protect human and natural rights and interests, which includes preserving patient confidentiality, preventing discrimination, protecting against AI errors, respecting informed consent, and complying with the norms of "open science" and mutual trust between developers and users. Applying the proposed principles will orient scientists, AI developers, ethics committees conducting expert review of research, and society as a whole toward the humanization of healthcare and respect for human beings and nature, as well as toward educating society and creating a regulatory framework, ethical recommendations, and codes of ethics for the use of AI in medicine and healthcare.
33

Esther, Taiwo, Akinsola Ahmed, Tella Edward, Makinde Kolade e Akinwande Mayowa. "A Review of the Ethics of Artificial Intelligence and Its Applications in the United States". International Journal on Cybernetics & Informatics 12, n.º 6 (7 de outubro de 2023): 122–37. http://dx.doi.org/10.5121/ijci.2023.120610.

Abstract:
This study focuses on the ethics of Artificial Intelligence and its applications in the United States. The paper highlights the impact AI has on every sector of the US economy and multiple facets of the technological space, and the resultant effect on entities spanning businesses, government, academia, and civil society. Ethical considerations are needed as these entities begin to depend on AI for delivering various crucial tasks, which immensely influence their operations, decision-making, and interactions with each other. The adoption of ethical principles, guidelines, and standards of work is therefore required throughout the entire process of AI development, deployment, and usage to ensure responsible and ethical AI practices. Our discussion explores eleven fundamental 'ethical principles' structured as overarching themes: Transparency; Justice, Fairness, and Equity; Non-Maleficence; Responsibility and Accountability; Privacy; Beneficence; Freedom and Autonomy; Trust; Dignity; Sustainability; and Solidarity. These principles collectively serve as a guiding framework, directing the ethical path for the responsible development, deployment, and utilization of artificial intelligence (AI) technologies across diverse sectors and entities within the United States. The paper also discusses the revolutionary impact of AI applications, such as Machine Learning, and explores various approaches used to implement AI ethics. This examination is crucial to address the growing concerns surrounding the inherent risks associated with the widespread use of artificial intelligence.
34

Rani, Poonam. "A Comprehensive Survey of Artificial Intelligence (AI): Principles, Techniques, and Applications". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 11, n.º 3 (15 de dezembro de 2020): 1990–2000. http://dx.doi.org/10.17762/turcomat.v11i3.13596.

Abstract:
AI has emerged as a transformational technology with enormous potential to change a wide range of sectors. Its foundations rest on machines' capacity to learn and perform jobs that would normally require human intellect. AI techniques such as machine learning and deep learning have grown in sophistication, enabling the development of strong AI applications in fields such as healthcare, finance, and transportation. Yet the fast development and implementation of AI raises a slew of issues that must be addressed: ethical issues, data privacy and security, transparency and explainability, legislation and policy, technological hurdles, adoption and acceptability, accessibility, and integration with current systems. To address these issues, industry, government, and academia must work together to create ethical frameworks, invest in research and development, and encourage openness and accessibility. Notwithstanding these obstacles, the potential advantages of AI are enormous: AI has the ability to improve efficiency, productivity, and decision-making in a variety of industries, to enhance people's lives, and to find answers to some of the world's most urgent problems. Overall, the ideas, methodologies, and applications of AI offer great prospects for positive change; nevertheless, addressing these issues is critical to ensuring that AI is created and utilised in an ethical and responsible manner.
35

Patel, Vishwa, Dr Jay A. Dave, Dr Satvik Khara e Gaurav D. Tivari. "Examining the Integration of Responsible AI Principles in Social Media Marketing for Digital Health: A Theoretical Analysis". International Journal of Scientific Research in Engineering and Management 08, n.º 07 (10 de julho de 2024): 1–13. http://dx.doi.org/10.55041/ijsrem36379.

Abstract:
This study explores the integration of responsible artificial intelligence (AI) principles into social media marketing strategies within the digital health sector. With the proliferation of digital health platforms and the growing use of AI technologies in marketing, ensuring ethical practices is paramount. By examining the application of responsible AI principles in this context, this research aims to address concerns related to privacy, fairness, transparency, and accountability. Through a comprehensive analysis of current practices and emerging trends, this study highlights the importance of balancing marketing objectives with ethical considerations to promote trust and engagement among users. Key aspects include leveraging AI algorithms for personalized content delivery while safeguarding user data, implementing transparent AI-driven decision-making processes, and establishing mechanisms for accountability and user empowerment. This investigation offers valuable insights for marketers, policymakers, and stakeholders in navigating the evolving landscape of digital health marketing while upholding ethical standards and fostering positive user experiences.
Key Words: Responsible AI, Social Media Marketing, Digital Health, Ethical Practices, Transparency
36

Kim, Tae Wan, John Hooker e Thomas Donaldson. "Taking Principles Seriously: A Hybrid Approach to Value Alignment in Artificial Intelligence". Journal of Artificial Intelligence Research 70 (28 de fevereiro de 2021): 871–90. http://dx.doi.org/10.1613/jair.1.12481.

Abstract:
An important step in the development of value alignment (VA) systems in artificial intelligence (AI) is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the "naturalistic fallacy," which is an attempt to derive "ought" from "is," and it provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular "test propositions" for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles. This article is part of the special track on AI and Society.
37

Ravi Kottur. "Responsible AI Development: A Comprehensive Framework for Ethical Implementation in Contemporary Technological Systems". International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, n.º 6 (12 de dezembro de 2024): 1553–61. https://doi.org/10.32628/cseit241061197.

Abstract:
This article presents a comprehensive framework for implementing responsible artificial intelligence (AI) development in contemporary technological landscapes. As AI systems become increasingly integrated into daily life across various sectors, the need for ethical guidelines and responsible development practices has become paramount. The article examines the fundamental principles of responsible AI, including fairness, transparency, accountability, privacy, and system robustness, while proposing practical implementation strategies for organizations. Through analysis of current practices and emerging challenges, this article outlines a structured approach to ethical AI development that balances innovation with societal values. The article introduces a multi-stakeholder model for implementing responsible AI practices, emphasizing the importance of cross-disciplinary collaboration, continuous education, and robust oversight mechanisms. By examining the intersection of technological advancement and ethical considerations, this article contributes to the growing body of knowledge on responsible AI development and provides actionable insights for developers, policymakers, and organizations. The findings suggest that successful implementation of responsible AI requires systematic integration of ethical principles throughout the development lifecycle, supported by strong governance frameworks and stakeholder engagement.
38

Bayuningrat, Saka Adjie, Moch Zairul Alam e Diah Pawestri Maharani. "Bentuk Internalisasi Nilai Etik Mengenai Bias Negatif dan Diskriminasi Dalam Platform generative AI". RechtJiva 1, n.º 1 (4 de março de 2024): 1–22. http://dx.doi.org/10.21776/rechtjiva.v1n1.1.

Abstract:
This research discusses a form of classification in the realm of Artificial Intelligence, namely generative AI. Generative AI is an AI technology that can create new content in the form of text, images, audio, and others. However, generative AI has the potential to cause problems such as negative bias and unintentional discrimination in the form of negative stereotyping of a racial group. Current regulations are inadequate to address these challenges and determine the liability of parties in the event of harm. This research aims to analyze the urgency of new regulations related to generative AI for anti-bias and discrimination. The approach is normative juridical using statutory, conceptual, and comparative approaches. The results show that current regulations do not adequately protect the public from bias and discrimination by AI. New regulations are needed that include AI ethical principles, AI audits, classification of parties' responsibilities (developers, data providers, regulators), and application of the principle of liability based on fault. These regulations are important to ensure responsible use of generative AI and respect for human rights. In conclusion, current regulations need to be refined and new ones created to address the ethical and liability challenges in the utilization of generative AI in line with anti-discrimination principles.
39

Pokholkova, Maria, Auxane Boch, Ellen Hohma e Christoph Lütge. "Measuring adherence to AI ethics: a methodology for assessing adherence to ethical principles in the use case of AI-enabled credit scoring application". AI and Ethics, 15 de abril de 2024. http://dx.doi.org/10.1007/s43681-024-00468-9.

Abstract:
This article discusses the critical need to find solutions for ethically assessing artificial intelligence systems, underlining the importance of ethical principles in designing, developing, and employing these systems to enhance their acceptance in society. In particular, measuring AI applications’ adherence to ethical principles is determined to be a major concern. This research proposes a methodology for measuring an application’s adherence to acknowledged ethical principles. The proposed concept is grounded in existing research on quantification, specifically, Expert Workshop, which serves as a foundation of this study. The suggested method is tested on the use case of AI-enabled Credit Scoring applications using the ethical principle of transparency as an example. AI development, AI Ethics, finance, and regulation experts were invited to a workshop. The study’s findings underscore the importance of ethical AI implementation and highlight benefits and limitations for measuring ethical adherence. A proposed methodology thus offers insights into a foundation for future AI ethics assessments within and outside the financial industry, promoting responsible AI practices and constructive dialogue.
40

Papyshev, Gleb. "Governing AI through interaction: situated actions as an informal mechanism for AI regulation". AI and Ethics, March 27, 2024. http://dx.doi.org/10.1007/s43681-024-00446-1.

Abstract:
This article presents a perspective that the interplay between high-level ethical principles, ethical praxis, plans, situated actions, and procedural norms influences ethical AI practices. This is grounded in six case studies, drawn from fifty interviews with stakeholders involved in AI governance in Russia. Each case study focuses on a different ethical principle—privacy, fairness, transparency, human oversight, social impact, and accuracy. The paper proposes a feedback loop that emerges from human-AI interactions. This loop begins with the operationalization of high-level ethical principles at the company level into ethical praxis, and plans derived from it. However, real-world implementation introduces situated actions—unforeseen events that challenge the original plans. These turn into procedural norms via routinization and feed back into the understanding of operationalized ethical principles. This feedback loop serves as an informal regulatory mechanism, refining ethical praxis based on contextual experiences. The study underscores the importance of bottom-up experiences in shaping AI's ethical boundaries and calls for policies that acknowledge both high-level principles and emerging micro-level norms. This approach can foster responsive AI governance, rooted in both ethical principles and real-world experiences.
41

Law, Rob, Huiyue Ye, and Soey Sut Ieng Lei. "Ethical artificial intelligence (AI): principles and practices". International Journal of Contemporary Hospitality Management, October 8, 2024. http://dx.doi.org/10.1108/ijchm-04-2024-0482.

Abstract:
Purpose: This study aims to delve into the ethical challenges in artificial intelligence (AI) technologies to underscore the necessity of establishing principles for ethical AI utilization in hospitality and tourism.
Design/methodology/approach: A narrative review of research on ethical AI across diverse realms was conducted to reflect current research progress and examine whether sufficient measures have been taken to address issues pertinent to AI utilization in hospitality and tourism.
Findings: Ethical issues including privacy concerns, detrimental stereotypes, manipulation and brutalization pertinent to AI utilization are elaborated. How AI should be properly used and managed ethically, responsibly and sustainably is suggested.
Research limitations/implications: Five fine-tuned principles for regulating AI use in hospitality and tourism are proposed.
Practical implications: A resilient mindset, enhancement of AI context adaptability, equilibrium between development and regulation and collaborative effort of multiple stakeholders are paramount.
Originality/value: Through applying the AI evolution trajectory model, this study contributes to the current discourse of managing AI by proposing a framework that addresses the specific characteristics of hospitality and tourism.
42

Adams, Jonathan. "Introducing the ethical-epistemic matrix: a principle-based tool for evaluating artificial intelligence in medicine". AI and Ethics, October 24, 2024. http://dx.doi.org/10.1007/s43681-024-00597-1.

Abstract:
While there has been much discussion of the ethical assessment of artificial intelligence (AI) in medicine, such work has rarely been combined with the parallel body of scholarship analyzing epistemic implications of AI. This paper proposes a method for joint evaluation of AI’s ethical and epistemic implications in medicine that draws on the principle-oriented tradition in bioethics and the consequent ‘ethical matrix’ approach to assessing novel technologies. It first introduces principle-based approaches as specific tools for ethical assessment of AI in medicine and other domains that are contrasted with the lack of comparable epistemic principles that would govern AI evaluation in medicine. In the next section, the ethical matrix is explained as a well-established principle-based tool in applied ethics that has had some limited applications to near-term implications of AI in medicine and elsewhere that can be strengthened, I suggest, using epistemic principles. To this end, the following section looks to the philosophy of science for relevant epistemic principles, identifying ‘accuracy’, ‘consistency’, ‘relevance’, and ‘instrumental efficacy’ as a provisional set for technology evaluation. The next section articulates the relevance of these epistemic principles to AI in medicine by highlighting conventional standards that have already been applied in AI, epistemology, and the medical sciences. Before concluding, the paper then defines and defends the possibility of an ‘ethical-epistemic matrix’ for the application of these epistemic principles alongside established ethical principles to a selection of stakeholder groups: patients, clinicians, developers, and the public.
43

Woodgate, Jessica, and Nirav Ajmeri. "Macro Ethics Principles for Responsible AI Systems: Taxonomy and Directions". ACM Computing Surveys, June 13, 2024. http://dx.doi.org/10.1145/3672394.

Abstract:
Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens incorporating social context. Normative ethical principles inferred from philosophy can be used to methodically reason about ethics and make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey AI and computer science literature and develop a taxonomy of 21 normative ethical principles which can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in reasoning capacities of responsible AI systems.
44

Petersson, Lena, Kalista Vincent, Petra Svedberg, Jens M. Nygren, and Ingrid Larsson. "Ethical considerations in implementing AI for mortality prediction in the emergency department: Linking theory and practice". DIGITAL HEALTH 9 (January 2023). http://dx.doi.org/10.1177/20552076231206588.

Abstract:
Background: Artificial intelligence (AI) is predicted to be a solution for improving healthcare, increasing efficiency, and saving time and resources. A lack of ethical principles for the use of AI in practice has been highlighted by several stakeholders due to the recent attention given to it. Research has shown an urgent need for more knowledge regarding the ethical implications of AI applications in healthcare. However, fundamental ethical principles may not be sufficient to describe ethical concerns associated with implementing AI applications.
Objective: The aim of this study is twofold: (1) to use the implementation of AI applications to predict patient mortality in emergency departments as a setting to explore healthcare professionals’ perspectives on ethical issues in relation to ethical principles and (2) to develop a model to guide ethical considerations in AI implementation in healthcare based on ethical theory.
Methods: Semi-structured interviews were conducted with 18 participants. The abductive approach used to analyze the empirical data consisted of four steps alternating between inductive and deductive analyses.
Results: Our findings provide an ethical model demonstrating the need to address six ethical principles (autonomy, beneficence, non-maleficence, justice, explicability, and professional governance) in relation to ethical theories defined as virtue, deontology, and consequentialism when AI applications are to be implemented in clinical practice.
Conclusions: Ethical aspects of AI applications are broader than the prima facie principles of medical ethics and the principle of explicability. Ethical aspects thus need to be viewed from a broader perspective to cover different situations that healthcare professionals, in general, and physicians, in particular, may face when using AI applications in clinical practice.
45

Krijger, Joris. "Enter the metrics: critical theory and organizational operationalization of AI ethics". AI & SOCIETY, September 6, 2021. http://dx.doi.org/10.1007/s00146-021-01256-3.

Abstract:
As artificial intelligence (AI) deployment is growing exponentially, questions have been raised as to whether the developed AI ethics discourse is apt to address the currently pressing questions in the field. Building on critical theory, this article aims to expand the scope of AI ethics by arguing that in addition to ethical principles and design, the organizational dimension (i.e. the background assumptions and values influencing design processes) plays a pivotal role in the operationalization of ethics in AI development and deployment contexts. Through the prism of critical theory, and the notions of underdetermination and technical code as developed by Feenberg in particular, the organizational dimension is related to two general challenges in operationalizing ethical principles in AI: (a) the challenge of ethical principles placing conflicting demands on an AI design that cannot be satisfied simultaneously, for which the term ‘inter-principle tension’ is coined, and (b) the challenge of translating an ethical principle to a technological form, constraint or demand, for which the term ‘intra-principle tension’ is coined. Rather than discussing principles, methods or metrics, the notion of technical code precipitates a discussion on the subsequent questions of value decisions, governance and procedural checks and balances. It is held that including and interrogating the organizational context in AI ethics approaches allows for a more in-depth understanding of the current challenges concerning the formalization and implementation of ethical principles as well as of the ways in which these challenges could be met.
46

Morley, Jessica, Libby Kinsey, Anat Elhalal, Francesca Garcia, Marta Ziosi, and Luciano Floridi. "Operationalising AI ethics: barriers, enablers and next steps". AI & SOCIETY, November 15, 2021. http://dx.doi.org/10.1007/s00146-021-01308-8.

Abstract:
By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology of tools and methods designed to translate between the five most common AI ethics principles and implementable design practices. Whilst a useful starting point, that research rested on the assumption that all AI practitioners are aware of the ethical implications of AI, understand their importance, and are actively seeking to respond to them. In reality, it is unclear whether this is the case. It is this limitation that we seek to overcome here by conducting a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What motivates AI practitioners to embed ethical principles into design practices? What barriers do AI practitioners face when attempting to translate ethical principles into practice? And finally, what assistance do AI practitioners want and need when translating ethical principles into practice?
47

Atkins, Suzanne, Ishwarradj Badrie, and Sieuwert Otterloo. "Applying Ethical AI Frameworks in practice: Evaluating conversational AI chatbot solutions". Computers and Society Research Journal, September 24, 2021. http://dx.doi.org/10.54822/qxom4114.

Abstract:
Ethical AI frameworks are designed to encourage the accountability, responsibility and transparency of AI applications. They provide principles for ethical design. To be truly transparent, it should be clear to the user of the AI application that the designers followed responsible AI principles. In order to test how easy it is for a user to assess the responsibility of an AI system and to understand the differences between ethical AI frameworks, we evaluated four commercial chatbots against four responsible AI frameworks. We found that the ethical frameworks produced quite different assessment scores. Many ethical AI frameworks contain requirements/principles that are difficult to evaluate for anyone except the chatbot developer. Our results also show that domain-specific ethical AI guidelines are easier to use and yield more practical insights than domain-independent frameworks. We conclude that ethical AI researchers should focus on studying specific domains and not AI as a whole, and that ethical AI guidelines should focus more on creating measurable standards and less on stating high level principles.
48

Langman, Sofya, Nicole Capicotto, Yaser Maddahi, and Kourosh Zareinia. "Roboethics principles and policies in Europe and North America". SN Applied Sciences 3, no. 12 (November 7, 2021). http://dx.doi.org/10.1007/s42452-021-04853-5.

Abstract:
Robotics and artificial intelligence (AI) are revolutionizing all spheres of human life. From industrial processes to graphic design, the implementation of automated intelligent systems is changing how industries work. The spread of robots and AI systems has triggered academic institutions to closely examine how these technologies may affect humanity—this is how the fields of roboethics and AI ethics were born. The identification of ethical issues for robotics and AI and the creation of ethical frameworks were the first steps to creating a regulatory environment for these technologies. In this paper, we focus on regulatory efforts in Europe and North America to create enforceable regulation for AI and robotics. We describe and compare ethical principles, policies, and regulations that have been proposed by government organizations for the design and use of robots and AI. We also discuss proposed international regulation for robotics and AI. This paper tries to highlight the need for a comprehensive, enforceable, and agile policy to ethically regulate technology today and in the future. Through reviewing existing policies, we conclude that the European Union currently leads the way in defining roboethics and AI ethical principles and implementing them into policy. Our findings suggest that governments in Europe and North America are aware of the ethical risks that robotics and AI pose, and are engaged in policymaking to create regulatory policies for these new technologies.
49

Westerstrand, Salla. "Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence". Science and Engineering Ethics 30, no. 5 (October 9, 2024). http://dx.doi.org/10.1007/s11948-024-00507-y.

Abstract:
The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to be lacking in ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls’s theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to broadening the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls’s theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and proposes what principles offering a foundation for operationalising AI ethics in practice could look like if aligned with Rawls’s theory of justice as fairness.
50

Dixit, Pranav, Prashant S. Acharya, and Tripti Sahu. "Ethical Considerations in AI-Driven User Interfaces". Journal of Informatics Education and Research 4, no. 1 (March 7, 2024). http://dx.doi.org/10.52783/jier.v4i1.643.

Abstract:
The integration of AI into user interfaces (UIs) has propelled technological advancements, enabling personalized experiences and improved efficiency. However, the ethical implications of AI-driven UIs cannot be overlooked. This research paper analyzes the ethical considerations regarding transparency, accountability, fairness, privacy, and user consent. By exploring potential risks and challenges, we advocate for the implementation of ethical design principles, regulatory frameworks, and user education initiatives. Through case studies and future directions, this paper highlights the importance of responsible and ethical AI UI development. The paper covers the following questions and topics: What is AI? What are its current capabilities and future scope in user interfaces? Where is AI currently being used? What are the concerns and risks regarding AI? What are potential solutions, and how can ethical principles be created? How can AI be used ethically in UI design, illustrated with real-life examples?