
Journal articles on the topic "AI Governance Framework"



Consult the top 50 journal articles for research on the topic "AI Governance Framework".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Miyamoto, Michiko. "Measuring AI Governance, AI Adoption and AI Strategy of Japanese Companies". International Journal of Membrane Science and Technology 10, no. 1 (October 11, 2023): 649–57. http://dx.doi.org/10.15379/ijmst.v10i1.2627.

Annotation:
Purpose: This study aims to measure the level of AI governance and AI adoption among Japanese companies. Theoretical Framework: The research investigates the extent to which Japanese companies have implemented AI governance frameworks and the degree of AI adoption in their operations. The study also explores the relationship between AI governance, AI adoption, and AI strategy, providing insights into the factors that influence successful AI implementation. Design / Methodology / Approach: A survey questionnaire was administered to a representative sample of Japanese companies across various industries. The questionnaire included items that assessed the presence and effectiveness of AI governance practices within the organizations. Findings: A positive correlation was observed between AI governance and AI adoption. Companies with well-established AI governance frameworks tended to have higher levels of AI adoption, suggesting that effective governance practices play a crucial role in facilitating successful AI implementation. These findings provide valuable insights into the current state of AI governance and AI adoption among Japanese companies. Conclusion: The results can assist organizations in benchmarking their AI initiatives against industry standards and identifying areas for improvement. Policymakers and regulators can also utilize these findings to develop guidelines and frameworks that promote responsible and effective AI implementation.
2

Peckham, Jeremy B. "An AI Harms and Governance Framework for Trustworthy AI". Computer 57, no. 3 (March 2024): 59–68. http://dx.doi.org/10.1109/mc.2024.3354040.

3

Rassolov, I. M., and S. G. Chubukova. "Artificial Intelligence and Effective Governance: Legal Framework". Kutafin Law Review 9, no. 2 (July 5, 2022): 309–28. http://dx.doi.org/10.17803/2713-0525.2022.2.20.309-328.

Annotation:
The use of artificial intelligence (AI) in state governance structures is clearly on the rise. Cognitive technologies have the potential to transform the government sector by reducing expenses and mundane chores, coping with resource limitations, making more accurate projections, and embedding AI into an array of organizational processes and systems. Methods. General research methods (analysis, synthesis, the logical method) were employed to study certain concepts and legal categories and their interrelations (artificial intelligence, artificial intelligence technologies, governance system, machine-readable law, digital state, automated decision-making, etc.) and to develop insights into public relations amid the proactive use of artificial intelligence systems and technologies in the governance system. The comparative legal research method was used to discern dynamics and further trends in legal relations, as well as to compare the approaches of foreign countries to regulating AI systems and technologies. The prognostic method was applied to project the future of Russian legislation as concerns building an effective legal framework to regulate AI systems and technologies in the governance system. The technical legal (dogmatic) method helped develop a legal foundation for the use of technologies and AI systems in the governance sphere.
The analysis showed promising theoretical and practical avenues of modern law development with respect to artificial intelligence: the concept of artificial intelligence within the conceptual legal framework was described; the legal regulation of administrative processes and its specifics were defined; the ethics and principles of applying artificial intelligence in governance were stressed, including restrictions on AI use in automated decision-making and stipulating the status of informed consent in the legislation in case an automated decision is made; and procedures were established for prohibiting the use of automated decisions and for AI risk assessment in the governance system, ensuring proper data protection and independent security monitoring.
4

Gajjar, Vyoma. "Agentic Governance: A Framework for Autonomous Decision-Making Systems". International Journal of Engineering Applied Sciences and Technology 9, no. 4 (September 4, 2024): 73–75. http://dx.doi.org/10.33564/ijeast.2024.v09i04.008.

Annotation:
The proliferation of fake news has become a significant concern in recent years, with far-reaching consequences for individuals, communities, and society. Artificial intelligence (AI) has the potential to play a crucial role in detecting and mitigating the spread of fake news. However, the use of AI in fake news detection also raises important governance considerations. In this paper, we propose a novel approach to AI governance in fake news detection, including a framework for responsible AI governance, a new algorithm for fake news detection, and a comprehensive evaluation of the proposed approach.
5

Sentinella, Richard, Maël Schnegg, and Klaus Möller. "A Management Control oriented Governance Framework for Artificial Intelligence". Die Unternehmung 77, no. 2 (2023): 162–84. http://dx.doi.org/10.5771/0042-059x-2023-2-162.

Annotation:
In an age of increasing access to and power of artificial intelligence (AI), ethical concerns such as fairness, transparency, and human well-being have come to the attention of regulators, standard-setting bodies, and organizations alike. In order to build AI-based systems that comply with new rules, organizations will have to adopt systems of governance. This study develops, based on existing frameworks and a multiple case study, a governance framework specifically designed with these challenges in mind: the St. Gallen Governance Framework for Artificial Intelligence focuses on identifying stakeholder concerns and strategic goals, building a management control system, assigning roles and responsibilities, and incorporating dynamism into the system of governance.
6

Sentinella, Richard, Victoria Honsel, and Mael Schnegg. "AI-Governance als strategisches Instrument". Controlling 36, no. 6 (2024): 37–40. https://doi.org/10.15358/0935-0381-2024-6-37.

Annotation:
When applying AI, companies must take ethical requirements such as fairness, transparency, and accountability into account. Regulators, too, are defining ever stricter rules for exploiting the economic and technological advantages of AI. The St. Gallen Governance Framework outlines steps for aligning AI governance with ethical and legal standards.
7

Sistla, Swetha. "AI with Integrity: The Necessity of Responsible AI Governance". Journal of Artificial Intelligence & Cloud Computing 3, no. 5 (October 31, 2024): 1–3. http://dx.doi.org/10.47363/jaicc/2024(3)e180.

Annotation:
Responsible AI Governance has emerged as a critical framework for ensuring the ethical development and deployment of artificial intelligence systems. As AI technologies continue to advance and permeate various sectors of society, the need for robust governance structures becomes increasingly apparent. This document explores the key principles, challenges, and best practices in Responsible AI Governance, highlighting the importance of transparency, accountability, and fairness in AI systems. By examining current initiatives, regulatory landscapes, and industry standards, we aim to provide a comprehensive overview of the strategies organizations can employ to navigate the complex ethical terrain of AI development and implementation.
8

Kolade, Titilayo Modupe, Nsidibe Taiwo Aideyan, Seun Michael Oyekunle, Olumide Samuel Ogungbemi, Dooshima Louisa Dapo-Oyewole, and Oluwaseun Oladeji Olaniyi. "Artificial Intelligence and Information Governance: Strengthening Global Security, through Compliance Frameworks, and Data Security". Asian Journal of Research in Computer Science 17, no. 12 (December 4, 2024): 36–57. https://doi.org/10.9734/ajrcos/2024/v17i12528.

Annotation:
This study examines the dual role of artificial intelligence (AI) in advancing and challenging global information governance and data security. By leveraging methodologies such as Hierarchical Cluster Analysis (HCA), Principal Component Analysis (PCA), Structural Equation Modeling (SEM), and Multi-Criteria Decision Analysis (MCDA), the study investigates AI-specific vulnerabilities, governance gaps, and the effectiveness of compliance frameworks. Data from the MITRE ATT&CK Framework, AI Incident Database, Global Cybersecurity Index (GCI), and National Vulnerability Database (NVD) form the empirical foundation for this analysis. Key findings reveal that AI-driven data breaches exhibit the highest regulatory scores (0.72) and dependency levels (0.81), underscoring the critical need for robust compliance frameworks in high-risk AI environments. PCA identifies regulatory gaps (45.3% variance) and AI technology type (30.2% variance) as significant factors influencing security outcomes. SEM highlights governance strength as a primary determinant of security effectiveness (coefficient = 0.68, p < 0.001), while MCDA underscores the importance of adaptability in governance frameworks for addressing AI-specific threats. The study recommends adopting quantum-resistant encryption, enhancing international cooperation, and integrating AI automation with human oversight to fortify governance structures. These insights provide actionable strategies for policymakers, industry leaders, and researchers to navigate the complexities of AI governance and align technological advancements with ethical and security imperatives in a rapidly evolving digital landscape.
9

Wagner, Jennifer K., Megan Doerr, and Cason D. Schmit. "AI Governance: A Challenge for Public Health". JMIR Public Health and Surveillance 10 (September 30, 2024): e58358. http://dx.doi.org/10.2196/58358.

Annotation:
The rapid evolution of artificial intelligence (AI) is structuralizing social, political, and economic determinants of health into the invisible algorithms that shape all facets of modern life. Nevertheless, AI holds immense potential as a public health tool, enabling beneficial objectives such as precision public health and medicine. Developing an AI governance framework that can maximize the benefits and minimize the risks of AI is a significant challenge. The benefits of public health engagement in AI governance could be extensive. Here, we describe how several public health concepts can enhance AI governance. Specifically, we explain how (1) harm reduction can provide a framework for navigating the governance debate between traditional regulation and "soft law" approaches; (2) a public health understanding of social determinants of health is crucial to optimally weigh the potential risks and benefits of AI; (3) public health ethics provides a toolset for guiding governance decisions where individual interests intersect with collective interests; and (4) a One Health approach can improve AI governance effectiveness while advancing public health outcomes. Public health theories, perspectives, and innovations could substantially enrich and improve AI governance, creating a more equitable and socially beneficial path for AI development.
10

Adebola Folorunso, Kehinde Olanipekun, Temitope Adewumi, and Bunmi Samuel. "A policy framework on AI usage in developing countries and its impact". Global Journal of Engineering and Technology Advances 21, no. 1 (October 30, 2024): 154–66. http://dx.doi.org/10.30574/gjeta.2024.21.1.0192.

Annotation:
The rapid growth of Artificial Intelligence (AI) presents both significant opportunities and challenges for developing countries. A well-structured policy framework is crucial to maximize the benefits of AI while mitigating its risks. This review proposes a comprehensive AI policy framework tailored to developing countries, emphasizing the need for robust infrastructure, capacity building, ethical governance, and economic incentives. Key elements include the development of digital infrastructure, education and training programs to enhance AI literacy, and ethical guidelines to ensure fairness and transparency in AI applications. Data governance and privacy protections are critical, particularly in countries where regulatory frameworks are underdeveloped. Furthermore, international cooperation is highlighted as a necessity for aligning local policies with global AI standards, facilitating cross-border data sharing, and ensuring equitable access to AI innovations. The potential impact of AI on economic growth, job creation, healthcare, education, and public service delivery is profound, yet challenges such as workforce displacement, increased inequality, and the digital divide must be carefully managed. The proposed framework addresses these challenges, providing strategies to overcome barriers to AI adoption, including financial constraints, governance issues, and unequal access to technology. Moreover, it stresses the importance of fostering public-private partnerships and ensuring that AI development is inclusive, benefiting all segments of society. By implementing a comprehensive AI policy framework, developing countries can harness AI’s transformative power to drive sustainable development, improve social outcomes, and strengthen their economic standing in the global landscape. This review concludes by recommending continuous policy evaluation and adaptation to keep pace with AI's rapid evolution.
11

Adebola Folorunso, Adeola Adewa, Olufunbi Babalola, and Chineme Edgar Nwatu. "A governance framework model for cloud computing: role of AI, security, compliance, and management". World Journal of Advanced Research and Reviews 24, no. 2 (November 30, 2024): 1969–82. http://dx.doi.org/10.30574/wjarr.2024.24.2.3513.

Annotation:
The rapid adoption of cloud computing has transformed how organizations manage their IT resources, necessitating robust governance frameworks to address the complexities and risks inherent in cloud environments. This review proposes a comprehensive governance framework model that integrates the roles of artificial intelligence (AI), security, compliance, and management to enhance the effectiveness of cloud operations. AI plays a critical role in optimizing resource allocation and improving decision-making processes within cloud governance. By leveraging machine learning algorithms, organizations can achieve dynamic resource management, predictive analytics, and automated compliance monitoring, which enhance operational efficiency and reduce human error. Furthermore, the integration of AI in security management facilitates real-time threat detection and response, allowing organizations to proactively mitigate risks associated with data breaches and cyberattacks. Security is a paramount concern in cloud governance, given the shared responsibility model between cloud providers and clients. This framework emphasizes the implementation of comprehensive security measures, including data encryption, identity management, and incident response protocols, to safeguard sensitive information and maintain customer trust. Compliance with regulatory requirements is essential in ensuring organizational accountability and minimizing legal risks. The proposed governance model incorporates automated compliance checks and reporting mechanisms, ensuring adherence to industry-specific regulations such as GDPR and HIPAA. Moreover, effective management of cloud resources is crucial for optimizing performance and controlling costs. The governance framework outlines best practices for lifecycle management, cost optimization, and resource allocation, enabling organizations to achieve their strategic objectives. 
This governance framework model underscores the importance of integrating AI, security, compliance, and management for a holistic approach to cloud governance, providing organizations with the necessary tools to navigate the complexities of cloud computing while maximizing its benefits.
12

Correia, Anacleto, and Pedro B. Água. "Artificial intelligence to enhance corporate governance: A conceptual framework". Corporate Board: Role, Duties and Composition 19, no. 1 (2023): 29–35. http://dx.doi.org/10.22495/cbv19i1art3.

Annotation:
In this preliminary study, we explore the novel intersection of corporate governance (CG) and artificial intelligence (AI), addressing the crucial question: How can AI be leveraged to enhance ethical and transparent decision-making within the corporate environment? Drawing from current studies on organizational governance, AI ethics, and data science, our research raises the curtain on the potential of AI in augmenting traditional governance mechanisms, while also scrutinizing the ethical quandaries and challenges it may pose. We propose a novel conceptual framework, rooted in the principles of separation of ownership and control, and data ethics, to be underpinned and validated, in the future, through an empirical study. Given the current inception stage of the study, we expect the results will illustrate a significant positive impact of AI on CG effectiveness, particularly in enhancing transparency and fostering ethical decision-making. We also propose future studies to be done as a mix of econometric and machine learning methods to empirically test the framework with datasets gathered over a period of years.
13

Richard, Moses Peace. "Legal Perspective on the Use of Artificial Intelligence in Corporate Governance in Nigeria: Potentials and Challenges". Journal of Legal Studies 34, no. 48 (November 6, 2024): 97–118. http://dx.doi.org/10.2478/jles-2024-0016.

Annotation:
From a legal standpoint, this paper critically examines the potential and challenges of deploying Artificial Intelligence (AI) in corporate governance in Nigeria. The examination revealed that leveraging AI in corporate governance could enhance corporate efficiency in Nigeria through improved decision-making, risk management, financial reporting, and stakeholder protection and engagement. However, possible bias and data privacy breaches are significant risks that pose ethical challenges when AI is deployed in corporate governance. In particular, Nigeria is beset by several socio-economic challenges, such as the lack of a robust AI framework and insufficient technological expertise to develop and optimize AI systems. Furthermore, Nigeria currently lacks comprehensive national AI legislation, resulting in the absence of a legal basis for the effective deployment of AI in corporate governance and board management. Against this backdrop, this paper proposes an AI-based corporate governance framework, which can be adapted into future legislative reforms to streamline decision-making processes and improve board accountability and stakeholders' protection in companies. Overall, it argues that AI legislation and policies are vital to the success of efforts to implement AI-based corporate governance in Nigeria.
14

Krishnan, Mahendhiran. "API C4E Augmentation: AI-Powered Agent (AIPA) Framework". International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (November 30, 2024): 988–95. http://dx.doi.org/10.22214/ijraset.2024.65267.

Annotation:
In today's rapidly evolving digital landscape, organizations increasingly rely on Application Programming Interfaces (APIs) for seamless integration and data exchange. API providers, comprising consumers, developers, and project teams, face several challenges in discovering, developing, and mapping APIs in compliance with organizational enterprise architecture principles. This paper addresses these challenges by proposing an AI-Powered Agent (AIPA) Framework that leverages Generative AI and AI code assistant tools to enhance API governance and streamline the API development cycle (API management, development, testing, security, deployment, and monitoring). The proposed framework facilitates API discovery through an interactive chat interface that summarizes available APIs based on the use case. Additionally, it aims to automate key aspects of API development, including compliance checks, security protocols, and document generation, thereby significantly reducing effort and human error and improving productivity and efficiency. Establishing an API Center for Enablement (C4E) ensures consistent adoption of best practices across the organization. By integrating AI-driven solutions with API governance, this paper outlines a pathway for organizations to improve API quality, security, and usability while empowering them to adapt to digital disruptions. The findings suggest that a comprehensive, AI-enhanced governance framework improves operational and development efficiency and fosters innovation.
15

Dey, Diptish, and Debarati Bhaumik. "APPRAISE: a Governance Framework for Innovation with Artificial Intelligence Systems". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 328–40. http://dx.doi.org/10.1609/aies.v7i1.31640.

Annotation:
As artificial intelligence (AI) systems increasingly impact society, the EU Artificial Intelligence Act (AIA) is the first legislative attempt to regulate AI systems. This paper proposes a governance framework for organizations innovating with AI systems. Building upon secondary research, the framework aims to strike a balance between four types of pressure that organizations innovating with AI experience, thereby creating responsible value. These pressures encompass AI/technology, normative, value creation, and regulatory aspects. The framework is partially validated through primary research in two phases. In the first phase, a conceptual model is proposed that measures the extent to which organizational tasks result in AIA compliance, using elements from the AIA as mediators and strategic variables such as organization size, extent of outsourcing, and offshoring as moderators. 34 organizations in the Netherlands are surveyed to test the conceptual model. The average actual compliance score of the 34 participants is low, and most participants exaggerate their compliance. Organization size is found to have a significant impact on AIA compliance. In phase 2, two case studies are conducted with the purpose of generating in-depth insights to validate the proposed framework. The case studies confirm the interplay of the four pressures on organizations innovating with AI and further substantiate the governance framework.
16

Asif, Rameez, Syed Raheel Hassan, and Gerard Parr. "Integrating a Blockchain-Based Governance Framework for Responsible AI". Future Internet 15, no. 3 (February 28, 2023): 97. http://dx.doi.org/10.3390/fi15030097.

Annotation:
This research paper reviews the potential of smart contracts for responsible AI with a focus on frameworks, hardware, energy efficiency, and cyberattacks. Smart contracts are digital agreements that are executed by a blockchain, and they have the potential to revolutionize the way we conduct business by increasing transparency and trust. When it comes to responsible AI systems, smart contracts can play a crucial role in ensuring that the terms and conditions of the contract are fair and transparent as well as that any automated decision-making is explainable and auditable. Furthermore, the energy consumption of blockchain networks has been a matter of concern; this article explores the energy efficiency element of smart contracts. Energy efficiency in smart contracts may be enhanced by the use of techniques such as off-chain processing and sharding. The study emphasises the need for careful auditing and testing of smart contract code in order to protect against cyberattacks along with the use of secure libraries and frameworks to lessen the likelihood of smart contract vulnerabilities.
17

Mariam, Gadmi, Loulid Adil, and Bendarkawi Zakaria. "The Integration of Artificial Intelligence (AI) into Education Systems and Its Impact on the Governance of Higher Education Institutions". International Journal of Professional Business Review 9, no. 12 (December 6, 2024): e05176. https://doi.org/10.26668/businessreview/2024.v9i12.5176.

Annotation:
Objective: The research aims to explore the integration of Artificial Intelligence (AI) within educational systems and analyze its impact on the governance of higher education institutions (HEIs), particularly focusing on decision-making, data protection, and administrative efficiency. Theoretical Framework: The article presents key theories on the transformative role of AI in educational governance, particularly focusing on how AI-driven data analysis and automation enhance decision-making and administrative efficiency. It also addresses theories related to ethical governance, emphasizing data protection and equitable access within higher education institutions. Method: The research methodology in this article is based on a qualitative approach, combining a review of existing literature with case studies of AI implementation in educational contexts. This approach provides in-depth insights into the effects of AI on governance within higher education institutions. Results and Discussion: The research findings highlight that AI integration in higher education governance improves decision-making and operational efficiency through data-driven insights and automation. However, it also reveals challenges, particularly in data protection, ethical concerns, and shifting power dynamics within institutions. The study emphasizes the need for responsible and transparent AI governance to ensure balanced benefits across stakeholders. Research Implications: This research underscores the need for higher education institutions to adopt AI responsibly, balancing its potential to enhance governance and decision-making with rigorous ethical standards, especially in data privacy and equity. It calls on policymakers and administrators to develop frameworks that ensure AI-driven processes remain transparent, inclusive, and aligned with educational values. 
Originality/Value: The originality of this research lies in its focus on how AI specifically transforms governance in higher education institutions, going beyond general applications of AI in education to address ethical, operational, and decision-making challenges unique to institutional governance. It provides a nuanced perspective on balancing innovation with responsibility in an academic setting.
18

Adebola Folorunso, Olufunbi Babalola, Chineme Edgar Nwatu, and Urenna Ukonne. "Compliance and Governance issues in Cloud Computing and AI: USA and Africa". Global Journal of Engineering and Technology Advances 21, no. 2 (November 30, 2024): 127–38. http://dx.doi.org/10.30574/gjeta.2024.21.2.0213.

Annotation:
The rapid expansion of cloud computing and artificial intelligence (AI) has driven transformative change across various industries, presenting both opportunities and challenges in the realms of compliance and governance. This review examines the distinctive and overlapping compliance and governance issues faced by the United States (USA) and African countries in managing cloud computing and AI technologies. In the USA, compliance frameworks such as the California Consumer Privacy Act (CCPA), HIPAA, and the NIST AI Risk Management Framework provide regulatory infrastructure, emphasizing data privacy, sovereignty, and AI ethics. In contrast, African nations, led by South Africa’s Protection of Personal Information Act (POPIA) and regional initiatives like those promoted by the African Union, are developing data protection and AI governance structures within diverse and resource-constrained environments. Key compliance concerns include data privacy, sovereignty, and cross-border data transfers, with the USA focusing on sectoral regulations and Africa on emerging continent-wide data frameworks. Governance challenges differ across regions, especially in data ownership, AI ethics, and risk management; in the USA, well-established risk management frameworks enable more consistent cybersecurity practices, whereas African nations often face hurdles related to limited infrastructure and varying regulatory standards. This comparative analysis underscores the importance of harmonized policies, highlighting the need for collaborative, cross-regional initiatives to mitigate regulatory disparities and foster secure data flows. Ultimately, this review advocates for adaptive, flexible frameworks that incorporate ethical AI guidelines and global best practices, which are essential for supporting sustainable cloud and AI adoption across the USA and Africa. 
Through proactive compliance strategies and enhanced governance mechanisms, these regions can effectively navigate the challenges of a technology-driven global landscape while promoting innovation and protecting stakeholder interests.
19

Zuwanda, Zulkham Sadat, Arief Fahmi Lubis, Nuryati Solapari, Marius Supriyanto Sakmaf, and Andri Triyantoro. "Ethical and Legal Analysis of Artificial Intelligence Systems in Law Enforcement with a Study of Potential Human Rights Violations in Indonesia". Easta Journal Law and Human Rights 2, no. 3 (June 28, 2024): 176–85. http://dx.doi.org/10.58812/eslhr.v2i03.283.

Annotation:
This research examines the ethical and legal implications of deploying Artificial Intelligence (AI) systems in law enforcement, with a particular focus on potential human rights violations in Indonesia. Utilizing a normative analysis approach, the study evaluates existing ethical frameworks, legal principles, and human rights standards to assess the governance and implications of AI-driven policing. Key findings indicate significant ethical concerns, including bias, discrimination, lack of transparency, and privacy violations. The legal analysis reveals gaps in Indonesia’s regulatory framework, highlighting the need for specific legislation to address AI’s complexities. Human rights implications, such as threats to privacy, freedom of expression, and equality, are critically analyzed. Comparative case studies from other jurisdictions provide empirical insights and underscore the importance of robust ethical and legal frameworks. The research proposes several recommendations, including the establishment of clear ethical guidelines, strengthening legal frameworks, enhancing transparency and accountability, promoting public engagement, and conducting regular impact assessments to ensure responsible AI governance in law enforcement. This study aims to contribute to the development of ethical AI governance frameworks and inform policy recommendations for responsible AI deployment in law enforcement practices.
20

Ridzuan, Nurhadhinah Nadiah, Masairol Masri, Muhammad Anshari, Norma Latif Fitriyani and Muhammad Syafrudin. "AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility". Information 15, No. 8 (25.07.2024): 432. http://dx.doi.org/10.3390/info15080432.

This study examines the applications, benefits, challenges, and ethical considerations of artificial intelligence (AI) in the banking and finance sectors. It reviews current AI regulation and governance frameworks to provide insights for stakeholders navigating AI integration. A descriptive analysis based on a literature review of recent research is conducted, exploring AI applications, benefits, challenges, regulations, and relevant theories. This study identifies key trends and suggests future research directions. The major findings include an overview of AI applications, benefits, challenges, and ethical issues in the banking and finance industries. Recommendations are provided to address these challenges and ethical issues, along with examples of existing regulations and strategies for implementing AI governance frameworks within organizations. This paper highlights innovation, regulation, and ethical issues in relation to AI within the banking and finance sectors, analyzes the previous literature, and suggests strategies for AI governance framework implementation as well as future research directions. Innovative applications of AI integrate with fintech in areas such as financial crime prevention, credit risk assessment, customer service, and investment management. These applications improve decision making and enhance the customer experience, particularly in banks. Existing AI regulations and guidelines include those from Hong Kong SAR, the United States, China, the United Kingdom, the European Union, and Singapore. Challenges include data privacy and security, bias and fairness, accountability and transparency, and the skill gap. Therefore, implementing an AI governance framework requires rules and guidelines to address these issues. This paper makes recommendations for policymakers and suggests practical implications with reference to the ASEAN guidelines for AI development at the national and regional levels.
As future research directions, a combination of the extended UTAUT, change theory, and institutional theory, together with critical success factors, can fill the theoretical gap through mixed-method research. The population gap can be addressed by research undertaken in nations where fintech services are projected to be less accepted, such as developing or Islamic countries. In summary, this study presents a novel approach using descriptive analysis, offering four main contributions: (1) the applications of AI in the banking and finance industries, (2) the benefits and challenges of AI adoption in these industries, (3) the current AI regulations and governance, and (4) the types of theories relevant for further research. The research findings are expected to contribute to policy and offer practical implications for fintech development in a country.
21

Saputra, Beny, Hartati Hartati and Olivér Bene. "Hungary's AI Strategy: Lessons for Indonesia's AI Legal Framework Enhancement". Jambe Law Journal 7, No. 1 (06.07.2024): 25–58. http://dx.doi.org/10.22437/home.v7i1.325.

This study analyses Hungary's approach to regulating artificial intelligence (AI) through its AI Strategy (2020–2030) and provides insights for improving Indonesia's legal framework. Although Hungary has no dedicated AI legislation, the country places high importance on adhering to current regulations to govern AI technologies. This paper conducts a comparative analysis to evaluate the influence of Hungary's approach on the advancement of AI, the methods used to enforce regulations, the ethical principles followed, the safeguarding of data, and the extent of international partnerships. By comparing Hungary's regulatory landscape with Indonesia's current framework, this research seeks to offer practical insights for enhancing Indonesia's legal infrastructure in the field of AI governance and technology regulation. The purpose of the research is to provide guidance to policymakers and stakeholders in Indonesia regarding effective tactics and best practices based on Hungary's experience, which will assist in enhancing Indonesia's regulatory framework for AI and technology.
22

Mun, Jimin, Liwei Jiang, Jenny Liang, Inyoung Cheong, Nicole DeCairo, Yejin Choi, Tadayoshi Kohno and Maarten Sap. "Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (16.10.2024): 997–1010. http://dx.doi.org/10.1609/aies.v7i1.31698.

General purpose AI, such as ChatGPT, seems to have lowered the barriers for the public to use AI and harness its power. However, the governance and development of AI still remain in the hands of a few, and the pace of development is accelerating without a comprehensive assessment of risks. As a first step towards democratic risk assessment and design of general purpose AI, we introduce PARTICIP-AI, a carefully designed framework for laypeople to speculate on and assess AI use cases and their impacts. Our framework allows us to study more nuanced and detailed public opinions on AI through collecting use cases, surfacing diverse harms through risk assessment under alternate scenarios (i.e., developing and not developing a use case), and illuminating tensions over AI development through making a concluding choice on its development. To showcase the promise of our framework towards informing democratic AI development, we run a medium-scale study with inputs from 295 demographically diverse participants. Our analyses show that participants' responses emphasize applications for personal life and society, contrasting with most current AI development's business focus. We also surface a diverse set of envisioned harms, such as distrust in AI and institutions, complementary to those defined by experts. Furthermore, we found that the perceived impact of not developing use cases significantly predicted participants' judgements of whether AI use cases should be developed, and highlighted lay users' concerns about techno-solutionism. We conclude with a discussion on how frameworks like PARTICIP-AI can further guide democratic AI development and governance.
23

Mirishli, Shahmar. "Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy". ANCIENT LAND 6, No. 8 (30.08.2024): 40–55. http://dx.doi.org/10.36719/2706-6185/38/40-55.

This article examines the ethical and legal implications of artificial intelligence (AI) driven data collection, focusing on developments from 2023 to 2024. It analyzes recent advancements in AI technologies and their impact on data collection practices across various sectors. The study compares regulatory approaches in the European Union, the United States, and China, highlighting the challenges in creating a globally harmonized framework for AI governance. Key ethical issues, including informed consent, algorithmic bias, and privacy protection, are critically assessed in the context of increasingly sophisticated AI systems. The research explores case studies in healthcare, finance, and smart cities to illustrate the practical challenges of AI implementation. It evaluates the effectiveness of current legal frameworks and proposes solutions encompassing legal and policy recommendations, technical safeguards, and ethical frameworks. The article emphasizes the need for adaptive governance and international cooperation to address the global nature of AI development while balancing innovation with the protection of individual rights and societal values.
24

Olufunbi Babalola, Adebisi Adedoyin, Foyeke Ogundipe, Adebola Folorunso and Chineme Edgar Nwatu. "Policy framework for Cloud Computing: AI, governance, compliance and management". Global Journal of Engineering and Technology Advances 21, No. 2 (30.11.2024): 114–26. http://dx.doi.org/10.30574/gjeta.2024.21.2.0212.

The rapid evolution of cloud computing has transformed data management, operational efficiency, and artificial intelligence (AI) capabilities across industries. However, this advancement presents new challenges in governance, compliance, and management, necessitating a comprehensive policy framework to ensure secure, ethical, and effective cloud usage. This review examines a robust policy framework designed to address these challenges, focusing on the integration of AI, governance practices, regulatory compliance, and cloud management. The framework outlines specific policies for AI, emphasizing ethical considerations, accountability, and transparency, alongside mechanisms for privacy and bias mitigation to foster responsible AI deployment in cloud environments. Governance policies are structured to establish clear data stewardship, risk management, and continuous monitoring protocols, ensuring that cloud resources align with organizational and regulatory standards. Moreover, compliance is addressed through adherence to global standards such as GDPR and HIPAA, with an emphasis on data sovereignty, auditability, and vendor accountability to maintain regulatory alignment across jurisdictions. Management policies within the framework focus on optimizing resource allocation, enforcing Service Level Agreements (SLAs), and developing disaster recovery and business continuity strategies. These management policies aim to balance cost-efficiency with performance reliability. Recognizing the complexities of multi-cloud and hybrid environments, the framework proposes adaptable guidelines that accommodate rapid technological shifts and address security and privacy risks inherent in cloud computing. Through case studies and best practices, this framework offers actionable insights for organizations seeking to implement secure, compliant, and efficient cloud systems. 
In exploring the future landscape, the review anticipates emerging regulations and underscores the importance of industry-wide collaboration in refining cloud policies. This policy framework provides a foundation for organizations to harness the full potential of cloud computing while upholding standards in AI ethics, data governance, and regulatory compliance.
25

Winfield, Alan F. T., and Marina Jirotka. "Ethical governance is essential to building trust in robotics and artificial intelligence systems". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, No. 2133 (15.10.2018): 20180085. http://dx.doi.org/10.1098/rsta.2018.0085.

This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap—which links a number of elements, including ethics, standards, regulation, responsible research and innovation, and public engagement—as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and conclude by proposing five pillars of good ethical governance. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.
26

Hogan, Linda, and Marta Lasek-Markey. "Towards a Human Rights-Based Approach to Ethical AI Governance in Europe". Philosophies 9, No. 6 (30.11.2024): 181. https://doi.org/10.3390/philosophies9060181.

As AI-driven solutions continue to revolutionise the tech industry, scholars have rightly cautioned about the risks of ‘ethics washing’. In this paper, we make a case for adopting a human rights-based ethical framework for regulating AI. We argue that human rights frameworks can be regarded as the common denominator between law and ethics and have a crucial role to play in the ethics-based legal governance of AI. This article examines the extent to which human rights-based regulation has been achieved in the primary example of legislation regulating AI governance, i.e., the EU AI Act 2024/1689. While the AI Act has a firm commitment to protect human rights, which in the EU legal order have been given expression in the Charter of Fundamental Rights, we argue that this alone does not contain adequate guarantees for enforcing some of these rights. This is because issues such as EU competence and the principle of subsidiarity make the idea of protection of fundamental rights by the EU rather than national constitutions controversial. However, we argue that human rights-based, ethical regulation of AI in the EU could be achieved through contextualisation within a values-based framing. In this context, we explore what are termed ‘European values’, which are values on which the EU was founded, notably Article 2 TEU, and consider the extent to which these could provide an interpretative framework to support effective regulation of AI and avoid ‘ethics washing’.
27

Badrul Hisham, Amirah 'Aisha, Nor Ashikin Mohamed Yusof, Siti Hasliah Salleh and Hafiza Abas. "Transforming Governance: A Systematic Review of AI Applications in Policymaking". Journal of Science, Technology and Innovation Policy 10, No. 1 (06.12.2024): 7–15. https://doi.org/10.11113/jostip.v10n1.148.

This systematic literature review examines the transformative applications of artificial intelligence (AI) in policymaking, exploring its potential to enhance decision-making, public engagement, and governance effectiveness. Employing a rigorous research methodology, this review analyzed scholarly articles from Scopus, Web of Science, and PubMed databases using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework, ensuring methodological transparency and reproducibility. The final dataset of 22 articles was synthesized into four key themes: (i) AI in policy development and implementation, which focuses on data-driven decision support in policy formulation; (ii) AI in public administration and governance, highlighting AI's role in improving public sector efficiency and resilience; (iii) ethical and regulatory aspects of AI in policymaking, which addresses critical issues like transparency, privacy, and bias; and (iv) applications of AI in specific policy domains, encompassing areas such as public health, environmental sustainability, and education. Findings indicate that AI can support evidence-based policymaking by facilitating real-time data analysis, scenario modeling, and enhanced public participation. However, challenges persist, particularly concerning ethical considerations, algorithmic accountability, and regulatory frameworks that ensure AI is implemented responsibly and equitably. This review underscores the need for interdisciplinary collaboration, ethical standards, and robust governance frameworks to address these challenges and maximize AI's benefits in policy development and implementation. The synthesis of insights from diverse policy contexts provides a foundation for future research, encouraging exploration of responsible AI integration in policymaking to advance public trust, accountability, and policy effectiveness.
28

Dawson, April. "Algorithmic Adjudication and Constitutional AI—The Promise of A Better AI Decision Making Future?" SMU Science and Technology Law Review 27, No. 1 (2024): 11. http://dx.doi.org/10.25172/smustlr.27.1.3.

Algorithmic governance is when algorithms, often in the form of AI, make decisions, predict outcomes, and manage resources in various aspects of governance. This approach can be applied in areas like public administration, legal systems, policy-making, and urban planning. Algorithmic adjudication involves using AI to assist in or decide legal disputes. This often includes the analysis of legal documents, case precedents, and relevant laws to provide recommendations or even final decisions. The AI models typically used in these emerging decision-making systems use traditionally trained AI systems on large data sets so the system can render a decision or prediction based on past practices. However, the decisions often perpetuate existing biases and can be difficult to explain. Algorithmic decision-making models using a constitutional AI framework (like Anthropic's LLM Claude) may produce results that are more explainable and aligned with societal values. The constitutional AI framework integrates core legal and ethical standards directly into the algorithm’s design and operation, ensuring decisions are made with considerations for fairness, equality, and justice. This article will discuss society’s movement toward algorithmic governance and adjudication, the challenges associated with using traditionally trained AI in these decision-making models, and the potential for better outcomes with constitutional AI models.
29

Kazim, Emre, Danielle Mendes Thame Denny and Adriano Koshiyama. "AI auditing and impact assessment: according to the UK information commissioner's office". AI and Ethics 1, No. 3 (07.02.2021): 301–10. http://dx.doi.org/10.1007/s43681-021-00039-2.

As the use of data and artificial intelligence systems becomes crucial to core services and business, it increasingly demands a multi-stakeholder and complex governance approach. The Information Commissioner's Office's 'Guidance on the AI auditing framework: Draft guidance for consultation' is a move forward in AI governance. The aim of this initiative is to produce guidance that encompasses both technical (e.g. system impact assessments) and non-engineering (e.g. human oversight) components of governance, and it represents a significant milestone in the movement towards standardising AI governance. This paper summarises and critically evaluates the ICO effort, anticipates future debates, and presents some general recommendations.
30

Arroyo, Jinnifer. "Emerging Frontiers of Public Safety: Synergizing AI and Bioengineering for Crime Prevention". Journal of Technology Innovations and Energy 2, No. 3 (19.09.2023): 64–75. http://dx.doi.org/10.56556/jtie.v2i3.595.

The convergence of artificial intelligence (AI) and biological engineering technology (BET) can potentially revolutionize public safety efforts. However, the responsible use of these technologies requires crucial considerations. This study employed an exploratory sequential mixed-methods design to examine the governance mechanisms for AI and BET in the context of crime prevention in the Philippines. It identifies several key components that contribute to establishing governance mechanisms, including multisectoral agencies, legislative initiatives, and regulatory frameworks. The study also identifies a three-factor model for the governance convergence of AI and BET in public safety. These factors include empowerment and sufficiency, ethical considerations, and laws and regulations. The findings underscore the notable implications of integrating AI and BET into public safety efforts, such as improving surveillance systems, proactively preventing public health crises, and optimizing emergency response capabilities. However, ethical considerations and regulatory guidelines must be in place to address privacy concerns and mitigate potential risks associated with these technologies. The convergence of AI and BET also presents opportunities for sustainability. Nevertheless, concerns arise regarding its improper utilization. Based on the study's findings, policy recommendations are directed at ethical considerations, governance and regulation, and sustainability. These policy actions aim to address the opportunities and challenges associated with the convergence of AI and BET in public safety, ensuring responsible and beneficial use within the framework of Public Safety 4.0.
31

Almaqtari, Faozi A. "The Role of IT Governance in the Integration of AI in Accounting and Auditing Operations". Economies 12, No. 8 (01.08.2024): 199. http://dx.doi.org/10.3390/economies12080199.

IT governance is a framework that manages the efficient use of information technology within an organization, focusing on strategic alignment, risk management, resource management, performance measurement, compliance, and value delivery. This study investigates the role of IT governance in integrating artificial intelligence (AI) in accounting and auditing operations. Data were collected from 228 participants from Saudi Arabia using a combination of convenience sampling and snowball sampling methods. The collected data were then analyzed using structural equation modeling. Unexpectedly, the results demonstrate that AI, big data analytics, cloud computing, and deep learning technologies significantly enhance accounting and auditing functions’ efficiency and decision-making capabilities, leading to improved financial reporting and audit processes. The results highlight that IT governance plays a crucial role in managing the complexities of AI integration, aligning business strategies with AI-enabled technologies, and facilitating these advancements. This research fills a gap in previous research and adds significantly to the academic literature by improving the understanding of integrating AI into accounting and auditing processes. It builds on existing theoretical frameworks by investigating the role of IT governance in promoting AI adoption. The findings provide valuable insights for accounting and auditing experts, IT specialists, and organizational leaders. The study provides practical insights on deploying AI-driven technology in organizations to enhance auditing procedures and financial reporting. In a societal context, it highlights the broader implications of AI on transparency, accountability, and trust in financial reporting. Finally, the study offers practitioners, policymakers, and scholars valuable insights on leveraging AI advancements to optimize accounting and auditing operations. 
It highlights IT governance as an essential tool for effectively integrating AI technologies in accounting and auditing operations. However, successful implementation faces significant organizational challenges relating to organizational support, training, data sovereignty, and regulatory compliance.
32

Antonov, Alexander. "Managing complexity: the EU's contribution to artificial intelligence governance". Revista CIDOB d'Afers Internacionals, No. 131 (22.09.2022): 41–65. http://dx.doi.org/10.24241/rcai.2022.131.2.41/en.

With digital ecosystems being questioned around the world, this paper examines the EU’s role in and contribution to the emerging concept of artificial intelligence (AI) governance. Seen by the EU as the key ingredient for innovation, the adoption of AI systems has altered our understanding of governance. Framing AI as an autonomous digital technology embedded in social structures, this paper argues that EU citizens' trust in AI can be increased if the innovation it entails is grounded in a fundamental rights-based approach. This is assessed based on the work of the High-Level Expert Group on AI (which has developed a framework for trustworthy AI) and the European Commission’s recently approved proposal for an Artificial Intelligence Act (taking a risk-based approach).
33

Hilal Muhammad, Mohd, Muhammad Khairul Zharif Nor A'zam and Mohammad Daniel Shukor. "AI-Powered Governments: The Role of Data Analytics and Visualization in Accelerating Digital Transformation". International Journal of Research and Innovation in Social Science VIII, No. IX (2024): 2901–13. http://dx.doi.org/10.47772/ijriss.2024.8090243.

This study addresses the growing challenges faced by governments in utilising the potential of AI and data analytics for digital transformation, focusing on how these technologies can enhance public service delivery, transparency, and citizen engagement. Despite their transformative potential, the majority of governments struggle with the adoption and integration of AI-powered systems due to infrastructural, regulatory, and ethical concerns. The aim of this study is to investigate the role of data analytics and visualization tools in accelerating digital transformation within AI-powered governments. By synthesizing relevant theories such as the Unified Theory of Acceptance and Use of Technology (UTAUT), Institutional Theory, and the Dynamic Capabilities Framework, this conceptual paper provides a strong theoretical foundation for understanding AI adoption in governance. The methodology involves a comprehensive literature review, analysing past studies and theoretical frameworks related to AI, data analytics, and public sector digital transformation. The findings reveal that AI-powered systems can significantly improve governance outcomes by enabling real-time insights and decision-making, while visualization tools enhance transparency and accountability. However, challenges remain, particularly regarding data privacy, digital infrastructure, and equitable access to services. The study's implications are both theoretical and practical. Theoretically, it contributes to understanding how AI and data analytics are reshaping governance, while practically, it highlights the need for governments to invest in digital infrastructure and develop dynamic capabilities to adapt to technological advancements. Future research should focus on addressing the specific challenges faced by emerging economies and exploring the ethical implications of AI in governance.
34

Odero, Brenda, David Nderitu and Gabrielle Samuel. "The Ubuntu Way: Ensuring Ethical AI Integration in Health Research". Wellcome Open Research 9 (28.10.2024): 625. http://dx.doi.org/10.12688/wellcomeopenres.23021.1.

The integration of artificial intelligence (AI) in health research has grown rapidly, particularly in African nations, which have also been developing data protection laws and AI strategies. However, the ethical frameworks governing AI use in health research are often based on Western philosophies, focusing on individualism, and may not fully address the unique challenges and cultural contexts of African communities. This paper advocates for the incorporation of African philosophies, specifically Ubuntu, into AI health research ethics frameworks to better align with African values and contexts. This study explores the concept of Ubuntu, a philosophy that emphasises communalism, interconnectedness, and collective well-being, and its application to AI health research ethics. By analysing existing global AI ethics frameworks and contrasting them with the Ubuntu philosophy, a new ethics framework is proposed that integrates these perspectives. The framework is designed to address ethical challenges at individual, community, national, and environmental levels, with a particular focus on the African context. The proposed framework highlights four key principles derived from Ubuntu: communalism and openness, harmony and support, research prioritisation and community empowerment, and community-oriented decision-making. These principles are aligned with global ethical standards such as justice, beneficence, transparency, and accountability but are adapted to reflect the communal and relational values inherent in Ubuntu. The framework aims to ensure that AI-driven health research benefits communities equitably, respects local contexts and promotes long-term sustainability. Integrating Ubuntu into AI health research ethics can address the limitations of current frameworks that emphasise individualism. 
This approach not only aligns with African values but also offers a model that could be applied more broadly to enhance the ethical governance of AI in health research worldwide. By prioritising communal well-being, inclusivity, and environmental stewardship, the proposed framework has the potential to foster more responsible and contextually relevant AI health research practices in Africa.
35

Hachkevych, A. "Hiroshima AI Process". Analytical and Comparative Jurisprudence, No. 3 (22.07.2024): 574–83. http://dx.doi.org/10.24144/2788-6018.2024.03.98.

The focus of this study is the Hiroshima AI Process, which ranks among the most important AI global governance initiatives of recent years. Over the course of 2023, the G7 states discussed current issues of AI regulation and adopted a number of documents, among them documents that lay the foundations of a comprehensive policy framework on AI and are recommended for implementation by the relevant actors in this field. The author investigated the Hiroshima AI Process as a distinct phenomenon from the standpoint of the influence of international cooperation on the development of AI, and examined several documents reflecting current trends in AI governance. These include the Takasaki Ministerial Declaration with the results of the meeting of Digital and Tech Ministers, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, and the Report "Towards a G7 Common Understanding on Generative AI". The author's interest in this global governance initiative is largely explained by the fact that the positions of highly developed states, each with its own vision of AI policy and regulation, were agreed upon and reflected after the discussions. The documents prepared will have a significant impact on the domain of AI far beyond the borders of the G7 states, given their important role in law- and policy-making processes at the global level. The study further reveals that international cooperation among the G7 has established a superstructure for the generally recognized AI principles of trustworthiness, accountability, transparency, and fairness. On the basis of the analysis of the Ministerial Declaration, a G7 road map for international discussions on advanced AI systems has been created. The findings are valuable both for a better understanding of AI global governance and for determining the near-term prospects for regulation.
36

Strome, Trevor. "Data governance best practices for the AI-ready airport". Journal of Airport Management 19, No. 1 (01.12.2024): 57. https://doi.org/10.69554/zhbt2608.

Airports generate vast amounts of data across various systems that are crucial for operational efficiency, safety and enhanced passenger experience. As airports increasingly rely on this data in order to adapt to evolving passenger experience demands, business environments and regulatory requirements, data governance becomes essential for managing, safeguarding and leveraging data effectively. A robust data governance framework provides the structure for ensuring data quality, security and compliance while enabling airports to harness data for artificial intelligence (AI) applications such as predictive maintenance and passenger flow management. By starting with a clear scope, objectives and policies, airports can build a data governance framework that addresses both current needs and future challenges. This paper explores the role of data governance in making airports AI-ready, outlining best practices for implementing a governance programme. It highlights the importance of tools such as data lineage tracking and the need for strong data security measures to comply with regulations such as General Data Protection Regulation (GDPR). The paper also emphasises the need for a culture of accountability, outlining key roles such as data stewards and chief data officers (CDOs) to ensure consistent, robust data management. As AI adoption grows, airports must focus on maintaining data integrity, fostering transparency and ensuring regulatory compliance to unlock the full potential of their data assets while safeguarding privacy and building stakeholder trust.
APA, Harvard, Vancouver, ISO and other citation styles
37

Tong, Yifeng. „Research on Criminal Risk Analysis and Governance Mechanism of Generative Artificial Intelligence such as ChatGPT“. Studies in Law and Justice 2, No. 2 (June 2023): 85–94. http://dx.doi.org/10.56397/slj.2023.06.13.

Full text of the source
Annotation:
The launch of ChatGPT marks a breakthrough in the development of generative artificial intelligence (generative AI), a technology whose core operating mechanism is the collection of and learning from big data. Generative AI now exhibits high intelligence and high generalization, and has consequently given rise to various criminal risks. The current criminal law norms are inadequate in terms of the attribution system and crime norms for the criminal risks caused by generative AI technologies. We should therefore clarify the types of risks and governance challenges posed by generative AI, establish that data is the object of risk governance, and on this basis create a pluralistic liability allocation mechanism and a mixed legal governance framework spanning criminal, civil, and administrative law.
APA, Harvard, Vancouver, ISO and other citation styles
38

Bansal, Saurabh, and Dr Neelesh Jain. „A Comprehensive Study Assessing the Transformative Role of Artificial Intelligence in India's Governance Policy Framework“. International Journal for Research in Applied Science and Engineering Technology 11, No. 7 (31.07.2023): 1748–56. http://dx.doi.org/10.22214/ijraset.2023.54973.

Full text of the source
Annotation:
Abstract: Artificial Intelligence (AI) has gained significant prominence worldwide, and India is actively embracing its potential for transforming various sectors. This paper comprehensively studies the intersection between artificial intelligence and Indian government policies. It explores the opportunities, challenges, and implications of AI implementation in the Indian context, and discusses the evolving role of the Indian government in harnessing AI technologies. The paper addresses the challenges and risks associated with AI implementation in India, including ethical considerations, socioeconomic implications, privacy concerns, workforce capacity building, and infrastructure requirements. This section underscores the need for appropriate policies and regulatory frameworks to address these challenges effectively. The paper further examines the Indian government's policies and initiatives on AI, including the national AI strategy, policy frameworks, AI centres of excellence, startup ecosystem, and international collaboration efforts. It delves into key policy considerations such as data governance, ethical and responsible AI, regulation and standards, skills development, and inclusivity.
APA, Harvard, Vancouver, ISO and other citation styles
39

Barnes, Emily, and James Hutson. „Navigating the ethical terrain of AI in higher education: Strategies for mitigating bias and promoting fairness“. Forum for Education Studies 2, No. 2 (21.06.2024): 1229. http://dx.doi.org/10.59400/fes.v2i2.1229.

Full text of the source
Annotation:
Artificial intelligence (AI) and machine learning (ML) are transforming higher education by enhancing personalized learning and academic support, yet they pose significant ethical challenges, particularly in terms of inherent biases. This review critically examines the integration of AI in higher education, underscoring the dual aspects of its potential to innovate educational paradigms and the essential need to address ethical implications to avoid perpetuating existing inequalities. The researchers employed a methodological approach that analyzed case studies and literature as primary data collection methods, focusing on strategies to mitigate biases through technical solutions, diverse datasets, and strict adherence to ethical guidelines. Their findings indicate that establishing an ethical AI environment in higher education is imperative and involves comprehensive efforts across policy regulation, governance, and education. The study emphasizes the significance of interdisciplinary collaboration in addressing the complexities of AI bias, highlighting how policy, regulation, governance, and education play pivotal roles in creating an ethical AI framework. Ultimately, the paper advocates for continuous vigilance and proactive strategies to ensure that AI contributes positively to educational settings, stressing the need for robust frameworks that integrate ethical considerations throughout the lifecycle of AI systems to ensure their responsible and equitable use.
APA, Harvard, Vancouver, ISO and other citation styles
40

R, Dr Chaya, and Syed Salman. „The Present State of Artificial Intelligence in the Indian Legal System and its Monitoring“. June-July 2023, No. 34 (27.07.2023): 12–21. http://dx.doi.org/10.55529/jls.34.12.21.

Full text of the source
Annotation:
Artificial intelligence has grown to be an essential element of many industries, including the legal sector. It is critical to guarantee that AI is used safely and responsibly throughout the country, and the actions of the Indian government and business organisations are an excellent place to start. As AI becomes more integrated into the legal system and other sectors, various legislative frameworks controlling its application and use in India have been emerging, so it becomes essential to understand India's legal framework for AI governance and monitoring. The study focuses on the numerous legal and regulatory frameworks in India that govern the development and application of AI. It also covers several national laws, guidelines, and regulations emphasising responsible and ethical AI implementation, along with an identification of countries that are encouraging regulators and lawmakers to implement AI regulations.
APA, Harvard, Vancouver, ISO and other citation styles
41

MURKO, Eva, Matej BABŠEK and Aleksander ARISTOVNIK. „Artificial intelligence and public governance models in socioeconomic welfare: some insights from Slovenia“. Administratie si Management Public 43, No. 43 (01.12.2024): 41–60. https://doi.org/10.24818/amp/2024.43-03.

Full text of the source
Annotation:
This paper investigates the adoption of artificial intelligence (AI) in public governance and its impact on socioeconomic welfare, focusing on Slovenian Social Work Centres (SWCs). The objectives are to assess how AI applications align with governance models such as (Neo)Weberian Bureaucracy, New Public Management (NPM), and Good Governance, and to evaluate their effectiveness in promoting socioeconomic welfare. Furthermore, the study aims to identify opportunities and risks associated with AI in public governance and to provide policy recommendations for the ethical and effective integration of AI. A mixed-methods approach is adopted, comprising a comprehensive literature review to develop a theoretical framework, a cross-tabulation analysis of the European Commission's dataset of 686 AI use cases in 27 EU Member States, and a case study of AI implementation in Slovenian SWCs. This includes the analysis of administrative data from 2018–2022 on the e-Welfare platform and analysis of reports from Slovenian oversight bodies such as the Court of Audit, the Administrative Inspection, and the Human Rights Ombudsman. The results show that AI significantly improves administrative efficiency, particularly in the areas of resource management, cost-effectiveness, and service quality, which closely align with NPM principles. However, challenges remain in terms of transparency and accountability, as AI systems are often not transparent, making oversight difficult and jeopardising public trust, especially in the area of social welfare. The study concludes that while AI has significant potential to improve public governance, appropriate regulation and human oversight are essential to mitigate risks and ensure compliance with governance principles. 
The study provides valuable insights into the role of AI in administrative efficiency and is therefore relevant to policymakers, public officials, and researchers aiming to leverage AI's benefits while ensuring ethical governance and equitable socioeconomic outcomes.
APA, Harvard, Vancouver, ISO and other citation styles
42

Gasimli, Vusal, and Ismat Mehraliyeva. „Implementation of AI in program-based governance in Azerbaijan“. InterConf, No. 40(183) (20.12.2023): 549–57. http://dx.doi.org/10.51582/interconf.19-20.12.2023.054.

Full text of the source
Annotation:
In contemporary governance, program-based governance has gained widespread adoption across various sectors. However, the increasing demand for human labor in data management, transmission, and analysis poses challenges to efficient administration. As societies grapple with increasingly complex challenges, the need for efficient and adaptive governance mechanisms becomes paramount. In response, the intersection of artificial intelligence (AI) technologies and program-based governance presents a way to enhance overall managerial efficiency. This article provides a comprehensive overview of program-based governance within the framework of the monitoring and evaluation process, using insights from Azerbaijan. Through the exploration of case studies and practical applications, including a personalized chatbot model created using advanced natural language processing models such as GPT, the article identifies key functionalities where strategic AI integration can optimize and strengthen programmatic management. In the first implemented solution, users can swiftly obtain summarized results on progress reports, including details on goal achievement, challenges, and other pertinent information, in the Azerbaijani language through the personalized bot on the portal.
APA, Harvard, Vancouver, ISO and other citation styles
43

Dev, Deepthi Kallahakalu Vijay. „AI-Enhanced Data Governance for Modernizing the US Court System“. International Journal of Engineering and Advanced Technology Studies 12, No. 4 (15.04.2024): 48–55. https://doi.org/10.37745/ijeats.13/vol12n44855.

Full text of the source
Annotation:
The US court system is currently burdened by inefficiencies, data silos, and security vulnerabilities that urgently require modernization to restore public trust. Outdated legacy systems, fragmented data practices, and limited interoperability hinder case management and transparency. A robust data governance framework powered by cutting-edge technologies like Artificial Intelligence (AI), blockchain, and federated learning is essential to address these pressing challenges. This paper explores how AI-enhanced data governance can swiftly transform the judicial system by ensuring data integrity, security, and accessibility. It presents solutions that modernize the court system and offer scalable applications for other sectors, such as healthcare, finance, and education. Adopting centralized data platforms, AI-driven data management, and advanced encryption methods can enhance operational efficiency, reduce biases, and improve decision-making processes. By leveraging this technology-driven framework, the judiciary can deliver justice more effectively, regain public trust, and set a precedent for modernization across industries.
APA, Harvard, Vancouver, ISO and other citation styles
44

Wang, Fang, Chao Zhang, Shengnan Yang, Xiaozhong Liu and Ying-Hsang Liu. „Human Subjectivity in Information Practice and AI Governance“. Proceedings of the Association for Information Science and Technology 61, No. 1 (October 2024): 828–32. http://dx.doi.org/10.1002/pra2.1111.

Full text of the source
Annotation:
The rise of Artificial Intelligence (AI) introduces a notable tension in the realm of traditional, human-centric information practices, where human subjectivity has been pivotal in both influencing and being influenced by our interactions with information. An excessive reliance on AI distances humans from practices, potentially diminishing human subjectivity. Additionally, as AI takes on roles once exclusively human, it might constrict opportunities for personal growth and the cultivation of unique insights. Moreover, this technological dependency could dilute the richness of direct human interactions, weakening the fabric of social bonds. These issues (increased AI dependence, AI's encroachment on human roles, and the degradation of social ties) underscore the urgent necessity to revisit our interaction with technology, ensuring it serves to enrich rather than undermine the human experience. In light of this, our panel gathers experts to explore strategies for preserving human subjectivity through cognitive autonomy, creative agency, and social connectivity in the age of AI-driven information practices. Our dialogue also aims to develop a comprehensive AI governance framework, scrutinized from an interdisciplinary perspective, continually refined in collaboration with academic communities, such as ASIS&T, to solidify and enhance our approaches.
APA, Harvard, Vancouver, ISO and other citation styles
45

Neupane, Aarati. „Transformation of Public Service: Rise of Technology, AI and Automation“. Prashasan: The Nepalese Journal of Public Administration 55, No. 2 (31.12.2023): 42–52. http://dx.doi.org/10.3126/prashasan.v55i2.63538.

Full text of the source
Annotation:
The rapid advancement of technology has proven beneficial for enhancing efficiency, transparency, responsiveness, and effectiveness in the administration of public services. The internalization of technological innovations, including AI and automation, has become imperative to address the increasing needs and expectations of citizens in modern governance. Various developed and developing countries have revolutionized public services through the adoption of AI and automation. Nepal has made strides in digital governance, advancing from a paper-based form of governance to digital governance. Moreover, the Digital Nepal Framework aspires to restructure the economy through increased service delivery, production, and productivity by harnessing digital technology. The issues of the digital divide, ethical and responsible adoption of AI, citizen engagement and participation, and inclusiveness have been challenging. The government should prioritize investments in digital infrastructure, encourage tech innovations, and promote capacity development in the field of technology.
APA, Harvard, Vancouver, ISO and other citation styles
46

Oluwaseun Adeola Bakare, Onoriode Reginald Aziza, Ngozi Samuel Uzougbo and Portia Oduro. „A governance and risk management framework for project management in the oil and gas industry“. Open Access Research Journal of Science and Technology 12, No. 1 (30.09.2024): 121–30. http://dx.doi.org/10.53022/oarjst.2024.12.1.0119.

Full text of the source
Annotation:
The oil and gas industry operates in an environment marked by high risks, complex regulations, and significant financial investments. Effective project management in this sector requires a robust governance and risk management framework to address operational, regulatory, financial, and environmental challenges. This review proposes a comprehensive governance and risk management framework tailored specifically to the unique needs of oil and gas projects. The framework integrates governance structures that define clear roles, responsibilities, and decision-making processes, ensuring that projects align with corporate objectives and compliance standards. Central to this framework is a risk management model that identifies, assesses, and mitigates potential project risks. The model emphasizes continuous risk monitoring, utilizing advanced technologies such as predictive analytics, AI, and digital twins to forecast risks and optimize decision-making. Additionally, it advocates for proactive stakeholder engagement strategies to ensure alignment among diverse stakeholders, including government agencies, investors, and local communities. The governance aspect of the framework promotes transparency, accountability, and sustainability, ensuring that projects meet both operational goals and regulatory requirements. It also introduces agile project management techniques to enhance flexibility and responsiveness throughout the project lifecycle. By integrating technological tools like blockchain for contract management and AI for predictive maintenance, the framework addresses both governance and risk management challenges. Ultimately, the review serves as a guide for oil and gas companies to enhance project efficiency, minimize risks, and meet the demands of an increasingly complex industry landscape.
APA, Harvard, Vancouver, ISO and other citation styles
47

Yu, Yifan. „Military AIs impacts on international strategic stability“. Applied and Computational Engineering 4, Nr. 1 (14.06.2023): 20–25. http://dx.doi.org/10.54254/2755-2721/4/20230339.

Full text of the source
Annotation:
Technological revolution has brought major changes to the system framework of strategic stability. Artificial intelligence (AI) has thrived internationally as a disruptive technology and has been applied to many fields in the 21st century. This paper evaluates the strengths, limitations, and impacts of AI-empowered military applications on international strategic stability. The application of AI technology for military purposes has both positive and negative effects on nations' defense and offense, and thus on international strategic stability; overall, however, the impact of AI on international strategic stability is mainly negative. To face these stability challenges, nations should formulate systematic governance of military AI, and the global community should promote friendly multilateral cooperation. In the end, this view offers significant implications for maintaining international strategic stability and improving AI governance capabilities in the foreseeable future.
APA, Harvard, Vancouver, ISO and other citation styles
48

Philip Olaseni Shoetan, Olukunle Oladipupo Amoo, Enyinaya Stefano Okafor and Oluwabukunmi Latifat Olorunfemi. „SYNTHESIZING AI'S IMPACT ON CYBERSECURITY IN TELECOMMUNICATIONS: A CONCEPTUAL FRAMEWORK“. Computer Science & IT Research Journal 5, No. 3 (18.03.2024): 594–605. http://dx.doi.org/10.51594/csitrj.v5i3.908.

Full text of the source
Annotation:
As the telecommunications sector increasingly relies on interconnected digital infrastructure, the proliferation of cyber threats poses significant challenges to security and operational integrity. This review presents a conceptual framework for understanding and harnessing the potential of artificial intelligence (AI) in fortifying cybersecurity within the telecommunications industry. The framework integrates the transformative capabilities of AI with the unique demands of cybersecurity in telecommunications, aiming to enhance threat detection, mitigation, and response strategies. It takes a multidimensional approach covering both technical and organizational facets, recognizing the interconnectedness of technology, human factors, and regulatory environments. Firstly, the framework delves into the application of AI in bolstering proactive threat intelligence gathering and analysis. Through advanced algorithms and machine learning techniques, AI empowers telecom operators to identify anomalous patterns, predict potential vulnerabilities, and pre-emptively adapt defensive measures. Secondly, it explores AI-driven solutions for dynamic risk assessment and adaptive cybersecurity protocols. By leveraging real-time data analytics and automated decision-making, telecom networks can swiftly adapt to evolving threats and ensure continuous protection against intrusions or breaches. Furthermore, the framework emphasizes the role of AI in augmenting human capabilities through intelligent automation and cognitive assistance. By offloading routine tasks and providing context-aware insights, AI enables cybersecurity professionals to focus on strategic initiatives and complex threat scenarios. Lastly, the framework addresses the imperative of ethical considerations, accountability, and transparency in deploying AI for cybersecurity in telecommunications. It advocates for responsible AI governance frameworks that prioritize privacy, fairness, and bias mitigation while fostering collaboration across industry stakeholders. In summary, this conceptual framework provides a roadmap for harnessing AI's transformative potential to fortify cybersecurity resilience in telecommunications, thereby safeguarding critical infrastructure and ensuring the integrity of global communication networks.
Keywords: AI, Cybersecurity, Telecommunication, Framework, Conceptual, Impact, Review.
APA, Harvard, Vancouver, ISO and other citation styles
49

Matai, Puneet. „The Imperative of AI Governance: Steering BFSI towards Responsible Innovation“. Journal of Artificial Intelligence & Cloud Computing, 28.02.2024, 1–5. http://dx.doi.org/10.47363/jaicc/2024(3)319.

Full text of the source
Annotation:
This whitepaper focuses on the critical concept of AI Governance in the Banking, Financial Services, and Insurance (BFSI) sector. It highlights the need for regulations and governance frameworks to mitigate risks associated with accelerated adoption of AI in BFSI, including unethical bias, potential privacy breaches and threats to market stability. The paper advocates for the adoption of Explainable AI (XAI) frameworks to enhance transparency and prevent algorithmic bias. It also examines the evolving global landscape of AI governance, citing examples like the EU AI Act and Singapore’s Model AI Governance Framework. In conclusion, the paper suggests implementing a robust AI governance framework for the financial services industry to address legal, ethical, and regulatory challenges and promote responsible innovation.
APA, Harvard, Vancouver, ISO and other citation styles
50

Ligot, Dominic Vincent. „AI Governance: A Framework for Responsible AI Development“. SSRN Electronic Journal, 2024. http://dx.doi.org/10.2139/ssrn.4817726.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles

To the bibliography