To view the other types of publications on this topic, follow this link: Responsible Artificial Intelligence.

Journal articles on the topic 'Responsible Artificial Intelligence'

Create a source reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic 'Responsible Artificial Intelligence'.

Next to every entry in the bibliography there is an 'Add to bibliography' option. Use it, and a bibliographic reference to the selected work will be generated automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are provided in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Wang, Tawei. "Responsible Use of Artificial Intelligence". International Journal of Computer Auditing 4, No. 1 (December 2022): 001–3. http://dx.doi.org/10.53106/256299802022120401001.

Abstract:
Artificial intelligence (AI) has once again attracted the public's attention in the past decade. Riding on the wave of the big data revolution, the development of AI is much more promising than it was 50 years ago. One example is ChatGPT (https://openai.com/blog/chatgpt/), which has quickly become a hot topic in the past several months and has brought back all kinds of discussion about AI and decision making. In this editorial, I would like to highlight several perspectives that may help us rethink the implications of using AI for decision making, especially for audit professionals.
2

Teng, C. L., A. S. Bhullar, P. Jermain, D. Jordon, R. Nawfel, P. Patel, R. Sean, M. Shang, and D. H. Wu. "Responsible Artificial Intelligence in Radiation Oncology". International Journal of Radiation Oncology*Biology*Physics 120, No. 2 (October 2024): e659. http://dx.doi.org/10.1016/j.ijrobp.2024.07.1446.
3

Gregor, Shirley. "Responsible Artificial Intelligence and Journal Publishing". Journal of the Association for Information Systems 25, No. 1 (2024): 48–60. http://dx.doi.org/10.17705/1jais.00863.

Abstract:
The aim of this opinion piece is to examine the responsible use of artificial intelligence (AI) in relation to academic journal publishing. The work discusses approaches to AI with particular attention to recent developments with generative AI. Consensus is noted around eight normative themes for principles for responsible AI and their associated risks. A framework from Shneiderman (2022) for human-centered AI is employed to consider journal publishing practices that can address the principles of responsible AI at different levels. The resultant AI principled governance matrix (AI-PGM) for journal publishing shows how countermeasures for risks can be employed at the levels of the author-researcher team, the organization, the industry, and by government regulation. The AI-PGM allows a structured approach to responsible AI and may be modified as developments with AI unfold. It shows how the whole publishing ecosystem should be considered when looking at the responsible use of AI—not just journal policy itself.
4

Haidar, Ahmad. "An Integrative Theoretical Framework for Responsible Artificial Intelligence". International Journal of Digital Strategy, Governance, and Business Transformation 13, No. 1 (December 15, 2023): 1–23. http://dx.doi.org/10.4018/ijdsgbt.334844.

Abstract:
The rapid integration of Artificial Intelligence (AI) into various sectors has yielded significant benefits, such as enhanced business efficiency and customer satisfaction, while posing challenges, including privacy concerns, algorithmic bias, and threats to autonomy. In response to these multifaceted issues, this study proposes a novel integrative theoretical framework for Responsible AI (RAI), which addresses four key dimensions: technical, sustainable development, responsible innovation management, and legislation. The responsible innovation management and the legal dimensions form the foundational layers of the framework. The first embeds elements like anticipation and reflexivity into corporate culture, and the latter examines AI-specific laws from the European Union and the United States, providing a comparative perspective on legal frameworks governing AI. The study's findings may be helpful for businesses seeking to responsibly integrate AI, developers who focus on creating responsibly compliant AI, and policymakers looking to foster awareness and develop guidelines for RAI.
5

Shneiderman, Ben. "Responsible AI". Communications of the ACM 64, No. 8 (August 2021): 32–35. http://dx.doi.org/10.1145/3445973.
6

Dignum, Virginia. "Responsible Artificial Intelligence – From Principles to Practice". ACM SIGIR Forum 56, No. 1 (June 2022): 1–6. http://dx.doi.org/10.1145/3582524.3582529.

Abstract:
The impact of Artificial Intelligence does not depend only on fundamental research and technological developments, but for a large part on how these systems are introduced into society and used in everyday situations. AI is changing the way we work, live and solve challenges but concerns about fairness, transparency or privacy are also growing. Ensuring responsible, ethical AI is more than designing systems whose result can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. In order to develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools which provide concrete support to AI practitioners, as well as awareness and training to enable participation of all, to ensure the alignment of AI systems with our societies' principles and values. This paper is a curated version of my keynote at the Web Conference 2022.
7

Rodrigues, Rowena, Anais Resseguier, and Nicole Santiago. "When Artificial Intelligence Fails". Public Governance, Administration and Finances Law Review 8, No. 2 (December 14, 2023): 17–28. http://dx.doi.org/10.53116/pgaflr.7030.

Abstract:
Diverse initiatives promote the responsible development, deployment and use of Artificial Intelligence (AI). AI incident databases have emerged as a valuable and timely learning resource and tool in AI governance. This article assesses the value of such databases and outlines how this value can be enhanced. It reviews four databases: the AI Incident Database, the AI, Algorithmic, and Automation Incidents and Controversies Repository, the AI Incident Tracker and Where in the World Is AI. The article provides a descriptive analysis of these databases, examines their objectives, and locates them within the landscape of initiatives that advance responsible AI. It reflects on their primary objective, i.e. learning from mistakes to avoid them in the future, and explores how they might benefit diverse stakeholders. The article supports the broader uptake of these databases and recommends four key actions to enhance their value.
8

VASYLKIVSKYI, Mikola, Ganna VARGATYUK, and Olga BOLDYREVA. "INTELLIGENT RADIO INTERFACE WITH THE SUPPORT OF ARTIFICIAL INTELLIGENCE". Herald of Khmelnytskyi National University. Technical sciences 217, No. 1 (February 23, 2023): 26–32. http://dx.doi.org/10.31891/2307-5732-2023-317-1-26-32.

Abstract:
The peculiarities of the implementation of the 6G intelligent radio interface infrastructure, which will use an individual configuration for each individual subscriber application and flexible services with lower overhead costs, have been studied. A personalized infrastructure consisting of an AI-enabled intelligent physical layer, an intelligent MAC controller, and an intelligent protocol is considered, followed by a potentially novel AI-based end-to-end (E2E) device. The intelligent controller is investigated, in particular the intelligent functions at the MAC level, which may become key components of the intelligent controller in the future. The joint optimization of these components, which will provide better system performance, is considered. It was determined that instead of using a complex mathematical method of optimization, it is possible to use machine learning, which has less complexity and can adapt to network conditions. A 6G radio interface design based on a combination of model-driven and data-driven artificial intelligence is investigated and is expected to provide customized radio interface optimization from pre-configuration to self-learning. The specifics of configuring the network scheme and transmission parameters at the level of subscriber equipment and services using a personalized radio interface to maximize the individual user experience without compromising the throughput of the system as a whole are determined. Artificial intelligence is considered, which will be a built-in function of the radio interface that creates an intelligent physical layer and is responsible for MAC access control, network management optimization (such as load balancing and power saving), replacing some non-linear or non-convex algorithms in receiver modules or compensation of shortcomings in non-linear models. Built-in intelligence has been studied, which will make the 6G physical layer more advanced and efficient, facilitate the optimization of structural elements of the physical layer and procedural design, including the possible change of the receiver architecture, will help implement new detection and positioning capabilities, which, in turn, will significantly affect the design of radio interface components. The requirements for the 6G network are defined, which provide for the creation of a single network with scanning and communication functions, which must be integrated into a single structure at the stage of radio interface design. The specifics of carefully designing a communication and scanning network that will offer full scanning capabilities and more fully meet all key performance indicators in the communications industry are explored.
9

Germanov, Nikolai S. "The concept of responsible artificial intelligence as the future of artificial intelligence in medicine". Digital Diagnostics 4, No. 1S (June 26, 2023): 27–29. http://dx.doi.org/10.17816/dd430334.

Abstract:
Active deployment of artificial intelligence (AI) systems in medicine creates many challenges. Recently, the concept of responsible artificial intelligence (RAI) was widely discussed, which is aimed at solving the inevitable ethical, legal, and social problems. The scientific literature was analyzed and the possibility of applying the RAI concept to overcome the existing AI problems in medicine was considered. Studies of possible AI applications in medicine showed that current algorithms are unable to meet the basic enduring needs of society, particularly, fairness, transparency, and reliability. The RAI concept based on three principles accountability for AI activities and responsibility and transparency of findings (ART) was proposed to address ethical issues. Further evolution, without the development and application of the ART concept, turns dangerous and impossible the use of AI in such areas as medicine and public administration. The requirements for accountability and transparency of conclusions are based on the identified epistemological (erroneous, non-transparent, and incomplete conclusions) and regulatory (data confidentiality and discrimination of certain groups) problems of using AI in digital medicine [2]. Epistemological errors committed by AI are not limited to omissions related to the volume and representativeness of the original databases analyzed. In addition, these include the well-known black box problem, i.e. the inability to look into the process of forming AI outputs when processing input data. Along with epistemological errors, normative problems inevitably arise, including patient confidentiality and discrimination of some social groups due to the refusal of some patients to provide medical data for training algorithms and as part of the analyzed databases, which will lead to inaccurate AI conclusions in cases of certain gender, race, and age. Importantly, the methodology of the AI data analysis depends on the program code set by the programmer, whose epistemological and logical errors are projected onto the AI. Hence the problem of determining responsibility in the case of erroneous conclusions, i.e. its distribution between the program itself, the developer, and the executor. Numerous professional associations design ethical standards for developers and a statutory framework to regulate responsibility between the links described. However, the state must play the greatest role in the development and approval of such legislation. The use of AI in medicine, despite its advantages, is accompanied by many ethical, legal, and social challenges. The development of RAI has the potential both to solve these challenges and to further the active and secure deployment of AI systems in digital medicine and healthcare.
10

Tyrranen, V. A. "ARTIFICIAL INTELLIGENCE CRIMES". Territory Development, No. 3(17) (2019): 10–13. http://dx.doi.org/10.32324/2412-8945-2019-3-10-13.

Abstract:
The article is devoted to current threats to information security associated with the widespread dissemination of computer technology. The author considers one of the aspects of cybercrime, namely crime using artificial intelligence. The concept of artificial intelligence is analyzed, a definition is proposed that is sufficient for effective enforcement. The article discusses the problems of criminalizing such crimes, the difficulties of solving the issue of legal personality and delinquency of artificial intelligence are shown. The author gives various cases, explaining why difficulties arise in determining the person responsible for the crime, gives an objective assessment of the possibility of criminal prosecution of the creators of the software, in the work of which there were errors that caused harm to the rights protected by criminal law and legitimate interests.
11

Vourganas, Ioannis, Vladimir Stankovic, and Lina Stankovic. "Individualised Responsible Artificial Intelligence for Home-Based Rehabilitation". Sensors 21, No. 1 (December 22, 2020): 2. http://dx.doi.org/10.3390/s21010002.

Abstract:
Socioeconomic reasons post-COVID-19 demand unsupervised home-based rehabilitation and, specifically, artificial ambient intelligence with individualisation to support engagement and motivation. Artificial intelligence must also comply with accountability, responsibility, and transparency (ART) requirements for wider acceptability. This paper presents such a patient-centric individualised home-based rehabilitation support system. To this end, the Timed Up and Go (TUG) and Five Time Sit To Stand (FTSTS) tests evaluate daily living activity performance in the presence or development of comorbidities. We present a method for generating synthetic datasets complementing experimental observations and mitigating bias. We present an incremental hybrid machine learning algorithm combining ensemble learning and hybrid stacking using extreme gradient boosted decision trees and k-nearest neighbours to meet individualisation, interpretability, and ART design requirements while maintaining low computation footprint. The model reaches up to 100% accuracy for both FTSTS and TUG in predicting associated patient medical condition, and 100% or 83.13%, respectively, in predicting area of difficulty in the segments of the test. Our results show an improvement of 5% and 15% for FTSTS and TUG tests, respectively, over previous approaches that use intrusive means of monitoring such as cameras.
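For readers unfamiliar with the modelling approach this abstract describes, the following is a minimal, illustrative scikit-learn/XGBoost sketch of a stacked ensemble that combines extreme gradient boosted decision trees with k-nearest neighbours. It is not the authors' implementation: it assumes the xgboost package is installed, and the synthetic features merely stand in for the TUG/FTSTS sensor measurements used in the paper.

```python
# Generic sketch of a stacked XGBoost + k-NN classifier (illustrative only;
# synthetic features stand in for TUG/FTSTS sensor measurements).
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier  # assumes the xgboost package is available

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200, max_depth=3)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner combining both base models
    cv=5,  # out-of-fold predictions feed the meta-learner
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```

The logistic-regression layer is only one common choice of meta-learner; the paper additionally stresses interpretability and a low computation footprint, which plain library defaults do not by themselves guarantee.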
12

Werder, Karl, Balasubramaniam Ramesh, and Rongen (Sophia) Zhang. "Establishing Data Provenance for Responsible Artificial Intelligence Systems". ACM Transactions on Management Information Systems 13, No. 2 (June 30, 2022): 1–23. http://dx.doi.org/10.1145/3503488.

Abstract:
Data provenance, a record that describes the origins and processing of data, offers new promises in the increasingly important role of artificial intelligence (AI)-based systems in guiding human decision making. To avoid disastrous outcomes that can result from bias-laden AI systems, responsible AI builds on four important characteristics: fairness, accountability, transparency, and explainability. To stimulate further research on data provenance that enables responsible AI, this study outlines existing biases and discusses possible implementations of data provenance to mitigate them. We first review biases stemming from the data's origins and pre-processing. We then discuss the current state of practice, the challenges it presents, and corresponding recommendations to address them. We present a summary highlighting how our recommendations can help establish data provenance and thereby mitigate biases stemming from the data's origins and pre-processing to realize responsible AI-based systems. We conclude with a research agenda suggesting further research avenues.
13

Belle, Vaishak. "The quest for interpretable and responsible artificial intelligence". Biochemist 41, No. 5 (October 18, 2019): 16–19. http://dx.doi.org/10.1042/bio04105016.

Abstract:
Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in computational biology, finance, law and robotics. However, such a highly positive impact is coupled with significant challenges: how do we understand the decisions suggested by these systems in order that we can trust them? How can they be held accountable for those decisions?
14

Madhavan, Raj, Jaclyn A. Kerr, Amanda R. Corcos, and Benjamin P. Isaacoff. "Toward Trustworthy and Responsible Artificial Intelligence Policy Development". IEEE Intelligent Systems 35, No. 5 (September 1, 2020): 103–8. http://dx.doi.org/10.1109/mis.2020.3019679.
15

Neri, Emanuele, Francesca Coppola, Vittorio Miele, Corrado Bibbolino, and Roberto Grassi. "Artificial intelligence: Who is responsible for the diagnosis?" La radiologia medica 125, No. 6 (January 31, 2020): 517–21. http://dx.doi.org/10.1007/s11547-020-01135-9.
16

McGregor, Sean. "A Scaled Multiyear Responsible Artificial Intelligence Impact Assessment". Computer 56, No. 8 (August 2023): 20–27. http://dx.doi.org/10.1109/mc.2022.3231551.
17

Kuennen, Christopher S. "Developing Leaders of Character for Responsible Artificial Intelligence". Journal of Character and Leadership Development 10, No. 3 (October 27, 2023): 52–59. http://dx.doi.org/10.58315/jcld.v10.273.

Abstract:
Who is responsible for Responsible AI (RAI)? As the Department of Defense (DoD) invests in AI workforce education, this question serves as starting point for an argument that effective training for military RAI demands focused character development for officers. This essay makes that case in three parts. First, while norms around responsibility and AI are likely to evolve, there remains long-standing legal, ethical, and practical precedent to think of commissioned officers as the loci of responsibility for the application of military AI. Next, given DoD’s emphasis on responsibility, it should devote significant pedagogical attention to the subjective skills, motivations, and perceptions of operators who depend on AI to execute their mission, beyond merely promoting technical literacy. Finally, the significance of character for RAI entails the application of proven character development methodologies from pre-commissioning education onward: critical dialogue, hands-on practice applying AI in complex circumstances, and moral reminders about the relevance of the DoD’s ethical principles for AI.
18

Kumar, Sarvesh, Upasana Gupta, Arvind Kumar Singh, and Avadh Kishore Singh. "Artificial Intelligence". Journal of Computers, Mechanical and Management 2, No. 3 (August 31, 2023): 31–42. http://dx.doi.org/10.57159/gadl.jcmm.2.3.23064.

Abstract:
As we navigate the digital era of the 21st century, cyber security has grown into a pressing societal issue that requires innovative, cutting-edge solutions. In response to this pressing need, Artificial Intelligence (AI) has emerged as a revolutionary instrument, causing a paradigm shift in cyber security. AI's prowess resides in its capacity to process and analyze immense quantities of heterogeneous cyber security data, thereby facilitating the efficient completion of crucial tasks. These duties, which include threat detection, asset prioritization, and vulnerability management, are performed with a level of speed and accuracy that far exceeds human capabilities, thereby transforming our approach to cyber security. This document provides a comprehensive dissection of AI's profound impact on cyber security, as well as an in-depth analysis of how AI tools not only augment, but in many cases transcend human-mediated processes. By delving into the complexities of AI implementation within the realm of cyber security, we demonstrate the potential for AI to effectively anticipate, identify, and preempt cyber threats, empowering organizations to take a proactive stance towards digital safety. Despite these advancements, it is essential to consider the inherent limitations of AI. We emphasize the need for sustained human oversight and intervention to ensure that cyber security measures are proportionate and effective. Importantly, we address potential ethical concerns and emphasize the significance of robust governance structures for the responsible and transparent use of artificial intelligence in cyber security. This paper clarifies the transformative role of AI in reshaping cyber security strategies, thereby contributing to a safer, more secure digital future. In doing so, it sets the groundwork for further exploration and discussion on the use of AI in cyber security, a discussion that is becoming increasingly important as we continue to move deeper into the digital age.
19

Buzu, Irina. "Shaping the future: responsible regulation of generative artificial intelligence". Vector European, No. 1 (April 2024): 14–19. http://dx.doi.org/10.52507/2345-1106.2024-1.03.

Abstract:
Generative AI's potential to revolutionize various fields is undeniable, but its ethical implications and potential misuse raise concerns. This article explores how existing legal frameworks, like the EU's GDPR, are adapting to address these challenges, while also examining emerging regulations around the world, such as the EU AI Act. We will delve into the diverse approaches to regulating generative AI, highlighting the focus on transparency, data minimization, risk mitigation, and responsible development. The discussion will also touch upon the crucial role of collaboration and diverse perspectives in ensuring this powerful technology serves humanity ethically and responsibly.
20

Igbokwe, Innocent C. "Artificial Intelligence in Educational Leadership: Risks and Responsibilities". European Journal of Arts, Humanities and Social Sciences 1, No. 6 (November 1, 2024): 3–10. http://dx.doi.org/10.59324/ejahss.2024.1(6).01.

Abstract:
Artificial intelligence (AI) is transforming various sectors globally, including education. In education, artificial intelligence has the potential to significantly enhance educational leadership by improving decision-making, streamlining administrative tasks, and personalizing student learning experiences. However, the integration of AI into educational systems also introduces risks related to bias, privacy, transparency, and accountability. Educational leaders bear the responsibility of managing these risks and ensuring that AI is used ethically and responsibly. This paper explores the risks and responsibilities associated with the implementation of AI in educational leadership. This paper examines ethical concerns, decision-making processes, privacy, accountability, and the need for responsible AI usage. It recommends among other things that educational leaders must ensure that AI systems are designed and operated in ways that promote fairness, equity, and inclusivity by developing and implementing comprehensive ethical guidelines to ensure the responsible use of AI; should implement bias-detection mechanisms so as to promote fairness in AI-driven decision-making processes and reduce discriminatory practices and must enforce strict data security protocols, such as encryption, secure access, and regular system audits, to safeguard sensitive information in educational institutions. To this end, by understanding these risks and responsibilities, educational leaders can better harness AI's potential to enhance educational outcomes while safeguarding the integrity of education systems.
21

Zainuddin, Nurkhamimi. "Does Artificial Intelligence Cause More Harm than Good in Schools?" International Journal of Language Education and Applied Linguistics 14, No. 1 (April 4, 2024): 1–3. http://dx.doi.org/10.15282/ijleal.v14i1.10432.

Abstract:
The integration of artificial intelligence (AI) in schools presents significant challenges and risks requiring responsible and ethical management. Despite warnings from tech leaders, major corporations push AI adoption in schools, leading to privacy violations, biased algorithms and curricular misinformation. Generative AI, though enhancing resources, risks disseminating false information. Biased AI models perpetuate inequalities, especially for marginalized groups. The financial burdens of AI implementation worsen budget constraints, and AI-driven surveillance raises privacy concerns. Governance must prioritize ethics and student rights, establishing transparent frameworks to prevent commercial interests from overshadowing educational goals. This editorial suggests halting AI adoption until comprehensive legislation safeguards against risks. Stakeholders should prioritize responsible AI development, stressing transparency and accountability. Collaboration between AI developers and educators is essential to ensuring AI serves students and society responsibly.
22

O'g'li, Eshmamatov Ruslan Xasan. "THE IMPACT OF ARTIFICIAL INTELLIGENCE ON LANGUAGE LEARNING". International Journal of Pedagogics 4, No. 11 (November 1, 2024): 101–5. http://dx.doi.org/10.37547/ijp/volume04issue11-20.

Abstract:
This article explores the significant impact of artificial intelligence (AI) on education, particularly in the field of English language learning. By exploring current advances in artificial intelligence technology, including natural language processing and machine learning algorithms, the article highlights how AI-based tools and applications are revolutionizing the way English is taught and learned. In addition, the article discusses potential challenges and ethical considerations associated with the integration of AI into education, emphasizing the importance of responsible and inclusive implementation. Overall, this article highlights the changing role of artificial intelligence in shaping the future of English language education, providing insight into its benefits, opportunities, and implications for students and teachers.
23

Lu, Huimin, Mohsen Guizani, and Pin-Han Ho. "Editorial Introduction to Responsible Artificial Intelligence for Autonomous Driving". IEEE Transactions on Intelligent Transportation Systems 23, No. 12 (December 2022): 25212–15. http://dx.doi.org/10.1109/tits.2022.3221169.
24

Upadhyay, Umashankar, Anton Gradisek, Usman Iqbal, Eshita Dhar, Yu-Chuan Li, and Shabbir Syed-Abdul. "Call for the responsible artificial intelligence in the healthcare". BMJ Health & Care Informatics Online 30, No. 1 (December 2023): e100920. http://dx.doi.org/10.1136/bmjhci-2023-100920.

Abstract:
The integration of artificial intelligence (AI) into healthcare is progressively becoming pivotal, especially with its potential to enhance patient care and operational workflows. This paper navigates through the complexities and potentials of AI in healthcare, emphasising the necessity of explainability, trustworthiness, usability, transparency and fairness in developing and implementing AI models. It underscores the ‘black box’ challenge, highlighting the gap between algorithmic outputs and human interpretability, and articulates the pivotal role of explainable AI in enhancing the transparency and accountability of AI applications in healthcare. The discourse extends to ethical considerations, exploring the potential biases and ethical dilemmas that may arise in AI application, with a keen focus on ensuring equitable and ethical AI use across diverse global regions. Furthermore, the paper explores the concept of responsible AI in healthcare, advocating for a balanced approach that leverages AI’s capabilities for enhanced healthcare delivery and ensures ethical, transparent and accountable use of technology, particularly in clinical decision-making and patient care.
25

Sharma, Ravi S., Samia Loucif, Nir Kshetri, and Jeffrey Voas. "Global Initiatives on 'Safer' and More 'Responsible' Artificial Intelligence". Computer 57, No. 11 (November 2024): 131–37. http://dx.doi.org/10.1109/mc.2024.3447488.
26

Aulia Rahman, Rofi, Valentino Nathanael Prabowo, Aimee Joy David, and József Hajdú. "Constructing Responsible Artificial Intelligence Principles as Norms: Efforts to Strengthen Democratic Norms in Indonesia and European Union". PADJADJARAN Jurnal Ilmu Hukum (Journal of Law) 9, No. 2 (2022): 231–52. http://dx.doi.org/10.22304/pjih.v9n2.a5.

Abstract:
Artificial Intelligence influences democratic norms and principles. It affects the quality of democracy since it triggers hoaxes, irresponsible political campaign, and data privacy violations. The study discusses the legal framework and debate in the regulation of Artificial Intelligence in the European Union legal system. The study is a doctrinal legal study with conceptual and comparative approach. It aims to criticize the current doctrine of democracy. The analysis explored the law on election and political party in Indonesia to argue that the democratic concept is outdated. On the other hand, the European Union has prepared future legal framework to harmonize Artificial Intelligence and democracy. The result of the study indicates that the absence of law on Artificial Intelligence might be the fundamental reason of the setback of democracy in Indonesia. Therefore, the Indonesian legal system must regulate a prospective Artificial Intelligence regulation and a new democratic concept by determining the new principles of responsible Artificial Intelligence into drafts of laws on Artificial Intelligence, election, and political party. Finally, the new laws shall control programmers, politicians, governments, and voters who create and use Artificial Intelligence technology. In addition, these legal principles shall be the guideline to prevent the harms and to mitigate the risks of Artificial Intelligence technology as well as the effort to strengthen democracy.
27

Madaoui, Nadjia. "The Impact of Artificial Intelligence on Legal Systems: Challenges and Opportunities". Problems of legality 1, No. 164 (May 10, 2024): 285–303. http://dx.doi.org/10.21564/2414-990x.164.289266.

Abstract:
The integration of artificial intelligence into legal systems has engendered a paradigm shift in the legal landscape, presenting a complex interplay of challenges and opportunities for the legal profession and the justice system. This Comprehensive research delves into the multifaceted impact of artificial intelligence on legal systems, focusing on its transformative potential and implications. Through an extensive analysis of the integration of artificial intelligence technologies, including natural language processing, machine learning, and predictive analytics, the study illuminates the profound improvements in legal research, decision-making processes, and case management, emphasizing the unprecedented efficiency and accessibility that artificial intelligence offers within the legal domain. Furthermore, the research critically examines the ethical and societal challenges stemming from artificial intelligence integration, including concerns related to data privacy, algorithmic bias, and the accountability of artificial intelligence-driven legal solutions. By scrutinizing the existing regulatory frameworks governing artificial intelligence implementation, the study underscores the necessity of responsible and ethical artificial intelligence integration, advocating for transparency, fairness, and equitable practices in the legal profession. The findings contribute to the ongoing discourse on the ethical implications and effective management of artificial intelligence integration in legal systems, providing valuable insights and recommendations for stakeholders and policymakers to navigate the complexities and ensure the responsible adoption of artificial intelligence technologies within the legal sphere
28

Mayank Chhatwal. "Artificial Intelligence and Data Concerns for Government Sector Undertakings". Journal of Electrical Systems 20, No. 2 (April 18, 2024): 2792–801. http://dx.doi.org/10.52783/jes.6852.

Abstract:
The integration of Artificial Intelligence (AI) within government sector undertakings (PSUs) represents a critical step toward achieving enhanced operational efficiency, improved service delivery, and more informed decision-making processes. However, this transformative potential is accompanied by significant concerns, particularly around data privacy, security, governance, and ethical issues. This paper seeks to explore the nuanced relationship between AI and data concerns in PSUs, evaluating the advantages, challenges, and best practices for managing these technologies responsibly. In doing so, it highlights strategies for balancing innovation with responsible data governance to ensure that AI is implemented effectively, ethically, and securely in the public sector.
29

Md Wasim Ahmed. "Artificial Intelligence and Legal Ethics". International Journal of Law and Politics Studies 6, No. 5 (October 12, 2024): 226–37. https://doi.org/10.32996/ijlps.2024.6.5.12.

Abstract:
AI has revolutionized the practice of law thus resulting in major developments, but the ethical implications of the same are complex and elaborate. Finally, this paper examines how professional organizations can serve as leaders in determining the ethical application of AI particularly in the legal profession. Through setting of ethical standards, as well as providing information and materials for learning, such organizations assist legal professions in the exercise of responsible usage of AI technologies. The discussion raises the demographic concerns of the AI systems, efficiency, transparency, and the fairness of the AI system as well as the cardinal need to provide practical approaches for training as well as updating the legal curriculum due to the influence of AI.
30

Roșca, Cosmina-Mihaela, Ionuț Adrian Gortoescu, and Marius Radu Tănase. "ARTIFICIAL INTELLIGENCE – POWERED VIDEO CONTENT GENERATION TOOLS". Romanian Journal of Petroleum & Gas Technology 5 (76), No. 1 (August 15, 2024): 131–44. http://dx.doi.org/10.51865/jpgt.2024.01.10.

Abstract:
This article discusses the considerations of artificial intelligence-powered video content generation tools, exploring their applications, ethical considerations, and evaluation criteria. Through discussions of various artificial intelligence (AI) tools, including features, limitations, and implications, the authors analyze the evolving landscape of video creation in the digital age. Key themes include the ethical implications of deep fake technology and the importance of responsible AI principles, exemplified by Microsoft's guidelines. This paper identifies five of the most promoted free social media tools. Evaluation criteria for these tools, such as visual quality, relevance, coherence, authenticity, and transparency, are examined to assess the suitability of AI-generated videos. While AI offers promising opportunities, the discussion underscores the continued need for human oversight and ethical considerations to ensure the responsible use of AI technologies in video content generation.
31

HURZHII, S. "Trends in the application of artificial intelligence technologies in the military and technical sphere". INFORMATION AND LAW, No. 3(50) (September 4, 2024): 136–46. http://dx.doi.org/10.37750/2616-6798.2024.3(50).311713.

Abstract:
The role and significance of artificial intelligence technologies in the military-technical sphere are determined. The principles of using artificial intelligence technologies in the activities of the armed forces are outlined. Attention is focused on the threats and risks posed by the use of artificial intelligence in the military-technical sphere. The peculiarities of the legislative provision of the military use of artificial intelligence technologies in the USA are highlighted. Detailed aspects of the technological implementation of artificial intelligence during the execution of military tasks in the context of the American experience. The conceptual foundations of the russian use of artificial intelligence technologies of a military nature are revealed. The institutional capabilities and achievements of the russian federation in the field of technological support of the needs of the army in the field of artificial intelligence have been determined. The scope and directions of the russian army's innovative developments using artificial intelligence technologies are outlined. It has been updated that unmanned systems are singled out as a special priority for the application of technologies in the field of artificial intelligence of the russian federation. The main global trends in the use of artificial intelligence technologies in the military sphere are revealed. Further directions of improving the field of military use of artificial intelligence technologies have been identified. It was concluded that the development, introduction and approval by the world community of criteria for the responsible use of artificial intelligence for military purposes will contribute to the construction and formation of an international consensus on the responsible handling and use of artificial intelligence technologies.
32

K. K, Pragya. "Ethics of Artificial Intelligence (AI)". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, No. 05 (May 11, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33762.

Abstract:
In today's research and development, artificial intelligence (AI) ethics are a complex and urgent issue. Concerns about artificial intelligence (AI) systems' possible effects on people, communities, and the larger global environment are raised as these systems are incorporated into more and more facets of society. This study examines the ethical implications of artificial intelligence (AI), looking at topics including privacy, fairness, accountability, transparency, and the possibility of prejudice and discrimination in AI algorithms and decision-making processes. By critically assessing these ethical issues, the study endeavours to contribute to the establishment of frameworks and rules that encourage the responsible and ethical use of AI technologies, guaranteeing their conformity with societal values and the preservation of human rights. Keywords: AI ethics, artificial intelligence, ethics, machine ethics, robotics, challenges.
33

Wangdi, Pema. "Integrating Artificial Intelligence in Education:". International Journal of Research in STEM Education 6, No. 2 (November 14, 2024): 50–60. http://dx.doi.org/10.33830/ijrse.v6i2.1722.

Abstract:
Artificial Intelligence (AI) is transforming educational practices by facilitating personalized learning, automating grading processes, and enhancing support through intelligent tutoring systems. This systematic review explores AI's integration in educational settings, highlighting its contributions to increased productivity and tailored learning experiences. It addresses key challenges including data privacy, algorithmic bias, and the need for enhanced accountability and transparency in AI applications. The review also discusses strategic recommendations for embedding ethical AI into curriculum design and emphasizes the importance of professional development for educators. Collaboration among educational stakeholders is vital for advancing responsible AI utilization. By synthesizing recent literature, this review provides insights into AI tools' effectiveness, explores ethical dimensions of technology in classrooms, and suggests future directions for research and practice in educational AI. This analysis serves as a resource for educators, policymakers, and technologists aiming to optimize AI benefits in education.
34

Balas, Hashim, and Reem Shatnawi. "Legal framework for AI applications". F1000Research 13 (April 29, 2024): 418. http://dx.doi.org/10.12688/f1000research.147019.1.

Abstract:
Artificial intelligence (AI), the most widespread technological term, has become an integral part of human daily life. People are becoming increasingly dependent on AI-powered devices; thus, it is now essential to have a legal framework to oversee artificial intelligence's various uses and legal peculiarities. Because artificial intelligence has entered all areas of life and all sectors, its legal nature must be examined in detail, the laws that apply to it must be determined, and the people or entities responsible for it must be identified in order to delimit the scope of their responsibility for damages that its use may cause to others. This research therefore examined the adequacy of the legal rules in Jordanian legislation regulating artificial intelligence in light of the diversity of its applications and its distinctive legal nature. The research is divided into two chapters: the first presents the concept of artificial intelligence, and the second discusses the legal liability of artificial intelligence. The results revealed that the Jordanian legislator has not specified the legal nature of artificial intelligence and has only addressed it in scattered provisions; that legal liability arising from the use of artificial intelligence systems can be pursued under the rules on manufacturing defects, responsibility for the guardianship of things, and distinctions based on the degree of independence and intelligence of the system; that, in determining the liability of artificial intelligence, the legislator should consider the types of artificial intelligence systems, their different capabilities, and their independence from humans; and that the enactment of special laws regulating all aspects of artificial intelligence must be expedited. Such a law should be flexible enough to keep pace with the rapid development witnessed in this field.
35

Kurniawan, Itok. "Analisis terhadap Artificial Intelligence sebagai Subjek Hukum Pidana". Mutiara: Jurnal Ilmiah Multidisiplin Indonesia 1, No. 1 (July 18, 2023): 35–44. http://dx.doi.org/10.61404/jimi.v1i1.4.

Abstract:
This article discusses issues related to the existence of artificial intelligence as a legal subject, and criminal liability when artificial intelligence commits criminal acts. The purpose of this study is to find out the categorization of artificial intelligence as a legal object or legal subject, and to find out to whom criminal responsibility is assigned when artificial intelligence commits a crime. The research method used in writing this article is normative legal research, with a conceptual approach. The results of this study are that artificial intelligence is not a legal subject, because the actions carried out by artificial intelligence are only orders from its users, and for criminal acts committed by artificial intelligence, those who must be responsible are the creators of artificial intelligence or users of artificial intelligence
36

Orsi Koch Delgado, Heloísa, Aline De Azevedo Fay, Maria José Sebastiany, and Asafe Davi Cortina Silva. "Artificial intelligence adaptive learning tools". BELT - Brazilian English Language Teaching Journal 11, No. 2 (December 31, 2020): e38749. http://dx.doi.org/10.15448/2178-3640.2020.2.38749.

Abstract:
This paper explores the field of Artificial Intelligence applied to Education, focusing on English Language Teaching. It outlines concepts and uses of Artificial Intelligence, and appraises the functionalities of adaptive tools, bringing evaluative feedback on their use by American school teachers, and highlighting the importance of additional research on the matter. It was observed that the tools are valid media options to complement teaching, especially concerning adaptive learning. They offer students more inclusive opportunities: they maximize learning by tailoring instruction to address students' needs, and help students become more responsible for their own schooling. As for teachers, their testimonials highlight the benefits of dedicating more class time to the students' most pressing weaker areas. Drawbacks might include the need to provide teachers with autonomy to override recommendations so as to help them find other ways to teach a skill that seems to be more effective for a specific student.
37

Baniecki, Hubert, and Przemyslaw Biecek. "Responsible Prediction Making of COVID-19 Mortality (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 35, No. 18 (May 18, 2021): 15755–56. http://dx.doi.org/10.1609/aaai.v35i18.17874.

Abstract:
For high-stakes prediction making, the Responsible Artificial Intelligence (RAI) is more important than ever. It builds upon Explainable Artificial Intelligence (XAI) to advance the efforts in providing fairness, model explainability, and accountability to the AI systems. During the literature review of COVID-19 related prognosis and diagnosis, we found out that most of the predictive models are not faithful to the RAI principles, which can lead to biassed results and wrong reasoning. To solve this problem, we show how novel XAI techniques boost transparency, reproducibility and quality of models.
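For orientation only: the abstract above refers to XAI techniques in general terms, so the following is a minimal, generic illustration of one such technique (permutation importance in scikit-learn) applied to a synthetic classifier. It is not the method or the data used by the authors.

```python
# Illustrative use of permutation importance to explain a fitted classifier
# (synthetic placeholder data, not a COVID-19 cohort).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X, y)

# How much does shuffling each feature degrade the model's score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f}")
```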
38

Fawzy, Ahmad, Danastri Cantya Nirmala, Denaya Khansa, and Yudhistira Tri Wardhana. "Ethics and Regulation for Artificial Intelligence in Healthcare: Empowering Clinicians to Ensure Equitable and High-Quality Care". INTERNATIONAL JOURNAL OF MEDICAL SCIENCE AND CLINICAL RESEARCH STUDIES 03, No. 07 (July 22, 2023): 1350–57. http://dx.doi.org/10.47191/ijmscrs/v3-i7-23.

Abstract:
As artificial intelligence (AI) technology becomes increasingly integrated into healthcare, it is crucial for clinicians to possess a comprehensive understanding of its capabilities, limitations, and ethical implications. This literature review explores the reasons why clinicians need to be better informed about artificial intelligence, emphasizes the potential benefits of artificial intelligence in healthcare, raises awareness regarding the risks and unintended consequences associated with its use, discusses the development of machine learning and artificial intelligence in healthcare, and underscores the need for ethical guidelines and regulation to harness the potential of artificial intelligence in a responsible manner.
39

Buhmann, Alexander, and Christian Fieseler. "Towards a deliberative framework for responsible innovation in artificial intelligence". Technology in Society 64 (February 2021): 101475. http://dx.doi.org/10.1016/j.techsoc.2020.101475.
40

Doorn, Neelke. "Artificial intelligence in the water domain: Opportunities for responsible use". Science of The Total Environment 755 (February 2021): 142561. http://dx.doi.org/10.1016/j.scitotenv.2020.142561.
41

Devaraj, Harsha, Simran Makhija, and Suryoday Basak. "On the Implications of Artificial Intelligence and its Responsible Growth". Journal of Scientometric Research 8, No. 2s (November 19, 2019): s2–s6. http://dx.doi.org/10.5530/jscires.8.2.21.
42

DR. NIDHI SHARMA. "ARTIFICIAL INTELLIGENCE: LEGAL IMPLICATIONS AND CHALLENGES". Knowledgeable Research: A Multidisciplinary Journal 2, No. 11 (June 30, 2024): 13–32. http://dx.doi.org/10.57067/220k4298.

Abstract:
Artificial intelligence (AI) technologies are rapidly transforming various aspects of society, from healthcare and finance to transportation and education. While AI offers tremendous potential for innovation and efficiency, its widespread adoption raises significant legal implications and challenges. This paper examines the legal landscape surrounding AI, focusing on key areas such as privacy, liability, intellectual property, and employment law. One of the primary concerns with AI is the privacy implications stemming from the collection, storage, and analysis of vast amounts of data. Regulations such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States aim to safeguard individuals' privacy rights and impose strict requirements on data handling practices. Another critical area of concern is liability, particularly regarding the accountability for AI-driven decisions that may result in harm to individuals or entities. Questions arise about who should be held responsible for such decisions the developers, users, or the AI systems themselves. Furthermore, AI raises complex issues related to intellectual property, including the ownership of AI-generated works, patentability of AI algorithms, and the protection of AI innovations. Additionally, the integration of AI into the workforce raises questions about the future of employment, job displacement, and the need for new regulations to protect workers in the age of automation. In conclusion, while AI presents unprecedented opportunities for advancement, it also poses significant legal challenges that require careful consideration and proactive regulation. Policymakers, legal professionals, and stakeholders must collaborate to develop frameworks that promote the responsible development, deployment, and regulation of AI technologies while safeguarding individual rights, privacy, and societal values.
43

Shahvaroughi Farahani, Milad, and Ghazal Ghasemi. "Will artificial intelligence threaten humanity?" Sustainable Economies 2, No. 2 (May 22, 2024): 65. http://dx.doi.org/10.62617/se.v2i2.65.

Abstract:
The rapid advancement of artificial intelligence (AI) has sparked intense debate regarding its potential threat to humanity. This abstract delves into the multifaceted discussion surrounding the implications of AI on the future of humanity. It explores various perspectives, ranging from optimistic views that highlight the transformative benefits of AI to pessimistic concerns about its existential threat. Drawing on insights from experts and researchers, the abstract examines key areas of contention, including the possibility of technological singularity, the ethical dilemmas posed by autonomous weapons, and the socio-economic impacts of AI-driven automation. The main purpose of the paper is thus to study the impacts of AI from different points of view, including social, economic, and political perspectives. Furthermore, it discusses strategies for mitigating the risks associated with AI, emphasizing the importance of ethical guidelines, regulatory frameworks, and international cooperation. Overall, this abstract provides a comprehensive overview of the complex considerations surrounding the impact of AI on humanity and underscores the need for thoughtful deliberation and proactive measures to ensure a beneficial and responsible integration of AI into society.
44

Kunze, Lars. "Build Back Better with Responsible AI". KI - Künstliche Intelligenz 35, No. 1 (March 2021): 1–3. http://dx.doi.org/10.1007/s13218-021-00707-9.
45

Kumbum, Praveen Kumar, Vijay Kumar Adari, Vinay Chunduru, Srinivas Gonepally, and Kishor Kumar Amuda. "Artificial Intelligence using TOPSIS Method". Journal of Computer Science Applications and Information Technology 5, No. 1 (2020): 1–7. http://dx.doi.org/10.15226/2474-9257/5/1/00147.

Abstract:
Technology based on artificial intelligence (AI) is a revolutionary force that is changing economies, civilizations, and industries all over the world. AI, which has its roots in computer science and cognitive psychology, is a wide range of tools and methods designed to make robots capable of doing activities that have historically required human intellect. This abstract examines the many facets of artificial intelligence (AI) technology, including its fundamentals, uses, difficulties, and ramifications. Artificial Intelligence (AI) technology comprises several subfields such as robotics, computer vision, natural language processing, machine learning, and expert systems. Particularly, machine learning techniques have propelled incredible progress by allowing computers to learn from data and make judgments or predictions without the need for explicit programming. Natural language processing allows machines to comprehend, interpret, and produce human language, hence facilitating human-computer interaction. Machines can now see, analyze, and interpret visual data from the real world thanks to computer vision technology. Applications of AI technology may be found in a wide range of industries, including manufacturing, healthcare, finance, transportation, agriculture, education, and entertainment. AI-powered solutions help in drug discovery, medical imaging analysis, diagnosis, and customized therapy in the healthcare industry. AI algorithms are used in finance to power automated trading, fraud detection, risk assessment, and customer support. AI makes it possible for transportation to include predictive maintenance, traffic management, and driverless cars. Artificial Intelligence enhances supply chain management, quality assurance, and production processes in manufacturing. AI technology has the potential to revolutionize many industries, but it also comes with dangers and problems. These include privacy concerns, security hazards, ethical dilemmas, issues with prejudice and fairness, and effects on society and employment. Responsible AI methods, legal frameworks, multidisciplinary cooperation, and ethical standards are all necessary to meet these issues. Future prospects for AI technology development include the ability to solve challenging issues, spur creativity, increase productivity, and improve quality of life. But to fully utilize AI, one must take a comprehensive strategy that strikes a balance between the advancement of technology and ethical issues, human values, and social well-being. In summary, artificial intelligence (AI) technology is at the vanguard of innovation, presenting never-before-seen possibilities to transform whole sectors, spur economic expansion, and tackle global issues. AI has the ability to usher in a future of greater human-machine collaboration, innovation, and wealth through the promotion of collaboration, transparency, and ethical stewardship. The artificial intelligence approaches are ranked using the TOPSIS method: Interpretable Models receives the first rank, whereas Ethical AI receives the lowest rank. Keywords: Explainable AI (XAI), Interpretable Models, Ethical AI, Responsible AI, Robustness and Adversarial Defense, Continual Learning, Federated Learning, Human-Centric AI, AI Governance and Policy
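Since the abstract names the TOPSIS method without spelling it out, the following is a minimal NumPy sketch of the standard TOPSIS ranking procedure. The alternatives, criteria scores, and weights are illustrative placeholders, not the paper's data.

```python
# Minimal TOPSIS sketch (illustrative scores and weights, not the paper's data).
import numpy as np

def topsis(scores, weights, benefit):
    """Return the relative-closeness score of each alternative (higher is better)."""
    X = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Step 1: vector-normalise each criterion column
    R = X / np.linalg.norm(X, axis=0)
    # Step 2: apply the criterion weights
    V = R * w
    # Step 3: ideal-best and ideal-worst reference points per criterion
    ideal_best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    ideal_worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Step 4: Euclidean distances to both reference points
    d_best = np.linalg.norm(V - ideal_best, axis=1)
    d_worst = np.linalg.norm(V - ideal_worst, axis=1)
    # Step 5: relative closeness to the ideal solution
    return d_worst / (d_best + d_worst)

alternatives = ["Interpretable Models", "Ethical AI", "Responsible AI", "Federated Learning"]
scores = [[8, 7, 9],   # rows: alternatives, columns: criteria (placeholders)
          [6, 9, 5],
          [7, 8, 7],
          [7, 6, 8]]
weights = [0.4, 0.3, 0.3]
benefit = np.array([True, True, True])  # all criteria treated as benefit criteria here

closeness = topsis(scores, weights, benefit)
for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```

Higher closeness means an alternative is simultaneously near the ideal-best and far from the ideal-worst profile; applied to the paper's own scoring matrix, this is how a ranking such as Interpretable Models first and Ethical AI last would be produced.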
APA, Harvard, Vancouver, ISO und andere Zitierweisen
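The entry above reports a TOPSIS ranking but does not reproduce the computation. The following Python sketch shows how a TOPSIS ranking of this kind is typically carried out; the criteria weights and the alternative scores are hypothetical placeholders chosen only for illustration, not values taken from the paper.

import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: alternatives x criteria scores; weights: criterion weights;
    benefit: True where higher is better, False where lower is better."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # 1) vector-normalise each criterion column
    norm = m / np.sqrt((m ** 2).sum(axis=0))
    # 2) apply the criterion weights
    v = norm * w
    # 3) ideal best / ideal worst value per criterion
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 4) Euclidean distances of each alternative to the two ideals
    d_best = np.sqrt(((v - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((v - worst) ** 2).sum(axis=1))
    # 5) closeness coefficient: 1 = identical to the ideal solution
    return d_worst / (d_best + d_worst)

# Hypothetical example: three AI approaches scored on three criteria
alternatives = ["Interpretable Models", "Explainable AI (XAI)", "Ethical AI"]
scores = [[9, 7, 8],   # illustrative scores only
          [8, 8, 6],
          [6, 5, 7]]
closeness = topsis(scores, weights=[0.4, 0.3, 0.3], benefit=[True, True, True])
for rank, (name, c) in enumerate(
        sorted(zip(alternatives, closeness), key=lambda t: -t[1]), start=1):
    print(rank, name, round(c, 3))

Alternatives are ordered by descending closeness coefficient, so a value near 1 means the alternative lies close to the ideal solution and far from the anti-ideal; with the placeholder scores above, the sketch happens to reproduce the ordering reported in the abstract (Interpretable Models first, Ethical AI last).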
46

Anupama, T., und S. Rosita. „Neuromarketing Insights Enhanced by Artificial Intelligence“. ComFin Research 12, Nr. 2 (01.04.2024): 24–28. http://dx.doi.org/10.34293/commerce.v12i2.7300.

Der volle Inhalt der Quelle
Annotation:
This research examines the intersection of neuromarketing and artificial intelligence and its significant implications for transforming marketing tactics. By using neuroscience methods to identify subconscious reactions to marketing stimuli, neuromarketing sheds light on consumer behaviour. The integration of AI enhances neuromarketing research by efficiently analysing neurodata and detecting significant patterns. Combined, they enable marketing initiatives that are emotionally compelling, targeted, and personalised. However, ethical issues pertaining to bias in AI algorithms, customer privacy, and societal ramifications need to be taken into account. Businesses must adopt this multidisciplinary approach if they want to stay ahead in an increasingly competitive market. The study highlights the transformational potential of combining AI technologies with neuroscience findings while advocating for their responsible and ethical use in marketing practice.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Prylypko, Daryna. „Artificial intelligence and copyright“. Theory and Practice of Intellectual Property, Nr. 2 (06.07.2021): 15–22. http://dx.doi.org/10.33731/22021.236526.

Der volle Inhalt der Quelle
Annotation:
Keywords: copyright, work, artificial intelligence, computer program. The article analyses the problems in the legislation of Ukraine regarding copyright in works created with the help of artificial intelligence, in particular the question of who owns the copyright in such works. On the one hand, it could be the developer of a computer program; on the other hand, it could be a client or an employer. Situations may also arise in which robots create something new and original, as happened with the project “New Rembrandt”, in which computers created a unique portrait of Rembrandt. This raises the question of where, in this portrait, the original and intellectual work of the developers of these computers and programs lies; at the same time, the portrait could be created without the people who developed the special machines, programs, and computers. The author proposes adding the following norm to Ukrainian legislation: the owner of the copyright in a work created with the help of artificial intelligence should be the natural person who uses the artificial intelligence for these purposes within an official relationship or on the basis of a contract; in the case of automatic generation of such a work by artificial intelligence, the owner of the copyright should be the developer. Another question is who will be responsible for damage caused by artificial intelligence. As an example of a solution to this issue, Resolution 2015/2103 (INL) is cited, which notes that a human agent could be held responsible for the damage caused, since it is not always the developer who is responsible for the damage. The legislation and judicial practice of foreign countries are also examined, and ways of overcoming the identified problems in Ukrainian legislation are proposed, such as amending the legislation to state precisely who owns the copyright in works created with the help of artificial intelligence and in which cases that person becomes the copyright owner. However, given globalization, these issues should probably be resolved at the international level.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

KIRILLOVA, Elena Anatol'yevna, Oleg Evgenyevich BLINKOV, Natalija Ivanovna OGNEVA, Aleksey Sergeevich VRAZHNOV und Natal'ja Vladimirovna SERGEEVA. „Artificial Intelligence as a New Category of Civil Law“. Journal of Advanced Research in Law and Economics 11, Nr. 1 (31.03.2020): 91. http://dx.doi.org/10.14505//jarle.v11.1(47).12.

Der volle Inhalt der Quelle
Annotation:
This research considers the legal status of artificial intelligence technology. As a technology of the future, artificial intelligence is actively expanding its capabilities at the present stage of society's development. In this regard, the concept of ‘artificial intelligence’ and the application of legal rules to questions of liability for the operation of artificial intelligence technologies require definition. The main purpose of this study is to define the concept of ‘artificial intelligence’ and to determine whether artificial intelligence technologies are an object or a subject of law. The article analyses possible approaches to the concept of ‘artificial intelligence’ as a legal category and its relationship with the concepts of ‘robot’ and ‘cyberphysical system’, and addresses questions of legal liability for the operation of artificial intelligence. The study draws on methods of collecting and examining particular cases, generalization, scientific abstraction, and the cognition of regular patterns, as well as the principles of objectivity, concreteness, and pluralism, among a range of other methods. The study concludes that artificial intelligence technology is an autonomous, self-organizing computer-software or cyberphysical system with the ability to think, learn, make decisions independently, perceive and model surrounding images and symbols, relationships, and processes, and implement its own decisions. The following general properties of artificial intelligence technologies are identified: autonomy; the ability to perceive conditions (situations) and to make and implement its own decisions; and the ability to adapt its behaviour, to learn, to communicate with other artificial intelligence, and to consider, accumulate, and reproduce experience (including human experience). In the present historical period, artificial intelligence technology should be considered an object of law. Legal liability for the operation of artificial intelligence lies with the operator or another person who sets the parameters of its operation and controls its behaviour; the creator (manufacturer) of artificial intelligence is also recognized as a responsible person. This conclusion makes it possible to place the category of artificial intelligence within the legal field and to determine the persons responsible for its poor-quality operation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Dzyaloshinsky, I. M. „Artificial Intelligence: A Humanitarian Perspective“. Vestnik NSU. Series: History and Philology 21, Nr. 6 (17.06.2022): 20–29. http://dx.doi.org/10.25205/1818-7919-2022-21-6-20-29.

Der volle Inhalt der Quelle
Annotation:
The article is devoted to the study of the features of human intelligence and of the intelligence of complex computer systems, usually referred to as artificial intelligence (AI). The hypothesis formulated is that human and artificial intelligence differ significantly. Human intelligence is the product of a many-thousand-year history of the development and interaction of three interrelated processes: 1) the formation and development of the human personality; 2) the formation of complex network relationships between members of the social community; 3) collective activity as the basis for the existence and development of communities and individuals. AI, by contrast, is a complex of technological solutions that imitate human cognitive processes. Because of this, for all the options of technical development (acceleration of data collection, data processing, and solution finding; the use of computer vision, speech recognition and synthesis, etc.), AI will always be associated with human activity. In other words, only people (not machines) are the ultimate source and determinant of the values on which any artificial intelligence depends. No mind (human or machine) will ever be truly autonomous: everything we do depends on the social context created by other people, who determine the meaning of what we want to achieve. This means that people are responsible for everything that AI does.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Chen, Bingjun. „Analysis of the Difference between the Liability for Infringement of Artificial Intelligence Products and the Liability for Subsequent Observation Obligations“. Journal of Economics and Law 1, Nr. 2 (März 2024): 83–87. http://dx.doi.org/10.62517/jel.202414211.

Der volle Inhalt der Quelle
Annotation:
Artificial intelligence is currently entering an era of rapid development: AI technology is growing quickly, can already be applied across many areas of social life, and products built around it are abundant and have become part of ordinary people's lives. Because of their autonomy, artificial intelligence products may at times operate in an unmanned state, and their operation may cause personal or property damage to others, giving rise to infringement liability. There are two ways to pursue accountability. The first is to pursue the infringement liability of the artificial intelligence product directly, so that the specific subject who caused the infringement bears that liability. The second is to pursue the responsibility of the producer and seller of the artificial intelligence product for their subsequent observation obligations: where the producer and seller fail to fulfil these obligations, defects in the product are not detected in a timely manner, and liability for breach of the subsequent observation obligations arises. In that case, the responsible parties are the producers and sellers of the artificial intelligence products.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
