Journal articles on the topic "Responsible Artificial Intelligence"

See the top 50 journal articles for research on the topic "Responsible Artificial Intelligence".

1

Wang, Tawei. "Responsible Use of Artificial Intelligence". International Journal of Computer Auditing 4, no. 1 (December 2022): 001–3. http://dx.doi.org/10.53106/256299802022120401001.

Full text
Abstract:
Artificial intelligence (AI) has once again attracted the public's attention in the past decade. Riding on the wave of the big data revolution, the development of AI is much more promising than it was 50 years ago. One example is ChatGPT (https://openai.com/blog/chatgpt/), which has quickly become a hot topic in the past several months and has brought back all kinds of discussion about AI and decision making. In this editorial, I would like to highlight several perspectives that may help us rethink the implications of using AI for decision making, especially for audit professionals.
2

Gregor, Shirley. "Responsible Artificial Intelligence and Journal Publishing". Journal of the Association for Information Systems 25, no. 1 (2024): 48–60. http://dx.doi.org/10.17705/1jais.00863.

Full text
Abstract:
The aim of this opinion piece is to examine the responsible use of artificial intelligence (AI) in relation to academic journal publishing. The work discusses approaches to AI with particular attention to recent developments with generative AI. Consensus is noted around eight normative themes for principles for responsible AI and their associated risks. A framework from Shneiderman (2022) for human-centered AI is employed to consider journal publishing practices that can address the principles of responsible AI at different levels. The resultant AI principled governance matrix (AI-PGM) for journal publishing shows how countermeasures for risks can be employed at the levels of the author-researcher team, the organization, the industry, and by government regulation. The AI-PGM allows a structured approach to responsible AI and may be modified as developments with AI unfold. It shows how the whole publishing ecosystem should be considered when looking at the responsible use of AI—not just journal policy itself.
3

Teng, C. L., A. S. Bhullar, P. Jermain, D. Jordon, R. Nawfel, P. Patel, R. Sean, M. Shang and D. H. Wu. "Responsible Artificial Intelligence in Radiation Oncology". International Journal of Radiation Oncology*Biology*Physics 120, no. 2 (October 2024): e659. http://dx.doi.org/10.1016/j.ijrobp.2024.07.1446.

Full text
4

Haidar, Ahmad. "An Integrative Theoretical Framework for Responsible Artificial Intelligence". International Journal of Digital Strategy, Governance, and Business Transformation 13, no. 1 (15 December 2023): 1–23. http://dx.doi.org/10.4018/ijdsgbt.334844.

Full text
Abstract:
The rapid integration of Artificial Intelligence (AI) into various sectors has yielded significant benefits, such as enhanced business efficiency and customer satisfaction, while posing challenges, including privacy concerns, algorithmic bias, and threats to autonomy. In response to these multifaceted issues, this study proposes a novel integrative theoretical framework for Responsible AI (RAI), which addresses four key dimensions: technical, sustainable development, responsible innovation management, and legislation. The responsible innovation management and the legal dimensions form the foundational layers of the framework. The first embeds elements like anticipation and reflexivity into corporate culture, and the latter examines AI-specific laws from the European Union and the United States, providing a comparative perspective on legal frameworks governing AI. The study's findings may be helpful for businesses seeking to responsibly integrate AI, developers who focus on creating responsibly compliant AI, and policymakers looking to foster awareness and develop guidelines for RAI.
5

Shneiderman, Ben. "Responsible AI". Communications of the ACM 64, no. 8 (August 2021): 32–35. http://dx.doi.org/10.1145/3445973.

Full text
6

Dignum, Virginia. "Responsible Artificial Intelligence – From Principles to Practice". ACM SIGIR Forum 56, no. 1 (June 2022): 1–6. http://dx.doi.org/10.1145/3582524.3582529.

Full text
Abstract:
The impact of Artificial Intelligence does not depend only on fundamental research and technological developments, but for a large part on how these systems are introduced into society and used in everyday situations. AI is changing the way we work, live and solve challenges but concerns about fairness, transparency or privacy are also growing. Ensuring responsible, ethical AI is more than designing systems whose result can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. In order to develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools which provide concrete support to AI practitioners, as well as awareness and training to enable participation of all, to ensure the alignment of AI systems with our societies' principles and values. This paper is a curated version of my keynote at the Web Conference 2022.
7

Rodrigues, Rowena, Anais Resseguier and Nicole Santiago. "When Artificial Intelligence Fails". Public Governance, Administration and Finances Law Review 8, no. 2 (14 December 2023): 17–28. http://dx.doi.org/10.53116/pgaflr.7030.

Full text
Abstract:
Diverse initiatives promote the responsible development, deployment and use of Artificial Intelligence (AI). AI incident databases have emerged as a valuable and timely learning resource and tool in AI governance. This article assesses the value of such databases and outlines how this value can be enhanced. It reviews four databases: the AI Incident Database, the AI, Algorithmic, and Automation Incidents and Controversies Repository, the AI Incident Tracker and Where in the World Is AI. The article provides a descriptive analysis of these databases, examines their objectives, and locates them within the landscape of initiatives that advance responsible AI. It reflects on their primary objective, i.e. learning from mistakes to avoid them in the future, and explores how they might benefit diverse stakeholders. The article supports the broader uptake of these databases and recommends four key actions to enhance their value.
8

VASYLKIVSKYI, Mikola, Ganna VARGATYUK and Olga BOLDYREVA. "INTELLIGENT RADIO INTERFACE WITH THE SUPPORT OF ARTIFICIAL INTELLIGENCE". Herald of Khmelnytskyi National University. Technical sciences 217, no. 1 (23 February 2023): 26–32. http://dx.doi.org/10.31891/2307-5732-2023-317-1-26-32.

Full text
Abstract:
The peculiarities of the implementation of the 6G intelligent radio interface infrastructure, which will use an individual configuration for each individual subscriber application and flexible services with lower overhead costs, have been studied. A personalized infrastructure consisting of an AI-enabled intelligent physical layer, an intelligent MAC controller, and an intelligent protocol is considered, followed by a potentially novel AI-based end-to-end (E2E) device. The intelligent controller is investigated, in particular the intelligent functions at the MAC level, which may become key components of the intelligent controller in the future. The joint optimization of these components, which will provide better system performance, is considered. It was determined that instead of using a complex mathematical method of optimization, it is possible to use machine learning, which has less complexity and can adapt to network conditions. A 6G radio interface design based on a combination of model-driven and data-driven artificial intelligence is investigated and is expected to provide customized radio interface optimization from pre-configuration to self-learning. The specifics of configuring the network scheme and transmission parameters at the level of subscriber equipment and services using a personalized radio interface to maximize the individual user experience without compromising the throughput of the system as a whole are determined. Artificial intelligence is considered, which will be a built-in function of the radio interface that creates an intelligent physical layer and is responsible for MAC access control, network management optimization (such as load balancing and power saving), replacing some non-linear or non-convex algorithms in receiver modules or compensation of shortcomings in non-linear models. Built-in intelligence has been studied, which will make the 6G physical layer more advanced and efficient, facilitate the optimization of structural elements of the physical layer and procedural design, including the possible change of the receiver architecture, will help implement new detection and positioning capabilities, which, in turn, will significantly affect the design of radio interface components. The requirements for the 6G network are defined, which provide for the creation of a single network with scanning and communication functions, which must be integrated into a single structure at the stage of radio interface design. The specifics of carefully designing a communication and scanning network that will offer full scanning capabilities and more fully meet all key performance indicators in the communications industry are explored.
9

Germanov, Nikolai S. "The concept of responsible artificial intelligence as the future of artificial intelligence in medicine". Digital Diagnostics 4, no. 1S (26 June 2023): 27–29. http://dx.doi.org/10.17816/dd430334.

Full text
Abstract:
Active deployment of artificial intelligence (AI) systems in medicine creates many challenges. Recently, the concept of responsible artificial intelligence (RAI) was widely discussed, which is aimed at solving the inevitable ethical, legal, and social problems. The scientific literature was analyzed and the possibility of applying the RAI concept to overcome the existing AI problems in medicine was considered. Studies of possible AI applications in medicine showed that current algorithms are unable to meet the basic enduring needs of society, particularly, fairness, transparency, and reliability. The RAI concept based on three principles accountability for AI activities and responsibility and transparency of findings (ART) was proposed to address ethical issues. Further evolution, without the development and application of the ART concept, turns dangerous and impossible the use of AI in such areas as medicine and public administration. The requirements for accountability and transparency of conclusions are based on the identified epistemological (erroneous, non-transparent, and incomplete conclusions) and regulatory (data confidentiality and discrimination of certain groups) problems of using AI in digital medicine [2]. Epistemological errors committed by AI are not limited to omissions related to the volume and representativeness of the original databases analyzed. In addition, these include the well-known black box problem, i.e. the inability to look into the process of forming AI outputs when processing input data. Along with epistemological errors, normative problems inevitably arise, including patient confidentiality and discrimination of some social groups due to the refusal of some patients to provide medical data for training algorithms and as part of the analyzed databases, which will lead to inaccurate AI conclusions in cases of certain gender, race, and age. Importantly, the methodology of the AI data analysis depends on the program code set by the programmer, whose epistemological and logical errors are projected onto the AI. Hence the problem of determining responsibility in the case of erroneous conclusions, i.e. its distribution between the program itself, the developer, and the executor. Numerous professional associations design ethical standards for developers and a statutory framework to regulate responsibility between the links described. However, the state must play the greatest role in the development and approval of such legislation. The use of AI in medicine, despite its advantages, is accompanied by many ethical, legal, and social challenges. The development of RAI has the potential both to solve these challenges and to further the active and secure deployment of AI systems in digital medicine and healthcare.
10

Tyrranen, V. A. "ARTIFICIAL INTELLIGENCE CRIMES". Territory Development, no. 3(17) (2019): 10–13. http://dx.doi.org/10.32324/2412-8945-2019-3-10-13.

Full text
Abstract:
The article is devoted to current threats to information security associated with the widespread dissemination of computer technology. The author considers one of the aspects of cybercrime, namely crime using artificial intelligence. The concept of artificial intelligence is analyzed, a definition is proposed that is sufficient for effective enforcement. The article discusses the problems of criminalizing such crimes, the difficulties of solving the issue of legal personality and delinquency of artificial intelligence are shown. The author gives various cases, explaining why difficulties arise in determining the person responsible for the crime, gives an objective assessment of the possibility of criminal prosecution of the creators of the software, in the work of which there were errors that caused harm to the rights protected by criminal law and legitimate interests.
11

Vourganas, Ioannis, Vladimir Stankovic and Lina Stankovic. "Individualised Responsible Artificial Intelligence for Home-Based Rehabilitation". Sensors 21, no. 1 (22 December 2020): 2. http://dx.doi.org/10.3390/s21010002.

Full text
Abstract:
Socioeconomic reasons post-COVID-19 demand unsupervised home-based rehabilitation and, specifically, artificial ambient intelligence with individualisation to support engagement and motivation. Artificial intelligence must also comply with accountability, responsibility, and transparency (ART) requirements for wider acceptability. This paper presents such a patient-centric individualised home-based rehabilitation support system. To this end, the Timed Up and Go (TUG) and Five Time Sit To Stand (FTSTS) tests evaluate daily living activity performance in the presence or development of comorbidities. We present a method for generating synthetic datasets complementing experimental observations and mitigating bias. We present an incremental hybrid machine learning algorithm combining ensemble learning and hybrid stacking using extreme gradient boosted decision trees and k-nearest neighbours to meet individualisation, interpretability, and ART design requirements while maintaining low computation footprint. The model reaches up to 100% accuracy for both FTSTS and TUG in predicting associated patient medical condition, and 100% or 83.13%, respectively, in predicting area of difficulty in the segments of the test. Our results show an improvement of 5% and 15% for FTSTS and TUG tests, respectively, over previous approaches that use intrusive means of monitoring such as cameras.
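
For illustration only, and not code from the cited paper: the sketch below shows the general technique named in the abstract above, a stacked ensemble that combines extreme gradient boosted decision trees with k-nearest neighbours under a simple meta-learner. The synthetic dataset, feature count and hyperparameters are placeholder assumptions.

# Hypothetical sketch of a stacking ensemble (XGBoost + k-NN); not the authors' implementation.
from sklearn.datasets import make_classification  # synthetic stand-in for the paper's TUG/FTSTS features
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# Placeholder data: 500 samples, 12 features, binary label (e.g., presence of a condition).
X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners are stacked; a logistic regression meta-learner combines their out-of-fold predictions.
stack = StackingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=100, max_depth=3)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))

Keeping the meta-learner linear keeps the stacked model relatively easy to inspect, which is in the spirit of the interpretability and low-computation goals described in the abstract.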
12

Werder, Karl, Balasubramaniam Ramesh and Rongen (Sophia) Zhang. "Establishing Data Provenance for Responsible Artificial Intelligence Systems". ACM Transactions on Management Information Systems 13, no. 2 (30 June 2022): 1–23. http://dx.doi.org/10.1145/3503488.

Full text
Abstract:
Data provenance, a record that describes the origins and processing of data, offers new promises in the increasingly important role of artificial intelligence (AI)-based systems in guiding human decision making. To avoid disastrous outcomes that can result from bias-laden AI systems, responsible AI builds on four important characteristics: fairness, accountability, transparency, and explainability. To stimulate further research on data provenance that enables responsible AI, this study outlines existing biases and discusses possible implementations of data provenance to mitigate them. We first review biases stemming from the data's origins and pre-processing. We then discuss the current state of practice, the challenges it presents, and corresponding recommendations to address them. We present a summary highlighting how our recommendations can help establish data provenance and thereby mitigate biases stemming from the data's origins and pre-processing to realize responsible AI-based systems. We conclude with a research agenda suggesting further research avenues.
13

Belle, Vaishak. "The quest for interpretable and responsible artificial intelligence". Biochemist 41, no. 5 (18 October 2019): 16–19. http://dx.doi.org/10.1042/bio04105016.

Full text
Abstract:
Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in computational biology, finance, law and robotics. However, such a highly positive impact is coupled with significant challenges: how do we understand the decisions suggested by these systems in order that we can trust them? How can they be held accountable for those decisions?
14

Madhavan, Raj, Jaclyn A. Kerr, Amanda R. Corcos and Benjamin P. Isaacoff. "Toward Trustworthy and Responsible Artificial Intelligence Policy Development". IEEE Intelligent Systems 35, no. 5 (1 September 2020): 103–8. http://dx.doi.org/10.1109/mis.2020.3019679.

Full text
15

Neri, Emanuele, Francesca Coppola, Vittorio Miele, Corrado Bibbolino and Roberto Grassi. "Artificial intelligence: Who is responsible for the diagnosis?" La radiologia medica 125, no. 6 (31 January 2020): 517–21. http://dx.doi.org/10.1007/s11547-020-01135-9.

Full text
16

McGregor, Sean. "A Scaled Multiyear Responsible Artificial Intelligence Impact Assessment". Computer 56, no. 8 (August 2023): 20–27. http://dx.doi.org/10.1109/mc.2022.3231551.

Full text
17

Kuennen, Christopher S. "Developing Leaders of Character for Responsible Artificial Intelligence". Journal of Character and Leadership Development 10, no. 3 (27 October 2023): 52–59. http://dx.doi.org/10.58315/jcld.v10.273.

Full text
Abstract:
Who is responsible for Responsible AI (RAI)? As the Department of Defense (DoD) invests in AI workforce education, this question serves as starting point for an argument that effective training for military RAI demands focused character development for officers. This essay makes that case in three parts. First, while norms around responsibility and AI are likely to evolve, there remains long-standing legal, ethical, and practical precedent to think of commissioned officers as the loci of responsibility for the application of military AI. Next, given DoD’s emphasis on responsibility, it should devote significant pedagogical attention to the subjective skills, motivations, and perceptions of operators who depend on AI to execute their mission, beyond merely promoting technical literacy. Finally, the significance of character for RAI entails the application of proven character development methodologies from pre-commissioning education onward: critical dialogue, hands-on practice applying AI in complex circumstances, and moral reminders about the relevance of the DoD’s ethical principles for AI.
18

Kumar, Sarvesh, Upasana Gupta, Arvind Kumar Singh and Avadh Kishore Singh. "Artificial Intelligence". Journal of Computers, Mechanical and Management 2, no. 3 (31 August 2023): 31–42. http://dx.doi.org/10.57159/gadl.jcmm.2.3.23064.

Full text
Abstract:
As we navigate the digital era of the 21st century, cyber security has grown into a pressing societal issue that requires innovative, cutting-edge solutions. In response to this pressing need, Artificial Intelligence (AI) has emerged as a revolutionary instrument, causing a paradigm shift in cyber security. AI's prowess resides in its capacity to process and analyze immense quantities of heterogeneous cyber security data, thereby facilitating the efficient completion of crucial tasks. These duties, which include threat detection, asset prioritization, and vulnerability management, are performed with a level of speed and accuracy that far exceeds human capabilities, thereby transforming our approach to cyber security. This document provides a comprehensive dissection of AI's profound impact on cyber security, as well as an in-depth analysis of how AI tools not only augment, but in many cases transcend human-mediated processes. By delving into the complexities of AI implementation within the realm of cyber security, we demonstrate the potential for AI to effectively anticipate, identify, and preempt cyber threats, empowering organizations to take a proactive stance towards digital safety. Despite these advancements, it is essential to consider the inherent limitations of AI. We emphasize the need for sustained human oversight and intervention to ensure that cyber security measures are proportionate and effective. Importantly, we address potential ethical concerns and emphasize the significance of robust governance structures for the responsible and transparent use of artificial intelligence in cyber security. This paper clarifies the transformative role of AI in reshaping cyber security strategies, thereby contributing to a safer, more secure digital future. In doing so, it sets the groundwork for further exploration and discussion on the use of AI in cyber security, a discussion that is becoming increasingly important as we continue to move deeper into the digital age.
19

Buzu, Irina. "Shaping the future: responsible regulation of generative artificial intelligence". Vector European, no. 1 (April 2024): 14–19. http://dx.doi.org/10.52507/2345-1106.2024-1.03.

Full text
Abstract:
Generative AI's potential to revolutionize various fields is undeniable, but its ethical implications and potential misuse raise concerns. This article explores how existing legal frameworks, like the EU's GDPR, are adapting to address these challenges, while also examining emerging regulations around the world, such as the EU AI Act. We will delve into the diverse approaches to regulating generative AI, highlighting the focus on transparency, data minimization, risk mitigation, and responsible development. The discussion will also touch upon the crucial role of collaboration and diverse perspectives in ensuring this powerful technology serves humanity ethically and responsibly.
20

Igbokwe, Innocent C. "Artificial Intelligence in Educational Leadership: Risks and Responsibilities". European Journal of Arts, Humanities and Social Sciences 1, no. 6 (1 November 2024): 3–10. http://dx.doi.org/10.59324/ejahss.2024.1(6).01.

Full text
Abstract:
Artificial intelligence (AI) is transforming various sectors globally, including education. In education, artificial intelligence has the potential to significantly enhance educational leadership by improving decision-making, streamlining administrative tasks, and personalizing student learning experiences. However, the integration of AI into educational systems also introduces risks related to bias, privacy, transparency, and accountability. Educational leaders bear the responsibility of managing these risks and ensuring that AI is used ethically and responsibly. This paper explores the risks and responsibilities associated with the implementation of AI in educational leadership. This paper examines ethical concerns, decision-making processes, privacy, accountability, and the need for responsible AI usage. It recommends among other things that educational leaders must ensure that AI systems are designed and operated in ways that promote fairness, equity, and inclusivity by developing and implementing comprehensive ethical guidelines to ensure the responsible use of AI; should implement bias-detection mechanisms so as to promote fairness in AI-driven decision-making processes and reduce discriminatory practices and must enforce strict data security protocols, such as encryption, secure access, and regular system audits, to safeguard sensitive information in educational institutions. To this end, by understanding these risks and responsibilities, educational leaders can better harness AI's potential to enhance educational outcomes while safeguarding the integrity of education systems.
21

Zainuddin, Nurkhamimi. "Does Artificial Intelligence Cause More Harm than Good in Schools?" International Journal of Language Education and Applied Linguistics 14, no. 1 (4 April 2024): 1–3. http://dx.doi.org/10.15282/ijleal.v14i1.10432.

Full text
Abstract:
The integration of artificial intelligence (AI) in schools presents significant challenges and risks requiring responsible and ethical management. Despite warnings from tech leaders, major corporations push AI adoption in schools, leading to privacy violations, biased algorithms and curricular misinformation. Generative AI, though enhancing resources, risks disseminating false information. Biased AI models perpetuate inequalities, especially for marginalized groups. The financial burdens of AI implementation worsen budget constraints, and AI-driven surveillance raises privacy concerns. Governance must prioritize ethics and student rights, establishing transparent frameworks to prevent commercial interests from overshadowing educational goals. This editorial suggests halting AI adoption until comprehensive legislation safeguards against risks. Stakeholders should prioritize responsible AI development, stressing transparency and accountability. Collaboration between AI developers and educators is essential to ensuring AI serves students and society responsibly.
22

Lu, Huimin, Mohsen Guizani and Pin-Han Ho. "Editorial Introduction to Responsible Artificial Intelligence for Autonomous Driving". IEEE Transactions on Intelligent Transportation Systems 23, no. 12 (December 2022): 25212–15. http://dx.doi.org/10.1109/tits.2022.3221169.

Full text
23

Upadhyay, Umashankar, Anton Gradisek, Usman Iqbal, Eshita Dhar, Yu-Chuan Li and Shabbir Syed-Abdul. "Call for the responsible artificial intelligence in the healthcare". BMJ Health & Care Informatics Online 30, no. 1 (December 2023): e100920. http://dx.doi.org/10.1136/bmjhci-2023-100920.

Full text
Abstract:
The integration of artificial intelligence (AI) into healthcare is progressively becoming pivotal, especially with its potential to enhance patient care and operational workflows. This paper navigates through the complexities and potentials of AI in healthcare, emphasising the necessity of explainability, trustworthiness, usability, transparency and fairness in developing and implementing AI models. It underscores the ‘black box’ challenge, highlighting the gap between algorithmic outputs and human interpretability, and articulates the pivotal role of explainable AI in enhancing the transparency and accountability of AI applications in healthcare. The discourse extends to ethical considerations, exploring the potential biases and ethical dilemmas that may arise in AI application, with a keen focus on ensuring equitable and ethical AI use across diverse global regions. Furthermore, the paper explores the concept of responsible AI in healthcare, advocating for a balanced approach that leverages AI’s capabilities for enhanced healthcare delivery and ensures ethical, transparent and accountable use of technology, particularly in clinical decision-making and patient care.
24

Sharma, Ravi S., Samia Loucif, Nir Kshetri and Jeffrey Voas. "Global Initiatives on “Safer” and More “Responsible” Artificial Intelligence". Computer 57, no. 11 (November 2024): 131–37. http://dx.doi.org/10.1109/mc.2024.3447488.

Full text
25

Aulia Rahman, Rofi, Valentino Nathanael Prabowo, Aimee Joy David and József Hajdú. "Constructing Responsible Artificial Intelligence Principles as Norms: Efforts to Strengthen Democratic Norms in Indonesia and European Union". PADJADJARAN Jurnal Ilmu Hukum (Journal of Law) 9, no. 2 (2022): 231–52. http://dx.doi.org/10.22304/pjih.v9n2.a5.

Full text
Abstract:
Artificial Intelligence influences democratic norms and principles. It affects the quality of democracy since it triggers hoaxes, irresponsible political campaign, and data privacy violations. The study discusses the legal framework and debate in the regulation of Artificial Intelligence in the European Union legal system. The study is a doctrinal legal study with conceptual and comparative approach. It aims to criticize the current doctrine of democracy. The analysis explored the law on election and political party in Indonesia to argue that the democratic concept is outdated. On the other hand, the European Union has prepared future legal framework to harmonize Artificial Intelligence and democracy. The result of the study indicates that the absence of law on Artificial Intelligence might be the fundamental reason of the setback of democracy in Indonesia. Therefore, the Indonesian legal system must regulate a prospective Artificial Intelligence regulation and a new democratic concept by determining the new principles of responsible Artificial Intelligence into drafts of laws on Artificial Intelligence, election, and political party. Finally, the new laws shall control programmers, politicians, governments, and voters who create and use Artificial Intelligence technology. In addition, these legal principles shall be the guideline to prevent the harms and to mitigate the risks of Artificial Intelligence technology as well as the effort to strengthen democracy.
26

Madaoui, Nadjia. "The Impact of Artificial Intelligence on Legal Systems: Challenges and Opportunities". Problems of legality 1, no. 164 (10 May 2024): 285–303. http://dx.doi.org/10.21564/2414-990x.164.289266.

Full text
Abstract:
The integration of artificial intelligence into legal systems has engendered a paradigm shift in the legal landscape, presenting a complex interplay of challenges and opportunities for the legal profession and the justice system. This Comprehensive research delves into the multifaceted impact of artificial intelligence on legal systems, focusing on its transformative potential and implications. Through an extensive analysis of the integration of artificial intelligence technologies, including natural language processing, machine learning, and predictive analytics, the study illuminates the profound improvements in legal research, decision-making processes, and case management, emphasizing the unprecedented efficiency and accessibility that artificial intelligence offers within the legal domain. Furthermore, the research critically examines the ethical and societal challenges stemming from artificial intelligence integration, including concerns related to data privacy, algorithmic bias, and the accountability of artificial intelligence-driven legal solutions. By scrutinizing the existing regulatory frameworks governing artificial intelligence implementation, the study underscores the necessity of responsible and ethical artificial intelligence integration, advocating for transparency, fairness, and equitable practices in the legal profession. The findings contribute to the ongoing discourse on the ethical implications and effective management of artificial intelligence integration in legal systems, providing valuable insights and recommendations for stakeholders and policymakers to navigate the complexities and ensure the responsible adoption of artificial intelligence technologies within the legal sphere
27

Chhatwal, Mayank. "Artificial Intelligence and Data Concerns for Government Sector Undertakings". Journal of Electrical Systems 20, no. 2 (18 April 2024): 2792–801. http://dx.doi.org/10.52783/jes.6852.

Full text
Abstract:
The integration of Artificial Intelligence (AI) within government sector undertakings (PSUs) represents a critical step toward achieving enhanced operational efficiency, improved service delivery, and more informed decision-making processes. However, this transformative potential is accompanied by significant concerns, particularly around data privacy, security, governance, and ethical issues. This paper seeks to explore the nuanced relationship between AI and data concerns in PSUs, evaluating the advantages, challenges, and best practices for managing these technologies responsibly. In doing so, it highlights strategies for balancing innovation with responsible data governance to ensure that AI is implemented effectively, ethically, and securely in the public sector.
28

Roșca, Cosmina-Mihaela, Ionuț Adrian Gortoescu and Marius Radu Tănase. "ARTIFICIAL INTELLIGENCE – POWERED VIDEO CONTENT GENERATION TOOLS". Romanian Journal of Petroleum & Gas Technology 5 (76), no. 1 (15 August 2024): 131–44. http://dx.doi.org/10.51865/jpgt.2024.01.10.

Full text
Abstract:
This article discusses the considerations of artificial intelligence-powered video content generation tools, exploring their applications, ethical considerations, and evaluation criteria. Through discussions of various artificial intelligence (AI) tools, including features, limitations, and implications, the authors analyze the evolving landscape of video creation in the digital age. Key themes include the ethical implications of deep fake technology and the importance of responsible AI principles, exemplified by Microsoft's guidelines. This paper identifies five of the most promoted free social media tools. Evaluation criteria for these tools, such as visual quality, relevance, coherence, authenticity, and transparency, are examined to assess the suitability of AI-generated videos. While AI offers promising opportunities, the discussion underscores the continued need for human oversight and ethical considerations to ensure the responsible use of AI technologies in video content generation.
29

K. K., Pragya. "Ethics of Artificial Intelligence (AI)". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (11 May 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33762.

Full text
Abstract:
In today's research and development, artificial intelligence (AI) ethics are a complex and urgent issue. Concerns about artificial intelligence (AI) systems' possible effects on people, communities, and the larger global environment are raised as these systems are incorporated into more and more facets of society. This study examines the ethical implications of artificial intelligence (AI), looking at topics including privacy, fairness, accountability, transparency, and the possibility of prejudice and discrimination in AI algorithms and decision-making processes. The study endeavours to contribute to the establishment of frameworks and rules that encourage the responsible and ethical use of AI technologies, guaranteeing their conformity with society values and the preservation of human rights, by critically assessing these ethical issues. Keywords: AI ethics, artificial intelligence, ethics, machine ethics, robotics, challenges.
30

HURZHII, S. "Trends in the application of artificial intelligence technologies in the military and technical sphere". INFORMATION AND LAW, no. 3(50) (4 September 2024): 136–46. http://dx.doi.org/10.37750/2616-6798.2024.3(50).311713.

Full text
Abstract:
The role and significance of artificial intelligence technologies in the military-technical sphere are determined. The principles of using artificial intelligence technologies in the activities of the armed forces are outlined. Attention is focused on the threats and risks posed by the use of artificial intelligence in the military-technical sphere. The peculiarities of the legislative provision of the military use of artificial intelligence technologies in the USA are highlighted. Detailed aspects of the technological implementation of artificial intelligence during the execution of military tasks in the context of the American experience. The conceptual foundations of the russian use of artificial intelligence technologies of a military nature are revealed. The institutional capabilities and achievements of the russian federation in the field of technological support of the needs of the army in the field of artificial intelligence have been determined. The scope and directions of the russian army's innovative developments using artificial intelligence technologies are outlined. It has been updated that unmanned systems are singled out as a special priority for the application of technologies in the field of artificial intelligence of the russian federation. The main global trends in the use of artificial intelligence technologies in the military sphere are revealed. Further directions of improving the field of military use of artificial intelligence technologies have been identified. It was concluded that the development, introduction and approval by the world community of criteria for the responsible use of artificial intelligence for military purposes will contribute to the construction and formation of an international consensus on the responsible handling and use of artificial intelligence technologies.
31

Balas, Hashim, and Reem Shatnawi. "Legal framework for AI applications". F1000Research 13 (29 April 2024): 418. http://dx.doi.org/10.12688/f1000research.147019.1.

Full text
Abstract:
Artificial intelligence (AI), the most widespread technological term, has become an integral part of human daily life. People are becoming increasingly dependent on AI-powered devices; thus, it is now essential to have a legal framework to oversee artificial intelligence's various uses and legal peculiarities. As artificial intelligence has entered all areas of life and various sectors, it is necessary to examine in detail the legal nature of artificial intelligence, to determine the laws that must be applied to it, and to identify the people or entities responsible for it in order to determine the scope of their responsibility for the damage that may be caused to others as a result of the use of artificial intelligence. This research therefore aimed to examine the adequacy of the legal rules in Jordanian legislation regulating the provisions of artificial intelligence in light of the diversity of its applications and its different legal nature. The research is divided into two chapters: the first presents the concept of artificial intelligence, and the second discusses the legal liability of artificial intelligence. The results of the research revealed that the Jordanian legislator has not specified the legal nature of artificial intelligence and has contented himself with addressing it in separate texts; that legal liability resulting from the use of artificial intelligence systems can be addressed under the rules on manufacturing defects, responsibility for the guarding of things, and distinctions of responsibility according to the degree of independence and intelligence of the artificial intelligence; that in determining the liability of artificial intelligence, the legislator should consider the types of artificial intelligence systems, their different capabilities, and their independence from humans; and that the process of enacting special laws regulating all aspects of artificial intelligence must be expedited. Such a law should be characterized by the flexibility that enables it to keep pace with the rapid development witnessed in this field.
32

Orsi Koch Delgado, Heloísa, Aline De Azevedo Fay, Maria José Sebastiany and Asafe Davi Cortina Silva. "Artificial intelligence adaptive learning tools". BELT - Brazilian English Language Teaching Journal 11, no. 2 (31 December 2020): e38749. http://dx.doi.org/10.15448/2178-3640.2020.2.38749.

Full text
Abstract:
This paper explores the field of Artificial Intelligence applied to Education, focusing on the English Language Teaching. It outlines concepts and uses of Artificial Intelligence, and appraises the functionalities of adaptive tools, bringing evaluative feedback on their use by American school teachers, and highlighting the importance of additional research on the matter. It was observed that the tools are valid media options to complement teaching, especially concerning adaptive learning. They offer students more inclusive opportunities: they maximize learning by tailoring instruction to address students' needs, and helping students become more responsible for their own schooling. As for teachers, their testimonials highlight the benefits of dedicating more class time to the students' most pressing weaker areas. Drawbacks might include the need to provide teachers with autonomy to override recommendations so as to help them find other ways to teach a skill that seems to be more effective for a specific student.
33

Kurniawan, Itok. "Analisis terhadap Artificial Intelligence sebagai Subjek Hukum Pidana". Mutiara: Jurnal Ilmiah Multidisiplin Indonesia 1, no. 1 (18 July 2023): 35–44. http://dx.doi.org/10.61404/jimi.v1i1.4.

Full text
Abstract:
This article discusses issues related to the existence of artificial intelligence as a legal subject, and criminal liability when artificial intelligence commits criminal acts. The purpose of this study is to find out the categorization of artificial intelligence as a legal object or legal subject, and to find out to whom criminal responsibility is assigned when artificial intelligence commits a crime. The research method used in writing this article is normative legal research, with a conceptual approach. The results of this study are that artificial intelligence is not a legal subject, because the actions carried out by artificial intelligence are only orders from its users, and for criminal acts committed by artificial intelligence, those who must be responsible are the creators of artificial intelligence or users of artificial intelligence
34

Baniecki, Hubert, and Przemyslaw Biecek. "Responsible Prediction Making of COVID-19 Mortality (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (18 May 2021): 15755–56. http://dx.doi.org/10.1609/aaai.v35i18.17874.

Full text
Abstract:
For high-stakes prediction making, the Responsible Artificial Intelligence (RAI) is more important than ever. It builds upon Explainable Artificial Intelligence (XAI) to advance the efforts in providing fairness, model explainability, and accountability to the AI systems. During the literature review of COVID-19 related prognosis and diagnosis, we found out that most of the predictive models are not faithful to the RAI principles, which can lead to biassed results and wrong reasoning. To solve this problem, we show how novel XAI techniques boost transparency, reproducibility and quality of models.
35

Fawzy, Ahmad, Danastri Cantya Nirmala, Denaya Khansa and Yudhistira Tri Wardhana. "Ethics and Regulation for Artificial Intelligence in Healthcare: Empowering Clinicians to Ensure Equitable and High-Quality Care". INTERNATIONAL JOURNAL OF MEDICAL SCIENCE AND CLINICAL RESEARCH STUDIES 03, no. 07 (22 July 2023): 1350–57. http://dx.doi.org/10.47191/ijmscrs/v3-i7-23.

Full text
Abstract:
As artificial intelligence (AI) technology becomes increasingly integrated into healthcare, it is crucial for clinicians to possess a comprehensive understanding of its capabilities, limitations, and ethical implications. This literature review explores the reasons why clinicians need to be better informed about artificial intelligence, emphasizes the potential benefits of artificial intelligence in healthcare, raises awareness regarding the risks and unintended consequences associated with its use, discusses the development of machine learning and artificial intelligence in healthcare, and underscores the need for ethical guidelines and regulation to harness the potential of artificial intelligence in a responsible manner.
36

Buhmann, Alexander, and Christian Fieseler. "Towards a deliberative framework for responsible innovation in artificial intelligence". Technology in Society 64 (February 2021): 101475. http://dx.doi.org/10.1016/j.techsoc.2020.101475.

Full text
37

Doorn, Neelke. "Artificial intelligence in the water domain: Opportunities for responsible use". Science of The Total Environment 755 (February 2021): 142561. http://dx.doi.org/10.1016/j.scitotenv.2020.142561.

Full text
38

Devaraj, Harsha, Simran Makhija and Suryoday Basak. "On the Implications of Artificial Intelligence and its Responsible Growth". Journal of Scientometric Research 8, no. 2s (19 November 2019): s2–s6. http://dx.doi.org/10.5530/jscires.8.2.21.

Full text
39

Sharma, Nidhi. "ARTIFICIAL INTELLIGENCE: LEGAL IMPLICATIONS AND CHALLENGES". Knowledgeable Research: A Multidisciplinary Journal 2, no. 11 (30 June 2024): 13–32. http://dx.doi.org/10.57067/220k4298.

Full text
Abstract:
Artificial intelligence (AI) technologies are rapidly transforming various aspects of society, from healthcare and finance to transportation and education. While AI offers tremendous potential for innovation and efficiency, its widespread adoption raises significant legal implications and challenges. This paper examines the legal landscape surrounding AI, focusing on key areas such as privacy, liability, intellectual property, and employment law. One of the primary concerns with AI is the privacy implications stemming from the collection, storage, and analysis of vast amounts of data. Regulations such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States aim to safeguard individuals' privacy rights and impose strict requirements on data handling practices. Another critical area of concern is liability, particularly regarding the accountability for AI-driven decisions that may result in harm to individuals or entities. Questions arise about who should be held responsible for such decisions the developers, users, or the AI systems themselves. Furthermore, AI raises complex issues related to intellectual property, including the ownership of AI-generated works, patentability of AI algorithms, and the protection of AI innovations. Additionally, the integration of AI into the workforce raises questions about the future of employment, job displacement, and the need for new regulations to protect workers in the age of automation. In conclusion, while AI presents unprecedented opportunities for advancement, it also poses significant legal challenges that require careful consideration and proactive regulation. Policymakers, legal professionals, and stakeholders must collaborate to develop frameworks that promote the responsible development, deployment, and regulation of AI technologies while safeguarding individual rights, privacy, and societal values.
40

Shahvaroughi Farahani, Milad, and Ghazal Ghasemi. "Will artificial intelligence threaten humanity?" Sustainable Economies 2, no. 2 (22 May 2024): 65. http://dx.doi.org/10.62617/se.v2i2.65.

Full text
Abstract:
The rapid advancement of artificial intelligence (AI) has sparked intense debate regarding its potential threat to humanity. This abstract delves into the multifaceted discussion surrounding the implications of AI on the future of humanity. It explores various perspectives, ranging from optimistic views that highlight the transformative benefits of AI to pessimistic concerns about its existential threat. Drawing on insights from experts and researchers, the abstract examines key areas of contention, including the possibility of technological singularity, the ethical dilemmas posed by autonomous weapons, and the socio-economic impacts of AI-driven automation. So, the main purpose of the paper is to study the impacts of AI from different points of view, including social, economic, and political perspectives. Furthermore, it discusses strategies for mitigating the risks associated with AI, emphasizing the importance of ethical guidelines, regulatory frameworks, and international cooperation. Overall, this abstract provides a comprehensive overview of the complex considerations surrounding the impact of AI on humanity and underscores the need for thoughtful deliberation and proactive measures to ensure a beneficial and responsible integration of AI into society.
41

Kunze, Lars. "Build Back Better with Responsible AI". KI - Künstliche Intelligenz 35, no. 1 (March 2021): 1–3. http://dx.doi.org/10.1007/s13218-021-00707-9.

Full text
42

Anupama, T., and S. Rosita. "Neuromarketing Insights Enhanced by Artificial Intelligence". ComFin Research 12, no. 2 (1 April 2024): 24–28. http://dx.doi.org/10.34293/commerce.v12i2.7300.

Full text
Abstract:
The intersection of neuromarketing and artificial intelligence is examined in this research, along with the significant implications for transforming marketing tactics. By using neuroscience methods to identify subconscious reactions to marketing stimuli, neuromarketing sheds light on consumer behaviour. The integration of AI enhances neuromarketing research by effectively analysing neurodata and detecting significant patterns. When combined, they allow for marketing initiatives that are emotionally compelling, targeted, and personalised. However, ethical issues pertaining to bias in AI algorithms, customer privacy, and societal ramifications need to be taken into account. Businesses must adopt this multidisciplinary strategy if they want to stay ahead in the increasingly competitive market. This study advocates for responsible and ethical use in marketing practices while highlighting the transformational potential of utilising AI technologies with neuroscience findings.
43

Prylypko, Daryna. "Artificial intelligence and copyright". Theory and Practice of Intellectual Property, no. 2 (6 July 2021): 15–22. http://dx.doi.org/10.33731/22021.236526.

Full text
Abstract:
Key words: copyright, work, artificial intelligence, computer program. The article analyzes the problems of Ukrainian legislation regarding copyright in works created by means of artificial intelligence, in particular the question of who owns the copyright in such works. On the one hand, it could be the developer of a computer program; on the other hand, it could be a client or an employer. Situations already arise in which robots create something new and original, as happened with the "New Rembrandt" project, in which computers created a unique portrait in the manner of Rembrandt. This raises the question of where, in this portrait, the original intellectual work of the developers of these computers and programs lies, and whether the portrait could have been created at all without the people who developed the special machines, programs and computers. The article's author proposes to supplement Ukrainian legislation with the following norm: the owner of copyright in a work created by means of artificial intelligence should be the natural person who uses the artificial intelligence for these purposes within an official relationship or on the basis of a contract; in the case of automatic generation of such a work by artificial intelligence, the owner of the copyright should be the developer. Another question also arises: who will be responsible for damage caused by artificial intelligence? As an example of a solution to this issue, Resolution 2015/2103 (INL) is cited, which notes that a human agent could be held responsible for the damage caused, since it is not always the developer who is responsible. The legislation and judicial practice of foreign countries are also explored, and ways of overcoming the mentioned problems in Ukrainian legislation are proposed, such as amending the legislation to state explicitly who owns the copyright in works created by means of artificial intelligence and in which cases that person becomes the owner of the copyright. These issues, however, should probably be resolved at the international level in view of globalization.
44

Dzyaloshinsky, I. M. "Artificial Intelligence: A Humanitarian Perspective". Vestnik NSU. Series: History and Philology 21, no. 6 (17 June 2022): 20–29. http://dx.doi.org/10.25205/1818-7919-2022-21-6-20-29.

Full text
Abstract:
The article is devoted to the study of the features of human intelligence and the intelligence of complex computer systems, usually referred to as artificial intelligence (AI). As a hypothesis, a statement was formulated about a significant difference between human and artificial intelligence. Human intelligence is a product of a multi-thousand-year history of the development and interaction of three interrelated processes: 1) the formation and development of the human personality; 2) the formation of complex network relationships between members of the social community; 3) collective activity as the basis for the existence and development of communities and individuals. AI is a complex of technological solutions that imitate human cognitive processes. Because of this, with all the options for technical development (acceleration of processes for collecting and processing data and finding solutions, the use of computer vision, speech recognition and synthesis, etc.), AI will always be associated with human activity. In other words, only people (not machines) are the ultimate source and determinant of values on which any artificial intelligence depends. No mind (human or machine) will ever be truly autonomous: everything we do depends on the social context created by other people who determine the meaning of what we want to achieve. This means that people are responsible for everything that AI does.
Gli stili APA, Harvard, Vancouver, ISO e altri
45

JASIM, FIRAS TARIK, e Karthick M. "ARTIFICIAL INTELLIGENCE INNOVATION AND HUMAN RESOURCE RECRUITMENT". Tamjeed Journal of Healthcare Engineering and Science Technology 1, n. 2 (4 agosto 2023): 20–29. http://dx.doi.org/10.59785/tjhest.v1i2.22.

Testo completo
Abstract (sommario):
This research presents the perspectives of personnel in various organizations, from senior executives to the operational staff responsible for recruitment in private organizations in India, on artificial intelligence innovation in human resource recruitment. Insights collected from a sample of 22 respondents are combined with theoretical research to study the feasibility, advantages, and effects of artificial intelligence innovation in human resource recruitment, and the findings are offered as recommendations for organizations seeking to apply such innovation to their recruitment processes.
Gli stili APA, Harvard, Vancouver, ISO e altri
46

KIRILLOVA, Elena Anatol'yevna, Oleg Evgenyevich BLINKOV, Natalija Ivanovna OGNEVA, Aleksey Sergeevich VRAZHNOV e Natal'ja Vladimirovna SERGEEVA. "Artificial Intelligence as a New Category of Civil Law". Journal of Advanced Research in Law and Economics 11, n. 1 (31 marzo 2020): 91. http://dx.doi.org/10.14505//jarle.v11.1(47).12.

Testo completo
Abstract (sommario):
This research considers the legal status of artificial intelligence technology. As a technology of the future, artificial intelligence is actively expanding its capabilities at the present stage of society's development. In this regard, the concept of 'artificial intelligence' and the application of legal norms in resolving issues of legal responsibility for the operation of artificial intelligence technologies require definition. The main purpose of this study is to define the concept of 'artificial intelligence' and to determine whether artificial intelligence technologies are an object or a subject of law. The article analyses possible approaches to the disclosure of the concept of 'artificial intelligence' as a legal category and its relationship with the concepts of 'robot' and 'cyberphysical system', and examines the issues of legal responsibility for the operation of artificial intelligence. For these purposes, the methods of collecting and studying individual instances, generalization, scientific abstraction, and the cognition of consistent patterns were used, as well as the methods of objectivity, concreteness, and pluralism, among others. The study concludes that artificial intelligence technology is an autonomous, self-organizing computer-software or cyberphysical system with the ability and capacity to think, learn, and make decisions independently, to perceive and model surrounding images and symbols, relationships, and processes, and to implement its own decisions. The following general properties of artificial intelligence technologies have been identified: autonomy; the ability to perceive conditions (situations) and to make and implement its own decisions; and the ability to adapt its behavior, to learn, to communicate with other artificial intelligence, and to take into account, accumulate, and reproduce experience (including human experience). In the present historical period, artificial intelligence technology should be regarded as an object of law. Legal responsibility for the operation of artificial intelligence lies with the operator or another person who sets the parameters of its operation and controls its behavior; the creator (manufacturer) of the artificial intelligence is also recognized as a responsible person. This conclusion makes it possible to bring the category of artificial intelligence into the legal field and to determine the persons responsible for its improper operation.
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Chen, Bingjun. "Analysis of the Difference between the Liability for Infringement of Artificial Intelligence Products and the Liability for Subsequent Observation Obligations". Journal of Economics and Law 1, n. 2 (marzo 2024): 83–87. http://dx.doi.org/10.62517/jel.202414211.

Testo completo
Abstract (sommario):
Artificial intelligence is currently entering an era of rapid development: AI technology is growing quickly, can already be applied in many areas of social life, and products built around it are abundant and have become part of ordinary people's lives. Because of their autonomy, artificial intelligence products may at times operate unattended, and damage to the person or property of others may occur during their operation, giving rise to tort liability. There are two ways to pursue accountability. The first is to pursue product liability for the infringement directly, so that the specific party that caused the infringement through the artificial intelligence product bears the tort liability. The second is to pursue the liability of the producer and seller of the artificial intelligence product for breach of their subsequent observation obligations: where the producer and seller fail to fulfill these obligations, defects in the artificial intelligence product are not detected in a timely manner, liability for breach of the subsequent observation obligation arises, and the responsible parties are the producers and sellers of the artificial intelligence product.
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Deepa, Dr N., G. Aswini, G. Asenath Jemimah e S. Kaviya. "Impact of Artificial Intelligence in Human Psychology". International Journal for Research in Applied Science and Engineering Technology 12, n. 3 (31 marzo 2024): 596–601. http://dx.doi.org/10.22214/ijraset.2024.58863.

Testo completo
Abstract (sommario):
This paper examines the diverse impacts of AI on human psychology, highlighting both its potential benefits and the associated ethical considerations. As AI continues to evolve, it is essential to navigate these complexities thoughtfully to ensure the responsible and effective integration of the technology into psychological practice. The impact of AI on human psychology depends on several factors, including the design and application of AI systems, social values, and individual experience, and it spans domains such as diagnosis, treatment, research methodology, and ethics. Overall, the influence of AI on human psychology has both merits and demerits.
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Mincu, Constantin, e Dorin Alexandrescu. "Artificial Intelligence – Evolution, Challenge or Threat". Land Forces Academy Review 29, n. 2 (1 giugno 2024): 247–52. http://dx.doi.org/10.2478/raft-2024-0026.

Testo completo
Abstract (sommario):
The aim of this article is to stimulate interest among military specialists and beyond in a field that has developed explosively in recent years. It offers an overview of the benefits, risks, and potential threats of artificial intelligence (AI), currently the focus of politicians, experts, and tech companies worldwide. Without claiming to provide answers or solutions to questions about AI's future evolution, we present a selection of information on this process and highlight some conclusions we find relevant and realistic for the field's future development. Our approach also serves as an alarm bell for the responsible and controlled management of AI's evolution, to prevent uncontrollable consequences for society. Lastly, we see the introduction of comprehensive AI training programs in education and professional development for experts in all fields as a crucial and necessary solution to guide this process in the anticipated direction.
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Ounasser, Nabila, Maryem Rhanoui, Mounia Mikram e Bouchra El Asri. "A brief on artificial intelligence in medicine". International Journal of Advances in Applied Sciences 13, n. 4 (1 dicembre 2024): 1055. http://dx.doi.org/10.11591/ijaas.v13.i4.pp1055-1064.

Testo completo
Abstract (sommario):
This review explores the transformative impact of artificial intelligence (AI) in medicine. It discusses the benefits of AI, its core technologies, integration processes, and diverse applications. AI enhances diagnostics, personalizes treatments, and optimizes healthcare operations. Machine learning and deep learning are key AI technologies, while explainable AI helps ensure transparency. The review emphasizes the integration journey and highlights AI applications ranging from image-based diagnosis to telemedicine. Ethical concerns, data privacy, regulation, and algorithmic bias remain challenges. The future promises continued innovation, global health equity, and the responsible application of AI in medicine.
Gli stili APA, Harvard, Vancouver, ISO e altri
