Journal articles on the topic 'AI technology ethics'

Consult the top 50 journal articles for your research on the topic 'AI technology ethics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Siau, Keng, and Weiyu Wang. "Artificial Intelligence (AI) Ethics." Journal of Database Management 31, no. 2 (April 2020): 74–87. http://dx.doi.org/10.4018/jdm.2020040105.

Abstract:
Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How can one adhere to the ethics of AI to build ethical AI?
2

Johnson, Sylvester A. "Technology Innovation and AI Ethics." Ethics of Artificial Intelligence, no. 299 (September 19, 2019): 14–27. http://dx.doi.org/10.29242/rli.299.2.

3

Khokhlov, A. L., and D. Yu Belousov. "Ethical aspects of using software with artificial intelligence technology." Kachestvennaya Klinicheskaya Praktika = Good Clinical Practice 20, no. 1 (April 24, 2021): 70–84. http://dx.doi.org/10.37489/2588-0519-2021-1-70-84.

Abstract:
Today, artificial intelligence (AI) technologies can offer solutions to many social problems, including those used to diagnose and treat diseases. However, for this it is necessary to provide an appropriate legal basis. The article presents a brief history of the development of AI, explains the conceptual apparatus, describes the legal basis for its development and implementation in Russian healthcare, the methodology for conducting clinical trials of AI systems, and gives their classification. Particular attention is paid to the ethical principles of clinical trials of AI systems. The ethical examination of projects of clinical trials of AI systems by the Ethics Committee is considered in detail.
4

Miao, Zeyi. "Investigation on human rights ethics in artificial intelligence researches with library literature analysis method." Electronic Library 37, no. 5 (October 7, 2019): 914–26. http://dx.doi.org/10.1108/el-04-2019-0089.

Abstract:
Purpose: The purpose of this paper was to identify whether artificial intelligence (AI) products can possess human rights, how to define their rights and obligations and what ethical standards they should follow. In this study, the human rights ethical dilemma encountered in the application and development of AI technology is focused on and analyzed in detail in the light of the existing research status of AI ethics.
Design/methodology/approach: In this study, first of all, the development and application of AI technology, as well as the concept and characteristics of human rights ethics, are introduced. Second, the human rights ethics of AI technology are introduced in detail, including the human rights endowment of AI machines, the fault liability of AI machines and the moral orientation of AI machines. Finally, the approaches to human rights ethics are proposed to ensure that AI technology serves human beings. Every link of its research, production and application should be strictly managed and supervised.
Findings: The results show that the research in this study can provide help for the related problems encountered in AI practice. The intelligent library integrates human rights protection organically so that readers or users can experience a more intimate service in this system. It is a library operation mode with more efficient and convenient characteristics, based on digital, networked and intelligent information science. It aims at using the greenest way and digital means to realize the reading and research of human rights protection literature through the literature analysis method.
Originality/value: The intelligent library is the future development mode of new libraries, which can realize broad interconnection and sharing. It is people-oriented, enables intelligent management and service, and establishes the importance of the principle of human rights protection and the specific idea of the principle. The development of science and technology brings not only convenience to people's social life but also questions to be thought about. People should reduce its potential harm, so as to make AI technology continue to benefit humankind.
5

Héder, Mihály. "A criticism of AI ethics guidelines." Információs Társadalom 20, no. 4 (December 31, 2020): 57. http://dx.doi.org/10.22503/inftars.xx.2020.4.5.

Abstract:
This paper investigates the current wave of Artificial Intelligence Ethics Guidelines (AIGUs). The goal is not to provide a broad survey of the details of such efforts; instead, the reasons for the proliferation of such guidelines are investigated. Two main research questions are pursued. First, what is the justification for the proliferation of AIGUs, and what are the reasonable goals and limitations of such projects? Second, what are the specific concerns of AI that are so unique that general technology regulation cannot cover them? The paper reveals that the development of AI guidelines is part of a decades-long trend of an ever-increasing express need for stronger social control of technology, and that many of the concerns of the AIGUs are not specific to the technology itself, but are rather about transparency and human oversight. Nevertheless, the positive potential of the situation is that the intense world-wide focus on AIGUs will yield such profound guidelines that the regulation of other technologies may want to follow suit.
6

Ouchchy, Leila, Allen Coin, and Veljko Dubljević. "AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media." AI & SOCIETY 35, no. 4 (March 29, 2020): 927–36. http://dx.doi.org/10.1007/s00146-020-00965-5.

Abstract:
As artificial intelligence (AI) technologies become increasingly prominent in our daily lives, media coverage of the ethical considerations of these technologies has followed suit. Since previous research has shown that media coverage can drive public discourse about novel technologies, studying how the ethical issues of AI are portrayed in the media may lead to greater insight into the potential ramifications of this public discourse, particularly with regard to development and regulation of AI. This paper expands upon previous research by systematically analyzing and categorizing the media portrayal of the ethical issues of AI to better understand how media coverage of these issues may shape public debate about AI. Our results suggest that the media has a fairly realistic and practical focus in its coverage of the ethics of AI, but that the coverage is still shallow. A multifaceted approach to handling the social, ethical and policy issues of AI technology is needed, including increasing the accessibility of correct information to the public in the form of fact sheets and ethical value statements on trusted webpages (e.g., government agencies), collaboration and inclusion of ethics and AI experts in both research and public debate, and consistent government policies or regulatory frameworks for AI technology.
7

Etzioni, Amitai, and Oren Etzioni. "The ethics of robotic caregivers." Interaction Studies 18, no. 2 (December 8, 2017): 174–90. http://dx.doi.org/10.1075/is.18.2.02etz.

Abstract:
As Artificial Intelligence technology seems poised for a major take-off and changing societal dynamics are creating a high demand for caregivers for elders, children, and those infirmed, robotic caregivers may well be used much more often. This article examines the ethical concerns raised by the use of AI caregivers and concludes that many of these concerns are avoided when AI caregivers operate as partners rather than substitutes. Furthermore, most of the remaining concerns are minor and are faced by human caregivers as well. Nonetheless, because AI caregivers’ systems are learning systems, an AI caregiver could stray from its initial guidelines. Therefore, subjecting AI caregivers to an AI-based oversight system is proposed to ensure that their actions remain both legal and ethical.
8

Jacobowitz, Jan L., and Justin Ortiz. "Happy Birthday Siri! Dialing in Legal Ethics for Artificial Intelligence, Smartphones, and Real Time Lawyers." Symposium Edition - Artificial Intelligence and the Legal Profession 4, no. 5 (April 2018): 407–42. http://dx.doi.org/10.37419/jpl.v4.i5.1.

Abstract:
This Article explores the history of AI and the advantages and potential dangers of using AI to assist with legal research, administrative functions, contract drafting, case evaluation, and litigation strategy. This Article also provides an overview of security vulnerabilities attorneys should be aware of and the precautions that they should employ when using their smartphones (in both their personal and professional lives) in order to adequately protect confidential information. Finally, this Article concludes that lawyers who fail to explore the ethical use of AI in their practices may find themselves at a professional disadvantage and in dire ethical straits. The first part of this Article defines the brave new world of AI and how it both directly and indirectly impacts the practice of law. The second part of this Article explores legal ethics considerations when selecting and using AI vendors and virtual assistants. The third part outlines technology risks and potential solutions for lawyers who seek to embrace smartphone technology while complying with legal ethics obligations. The Article concludes with an optimistic eye toward the future of the legal profession.
9

Wasilow, Sherry, and Joelle B. Thorpe. "Artificial Intelligence, Robotics, Ethics, and the Military: A Canadian Perspective." AI Magazine 40, no. 1 (March 28, 2019): 37–48. http://dx.doi.org/10.1609/aimag.v40i1.2848.

Abstract:
Defense and security organizations depend upon science and technology to meet operational needs, predict and counter threats, and meet increasingly complex demands of modern warfare. Artificial intelligence and robotics could provide solutions to a wide range of military gaps and deficiencies. At the same time, the unique and rapidly evolving nature of AI and robotics challenges existing polices, regulations, and values, and introduces complex ethical issues that might impede their development, evaluation, and use by the Canadian Armed Forces (CAF). Early consideration of potential ethical issues raised by military use of emerging AI and robotics technologies in development is critical to their effective implementation. This article presents an ethics assessment framework for emerging AI and robotics technologies. It is designed to help technology developers, policymakers, decision makers, and other stakeholders identify and broadly consider potential ethical issues that might arise with the military use and integration of emerging AI and robotics technologies of interest. We also provide a contextual environment for our framework, as well as an example of how our framework can be applied to a specific technology. Finally, we briefly identify and address several pervasive issues that arose during our research.
10

Joamets, Kristi, and Archil Chochia. "Access to Artificial Intelligence for Persons with Disabilities: Legal and Ethical Questions Concerning the Application of Trustworthy AI." Acta Baltica Historiae et Philosophiae Scientiarum 9, no. 1 (May 27, 2021): 51–66. http://dx.doi.org/10.11590/abhps.2021.1.04.

Abstract:
Digitalisation and emerging technologies affect our lives and are increasingly present in a growing number of fields. Ethical implications of the digitalisation process have therefore long been discussed by scholars. The rapid development of artificial intelligence (AI) has taken the legal and ethical discussion to another level. There is no doubt that AI can have a positive impact on society. The focus here, however, is on its more negative impact. This article will specifically consider how the law and ethics in their interaction can be applied in a situation where a disabled person needs some kind of assistive technology to participate in society as an equal member. This article intends to investigate whether the EU Guidelines for Trustworthy AI, as a milestone of ethics concerning technology, have the power to change the current practice of how social and economic rights are applied. The main focus of the article is the ethical requirements ‘Human agency and oversight’ and, more specifically, fundamental rights.
11

Aggarwal, Nikita. "Introduction to the Special Issue on Intercultural Digital Ethics." Philosophy & Technology 33, no. 4 (September 19, 2020): 547–50. http://dx.doi.org/10.1007/s13347-020-00428-1.

Abstract:
Recent advances in the capability of digital information technologies—particularly due to advances in artificial intelligence (AI)—have invigorated the debate on the ethical issues surrounding their use. However, this debate has often been dominated by ‘Western’ ethical perspectives, values and interests, to the exclusion of broader ethical and socio-cultural perspectives. This imbalance carries the risk that digital technologies produce ethical harms and lack social acceptance, when the ethical norms and values designed into these technologies collide with those of the communities in which they are delivered and deployed. This special issue takes a step towards broadening the approach of digital ethics, by bringing together a range of cultural, social and structural perspectives on the ethical issues relating to digital information technology. Importantly, it refreshes and reignites the field of Intercultural Digital Ethics for the age of AI and ubiquitous computing.
12

Arambula, Alexandra M., and Andrés M. Bur. "Ethical Considerations in the Advent of Artificial Intelligence in Otolaryngology." Otolaryngology–Head and Neck Surgery 162, no. 1 (November 26, 2019): 38–39. http://dx.doi.org/10.1177/0194599819889686.

Abstract:
Artificial intelligence (AI) is quickly expanding within the sphere of health care, offering the potential to enhance the efficiency of care delivery, diminish costs, and reduce diagnostic and therapeutic errors. As the field of otolaryngology also explores use of AI technology in patient care, a number of ethical questions warrant attention prior to widespread implementation of AI. This commentary poses many of these ethical questions for consideration by the otolaryngologist specifically, using the 4 pillars of medical ethics—autonomy, beneficence, nonmaleficence, and justice—as a framework and advocating both for the assistive role of AI in health care and for the shared decision-making, empathic approach to patient care.
13

Sætra, Henrik Skaug, and Eduard Fosch-Villaronga. "Research in AI has Implications for Society: How do we Respond?" Morals & Machines 1, no. 1 (2021): 62–75. http://dx.doi.org/10.5771/2747-5182-2021-1-62.

Abstract:
Artificial intelligence (AI) offers previously unimaginable possibilities, solving problems faster and more creatively than before, representing and inviting hope and change, but also fear and resistance. Unfortunately, while the pace of technology development and application dramatically accelerates, the understanding of its implications does not follow suit. Moreover, while mechanisms to anticipate, control, and steer AI development to prevent adverse consequences seem necessary, the current power dynamics on which society should frame such development are causing much confusion. In this article we ask whether AI advances should be restricted, modified, or adjusted based on their potential legal, ethical, and societal consequences. We examine four possible arguments in favor of subjecting scientific activity to stricter ethical and political control and critically analyze them in light of the perspective that science, ethics, and politics should strive for a division of labor and balance of power rather than a conflation. We argue that the domains of science, ethics, and politics should not conflate if we are to retain the ability to adequately assess the appropriate course of action in light of AI's implications. We do so because such conflation could lead to uncertain and questionable outcomes, such as politicized science or ethics washing, ethics constrained by corporate or scientific interests, insufficient regulation, and political activity due to a misplaced belief in industry self-regulation. As such, we argue that the different functions of science, ethics, and politics must be respected to ensure AI development serves the interests of society.
17

Baric-Parker, Jean, and Emily E. Anderson. "Patient Data-Sharing for AI: Ethical Challenges, Catholic Solutions." Linacre Quarterly 87, no. 4 (May 15, 2020): 471–81. http://dx.doi.org/10.1177/0024363920922690.

Abstract:
Recent news of Catholic and secular healthcare systems sharing electronic health record (EHR) data with technology companies for the purposes of developing artificial intelligence (AI) applications has drawn attention to the ethical and social challenges of such collaborations, including threats to patient privacy and confidentiality, undermining of patient consent, and lack of corporate transparency. Although the United States Catholic Conference of Bishops’ Ethical and Religious Directives for Health Care Services (ERDs) address collaborations between US Catholic healthcare providers and other entities, the ERDs do not adequately address the novel concerns seen in EHR data-sharing for AI development. Neither does the Health Insurance Portability and Accountability Act (HIPAA) privacy rule. This article describes ethical and social problems observed in recent patient data-sharing collaborations with AI companies and analyzes them in light of the guiding principles of the ERDs as well as the 2020 Rome Call to AI Ethics (RCAIE) document recently released by the Vatican. While both the ERDs and RCAIE guiding principles can inform future collaborations, we suggest that the next revision of the ERDs should consider addressing data-sharing and AI more directly. Summary: Electronic health record data-sharing with artificial intelligence developers presents unique ethical and social challenges that can be addressed with updated United States Catholic Conference of Bishops’ Ethical and Religious Directives and guidance from the Vatican’s 2020 Rome Call to AI Ethics.
18

Kaczmarek-Śliwińska, Monika. "Organisational Communication in the Age of Artificial Intelligence Development. Opportunities and Threats." Social Communication 5, no. 2 (December 1, 2019): 62–68. http://dx.doi.org/10.2478/sc-2019-0010.

Abstract:
Organisational communication in the age of artificial intelligence (AI) development is an opportunity but also a challenge. Thanks to the changing media space and the development of technology, it is possible to automate work and to increase the effectiveness, power of influence, and distribution of content. However, these developments also raise questions concerning risks, ranging from those associated with the social sphere (reducing the number of jobs) to the ethics of communication and the ethics of the public relations profession (whether still PR ethics, or AI ethics in PR). The article will outline the opportunities and concerns resulting from the use of AI in the communication of an organisation.
19

Nemitz, Paul. "Constitutional democracy and technology in the age of artificial intelligence." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (October 15, 2018): 20180089. http://dx.doi.org/10.1098/rsta.2018.0089.

Abstract:
Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, it is legitimate and necessary to ask the question how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy. This paper first describes the four core elements of today's digital power concentration, which need to be seen in cumulation and which, seen together, are both a threat to democracy and to functioning markets. It then recalls the experience with the lawless Internet and the relationship between technology and the law as it has developed in the Internet economy and the experience with GDPR before it moves on to the key question for AI in democracy, namely which of the challenges of AI can be safely and with good conscience left to ethics, and which challenges of AI need to be addressed by rules which are enforceable and encompass the legitimacy of democratic process, thus laws. The paper closes with a call for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose. This article is part of a theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.
20

Smith, Maxwell J., and Sally Bean. "AI and Ethics in Medical Radiation Sciences." Journal of Medical Imaging and Radiation Sciences 50, no. 4 (December 2019): S24–S26. http://dx.doi.org/10.1016/j.jmir.2019.08.005.

21

Mudundi, Sunil Varma, Tejaswi Pasumathy, and Dr Raul Villamarin Roudriguez. "The sovereignty of Artificial Intelligence over Human Ethics and Heedfulness." Journal of University of Shanghai for Science and Technology 23, no. 08 (August 19, 2021): 657–65. http://dx.doi.org/10.51201/jusst/21/08444.

Abstract:
Artificial Intelligence in present days is in extreme growth. We see AI in almost every field in work today. Artificial Intelligence is being introduced in crucial roles like recruiting, Law enforcement and in the Military. To be involved in such crucial roles, it needs lots of trusts and scientific evaluation. With the evolution of artificial intelligence, automatic machines are in a speed run in this decade. Developing a machine/robot with a set of tools/programs will technically sort of some of the challenges. But the problem arises when we completely depend on robots/machines. Artificial intelligence this fast-growing technology will be very helpful when we take help from it for just primary needs like face detection, sensor-controllers, bill counters…etc. But we face real challenges when we involve with decision making, critical thinking…etc. In mere future, automated machines are going to replace many positions of humans. Many firms from small to big are opting for Autonomous means just to make their work simpler and efficient. Using a machine gives more accurate results and outputs in simulated time. As technology is developing fast, they should be developed as per societal rules and conditions. Scientists and analysts predict that singularity in AI can be achieved by 2047. Ray Kurzweil, Director of Technology at Google predicted that AI may achieve singularity in 2047. We all saw the DRDO invention on autonomous fighting drones. They operate without any human assistance. They evaluate target type, its features and eliminate them based on edge detection techniques using computer vision. AI is also into recruiting people for companies. Some companies started using AI Recruiter to evaluate the big pool of applications and select efficient ones into the industry. This is possible through computer vision and machine learning algorithms. In recent times AI is being used as a suggestion tool for judgement too. 
Apart from all these advancements, there are malicious scenarios that may affect humankind: when AI is used in the wrong way, many lives are put in danger. Is it possible, by collecting all the good and evil of past experience, to feed a machine so that it can work autonomously? Philosophers and scholars have laid down sets of guidelines for society, but is it practically possible to follow them once AI achieves singularity? And when we consider the neural networking of humans, humans have good decision-making skills, critical thinking, and similar abilities. We briefly discuss ethics and AI robots/machines that involve consciousness and cognitive abilities. In this upgrading technological world, AI is running a large number of operations, so we discuss how ethics can be followed and how we can balance ethics and technology in both phases. We will deep dive into some of these interesting areas in this article.
APA, Harvard, Vancouver, ISO, and other styles
22

Ryan, Mark. "In AI We Trust: Ethics, Artificial Intelligence, and Reliability." Science and Engineering Ethics 26, no. 5 (June 10, 2020): 2749–67. http://dx.doi.org/10.1007/s11948-020-00228-y.

Full text
Abstract:
One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states and cannot be held responsible for its actions—requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.
APA, Harvard, Vancouver, ISO, and other styles
23

LI, Chao. "跨越AI醫生與病人之間的意義鴻溝——評<在AI醫生和病人之間——人工智能診斷技術的內在邏輯及其對病人主體性建構的影響>." International Journal of Chinese & Comparative Philosophy of Medicine 17, no. 2 (January 1, 2019): 55–59. http://dx.doi.org/10.24112/ijccpm.171674.

Full text
Abstract:
LANGUAGE NOTE | Document text in Chinese; abstract in English only.There is a gap in meaning between the AI physician and patient, relating to the generation of meaning and the construction of personality. Bridging this gap in meaning has become an unavoidable problem when rethinking the application of AI technology in the medical field. Only when the construction of patients’ subjectivity turns from practical to ethical thought can we fully demonstrate the core of physician–patient interaction; that is, the generation of meaning and the construction of personality. Only then, facing the life world itself, starting with ethics, relationships, emotions, etc., can we connect the AI physician with the patient. The replacement of human physicians by AI physicians is neither technologically inevitable nor philosophically viable. Both technology and philosophy have the possibility of a logical turn.DOWNLOAD HISTORY | This article has been downloaded 31 times in Digital Commons before migrating into this platform.
APA, Harvard, Vancouver, ISO, and other styles
24

Sholla, Sahil, Roohie Naaz Mir, and Mohammad Ahsan Chishti. "A Fuzzy Logic-Based Method for Incorporating Ethics in the Internet of Things." International Journal of Ambient Computing and Intelligence 12, no. 3 (July 2021): 98–122. http://dx.doi.org/10.4018/ijaci.2021070105.

Full text
Abstract:
IoT is expected to have far-reaching consequences for society due to a wide spectrum of applications like smart healthcare, smart transportation, smart agriculture, smart homes, etc. However, ethical considerations of AI-enabled smart devices have not been duly considered from a design perspective. In this paper, the authors propose a novel fuzzy logic-based method to incorporate ethics within the smart things of IoT. Ethical considerations relevant to a machine context are represented in terms of fuzzy ethics variables (FEVs) and ethics rules. For each ethics rule, a value called the scaled ethics value (SEV) is used to indicate its ethical desirability. In order to model flexibility in ethical response, the authors employ the concept of ethics modes that selectively allow scenarios depending on the value of SEV. The method offers a viable mechanism for smart devices to acquire ethical sensitivity, which can pave the way for a technological society amenable to human ethics. However, the method does not account for varying ethics; as such, incorporating learning mechanisms represents a promising research direction.
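As a rough illustration of the mechanism this abstract describes (fuzzy ethics variables, ethics rules carrying a scaled ethics value, and ethics modes that gate actions by SEV), a minimal sketch might look as follows. All names, membership functions, and thresholds here are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch of the fuzzy-ethics idea described above: a fuzzy
# ethics variable (FEV), an ethics rule producing a scaled ethics value
# (SEV), and an "ethics mode" threshold that selectively permits actions.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# An FEV: how "intrusive" a smart-device action is, on a 0..1 scale.
def intrusiveness_high(x):
    return triangular(x, 0.4, 1.0, 1.6)  # membership in "highly intrusive"

# An ethics rule maps the fuzzy condition to an SEV:
# higher SEV = more ethically desirable.
def sev_for_action(intrusiveness):
    return 1.0 - intrusiveness_high(intrusiveness)

def permitted(intrusiveness, ethics_mode_threshold=0.5):
    """An ethics mode allows only actions whose SEV clears the threshold."""
    return sev_for_action(intrusiveness) >= ethics_mode_threshold

print(permitted(0.1))   # low intrusiveness  -> True
print(permitted(0.95))  # high intrusiveness -> False
```

A learning mechanism, as the abstract suggests, could adapt the membership functions or the threshold over time rather than keeping them fixed.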
APA, Harvard, Vancouver, ISO, and other styles
25

Carr, Sarah. "‘AI gone mental’: engagement and ethics in data-driven technology for mental health." Journal of Mental Health 29, no. 2 (January 30, 2020): 125–30. http://dx.doi.org/10.1080/09638237.2020.1714011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

KRIEBITZ, Alexander, and Christoph LÜTGE. "Artificial Intelligence and Human Rights: A Business Ethical Assessment." Business and Human Rights Journal 5, no. 1 (January 2020): 84–104. http://dx.doi.org/10.1017/bhj.2019.28.

Full text
Abstract:
Artificial intelligence (AI) has evolved as a disruptive technology, impacting a wide range of human rights-related issues ranging from discrimination to supply chain due diligence. Given the increasing human rights obligations of companies and the intensifying discourse on AI and human rights, we shed light on the responsibilities of corporate actors in terms of human rights standards in the context of developing and using AI. What implications do human rights obligations have for companies developing and using AI? In our article, we discuss firstly whether AI inherently conflicts with human rights and human autonomy. Next, we discuss how AI might be linked to the beneficence criterion of AI ethics and how AI might be applied in human rights-related areas. Finally, we elaborate on individual aspects of what it means to conform to human rights, addressing AI-specific problem areas.
APA, Harvard, Vancouver, ISO, and other styles
27

Li, Fan, and Yuan Lu. "ENGAGING END USERS IN AN AI-ENABLED SMART SERVICE DESIGN - THE APPLICATION OF THE SMART SERVICE BLUEPRINT SCAPE (SSBS) FRAMEWORK." Proceedings of the Design Society 1 (July 27, 2021): 1363–72. http://dx.doi.org/10.1017/pds.2021.136.

Full text
Abstract:
Artificial Intelligence (AI) has expanded into diverse contexts: it infiltrates our social lives and is a critical part of algorithmic decision-making. The adoption of AI technology, especially AI-enabled design, by end users who are not AI experts is still limited. Incomprehensible, opaque decision-making and the difficulty of using AI are obstacles that prevent these end users from adopting AI technology. How to design the user experience (UX) based on AI technologies is an interesting topic to explore. This paper investigates how non-AI-expert end users can be engaged in the design process of an AI-enabled application by using a framework called the Smart Service Blueprint Scape (SSBS), which aims to establish a bridge between UX and AI systems by mapping and translating AI decisions based on UX. A Dutch mobility service called ‘stUmobiel’ was taken as a design case study. The goal is to design a reservation platform with stUmobiel end users. Co-creating with case users and ensuring that they understand the decision-making and service-provision process of the AI-enabled design is crucial to promote users’ adoption. Furthermore, concerns of AI ethics also arise in the design process and should be discussed in a broader sense.
APA, Harvard, Vancouver, ISO, and other styles
28

Morley, Jessica, Luciano Floridi, Libby Kinsey, and Anat Elhalal. "From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices." Science and Engineering Ethics 26, no. 4 (December 11, 2019): 2141–68. http://dx.doi.org/10.1007/s11948-019-00165-5.

Full text
Abstract:
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. 10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
APA, Harvard, Vancouver, ISO, and other styles
29

Bærøe, Kristine, and Torbjørn Gundersen. "Translational Ethics: Justified Roles of Bioethicists Within and Beyond Lifecycles of Artificial Intelligence Systems in Health." Studia Universitatis Babeş-Bolyai Bioethica 66, Special Issue (September 9, 2021): 30–31. http://dx.doi.org/10.24193/subbbioethica.2021.spiss.10.

Full text
Abstract:
Background: Artificial Intelligence (AI) systems hold great promise for future development within a variety of sectors. At the same time, there is also great concern about harms and potential misuse of AI. Upscaling and implementing existing AI systems already have the potential to affect, severely and potentially irreversibly, fundamental social conditions for social interaction, professional autonomy, and political governance. Therefore, guiding principles and frameworks to support developers and governing authorities are emerging around the world to foster justified trust in AI research and innovation. Ultimately, these safeguarding institutions and mechanisms rely on human knowledge and wisdom. Health is an area that is expected to benefit from AI-based technologies aimed at promoting beneficial, accurate and effective preventive and curative interventions. Machine learning technologies might also be used to improve the accuracy of the evidence base for cost-effective and beneficial decision-making. How can bioethicists contribute to promoting beneficial AI interventions and avoiding harms produced by AI technology? What would be justified roles for bioethicists in the development and use of AI systems? Method: The paper is based on literature review and philosophical reflection. Discussion: In this presentation, we base our analysis on an analytical decomposition of the life cycle of AI systems into the phases of development, deployment and use. Furthermore, we use a framework of translational ethics proposed by Bærøe, and identify a variety of structural tasks, as well as limitations to them, for bioethicists to undertake within this emerging multifold area of experts and disciplines.
APA, Harvard, Vancouver, ISO, and other styles
30

Coin, Allen, Megan Mulder, and Veljko Dubljević. "Ethical Aspects of BCI Technology: What Is the State of the Art?" Philosophies 5, no. 4 (October 24, 2020): 31. http://dx.doi.org/10.3390/philosophies5040031.

Full text
Abstract:
Brain–Computer Interface (BCI) technology is a promising research area in many domains. Brain activity can be interpreted through both invasive and non-invasive monitoring devices, allowing for novel, therapeutic solutions for individuals with disabilities and for other non-medical applications. However, a number of ethical issues have been identified from the use of BCI technology. In this paper, we review the academic discussion of the ethical implications of BCI technology in the last five years. We conclude that some emerging applications of BCI technology—including commercial ventures that seek to meld human intelligence with AI—present new and unique ethical concerns. Further, we seek to understand how academic literature on the topic of BCIs addresses these novel concerns. Similar to prior work, we use a limited sample to identify trends and areas of concern or debate among researchers and ethicists. From our analysis, we identify two key areas of BCI ethics that warrant further research: the physical and psychological effects of BCI technology. Additionally, questions of BCI policy have not yet become a frequent point of discussion in the relevant literature on BCI ethics, and we argue this should be addressed in future work. We provide guiding questions that will help ethicists and policy makers grapple with the most important issues associated with BCI technology.
APA, Harvard, Vancouver, ISO, and other styles
31

Carlson, Kristen W. "Safe Artificial General Intelligence via Distributed Ledger Technology." Big Data and Cognitive Computing 3, no. 3 (July 8, 2019): 40. http://dx.doi.org/10.3390/bdcc3030040.

Full text
Abstract:
Artificial general intelligence (AGI) progression metrics indicate AGI will occur within decades. No proof exists that AGI will benefit humans and not harm or eliminate humans. A set of logically distinct conceptual components is proposed that are necessary and sufficient to (1) ensure various AGI scenarios will not harm humanity, and (2) robustly align AGI and human values and goals. By systematically addressing pathways to malevolent AI we can induce the methods/axioms required to redress them. Distributed ledger technology (DLT, “blockchain”) is integral to this proposal, e.g., “smart contracts” are necessary to address the evolution of AI that will be too fast for human monitoring and intervention. The proposed axioms: (1) Access to technology by market license. (2) Transparent ethics embodied in DLT. (3) Morality encrypted via DLT. (4) Behavior control structure with values at roots. (5) Individual bar-code identification of critical components. (6) Configuration Item (from business continuity/disaster recovery planning). (7) Identity verification secured via DLT. (8) “Smart” automated contracts based on DLT. (9) Decentralized applications—AI software modules encrypted via DLT. (10) Audit trail of component usage stored via DLT. (11) Social ostracism (denial of resources) augmented by DLT petitions. (12) Game theory and mechanism design.
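The axioms above are conceptual, but the audit-trail idea (axiom 10) can be illustrated with a toy hash-chained ledger. This is a generic sketch of tamper-evident record-keeping of AI-component usage, my own illustration rather than the author's implementation, and omits the distributed-consensus aspect of real DLT:

```python
import hashlib
import json

# Each record of AI-component usage is linked to the previous record by its
# hash, so tampering with any earlier entry invalidates the rest of the chain.

def record(chain, component_id, action):
    """Append a hash-linked usage record to the audit trail."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"component": component_id, "action": action, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every link; any altered entry breaks the chain."""
    prev = "0" * 64
    for e in chain:
        body = {"component": e["component"], "action": e["action"],
                "prev": e["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

trail = []
record(trail, "ai-module-7", "loaded")
record(trail, "ai-module-7", "inference")
print(verify(trail))            # True
trail[0]["action"] = "deleted"  # tamper with history
print(verify(trail))            # False
```

In an actual DLT deployment the chain would be replicated across nodes and extended by consensus, which is what makes the record resistant to a single malicious actor.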
APA, Harvard, Vancouver, ISO, and other styles
32

Khan, Wahiduzzaman, and Takudzwa Fadziso. "Ethical Issues on Utilization of AI, Robotics and Automation Technologies." Asian Journal of Humanity, Art and Literature 7, no. 2 (December 1, 2020): 79–90. http://dx.doi.org/10.18034/ajhal.v7i2.521.

Full text
Abstract:
The fast technological advancements in machine intelligence and automation may also arrive with risks and negative effects on employees, firms, and society at large. Currently, end-users, scientists, and practitioners have acknowledged the need for machine assistance and also welcome consideration of a robust ethical strategy that will allow the safe application and usage of improved technologies. AI-related ethics has been presented and considered from various standpoints and views; this paper furthers the subject. Potential ethical issues are envisaged in the areas of machine end-user perceptions, privacy, accountability, robot/human rights, the design of ethical machines, and technological singularity. It therefore poses the question: what are the current ethical issues with the use of machines? The study adopted a quantitative and qualitative approach, drawing conclusions from thematic and descriptive analysis. The results show that the majority of respondents were male (46; 65.7%) while 24 (34.3%) were female. They are mainly literate, the majority (49; 70%) are from private firms, and they come mainly from the Asian continent. Most respondents view ethical consideration as necessary in machine and automation design and believe the machine-human relationship should be improved; they hold that privacy should be instituted, consider technological singularity a severe issue, and desire the creation of ethical machines. Following these results, the study documents policy recommendations.
APA, Harvard, Vancouver, ISO, and other styles
33

Yu, Lin, and Shejiao Ding. "Ethics and risks between human and robotic interaction." Interaction Studies 20, no. 1 (July 15, 2019): 134–47. http://dx.doi.org/10.1075/is.18009.yu.

Full text
Abstract:
Robots are definitely playing an important role in human society. Low-contact machine standards mostly concern industrial robots, while close contact is in increasing demand for service robots and the like. The development of robotics with advanced hardware and artificial intelligence (AI) makes close interaction with human beings possible, but close contact raises many new issues of ethics and risk. For interaction, the related techniques of perception, cognition, and interaction are briefly introduced. For ethics, rules should be given to robot designers to include ethics for particular applications, while risks should be evaluated during experimental tests. To make efficient decisions, safety design with AI technology should be put on the agenda for roboticists. Beyond the risks, ethics raises many challenges; most can be solved by developing technologies, while some of the problems exist within human society itself, which also raises questions for human beings. A broader vision should be taken across different social departments together to avoid potentially awkward issues. It is time to welcome the world of robotics: related techniques will make life more efficient, a human-robot coexistence society will come one day, and law should be imposed on both.
APA, Harvard, Vancouver, ISO, and other styles
34

Dent, Kyle, Richelle Dumond, and Mike Kuniavsky. "A framework for systematically applying humanistic ethics when using AI as a design material." Temes de Disseny, no. 35 (July 25, 2019): 178–97. http://dx.doi.org/10.46467/tdd35.2019.178-197.

Full text
Abstract:
As machine learning and AI systems gain greater capabilities and are deployed more widely, we – as designers, developers, and researchers – must consider both the positive and negative implications of their use. In light of this, PARC’s researchers recognize the need to be vigilant against the potential for harm caused by artificial intelligence through intentional or inadvertent discrimination, unjust treatment, or physical danger that might occur against individuals or groups of people. Because AI-supported and autonomous decision making has the potential for widespread negative personal, social, and environmental effects, we aim to take a proactive stance to uphold human rights, respect individuals’ privacy, protect personal data, and enable freedom of expression and equality. Technology is not inherently neutral and reflects decisions and trade-offs made by the designers, researchers, and engineers developing it and using it in their work. Datasets often reflect historical biases. AI technologies that hire people, evaluate their job performance, deliver their healthcare, and mete out penalties are obvious examples of possible areas for systematic algorithmic errors that result in unfair or unjust treatment. Because nearly all technology includes trade-offs and embodies the values and judgments of the people creating it, it is imperative that researchers are aware of the value judgments they make and are transparent about them with all stakeholders involved.
APA, Harvard, Vancouver, ISO, and other styles
35

Breidbach, Christoph F., and Paul Maglio. "Accountable algorithms? The ethical implications of data-driven business models." Journal of Service Management 31, no. 2 (March 9, 2020): 163–85. http://dx.doi.org/10.1108/josm-03-2019-0073.

Full text
Abstract:
PurposeThe purpose of this study is to identify, analyze and explain the ethical implications that can result from the datafication of service.Design/methodology/approachThis study uses a midrange theorizing approach to integrate currently disconnected perspectives on technology-enabled service, data-driven business models, data ethics and business ethics to introduce a novel analytical framework centered on data-driven business models as the general metatheoretical unit of analysis. The authors then contextualize the framework using data-intensive insurance services.FindingsThe resulting midrange theory offers new insights into how using machine learning, AI and big data sets can lead to unethical implications. Centered around 13 ethical challenges, this work outlines how data-driven business models redefine the value network, alter the roles of individual actors as cocreators of value, lead to the emergence of new data-driven value propositions, as well as novel revenue and cost models.Practical implicationsFuture research based on the framework can help guide practitioners to implement and use advanced analytics more effectively and ethically.Originality/valueAt a time when future technological developments related to AI, machine learning or other forms of advanced data analytics are unpredictable, this study instigates a critical and timely discourse within the service research community about the ethical implications that can arise from the datafication of service by introducing much-needed theory and terminology.
APA, Harvard, Vancouver, ISO, and other styles
36

Sharp, Lucy. "Society 5.0: A brave new world." Impact 2020, no. 2 (April 15, 2020): 2–3. http://dx.doi.org/10.21820/23987073.2020.2.4.

Full text
Abstract:
Society 5.0 is Japan's concept of a technology-based, human-centred society. It is essentially an impressive upgrade on existing society that will better human existence. It will emerge from the fourth industrial revolution and will see humans and machines coexisting in harmony. Technology such as Artificial Intelligence (AI) will permeate all areas of life; including, for example, healthcare, the environment, scientific research and ethics.
APA, Harvard, Vancouver, ISO, and other styles
37

Khakurel, Jayden, Birgit Penzenstadler, Jari Porras, Antti Knutas, and Wenlu Zhang. "The Rise of Artificial Intelligence under the Lens of Sustainability." Technologies 6, no. 4 (November 3, 2018): 100. http://dx.doi.org/10.3390/technologies6040100.

Full text
Abstract:
Since the 1950s, artificial intelligence (AI) has been a recurring topic in research. However, this field has only recently gained significant momentum because of the advances in technology and algorithms, along with new AI techniques such as machine learning methods for structured data, modern deep learning, and natural language processing for unstructured data. Although companies are eager to join the fray of this new AI trend and take advantage of its potential benefits, it is unclear what implications AI will have on society now and in the long term. Using the five dimensions of sustainability to structure the analysis, we explore the impacts of AI on several domains. We find that there is a significant impact on all five dimensions, with positive and negative impacts, and that values, collaboration, shared responsibility, and ethics will play a vital role in any future sustainable development of AI in society. Our exploration provides a foundation for in-depth discussions and future research collaborations.
APA, Harvard, Vancouver, ISO, and other styles
38

Welch, James Patrick. "Drone Warfare in Transnational Armed Conflict and Counterterrorism." Journal of Intelligence, Conflict, and Warfare 3, no. 3 (March 15, 2021): 98–100. http://dx.doi.org/10.21810/jicw.v3i3.2765.

Full text
Abstract:
On November 23, 2020, Dr. James Patrick Welch presented on the topic of Drone Warfare in Transnational Armed Conflict and Counterterrorism at the 2020 CASIS West Coast Security Conference. The presentation was followed by a moderated question and answer period. Key points of discussion included: the ethics surrounding drone warfare, drone proliferation, accountability, and AI technology in drone warfare.
APA, Harvard, Vancouver, ISO, and other styles
39

Jia, Hepeng. "Yi Zeng: promoting good governance of artificial intelligence." National Science Review 7, no. 12 (October 24, 2020): 1954–56. http://dx.doi.org/10.1093/nsr/nwaa255.

Full text
Abstract:
Artificial intelligence (AI) has developed quickly in recent years, with applications expanding from automatic driving and smart manufacturing to personal healthcare and algorithm-based social media utilization. During the COVID-19 pandemic, AI has played an essential role in identifying suspected infections, ensuring epidemic surveillance and quickening drug screening. However, many questions accompanied AI’s development. How to protect citizens’ privacy and national information security? What measures can help AI learn and practice good human behaviors and avoid unethical use of AI technologies? To answer these questions, National Science Review (NSR) interviewed Yi Zeng, Professor and Deputy Director at the Research Center for Brain-inspired Artificial Intelligence at the Institute of Automation, Chinese Academy of Sciences (CAS). He is a board member for the National Governance Committee of Next-Generation Artificial Intelligence affiliated to the Ministry of Science and Technology of China (MOST). Zeng is also in AI ethics expert groups at the World Health Organization and the United Nations Educational, Scientific, and Cultural Organization (UNESCO). He jointly led the drafting of Beijing AI Principles (2019) and the National Governance Principles of New Generation AI of China (GPNGAI, 2019).
APA, Harvard, Vancouver, ISO, and other styles
40

Leite, Iolanda, and Anuj Karpatne. "Welcome to AI Matters 7(1)." AI Matters 7, no. 1 (March 2021): 4. http://dx.doi.org/10.1145/3465074.3465075.

Full text
Abstract:
Welcome to the first issue of this year's AI Matters Newsletter! We start with a report on upcoming SIGAI Events by Dilini Samarasinghe and Conference reports by Louise Dennis, our conference coordination officers. In our regular Education column, Duri Long, Jonathan Moon, and Brian Magerko introduce two "unplugged" activities (i.e., no technology needed) to learn about AI focussed on K-12 AI Education. We then bring you our regular Policy column, where Larry Medsker covers several topics on AI policy, including the role of Big Tech on AI Ethics and an interview with Dr. Eric Daimler who is the CEO of the MIT-spinout Conexus.com. Finally, we close with four article contributions. The first article discusses emerging applications of AI in analyzing source code and its implications to several industries. The second article discusses topics in the area of physical scene understanding that are necessary for machines to perceive, interact, and reason about the physical world. The third article presents novel practices and highlights from the Fourth Workshop on Mechanism Design for Social Good. The fourth article provides a report on the "Decoding AI" event that was conducted online by ViSER for high school students and adults sponsored by ACM SIGAI.
APA, Harvard, Vancouver, ISO, and other styles
41

Karim, Shakir, Nitirajsingh Sandu, and Ergun Gide. "A study to analyse the impact of artificial intelligence (AI) in transforming Australian healthcare." Global Journal of Information Technology: Emerging Technologies 10, no. 1 (April 30, 2020): 01–11. http://dx.doi.org/10.18844/gjit.v10i1.4533.

Full text
Abstract:
Artificial Intelligence (AI) is the biggest emerging movement and promise in today’s technology world. AI, in contrast to natural (human or animal) intelligence, is intelligence demonstrated by machines. Also called machine intelligence, it aims to mimic human intelligence by being able to obtain and apply knowledge and skills. It promises substantial involvement, vast changes, modernization, and integration with and within people’s ongoing lives, making the world more demanding and helping prompt, appropriate decisions to be taken in real time. This paper provides an analysis of the health industry and health care system in Australia relevant to the consequences created by AI. It primarily uses a secondary research analysis method to provide a wide-ranging investigation of the positive and negative consequences of health issues relevant to AI, the architects of those consequences, and those affected by them. The secondary resources include journal articles, reports, academic conference proceedings, media articles, company documents, blogs, and other appropriate information. The study found that AI provides useful insights into the Australian healthcare system: it is steadily reducing the system’s costs and improving patients’ overall outcomes. AI can not only improve the relationship between the public and health enterprises but also make life better by increasing efficiency and modernization. However, beyond the technology's maturity, there are still many challenges to overcome before Australian healthcare can fully leverage the potential of AI in health care, ethics being one of the most critical. Keywords: Artificial Intelligence (AI), Health Industry, Health Care System, Australian Healthcare
APA, Harvard, Vancouver, ISO, and other styles
42

Totschnig, Wolfhart. "Fully Autonomous AI." Science and Engineering Ethics 26, no. 5 (July 28, 2020): 2473–85. http://dx.doi.org/10.1007/s11948-020-00243-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

YANG SUNJIN. "Ethics of science and technology from the perspective of Yangming in the era of artificial intelligence(AI)." YANG-MING STUDIES ll, no. 45 (December 2016): 497–507. http://dx.doi.org/10.17088/tksyms.2016..45.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Carbone, Paola, and Giuseppe Rossi. "Digital Voodoo: Who Does What?" Pólemos 14, no. 1 (April 28, 2020): 91–129. http://dx.doi.org/10.1515/pol-2020-2007.

Full text
Abstract:
It is obvious that AI is marking its entry onto the global scene. Yet, since any AI is supposed to perform a task in such a way that the outcome would be indistinguishable from the outcome of a human agent working to achieve the same task, it is more and more difficult to recognize its presence in our daily life. Issues of transparency and ethics are crucial not only to be aware of the presence of an AI, but mostly to know how the algorithm is actually working and to verify the ‘quality’ of its dataset. It is important to realize that only the engineers know what an AI does, even if it enacts actions in real life. For common people, and even for regulators, it may turn out to be a digital voodoo that goes far beyond their understanding. Who does what? That is the question. The first part of the essay discusses the claims of “digital philosophy” (or “digital ontology”) in the light of legal analysis. The assumption that humankind and “intelligent” machines share the same digital/algorithmic ontology (as parts of an all-encompassing “digital universe”) could deprive the law of its ethical and emotional foundations. A sound regulation of both the circulation of digital information and AI requires adequate knowledge of technology (transparency) and the capacity to make ethical choices based on wide social consensus, in order to safeguard the fundamentally “human” nature of the law. The second part of the essay is an inquiry into AI as a machine-writing system, that is, into the capacity of an AI to produce literary texts. Three examples are discussed and a comparison to electronic literature is pointed out. What seems fundamental is the idea of a literary output fixed at the very moment of its generation (the process of writing), which is conceived by the reader as an aesthetic experience. More than creativity, it is the perception of creativity that rules such literary works. This literary production can make readers aware of the power of an underlying algorithm.
APA, Harvard, Vancouver, ISO, and other styles
45

Bragg, Danielle, Naomi Caselli, Julie A. Hochgesang, Matt Huenerfauth, Leah Katz-Hernandez, Oscar Koller, Raja Kushalnagar, Christian Vogler, and Richard E. Ladner. "The FATE Landscape of Sign Language AI Datasets." ACM Transactions on Accessible Computing 14, no. 2 (July 2021): 1–45. http://dx.doi.org/10.1145/3436996.

Full text
Abstract:
Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which is highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex and involve individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf culture identity, coupled with a history of oppression, makes usage by technologists particularly sensitive. This piece presents many of these issues that characterize working with sign language AI datasets, based on the authors’ experiences living, working, and studying in this space.
APA, Harvard, Vancouver, ISO, and other styles
46

Hatherley, Joshua James. "Limits of trust in medical AI." Journal of Medical Ethics 46, no. 7 (March 27, 2020): 478–81. http://dx.doi.org/10.1136/medethics-2019-105935.

Full text
Abstract:
Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
47

Dipla, Victoria. "AI and the Healthcare sector: Industry, legal and ethical issues." Bioethica 7, no. 1 (March 23, 2021): 34. http://dx.doi.org/10.12681/bioeth.26540.

Full text
Abstract:
In this modern era, AI systems, robotics, and all kinds of technological innovations have prevailed in almost every industry there is. Even though they provide several advantages and benefits, such novelties, due to their newly found capacities, pose a certain undoubted risk for contemporary societies, which are as yet unfamiliar with the full extent of the perils accompanying these kinds of innovations. This article examines one of the industries critically changed and influenced by AI technology, the healthcare industry, as it possesses the highest bioethical interest. The article is thus divided into four sections. The first is dedicated to novel advancements in the field of health care services and medicine, which include the introduction and/or full deployment of machine learning and robotics. Second, because these technologies are accompanied by legal concerns, especially in terms of privacy, a legal analysis of the most relevant and prominent concerns is attempted. The emphasis is on the European Union’s approach to AI-related technology. Both of its main bodies, the European Parliament and the European Commission, are mentioned for their procurement of documents related to novel technologies. In addition, after the analysis of the legal framework and the more binding legislative efforts, the article proceeds with a presentation of the soft law related to the AI technological field, as well as the ethics and guidelines developed to mitigate its risks and issues. Lastly, the analysis closes with conclusions based on the combined remarks and resolutions of the preceding sections.
APA, Harvard, Vancouver, ISO, and other styles
48

Campanioni, Chris. "An Era of AI-personation & Self(ie) Surveillance." Interações: Sociedade e as novas modernidades, no. 34 (October 2, 2018): 9–22. http://dx.doi.org/10.31211/interacoes.n34.2018.a1.

Full text
Abstract:
Any discussion of the social invisibilities engendered by the Internet necessarily demands further questioning as to how visibility, as an increasing cultural norm, has produced new inequalities in real life. This contribution combines autoethnographic research, social media analysis, and data analytics with theoretical frameworks such as phenomenology and psychology to globally investigate our current culture of AI-catfishing, social media metrics, and metrics manipulation. My paper raises questions about re-materializing digital divides and inequalities in the “offline world” through citing self-surveillance techniques and algorithmic biases to show how we are both at the whim of these AI-inflected prejudices and complicit in reproducing them, whether through government coercion or our own cultural norms and rules. I trace our relationship with music technology to outline a trajectory of sensory disconnect and co-produced community — a framework for understanding current cultural phenomena and the ethics of distributed data, privacy, and the rendering of our bodies as a new kind of transaction and currency. The rise of fake news is re-contextualized within the widespread rise of fake users: the various impersonations of self even and especially through AI.
APA, Harvard, Vancouver, ISO, and other styles
49

Farisco, Michele, Kathinka Evers, and Arleen Salles. "Towards Establishing Criteria for the Ethical Analysis of Artificial Intelligence." Science and Engineering Ethics 26, no. 5 (July 7, 2020): 2413–25. http://dx.doi.org/10.1007/s11948-020-00238-w.

Full text
Abstract:
Ethical reflection on Artificial Intelligence (AI) has become a priority. In this article, we propose a methodological model for a comprehensive ethical analysis of some uses of AI, notably as a replacement of human actors in specific activities. We emphasize the need for conceptual clarification of relevant key terms (e.g., intelligence) in order to undertake such reflection. Against that background, we distinguish two levels of ethical analysis, one practical and one theoretical. Focusing on the state of AI at present, we suggest that regardless of the presence of intelligence, the lack of morally relevant features calls for caution when considering the role of AI in some specific human activities.
APA, Harvard, Vancouver, ISO, and other styles
50

Ab Hamid, Nor ‘Adha, Azizah Mat Rashid, and Mohd Farok Mat Nor. "A SEGMENT STUDY OF MORAL CRISIS ON SOCIAL MEDIA AND ONLINE USING THE ARTIFICIAL INTELLIGENCE APPLICATION: A WAKE-UP CALL." International Journal of Law, Government and Communication 6, no. 22 (March 5, 2021): 36–44. http://dx.doi.org/10.35631/ijlgc.622003.

Full text
Abstract:
The development of science and technology is always moving ahead and seems limitless. Although human beings are the agents who started this development, they are eventually faced with a bitter situation that can sacrifice human morals, rights, and the interests of our future. Nowadays, Shariah criminal offenses need not occur in, or be witnessed at, a physical meeting with the perpetrator. As a result of technological developments, such behavior can occur and be witnessed publicly by larger groups. Although behavior that is not in accordance with Shariah law, and the moral crisis issues surrounding us, are rampant on social media, no enforcement is carried out against perpetrators who use the social media medium. According to Shariah principles, something that is wrong should be prevented, and this is the responsibility of every Muslim individual. Yet today, some Shariah criminal behavior, especially in relation to ethics, can occur easily through facilities driven by technological ingenuity. If the application of existing legal provisions is limited and faces obstacles to enforcement, that problem needs to be overcome, since the development of the law should keep pace with current developments. The study aims to identify segments and cases of the moral crisis on social media and online involving artificial intelligence (AI) applications, and to identify the needs for Shariah-based prevention. The study uses qualitative approaches, adopts library-based research, and applies a literature review approach through content analysis of documents. The findings show that the use of social media and AI technology has had an impact on various issues such as moral crisis, security, misuse, intrusion of personal data, and the construction of AI beyond human control. Thus, the involvement and cooperation of various parties are needed in regulating and addressing issues that arise as a result of the use of social media and AI technology in human life.
APA, Harvard, Vancouver, ISO, and other styles