A selection of scholarly literature on the topic "AI technology ethics"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "AI technology ethics".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in its metadata.
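The automatic formatting described above can be illustrated with a small sketch. This is a hypothetical Python illustration of style-dependent citation rendering; the function name, the metadata fields, and the simplified style rules are all assumptions, not the site's actual implementation:

```python
# Minimal sketch of automatic citation formatting, as described above.
# The metadata schema and the style rules are simplified assumptions,
# not the real logic behind the "Add to bibliography" button.

def format_citation(meta: dict, style: str) -> str:
    """Render one journal-article record in a simplified APA or MLA style."""
    authors = meta["authors"]  # list of (family name, given name) pairs
    if style == "APA":
        # APA: "Family, G." initials, "&" before the last author.
        names = [f"{family}, {given[0]}." for family, given in authors]
        joined = names[0] if len(names) == 1 else ", ".join(names[:-1]) + ", & " + names[-1]
        return (f"{joined} ({meta['year']}). {meta['title']}. "
                f"{meta['journal']}, {meta['volume']}({meta['issue']}), {meta['pages']}.")
    if style == "MLA":
        # MLA: first author inverted, later authors in natural order.
        family, given = authors[0]
        rest = "".join(f", and {g} {f}" for f, g in authors[1:])
        return (f"{family}, {given}{rest}. \"{meta['title']}.\" {meta['journal']}, "
                f"vol. {meta['volume']}, no. {meta['issue']}, {meta['year']}, pp. {meta['pages']}.")
    raise ValueError(f"unsupported style: {style}")
```

For the first journal article listed below, the APA branch would begin "Siau, K., & Wang, W. (2020). …", while the MLA branch renders the authors as "Siau, Keng, and Weiyu Wang."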

Journal articles on the topic "AI technology ethics"

1

Siau, Keng, and Weiyu Wang. "Artificial Intelligence (AI) Ethics." Journal of Database Management 31, no. 2 (April 2020): 74–87. http://dx.doi.org/10.4018/jdm.2020040105.

Abstract:
Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of "machine ethics" was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How can one adhere to the ethics of AI to build ethical AI?
2

Johnson, Sylvester A. "Technology Innovation and AI Ethics." Ethics of Artificial Intelligence, no. 299 (September 19, 2019): 14–27. http://dx.doi.org/10.29242/rli.299.2.

3

Khokhlov, A. L., and D. Yu Belousov. "Ethical aspects of using software with artificial intelligence technology." Kachestvennaya Klinicheskaya Praktika = Good Clinical Practice 20, no. 1 (April 24, 2021): 70–84. http://dx.doi.org/10.37489/2588-0519-2021-1-70-84.

Abstract:
Today, artificial intelligence (AI) technologies can offer solutions to many social problems, including those used to diagnose and treat diseases. However, for this it is necessary to provide an appropriate legal basis. The article presents a brief history of the development of AI, explains the conceptual apparatus, describes the legal basis for its development and implementation in Russian healthcare, the methodology for conducting clinical trials of AI systems, and gives their classification. Particular attention is paid to the ethical principles of clinical trials of AI systems. The ethical examination of projects of clinical trials of AI systems by the Ethics Committee is considered in detail.
4

Miao, Zeyi. "Investigation on human rights ethics in artificial intelligence researches with library literature analysis method." Electronic Library 37, no. 5 (October 7, 2019): 914–26. http://dx.doi.org/10.1108/el-04-2019-0089.

Abstract:
Purpose: The purpose of this paper was to identify whether artificial intelligence (AI) products can possess human rights, how to define their rights and obligations, and what ethical standards they should follow. In this study, the human rights ethical dilemma encountered in the application and development of AI technology is analyzed in detail in light of the existing research on AI ethics.
Design/methodology/approach: First, the development and application of AI technology, as well as the concept and characteristics of human rights ethics, are introduced. Second, the human rights ethics of AI technology are discussed in detail, including the human rights endowment of AI machines, the fault liability of AI machines, and the moral orientation of AI machines. Finally, approaches to human rights ethics are proposed to ensure that AI technology serves human beings; every link of its research, production, and application should be strictly managed and supervised.
Findings: The results show that this research can help with related problems encountered in AI practice. The intelligent library integrates human rights protection organically so that readers and users can experience a more personal service. It is a more efficient and convenient mode of library operation, based on digital, networked, and intelligent information science, which aims to use the greenest, digital means to support the reading and study of human rights protection literature.
Originality/value: The intelligent library is the future development mode of new libraries, enabling broad interconnection and sharing. It is people-oriented, supports intelligent management and service, and establishes the importance and concrete meaning of the principle of human rights protection. The development of science and technology brings not only convenience to people's social lives but also questions that demand reflection; people should reduce its potential harm so that AI technology continues to benefit humankind.
5

Héder, Mihály. "A criticism of AI ethics guidelines." Információs Társadalom 20, no. 4 (December 31, 2020): 57. http://dx.doi.org/10.22503/inftars.xx.2020.4.5.

Abstract:
This paper investigates the current wave of Artificial Intelligence Ethics Guidelines (AIGUs). The goal is not to provide a broad survey of the details of such efforts; instead, the reasons for the proliferation of such guidelines are investigated. Two main research questions are pursued. First, what is the justification for the proliferation of AIGUs, and what are the reasonable goals and limitations of such projects? Second, what are the specific concerns of AI that are so unique that general technology regulation cannot cover them? The paper reveals that the development of AI guidelines is part of a decades-long trend of an ever-increasing express need for stronger social control of technology, and that many of the concerns of the AIGUs are not specific to the technology itself but are rather about transparency and human oversight. Nevertheless, the positive potential of the situation is that the intense worldwide focus on AIGUs will yield such profound guidelines that the regulation of other technologies may want to follow suit.
6

Ouchchy, Leila, Allen Coin, and Veljko Dubljević. "AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media." AI & SOCIETY 35, no. 4 (March 29, 2020): 927–36. http://dx.doi.org/10.1007/s00146-020-00965-5.

Abstract:
Abstract As artificial intelligence (AI) technologies become increasingly prominent in our daily lives, media coverage of the ethical considerations of these technologies has followed suit. Since previous research has shown that media coverage can drive public discourse about novel technologies, studying how the ethical issues of AI are portrayed in the media may lead to greater insight into the potential ramifications of this public discourse, particularly with regard to development and regulation of AI. This paper expands upon previous research by systematically analyzing and categorizing the media portrayal of the ethical issues of AI to better understand how media coverage of these issues may shape public debate about AI. Our results suggest that the media has a fairly realistic and practical focus in its coverage of the ethics of AI, but that the coverage is still shallow. A multifaceted approach to handling the social, ethical and policy issues of AI technology is needed, including increasing the accessibility of correct information to the public in the form of fact sheets and ethical value statements on trusted webpages (e.g., government agencies), collaboration and inclusion of ethics and AI experts in both research and public debate, and consistent government policies or regulatory frameworks for AI technology.
7

Etzioni, Amitai, and Oren Etzioni. "The ethics of robotic caregivers." Interaction Studies 18, no. 2 (December 8, 2017): 174–90. http://dx.doi.org/10.1075/is.18.2.02etz.

Abstract:
As Artificial Intelligence technology seems poised for a major take-off and changing societal dynamics are creating high demand for caregivers for elders, children, and the infirm, robotic caregivers may well be used much more often. This article examines the ethical concerns raised by the use of AI caregivers and concludes that many of these concerns are avoided when AI caregivers operate as partners rather than substitutes. Furthermore, most of the remaining concerns are minor and are faced by human caregivers as well. Nonetheless, because AI caregivers are learning systems, an AI caregiver could stray from its initial guidelines. Therefore, subjecting AI caregivers to an AI-based oversight system is proposed to ensure that their actions remain both legal and ethical.
8

Jacobowitz, Jan L., and Justin Ortiz. "Happy Birthday Siri! Dialing in Legal Ethics for Artificial Intelligence, Smartphones, and Real Time Lawyers." Symposium Edition - Artificial Intelligence and the Legal Profession 4, no. 5 (April 2018): 407–42. http://dx.doi.org/10.37419/jpl.v4.i5.1.

Abstract:
This Article explores the history of AI and the advantages and potential dangers of using AI to assist with legal research, administrative functions, contract drafting, case evaluation, and litigation strategy. This Article also provides an overview of security vulnerabilities attorneys should be aware of and the precautions that they should employ when using their smartphones (in both their personal and professional lives) in order to adequately protect confidential information. Finally, this Article concludes that lawyers who fail to explore the ethical use of AI in their practices may find themselves at a professional disadvantage and in dire ethical straits. The first part of this Article defines the brave new world of AI and how it both directly and indirectly impacts the practice of law. The second part of this Article explores legal ethics considerations when selecting and using AI vendors and virtual assistants. The third part outlines technology risks and potential solutions for lawyers who seek to embrace smartphone technology while complying with legal ethics obligations. The Article concludes with an optimistic eye toward the future of the legal profession.
9

Wasilow, Sherry, and Joelle B. Thorpe. "Artificial Intelligence, Robotics, Ethics, and the Military: A Canadian Perspective." AI Magazine 40, no. 1 (March 28, 2019): 37–48. http://dx.doi.org/10.1609/aimag.v40i1.2848.

Abstract:
Defense and security organizations depend upon science and technology to meet operational needs, predict and counter threats, and meet increasingly complex demands of modern warfare. Artificial intelligence and robotics could provide solutions to a wide range of military gaps and deficiencies. At the same time, the unique and rapidly evolving nature of AI and robotics challenges existing policies, regulations, and values, and introduces complex ethical issues that might impede their development, evaluation, and use by the Canadian Armed Forces (CAF). Early consideration of potential ethical issues raised by military use of emerging AI and robotics technologies in development is critical to their effective implementation. This article presents an ethics assessment framework for emerging AI and robotics technologies. It is designed to help technology developers, policymakers, decision makers, and other stakeholders identify and broadly consider potential ethical issues that might arise with the military use and integration of emerging AI and robotics technologies of interest. We also provide a contextual environment for our framework, as well as an example of how our framework can be applied to a specific technology. Finally, we briefly identify and address several pervasive issues that arose during our research.
10

Joamets, Kristi, and Archil Chochia. "Access to Artificial Intelligence for Persons with Disabilities: Legal and Ethical Questions Concerning the Application of Trustworthy AI." Acta Baltica Historiae et Philosophiae Scientiarum 9, no. 1 (May 27, 2021): 51–66. http://dx.doi.org/10.11590/abhps.2021.1.04.

Abstract:
Digitalisation and emerging technologies affect our lives and are increasingly present in a growing number of fields. The ethical implications of the digitalisation process have therefore long been discussed by scholars. The rapid development of artificial intelligence (AI) has taken the legal and ethical discussion to another level. There is no doubt that AI can have a positive impact on society. The focus here, however, is on its more negative impact. This article will specifically consider how law and ethics, in their interaction, can be applied in a situation where a disabled person needs some kind of assistive technology to participate in society as an equal member. This article intends to investigate whether the EU Guidelines for Trustworthy AI, as a milestone of ethics concerning technology, have the power to change the current practice of how social and economic rights are applied. The main focus of the article is the ethical requirement 'Human agency and oversight' and, more specifically, fundamental rights.

Dissertations and theses on the topic "AI technology ethics"

1

Schildt, Alexandra, and Jenny Luo. "Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279659.

Abstract:
AI has quickly grown from being a vast concept to an emerging technology that many companies are looking to integrate into their businesses, generally considered an ongoing “revolution” transforming science and society altogether. Researchers and organizations agree that AI and the recent rapid developments in machine learning carry huge potential benefits. At the same time, there is an increasing worry that ethical challenges are not being addressed in the design and implementation of AI systems. As a result, AI has sparked a debate about what principles and values should guide its development and use. However, there is a lack of consensus about what values and principles should guide the development, as well as what practical tools should be used to translate such principles into practice. Although researchers, organizations and authorities have proposed tools and strategies for working with ethical AI within organizations, there is a lack of a holistic perspective, tying together the tools and strategies proposed in ethical, technical and organizational discourses. The thesis aims to contribute with knowledge to bridge this gap by addressing the following purpose: to explore and present the different tools and methods companies and organizations should have in order to build machine learning applications in a fair and transparent manner. The study is of qualitative nature and data collection was conducted through a literature review and interviews with subject matter experts. In our findings, we present a number of tools and methods to increase fairness and transparency. Our findings also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as in different stages of the machine learning development process. 
Tools used outside the development process, such as ethical guidelines, appointed roles, workshops, and trainings, have positive effects on alignment, engagement, and knowledge while providing valuable opportunities for improvement. Furthermore, the findings suggest that it is crucial to translate high-level values into low-level requirements that are measurable and can be evaluated against. We propose a number of pre-model, in-model, and post-model techniques that companies can and should implement to increase fairness and transparency in their machine learning systems.
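The pre-model/in-model/post-model grouping mentioned in this abstract can be made concrete with a small post-model check. The sketch below computes the demographic parity difference, one common fairness metric; it is an illustrative assumption on my part, not a technique taken from the thesis itself:

```python
# Illustrative post-model fairness check (not from the thesis):
# compare positive-prediction rates across demographic groups.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates between groups.

    y_pred: 0/1 model predictions; groups: group label per prediction.
    A value near 0 suggests predictions are distributed similarly across groups.
    """
    rates = []
    for g in sorted(set(groups)):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Example: group "A" gets positive predictions at a rate of 0.50, group "B" at 0.25.
gap = demographic_parity_difference([1, 1, 0, 0, 1, 0, 0, 0],
                                    ["A", "A", "A", "A", "B", "B", "B", "B"])
```

A check like this evaluates the model's outputs only, which is what makes it a post-model technique; pre-model work would rebalance the training data, and in-model work would constrain the learning objective itself.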
2

Stenberg, Louise, and Svante Nilsson. "Factors influencing readiness of adopting AI: A qualitative study of how the TOE framework applies to AI adoption in governmental authorities." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279583.

Abstract:
Artificial intelligence is attracting growing interest and is creating value for many organizations worldwide. Because of this potential, governmental authorities in Sweden that work with large volumes of text documents are interested in natural language processing models, a subfield of AI, and have started to incorporate them into their organizations. This study explores and discusses the factors that influence governmental authorities when adopting AI and highlights ethical aspects that are important for the adoption process. This is explored through a literature review, which led to a frame of reference built on the Technology-Organization-Environment (TOE) framework, which was then tested through interviews with project leaders and AI architects at governmental authorities who are working with language models. The results show that the TOE framework is suitable for analysing AI adoption by governmental authorities. The factors found to be influential are Relative Advantage, Compatibility and Complexity, Management support, Staff capacity, Regulatory environment, and Cooperation. Furthermore, the findings suggest that AI Ethics and Data access are influential in all three contexts of technology, organization, and environment. The findings confirm results from previous research on the adoption of new technology and contribute to the literature by exploring the adoption process of AI in governmental authorities, which had not previously been widely explored.
3

Almer, Jasmine, and Julia Ivert. "Artificiell Intelligens framtidsutsikter inom sjukvården: En studie om studerande sjuksköterskors attityder gällande Artificiell Intelligens inom sjukvården." Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413725.

Abstract:
Artificial Intelligence is an area that has developed radically in recent years and continues to evolve across several industries. This thesis presents a qualitative case study of student nurses' attitudes regarding Artificial Intelligence in Swedish healthcare and its future use. Through interviews with student nurses at Uppsala University, the empirical material is analyzed using the Technology Acceptance Model (TAM) to produce a result regarding the future use of Artificial Intelligence in healthcare. The analysis identified two distinct areas of AI use, decision-making AI and non-decision-making AI, between which the participants' attitudes differed. Attitudes towards decision-making AI were rather negative, partly because of the lack of responsibility and accountability and partly because of the reduced patient contact it would entail. Attitudes towards non-decision-making AI were, in contrast, positive, partly because of the efficiency of using AI technology as a tool and the improvements it could bring to the profession, for example by creating time for more care and attention, which the nursing students regard as the main focus of health and social care. Finally, the results of the analysis are discussed in terms of ethics and morals, the profession itself, and further research.
4

Haviland, Hannah. "'The Machine Made Me Do It!': An Exploration of Ascribing Agency and Responsibility to Decision Support Systems." Thesis, Linköping University, Centre for Applied Ethics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2922.

Abstract:

Are agency and responsibility solely ascribable to humans? The advent of artificial intelligence (AI), including the development of so-called “affective computing,” appears to be chipping away at the traditional building blocks of moral agency and responsibility. Spurred by the realization that fully autonomous, self-aware, even rational and emotionally-intelligent computer systems may emerge in the future, professionals in engineering and computer science have historically been the most vocal to warn of the ways in which such systems may alter our understanding of computer ethics. Despite the increasing attention of many philosophers and ethicists to the development of AI, there continues to exist a fair amount of conceptual muddiness on the conditions for assigning agency and responsibility to such systems, from both an ethical and a legal perspective. Moral and legal philosophies may overlap to a high degree, but are neither interchangeable nor identical. This paper attempts to clarify the actual and hypothetical ethical and legal situations governing a very particular type of advanced, or “intelligent,” computer system: medical decision support systems (MDSS) that feature AI in their system design. While it is well-recognized that MDSS can be categorized by type and function, further categorization of their mediating effects on users and patients is needed in order to even begin ascribing some level of moral or legal responsibility. I conclude that various doctrines of Anglo legal systems appear to allow for the possibility of assigning specific types of agency – and thus specific types of legal responsibility – to some types of MDSS. Strong arguments for assigning moral agency and responsibility are still lacking, however.

5

Victorin, Karin. "AI as Gatekeepers to the Job Market: A Critical Reading of Performance, Bias, and Coded Gaze in Recruitment Chatbots." Thesis, Linköpings universitet, Tema Genus, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177257.

Abstract:
The topic of this thesis is AI recruitment chatbots, digital discrimination, and data feminism (D'Ignazio and Klein 2020), where I aim to critically analyze issues of bias in these types of human-machine interaction technologies. Coming from a professional background in theatre, performance art, and drama, I am curious to analyze how using AI and social robots as hiring tools entails a new type of "stage" (actor's space), with a special emphasis on social acting. Humans are now required to adjust their performance and facial expressions in the search for, and approval of, a new job. I will use my "theatrical glasses" with an intersectional lens and, through a methodology of cultural analysis, reflect on various examples of conversational AI used in recruitment processes. The silver bullet syndrome is a term that points to a tendency to believe in a miraculous new technological tool that will "magically" solve human-related problems in a company or an organization. The captivating marketing message of the Swedish recruitment conversational AI tool Tengai Unbiased is the promise of a scientifically proven objective hiring tool, to solve the diversity problem for company management. But is it really free from bias? According to Karen Barad, agency is not an attribute but the ongoing reconfiguration of the world influenced by what she terms intra-actions, a mutual constitution of entanglement between human and non-human agencies (2003:818). However, tech developers often disregard their entanglement in human-to-machine interactions, which unfortunately generates unconscious bias. The thesis raises ethical questions about how algorithmic measurement of social competence risks holding unconscious biases, benefiting those already privileged or those acting within a normative spectrum.
6

Radosavljevic, Bojan, and Axel Kimblad. "Etik och säkerhet när AI möter IoT." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20613.

Abstract:
In today's society, technological development is moving fast. Artificial intelligence and the Internet of Things are two technologies whose popularity has increased in recent years. In integration, these technologies have proven able to contribute major business benefits, including increased precision of analyses, better customer value, and more efficient handling of downtime. New technology also presents challenges: as the technologies grow, issues arise regarding safety and ethics and how these should be managed. The aim of this study was to find out how experts value ethical issues when artificial intelligence is used in combination with Internet of Things devices. We focused on the following research question to reach our goal: How are ethical issues valued when artificial intelligence is used in combination with the Internet of Things? Our results show that both researchers and industry value the ethical aspects highly. The study also shows that they considered the technologies a possible solution to many societal problems, but that ethics should be discussed on an ongoing basis.
7

Karahanli, Naz Gizem, and Johannes Touma. "Digitalization of the customer experience in banking: Use of AI and SSTs in complex/sensitive tasks: pre-collection". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300899.

Abstract:
The digital revolution is changing the banking industry and how banks create value and deliver services to their customers. Customer experience becomes the main pillar of digitally transformed banks through self-service technologies (SSTs) and the use of artificial intelligence (AI); the research focus of this study is to explore the impact of these modern technologies when dealing with sensitive information and emotional encounters in the banking sector. A case study method was used, with an in-depth investigation consisting of both internal and external interviews for Valhalla Bank in Sweden. The external interview results presented the debtor's perspective by laying out the main challenges faced during the repayment process. The study concluded by answering the main research questions and suggesting practical implications for financial institutions. Banks should proactively seek both explicit and latent needs of different customer segments; any customer interaction data has the potential to become a source for optimizing call scheduling, script customization, or customer experience evaluation. Customers expect the flexibility to choose between human interaction and self-service technologies. Sensitive topics can be handled with digital tools when these provide advanced functionality mature enough to establish trust and security. Lastly, even though the technology is perceived as cold and lacking empathy, customers are ready to experiment, as they are neither comfortable with nor satisfied by the current interactions. Regardless of the state of a financial institution's digital journey, customers should be well informed about the technologies, while banks prioritize ethical controls to provide transparent relationships in which any type of customer can feel valued.
8

Abu-Shaqra, Baha. "Technoethics and Sensemaking: Risk Assessment and Knowledge Management of Ethical Hacking in a Sociotechnical Society". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40393.

Abstract:
Cyber attacks by domestic and foreign threat actors are increasing in frequency and sophistication. Cyber adversaries exploit a cybersecurity skill/knowledge gap and an open society, undermining the information security and privacy of citizens and businesses and eroding trust in governments, thus threatening social and political stability. The use of open digital hacking technologies in ethical hacking, in higher education and within broader society, raises ethical, technical, social, and political challenges for liberal democracies. Programs teaching ethical hacking in higher education are steadily growing, but there is a concern that teaching students hacking skills increases crime risk to society by drawing students toward criminal acts. A cybersecurity skill gap undermines the security and viability of business and government institutions. The thesis presents an examination of opportunities and risks involved in using AI-powered intelligence gathering/surveillance technologies in ethical hacking teaching practices in Canada. Taking a qualitative exploratory case study approach, technoethical inquiry theory (Bunge-Luppicini) and Weick's sensemaking model were applied as a sociotechnical theory (STEI-KW) to explore ethical hacking teaching practices in two Canadian universities. In-depth interviews with ethical hacking university experts, industry practitioners, and policy experts, and a document review were conducted. Findings pointed to a skill/knowledge gap in the ethical hacking literature regarding the meanings, ethics, values, skills/knowledge, roles and responsibilities, and practices of ethical hacking and ethical hackers, which underlies an identity and legitimacy crisis for professional ethical hacking practitioners, and to a teaching-versus-practice cybersecurity skill gap in ethical hacking curricula.
Two main S&T innovation risk mitigation initiatives were explored: An OSINT Analyst cybersecurity role and associated body of knowledge foundation framework as an interdisciplinary research area, and a networked centre of excellence of ethical hacking communities of practice as a knowledge management and governance/policy innovation approach focusing on the systematization and standardization of an ethical hacking body of knowledge.
9

Liliequist, Erik. "Artificial Intelligence - Are there any social obstacles?: An empirical study of social obstacles". Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229506.

Abstract:
Artificial intelligence is currently one of the most talked-about topics in technical development. The possibilities are enormous, and it might revolutionize how we live our lives. There is talk of robots and AI removing the need for human workers. At the same time, there are also those who view this as deeply troubling, either from an individual perspective, asking what we should do once we no longer need to work, or from an existential perspective, raising issues of what responsibilities we have as humans and what it means to be human. This study does not aim to answer these grand questions, but rather shifts the focus to the near future of three to five years, while retaining the focus on the social aspects of the development of AI. What are the perceived greatest social issues and obstacles for a continued implementation of AI solutions in society? To answer these questions, interviews were conducted with representatives of Swedish society, ranging from politicians, unions, and employers' organizations to philosophers and AI researchers. Further, a literature study of similar studies was made, comparing and reflecting their findings against the views of the interviewees. In short, the interviewees have a very positive view of AI in the near future, believing that a continued implementation would go relatively smoothly. Yet they pointed to a few key obstacles that might need to be addressed. Mainly, there is a risk of increased polarization of wages and power due to AI, although they stressed that this depends on how we use the technology rather than on the technology itself. Another obstacle was individual uncertainty about the development of AI, causing fear of what might happen. Further, several different ethical issues were raised; there was agreement that these need to be addressed as soon as possible, but the interviewees did not view this as an obstacle.
10

Sütfeld, Leon René. "Advances in Vehicle Automation: Ethics and Technology". Doctoral thesis, 2021. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-202109145339.

Abstract:
With the arrival of automated vehicles (AVs) on our streets virtually around the corner, this thesis explores advances in automated driving technology with a focus on ethical decision making in dilemmatic traffic situations. In a total of five publications, we take a multi-faceted approach to analyse and address the core challenges related to automated ethical decision making in AVs. In publications one through three, we conduct a series of immersive virtual reality studies to analyze human behavior in traffic dilemmas, explore mathematical approaches to model the decision making process, investigate how the assessment methodology can affect moral judgment, and discuss the implications of these studies for algorithmic decision making in the real world. In publication four, we provide a comprehensive summary of the status quo of AV technology and legislation with regard to automated ethical decision making. Here, we discuss when and why ethical decision making systems become necessary in AVs, review existing guidelines for the behavior of AVs in dilemma situations, and compile a set of ten demands and open questions that need to be addressed in the pursuit of a framework for ethical decision making in AVs. Finally, the basis for automated ethical decision making in AVs will be provided by accurate assessments of the immediate environment of the car. The primary technology used to provide the required processing of camera and LiDAR images in AVs is machine learning, in particular deep learning. In publication five, we propose a form of adaptive activation functions, addressing a central element of deep neural networks, which could, for instance, lead to increased detection rates of relevant objects and thus help provide a more accurate assessment of the AV's environment. Overall, this thesis provides a structured and comprehensive overview of the state of the art in ethical decision making for AVs. It includes important implications for the design of decision making algorithms in practice, and concisely outlines the central remaining challenges on the road to a safe, fair and successful introduction of fully automated vehicles into the market.

Books on the topic "AI technology ethics"

1

Dubber, Markus D., Frank Pasquale, and Sunit Das, eds. The Oxford Handbook of Ethics of AI. Oxford University Press, 2020. http://dx.doi.org/10.1093/oxfordhb/9780190067397.001.0001.

Abstract:
This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge—far from completely understood—that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development in AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of "AI Ethics" as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.
2

Liao, S. Matthew, ed. Ethics of Artificial Intelligence. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190905033.001.0001.

Abstract:
Featuring seventeen original essays on the ethics of artificial intelligence (AI) by today’s most prominent AI scientists and academic philosophers, this volume represents state-of-the-art thinking in this fast-growing field. It highlights central themes in AI and morality such as how to build ethics into AI, how to address mass unemployment caused by automation, how to avoid designing AI systems that perpetuate existing biases, and how to determine whether an AI is conscious. As AI technologies progress, questions about the ethics of AI, in both the near future and the long term, become more pressing than ever. Should a self-driving car prioritize the lives of the passengers over those of pedestrians? Should we as a society develop autonomous weapon systems capable of identifying and attacking a target without human intervention? What happens when AIs become smarter and more capable than us? Could they have greater than human-level moral status? Can we prevent superintelligent AIs from harming us or causing our extinction? At a critical time in this fast-moving debate, thirty leading academics and researchers at the forefront of AI technology development have come together to explore these existential questions.
3

Lin, Patrick, Keith Abney, and Ryan Jenkins, eds. Robot Ethics 2.0. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190652951.001.0001.

Abstract:
As a game-changing technology, robotics naturally will create ripple effects through society. Some of them may become tsunamis. So it’s no surprise that “robot ethics”—the study of these effects on ethics, law, and policy—has caught the attention of governments, industry, and the broader society, especially in the past several years. Since our first book on the subject in 2012, a groundswell of concern has emerged, from the Campaign to Stop Killer Robots to the Campaign Against Sex Robots. Among other bizarre events, a robot car has killed its driver, and a kamikaze police robot bomb has killed a sniper. Given these new and evolving worries, we now enter the second generation of the debates—robot ethics 2.0. This edited volume is a one-stop authoritative resource for the latest research in the field, which is often scattered across academic journals, books, media articles, reports, and other channels. Without presuming much familiarity with either robotics or ethics, this book helps to make the discussion more accessible to policymakers and the broader public, as well as academic audiences. Besides featuring new use-cases for robots and their challenges—not just robot cars, but also space robots, AI, and the internet of things (as massively distributed robots)—we also feature one of the most diverse group of researchers on the subject for truly global perspectives.
4

Vallor, Shannon, and George A. Bekey. Artificial Intelligence and the Ethics of Self-Learning Robots. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190652951.003.0022.

Abstract:
The convergence of robotics technology with the science of artificial intelligence is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors. Recent advances in machine learning techniques have produced artificial agents that can acquire highly complex skills formerly thought to be the exclusive province of human intelligence. These developments raise a host of new ethical concerns about the responsible design, manufacture, and use of robots enabled with artificial intelligence—particularly those equipped with self-learning capacities. While the potential benefits of self-learning robots are immense, their potential dangers are equally serious. While some warn of a future where AI escapes the control of its human creators or even turns against us, this chapter focuses on other, far less cinematic risks of AI that are much nearer to hand, requiring immediate study and action by technologists, lawmakers, and other stakeholders.
5

Cave, Stephen, Kanta Dihal, and Sarah Dillon, eds. AI Narratives. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198846666.001.0001.

Abstract:
This book is the first to examine the history of imaginative thinking about intelligent machines. As real artificial intelligence (AI) begins to touch on all aspects of our lives, this long narrative history shapes how the technology is developed, deployed, and regulated. It is therefore a crucial social and ethical issue. Part I of this book provides a historical overview from ancient Greece to the start of modernity. These chapters explore the revealing prehistory of key concerns of contemporary AI discourse, from the nature of mind and creativity to issues of power and rights, from the tension between fascination and ambivalence to investigations into artificial voices and technophobia. Part II focuses on the twentieth and twenty-first centuries in which a greater density of narratives emerged alongside rapid developments in AI technology. These chapters reveal not only how AI narratives have consistently been entangled with the emergence of real robotics and AI, but also how they offer a rich source of insight into how we might live with these revolutionary machines. Through their close textual engagements, these chapters explore the relationship between imaginative narratives and contemporary debates about AI’s social, ethical, and philosophical consequences, including questions of dehumanization, automation, anthropomorphization, cybernetics, cyberpunk, immortality, slavery, and governance. The contributions, from leading humanities and social science scholars, show that narratives about AI offer a crucial epistemic site for exploring contemporary debates about these powerful new technologies.
6

Baecker, Ronald M. Computers and Society. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198827085.001.0001.

Abstract:
The last century has seen enormous leaps in the development of digital technologies, and most aspects of modern life have changed significantly with their widespread availability and use. Technology at various scales - supercomputers, corporate networks, desktop and laptop computers, the internet, tablets, mobile phones, and processors that are hidden in everyday devices and are so small you can barely see them with the naked eye - all pervade our world in a major way. Computers and Society: Modern Perspectives is a wide-ranging and comprehensive textbook that critically assesses the global technical achievements in digital technologies and how they are applied in media; education and learning; medicine and health; free speech, democracy, and government; and war and peace. Ronald M. Baecker reviews critical ethical issues raised by computers, such as digital inclusion, security, safety, privacy, automation, and work, and discusses social, political, and ethical controversies and choices now faced by society. Particular attention is paid to new and exciting developments in artificial intelligence and machine learning, and the issues that have arisen from our complex relationship with AI.
7

DiGiovanna, James. Artificial Identity. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190652951.003.0020.

Abstract:
Enhancement and AI create moral dilemmas not envisaged in standard ethical theories. Some of this stems from the increased malleability of personal identity that this technology affords: an artificial being can instantly alter its memory, preferences, and moral character. If a self can, at will, jettison essential identity-giving characteristics, how are we to rely upon, befriend, or judge it? Moral problems will stem from the fact that such beings are para-persons: they meet all the standard requirements of personhood (self-awareness, agency, intentional states, second-order desires, etc.) but have an additional ability—the capacity for instant change—that disqualifies them from ordinary personal identity. In order to rescue some responsibility assignments for para-persons, a fine-grained analysis of responsibility-bearing parts of selves and the persistence conditions of these parts is proposed and recommended also for standard persons who undergo extreme change.
8

Jotterand, Fabrice, Marcello Ienca, Tenzin Wangmo, and Bernice Elger, eds. Intelligent Assistive Technologies for Dementia. Oxford University Press, 2019. http://dx.doi.org/10.1093/med/9780190459802.001.0001.

Abstract:
The development and implementation of intelligent assistive technologies (IATs) to compensate for the specific physical and cognitive deficits of older adults with dementia have been recognized by many as one of the most promising approaches to addressing this emerging financial and caregiving burden. In the past 15 years, advancements in artificial intelligence (AI), pervasive and ubiquitous computing (PUC), and other advanced trends in software and hardware technology have led to the development and design of a wide range of IATs to help older people compensate for the physical and sensory deficits that may accompany dementia and age-related cognitive decline. These technologies are designed to support impaired older adults in the completion of activities of daily living, assist them in the prevention or management of risk, and/or maintain their recreational and social environment. The widespread implementation and use of assistive technologies is a very rapid process, which is reshaping dementia care and producing constantly changing strategies. This volume aims at providing an up-to-date overview of the current state of the art of assistive technologies for dementia care and an examination of their implications at the medical level, including psychological and clinical issues and their ethical and regulatory challenges. The overall goal of this book is to raise societal awareness on the use of IATs for dementia care and take a first step into developing an international regulatory and policy framework.

Book chapters on the topic "AI technology ethics"

1

Tzimas, Themistoklis. "The Ethics of AI". In Law, Governance and Technology Series, 69–99. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78585-7_4.
2

Salo-Pöntinen, Henrikki. "AI Ethics - Critical Reflections on Embedding Ethical Frameworks in AI Technology". In Culture and Computing. Design Thinking and Cultural Computing, 311–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77431-8_20.
3

Metz, Thaddeus. "African Reasons Why AI Should Not Maximize Utility". In African Values, Ethics, and Technology, 55–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70550-3_4.
4

Mamgain, Vaishali. "The Final Alert on Ethics in AI Based Technology". In AI and Robotics in Disaster Studies, 259–63. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4291-6_18.
5

Schebesch, Klaus Bruno. "The Interdependence of AI and Sustainability: Can AI Show a Path Toward Sustainability?" In Challenges and Opportunities to Develop Organizations Through Creativity, Technology and Ethics, 383–400. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43449-6_23.
6

Johansen, Mikkel Willum. "Science Fiction at the Far Side of Technology: Vernor Vinge's Singularity Thesis Versus the Limits of AI-Research". In Science Fiction, Ethics and the Human Condition, 21–40. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56577-4_3.
7

Calvo, Rafael A., Dorian Peters, Karina Vold, and Richard M. Ryan. "Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry". In Philosophical Studies Series, 31–54. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50585-1_2.

Abstract:
Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model ("METUX") that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: (1) There are autonomy-related consequences to algorithms representing the interests of third parties, and they are not impartial and rational extensions of the self, as is often perceived; (2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and (3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.
8

Nastis, Stefanos. "Legal constraints on technologies". In Manuali – Scienze Tecnologiche, 36. Florence: Firenze University Press, 2020. http://dx.doi.org/10.36253/978-88-5518-044-3.36.

Abstract:
The legal constraints of two important technologies for sustainable precision agriculture are presented: unmanned aircraft and artificial intelligence. Unmanned aircraft, or drones, are a rapidly developing technology. By 2035, it is estimated that in the EU, drones will create over 100,000 new jobs and produce more than 10 billion euros per year in revenue. The current situation regarding drone operation is detailed, along with the recommendations of the European Aviation and Space Agency (EASA). Furthermore, the procedure for obtaining a commercial drone permit is briefly described and the situations where such a permit may be required are presented. Finally, the course concludes with the latest EU regulations on ethical use of Artificial Intelligence, presenting the ethics guidelines of the EU for trustworthy AI.
9

Larsson, Stefan. "AI in the EU: Ethical Guidelines as a Governance Tool". In The European Union and the Technology Shift, 85–111. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63672-2_4.
10

Nehme, Esther, Hanine Salloum, Jacques Bou Abdo, and Ross Taylor. "AI, IoT, and Blockchain: Business Models, Ethical Issues, and Legal Perspectives". In Internet of Things, Artificial Intelligence and Blockchain Technology, 67–88. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74150-1_4.

Conference papers on the topic "AI technology ethics"

1

Majot, Andrew M., and Roman V. Yampolskiy. "AI safety engineering through introduction of self-reference into felicific calculus via artificial pain and pleasure". In 2014 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS). IEEE, 2014. http://dx.doi.org/10.1109/ethics.2014.6893398.
2

Anton, Eduard, Kevin Kus und Frank Teuteberg. „Is Ethics Really Such a Big Deal? The Influence of Perceived Usefulness of AI-based Surveillance Technology on Ethical Decision-Making in Scenarios of Public Surveillance“. In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2021. http://dx.doi.org/10.24251/hicss.2021.261.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Libin, Alexander V. „Integrated disciplines and future competencies: A blueprint for ethically aligned curriculum for IT, CS, ITC & beyond“. In Sixth International Conference on Higher Education Advances. Valencia: Universitat Politècnica de València, 2020. http://dx.doi.org/10.4995/head20.2020.11241.

Full text of the source
Abstract:
Autonomous and intelligent technical systems are specifically designed to reduce the need for human intervention in our daily lives. In so doing, these new computer-based systems are also raising concerns about their impact on individuals and society. Because of their innovative nature, their full benefit will be realized only if the technology is aligned with society's defined values, guided by ethical principles. Through the proposed ethically aligned curriculum (ETHIKA) for computer science (CS) and information technology (IT) specialties, we therefore intend to establish frameworks to guide and inform dialogue and debate around non-technical implications, in particular ethical dilemmas. Here we understand "ethical" to go beyond universal moral constructs, such as trust, harm, good, or bad, and to include ethical design of AI-based technologies, socially oriented computer science, and the ethical risks of digital society. As the digital economy prospers, more CS/IT professionals realize the power of education-driven intellectual capacity (InCED). It is hypothesized that InCED has a direct impact on students' learning competencies, preparing them to manage future professional and personal ethical challenges successfully. ETHIKA elucidates, through both methodological and experimental inquiries, the impact of global digitalization and its related ethical risks on learning and professional competencies, both in the professional CS/IT community and among university students.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Mujtaba, Dena F., and Nihar R. Mahapatra. „Ethical Considerations in AI-Based Recruitment“. In 2019 IEEE International Symposium on Technology and Society (ISTAS). IEEE, 2019. http://dx.doi.org/10.1109/istas48451.2019.8937920.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Cardenas, Soraya, and Serafin F. Vallejo-Cardenas. „Continuing the Conversation on How Structural Racial and Ethnic Inequalities Affect AI Biases“. In 2019 IEEE International Symposium on Technology and Society (ISTAS). IEEE, 2019. http://dx.doi.org/10.1109/istas48451.2019.8937853.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Poon Chong, Peter, and Terrence Lalla. „A REVIEW OF BIAS IN DECISION-MAKING MODELS“. In International Conference on Emerging Trends in Engineering & Technology (IConETech-2020). Faculty of Engineering, The University of the West Indies, St. Augustine, 2020. http://dx.doi.org/10.47412/aata9467.

Full text of the source
Abstract:
A decision-making model solution is a dependent variable derived from independent variables, parameters, and forcing functions. Independent variables collected in linguistic form require intuition, which can introduce bias. A collection of qualitative research papers on bias in models was reviewed to identify the causes of bias. Decision-making in the manufacturing, finance, law, and management industries requires solutions drawn from a complex assortment of data. The popularity of combining decision-making with artificial intelligence (AI) in intelligent systems is a cause for concern, as it can predispose a model away from a true solution. A true solution avoids partiality and reproduces results from a natural phenomenon without favoritism or discrimination. This paper appraised the development of the decision-making environment to identify the path and effect of bias on the variables used in models. The literature reviewed was associated with the design of a decision-making criterion rationalizing the application of variables. The influences on variables were examined with respect to the available resources, environment, and people; this list was further extended to consider the constraints of resources, customers, networks, and regulations fed into the structure. Bias was found to arise from the demands of rational decision-making, cognitive misperceptions, and psychological principles. The study of variables also revealed the opportunity for conscious bias through unethical actions during the development of a decision-making environment. In principle, bias is best reduced through continuous model monitoring and fair adjustments; ignoring these implications increases the chance of a biased decision-making model. Bias also influences the decision result and may be mitigated by an ethical and fair quality review. The paper raises awareness of bias in decision-making and guides practitioners in identifying and avoiding or reducing bias effects. It may also serve as a guide for reducing model error to achieve a true solution.
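The continuous model monitoring the abstract recommends can be illustrated with a minimal sketch (not taken from the cited paper): a demographic-parity check that compares the rate of positive decisions across groups, one common way to quantify bias in a decision-making model. The decision data and group labels below are hypothetical.

```python
# Illustrative sketch: monitoring a decision-making model for group bias
# via demographic parity. All names and data here are hypothetical.

def selection_rate(decisions, groups, target_group):
    """Fraction of positive decisions (1) received by members of target_group."""
    members = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups.

    A gap of 0 means every group receives positive decisions at the same
    rate; larger gaps flag the model for the kind of fairness review the
    paper recommends.
    """
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = accept, 0 = reject) for two groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)  # 0.75 - 0.25 = 0.5
```

In a monitoring loop, such a metric would be recomputed on each new batch of decisions, with a threshold triggering the "fair adjustments" the abstract describes.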
APA, Harvard, Vancouver, ISO, and other citation styles
