To see the other types of publications on this topic, follow the link: Trustworthiness of AI.

Journal articles on the topic 'Trustworthiness of AI'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Trustworthiness of AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.

Full text
Abstract:
This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to quantify trustworthiness by assessing the discrepancies between expected and observed behaviors and how these affect perceived uncertainty and trust. The paper provides a formalization for measuring trustworthiness, taking into account both perceived and intrinsic characteristics. By detailing the factors that influence trust, this study aims to foster more ethical and widely accepted AI technologies, ensuring they meet both functional and ethical criteria.
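The quantification idea sketched in this abstract, trust eroding as observed behaviour departs from expectations, can be illustrated with a toy computation. The function name, the exponential form, and the sensitivity parameter below are assumptions for illustration; they are not the formalization given in the paper.

```python
import numpy as np

def perceived_trust(expected: np.ndarray, observed: np.ndarray, sensitivity: float = 1.0) -> float:
    """Toy trust score in [0, 1]: large expectation/observation gaps lower trust.

    `sensitivity` controls how sharply discrepancies erode trust (an assumed
    form, not the paper's formalization).
    """
    discrepancy = np.mean(np.abs(expected - observed))  # mean behavioural gap
    return float(np.exp(-sensitivity * discrepancy))    # 1.0 when behaviour matches expectations

# Example: a system expected to succeed 90% of the time but observed at 70%
print(perceived_trust(np.array([0.9]), np.array([0.7])))  # ~0.82
```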
APA, Harvard, Vancouver, ISO, and other styles
2

Rishika Sen, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications." ITU Journal on Future and Evolving Technologies 5, no. 4 (December 10, 2024): 503–14. https://doi.org/10.52953/wibx7049.

Full text
Abstract:
With the rapid uptake of Artificial Intelligence (AI) in the Telecommunications (Telco) industry and the pivotal role AI is expected to play in future generation technologies (e.g., 5G, 5G Advanced and 6G), establishing the trustworthiness of AI used in Telco becomes critical. Trustworthy Artificial Intelligence (TWAI) guidelines need to be implemented, and trust in AI-powered products and services is established by demonstrating compliance with these guidelines. This paper focuses on measuring compliance with such guidelines. It proposes a Large Language Model (LLM)-driven approach, an LLM-based scanner that uses off-the-shelf LLMs to automatically measure the TWAI compliance of multiple public AI code repositories. The proposed solution measures and reports the level of compliance of an AI system. Results of the experiments demonstrate the feasibility of the proposed approach for the automated measurement of the trustworthiness of AI systems.
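A minimal sketch of what such an LLM-driven compliance scanner could look like is given below. The guideline questions, the `query_llm` placeholder, and the simple yes/no scoring are assumptions for illustration; the paper's actual prompts, guidelines, and scoring are not reproduced here.

```python
from pathlib import Path

GUIDELINES = [  # illustrative TWAI guideline checks, not the paper's actual list
    "Does the code document the provenance of its training data?",
    "Does the code log model decisions for auditability?",
    "Does the code contain tests for fairness or bias?",
]

def query_llm(prompt: str) -> str:
    """Placeholder for an off-the-shelf LLM call; swap in your provider's client."""
    raise NotImplementedError

def scan_repository(repo_path: str) -> float:
    """Return the fraction of guideline checks the LLM judges as satisfied."""
    source = "\n".join(p.read_text(errors="ignore") for p in Path(repo_path).rglob("*.py"))
    satisfied = 0
    for guideline in GUIDELINES:
        answer = query_llm(f"Answer YES or NO. {guideline}\n\nCode:\n{source[:8000]}")
        satisfied += answer.strip().upper().startswith("YES")
    return satisfied / len(GUIDELINES)
```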
APA, Harvard, Vancouver, ISO, and other styles
3

Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI." at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.

Full text
Abstract:
Artificial intelligence is increasingly penetrating industrial applications as well as areas that affect our daily lives. As a consequence, there is a need for criteria to validate whether the quality of AI applications is sufficient for their intended use. Both in the academic community and societal debate, an agreement has emerged under the term “trustworthiness” as the set of essential quality requirements that should be placed on an AI application. At the same time, the question of how these quality requirements can be operationalized is to a large extent still open. In this paper, we consider trustworthy AI from two perspectives: the product and organizational perspective. For the former, we present an AI-specific risk analysis and outline how verifiable arguments for the trustworthiness of an AI application can be developed. For the second perspective, we explore how an AI management system can be employed to assure the trustworthiness of an organization with respect to its handling of AI. Finally, we argue that in order to achieve AI trustworthiness, coordinated measures from both product and organizational perspectives are required.
APA, Harvard, Vancouver, ISO, and other styles
4

Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.

Full text
Abstract:
With growing concerns regarding bias and discrimination in predictive models, the AI community has increasingly focused on assessing AI system trustworthiness. Conventionally, trustworthy AI literature relies on the probabilistic framework and calibration as prerequisites for trustworthiness. In this work, we depart from this viewpoint by proposing a novel trust framework inspired by the philosophy literature on trust. We present a precise mathematical definition of trustworthiness, termed U-trustworthiness, specifically tailored for a subset of tasks aimed at maximizing a utility function. We argue that a model’s U-trustworthiness is contingent upon its ability to maximize Bayes utility within this task subset. Our first set of results challenges the probabilistic framework by demonstrating its potential to favor less trustworthy models and introduce the risk of misleading trustworthiness assessments. Within the context of U-trustworthiness, we prove that properly-ranked models are inherently U-trustworthy. Furthermore, we advocate for the adoption of the AUC metric as the preferred measure of trustworthiness. By offering both theoretical guarantees and experimental validation, AUC enables robust evaluation of trustworthiness, thereby enhancing model selection and hyperparameter tuning to yield more trustworthy outcomes.
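Since the abstract advocates AUC as the preferred trustworthiness measure, a short sketch shows how AUC can drive model selection with scikit-learn. The synthetic data and the two candidate models are stand-ins, not the paper's experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a utility-maximizing qualification task
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = {"logreg": LogisticRegression(max_iter=1000),
              "forest": RandomForestClassifier(random_state=0)}

# Rank candidate models by AUC, i.e. by how well they order positives above negatives
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```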
APA, Harvard, Vancouver, ISO, and other styles
5

Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness." Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.

Full text
Abstract:
It is well known that successful human-AI collaboration depends on the perceived trustworthiness of the AI. We argue that a key to securing trust in such collaborations is ensuring that the AI competently addresses ethics' foundational role in engagements. Specifically, developers need to identify, address, and implement mechanisms for accommodating ethical components of AI choices. We propose an approach that instantiates ethics semantically as ontology-based moral policies. To accommodate the wide variation and interpretation of ethics, we capture such variations into ethics sets, which are situationally specific aggregations of relevant moral policies. We are extending our ontology-based policy management systems with new representations and capabilities to allow trustworthy AI-human ethical collaborative behavior. Moreover, we believe that such AI-human ethical encounters demand that trustworthiness is bi-directional – humans need to be able to assess and calibrate their actions to be consistent with the trustworthiness of AI in a given context, and AIs need to be able to do the same with respect to humans.
APA, Harvard, Vancouver, ISO, and other styles
6

Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy, et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements." International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.

Full text
Abstract:
Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with this second question, the key requirements that establish the trustworthiness of these systems have been explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthy requirements for challenging applications? Regarding this last question, six different applications have been discussed, including trustworthy AI in education, environmental science, 5G-based IoT networks, robotics for architecture, engineering and construction, financial technology, and healthcare. The review emphasises the need to address trustworthiness in AI systems before their deployment in order to achieve the goal of AI for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for AI researchers seeking to achieve trustworthiness in their applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework." Big Data and Cognitive Computing 8, no. 11 (October 24, 2024): 142. http://dx.doi.org/10.3390/bdcc8110142.

Full text
Abstract:
This study critically examines the trustworthiness of widely used AI applications, focusing on their integration into daily life, often without users fully understanding the risks or how these threats might affect them. As AI apps become more accessible, users tend to trust them due to their convenience and usability, frequently overlooking critical issues such as security, privacy, and ethics. To address this gap, we introduce a user-centric framework that enables individuals to assess the trustworthiness of AI applications based on their own experiences and perceptions. The framework evaluates several dimensions—transparency, security, privacy, ethics, and compliance—while also aiming to raise awareness and bring the topic of AI trustworthiness into public dialogue. By analyzing AI threats, real-world incidents, and strategies for mitigating the risks posed by AI apps, this study contributes to the ongoing discussions on AI safety and trust.
APA, Harvard, Vancouver, ISO, and other styles
8

Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges." Intelligent Decision Technologies 18, no. 2 (June 7, 2024): 837–54. http://dx.doi.org/10.3233/idt-240366.

Full text
Abstract:
Artificial Intelligence (AI) has pervaded everyday life, reshaping the landscape of business, economy, and society through the alteration of interactions and connections among stakeholders and citizens. Nevertheless, the widespread adoption of AI presents significant risks and hurdles, sparking apprehension regarding the trustworthiness of AI systems by humans. Lately, numerous governmental entities have introduced regulations and principles aimed at fostering trustworthy AI systems, while companies, research institutions, and public sector organizations have released their own sets of principles and guidelines for ensuring ethical and trustworthy AI. Additionally, they have developed methods and software toolkits to aid in evaluating and improving the attributes of trustworthiness. The present paper aims to explore this evolution by analysing and supporting the trustworthiness of AI systems. We commence with an examination of the characteristics inherent in trustworthy AI, along with the corresponding principles and standards associated with them. We then examine the methods and tools that are available to designers and developers in their quest to operationalize trusted AI systems. Finally, we outline research challenges towards end-to-end engineering of trustworthy AI by-design.
APA, Harvard, Vancouver, ISO, and other styles
9

Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion." Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.

Full text
Abstract:
Artificial intelligence (AI) offers numerous opportunities to contribute to the prosperity of people and the stability of economies and society, yet it also raises a variety of novel moral, legal, social, and technological challenges. Trustworthy AI (TAI) builds on the idea that trust forms the foundation of societies, economies, and sustainable development, and that people, organizations, and societies can therefore only realize the full potential of AI if trust can be established in its development, deployment, and use. The risks of unintended and negative outcomes related to AI are proportionately high, particularly at scale. Most AI is really artificial narrow intelligence, intended to achieve a specific task on previously curated information from a certain source. Since most AI models build on correlations, predictions may fail to generalize to different populations or settings and might reinforce existing disparities and biases. As the AI industry is highly imbalanced, and experts are already overwhelmed by other digital devices, there may be little capacity to catch errors. With this article, we aim to present the idea of TAI and its five essential principles: (1) usefulness, (2) non-maleficence, (3) autonomy, (4) justice, and (5) logic. We further draw on these five principles to develop a data-driven analysis for TAI and present its application by outlining productive paths for future research, especially with regard to the distributed ledger technology-based realization of TAI.
APA, Harvard, Vancouver, ISO, and other styles
10

AJAYI, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering’s Key Role in AI Content Trustworthiness." International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.

Full text
Abstract:
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. It can also be defined as the science and engineering of making intelligent machines, especially intelligent computer programs. In recent decades, there has been a discernible surge in the focus of the scientific and government sectors on reliable AI. The International Organization for Standardization, which focuses on technical, industrial, and commercial standardization, has devised several strategies to promote trust in AI systems, with an emphasis on fairness, transparency, accountability, and controllability. Therefore, this paper aims to examine the role of Software Engineering in AI Content trustworthiness. A secondary data analysis methodology was used in this work to investigate the crucial role that software engineering plays in ensuring the accuracy of AI content. The dataset was guaranteed to contain reliable and comprehensive material relevant to our inquiry because it was derived from peer-reviewed publications. To decrease potential biases and increase data consistency, a rigorous validation process was employed. The findings of the paper showed that lawful, ethical, and robust are the fundamental components of reliable Artificial Intelligence. The criteria for Reliable Artificial Intelligence include Transparency, Human agency and oversight, technical robustness and safety, privacy and data governance, diversity, non-discrimination, fairness, etc. The functions of software engineering in the credibility of AI content are Algorithm Design and Implementation, Data Quality and Preprocessing, Explainability and Interpretability, Ethical Considerations and Governance, User feedback, and Iterative Improvements among others. It is therefore essential for Software engineering to ensure the dependability of material generated by AI systems at every stage of the development lifecycle. To build and maintain reliable AI systems, engineers must address problems with data quality, model interpretability, ethical difficulties, security, and user input.
APA, Harvard, Vancouver, ISO, and other styles
11

Abbott, Ryan, and Brinson S. Elliott. "Putting the Artificial Intelligence in Alternative Dispute Resolution." Amicus Curiae 4, no. 3 (June 24, 2023): 685–706. http://dx.doi.org/10.14296/ac.v4i3.5627.

Full text
Abstract:
This article argues that the evolving regulatory and governance environment for artificial intelligence (AI) will significantly impact alternative dispute resolution (ADR). Very recently, AI regulation has emerged as a pressing international policy issue, with jurisdictions engaging in a sort of regulatory arms race. In the same way that existing ADR regulations impact the use of AI in ADR, so too will new AI regulations impact ADR, among other reasons, because ADR is already utilizing AI and will increasingly utilize AI in the future. Appropriate AI regulations should thus benefit ADR, as the regulatory approaches in both fields share many of the same goals and values, such as promoting trustworthiness. Keywords: artificial intelligence; online dispute resolution; alternative dispute resolution; regulation; governance; trustworthiness; transparency; fairness; diversity; explainability.
APA, Harvard, Vancouver, ISO, and other styles
12

Kuipers, Benjamin. "AI and Society: Ethics, Trust, and Cooperation." Communications of the ACM 66, no. 8 (July 25, 2023): 39–42. http://dx.doi.org/10.1145/3583134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Avin, Shahar, Haydn Belfield, Miles Brundage, Gretchen Krueger, Jasmine Wang, Adrian Weller, Markus Anderljung, et al. "Filling gaps in trustworthy development of AI." Science 374, no. 6573 (December 10, 2021): 1327–29. http://dx.doi.org/10.1126/science.abi7176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Xin, Ruyue, Jingye Wang, Peng Chen, and Zhiming Zhao. "Trustworthy AI-based Performance Diagnosis Systems for Cloud Applications: A Review." ACM Computing Surveys 57, no. 5 (January 9, 2025): 1–37. https://doi.org/10.1145/3701740.

Full text
Abstract:
Performance diagnosis systems detect abnormal performance phenomena and play a crucial role in cloud applications. An effective performance diagnosis system is often developed based on artificial intelligence (AI) approaches, which can be summarized into a general framework from data to models. However, the AI-based framework has potential hazards that could degrade the user experience and trust. For example, a lack of data privacy may compromise the security of AI models, and models with low robustness can be hard to apply in complex cloud environments. Therefore, defining the requirements for building a trustworthy AI-based performance diagnosis system has become essential. This article systematically reviews trustworthiness requirements in AI-based performance diagnosis systems. We first introduce trustworthiness requirements and extract six key requirements from a technical perspective, including data privacy, fairness, robustness, explainability, efficiency, and human intervention. We then unify these requirements into a general performance diagnosis framework, ranging from data collection to model development. Next, we comprehensively provide related works for each component and concrete actions to improve trustworthiness in the framework. Finally, we identify possible research directions and challenges for the future development of trustworthy AI-based performance diagnosis systems.
APA, Harvard, Vancouver, ISO, and other styles
15

Paulsen, Jens Erik. "AI, Trustworthiness, and the Digital Dirty Harry Problem." Nordic Journal of Studies in Policing 8, no. 02 (June 23, 2021): 1–19. http://dx.doi.org/10.18261/issn.2703-7045-2021-02-02.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Kundu, Shinjini. "Measuring trustworthiness is crucial for medical AI tools." Nature Human Behaviour 7, no. 11 (November 20, 2023): 1812–13. http://dx.doi.org/10.1038/s41562-023-01711-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kshirsagar, Meghana, Krishn Kumar Gupt, Gauri Vaidya, Conor Ryan, Joseph P. Sullivan, and Vivek Kshirsagar. "Insights Into Incorporating Trustworthiness and Ethics in AI Systems With Explainable AI." International Journal of Natural Computing Research 11, no. 1 (January 1, 2022): 1–23. http://dx.doi.org/10.4018/ijncr.310006.

Full text
Abstract:
Over the past seven decades since the advent of artificial intelligence (AI) technology, researchers have demonstrated and deployed systems incorporating AI in various domains. The absence of model explainability in critical systems such as medical AI and credit risk assessment among others has led to neglect of key ethical and professional principles which can cause considerable harm. With explainability methods, developers can check their models beyond mere performance and identify errors. This leads to increased efficiency in time and reduces development costs. The article summarizes that steering the traditional AI systems toward responsible AI engineering can address concerns raised in the deployment of AI systems and mitigate them by incorporating explainable AI methods. Finally, the article concludes with the societal benefits of the futuristic AI systems and the market shares for revenue generation possible through the deployment of trustworthy and ethical AI systems.
APA, Harvard, Vancouver, ISO, and other styles
18

Serafimova, Silviya. "Questioning the Role of Moral AI as an Adviser within the Framework of Trustworthiness Ethics." Filosofiya-Philosophy 30, no. 4 (December 6, 2021): 402–12. http://dx.doi.org/10.53656/phil2021-04-07.

Full text
Abstract:
The main objective of this article is to demonstrate why, despite the growing interest in justifying AI’s trustworthiness, one can argue for AI’s reliability. By analyzing why trustworthiness ethics in Nickel’s sense provides some well-grounded hints for rethinking the rational, affective and normative accounts of trust with respect to AI, I examine some concerns about the trustworthiness of Savulescu and Maslen’s model of moral AI as an adviser. Specifically, I tackle one of its exemplifications regarding Klincewicz’s hypothetical scenario of John, which is refracted through the lens of the HLEG’s fifth requirement of trustworthy artificial intelligence (TAI), namely, that of Diversity, non-discrimination and fairness.
APA, Harvard, Vancouver, ISO, and other styles
19

Hohma, Ellen, and Christoph Lütge. "From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems." AI 4, no. 4 (October 13, 2023): 904–26. http://dx.doi.org/10.3390/ai4040046.

Full text
Abstract:
The current endeavor of moving AI ethics from theory to practice can frequently be observed in academia and industry and indicates a major achievement in the theoretical understanding of responsible AI. Its practical application, however, currently poses challenges, as mechanisms for translating the proposed principles into easily feasible actions are often considered unclear and not ready for practice. In particular, a lack of uniform, standardized approaches that are aligned with regulatory provisions is often highlighted by practitioners as a major drawback to the practical realization of AI governance. To address these challenges, we propose a stronger shift in focus from solely the trustworthiness of AI products to the perceived trustworthiness of the development process by introducing a concept for a trustworthy development process for AI systems. We derive this process from a semi-systematic literature analysis of common AI governance documents to identify the most prominent measures for operationalizing responsible AI and compare them to implications for AI providers from EU-centered regulatory frameworks. Assessing the resulting process along derived characteristics of trustworthy processes shows that, while clarity is often mentioned as a major drawback, and many AI providers tend to wait for finalized regulations before reacting, the summarized landscape of proposed AI governance mechanisms can already cover many of the binding and non-binding demands circulating similar activities to address fundamental risks. Furthermore, while many factors of procedural trustworthiness are already fulfilled, limitations are seen particularly due to the vagueness of currently proposed measures, calling for a detailing of measures based on use cases and the system’s context.
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Bijun, Onur Asan, and Mo Mansouri. "What May Impact Trustworthiness of AI in Digital Healthcare: Discussion from Patients’ Viewpoint." Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 12, no. 1 (March 2023): 5–10. http://dx.doi.org/10.1177/2327857923121001.

Full text
Abstract:
The healthcare industry is undergoing a transformation of traditional medical relationships from human-physician interactions to digital healthcare focusing on physician-AI-patient interactions. Patients’ trustworthiness is the cornerstone of adopting new technologies, expounding the reliability, integrity, and ability of AI-based systems and devices to provide an accurate and safe healthcare environment. The main objective of this study is to investigate the various factors that influence patients’ trustworthiness in AI-based systems and devices, taking into account differences in patients’ experiences and backgrounds. First, an exploratory conceptual framework inspired by the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Health Belief Model (HBM) is established to further explain patients’ trust and support their willingness to adopt AI. Then a case study that includes 218 samples from chronic patients is conducted. The results of the study indicate that factors such as accountability, risk perception, facilitating conditions, and social influence play a significant role in determining a patient’s trust in AI-based healthcare devices, while ease of use may not have a direct impact on trust. Among the demographic factors, only race showed a strong correlation with the level of patient trust in AI. The contributions of this study can provide a comprehensive understanding of patients’ trustworthiness and inform the development and deployment of the technology in a way that prioritizes patients’ interests.
APA, Harvard, Vancouver, ISO, and other styles
21

Jansaenroj, Krit. "ATTITUDE OF MILLENNIALS AND GENERATION Z TOWARDS ARTIFICIAL INTELLIGENCE IN SURGERY." International Journal of Advanced Research 10, no. 7 (July 31, 2022): 921–26. http://dx.doi.org/10.21474/ijar01/15114.

Full text
Abstract:
Because of its increasing ability to turn ambiguity and complexity in data into actionable, though imperfect, clinical choices or suggestions, artificial intelligence (AI) has the potential to change health care practices. Trust is the only mechanism that influences physicians' use and adoption of AI in the growing interaction between humans and AI. Trust is a psychological process that enables people to deal with ambiguity in what they know and do not know. The purpose of this online survey was to determine the relationship between age group, familiarity, and trustworthiness towards AI through the question of whether participants would prefer a human or an AI surgeon if they had to undergo surgery. The results showed that age group and trustworthiness are not correlated, due to a variety of factors, and that familiarity is also not correlated with age group.
APA, Harvard, Vancouver, ISO, and other styles
22

Mattioli, Juliette, Martin Gonzalez, Lucas Mattioli, Karla Quintero, and Henri Sohier. "Leveraging Tropical Algebra to Assess Trustworthy AI." Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 81–88. http://dx.doi.org/10.1609/aaaiss.v4i1.31775.

Full text
Abstract:
Given the complexity of the application domain, the qualitative and quantifiable nature of the concepts involved, the wide heterogeneity and granularity of trustworthy attributes, and in some cases the non-comparability of the latter, assessing the trustworthiness of AI-based systems is a challenging process. In order to overcome these challenges, the Confiance.ai program proposes an innovative solution based on a Multi-Criteria Decision Aiding (MCDA) methodology. This approach involves several stages: framing trustworthiness as a set of well-defined attributes, exploring attributes to determine related Key Performance Indicators (KPI) or metrics, selecting evaluation protocols, and defining a method to aggregate multiple criteria to estimate an overall assessment of trust. This approach is illustrated by applying the RUM methodology (Robustness, Uncertainty, Monitoring) to ML context, while the focus on aggregation methods are based on Tropical Algebra.
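One possible reading of a tropical (min-based) aggregation, where the overall assessment is dominated by the weakest criterion, is sketched below. The attribute names follow the RUM grouping mentioned in the abstract, but the scores and the specific aggregation rule are illustrative assumptions, not Confiance.ai's actual protocol.

```python
# Weakest-link style aggregation of trust attributes, in the spirit of a
# tropical (min-based) semiring. Attribute names and scores are illustrative;
# the program's actual MCDA protocol and KPIs are not reproduced here.
attributes = {"robustness": 0.8, "uncertainty": 0.6, "monitoring": 0.9}

# With a min-based aggregation the overall score is dominated by the weakest
# criterion, so no single strong attribute can mask a weak one.
overall = min(attributes.values())
bottleneck = min(attributes, key=attributes.get)
print(f"overall trust score: {overall:.2f} (limited by '{bottleneck}')")
```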
APA, Harvard, Vancouver, ISO, and other styles
23

Nayak, Bhabani Sankar. "ROBUSTNESS AND TRUSTWORTHINESS IN AI SYSTEMS: A TECHNICAL PERSPECTIVE." INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND INFORMATION TECHNOLOGY 8, no. 1 (February 8, 2025): 1849–62. https://doi.org/10.34218/ijrcait_08_01_135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ucar, Aysegul, Mehmet Karakose, and Necim Kırımça. "Artificial Intelligence for Predictive Maintenance Applications: Key Components, Trustworthiness, and Future Trends." Applied Sciences 14, no. 2 (January 20, 2024): 898. http://dx.doi.org/10.3390/app14020898.

Full text
Abstract:
Predictive maintenance (PdM) is a policy applying data and analytics to predict when one of the components in a real system has been destroyed, and some anomalies appear so that maintenance can be performed before a breakdown takes place. Using cutting-edge technologies like data analytics and artificial intelligence (AI) enhances the performance and accuracy of predictive maintenance systems and increases their autonomy and adaptability in complex and dynamic working environments. This paper reviews the recent developments in AI-based PdM, focusing on key components, trustworthiness, and future trends. The state-of-the-art (SOTA) techniques, challenges, and opportunities associated with AI-based PdM are first analyzed. The integration of AI technologies into PdM in real-world applications, the human–robot interaction, the ethical issues emerging from using AI, and the testing and validation abilities of the developed policies are later discussed. This study exhibits the potential working areas for future research, such as digital twin, metaverse, generative AI, collaborative robots (cobots), blockchain technology, trustworthy AI, and Industrial Internet of Things (IIoT), utilizing a comprehensive survey of the current SOTA techniques, opportunities, and challenges allied with AI-based PdM.
APA, Harvard, Vancouver, ISO, and other styles
25

Azzam, Tarek. "Artificial intelligence and validity." New Directions for Evaluation 2023, no. 178-179 (June 2023): 85–95. http://dx.doi.org/10.1002/ev.20565.

Full text
Abstract:
This article explores the interaction between artificial intelligence (AI) and validity and identifies areas where AI can help build validity arguments, and where AI might not be ready to contribute to our work in establishing validity. The validity of claims made in an evaluation is critical to the field, since it highlights the strengths and limitations of findings and can contribute to the utilization of the evaluation. Within this article, validity will be discussed within two broad categories: quantitative validity and qualitative trustworthiness. Within these categories, there are multiple types of validity, including internal validity, measurement validity, establishing trustworthiness, and credibility, to name a few. Each validity type will be discussed within the context of AI, examining if and how AI can be leveraged (or not) to help establish a specific validity type, or where it might not be possible for AI (in its current form) to contribute to the development of a validity argument. Multiple examples will be provided throughout the article to highlight the concepts introduced.
APA, Harvard, Vancouver, ISO, and other styles
26

Fehr, Jana, Giovanna Jaramillo-Gutierrez, Luis Oala, Matthias I. Gröschel, Manuel Bierwirth, Pradeep Balachandran, Alixandro Werneck-Leite, and Christoph Lippert. "Piloting A Survey-Based Assessment of Transparency and Trustworthiness with Three Medical AI Tools." Healthcare 10, no. 10 (September 30, 2022): 1923. http://dx.doi.org/10.3390/healthcare10101923.

Full text
Abstract:
Artificial intelligence (AI) offers the potential to support healthcare delivery, but poorly trained or validated algorithms bear risks of harm. Ethical guidelines stated transparency about model development and validation as a requirement for trustworthy AI. Abundant guidance exists to provide transparency through reporting, but poorly reported medical AI tools are common. To close this transparency gap, we developed and piloted a framework to quantify the transparency of medical AI tools with three use cases. Our framework comprises a survey to report on the intended use, training and validation data and processes, ethical considerations, and deployment recommendations. The transparency of each response was scored with either 0, 0.5, or 1 to reflect if the requested information was not, partially, or fully provided. Additionally, we assessed on an analogous three-point scale if the provided responses fulfilled the transparency requirement for a set of trustworthiness criteria from ethical guidelines. The degree of transparency and trustworthiness was calculated on a scale from 0% to 100%. Our assessment of three medical AI use cases pin-pointed reporting gaps and resulted in transparency scores of 67% for two use cases and one with 59%. We report anecdotal evidence that business constraints and limited information from external datasets were major obstacles to providing transparency for the three use cases. The observed transparency gaps also lowered the degree of trustworthiness, indicating compliance gaps with ethical guidelines. All three pilot use cases faced challenges to provide transparency about medical AI tools, but more studies are needed to investigate those in the wider medical AI sector. Applying this framework for an external assessment of transparency may be infeasible if business constraints prevent the disclosure of information. New strategies may be necessary to enable audits of medical AI tools while preserving business secrets.
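The scoring scheme described in the abstract, per-item scores of 0, 0.5, or 1 aggregated to a 0-100% degree of transparency, can be sketched as follows; equal weighting of items is an assumption, as the published framework may weight survey sections differently.

```python
def transparency_score(item_scores: list[float]) -> float:
    """Aggregate 0 / 0.5 / 1 survey item scores into a 0-100% transparency degree.

    Equal weighting across items is an assumption, not part of the published
    framework's specification.
    """
    assert all(s in (0.0, 0.5, 1.0) for s in item_scores)
    return 100.0 * sum(item_scores) / len(item_scores)

# Example: a tool that fully reports 4 items, partially reports 2, and omits 2
print(transparency_score([1, 1, 1, 1, 0.5, 0.5, 0, 0]))  # 62.5
```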
APA, Harvard, Vancouver, ISO, and other styles
27

Slosser, Jacob Livingston, Birgit Aasa, and Henrik Palmer Olsen. "Trustworthy AI." Technology and Regulation 2023 (October 27, 2023): 58–68. https://doi.org/10.71265/pztsvw73.

Full text
Abstract:
The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive on adapting non-contractual civil liability rules to AI (AI liability directive) due to increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting that involves human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an informed notion of what is meant for a system to be trustworthy and examine the concept of trust, highlighting its reliance on a specific relationship between humans that cannot be strictly transmuted into a relationship between humans and machines. They outline a trust-based model for a cooperative approach to AI and provide an example of what that might look like.
APA, Harvard, Vancouver, ISO, and other styles
28

Long, Qiyu. "Effect of Racial Homophily on AI Anthropomorphism and News Anchor Credibility." Journal of Education, Humanities and Social Sciences 45 (December 26, 2024): 528–38. https://doi.org/10.54097/bd1tkw72.

Full text
Abstract:
This study explores the influence of anthropomorphism and racial homophily on audience trust in Artificial Intelligence News Anchors (AINAs) in the context of contemporary journalism. Utilizing a comprehensive between-groups experiment, participants were recruited online and presented with audiovisual news clips featuring AINAs. The research investigates the relationships among anthropomorphic cues, viewers’ perceptions of racial homogeneity, and the trustworthiness of news conveyed by these AI entities. Findings indicate a significant positive correlation between visual cues and news trustworthiness, while anthropomorphic features exert a moderating effect. However, the study highlights limitations in sample representativeness and generalizability across diverse cultural contexts, suggesting that results may not apply universally to all AINA viewers. The study calls for further exploration of the interaction between racial traits and AI technology, emphasizing the need to consider personal attributes and the evolving landscape of AI in journalism. By advancing the theoretical framework of human-AI interaction, this research contributes valuable insights into the intersection of technology, media, and audience perception.
APA, Harvard, Vancouver, ISO, and other styles
29

Ganguly, Shantanu, and Nivedita Pandey. "Deployment of AI Tools and Technologies on Academic Integrity and Research." Bangladesh Journal of Bioethics 15, no. 2 (July 1, 2024): 28–32. http://dx.doi.org/10.62865/bjbio.v15i2.122.

Full text
Abstract:
Academic integrity is a set of ethical ideals and values that guide the behavior of individuals in academic and educational settings. It encompasses honesty, trustworthiness, fairness, and a commitment to upholding the highest standards of ethical conduct in the quest for knowledge, learning, and research. Academic integrity is essential in maintaining the trustworthiness, reputation, and effectiveness of educational institutions and scholarly communities. Whereas, AI, or Artificial Intelligence, is a broad field of computer science that focuses on creating frameworks, software, or machines that can perform tasks that would typically require human intelligence. These tasks include problem-solving, learning from experience, understanding natural language, recognizing patterns, and making choices. AI systems aim to mimic or replicate human cognitive functions, and they can range from simple rule-based systems to highly complex, autodidactic neural networks. AI can significantly impact academic integrity and research in both positive and potentially challenging ways.
APA, Harvard, Vancouver, ISO, and other styles
30

R. S. Deshpande, P. V. Ambatkar. "Interpretable Deep Learning Models: Enhancing Transparency and Trustworthiness in Explainable AI." Proceeding International Conference on Science and Engineering 11, no. 1 (February 18, 2023): 1352–63. http://dx.doi.org/10.52783/cienceng.v11i1.286.

Full text
Abstract:
Explainable AI (XAI) aims to address the opacity of deep learning models, which can limit their adoption in critical decision-making applications. This paper presents a novel framework that integrates interpretable components and visualization techniques to enhance the transparency and trustworthiness of deep learning models. We propose a hybrid explanation method combining saliency maps, feature attribution, and local interpretable model-agnostic explanations (LIME) to provide comprehensive insights into the model's decision-making process. Our experiments with convolutional neural networks (CNNs) and transformers demonstrate that our approach improves interpretability without compromising performance. User studies with domain experts indicate that our visualization dashboard facilitates better understanding and trust in AI systems. This research contributes to developing more transparent and trustworthy deep learning models, paving the way for broader adoption in sensitive applications where human users need to understand and trust AI decisions.
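Of the techniques combined in the hybrid method, LIME is the most readily illustrated. The sketch below applies the `lime` package to a tabular classifier; the dataset and model are stand-ins, and the paper's CNN/transformer experiments and visualization dashboard are not reproduced.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in model and data; these only illustrate the LIME building block.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")  # local feature attributions for one prediction
```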
APA, Harvard, Vancouver, ISO, and other styles
31

Psaltis, Athanasios, Kassiani Zafeirouli, Peter Leškovský, Stavroula Bourou, Juan Camilo Vásquez-Correa, Aitor García-Pablos, Santiago Cerezo Sánchez, Anastasios Dimou, Charalampos Z. Patrikakis, and Petros Daras. "Fostering Trustworthiness of Federated Learning Ecosystem through Realistic Scenarios." Information 14, no. 6 (June 16, 2023): 342. http://dx.doi.org/10.3390/info14060342.

Full text
Abstract:
The present study thoroughly evaluates the most common blocking challenges faced by the federated learning (FL) ecosystem and analyzes existing state-of-the-art solutions. A system adaptation pipeline is designed to enable the integration of different AI-based tools in the FL system, while FL training is conducted under realistic conditions using a distributed hardware infrastructure. The suggested pipeline and FL system’s robustness are tested against challenges related to tool deployment, data heterogeneity, and privacy attacks for multiple tasks and data types. A representative set of AI-based tools and related datasets have been selected to cover several validation cases and distributed to each edge device to closely reflect real-world scenarios. The study presents significant outcomes of the experiments and analyzes the models’ performance under different realistic FL conditions, while highlighting potential limitations and issues that occurred during the FL process.
APA, Harvard, Vancouver, ISO, and other styles
32

Farayola, Michael Mayowa, Irina Tal, Regina Connolly, Takfarinas Saber, and Malika Bendechache. "Ethics and Trustworthiness of AI for Predicting the Risk of Recidivism: A Systematic Literature Review." Information 14, no. 8 (July 27, 2023): 426. http://dx.doi.org/10.3390/info14080426.

Full text
Abstract:
Artificial Intelligence (AI) can be very beneficial in the criminal justice system for predicting the risk of recidivism. AI provides unrivalled high computing power, speed, and accuracy; all harnessed to strengthen the efficiency in predicting convicted individuals who may be on the verge of recommitting a crime. The application of AI models for predicting recidivism has brought positive effects by minimizing the possible re-occurrence of crime. However, the question remains of whether criminal justice system stakeholders can trust AI systems regarding fairness, transparency, privacy and data protection, consistency, societal well-being, and accountability when predicting convicted individuals’ possible risk of recidivism. These are all requirements for a trustworthy AI. This paper conducted a systematic literature review examining trust and the different requirements for trustworthy AI applied to predicting the risks of recidivism. Based on this review, we identified current challenges and future directions regarding applying AI models to predict the risk of recidivism. In addition, this paper provides a comprehensive framework of trustworthy AI for predicting the risk of recidivism.
APA, Harvard, Vancouver, ISO, and other styles
33

Capelli, Giulia, Daunia Verdi, Isabella Frigerio, Niki Rashidian, Antonella Ficorilli, Vincent Grasso, Darya Majidi, Andrew A. Gumbs, and Gaya Spolverato. "White paper: ethics and trustworthiness of artificial intelligence in clinical surgery." Artificial Intelligence Surgery 3, no. 2 (2023): 111–22. http://dx.doi.org/10.20517/ais.2023.04.

Full text
Abstract:
This white paper documents the consensus opinion of the Artificial Intelligence Surgery (AIS) task force on Artificial Intelligence (AI) Ethics and the AIS Editorial Board Study Group on Ethics on the ethical considerations and current trustworthiness of artificial intelligence and autonomous actions in surgery. The ethics were divided into 6 topics defined by the Task Force: Reliability of robotic and AI systems; Respect for privacy and sensitive data; Use of complete and representative (i.e., unbiased) data; Transparencies and uncertainties in AI; Fairness: are we exacerbating inequalities in access to healthcare?; Technology as an equalizer in surgical education. Task Force members were asked to research a topic, draft a section, and come up with several potential consensus statements. These were voted on by members of the Task Force and the Study Group, and all proposals that received > 75 % agreement were adopted and included in the White Paper.
APA, Harvard, Vancouver, ISO, and other styles
34

Ali Khan, Umair, Janne Kauttonen, Lili Aunimo, and Ari V Alamäki. "A System to Ensure Information Trustworthiness in Artificial Intelligence Enhanced Higher Education." Journal of Information Technology Education: Research 23 (2024): 013. http://dx.doi.org/10.28945/5295.

Full text
Abstract:
Aim/Purpose: The purpose of this paper is to address the challenges posed by disinformation in an educational context. The paper aims to review existing information assessment techniques, highlight their limitations, and propose a conceptual design for a multimodal, explainable information assessment system for higher education. The ultimate goal is to provide a roadmap for researchers that meets current requirements of information assessment in education.
Background: The background of this paper is rooted in the growing concern over disinformation, especially in higher education, where it can impact critical thinking and decision-making. The issue is exacerbated by the rise of AI-based analytics on social media and their use in educational settings. Existing information assessment techniques have limitations, requiring a more comprehensive AI-based approach that considers a wide range of data types and multiple dimensions of disinformation.
Methodology: Our approach involves an extensive literature review of current methods for information assessment, along with their limitations. We then establish theoretical foundations and design concepts for EMIAS based on AI techniques and knowledge graph theory.
Contribution: We introduce a comprehensive theoretical framework for an AI-based multimodal information assessment system specifically designed for the education sector. It not only provides a novel approach to assessing information credibility but also proposes the use of explainable AI and a three-pronged approach to information evaluation, addressing a critical gap in the current literature. This research also serves as a guide for educational institutions considering the deployment of advanced AI-based systems for information evaluation.
Findings: We uncover a critical need for robust information assessment systems in higher education to tackle disinformation. We propose an AI-based EMIAS system designed to evaluate the trustworthiness and quality of content while providing explanatory justifications. We underscore the challenges of integrating this system into educational infrastructures and emphasize its potential benefits, such as improved teaching quality and fostering critical thinking.
Recommendations for Practitioners: Implement the proposed EMIAS system to enhance the credibility of information in educational settings and foster critical thinking among students and teachers.
Recommendation for Researchers: Explore domain-specific adaptations of EMIAS, research on user feedback mechanisms, and investigate seamless integration techniques within existing academic infrastructure.
Impact on Society: This paper’s findings could strengthen academic integrity and foster a more informed society by improving the quality of information in education.
Future Research: Further research should investigate the practical implementation, effectiveness, and adaptation of EMIAS across various educational contexts.
APA, Harvard, Vancouver, ISO, and other styles
35

Purves, Duncan, Schuyler Sturm, and John Madock. "What to Trust When We Trust Artificial Intelligence (Extended Abstract)." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 1166. http://dx.doi.org/10.1609/aies.v7i1.31713.

Full text
Abstract:
So-called “trustworthy AI” has emerged as a guiding aim of industry leaders, computer and data science researchers, and policy makers in the US and Europe. Often, trustworthy AI is characterized in terms of a list of criteria. These lists usually include at least fairness, accountability, and transparency. Fairness, accountability, and transparency are valuable objectives, and they have begun to receive attention from philosophers and legal scholars. However, those who put forth criteria for trustworthy AI have failed to explain why satisfying the criteria makes an AI system—or the organizations that make use of the AI system—worthy of trust. Nor do they explain why the aim of trustworthy AI is important enough to justify devoting resources to achieve it. It even remains unclear whether an AI system is the sort of thing that can be trustworthy or not. To explain why fairness, accountability, and transparency are suitable criteria for trustworthy AI, one needs an analysis of trustworthy AI. Providing an analysis of trustworthy AI is a distinct task from providing criteria. Criteria are diagnostic; they provide a useful test for the phenomenon of interest, but they do not purport to explain the nature of the phenomenon. It is conceivable that an AI system could lack transparency, accountability, or fairness while remaining trustworthy. An analysis of trustworthy AI provides the fundamental features of an AI system in virtue of which it is (or is not) worthy of trust. An AI system that lacks these features will, necessarily, fail to be worthy of trust. This paper puts forward an analysis of trustworthy AI that can be used to critically evaluate criteria for trustworthy AI such as fairness, accountability, and transparency. In this paper we first make clear the target concept to be analyzed: trustworthy AI. We argue that AI, at least in its current form, should be understood as a distributed, complex system embedded in a larger institutional context. This characterization of AI is consistent with recent definitions proposed by national and international regulatory bodies, and it eliminates some unhappy ambiguity in the common usage of the term. We further limit the scope of our discussion to AI systems which are used to inform decision-making about qualification problems, problems wherein a decision-maker must decide whether an individual is qualified for some beneficial or harmful treatment. We argue that, given reasonable assumptions about the nature of trust and trustworthiness, only AI systems that are used to inform decision-making about qualification problems are appropriate candidates for attributions of (un)trustworthiness. We then distinguish between two models of trust and trustworthiness that we find in the existing literature. We motivate our account by highlighting this as a dilemma in the accounts of trustworthy AI that have previously been offered. These accounts claim that trustworthiness is either exclusive to full agents (and it is thus nonsense when we talk of trustworthy AI), or they offer an account of trustworthiness that collapses into mere reliability. The first sort of account we refer to as an agential account and the second sort we refer to as a reliability account. We offer that one of the core challenges of putting forth an account of trustworthy AI is to avoid reducing to one of these two camps.
It is thus a desideratum of our account that it avoids being exclusive to full moral agents, while it simultaneously avoids capturing things such as mere tools. We go on to propose our positive account which we submit avoids these twin pitfalls. We subsequently argue that if AI can be trustworthy, then it will be trustworthy on an institutional model. Starting from an account of institutional trust offered by Purves and Davis, we argue that trustworthy AI systems have three features: they are competent with regard to the task they are assigned, they are responsive to the morally salient facts governing the decision-making context in which they are deployed, and they publicly provide evidence of these features. As noted, this account builds on a model of institutional trust offered by Purves and Davis and an account of default trust from Margaret Urban Walker. The resulting account allows us to accommodate the core challenge of finding a balance between agential accounts and reliability accounts. We go on to refine our account, answer objections, and revisit the list criteria from above as explained in terms of competence, responsiveness, and evidence.
APA, Harvard, Vancouver, ISO, and other styles
36

Song, Yao, and Yan Luximon. "Trust in AI Agent: A Systematic Review of Facial Anthropomorphic Trustworthiness for Social Robot Design." Sensors 20, no. 18 (September 7, 2020): 5087. http://dx.doi.org/10.3390/s20185087.

Full text
Abstract:
As an emerging artificial intelligence system, social robot could socially communicate and interact with human beings. Although this area is attracting more and more attention, limited research has tried to systematically summarize potential features that could improve facial anthropomorphic trustworthiness for social robot. Based on the literature from human facial perception, product, and robot face evaluation, this paper systematically reviews, evaluates, and summarizes static facial features, dynamic features, their combinations, and related emotional expressions, shedding light on further exploration of facial anthropomorphic trustworthiness for social robot design.
APA, Harvard, Vancouver, ISO, and other styles
37

Paolanti, Marina, Simona Tiribelli, Benedetta Giovanola, Adriano Mancini, Emanuele Frontoni, and Roberto Pierdicca. "Ethical Framework to Assess and Quantify the Trustworthiness of Artificial Intelligence Techniques: Application Case in Remote Sensing." Remote Sensing 16, no. 23 (December 3, 2024): 4529. https://doi.org/10.3390/rs16234529.

Full text
Abstract:
In the rapidly evolving field of remote sensing, Deep Learning (DL) techniques have become pivotal in interpreting and processing complex datasets. However, the increasing reliance on these algorithms necessitates a robust ethical framework to evaluate their trustworthiness. This paper introduces a comprehensive ethical framework designed to assess and quantify the trustworthiness of DL techniques in the context of remote sensing. We first define trustworthiness in DL as a multidimensional construct encompassing accuracy, reliability, transparency and explainability, fairness, and accountability. Our framework then operationalizes these dimensions through a set of quantifiable metrics, allowing for the systematic evaluation of DL models. To illustrate the applicability of our framework, we selected an existing case study in remote sensing, wherein we apply our ethical assessment to a DL model used for classification. Our results demonstrate the model’s performance across different trustworthiness metrics, highlighting areas for ethical improvement. This paper not only contributes a novel framework for ethical analysis in the field of DL, but also provides a practical tool for developers and practitioners in remote sensing to ensure the responsible deployment of DL technologies. Through a dual approach that combines top-down international standards with bottom-up, context-specific considerations, our framework serves as a practical tool for ensuring responsible AI applications in remote sensing. Its application through a case study highlights its potential to influence policy-making and guide ethical AI development in this domain.
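A minimal sketch of how per-dimension metrics might be rolled up into a single trustworthiness report is given below. The dimension scores and equal weighting are illustrative assumptions; the paper's actual metrics and aggregation are not reproduced.

```python
# Illustrative aggregation of the five trustworthiness dimensions named in the
# abstract; the weights and example values are assumptions, not the paper's.
dimension_scores = {
    "accuracy": 0.91,
    "reliability": 0.84,
    "transparency_explainability": 0.62,
    "fairness": 0.77,
    "accountability": 0.70,
}
weights = {dim: 1 / len(dimension_scores) for dim in dimension_scores}  # equal weighting

overall = sum(weights[d] * s for d, s in dimension_scores.items())
weakest = min(dimension_scores, key=dimension_scores.get)
print(f"overall trustworthiness: {overall:.2f}; flag for ethical improvement: {weakest}")
```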
APA, Harvard, Vancouver, ISO, and other styles
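The entry above describes operationalizing trustworthiness dimensions (accuracy, reliability, transparency and explainability, fairness, accountability) through quantifiable metrics. A minimal, hypothetical sketch of how such per-dimension scores might be aggregated into a single figure follows; the dimension names are taken from the abstract, but the weights, scores, and aggregation rule are illustrative assumptions, not the authors' published framework.

```python
# Illustrative sketch only: aggregates per-dimension trustworthiness scores.
# Dimension names follow the abstract above; weights and example scores are
# invented assumptions, not values from the published framework.

DIMENSIONS = ["accuracy", "reliability", "transparency_explainability",
              "fairness", "accountability"]

def trustworthiness_score(scores: dict[str, float],
                          weights: dict[str, float] | None = None) -> float:
    """Weighted average of per-dimension scores, each expected in [0, 1]."""
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}  # equal weighting by default
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total_weight

# Example: a hypothetical remote-sensing classifier scored on each dimension.
example = {"accuracy": 0.91, "reliability": 0.84,
           "transparency_explainability": 0.62, "fairness": 0.77,
           "accountability": 0.70}
print(f"overall trustworthiness: {trustworthiness_score(example):.2f}")
```

A weighted average is only one of many plausible aggregation rules; a framework of this kind could equally report the per-dimension scores separately to avoid masking weak dimensions behind strong ones.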
38

Kim, Min-Ji, and DoHoon Lee. "A Study on the Trustworthiness Evaluation of AI Model for Discrimination of Fireblight." Journal of Korea Multimedia Society 26, no. 2 (February 28, 2023): 420–28. http://dx.doi.org/10.9717/kmms.2023.26.2.420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Zhou, Zhiyu. "Technological Control Tool of Everyday life? Six Questions on the Design Ethics of Artificial Intelligence." Journal of Design Service and Social Innovation 1, no. 1 (2023): 36–43. http://dx.doi.org/10.59528/ms.jdssi2023.0614a5.

Full text
Abstract:
Artificial intelligence (AI) continues to expand into different areas of social life, posing challenges to design ethics and public rights. Rather than halting AI research and application, it is better to urgently study the practical problems that AI technology may bring about and promptly formulate corresponding laws to regulate them. This paper discusses the following issues: 1. Security and privacy of face recognition; 2. Political and economic applications of AI; 3. Emotional learning of AI; 4. Human-computer development of brain-computer interfaces; 5. Ethical supervision of AI; 6. Automatic design of AI. It analyses basic principles of design ethics, such as security, privacy, fairness, trustworthiness, and honesty, and calls for strengthening the institutional construction of design ethics and public safety for AI technology products.
APA, Harvard, Vancouver, ISO, and other styles
40

Šekrst, Kristina. "Chinese Chat Room: AI Hallucinations, Epistemology and Cognition." Studies in Logic, Grammar and Rhetoric 69, no. 1 (December 1, 2024): 365–81. https://doi.org/10.2478/slgr-2024-0029.

Full text
Abstract:
The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, together with a terminological worry that the concepts we often use might carry unnecessary presuppositions. Alongside these terminological issues, the paper demonstrates that AI systems, like human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can similarly be applied to enhance the trustworthiness of AI outputs. This exploration seeks to deepen our understanding of the possibility of AI cognition and its implications for broader philosophical questions of knowledge and intelligence.
APA, Harvard, Vancouver, ISO, and other styles
41

Mattioli, Juliette, and Bertrand Braunschweig. "AITA: AI trustworthiness assessment." AI Magazine, June 13, 2023. http://dx.doi.org/10.1002/aaai.12096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Braunschweig, Bertrand, Stefan Buijsman, Faïcel Chamroukhi, Fredrik Heintz, Foutse Khomh, Juliette Mattioli, and Maximilian Poretschkin. "AITA: AI trustworthiness assessment." AI and Ethics, January 3, 2024. http://dx.doi.org/10.1007/s43681-023-00397-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ferrario, Andrea. "Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach." Science and Engineering Ethics 30, no. 6 (November 21, 2024). http://dx.doi.org/10.1007/s11948-024-00522-z.

Full text
Abstract:
We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately relying on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of the AI and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely, beliefs we hold with a degree of confidence, are the appropriate attitude for capturing the facets of the trustworthiness of an AI over time. Then, we introduce a reliabilistic account providing justification to the credences in the trustworthiness of AI, which we derive from Tang’s probabilistic theory of justified credence. Our account stipulates that a credence in the trustworthiness of an AI system is justified if and only if it is caused by an assessment process that tends to result in a high proportion of credences for which the actual and perceived trustworthiness of the AI are calibrated. This approach informs research on the ethics of AI and human-AI interactions by providing actionable recommendations on how to measure the reliability of the process through which users perceive the trustworthiness of the system, investigating its calibration to the actual levels of trustworthiness of the AI as well as users’ appropriate reliance on the system.
APA, Harvard, Vancouver, ISO, and other styles
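The justification condition in the entry above turns on whether an assessment process tends to produce credences for which perceived and actual trustworthiness are calibrated. A minimal sketch of one way such calibration could be measured over repeated interactions follows; the tolerance threshold and the sample trajectories are assumptions made for illustration, not the paper's formalism.

```python
# Illustrative sketch: estimates how often a user's perceived trustworthiness
# of an AI system stays calibrated to its actual trustworthiness over time.
# The tolerance value and the sample data are assumptions for this example.

def calibration_rate(perceived: list[float], actual: list[float],
                     tolerance: float = 0.1) -> float:
    """Fraction of time steps at which |perceived - actual| <= tolerance."""
    assert len(perceived) == len(actual), "series must be the same length"
    hits = sum(abs(p - a) <= tolerance for p, a in zip(perceived, actual))
    return hits / len(perceived)

# Hypothetical trajectories of perceived vs. actual trustworthiness in [0, 1].
perceived = [0.90, 0.80, 0.75, 0.70, 0.72]
actual    = [0.70, 0.72, 0.74, 0.71, 0.73]
print(f"calibration rate: {calibration_rate(perceived, actual):.2f}")
```

On this toy data the early interactions are over-trusting (perceived well above actual) and later ones converge, which is the kind of pattern a reliabilist assessment process would be expected to correct.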
44

Lahusen, Christian, Martino Maggetti, and Marija Slavkovik. "Trust, trustworthiness and AI governance." Scientific Reports 14, no. 1 (September 5, 2024). http://dx.doi.org/10.1038/s41598-024-71761-0.

Full text
Abstract:
An emerging issue in AI alignment is the use of artificial intelligence (AI) by public authorities, and specifically the integration of algorithmic decision-making (ADM) into core state functions. In this context, the alignment of AI with the values related to the notions of trust and trustworthiness constitutes a particularly sensitive problem from a theoretical, empirical, and normative perspective. In this paper, we offer an interdisciplinary overview of the scholarship on trust in sociology, political science, and computer science anchored in artificial intelligence. On this basis, we argue that only a coherent and comprehensive interdisciplinary approach making sense of the different properties attributed to trust and trustworthiness can convey a proper understanding of complex watchful trust dynamics in a socio-technical context. Ensuring the trustworthiness of AI governance ultimately requires an understanding of how to combine trust-related values while addressing machines, humans, and institutions at the same time. We offer a roadmap of the steps that could be taken to address the challenges identified.
APA, Harvard, Vancouver, ISO, and other styles
45

Durán, Juan Manuel, and Giorgia Pozzi. "Trust and Trustworthiness in AI." Philosophy & Technology 38, no. 1 (February 4, 2025). https://doi.org/10.1007/s13347-025-00843-2.

Full text
Abstract:
Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and trustworthiness, focusing on interpersonal accounts of trust. This allows us to explain why trust, in its most general terms, is to be understood as reliance plus some “extra factor”. We then turn to the first part of the definition provided, i.e., reliance, and analyze two opposing approaches to establishing AI systems’ reliability. On the one hand, we consider transparency and, on the other, computational reliabilism. Subsequently, we focus on debates revolving around the “extra factor”. To this end, we consider viewpoints that most actively resist the possibility and desirability of trusting AI systems before turning to the analysis of the most prominent advocates of it. Finally, we take up the main conclusions of the previous sections and briefly point at issues that remain open and need further attention.
APA, Harvard, Vancouver, ISO, and other styles
46

Pink, Sarah, Emma Quilty, John Grundy, and Rashina Hoda. "Trust, artificial intelligence and software practitioners: an interdisciplinary agenda." AI & SOCIETY, March 7, 2024. http://dx.doi.org/10.1007/s00146-024-01882-7.

Full text
Abstract:
Trust and trustworthiness are central concepts in contemporary discussions about the ethics of and qualities associated with artificial intelligence (AI) and the relationships between people, organisations and AI. In this article we develop an interdisciplinary approach, using socio-technical software engineering and design anthropological approaches, to investigate how trust and trustworthiness concepts are articulated and performed by AI software practitioners. We examine how trust and trustworthiness are defined in relation to AI across these disciplines, and investigate how AI, trust and trustworthiness are conceptualised and experienced through an ethnographic study of the work practices of nine practitioners in the software industry. We present key implications of our findings for the generation of trust and trustworthiness and for the training and education of future software practitioners.
APA, Harvard, Vancouver, ISO, and other styles
47

Alelyani, Turki. "Establishing trust in artificial intelligence-driven autonomous healthcare systems: an expert-guided framework." Frontiers in Digital Health 6 (November 27, 2024). http://dx.doi.org/10.3389/fdgth.2024.1474692.

Full text
Abstract:
The increasing prevalence of Autonomous Systems (AS) powered by Artificial Intelligence (AI) in society and their expanding role in ensuring safety necessitate the assessment of their trustworthiness. The verification and development community faces the challenge of evaluating the trustworthiness of AI-powered AS in a comprehensive and objective manner. To address this challenge, this study conducts semi-structured interviews with experts to gather their insights and perspectives on the trustworthiness of AI-powered autonomous systems in healthcare. By integrating the expert insights, a comprehensive framework is proposed for assessing the trustworthiness of AI-powered autonomous systems in the domain of healthcare. This framework is designed to contribute to the advancement of trustworthiness assessment practices in the field of AI and autonomous systems, fostering greater confidence in their deployment in healthcare settings.
APA, Harvard, Vancouver, ISO, and other styles
48

Reinhardt, Karoline. "Trust and trustworthiness in AI ethics." AI and Ethics, September 26, 2022. http://dx.doi.org/10.1007/s43681-022-00200-5.

Full text
Abstract:
Due to the extensive progress of research in artificial intelligence (AI), as well as its deployment and application, the public debate on AI systems has also gained momentum in recent years. With the publication of the Ethics Guidelines for Trustworthy AI (2019), notions of trust and trustworthiness gained particular attention within AI ethics debates; despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. In this paper, I give a detailed overview of the notion of trust employed in AI ethics guidelines thus far. Based on that, I assess their overlaps and their omissions from the perspective of practical philosophy. I argue that, currently, AI ethics tends to overload the notion of trustworthiness. It thus runs the risk of becoming a buzzword that cannot be operationalized into a working concept for AI research. What is needed, however, is an approach that is also informed by findings of research on trust in other fields, for instance, in the social sciences and humanities, especially in the field of practical philosophy. This paper is intended as a step in this direction.
APA, Harvard, Vancouver, ISO, and other styles
49

Bostrom, Ann, Julie L. Demuth, Christopher D. Wirz, Mariana G. Cains, Andrea Schumacher, Deianna Madlambayan, Akansha Singh Bansal, et al. "Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences." Risk Analysis, November 8, 2023. http://dx.doi.org/10.1111/risa.14245.

Full text
Abstract:
Demands to manage the risks of artificial intelligence (AI) are growing. These demands and the government standards arising from them both call for trustworthy AI. In response, we adopt a convergent approach to review, evaluate, and synthesize research on the trust and trustworthiness of AI in the environmental sciences and propose a research agenda. Evidential and conceptual histories of research on trust and trustworthiness reveal persisting ambiguities and measurement shortcomings related to inconsistent attention to the contextual and social dependencies and dynamics of trust. Potentially underappreciated in the development of trustworthy AI for environmental sciences is the importance of engaging AI users and other stakeholders, which human–AI teaming perspectives on AI development similarly underscore. Co‐development strategies may also help reconcile efforts to develop performance‐based trustworthiness standards with dynamic and contextual notions of trust. We illustrate the importance of these themes with applied examples and show how insights from research on trust and the communication of risk and uncertainty can help advance the understanding of trust and trustworthiness of AI in the environmental sciences.
APA, Harvard, Vancouver, ISO, and other styles
50

Erengin, Türkü, Roman Briker, and Simon B. de Jong. "You, Me, and the AI: The Role of Third‐Party Human Teammates for Trust Formation Toward AI Teammates." Journal of Organizational Behavior, January 2025. https://doi.org/10.1002/job.2857.

Full text
Abstract:
As artificial intelligence (AI) becomes increasingly integrated in teams, understanding the factors that drive trust formation between human and AI teammates becomes crucial. Yet, the emergent literature has overlooked the impact of third parties on human‐AI teaming. Drawing from social cognitive theory and human‐AI teams research, we suggest that how much a human teammate perceives an AI teammate as trustworthy, and engages in trust behaviors toward the AI, determines a focal employee's trust perceptions and behavior toward this AI teammate. Additionally, we propose these effects hinge on an employee's perceptions of trustworthiness and trust in the human teammate. We test these predictions across two studies: (1) an online experiment comprising individuals with work experience that examines perceptions of disembodied AI trustworthiness, and (2) an incentivized observational study that investigates trust behaviors toward an embodied AI. Both studies reveal that a human teammate's perceived trustworthiness of, and trust in, the AI teammate strongly predict the employee's trustworthiness perceptions and behavioral trust in the AI teammate. Furthermore, this relationship vanishes when employees perceive their human teammates as less trustworthy. These results advance our understanding of third‐party effects in human‐AI trust formation, providing organizations with insights for managing social influences in human‐AI teams.
APA, Harvard, Vancouver, ISO, and other styles