Selection of scholarly literature on the topic "Trustworthiness of AI"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Trustworthiness of AI".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Trustworthiness of AI"

1

Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti und Daniele Nardi. „A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness“. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (16.10.2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.

Abstract:
This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to quantify trustworthiness by assessing the discrepancies between expected and observed behaviors and how these affect perceived uncertainty and trust. The paper provides a formalization for measuring trustworthiness, taking into account both perceived and intrinsic characteristics. By detailing the factors that influence trust, this study aims to foster more ethical and widely accepted AI technologies, ensuring they meet both functional and ethical criteria.
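The paper's formal schematization is not reproduced in this abstract; the sketch below is only a hypothetical illustration of the general idea it describes, i.e. deriving a trust score from the gap between expected and observed behaviour. The function name, the exponential form, and the sensitivity parameter are invented for illustration and are not the authors' formalization.

```python
import numpy as np

def perceived_trust(expected, observed, sensitivity=1.0):
    """Toy illustration: trust decays as observed behaviour departs
    from what the observer expected (not the paper's actual formula)."""
    expected = np.asarray(expected, dtype=float)
    observed = np.asarray(observed, dtype=float)
    discrepancy = np.mean(np.abs(expected - observed))   # behavioural gap
    return float(np.exp(-sensitivity * discrepancy))     # 1.0 = full trust

# Example: a system that behaves almost as expected vs. one that does not
print(perceived_trust([0.9, 0.8, 0.95], [0.88, 0.79, 0.93]))  # close to 1
print(perceived_trust([0.9, 0.8, 0.95], [0.2, 0.9, 0.1]))     # much lower
```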
2

Rishika Sen, Shrihari Vasudevan, Ricardo Britto und Mj Prasath. „Ascertaining trustworthiness of AI systems in telecommunications“. ITU Journal on Future and Evolving Technologies 5, Nr. 4 (10.12.2024): 503–14. https://doi.org/10.52953/wibx7049.

Abstract:
With the rapid uptake of Artificial Intelligence (AI) in the Telecommunications (Telco) industry and the pivotal role AI is expected to play in future generation technologies (e.g., 5G, 5G Advanced and 6G), establishing the trustworthiness of AI used in Telco becomes critical. Trustworthy Artificial Intelligence (TWAI) guidelines need to be implemented to establish trust in AI-powered products and services by making them compliant with these guidelines. This paper focuses on measuring compliance with such guidelines. It proposes a Large Language Model (LLM)-driven approach, implemented as an LLM-based scanner, to measure the TWAI compliance of multiple public AI code repositories using off-the-shelf LLMs. The proposed solution measures and reports the level of compliance of an AI system. Results of the experiments demonstrate the feasibility of the proposed approach for the automated measurement of the trustworthiness of AI systems.
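The paper's prompts, guideline set, and scoring scheme are not given in the abstract; the following is a minimal, hypothetical sketch of what an LLM-based compliance scanner over a code repository could look like. The `ask_llm` placeholder, the checklist items, and the scoring rule are assumptions, not the authors' implementation.

```python
from pathlib import Path

# Hypothetical checklist distilled from trustworthy-AI guidelines.
CHECKLIST = [
    "Does the code document the provenance of its training data?",
    "Are fairness or bias tests present?",
    "Is model behaviour logged for auditability?",
]

def ask_llm(prompt: str) -> bool:
    """Placeholder for an off-the-shelf LLM call returning a yes/no judgement.
    Replace with a real client in practice."""
    raise NotImplementedError

def compliance_score(repo_path: str) -> float:
    """Naive sketch: pass each source file with each checklist item to the LLM
    and report the fraction of 'yes' answers as the compliance level."""
    sources = list(Path(repo_path).rglob("*.py"))
    answers = [
        ask_llm(f"{question}\n\n{src.read_text(errors='ignore')[:4000]}")
        for src in sources
        for question in CHECKLIST
    ]
    return sum(answers) / len(answers) if answers else 0.0
```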
3

Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin und Stefan Wrobel. „The why and how of trustworthy AI“. at - Automatisierungstechnik 70, Nr. 9 (01.09.2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.

Abstract:
Artificial intelligence is increasingly penetrating industrial applications as well as areas that affect our daily lives. As a consequence, there is a need for criteria to validate whether the quality of AI applications is sufficient for their intended use. Both in the academic community and societal debate, an agreement has emerged under the term “trustworthiness” as the set of essential quality requirements that should be placed on an AI application. At the same time, the question of how these quality requirements can be operationalized is to a large extent still open. In this paper, we consider trustworthy AI from two perspectives: the product and organizational perspective. For the former, we present an AI-specific risk analysis and outline how verifiable arguments for the trustworthiness of an AI application can be developed. For the second perspective, we explore how an AI management system can be employed to assure the trustworthiness of an organization with respect to its handling of AI. Finally, we argue that in order to achieve AI trustworthiness, coordinated measures from both product and organizational perspectives are required.
4

Vashistha, Ritwik, und Arya Farahi. „U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making“. Proceedings of the AAAI Conference on Artificial Intelligence 38, Nr. 18 (24.03.2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.

Abstract:
With growing concerns regarding bias and discrimination in predictive models, the AI community has increasingly focused on assessing AI system trustworthiness. Conventionally, trustworthy AI literature relies on the probabilistic framework and calibration as prerequisites for trustworthiness. In this work, we depart from this viewpoint by proposing a novel trust framework inspired by the philosophy literature on trust. We present a precise mathematical definition of trustworthiness, termed U-trustworthiness, specifically tailored for a subset of tasks aimed at maximizing a utility function. We argue that a model’s U-trustworthiness is contingent upon its ability to maximize Bayes utility within this task subset. Our first set of results challenges the probabilistic framework by demonstrating its potential to favor less trustworthy models and introduce the risk of misleading trustworthiness assessments. Within the context of U-trustworthiness, we prove that properly-ranked models are inherently U-trustworthy. Furthermore, we advocate for the adoption of the AUC metric as the preferred measure of trustworthiness. By offering both theoretical guarantees and experimental validation, AUC enables robust evaluation of trustworthiness, thereby enhancing model selection and hyperparameter tuning to yield more trustworthy outcomes.
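The abstract advocates AUC as the preferred trustworthiness measure for utility-maximizing tasks. As a minimal illustration (made-up data, not the authors' experiments), such a comparison can be computed with scikit-learn:

```python
from sklearn.metrics import roc_auc_score

# Made-up labels and scores from two hypothetical models
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
model_a = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.3]    # ranks positives higher
model_b = [0.6, 0.4, 0.3, 0.5, 0.45, 0.55, 0.35, 0.2]  # poorly ranked

# Under the U-trustworthiness argument, the better-ranked model (higher AUC)
# is the more trustworthy choice for utility-maximizing tasks.
print("AUC model A:", roc_auc_score(y_true, model_a))
print("AUC model B:", roc_auc_score(y_true, model_b))
```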
5

Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok und Kristen Brent Venable. „From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness“. Proceedings of the AAAI Symposium Series 4, Nr. 1 (08.11.2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.

Abstract:
It is well known that successful human-AI collaboration depends on the perceived trustworthiness of the AI. We argue that a key to securing trust in such collaborations is ensuring that the AI competently addresses ethics' foundational role in engagements. Specifically, developers need to identify, address, and implement mechanisms for accommodating ethical components of AI choices. We propose an approach that instantiates ethics semantically as ontology-based moral policies. To accommodate the wide variation and interpretation of ethics, we capture such variations into ethics sets, which are situationally specific aggregations of relevant moral policies. We are extending our ontology-based policy management systems with new representations and capabilities to allow trustworthy AI-human ethical collaborative behavior. Moreover, we believe that such AI-human ethical encounters demand that trustworthiness is bi-directional – humans need to be able to assess and calibrate their actions to be consistent with the trustworthiness of AI in a given context, and AIs need to be able to do the same with respect to humans.
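The ontology-based policy machinery is not shown in the abstract; the toy sketch below only illustrates the stated idea of "ethics sets" as situationally specific aggregations of moral policies. All policy names, situation tags, and the matching rule are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class MoralPolicy:
    name: str
    applies_to: set  # situation tags under which the policy is relevant

POLICIES = [
    MoralPolicy("obtain_informed_consent", {"healthcare", "research"}),
    MoralPolicy("minimize_collateral_harm", {"autonomous_driving", "healthcare"}),
    MoralPolicy("preserve_privacy", {"healthcare", "finance", "research"}),
]

def ethics_set(situation_tags: set) -> list:
    """Aggregate the moral policies relevant to the current situation."""
    return [p.name for p in POLICIES if p.applies_to & situation_tags]

print(ethics_set({"healthcare"}))  # situation-specific ethics set
```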
6

Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy et al. „Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements“. International Journal of Intelligent Systems 2023 (26.10.2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.

Abstract:
Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with this second question, the key requirements that establish the trustworthiness of these systems have been explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency, and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthy requirements for challenging applications? Regarding this last question, six different applications have been discussed, including trustworthy AI in education, environmental science, 5G-based IoT networks, robotics for architecture, engineering and construction, financial technology, and healthcare. The review emphasises the need to address trustworthiness in AI systems before their deployment in order to achieve the AI goal for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for AI researchers seeking to achieve trustworthiness in their applications.
7

Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis und Petros Daras. „Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework“. Big Data and Cognitive Computing 8, Nr. 11 (24.10.2024): 142. http://dx.doi.org/10.3390/bdcc8110142.

Abstract:
This study critically examines the trustworthiness of widely used AI applications, focusing on their integration into daily life, often without users fully understanding the risks or how these threats might affect them. As AI apps become more accessible, users tend to trust them due to their convenience and usability, frequently overlooking critical issues such as security, privacy, and ethics. To address this gap, we introduce a user-centric framework that enables individuals to assess the trustworthiness of AI applications based on their own experiences and perceptions. The framework evaluates several dimensions—transparency, security, privacy, ethics, and compliance—while also aiming to raise awareness and bring the topic of AI trustworthiness into public dialogue. By analyzing AI threats, real-world incidents, and strategies for mitigating the risks posed by AI apps, this study contributes to the ongoing discussions on AI safety and trust.
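The framework's actual questionnaire and scoring are not given in the abstract; the deliberately simple sketch below shows how user ratings along the named dimensions (transparency, security, privacy, ethics, compliance) could be aggregated. The weights and the 1-5 rating scale are assumptions, not part of the published framework.

```python
# Dimensions named in the abstract; weights and the 1-5 rating scale are invented.
WEIGHTS = {"transparency": 0.25, "security": 0.25, "privacy": 0.2,
           "ethics": 0.15, "compliance": 0.15}

def user_trustworthiness(ratings: dict) -> float:
    """Weighted average of a user's 1-5 ratings, rescaled to the 0-1 range."""
    score = sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)
    return (score - 1) / 4

print(user_trustworthiness(
    {"transparency": 2, "security": 4, "privacy": 3, "ethics": 4, "compliance": 5}))
```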
8

Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti und Dimitris Apostolou. „Exploring the landscape of trustworthy artificial intelligence: Status and challenges“. Intelligent Decision Technologies 18, Nr. 2 (07.06.2024): 837–54. http://dx.doi.org/10.3233/idt-240366.

Abstract:
Artificial Intelligence (AI) has pervaded everyday life, reshaping the landscape of business, economy, and society through the alteration of interactions and connections among stakeholders and citizens. Nevertheless, the widespread adoption of AI presents significant risks and hurdles, sparking apprehension regarding the trustworthiness of AI systems by humans. Lately, numerous governmental entities have introduced regulations and principles aimed at fostering trustworthy AI systems, while companies, research institutions, and public sector organizations have released their own sets of principles and guidelines for ensuring ethical and trustworthy AI. Additionally, they have developed methods and software toolkits to aid in evaluating and improving the attributes of trustworthiness. The present paper aims to explore this evolution by analysing and supporting the trustworthiness of AI systems. We commence with an examination of the characteristics inherent in trustworthy AI, along with the corresponding principles and standards associated with them. We then examine the methods and tools that are available to designers and developers in their quest to operationalize trusted AI systems. Finally, we outline research challenges towards end-to-end engineering of trustworthy AI by-design.
9

Vadlamudi, Siddhartha. „Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion“. Engineering International 3, Nr. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.

Abstract:
Artificial intelligence (AI) offers numerous opportunities to contribute to the prosperity of people and the stability of economies and society, yet it also raises a variety of novel moral, legal, social, and technological challenges. Trustworthy AI (TAI) rests on the idea that trust is the foundation of societies, economies, and sustainable development, and that people, organizations, and societies can therefore only realize the full potential of AI if trust can be established in its development, deployment, and use. The risks of unintended and negative outcomes related to AI are proportionately high, particularly at scale. Most AI is really artificial narrow intelligence, intended to achieve a specific task on previously curated information from a certain source. Since most AI models build on correlations, predictions may fail to generalize to other populations or settings and might fuel existing disparities and biases. As the AI industry is highly imbalanced and experts are already overwhelmed by other digital tools, there may be little capacity to catch errors. With this article, we aim to present the idea of TAI and its five essential principles: (1) usefulness, (2) non-maleficence, (3) autonomy, (4) justice, and (5) logic. We further draw on these five principles to develop a data-driven analysis for TAI and present its application by outlining productive paths for future research, especially with respect to the distributed ledger technology-based realization of TAI.
10

AJAYI, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill und Fajuyigbe Gbenga Joseph. „Software Engineering’s Key Role in AI Content Trustworthiness“. International Journal of Research and Scientific Innovation XI, Nr. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.

Abstract:
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. It can also be defined as the science and engineering of making intelligent machines, especially intelligent computer programs. In recent decades, there has been a discernible surge in the focus of the scientific and government sectors on reliable AI. The International Organization for Standardization, which focuses on technical, industrial, and commercial standardization, has devised several strategies to promote trust in AI systems, with an emphasis on fairness, transparency, accountability, and controllability. Therefore, this paper aims to examine the role of Software Engineering in AI Content trustworthiness. A secondary data analysis methodology was used in this work to investigate the crucial role that software engineering plays in ensuring the accuracy of AI content. The dataset was guaranteed to contain reliable and comprehensive material relevant to our inquiry because it was derived from peer-reviewed publications. To decrease potential biases and increase data consistency, a rigorous validation process was employed. The findings of the paper showed that lawful, ethical, and robust are the fundamental components of reliable Artificial Intelligence. The criteria for Reliable Artificial Intelligence include Transparency, Human agency and oversight, technical robustness and safety, privacy and data governance, diversity, non-discrimination, fairness, etc. The functions of software engineering in the credibility of AI content are Algorithm Design and Implementation, Data Quality and Preprocessing, Explainability and Interpretability, Ethical Considerations and Governance, User feedback, and Iterative Improvements among others. It is therefore essential for Software engineering to ensure the dependability of material generated by AI systems at every stage of the development lifecycle. To build and maintain reliable AI systems, engineers must address problems with data quality, model interpretability, ethical difficulties, security, and user input.

Dissertations on the topic "Trustworthiness of AI"

1

Wang, Brydon. „The role of trustworthiness in automated decision-making systems and the law“. Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231388/1/Brydon_Wang_Thesis.pdf.

Abstract:
This thesis considers the role of trustworthiness in automated decision-making systems (ADS) spanning across data collection, modelling, analysis and decision output in different legal contexts. Through an updated model of trustworthiness, it argues that existing legal norms and principles for administering construction contracts and the impact of automation on these contracts provide fertile ground to inform the governance of algorithmic systems in smart cities. The thesis finds that trustworthy, benevolent ADS requires a specific form of transparency that operates through mutual vulnerability in the trusting relationship, and seams in the automated decision-making process where human discretion is exercised.
2

Labarbarie, Pol. „Transferable adversarial patches : a potential threat for real-world computer vision algorithms“. Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG084.

Abstract:
The use of Deep Neural Networks has revolutionized the field of computer vision, leading to significant performance improvements in tasks such as image classification, object detection, and semantic segmentation. Despite these breakthroughs, Deep Learning systems have exhibited vulnerabilities that malicious entities can exploit to induce harmful behavior in AI models. One of the threats is adversarial patch attacks, which are disruptive objects that often resemble stickers and are designed to deceive models when placed in a real-world scene. For example, a patch on a stop sign may sway the network to misclassify it as a speed limit sign. This type of attack raises significant safety issues for computer vision systems operating in the real world. In this thesis, we study whether such a patch can disrupt a real-world system without prior knowledge of the targeted system. Even though numerous patch attacks have been proposed in the literature, no work describes the prerequisites of a critical patch. One of our contributions is to propose a definition of what a critical adversarial patch may be. To be characterized as critical, adversarial patch attacks must meet two essential criteria. They must be robust to physical transformations, summarized by the notion of patch physicality, and they must exhibit transferability among networks, meaning the patch can successfully fool networks without possessing any knowledge about the targeted system. Transferability is an essential prerequisite for a critical patch, as the targeted real-world system is usually protected and inaccessible from the outside. Although patch physicality has been developed and improved through multiple works, transferability remains a challenge. To address the challenge of attack transferability among image classifiers, we introduce a new adversarial patch attack based on the Wasserstein distance, which measures the distance between two probability distributions. We exploit the Wasserstein distance to alter the feature distribution of a set of corrupted images to match another feature distribution from images of a target class. When placed in the scene, our patch causes various state-of-the-art networks to output the class chosen as the target distribution. We show that our patch is more transferable than previous patches and can be implemented in the real world to deceive image classifiers. In addition to our work on classification networks, we conduct a study on patch transferability against object detectors, as these systems are more often involved in real-world applications. We focus on invisible cloak patches, a particular type of patch designed to hide objects from detection. Our findings reveal several significant flaws in the current evaluation protocol used to assess the effectiveness of these patches. To address these flaws, we introduce a surrogate problem that ensures that the produced patch suppresses the detection of the object we want to attack. We show that state-of-the-art adversarial patches against object detectors fail to hide objects from being detected, limiting their criticality against real-world systems.
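The thesis optimizes a patch so that the feature distribution of patched images matches that of a target class under the Wasserstein distance; the patch optimization loop and feature extraction are not reproduced here. As a small illustration of the quantity being minimized, the sketch below estimates a sliced (1-D projection) Wasserstein distance between two feature sets with SciPy; the array shapes and the random stand-in features are assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(feats_a, feats_b, n_projections=50, seed=0):
    """Average 1-D Wasserstein distance over random projections of two
    feature sets (rows = samples, columns = feature dimensions)."""
    rng = np.random.default_rng(seed)
    dims = feats_a.shape[1]
    total = 0.0
    for _ in range(n_projections):
        direction = rng.normal(size=dims)
        direction /= np.linalg.norm(direction)
        total += wasserstein_distance(feats_a @ direction, feats_b @ direction)
    return total / n_projections

# Made-up stand-ins for features of patched images vs. features of a target class
patched = np.random.default_rng(1).normal(0.0, 1.0, size=(128, 64))
target  = np.random.default_rng(2).normal(0.5, 1.0, size=(128, 64))
print(sliced_wasserstein(patched, target))
```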

Books on the topic "Trustworthiness of AI"

1

Assessing and Improving AI Trustworthiness: Current Contexts and Concerns. Washington, D.C.: National Academies Press, 2021. http://dx.doi.org/10.17226/26208.

2

Eliot, Dr Lance. AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence and Machine Learning. LBE Press Publishing, 2016.


Book chapters on the topic "Trustworthiness of AI"

1

Salloum, Said A. „Trustworthiness of the AI“. In Studies in Big Data, 643–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52280-2_41.

2

Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner und Andreas Holzinger. „Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?“ In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.

Abstract:
The release of ChatGPT to the general public has sparked discussions about the dangers of artificial intelligence (AI) among the public. The European Commission’s draft of the AI Act has further fueled these discussions, particularly in relation to the definition of AI and the assignment of risk levels to different technologies. Security concerns in AI systems arise from the need to protect against potential adversaries and to safeguard individuals from AI decisions that may harm their well-being. However, ensuring secure and trustworthy AI systems is challenging, especially with deep learning models that lack explainability. This paper proposes the concept of Controllable AI as an alternative to Trustworthy AI and explores the major differences between the two. The aim is to initiate discussions on securing complex AI systems without sacrificing practical capabilities or transparency. The paper provides an overview of techniques that can be employed to achieve Controllable AI. It discusses the background definitions of explainability, Trustworthy AI, and the AI Act. The principles and techniques of Controllable AI are detailed, including detecting and managing control loss, implementing transparent AI decisions, and addressing intentional bias or backdoors. The paper concludes by discussing the potential applications of Controllable AI and its implications for real-world scenarios.
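The chapter's concrete techniques are not reproduced in the abstract; as one generic illustration of "detecting and managing control loss", the sketch below wraps a classifier and falls back to a safe default whenever its confidence leaves an allowed envelope. The threshold, the fallback action, and the probability-list interface are invented for illustration, not taken from the chapter.

```python
def controlled_predict(model_predict, x, min_confidence=0.7, fallback="defer_to_human"):
    """Wrap a model call: if the top-class confidence drops below the allowed
    envelope, treat it as loss of control and hand the case to a fallback."""
    probs = model_predict(x)          # assumed to return a list of class probabilities
    confidence = max(probs)
    if confidence < min_confidence:
        return {"decision": fallback, "confidence": confidence, "in_control": False}
    return {"decision": probs.index(confidence), "confidence": confidence, "in_control": True}

# Toy models returning fixed probabilities for demonstration
print(controlled_predict(lambda x: [0.2, 0.5, 0.3], x=None))   # defers to a human
print(controlled_predict(lambda x: [0.05, 0.9, 0.05], x=None)) # confident decision
```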
3

Gadewadikar, Jyotirmay, Jeremy Marshall, Zachary Bilodeau und Vatatmaja. „Systems Engineering–Driven AI Assurance and Trustworthiness“. In The Proceedings of the 2023 Conference on Systems Engineering Research, 343–56. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49179-5_23.

4

Kastania, Nikoleta Polyxeni “Paulina”. „AI in Education: Prioritizing Transparency and Trustworthiness“. In Encyclopedia of Educational Innovation, 1–5. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-13-2262-4_309-1.

5

Eguia, Alexander, Nuria Quintano, Irina Marsh, Michel Barreteau, Jakub Główka und Agnieszka Sprońska. „Ensuring Trustworthiness of Hybrid AI-Based Robotics Systems“. In Springer Proceedings in Advanced Robotics, 142–46. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-76428-8_27.

6

Batut, Aria, Lina Prudhomme, Martijn van Sambeek und Weiqin Chen. „Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public“. In Artificial Intelligence in HCI, 15–26. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60611-3_2.

7

Tsai, Chun-Hua, und John M. Carroll. „Logic and Pragmatics in AI Explanation“. In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.

Abstract:
This paper reviews logical approaches and challenges raised for explaining AI. We discuss the issues of presenting explanations as accurate computational models that users cannot understand or use. Then, we introduce pragmatic approaches that consider explanation a sort of speech act that commits to felicity conditions, including intelligibility, trustworthiness, and usefulness to the users. We argue Explainable AI (XAI) is more than a matter of accurate and complete computational explanation, and that it requires pragmatics to address the issues it seeks to address. At the end of this paper, we draw a historical analogy to usability. This term was understood logically and pragmatically, but it has evolved empirically through time to become richer and more functional.
8

Ren, Hao, Jinwen Liang, Zicong Hong, Enyuan Zhou und Junbao Pan. „Application: Privacy, Security, Robustness and Trustworthiness in Edge AI“. In Machine Learning on Commodity Tiny Devices, 161–86. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003340225-10.

9

Nguyen, Duc An, Khanh T. P. Nguyen und Kamal Medjaher. „Enhancing Trustworthiness in AI-Based Prognostics: A Comprehensive Review of Explainable AI for PHM“. In Springer Series in Reliability Engineering, 101–36. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71495-5_6.

10

Uslu, Suleyman, Davinder Kaur, Samuel J. Rivera, Arjan Durresi und Meghna Babbar-Sebens. „Causal Inference to Enhance AI Trustworthiness in Environmental Decision-Making“. In Advanced Information Networking and Applications, 214–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57916-5_19.


Conference papers on the topic "Trustworthiness of AI"

1

Janev, Valentina, Miloš Nenadović, Dejan Paunović, Sahar Vahdati, Jason Li, Muhammad Hamza Yousuf, Jaume Montanya et al. „IntelliLung AI-DSS Trustworthiness Evaluation Framework“. In 2024 32nd Telecommunications Forum (TELFOR), 1–4. IEEE, 2024. https://doi.org/10.1109/telfor63250.2024.10819068.

2

Ottun, Abdul-Rasheed, Rasinthe Marasinghe, Toluwani Elemosho, Mohan Liyanage, Ashfaq Hussain Ahmed, Michell Boerger, Chamara Sandeepa et al. „SPATIAL: Practical AI Trustworthiness with Human Oversight“. In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 1427–30. IEEE, 2024. http://dx.doi.org/10.1109/icdcs60910.2024.00138.

3

Calabrò, Antonello, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf und Annabelle Mercier. „Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems“. In 20th International Conference on Web Information Systems and Technologies, 288–95. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012998700003825.

4

Troussas, Christos, Christos Papakostas, Akrivi Krouska, Phivos Mylonas und Cleo Sgouropoulou. „FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems“. In 20th International Conference on Web Information Systems and Technologies, 385–92. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0013061100003825.

5

Kioskli, Kitty, Laura Bishop, Nineta Polemi und Antonis Ramfos. „Towards a Human-Centric AI Trustworthiness Risk Management Framework“. In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004766.

Abstract:
Artificial Intelligence (AI) aims to replicate human behavior in socio-technical systems, with a strong focus on AI engineering to replace human decision-making. However, an overemphasis on AI system autonomy can lead to bias, unfair, non-ethical decisions, and thus a lack of trust, resulting in decreased performance, motivation, and competitiveness. To mitigate these AI threats, developers are incorporating ethical considerations, often with input from ethicists, and using technical tools like IBM's Fairness 360 and Google's What-If tool to assess and improve fairness in AI systems. These efforts aim to create more trustworthy and equitable AI technologies. Building trustworthiness in AI technology does not necessarily imply that the human user will fundamentally trust it. For humans to use technology trust must be present, something challenging when AI lacks a permanent/stable physical embodiment. It is also important to ensure humans do not over-trust resulting in AI misuse. Trustworthiness should be assessed in relation to human acceptance, performance, satisfaction, and empowerment to make design choices that grant them ultimate control over AI systems, and the extent to which the technology meets the business context of the socio-technical system where it's used. For AI to be perceived as trustworthy, it must also align with the legal, moral, ethical principles, and behavioral patterns of its human users, whilst also considering the organizational responsibility and liability associated with the socio-technical system's business objectives. Commitment to incorporating these principles to create secure and effective decision support AI systems will offer a competitive advantage to organizations that integrate them.Based on this need, the proposed framework is a synthesis of research from diverse disciplines (cybersecurity, social and behavioral sciences, ethics) designed to ensure the trustworthiness of AI-driven hybrid decision support while accommodating the specific decision support needs and trust of human users. Additionally, it aims to align with the key performance indicators of the socio-technical environment where it operates. This framework serves to empower AI system developers, business leaders offering AI-based services, as well as AI system users, such as educators, professionals, and policymakers, in achieving a more absolute form of human-AI trustworthiness. It can also be used by security defenders to make fair decisions during AI incident handling. Our framework extends the proposed NIST AI Risk Management Framework (AI-RFM) since at all stages of the trustworthiness risk management dynamic cycle (threat assessment, impact assessment, risk assessment, risk mitigation), human users are considered (e.g., their morals, ethics, behavior, IT maturity) as well as the primary business objectives of the AI socio-technical system under assessment. Co-creation and human experiment processes must accompany all stages of system management and are therefore part of the proposed framework. This interaction facilitates the execution of continuous trustworthiness improvement processes. During each cycle of trustworthiness risk mitigation, human user assessment will take place, leading to the identification of corrective actions and additional mitigation activities to be implemented before the next improvement cycle. Thus, the main objective of this framework is to help build ‘trustworthy’ AI systems that are ultimately trusted by their users.
6

Wang, Yingxu. „A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems“. In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945351.

7

Echeberria-Barrio, Xabier, Mikel Gorricho, Selene Valencia und Francesco Zola. „Neuralsentinel: Safeguarding Neural Network Reliability and Trustworthiness“. In 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140209.

Abstract:
The usage of Artificial Intelligence (AI) systems has increased exponentially, thanks to their ability to reduce the amount of data to be analyzed and the user effort while preserving a high rate of accuracy. However, introducing this new element in the loop has turned AI models into attack points that can compromise the reliability of the systems. This new scenario has raised crucial challenges regarding the reliability and trustworthiness of AI models, as well as the uncertainties in their response decisions, which become even more crucial when applied in critical domains such as healthcare, chemical or electrical plants, etc. To contain these issues, in this paper we present NeuralSentinel (NS), a tool able to validate the reliability and trustworthiness of AI models. This tool combines attack and defence strategies and explainability concepts to stress an AI model and help non-expert staff increase their confidence in this new system by understanding the model decisions. NS provides a simple and easy-to-use interface that helps humans in the loop deal with all the needed information. This tool was deployed and used in a Hackathon event to evaluate the reliability of a skin cancer image detector. During the event, experts and non-experts attacked and defended the detector, learning which factors were the most important for model misclassification and which techniques were the most efficient. The event was also used to detect NS’s limitations and gather feedback for further improvements.
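NeuralSentinel's own attack and defence code is not detailed in the abstract; as an illustration of the kind of attack typically used to stress an image classifier, here is a standard FGSM step in PyTorch (not the authors' implementation; the model and inputs are placeholders):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: perturb the input in the direction that
    increases the loss, a standard way to probe a classifier's robustness."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch (model, images and labels are assumed to exist):
# adv = fgsm_example(model, images, labels)
# print((model(adv).argmax(1) != labels).float().mean())  # fraction now misclassified
```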
8

Garbuk, Sergey V. „Intellimetry as a Way to Ensure AI Trustworthiness“. In 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). IEEE, 2018. http://dx.doi.org/10.1109/ic-aiai.2018.8674447.

9

Awadid, Afef, Kahina Amokrane-Ferka, Henri Sohier, Juliette Mattioli, Faouzi Adjed, Martin Gonzalez und Souhaiel Khalfaoui. „AI Systems Trustworthiness Assessment: State of the Art“. In Workshop on Model-based System Engineering and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012619600003645.

10

Smith, Carol. „Letting Go of the Numbers: Measuring AI Trustworthiness“. In 13th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012644300003654.
