Selected scientific literature on the topic "Trustworthiness of AI"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

Consult the list of current articles, books, theses, conference papers, and other scholarly sources relevant to the topic "Trustworthiness of AI".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if it is included in the metadata.

Journal articles on the topic "Trustworthiness of AI"

1

Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.

Full text
Abstract:
This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to quantify trustworthiness by assessing the discrepancies between expected and observed behaviors and how these affect perceived uncertainty and trust. The paper provides a formalization for measuring trustworthiness, taking into account both perceived and intrinsic characteristics. By detailing the factors that influence trust, this study aims to foster more ethical and widely accepted AI technologies, ensuring they meet both functional and ethical criteria.
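The abstract above centres on quantifying trust from the gap between expected and observed behaviour. As a purely illustrative sketch (not the authors' formalization), the toy function below makes that idea concrete: the function name, the exponential decay form, and the sensitivity parameter are all assumptions introduced here.

    import math

    def perceived_trust(expected, observed, sensitivity=1.0):
        """Toy trust score in [0, 1]: 1.0 when observed behaviour matches expectations
        exactly, decaying as the mean absolute discrepancy grows. Illustrative only."""
        if not expected or len(expected) != len(observed):
            raise ValueError("expected and observed must be equal-length, non-empty sequences")
        mean_discrepancy = sum(abs(e - o) for e, o in zip(expected, observed)) / len(expected)
        return math.exp(-sensitivity * mean_discrepancy)

    # Small deviations from expectations keep trust high; large ones erode it.
    print(perceived_trust([1.0, 0.9, 0.8], [0.95, 0.85, 0.80]))  # close to 1
    print(perceived_trust([1.0, 0.9, 0.8], [0.20, 0.10, 0.95]))  # substantially lower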
APA, Harvard, Vancouver, ISO, and other styles
2

Sen, Rishika, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications". ITU Journal on Future and Evolving Technologies 5, no. 4 (December 10, 2024): 503–14. https://doi.org/10.52953/wibx7049.

Full text
Abstract:
With the rapid uptake of Artificial Intelligence (AI) in the Telecommunications (Telco) industry and the pivotal role AI is expected to play in future generation technologies (e.g., 5G, 5G Advanced and 6G), establishing the trustworthiness of AI used in Telco becomes critical. Trustworthy Artificial Intelligence (TWAI) guidelines need to be implemented to establish trust in AI-powered products and services by being compliant to these guidelines. This paper focuses on measuring compliance to such guidelines. This paper proposes a Large Language Model (LLM)-driven approach to measure TWAI compliance of multiple public AI code repositories using off-the-shelf LLMs. This paper proposes an LLM-based scanner for automated measurement of the trustworthiness of any AI system. The proposed solution measures and reports the level of compliance of an AI system. Results of the experiments demonstrate the feasibility of the proposed approach for the automated measurement of trustworthiness of AI systems.
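As a rough, assumption-laden illustration of the kind of pipeline this abstract describes (an off-the-shelf LLM asked to judge repository code against trustworthiness guidelines), the sketch below is not taken from the paper: the guideline questions, the ask_llm placeholder, and the scanning logic are hypothetical.

    from pathlib import Path

    GUIDELINES = [
        "Is personal data anonymised or pseudonymised before training?",
        "Are model decisions logged for later audit?",
        "Is there documented human oversight of automated decisions?",
    ]

    def ask_llm(prompt: str) -> str:
        """Placeholder: route the prompt to whatever chat-completion client is available."""
        raise NotImplementedError("plug in an LLM client here")

    def scan_repository(repo_root: str) -> dict:
        """Return a per-guideline verdict for the Python sources found under repo_root."""
        sources = "\n\n".join(
            path.read_text(errors="ignore")[:2000] for path in Path(repo_root).rglob("*.py")
        )
        report = {}
        for guideline in GUIDELINES:
            prompt = f"Given this code:\n{sources}\n\nAnswer yes/no with a short reason: {guideline}"
            report[guideline] = ask_llm(prompt)
        return report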
APA, Harvard, Vancouver, ISO, and other styles
3

Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI". at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.

Full text
Abstract:
Artificial intelligence is increasingly penetrating industrial applications as well as areas that affect our daily lives. As a consequence, there is a need for criteria to validate whether the quality of AI applications is sufficient for their intended use. Both in the academic community and societal debate, an agreement has emerged under the term “trustworthiness” as the set of essential quality requirements that should be placed on an AI application. At the same time, the question of how these quality requirements can be operationalized is to a large extent still open. In this paper, we consider trustworthy AI from two perspectives: the product and organizational perspective. For the former, we present an AI-specific risk analysis and outline how verifiable arguments for the trustworthiness of an AI application can be developed. For the second perspective, we explore how an AI management system can be employed to assure the trustworthiness of an organization with respect to its handling of AI. Finally, we argue that in order to achieve AI trustworthiness, coordinated measures from both product and organizational perspectives are required.
APA, Harvard, Vancouver, ISO, and other styles
4

Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.

Full text
Abstract:
With growing concerns regarding bias and discrimination in predictive models, the AI community has increasingly focused on assessing AI system trustworthiness. Conventionally, trustworthy AI literature relies on the probabilistic framework and calibration as prerequisites for trustworthiness. In this work, we depart from this viewpoint by proposing a novel trust framework inspired by the philosophy literature on trust. We present a precise mathematical definition of trustworthiness, termed U-trustworthiness, specifically tailored for a subset of tasks aimed at maximizing a utility function. We argue that a model’s U-trustworthiness is contingent upon its ability to maximize Bayes utility within this task subset. Our first set of results challenges the probabilistic framework by demonstrating its potential to favor less trustworthy models and introduce the risk of misleading trustworthiness assessments. Within the context of U-trustworthiness, we prove that properly-ranked models are inherently U-trustworthy. Furthermore, we advocate for the adoption of the AUC metric as the preferred measure of trustworthiness. By offering both theoretical guarantees and experimental validation, AUC enables robust evaluation of trustworthiness, thereby enhancing model selection and hyperparameter tuning to yield more trustworthy outcomes.
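A minimal sketch of the model-selection use the abstract advocates, ranking candidate models by AUC with scikit-learn's roc_auc_score; the labels and scores below are invented for illustration and are not the paper's data.

    from sklearn.metrics import roc_auc_score

    y_true = [0, 0, 1, 1, 1, 0, 1, 0]  # ground-truth outcomes
    scores = {
        "model_a": [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.3],
        "model_b": [0.5, 0.6, 0.20, 0.7, 0.4, 0.55, 0.65, 0.45],
    }

    # Under the paper's argument, the better-ranked model (higher AUC) is the
    # preferable, more "U-trustworthy" candidate.
    for name, y_score in scores.items():
        print(name, round(roc_auc_score(y_true, y_score), 3))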
APA, Harvard, Vancouver, ISO, and other styles
5

Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness". Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.

Full text
Abstract:
It is well known that successful human-AI collaboration depends on the perceived trustworthiness of the AI. We argue that a key to securing trust in such collaborations is ensuring that the AI competently addresses ethics' foundational role in engagements. Specifically, developers need to identify, address, and implement mechanisms for accommodating ethical components of AI choices. We propose an approach that instantiates ethics semantically as ontology-based moral policies. To accommodate the wide variation and interpretation of ethics, we capture such variations into ethics sets, which are situationally specific aggregations of relevant moral policies. We are extending our ontology-based policy management systems with new representations and capabilities to allow trustworthy AI-human ethical collaborative behavior. Moreover, we believe that such AI-human ethical encounters demand that trustworthiness is bi-directional – humans need to be able to assess and calibrate their actions to be consistent with the trustworthiness of AI in a given context, and AIs need to be able to do the same with respect to humans.
APA, Harvard, Vancouver, ISO, and other styles
6

Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements". International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.

Full text
Abstract:
Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with this second question, the key requirements that establish the trustworthiness of these systems have been explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency, and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthy requirements for challenging applications? Regarding this last question, six different applications have been discussed, including trustworthy AI in education, environmental science, 5G-based IoT networks, robotics for architecture, engineering and construction, financial technology, and healthcare. The review emphasises the need to address trustworthiness in AI systems before their deployment in order to achieve the AI goal for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for AI researchers seeking to achieve trustworthiness in their applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework". Big Data and Cognitive Computing 8, no. 11 (October 24, 2024): 142. http://dx.doi.org/10.3390/bdcc8110142.

Full text
Abstract:
This study critically examines the trustworthiness of widely used AI applications, focusing on their integration into daily life, often without users fully understanding the risks or how these threats might affect them. As AI apps become more accessible, users tend to trust them due to their convenience and usability, frequently overlooking critical issues such as security, privacy, and ethics. To address this gap, we introduce a user-centric framework that enables individuals to assess the trustworthiness of AI applications based on their own experiences and perceptions. The framework evaluates several dimensions—transparency, security, privacy, ethics, and compliance—while also aiming to raise awareness and bring the topic of AI trustworthiness into public dialogue. By analyzing AI threats, real-world incidents, and strategies for mitigating the risks posed by AI apps, this study contributes to the ongoing discussions on AI safety and trust.
APA, Harvard, Vancouver, ISO, and other styles
8

Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges". Intelligent Decision Technologies 18, no. 2 (June 7, 2024): 837–54. http://dx.doi.org/10.3233/idt-240366.

Full text
Abstract:
Artificial Intelligence (AI) has pervaded everyday life, reshaping the landscape of business, economy, and society through the alteration of interactions and connections among stakeholders and citizens. Nevertheless, the widespread adoption of AI presents significant risks and hurdles, sparking apprehension regarding the trustworthiness of AI systems by humans. Lately, numerous governmental entities have introduced regulations and principles aimed at fostering trustworthy AI systems, while companies, research institutions, and public sector organizations have released their own sets of principles and guidelines for ensuring ethical and trustworthy AI. Additionally, they have developed methods and software toolkits to aid in evaluating and improving the attributes of trustworthiness. The present paper aims to explore this evolution by analysing and supporting the trustworthiness of AI systems. We commence with an examination of the characteristics inherent in trustworthy AI, along with the corresponding principles and standards associated with them. We then examine the methods and tools that are available to designers and developers in their quest to operationalize trusted AI systems. Finally, we outline research challenges towards end-to-end engineering of trustworthy AI by-design.
APA, Harvard, Vancouver, ISO, and other styles
9

Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion". Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.

Full text
Abstract:
Artificial intelligence (AI) delivers numerous chances to add to the prosperity of people and the stability of economies and society, yet besides, it adds up a variety of novel moral, legal, social, and innovative difficulties. Trustworthy AI (TAI) bases on the possibility that trust builds the establishment of various societies, economies, and sustainable turn of events, and that people, organizations, and societies can along these lines just at any point understand the maximum capacity of AI, if trust can be set up in its development, deployment, and use. The risks of unintended and negative outcomes related to AI are proportionately high, particularly at scale. Most AI is really artificial narrow intelligence, intended to achieve a specific task on previously curated information from a certain source. Since most AI models expand on correlations, predictions could fail to sum up to various populations or settings and might fuel existing disparities and biases. As the AI industry is amazingly imbalanced, and experts are as of now overpowered by other digital devices, there could be a little capacity to catch blunders. With this article, we aim to present the idea of TAI and its five essential standards (1) usefulness, (2) non-maleficence, (3) autonomy, (4) justice, and (5) logic. We further draw on these five standards to build up a data-driven analysis for TAI and present its application by portraying productive paths for future research, especially as to the distributed ledger technology-based acknowledgment of TAI.
APA, Harvard, Vancouver, ISO, and other styles
10

AJAYI, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering’s Key Role in AI Content Trustworthiness". International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.

Full text
Abstract:
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. It can also be defined as the science and engineering of making intelligent machines, especially intelligent computer programs. In recent decades, there has been a discernible surge in the focus of the scientific and government sectors on reliable AI. The International Organization for Standardization, which focuses on technical, industrial, and commercial standardization, has devised several strategies to promote trust in AI systems, with an emphasis on fairness, transparency, accountability, and controllability. Therefore, this paper aims to examine the role of Software Engineering in AI Content trustworthiness. A secondary data analysis methodology was used in this work to investigate the crucial role that software engineering plays in ensuring the accuracy of AI content. The dataset was guaranteed to contain reliable and comprehensive material relevant to our inquiry because it was derived from peer-reviewed publications. To decrease potential biases and increase data consistency, a rigorous validation process was employed. The findings of the paper showed that lawful, ethical, and robust are the fundamental components of reliable Artificial Intelligence. The criteria for Reliable Artificial Intelligence include Transparency, Human agency and oversight, technical robustness and safety, privacy and data governance, diversity, non-discrimination, fairness, etc. The functions of software engineering in the credibility of AI content are Algorithm Design and Implementation, Data Quality and Preprocessing, Explainability and Interpretability, Ethical Considerations and Governance, User feedback, and Iterative Improvements among others. It is therefore essential for Software engineering to ensure the dependability of material generated by AI systems at every stage of the development lifecycle. To build and maintain reliable AI systems, engineers must address problems with data quality, model interpretability, ethical difficulties, security, and user input.
APA, Harvard, Vancouver, ISO, and other styles

Theses on the topic "Trustworthiness of AI"

1

Wang, Brydon. "The role of trustworthiness in automated decision-making systems and the law". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231388/1/Brydon_Wang_Thesis.pdf.

Full text
Abstract:
This thesis considers the role of trustworthiness in automated decision-making systems (ADS) spanning across data collection, modelling, analysis and decision output in different legal contexts. Through an updated model of trustworthiness, it argues that existing legal norms and principles for administering construction contracts and the impact of automation on these contracts provide fertile ground to inform the governance of algorithmic systems in smart cities. The thesis finds that trustworthy, benevolent ADS requires a specific form of transparency that operates through mutual vulnerability in the trusting relationship, and seams in the automated decision-making process where human discretion is exercised.
APA, Harvard, Vancouver, ISO, and other styles
2

Labarbarie, Pol. "Transferable adversarial patches: a potential threat for real-world computer vision algorithms". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG084.

Full text
Abstract:
The use of Deep Neural Networks has revolutionized the field of computer vision, leading to significant performance improvement in tasks such as image classification, object detection, and semantic segmentation. Despite these breakthroughs, Deep Learning systems have exhibited vulnerabilities that malicious entities can exploit to induce harmful behavior in AI models. One of the threats is adversarial patch attacks, which are disruptive objects that often resemble stickers and are designed to deceive models when placed in a real-world scene. For example, a patch on a stop sign may sway the network to misclassify it as a speed limit sign. This type of attack raises significant safety issues for computer vision systems operating in the real world. In this thesis, we study if such a patch can disrupt a real-world system without prior knowledge concerning the targeted system. Even though numerous patch attacks have been proposed in the literature, no work in the literature describes the prerequisites of a critical patch. One of our contributions is to propose a definition of what may be a critical adversarial patch. To be characterized as critical, adversarial patch attacks must meet two essential criteria. They must be robust to physical transformations summarized by the notion of patch physicality, and they must exhibit transferability among networks, meaning the patch can successfully fool networks without possessing any knowledge about the targeted system. Transferability is an essential prerequisite for a critical patch, as the targeted real-world system is usually protected and inaccessible from the outside. Although patch physicality has been developed and improved through multiple works, its transferability remains a challenge. To address the challenge of attack transferability among image classifiers, we introduce a new adversarial patch attack based on the Wasserstein distance, which computes the distance between two probability distributions. We exploit the Wasserstein distance to alter the feature distribution of a set of corrupted images to match another feature distribution from images of a target class. When placed in the scene, our patch causes various state-of-the-art networks to output the class chosen as the target distribution. We show that our patch is more transferable than previous patches and can be implemented in the real world to deceive real-world image classifiers. In addition to our work on classification networks, we conduct a study on patch transferability against object detectors, as these systems may be more often involved in real-world systems. We focus on invisible cloak patches, a particular type of patch that is designed to hide objects. Our findings reveal several significant flaws in the current evaluation protocol which is used to assess the effectiveness of these patches. To address these flaws, we introduce a surrogate problem that ensures that the produced patch is suppressing the object we want to attack. We show that state-of-the-art adversarial patches against object detectors fail to hide objects from being detected, limiting their criticality against real-world systems.
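As a toy illustration of the central quantity in the abstract above (not the thesis implementation), the snippet below computes a one-dimensional Wasserstein distance between stand-in feature activations of patched images and those of a target class; a patch optimiser would drive this distance toward zero. The synthetic data and the use of a 1-D distance are simplifying assumptions.

    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)
    features_patched = rng.normal(loc=0.2, scale=1.0, size=1000)       # stand-in activations
    features_target_class = rng.normal(loc=1.5, scale=0.8, size=1000)  # target-class activations

    # An attack in this spirit would update the patch pixels so that this value shrinks,
    # making corrupted images resemble the target class in feature space.
    print(wasserstein_distance(features_patched, features_target_class))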
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Trustworthiness of AI"

1

Assessing and Improving AI Trustworthiness: Current Contexts and Concerns. Washington, D.C.: National Academies Press, 2021. http://dx.doi.org/10.17226/26208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Eliot, Dr Lance. AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence and Machine Learning. LBE Press Publishing, 2016.

Search for the full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Trustworthiness of AI"

1

Salloum, Said A. "Trustworthiness of the AI". In Studies in Big Data, 643–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52280-2_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner, and Andreas Holzinger. "Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?" In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.

Full text
Abstract:
The release of ChatGPT to the general public has sparked discussions about the dangers of artificial intelligence (AI) among the public. The European Commission’s draft of the AI Act has further fueled these discussions, particularly in relation to the definition of AI and the assignment of risk levels to different technologies. Security concerns in AI systems arise from the need to protect against potential adversaries and to safeguard individuals from AI decisions that may harm their well-being. However, ensuring secure and trustworthy AI systems is challenging, especially with deep learning models that lack explainability. This paper proposes the concept of Controllable AI as an alternative to Trustworthy AI and explores the major differences between the two. The aim is to initiate discussions on securing complex AI systems without sacrificing practical capabilities or transparency. The paper provides an overview of techniques that can be employed to achieve Controllable AI. It discusses the background definitions of explainability, Trustworthy AI, and the AI Act. The principles and techniques of Controllable AI are detailed, including detecting and managing control loss, implementing transparent AI decisions, and addressing intentional bias or backdoors. The paper concludes by discussing the potential applications of Controllable AI and its implications for real-world scenarios.
APA, Harvard, Vancouver, ISO, and other styles
3

Gadewadikar, Jyotirmay, Jeremy Marshall, Zachary Bilodeau, and Vatatmaja. "Systems Engineering–Driven AI Assurance and Trustworthiness". In The Proceedings of the 2023 Conference on Systems Engineering Research, 343–56. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49179-5_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kastania, Nikoleta Polyxeni “Paulina”. "AI in Education: Prioritizing Transparency and Trustworthiness". In Encyclopedia of Educational Innovation, 1–5. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-13-2262-4_309-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Eguia, Alexander, Nuria Quintano, Irina Marsh, Michel Barreteau, Jakub Główka, and Agnieszka Sprońska. "Ensuring Trustworthiness of Hybrid AI-Based Robotics Systems". In Springer Proceedings in Advanced Robotics, 142–46. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-76428-8_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Batut, Aria, Lina Prudhomme, Martijn van Sambeek, and Weiqin Chen. "Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public". In Artificial Intelligence in HCI, 15–26. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60611-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tsai, Chun-Hua, and John M. Carroll. "Logic and Pragmatics in AI Explanation". In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.

Full text
Abstract:
This paper reviews logical approaches and challenges raised for explaining AI. We discuss the issues of presenting explanations as accurate computational models that users cannot understand or use. Then, we introduce pragmatic approaches that consider explanation a sort of speech act that commits to felicity conditions, including intelligibility, trustworthiness, and usefulness to the users. We argue Explainable AI (XAI) is more than a matter of accurate and complete computational explanation, that it requires pragmatics to address the issues it seeks to address. At the end of this paper, we draw a historical analogy to usability. This term was understood logically and pragmatically, but that has evolved empirically through time to become more prosperous and more functional.
APA, Harvard, Vancouver, ISO, and other styles
8

Ren, Hao, Jjnwen Liang, Zicong Hong, Enyuan Zhou, and Junbao Pan. "Application: Privacy, Security, Robustness and Trustworthiness in Edge AI". In Machine Learning on Commodity Tiny Devices, 161–86. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003340225-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nguyen, Duc An, Khanh T. P. Nguyen, and Kamal Medjaher. "Enhancing Trustworthiness in AI-Based Prognostics: A Comprehensive Review of Explainable AI for PHM". In Springer Series in Reliability Engineering, 101–36. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71495-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Uslu, Suleyman, Davinder Kaur, Samuel J. Rivera, Arjan Durresi, and Meghna Babbar-Sebens. "Causal Inference to Enhance AI Trustworthiness in Environmental Decision-Making". In Advanced Information Networking and Applications, 214–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57916-5_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Trustworthiness of AI"

1

Janev, Valentina, Miloš Nenadović, Dejan Paunović, Sahar Vahdati, Jason Li, Muhammad Hamza Yousuf, Jaume Montanya et al. "IntelliLung AI-DSS Trustworthiness Evaluation Framework". In 2024 32nd Telecommunications Forum (TELFOR), 1–4. IEEE, 2024. https://doi.org/10.1109/telfor63250.2024.10819068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ottun, Abdul-Rasheed, Rasinthe Marasinghe, Toluwani Elemosho, Mohan Liyanage, Ashfaq Hussain Ahmed, Michell Boerger, Chamara Sandeepa et al. "SPATIAL: Practical AI Trustworthiness with Human Oversight". In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 1427–30. IEEE, 2024. http://dx.doi.org/10.1109/icdcs60910.2024.00138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Calabrò, Antonello, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf, and Annabelle Mercier. "Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems". In 20th International Conference on Web Information Systems and Technologies, 288–95. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012998700003825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Troussas, Christos, Christos Papakostas, Akrivi Krouska, Phivos Mylonas, and Cleo Sgouropoulou. "FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems". In 20th International Conference on Web Information Systems and Technologies, 385–92. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0013061100003825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kioskli, Kitty, Laura Bishop, Nineta Polemi, and Antonis Ramfos. "Towards a Human-Centric AI Trustworthiness Risk Management Framework". In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004766.

Full text
Abstract:
Artificial Intelligence (AI) aims to replicate human behavior in socio-technical systems, with a strong focus on AI engineering to replace human decision-making. However, an overemphasis on AI system autonomy can lead to bias, unfair, non-ethical decisions, and thus a lack of trust, resulting in decreased performance, motivation, and competitiveness. To mitigate these AI threats, developers are incorporating ethical considerations, often with input from ethicists, and using technical tools like IBM's Fairness 360 and Google's What-If tool to assess and improve fairness in AI systems. These efforts aim to create more trustworthy and equitable AI technologies. Building trustworthiness in AI technology does not necessarily imply that the human user will fundamentally trust it. For humans to use technology trust must be present, something challenging when AI lacks a permanent/stable physical embodiment. It is also important to ensure humans do not over-trust resulting in AI misuse. Trustworthiness should be assessed in relation to human acceptance, performance, satisfaction, and empowerment to make design choices that grant them ultimate control over AI systems, and the extent to which the technology meets the business context of the socio-technical system where it's used. For AI to be perceived as trustworthy, it must also align with the legal, moral, ethical principles, and behavioral patterns of its human users, whilst also considering the organizational responsibility and liability associated with the socio-technical system's business objectives. Commitment to incorporating these principles to create secure and effective decision support AI systems will offer a competitive advantage to organizations that integrate them.Based on this need, the proposed framework is a synthesis of research from diverse disciplines (cybersecurity, social and behavioral sciences, ethics) designed to ensure the trustworthiness of AI-driven hybrid decision support while accommodating the specific decision support needs and trust of human users. Additionally, it aims to align with the key performance indicators of the socio-technical environment where it operates. This framework serves to empower AI system developers, business leaders offering AI-based services, as well as AI system users, such as educators, professionals, and policymakers, in achieving a more absolute form of human-AI trustworthiness. It can also be used by security defenders to make fair decisions during AI incident handling. Our framework extends the proposed NIST AI Risk Management Framework (AI-RFM) since at all stages of the trustworthiness risk management dynamic cycle (threat assessment, impact assessment, risk assessment, risk mitigation), human users are considered (e.g., their morals, ethics, behavior, IT maturity) as well as the primary business objectives of the AI socio-technical system under assessment. Co-creation and human experiment processes must accompany all stages of system management and are therefore part of the proposed framework. This interaction facilitates the execution of continuous trustworthiness improvement processes. During each cycle of trustworthiness risk mitigation, human user assessment will take place, leading to the identification of corrective actions and additional mitigation activities to be implemented before the next improvement cycle. Thus, the main objective of this framework is to help build ‘trustworthy’ AI systems that are ultimately trusted by their users.
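Since the abstract mentions tooling such as IBM's AI Fairness 360 for assessing fairness, the hedged sketch below shows one such check with the aif360 package; the toy data and group encoding are invented, and the calls should be verified against the installed version of the library.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy hiring-style data: "sex" is the protected attribute, "label" the favourable outcome.
    df = pd.DataFrame({
        "sex":   [1, 1, 0, 0, 1, 0, 1, 0],
        "label": [1, 1, 1, 0, 1, 0, 0, 0],
    })
    dataset = BinaryLabelDataset(df=df, label_names=["label"],
                                 protected_attribute_names=["sex"])
    metric = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=[{"sex": 0}],
                                      privileged_groups=[{"sex": 1}])
    print(metric.disparate_impact())  # values far below 1.0 flag potential unfairness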
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Yingxu. "A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems". In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Echeberria-Barrio, Xabier, Mikel Gorricho, Selene Valencia, and Francesco Zola. "Neuralsentinel: Safeguarding Neural Network Reliability and Trustworthiness". In 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140209.

Full text
Abstract:
The usage of Artificial Intelligence (AI) systems has increased exponentially, thanks to their ability to reduce the amount of data to be analyzed, the user efforts and preserving a high rate of accuracy. However, introducing this new element in the loop has converted them into attacked points that can compromise the reliability of the systems. This new scenario has raised crucial challenges regarding the reliability and trustworthiness of the AI models, as well as about the uncertainties in their response decisions, becoming even more crucial when applied in critical domains such as healthcare, chemical, electrical plants, etc. To contain these issues, in this paper, we present NeuralSentinel (NS), a tool able to validate the reliability and trustworthiness of AI models. This tool combines attack and defence strategies and explainability concepts to stress an AI model and help non-expert staff increase their confidence in this new system by understanding the model decisions. NS provide a simple and easy-to-use interface for helping humans in the loop dealing with all the needed information. This tool was deployed and used in a Hackathon event to evaluate the reliability of a skin cancer image detector. During the event, experts and non-experts attacked and defended the detector, learning which factors were the most important for model misclassification and which techniques were the most efficient. The event was also used to detect NS’s limitations and gather feedback for further improvements.
APA, Harvard, Vancouver, ISO, and other styles
8

Garbuk, Sergey V. "Intellimetry as a Way to Ensure AI Trustworthiness". In 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). IEEE, 2018. http://dx.doi.org/10.1109/ic-aiai.2018.8674447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Awadid, Afef, Kahina Amokrane-Ferka, Henri Sohier, Juliette Mattioli, Faouzi Adjed, Martin Gonzalez, and Souhaiel Khalfaoui. "AI Systems Trustworthiness Assessment: State of the Art". In Workshop on Model-based System Engineering and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012619600003645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Smith, Carol. "Letting Go of the Numbers: Measuring AI Trustworthiness". In 13th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012644300003654.

Full text
APA, Harvard, Vancouver, ISO, and other styles