Academic literature on the topic 'Trustworthiness of AI'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Trustworthiness of AI.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Trustworthiness of AI"
Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.
Sen, Rishika, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications." ITU Journal on Future and Evolving Technologies 5, no. 4 (December 10, 2024): 503–14. https://doi.org/10.52953/wibx7049.
Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI." at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.
Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.
Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness." Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.
Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy, et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements." International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.
Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework." Big Data and Cognitive Computing 8, no. 11 (October 24, 2024): 142. http://dx.doi.org/10.3390/bdcc8110142.
Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges." Intelligent Decision Technologies 18, no. 2 (June 7, 2024): 837–54. http://dx.doi.org/10.3233/idt-240366.
Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion." Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.
Ajayi, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering’s Key Role in AI Content Trustworthiness." International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.
Dissertations / Theses on the topic "Trustworthiness of AI"
Wang, Brydon. "The role of trustworthiness in automated decision-making systems and the law." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231388/1/Brydon_Wang_Thesis.pdf.
Labarbarie, Pol. "Transferable adversarial patches : a potential threat for real-world computer vision algorithms." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG084.
Full textThe use of Deep Neural Networks has revolutionized the field of computer vision, leading to significant performance improvement in tasks such as image classification, object detection, and semantic segmentation. Despite these breakthroughs, Deep Learning systems have exhibited vulnerabilities that malicious entities can exploit to induce harmful behavior in AI models. One of the threats is adversarial patch attacks, which are disruptive objects that often resemble stickers and are designed to deceive models when placed in a real-world scene. For example, a patch on a stop sign may sway the network to misclassify it as a speed limit sign. This type of attack raises significant safety issues for computer vision systems operating in the real world. In this thesis, we study if such a patch can disrupt a real-world system without prior knowledge concerning the targeted system.Even though numerous patch attacks have been proposed in the literature, no work in literature describes the prerequisites of a critical patch. One of our contributions is to propose a definition of what may be a critical adversarial patch. To be characterized as critical, adversarial patch attacks must meet two essential criteria. They must be robust to physical transformations summarized by the notion of patch physicality, and they must exhibit transferability among networks, meaning the patch can successfully fool networks without possessing any knowledge about the targeted system. Transferability is an essential prerequisite for a critical patch, as the targeted real-world system is usually protected and inaccessible from the outside. Although patch physicality has been developed and improved through multiple works, its transferability remains a challenge.To address the challenge of attack transferability among image classifiers, we introduce a new adversarial patch attack based on the Wasserstein distance, which computes the distance between two probability distributions. 
We exploit the Wasserstein distance to alter the feature distribution of a set of corrupted images to match another feature distribution from images of a target class. When placed in the scene, our patch causes various state-of-the-art networks to output the class chosen as the target distribution. We show that our patch is more transferable than previous patches and can be implemented in the real world to deceive real-world image classifiers.In addition to our work on classification networks, we conduct a study on patch transferability against object detectors, as these systems may be more often involved in real-world systems. We focus on invisible cloak patches, a particular type of patche that is designed to hide objects. Our findings reveal several significant flaws in the current evaluation protocol which is used to assess the effectiveness of these patches. To address these flaws, we introduce a surrogate problem that ensures that the produced patch is suppressing the object we want to attack. We show that state-of-the-art adversarial patch against object detectors fail to hide objects from being detected, limiting their criticality against real-world systems
Books on the topic "Trustworthiness of AI"
Assessing and Improving AI Trustworthiness: Current Contexts and Concerns. Washington, D.C.: National Academies Press, 2021. http://dx.doi.org/10.17226/26208.
Eliot, Dr Lance. AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence and Machine Learning. LBE Press Publishing, 2016.
Book chapters on the topic "Trustworthiness of AI"
Salloum, Said A. "Trustworthiness of the AI." In Studies in Big Data, 643–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52280-2_41.
Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner, and Andreas Holzinger. "Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?" In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.
Gadewadikar, Jyotirmay, Jeremy Marshall, Zachary Bilodeau, and Vatatmaja. "Systems Engineering–Driven AI Assurance and Trustworthiness." In The Proceedings of the 2023 Conference on Systems Engineering Research, 343–56. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49179-5_23.
Kastania, Nikoleta Polyxeni “Paulina.” "AI in Education: Prioritizing Transparency and Trustworthiness." In Encyclopedia of Educational Innovation, 1–5. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-13-2262-4_309-1.
Eguia, Alexander, Nuria Quintano, Irina Marsh, Michel Barreteau, Jakub Główka, and Agnieszka Sprońska. "Ensuring Trustworthiness of Hybrid AI-Based Robotics Systems." In Springer Proceedings in Advanced Robotics, 142–46. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-76428-8_27.
Batut, Aria, Lina Prudhomme, Martijn van Sambeek, and Weiqin Chen. "Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public." In Artificial Intelligence in HCI, 15–26. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60611-3_2.
Tsai, Chun-Hua, and John M. Carroll. "Logic and Pragmatics in AI Explanation." In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.
Ren, Hao, Jinwen Liang, Zicong Hong, Enyuan Zhou, and Junbao Pan. "Application: Privacy, Security, Robustness and Trustworthiness in Edge AI." In Machine Learning on Commodity Tiny Devices, 161–86. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003340225-10.
Nguyen, Duc An, Khanh T. P. Nguyen, and Kamal Medjaher. "Enhancing Trustworthiness in AI-Based Prognostics: A Comprehensive Review of Explainable AI for PHM." In Springer Series in Reliability Engineering, 101–36. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71495-5_6.
Uslu, Suleyman, Davinder Kaur, Samuel J. Rivera, Arjan Durresi, and Meghna Babbar-Sebens. "Causal Inference to Enhance AI Trustworthiness in Environmental Decision-Making." In Advanced Information Networking and Applications, 214–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57916-5_19.
Full textConference papers on the topic "Trustworthiness of AI"
Janev, Valentina, Miloš Nenadović, Dejan Paunović, Sahar Vahdati, Jason Li, Muhammad Hamza Yousuf, Jaume Montanya, et al. "IntelliLung AI-DSS Trustworthiness Evaluation Framework." In 2024 32nd Telecommunications Forum (TELFOR), 1–4. IEEE, 2024. https://doi.org/10.1109/telfor63250.2024.10819068.
Ottun, Abdul-Rasheed, Rasinthe Marasinghe, Toluwani Elemosho, Mohan Liyanage, Ashfaq Hussain Ahmed, Michell Boerger, Chamara Sandeepa, et al. "SPATIAL: Practical AI Trustworthiness with Human Oversight." In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 1427–30. IEEE, 2024. http://dx.doi.org/10.1109/icdcs60910.2024.00138.
Calabrò, Antonello, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf, and Annabelle Mercier. "Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems." In 20th International Conference on Web Information Systems and Technologies, 288–95. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012998700003825.
Troussas, Christos, Christos Papakostas, Akrivi Krouska, Phivos Mylonas, and Cleo Sgouropoulou. "FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems." In 20th International Conference on Web Information Systems and Technologies, 385–92. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0013061100003825.
Kioskli, Kitty, Laura Bishop, Nineta Polemi, and Antonis Ramfos. "Towards a Human-Centric AI Trustworthiness Risk Management Framework." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004766.
Wang, Yingxu. "A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems." In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945351.
Echeberria-Barrio, Xabier, Mikel Gorricho, Selene Valencia, and Francesco Zola. "Neuralsentinel: Safeguarding Neural Network Reliability and Trustworthiness." In 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140209.
Garbuk, Sergey V. "Intellimetry as a Way to Ensure AI Trustworthiness." In 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). IEEE, 2018. http://dx.doi.org/10.1109/ic-aiai.2018.8674447.
Awadid, Afef, Kahina Amokrane-Ferka, Henri Sohier, Juliette Mattioli, Faouzi Adjed, Martin Gonzalez, and Souhaiel Khalfaoui. "AI Systems Trustworthiness Assessment: State of the Art." In Workshop on Model-based System Engineering and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012619600003645.
Smith, Carol. "Letting Go of the Numbers: Measuring AI Trustworthiness." In 13th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012644300003654.