Selected scientific literature on the topic "Trustworthiness of AI"
Consult the list of current articles, books, theses, conference papers, and other scholarly sources on the topic "Trustworthiness of AI".
Journal articles on the topic "Trustworthiness of AI"
Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.
Sen, Rishika, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications". ITU Journal on Future and Evolving Technologies 5, no. 4 (December 10, 2024): 503–14. https://doi.org/10.52953/wibx7049.
Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI". at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.
Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.
Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness". Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.
Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements". International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.
Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework". Big Data and Cognitive Computing 8, no. 11 (October 24, 2024): 142. http://dx.doi.org/10.3390/bdcc8110142.
Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges". Intelligent Decision Technologies 18, no. 2 (June 7, 2024): 837–54. http://dx.doi.org/10.3233/idt-240366.
Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion". Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.
AJAYI, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering’s Key Role in AI Content Trustworthiness". International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.
Theses on the topic "Trustworthiness of AI"
Wang, Brydon. "The role of trustworthiness in automated decision-making systems and the law". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231388/1/Brydon_Wang_Thesis.pdf.
Labarbarie, Pol. "Transferable adversarial patches : a potential threat for real-world computer vision algorithms". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG084.
Testo completoThe use of Deep Neural Networks has revolutionized the field of computer vision, leading to significant performance improvement in tasks such as image classification, object detection, and semantic segmentation. Despite these breakthroughs, Deep Learning systems have exhibited vulnerabilities that malicious entities can exploit to induce harmful behavior in AI models. One of the threats is adversarial patch attacks, which are disruptive objects that often resemble stickers and are designed to deceive models when placed in a real-world scene. For example, a patch on a stop sign may sway the network to misclassify it as a speed limit sign. This type of attack raises significant safety issues for computer vision systems operating in the real world. In this thesis, we study if such a patch can disrupt a real-world system without prior knowledge concerning the targeted system.Even though numerous patch attacks have been proposed in the literature, no work in literature describes the prerequisites of a critical patch. One of our contributions is to propose a definition of what may be a critical adversarial patch. To be characterized as critical, adversarial patch attacks must meet two essential criteria. They must be robust to physical transformations summarized by the notion of patch physicality, and they must exhibit transferability among networks, meaning the patch can successfully fool networks without possessing any knowledge about the targeted system. Transferability is an essential prerequisite for a critical patch, as the targeted real-world system is usually protected and inaccessible from the outside. Although patch physicality has been developed and improved through multiple works, its transferability remains a challenge.To address the challenge of attack transferability among image classifiers, we introduce a new adversarial patch attack based on the Wasserstein distance, which computes the distance between two probability distributions. 
We exploit the Wasserstein distance to alter the feature distribution of a set of corrupted images to match another feature distribution from images of a target class. When placed in the scene, our patch causes various state-of-the-art networks to output the class chosen as the target distribution. We show that our patch is more transferable than previous patches and can be implemented in the real world to deceive real-world image classifiers.In addition to our work on classification networks, we conduct a study on patch transferability against object detectors, as these systems may be more often involved in real-world systems. We focus on invisible cloak patches, a particular type of patche that is designed to hide objects. Our findings reveal several significant flaws in the current evaluation protocol which is used to assess the effectiveness of these patches. To address these flaws, we introduce a surrogate problem that ensures that the produced patch is suppressing the object we want to attack. We show that state-of-the-art adversarial patch against object detectors fail to hide objects from being detected, limiting their criticality against real-world systems
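The core quantity in the abstract above, the Wasserstein distance between two feature distributions, can be illustrated with a minimal sketch. This is not the thesis's implementation: it computes the empirical 1-D Wasserstein-1 distance between two equal-size samples (for sorted samples it reduces to the mean absolute difference of matched order statistics), the quantity such an attack would drive toward zero between patched-image features and target-class features. The function name and sample values are hypothetical.

```python
def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between two equal-size 1-D samples.

    For sorted samples x_(1) <= ... <= x_(n) and y_(1) <= ... <= y_(n),
    W1 = (1/n) * sum_i |x_(i) - y_(i)|.
    """
    if len(a) != len(b):
        raise ValueError("this sketch assumes equal-size samples")
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# Hypothetical scalar features of patched images vs. target-class images;
# the attack optimizes the patch so this distance shrinks.
patched_features = [0.1, 0.4, 0.35, 0.8]
target_features = [0.2, 0.5, 0.45, 0.9]
print(wasserstein_1d(patched_features, target_features))
```

In the multivariate, differentiable setting used for actual attacks, this objective is typically replaced by a sliced or entropy-regularized optimal-transport approximation so it can be minimized by gradient descent on the patch pixels.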
Books on the topic "Trustworthiness of AI"
Assessing and Improving AI Trustworthiness: Current Contexts and Concerns. Washington, D.C.: National Academies Press, 2021. http://dx.doi.org/10.17226/26208.
Eliot, Dr Lance. AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence and Machine Learning. LBE Press Publishing, 2016.
Book chapters on the topic "Trustworthiness of AI"
Salloum, Said A. "Trustworthiness of the AI". In Studies in Big Data, 643–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52280-2_41.
Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner, and Andreas Holzinger. "Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?" In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.
Gadewadikar, Jyotirmay, Jeremy Marshall, Zachary Bilodeau, and Vatatmaja. "Systems Engineering–Driven AI Assurance and Trustworthiness". In The Proceedings of the 2023 Conference on Systems Engineering Research, 343–56. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49179-5_23.
Kastania, Nikoleta Polyxeni “Paulina”. "AI in Education: Prioritizing Transparency and Trustworthiness". In Encyclopedia of Educational Innovation, 1–5. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-13-2262-4_309-1.
Eguia, Alexander, Nuria Quintano, Irina Marsh, Michel Barreteau, Jakub Główka, and Agnieszka Sprońska. "Ensuring Trustworthiness of Hybrid AI-Based Robotics Systems". In Springer Proceedings in Advanced Robotics, 142–46. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-76428-8_27.
Batut, Aria, Lina Prudhomme, Martijn van Sambeek, and Weiqin Chen. "Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public". In Artificial Intelligence in HCI, 15–26. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60611-3_2.
Tsai, Chun-Hua, and John M. Carroll. "Logic and Pragmatics in AI Explanation". In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.
Ren, Hao, Jinwen Liang, Zicong Hong, Enyuan Zhou, and Junbao Pan. "Application: Privacy, Security, Robustness and Trustworthiness in Edge AI". In Machine Learning on Commodity Tiny Devices, 161–86. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003340225-10.
Nguyen, Duc An, Khanh T. P. Nguyen, and Kamal Medjaher. "Enhancing Trustworthiness in AI-Based Prognostics: A Comprehensive Review of Explainable AI for PHM". In Springer Series in Reliability Engineering, 101–36. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71495-5_6.
Uslu, Suleyman, Davinder Kaur, Samuel J. Rivera, Arjan Durresi, and Meghna Babbar-Sebens. "Causal Inference to Enhance AI Trustworthiness in Environmental Decision-Making". In Advanced Information Networking and Applications, 214–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57916-5_19.
Testo completoAtti di convegni sul tema "Trustworthiness of AI"
Janev, Valentina, Miloš Nenadović, Dejan Paunović, Sahar Vahdati, Jason Li, Muhammad Hamza Yousuf, Jaume Montanya et al. "IntelliLung AI-DSS Trustworthiness Evaluation Framework". In 2024 32nd Telecommunications Forum (TELFOR), 1–4. IEEE, 2024. https://doi.org/10.1109/telfor63250.2024.10819068.
Ottun, Abdul-Rasheed, Rasinthe Marasinghe, Toluwani Elemosho, Mohan Liyanage, Ashfaq Hussain Ahmed, Michell Boerger, Chamara Sandeepa et al. "SPATIAL: Practical AI Trustworthiness with Human Oversight". In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 1427–30. IEEE, 2024. http://dx.doi.org/10.1109/icdcs60910.2024.00138.
Calabrò, Antonello, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf, and Annabelle Mercier. "Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems". In 20th International Conference on Web Information Systems and Technologies, 288–95. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012998700003825.
Troussas, Christos, Christos Papakostas, Akrivi Krouska, Phivos Mylonas, and Cleo Sgouropoulou. "FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems". In 20th International Conference on Web Information Systems and Technologies, 385–92. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0013061100003825.
Kioskli, Kitty, Laura Bishop, Nineta Polemi, and Antonis Ramfos. "Towards a Human-Centric AI Trustworthiness Risk Management Framework". In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004766.
Wang, Yingxu. "A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems". In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945351.
Echeberria-Barrio, Xabier, Mikel Gorricho, Selene Valencia, and Francesco Zola. "Neuralsentinel: Safeguarding Neural Network Reliability and Trustworthiness". In 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140209.
Garbuk, Sergey V. "Intellimetry as a Way to Ensure AI Trustworthiness". In 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). IEEE, 2018. http://dx.doi.org/10.1109/ic-aiai.2018.8674447.
Awadid, Afef, Kahina Amokrane-Ferka, Henri Sohier, Juliette Mattioli, Faouzi Adjed, Martin Gonzalez, and Souhaiel Khalfaoui. "AI Systems Trustworthiness Assessment: State of the Art". In Workshop on Model-based System Engineering and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012619600003645.
Smith, Carol. "Letting Go of the Numbers: Measuring AI Trustworthiness". In 13th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012644300003654.