A selection of scientific literature on the topic "Trustworthiness of AI"
Format the source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Trustworthiness of AI".
Journal articles on the topic "Trustworthiness of AI"
Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.
Sen, Rishika, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications." ITU Journal on Future and Evolving Technologies 5, no. 4 (December 10, 2024): 503–14. https://doi.org/10.52953/wibx7049.
Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI." at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.
Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.
Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness." Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.
Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy, et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements." International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.
Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework." Big Data and Cognitive Computing 8, no. 11 (October 24, 2024): 142. http://dx.doi.org/10.3390/bdcc8110142.
Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges." Intelligent Decision Technologies 18, no. 2 (June 7, 2024): 837–54. http://dx.doi.org/10.3233/idt-240366.
Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion." Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.
Ajayi, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering's Key Role in AI Content Trustworthiness." International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.
Повний текст джерелаДисертації з теми "Trustworthiness of AI"
Wang, Brydon. "The role of trustworthiness in automated decision-making systems and the law." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231388/1/Brydon_Wang_Thesis.pdf.
Повний текст джерелаLabarbarie, Pol. "Transferable adversarial patches : a potential threat for real-world computer vision algorithms." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG084.
The use of Deep Neural Networks has revolutionized the field of computer vision, leading to significant performance improvements in tasks such as image classification, object detection, and semantic segmentation. Despite these breakthroughs, Deep Learning systems have exhibited vulnerabilities that malicious entities can exploit to induce harmful behavior in AI models. One such threat is the adversarial patch attack: a disruptive object, often resembling a sticker, designed to deceive models when placed in a real-world scene. For example, a patch on a stop sign may sway the network to misclassify it as a speed limit sign. This type of attack raises significant safety issues for computer vision systems operating in the real world. In this thesis, we study whether such a patch can disrupt a real-world system without prior knowledge of the targeted system. Even though numerous patch attacks have been proposed in the literature, no existing work describes the prerequisites of a critical patch. One of our contributions is to propose a definition of what constitutes a critical adversarial patch. To be characterized as critical, an adversarial patch attack must meet two essential criteria. It must be robust to physical transformations, summarized by the notion of patch physicality, and it must exhibit transferability among networks, meaning the patch can successfully fool networks without any knowledge of the targeted system. Transferability is an essential prerequisite for a critical patch, as the targeted real-world system is usually protected and inaccessible from the outside. Although patch physicality has been developed and improved through multiple works, transferability remains a challenge.
To address the challenge of attack transferability among image classifiers, we introduce a new adversarial patch attack based on the Wasserstein distance, which computes the distance between two probability distributions. We exploit the Wasserstein distance to alter the feature distribution of a set of corrupted images to match another feature distribution drawn from images of a target class. When placed in the scene, our patch causes various state-of-the-art networks to output the class chosen as the target distribution. We show that our patch is more transferable than previous patches and can be deployed in the real world to deceive real-world image classifiers. In addition to our work on classification networks, we conduct a study on patch transferability against object detectors, as these systems are more often involved in real-world applications. We focus on invisible cloak patches, a particular type of patch designed to hide objects. Our findings reveal several significant flaws in the current evaluation protocol used to assess the effectiveness of these patches. To address these flaws, we introduce a surrogate problem that ensures the produced patch suppresses the object we want to attack. We show that state-of-the-art adversarial patches against object detectors fail to hide objects from being detected, limiting their criticality against real-world systems.
Books on the topic "Trustworthiness of AI"
Assessing and Improving AI Trustworthiness: Current Contexts and Concerns. Washington, D.C.: National Academies Press, 2021. http://dx.doi.org/10.17226/26208.
Eliot, Dr Lance. AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence and Machine Learning. LBE Press Publishing, 2016.
Book chapters on the topic "Trustworthiness of AI"
Salloum, Said A. "Trustworthiness of the AI." In Studies in Big Data, 643–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52280-2_41.
Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner, and Andreas Holzinger. "Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?" In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.
Gadewadikar, Jyotirmay, Jeremy Marshall, Zachary Bilodeau, and Vatatmaja. "Systems Engineering–Driven AI Assurance and Trustworthiness." In The Proceedings of the 2023 Conference on Systems Engineering Research, 343–56. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49179-5_23.
Kastania, Nikoleta Polyxeni "Paulina." "AI in Education: Prioritizing Transparency and Trustworthiness." In Encyclopedia of Educational Innovation, 1–5. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-13-2262-4_309-1.
Eguia, Alexander, Nuria Quintano, Irina Marsh, Michel Barreteau, Jakub Główka, and Agnieszka Sprońska. "Ensuring Trustworthiness of Hybrid AI-Based Robotics Systems." In Springer Proceedings in Advanced Robotics, 142–46. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-76428-8_27.
Batut, Aria, Lina Prudhomme, Martijn van Sambeek, and Weiqin Chen. "Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public." In Artificial Intelligence in HCI, 15–26. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60611-3_2.
Tsai, Chun-Hua, and John M. Carroll. "Logic and Pragmatics in AI Explanation." In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.
Ren, Hao, Jinwen Liang, Zicong Hong, Enyuan Zhou, and Junbao Pan. "Application: Privacy, Security, Robustness and Trustworthiness in Edge AI." In Machine Learning on Commodity Tiny Devices, 161–86. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003340225-10.
Nguyen, Duc An, Khanh T. P. Nguyen, and Kamal Medjaher. "Enhancing Trustworthiness in AI-Based Prognostics: A Comprehensive Review of Explainable AI for PHM." In Springer Series in Reliability Engineering, 101–36. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71495-5_6.
Uslu, Suleyman, Davinder Kaur, Samuel J. Rivera, Arjan Durresi, and Meghna Babbar-Sebens. "Causal Inference to Enhance AI Trustworthiness in Environmental Decision-Making." In Advanced Information Networking and Applications, 214–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57916-5_19.
Повний текст джерелаТези доповідей конференцій з теми "Trustworthiness of AI"
Janev, Valentina, Miloš Nenadović, Dejan Paunović, Sahar Vahdati, Jason Li, Muhammad Hamza Yousuf, Jaume Montanya, et al. "IntelliLung AI-DSS Trustworthiness Evaluation Framework." In 2024 32nd Telecommunications Forum (TELFOR), 1–4. IEEE, 2024. https://doi.org/10.1109/telfor63250.2024.10819068.
Ottun, Abdul-Rasheed, Rasinthe Marasinghe, Toluwani Elemosho, Mohan Liyanage, Ashfaq Hussain Ahmed, Michell Boerger, Chamara Sandeepa, et al. "SPATIAL: Practical AI Trustworthiness with Human Oversight." In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 1427–30. IEEE, 2024. http://dx.doi.org/10.1109/icdcs60910.2024.00138.
Calabrò, Antonello, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf, and Annabelle Mercier. "Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems." In 20th International Conference on Web Information Systems and Technologies, 288–95. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012998700003825.
Troussas, Christos, Christos Papakostas, Akrivi Krouska, Phivos Mylonas, and Cleo Sgouropoulou. "FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems." In 20th International Conference on Web Information Systems and Technologies, 385–92. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0013061100003825.
Kioskli, Kitty, Laura Bishop, Nineta Polemi, and Antonis Ramfos. "Towards a Human-Centric AI Trustworthiness Risk Management Framework." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004766.
Wang, Yingxu. "A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems." In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945351.
Echeberria-Barrio, Xabier, Mikel Gorricho, Selene Valencia, and Francesco Zola. "Neuralsentinel: Safeguarding Neural Network Reliability and Trustworthiness." In 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140209.
Garbuk, Sergey V. "Intellimetry as a Way to Ensure AI Trustworthiness." In 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). IEEE, 2018. http://dx.doi.org/10.1109/ic-aiai.2018.8674447.
Awadid, Afef, Kahina Amokrane-Ferka, Henri Sohier, Juliette Mattioli, Faouzi Adjed, Martin Gonzalez, and Souhaiel Khalfaoui. "AI Systems Trustworthiness Assessment: State of the Art." In Workshop on Model-based System Engineering and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012619600003645.
Smith, Carol. "Letting Go of the Numbers: Measuring AI Trustworthiness." In 13th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012644300003654.