Selected scientific literature on the topic "Trustworthiness of AI"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Trustworthiness of AI".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Journal articles on the topic "Trustworthiness of AI"
Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.
Sen, Rishika, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications". ITU Journal on Future and Evolving Technologies 5, no. 4 (December 10, 2024): 503–14. https://doi.org/10.52953/wibx7049.
Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI". at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.
Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.
Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness". Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.
Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements". International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.
Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework". Big Data and Cognitive Computing 8, no. 11 (October 24, 2024): 142. http://dx.doi.org/10.3390/bdcc8110142.
Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges". Intelligent Decision Technologies 18, no. 2 (June 7, 2024): 837–54. http://dx.doi.org/10.3233/idt-240366.
Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion". Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.
AJAYI, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering’s Key Role in AI Content Trustworthiness". International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.
Theses / dissertations on the topic "Trustworthiness of AI"
Wang, Brydon. "The role of trustworthiness in automated decision-making systems and the law". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231388/1/Brydon_Wang_Thesis.pdf.
Labarbarie, Pol. "Transferable adversarial patches: a potential threat for real-world computer vision algorithms". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG084.
Texto completo da fonteThe use of Deep Neural Networks has revolutionized the field of computer vision, leading to significant performance improvement in tasks such as image classification, object detection, and semantic segmentation. Despite these breakthroughs, Deep Learning systems have exhibited vulnerabilities that malicious entities can exploit to induce harmful behavior in AI models. One of the threats is adversarial patch attacks, which are disruptive objects that often resemble stickers and are designed to deceive models when placed in a real-world scene. For example, a patch on a stop sign may sway the network to misclassify it as a speed limit sign. This type of attack raises significant safety issues for computer vision systems operating in the real world. In this thesis, we study if such a patch can disrupt a real-world system without prior knowledge concerning the targeted system.Even though numerous patch attacks have been proposed in the literature, no work in literature describes the prerequisites of a critical patch. One of our contributions is to propose a definition of what may be a critical adversarial patch. To be characterized as critical, adversarial patch attacks must meet two essential criteria. They must be robust to physical transformations summarized by the notion of patch physicality, and they must exhibit transferability among networks, meaning the patch can successfully fool networks without possessing any knowledge about the targeted system. Transferability is an essential prerequisite for a critical patch, as the targeted real-world system is usually protected and inaccessible from the outside. 
Although patch physicality has been developed and improved through multiple works, its transferability remains a challenge.To address the challenge of attack transferability among image classifiers, we introduce a new adversarial patch attack based on the Wasserstein distance, which computes the distance between two probability distributions. We exploit the Wasserstein distance to alter the feature distribution of a set of corrupted images to match another feature distribution from images of a target class. When placed in the scene, our patch causes various state-of-the-art networks to output the class chosen as the target distribution. We show that our patch is more transferable than previous patches and can be implemented in the real world to deceive real-world image classifiers.In addition to our work on classification networks, we conduct a study on patch transferability against object detectors, as these systems may be more often involved in real-world systems. We focus on invisible cloak patches, a particular type of patche that is designed to hide objects. Our findings reveal several significant flaws in the current evaluation protocol which is used to assess the effectiveness of these patches. To address these flaws, we introduce a surrogate problem that ensures that the produced patch is suppressing the object we want to attack. We show that state-of-the-art adversarial patch against object detectors fail to hide objects from being detected, limiting their criticality against real-world systems
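The abstract above builds its attack objective on the Wasserstein distance between feature distributions. As a minimal, hypothetical illustration (not the thesis's actual implementation, and with the function name `wasserstein_1d` chosen here for the example), the empirical 1-Wasserstein distance between two equal-size one-dimensional feature samples reduces to the mean absolute difference between their order statistics:

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1D samples.

    With uniform weights, W1 equals the mean absolute difference
    between the sorted samples (their order statistics).
    """
    assert len(xs) == len(ys), "equal sample sizes assumed for simplicity"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting a distribution by +1 moves every unit of mass a distance of 1:
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

In the attack described by the thesis, a distance of this kind (computed over high-dimensional network features rather than scalars) would be minimized so that patched images' features match those of the target class.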
Books on the topic "Trustworthiness of AI"
Assessing and Improving AI Trustworthiness: Current Contexts and Concerns. Washington, D.C.: National Academies Press, 2021. http://dx.doi.org/10.17226/26208.
Eliot, Dr Lance. AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence and Machine Learning. LBE Press Publishing, 2016.
Book chapters on the topic "Trustworthiness of AI"
Salloum, Said A. "Trustworthiness of the AI". In Studies in Big Data, 643–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52280-2_41.
Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner, and Andreas Holzinger. "Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?" In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.
Gadewadikar, Jyotirmay, Jeremy Marshall, Zachary Bilodeau, and Vatatmaja. "Systems Engineering–Driven AI Assurance and Trustworthiness". In The Proceedings of the 2023 Conference on Systems Engineering Research, 343–56. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49179-5_23.
Kastania, Nikoleta Polyxeni “Paulina”. "AI in Education: Prioritizing Transparency and Trustworthiness". In Encyclopedia of Educational Innovation, 1–5. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-13-2262-4_309-1.
Eguia, Alexander, Nuria Quintano, Irina Marsh, Michel Barreteau, Jakub Główka, and Agnieszka Sprońska. "Ensuring Trustworthiness of Hybrid AI-Based Robotics Systems". In Springer Proceedings in Advanced Robotics, 142–46. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-76428-8_27.
Batut, Aria, Lina Prudhomme, Martijn van Sambeek, and Weiqin Chen. "Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public". In Artificial Intelligence in HCI, 15–26. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60611-3_2.
Tsai, Chun-Hua, and John M. Carroll. "Logic and Pragmatics in AI Explanation". In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.
Ren, Hao, Jinwen Liang, Zicong Hong, Enyuan Zhou, and Junbao Pan. "Application: Privacy, Security, Robustness and Trustworthiness in Edge AI". In Machine Learning on Commodity Tiny Devices, 161–86. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003340225-10.
Nguyen, Duc An, Khanh T. P. Nguyen, and Kamal Medjaher. "Enhancing Trustworthiness in AI-Based Prognostics: A Comprehensive Review of Explainable AI for PHM". In Springer Series in Reliability Engineering, 101–36. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71495-5_6.
Uslu, Suleyman, Davinder Kaur, Samuel J. Rivera, Arjan Durresi, and Meghna Babbar-Sebens. "Causal Inference to Enhance AI Trustworthiness in Environmental Decision-Making". In Advanced Information Networking and Applications, 214–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57916-5_19.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Trustworthiness of AI"
Janev, Valentina, Miloš Nenadović, Dejan Paunović, Sahar Vahdati, Jason Li, Muhammad Hamza Yousuf, Jaume Montanya et al. "IntelliLung AI-DSS Trustworthiness Evaluation Framework". In 2024 32nd Telecommunications Forum (TELFOR), 1–4. IEEE, 2024. https://doi.org/10.1109/telfor63250.2024.10819068.
Ottun, Abdul-Rasheed, Rasinthe Marasinghe, Toluwani Elemosho, Mohan Liyanage, Ashfaq Hussain Ahmed, Michell Boerger, Chamara Sandeepa et al. "SPATIAL: Practical AI Trustworthiness with Human Oversight". In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 1427–30. IEEE, 2024. http://dx.doi.org/10.1109/icdcs60910.2024.00138.
Calabrò, Antonello, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf, and Annabelle Mercier. "Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems". In 20th International Conference on Web Information Systems and Technologies, 288–95. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012998700003825.
Troussas, Christos, Christos Papakostas, Akrivi Krouska, Phivos Mylonas, and Cleo Sgouropoulou. "FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems". In 20th International Conference on Web Information Systems and Technologies, 385–92. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0013061100003825.
Kioskli, Kitty, Laura Bishop, Nineta Polemi, and Antonis Ramfos. "Towards a Human-Centric AI Trustworthiness Risk Management Framework". In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004766.
Wang, Yingxu. "A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems". In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945351.
Echeberria-Barrio, Xabier, Mikel Gorricho, Selene Valencia, and Francesco Zola. "Neuralsentinel: Safeguarding Neural Network Reliability and Trustworthiness". In 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140209.
Garbuk, Sergey V. "Intellimetry as a Way to Ensure AI Trustworthiness". In 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). IEEE, 2018. http://dx.doi.org/10.1109/ic-aiai.2018.8674447.
Awadid, Afef, Kahina Amokrane-Ferka, Henri Sohier, Juliette Mattioli, Faouzi Adjed, Martin Gonzalez, and Souhaiel Khalfaoui. "AI Systems Trustworthiness Assessment: State of the Art". In Workshop on Model-based System Engineering and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012619600003645.
Smith, Carol. "Letting Go of the Numbers: Measuring AI Trustworthiness". In 13th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012644300003654.