Table of contents
A selection of scholarly literature on the topic "Trustworthiness of AI"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Trustworthiness of AI".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.
Journal articles on the topic "Trustworthiness of AI"
Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.
Sen, Rishika, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications." ITU Journal on Future and Evolving Technologies 5, no. 4 (December 10, 2024): 503–14. https://doi.org/10.52953/wibx7049.
Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI." at - Automatisierungstechnik 70, no. 9 (September 1, 2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.
Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.
Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness." Proceedings of the AAAI Symposium Series 4, no. 1 (November 8, 2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.
Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, Ammar Dukhan, Ahmed H. Alkenani, Ahmed Al-Asadi, Haider A. Alwzwazy, et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements." International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.
Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework." Big Data and Cognitive Computing 8, no. 11 (October 24, 2024): 142. http://dx.doi.org/10.3390/bdcc8110142.
Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges." Intelligent Decision Technologies 18, no. 2 (June 7, 2024): 837–54. http://dx.doi.org/10.3233/idt-240366.
Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion." Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.
Ajayi, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering's Key Role in AI Content Trustworthiness." International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.
Dissertations on the topic "Trustworthiness of AI"
Wang, Brydon. "The role of trustworthiness in automated decision-making systems and the law." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231388/1/Brydon_Wang_Thesis.pdf.
Labarbarie, Pol. "Transferable adversarial patches: a potential threat for real-world computer vision algorithms." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG084.
Der volle Inhalt der QuelleThe use of Deep Neural Networks has revolutionized the field of computer vision, leading to significant performance improvement in tasks such as image classification, object detection, and semantic segmentation. Despite these breakthroughs, Deep Learning systems have exhibited vulnerabilities that malicious entities can exploit to induce harmful behavior in AI models. One of the threats is adversarial patch attacks, which are disruptive objects that often resemble stickers and are designed to deceive models when placed in a real-world scene. For example, a patch on a stop sign may sway the network to misclassify it as a speed limit sign. This type of attack raises significant safety issues for computer vision systems operating in the real world. In this thesis, we study if such a patch can disrupt a real-world system without prior knowledge concerning the targeted system.Even though numerous patch attacks have been proposed in the literature, no work in literature describes the prerequisites of a critical patch. One of our contributions is to propose a definition of what may be a critical adversarial patch. To be characterized as critical, adversarial patch attacks must meet two essential criteria. They must be robust to physical transformations summarized by the notion of patch physicality, and they must exhibit transferability among networks, meaning the patch can successfully fool networks without possessing any knowledge about the targeted system. Transferability is an essential prerequisite for a critical patch, as the targeted real-world system is usually protected and inaccessible from the outside. 
Although patch physicality has been developed and improved through multiple works, transferability remains a challenge. To address the challenge of attack transferability among image classifiers, we introduce a new adversarial patch attack based on the Wasserstein distance, which measures the distance between two probability distributions. We exploit the Wasserstein distance to alter the feature distribution of a set of corrupted images so that it matches the feature distribution of images of a target class. When placed in the scene, our patch causes various state-of-the-art networks to output the class chosen as the target distribution. We show that our patch is more transferable than previous patches and can be deployed in the real world to deceive real-world image classifiers. In addition to our work on classification networks, we conduct a study on patch transferability against object detectors, as these systems are more often involved in real-world applications. We focus on invisible-cloak patches, a particular type of patch designed to hide objects. Our findings reveal several significant flaws in the current evaluation protocol used to assess the effectiveness of these patches. To address these flaws, we introduce a surrogate problem that ensures the produced patch actually suppresses the object we want to attack. We show that state-of-the-art adversarial patches against object detectors fail to hide objects from being detected, limiting their criticality against real-world systems.
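The feature-distribution-matching idea in this abstract can be illustrated with a small sketch. The attack's loss pushes the features of patched images toward the feature distribution of a target class; a common computable stand-in is the averaged per-dimension (sliced) 1-D Wasserstein distance. The random vectors below stand in for real network activations, and the function name is an illustrative assumption, not the thesis's actual implementation:

```python
import numpy as np
from scipy.stats import wasserstein_distance


def sliced_feature_distance(feats_a, feats_b):
    """Average 1-D Wasserstein distance across feature dimensions.

    feats_a, feats_b: (n_samples, n_dims) arrays of network
    activations (here simulated with Gaussians, not real features).
    """
    n_dims = feats_a.shape[1]
    return float(np.mean([
        wasserstein_distance(feats_a[:, d], feats_b[:, d])
        for d in range(n_dims)
    ]))


rng = np.random.default_rng(0)
target = rng.normal(loc=2.0, scale=1.0, size=(256, 8))  # "target class" features
before = rng.normal(loc=0.0, scale=1.0, size=(256, 8))  # features of unpatched images
after = rng.normal(loc=1.8, scale=1.0, size=(256, 8))   # features after patch optimization

# A successful patch moves the corrupted images' feature distribution
# toward the target class's, shrinking the distance the attack minimizes.
print(sliced_feature_distance(before, target) > sliced_feature_distance(after, target))  # True
```

In the attack itself this quantity would be minimized with respect to the patch pixels by backpropagating through the feature extractor; the sketch only shows the distributional objective being compared before and after optimization.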
Books on the topic "Trustworthiness of AI"
Assessing and Improving AI Trustworthiness: Current Contexts and Concerns. Washington, D.C.: National Academies Press, 2021. http://dx.doi.org/10.17226/26208.
Eliot, Lance. AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence and Machine Learning. LBE Press Publishing, 2016.
Book chapters on the topic "Trustworthiness of AI"
Salloum, Said A. "Trustworthiness of the AI." In Studies in Big Data, 643–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52280-2_41.
Kieseberg, Peter, Edgar Weippl, A. Min Tjoa, Federico Cabitza, Andrea Campagner, and Andreas Holzinger. "Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?" In Lecture Notes in Computer Science, 1–12. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_1.
Gadewadikar, Jyotirmay, Jeremy Marshall, Zachary Bilodeau, and Vatatmaja. "Systems Engineering–Driven AI Assurance and Trustworthiness." In The Proceedings of the 2023 Conference on Systems Engineering Research, 343–56. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49179-5_23.
Kastania, Nikoleta Polyxeni "Paulina". "AI in Education: Prioritizing Transparency and Trustworthiness." In Encyclopedia of Educational Innovation, 1–5. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-13-2262-4_309-1.
Eguia, Alexander, Nuria Quintano, Irina Marsh, Michel Barreteau, Jakub Główka, and Agnieszka Sprońska. "Ensuring Trustworthiness of Hybrid AI-Based Robotics Systems." In Springer Proceedings in Advanced Robotics, 142–46. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-76428-8_27.
Batut, Aria, Lina Prudhomme, Martijn van Sambeek, and Weiqin Chen. "Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public." In Artificial Intelligence in HCI, 15–26. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60611-3_2.
Tsai, Chun-Hua, and John M. Carroll. "Logic and Pragmatics in AI Explanation." In xxAI - Beyond Explainable AI, 387–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_19.
Ren, Hao, Jinwen Liang, Zicong Hong, Enyuan Zhou, and Junbao Pan. "Application: Privacy, Security, Robustness and Trustworthiness in Edge AI." In Machine Learning on Commodity Tiny Devices, 161–86. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003340225-10.
Nguyen, Duc An, Khanh T. P. Nguyen, and Kamal Medjaher. "Enhancing Trustworthiness in AI-Based Prognostics: A Comprehensive Review of Explainable AI for PHM." In Springer Series in Reliability Engineering, 101–36. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71495-5_6.
Uslu, Suleyman, Davinder Kaur, Samuel J. Rivera, Arjan Durresi, and Meghna Babbar-Sebens. "Causal Inference to Enhance AI Trustworthiness in Environmental Decision-Making." In Advanced Information Networking and Applications, 214–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57916-5_19.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Trustworthiness of AI"
Janev, Valentina, Miloš Nenadović, Dejan Paunović, Sahar Vahdati, Jason Li, Muhammad Hamza Yousuf, Jaume Montanya, et al. "IntelliLung AI-DSS Trustworthiness Evaluation Framework." In 2024 32nd Telecommunications Forum (TELFOR), 1–4. IEEE, 2024. https://doi.org/10.1109/telfor63250.2024.10819068.
Ottun, Abdul-Rasheed, Rasinthe Marasinghe, Toluwani Elemosho, Mohan Liyanage, Ashfaq Hussain Ahmed, Michell Boerger, Chamara Sandeepa, et al. "SPATIAL: Practical AI Trustworthiness with Human Oversight." In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 1427–30. IEEE, 2024. http://dx.doi.org/10.1109/icdcs60910.2024.00138.
Calabrò, Antonello, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf, and Annabelle Mercier. "Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems." In 20th International Conference on Web Information Systems and Technologies, 288–95. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012998700003825.
Troussas, Christos, Christos Papakostas, Akrivi Krouska, Phivos Mylonas, and Cleo Sgouropoulou. "FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems." In 20th International Conference on Web Information Systems and Technologies, 385–92. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0013061100003825.
Kioskli, Kitty, Laura Bishop, Nineta Polemi, and Antonis Ramfos. "Towards a Human-Centric AI Trustworthiness Risk Management Framework." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004766.
Wang, Yingxu. "A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems." In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945351.
Echeberria-Barrio, Xabier, Mikel Gorricho, Selene Valencia, and Francesco Zola. "Neuralsentinel: Safeguarding Neural Network Reliability and Trustworthiness." In 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140209.
Garbuk, Sergey V. "Intellimetry as a Way to Ensure AI Trustworthiness." In 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). IEEE, 2018. http://dx.doi.org/10.1109/ic-aiai.2018.8674447.
Awadid, Afef, Kahina Amokrane-Ferka, Henri Sohier, Juliette Mattioli, Faouzi Adjed, Martin Gonzalez, and Souhaiel Khalfaoui. "AI Systems Trustworthiness Assessment: State of the Art." In Workshop on Model-based System Engineering and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012619600003645.
Smith, Carol. "Letting Go of the Numbers: Measuring AI Trustworthiness." In 13th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012644300003654.