A selection of scholarly literature on the topic "Semantic Explainable AI"
Format a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Semantic Explainable AI."
Next to every work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication in .pdf format and read its abstract online, provided these are available in the metadata.
Journal articles on the topic "Semantic Explainable AI"
Li, Ding, Yan Liu, and Jun Huang. "Assessment of Software Vulnerability Contributing Factors by Model-Agnostic Explainable AI." Machine Learning and Knowledge Extraction 6, no. 2 (May 16, 2024): 1087–113. http://dx.doi.org/10.3390/make6020050.
Turley, Jordan E., Jeffrey A. Dunne, and Zerotti Woods. "Explainable AI for trustworthy image analysis." Journal of the Acoustical Society of America 156, no. 4_Supplement (October 1, 2024): A109. https://doi.org/10.1121/10.0035277.
Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson. "Explainable Artificial Intelligence for Developing Smart Cities Solutions." Smart Cities 3, no. 4 (November 13, 2020): 1353–82. http://dx.doi.org/10.3390/smartcities3040065.
Mankodiya, Harsh, Dhairya Jadav, Rajesh Gupta, Sudeep Tanwar, Wei-Chiang Hong, and Ravi Sharma. "OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles." Applied Sciences 12, no. 11 (May 24, 2022): 5310. http://dx.doi.org/10.3390/app12115310.
Ayoob, Mohamed, Oshan Nettasinghe, Vithushan Sylvester, Helmini Bowala, and Hamdaan Mohideen. "Peering into the Heart: A Comprehensive Exploration of Semantic Segmentation and Explainable AI on the MnMs-2 Cardiac MRI Dataset." Applied Computer Systems 30, no. 1 (January 1, 2025): 12–20. https://doi.org/10.2478/acss-2025-0002.
Terziyan, Vagan, and Oleksandra Vitko. "Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models." Procedia Computer Science 200 (2022): 216–26. http://dx.doi.org/10.1016/j.procs.2022.01.220.
Schorr, Christian, Payman Goodarzi, Fei Chen, and Tim Dahmen. "Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets." Applied Sciences 11, no. 5 (March 3, 2021): 2199. http://dx.doi.org/10.3390/app11052199.
Futia, Giuseppe, and Antonio Vetrò. "On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research." Information 11, no. 2 (February 22, 2020): 122. http://dx.doi.org/10.3390/info11020122.
Hindennach, Susanne, Lei Shi, Filip Miletić, and Andreas Bulling. "Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research." Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–43. http://dx.doi.org/10.1145/3641009.
Silva, Vivian S., André Freitas, and Siegfried Handschuh. "Exploring Knowledge Graphs in an Interpretable Composite Approach for Text Entailment." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7023–30. http://dx.doi.org/10.1609/aaai.v33i01.33017023.
Dissertations on the topic "Semantic Explainable AI"
Gjeka, Mario. "Uno strumento per le spiegazioni di sistemi di Explainable Artificial Intelligence." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Futia, Giuseppe. "Neural Networks for Building Semantic Models and Knowledge Graphs." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2850594.
Naqvi, Syed Muhammad Raza. "Exploration des LLM et de l'XAI sémantique pour les capacités des robots industriels et les connaissances communes en matière de fabrication." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2025. http://www.theses.fr/2025TLSEP014.
In Industry 4.0, advanced manufacturing is vital in shaping future factories, enabling enhanced planning, scheduling, and control. The ability to adapt production lines swiftly in response to customer demands or unexpected situations is essential to enhance the future of manufacturing. While AI is emerging as a solution, industries still rely on human expertise due to trust issues and a lack of transparency in AI decisions. Explainable AI integrating commonsense knowledge related to manufacturing is crucial for making AI decisions understandable and trustworthy. Within this context, we propose the S-XAI framework, an integrated solution combining machine specifications with manufacturing commonsense knowledge (MCSK) to provide explainable and transparent decision-making. The focus is on providing real-time machine capabilities to ensure precise decision-making while simultaneously explaining the decision-making process to all involved stakeholders. Accordingly, the first objective was formalizing machine specifications, including capabilities, capacities, functions, quality, and process characteristics, focusing on robotics. To do so, we created the Robot Capability Ontology (RCO), which formalizes all relevant aspects of machine specifications: Capability, Capacity, Function, Quality, and Process Characteristics. On top of this formalization, the RCO allows manufacturing stakeholders to capture robotic capabilities described in specification manuals (advertised capabilities) and compare them with real-world performance (operational capabilities). The RCO is based on the Machine Service Description Language, a domain reference ontology created for manufacturing services, and is aligned with the Basic Formal Ontology, the Industrial Foundry Ontology, the Information Artifact Ontology, and the Relations Ontology. The second objective was the formalization of MCSK.
We introduce MCSK and present a methodology for identifying it, starting with recognizing different commonsense-knowledge patterns in manufacturing and aligning them with manufacturing concepts. Extracting MCSK in a usable form is challenging, so our approach structures MCSK into natural-language (NL) statements utilizing LLMs to facilitate rule-based reasoning, thereby enhancing decision-making capabilities. The third and final objective is to propose an S-XAI framework utilizing the RCO and MCSK to assess whether existing machines can perform specific tasks and to generate understandable NL explanations. This was achieved by integrating the RCO, which provides operational capabilities such as repeatability and precision, with MCSK, which outlines the process requirements. By utilizing MCSK-based semantic reasoning, the S-XAI system seamlessly provides NL explanations that detail each step of the logic and its outcome. In the S-XAI framework, a neural network (NN) predicts the operational capabilities of robots, while symbolic AI incorporates these predictions within an MCSK-based reasoning system grounded in the RCO. This hybrid setup maximizes the strengths of each AI approach and ensures that predictions support a transparent decision-making process. Additionally, S-XAI enhances the interpretability of NN predictions through XAI techniques such as LIME, SHAP, and PDP, clarifying NN predictions and enabling detailed insights for better calibration and proactive management, ultimately fostering a resilient and informed manufacturing environment.
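The hybrid pattern this abstract describes can be illustrated with a toy sketch. All names, values, and lookup tables below are hypothetical stand-ins, not the thesis's actual implementation: a learned model's capability prediction is replaced by a fixed lookup, and the MCSK-based reasoning is reduced to a single rule that compares a predicted repeatability against a process requirement and produces a natural-language explanation.

```python
# Hypothetical stand-in for the NN prediction of an operational
# capability (repeatability, in mm) per robot.
PREDICTED_REPEATABILITY_MM = {"robot_A": 0.05, "robot_B": 0.30}

# Hypothetical stand-in for an MCSK-style process requirement
# (maximum allowed repeatability, in mm) per task.
REQUIRED_REPEATABILITY_MM = {"precision_assembly": 0.10}


def explain_capability(robot: str, task: str) -> tuple[bool, str]:
    """Rule-based capability check that returns a verdict plus a
    natural-language explanation of the comparison it performed."""
    predicted = PREDICTED_REPEATABILITY_MM[robot]
    required = REQUIRED_REPEATABILITY_MM[task]
    capable = predicted <= required
    verdict = "can" if capable else "cannot"
    explanation = (
        f"{robot} {verdict} perform '{task}': predicted repeatability "
        f"{predicted} mm vs. required {required} mm."
    )
    return capable, explanation


if __name__ == "__main__":
    for robot in ("robot_A", "robot_B"):
        ok, text = explain_capability(robot, "precision_assembly")
        print(ok, text)
```

The point of the sketch is the division of labor: the learned component only supplies numbers, while the symbolic layer both decides and explains, so every verdict carries the comparison that produced it.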
Book chapters on the topic "Semantic Explainable AI"
Sarker, Md Kamruzzaman, Joshua Schwartz, Pascal Hitzler, Lu Zhou, Srikanth Nadella, Brandon Minnery, Ion Juvina, Michael L. Raymer, and William R. Aue. "Wikipedia Knowledge Graph for Explainable AI." In Knowledge Graphs and Semantic Web, 72–87. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65384-2_6.
Asuquo, Daniel Ekpenyong, Patience Usoro Usip, and Kingsley Friday Attai. "Explainable Machine Learning-Based Knowledge Graph for Modeling Location-Based Recreational Services from Users Profile." In Semantic AI in Knowledge Graphs, 141–62. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003313267-7.
Hofmarcher, Markus, Thomas Unterthiner, José Arjona-Medina, Günter Klambauer, Sepp Hochreiter, and Bernhard Nessler. "Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 285–96. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_15.
Sabbatini, Federico, Giovanni Ciatto, and Andrea Omicini. "Semantic Web-Based Interoperability for Intelligent Agents with PSyKE." In Explainable and Transparent AI and Multi-Agent Systems, 124–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15565-9_8.
Hong, Seunghoon, Dingdong Yang, Jongwook Choi, and Honglak Lee. "Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 77–95. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_5.
Sander, Jennifer, and Achim Kuwertz. "Supplementing Machine Learning with Knowledge Models Towards Semantic Explainable AI." In Advances in Intelligent Systems and Computing, 3–11. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74009-2_1.
Huang, Qi, Emanuele Mezzi, Osman Mutlu, Miltiadis Kofinas, Vidya Prasad, Shadnan Azwad Khan, Elena Ranguelova, and Niki van Stein. "Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI." In Communications in Computer and Information Science, 308–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63787-2_16.
Mikriukov, Georgii, Gesina Schwalbe, Christian Hellert, and Korinna Bade. "Revealing Similar Semantics Inside CNNs: An Interpretable Concept-Based Comparison of Feature Spaces." In Communications in Computer and Information Science, 3–20. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-74630-7_1.
Mikriukov, Georgii, Gesina Schwalbe, Christian Hellert, and Korinna Bade. "Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability." In Communications in Computer and Information Science, 499–524. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0_26.
Reed, Stephen K. "Explainable AI." In Cognitive Skills You Need for the 21st Century, 170–79. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197529003.003.0015.
Conference papers on the topic "Semantic Explainable AI"
Schneider, Sarah, Doris Antensteiner, Daniel Soukup, and Matthias Scheutz. "Encoding Semantic Attributes - Towards Explainable AI in Industry." In PETRA '23: Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3594806.3596531.
Das, Devleena, and Sonia Chernova. "Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and Pairwise Ranking to Explain Robot Failures." In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021. http://dx.doi.org/10.1109/iros51168.2021.9635890.
Sarkar, Rajdeep, Mihael Arcan, and John McCrae. "KG-CRuSE: Recurrent Walks over Knowledge Graph for Explainable Conversation Reasoning using Semantic Embeddings." In Proceedings of the 4th Workshop on NLP for Conversational AI. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlp4convai-1.9.
Sampat, Shailaja. "Technical, Hard and Explainable Question Answering (THE-QA)." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/916.
Bardozzo, Francesco, Mattia Delli Priscoli, Toby Collins, Antonello Forgione, Alexandre Hostettler, and Roberto Tagliaferri. "Cross X-AI: Explainable Semantic Segmentation of Laparoscopic Images in Relation to Depth Estimation." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892345.
Davis, Eric, and Katrina Schleisman. "Integrating Episodic and Semantic Memory in Machine Teammates to Enable Explainable After-Action Review and Intervention Planning in HAA Operations." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005003.
Chen, Yu-Hsuan, Levent Burak Kara, and Jonathan Cagan. "Automating Style Analysis and Visualization With Explainable AI - Case Studies on Brand Recognition." In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-115150.
Nguyen, Hung, Tobias Clement, Loc Nguyen, Nils Kemmerzell, Binh Truong, Khang Nguyen, Mohamed Abdelaal, and Hung Cao. "LangXAI: Integrating Large Vision Models for Generating Textual Explanations to Enhance Explainability in Visual Perception Tasks." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/1025.
Davis, Eric, Sourya Dey, Adam Karvonen, Ethan Lew, Donya Quick, Panchapakesan Shyamshankar, Ted Hille, and Matt Lebeau. "Leveraging Manifold Learning and Relationship Equity Management for Symbiotic Explainable Artificial Intelligence." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003759.
Basaj, Dominika, Witold Oleszkiewicz, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Tomasz Trzcinski, and Bartosz Zieliński. "Explaining Self-Supervised Image Representations with Visual Probing." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/82.