Academic literature on the topic 'Semantic Explainable AI'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Semantic Explainable AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Semantic Explainable AI"

1

Li, Ding, Yan Liu, and Jun Huang. "Assessment of Software Vulnerability Contributing Factors by Model-Agnostic Explainable AI." Machine Learning and Knowledge Extraction 6, no. 2 (May 16, 2024): 1087–113. http://dx.doi.org/10.3390/make6020050.

Abstract:
Software vulnerability detection aims to proactively reduce the risk to software security and reliability. Despite advancements in deep-learning-based detection, a semantic gap still remains between learned features and human-understandable vulnerability semantics. In this paper, we present an XAI-based framework to assess program code in a graph context as feature representations and their effect on code vulnerability classification into multiple Common Weakness Enumeration (CWE) types. Our XAI framework is deep-learning-model-agnostic and programming-language-neutral. We rank the feature importance of 40 syntactic constructs for each of the top 20 distributed CWE types from three datasets in Java and C++. By means of four metrics of information retrieval, we measure the similarity of human-understandable CWE types using each CWE type’s feature contribution ranking learned from XAI methods. We observe that the subtle semantic difference between CWE types occurs after the variation in neighboring features’ contribution rankings. Our study shows that the XAI explanation results have approximately 78% Top-1 to 89% Top-5 similarity hit rates and a mean average precision of 0.70 compared with the baseline of CWE similarity identified by the open community experts. Our framework allows for code vulnerability patterns to be learned and contributing factors to be assessed at the same stage.
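
As a concrete illustration of the evaluation described in this abstract, the sketch below computes a Top-k hit rate and mean average precision for ranked CWE similarities against an expert baseline. All CWE labels and rankings are hypothetical, and the code is not taken from the paper.

```python
# Illustrative sketch only (not the paper's code): compare ranked CWE similarities,
# derived from feature-contribution rankings, against an expert baseline using
# Top-k hit rate and mean average precision. All data below are hypothetical.
import numpy as np

def top_k_hit_rate(retrieved, relevant, k):
    """Fraction of queries whose top-k retrieved items contain a relevant item."""
    hits = sum(1 for r, rel in zip(retrieved, relevant) if set(r[:k]) & set(rel))
    return hits / len(retrieved)

def mean_average_precision(retrieved, relevant):
    """Mean average precision over all queries."""
    aps = []
    for r, rel in zip(retrieved, relevant):
        hits, precisions = 0, []
        for i, item in enumerate(r, start=1):
            if item in rel:
                hits += 1
                precisions.append(hits / i)
        aps.append(np.mean(precisions) if precisions else 0.0)
    return float(np.mean(aps))

# Hypothetical XAI output: for each query CWE, other CWE types ranked by similarity
# of their feature-contribution rankings.
retrieved_cwes = [["CWE-79", "CWE-89", "CWE-20"], ["CWE-787", "CWE-125", "CWE-20"]]
# Hypothetical expert baseline of related CWE types per query.
expert_related = [{"CWE-89"}, {"CWE-125", "CWE-787"}]

print("Top-1 hit rate:", top_k_hit_rate(retrieved_cwes, expert_related, k=1))
print("MAP:", mean_average_precision(retrieved_cwes, expert_related))
```
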
2

Turley, Jordan E., Jeffrey A. Dunne, and Zerotti Woods. "Explainable AI for trustworthy image analysis." Journal of the Acoustical Society of America 156, no. 4_Supplement (October 1, 2024): A109. https://doi.org/10.1121/10.0035277.

Abstract:
The capabilities of convolutional neural networks to explore data in various fields has been documented extensively throughout the literature. One common challenge with adopting AI/ML solutions, however, is the issue of trust. Decision makers are rightfully hesitant to take action based solely on “the computer said so” even if the computer has great confidence that it is correct. There is obvious value in a system that can answer the question of why it made a given prediction and back this up with specific evidence. Basic models like regression or nearest neighbors can support such answers but have significant limitations in real-world applications, and more capable models like neural networks are much too complex to interpret. We have developed a prototype system that combines convolutional neural networks with semantic representations of reasonableness. We use logic similar to how humans justify conclusions, breaking objects into smaller pieces that we trust a neural network to identify. Leveraging a suite of machine learning algorithms, the tool provides not merely an output “conclusion,” but a supporting string of evidence that humans can use to better understand the conclusion, as well as explore potential weaknesses in the AI/ML components. This paper will provide an in-depth overview of the prototype and show some exemplar results. [Work supported by the Johns Hopkins University Applied Physics Laboratory.]
3

Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson. "Explainable Artificial Intelligence for Developing Smart Cities Solutions." Smart Cities 3, no. 4 (November 13, 2020): 1353–82. http://dx.doi.org/10.3390/smartcities3040065.

Abstract:
Traditional Artificial Intelligence (AI) technologies used in developing smart cities solutions, Machine Learning (ML) and recently Deep Learning (DL), rely more on utilising best representative training datasets and features engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome of solutions less explainable, i.e., it is often not possible to explain the results of the model. There is a growing concern among policymakers in cities with this lack of explainability of AI solutions, and this is considered a major hindrance in the wider acceptability and trust in such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart cities flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding issues is an important aspect of any flood monitoring solution. Typical solutions for this problem involve the use of cameras to capture images showing the affected areas in real-time with different objects such as leaves, plastic bottles etc., and building a DL-based classifier to detect such objects and classify blockages based on the presence and coverage of these objects in the images. In this work, we uniquely propose an Explainable AI solution using DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level and semantic rules designed with close consultation with experts carry out the classification. By using the expert knowledge in the flooding context, our hybrid classifier provides the flexibility on categorising the image using objects and their coverage relationships. The experimental results demonstrated with a real-world use case showed that this hybrid approach of image classification has on average 11% improvement (F-Measure) in image classification performance compared to DL-only classifier. It also has the distinct advantage of integrating experts’ knowledge on defining the decision-making rules to represent the complex circumstances and using such knowledge to explain the results.
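
To make the hybrid design concrete, here is a minimal sketch under the assumption that a DL detector supplies per-object coverage fractions. The object names, thresholds, and class labels are invented for illustration and are not the rules from the paper.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a hybrid classifier:
# a DL detector would supply object coverage estimates, and expert-designed semantic
# rules turn them into a blockage class. Thresholds and class names are hypothetical.
from typing import Dict

def classify_blockage(coverage: Dict[str, float]) -> str:
    """coverage: fraction of the gully/drain area covered per detected object type."""
    blocking = coverage.get("leaves", 0.0) + coverage.get("plastic_bottle", 0.0)
    if blocking > 0.6:
        return "blocked"
    if blocking > 0.2:
        return "partially_blocked"
    return "clear"

# In the full pipeline these numbers would come from the DL object detector.
detected = {"leaves": 0.35, "plastic_bottle": 0.10}
print(classify_blockage(detected))                                # "partially_blocked"
print("because:", {k: v for k, v in detected.items() if v > 0})   # rule-level evidence
```
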
4

Mankodiya, Harsh, Dhairya Jadav, Rajesh Gupta, Sudeep Tanwar, Wei-Chiang Hong, and Ravi Sharma. "OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles." Applied Sciences 12, no. 11 (May 24, 2022): 5310. http://dx.doi.org/10.3390/app12115310.

Abstract:
In recent years, artificial intelligence (AI) has become one of the most prominent fields in autonomous vehicles (AVs). With the help of AI, the stress levels of drivers have been reduced, as most of the work is executed by the AV itself. With the increasing complexity of models, explainable artificial intelligence (XAI) techniques work as handy tools that allow naive people and developers to understand the intricate workings of deep learning models. These techniques can be paralleled to AI to increase their interpretability. One essential task of AVs is to be able to follow the road. This paper attempts to justify how AVs can detect and segment the road on which they are moving using deep learning (DL) models. We trained and compared three semantic segmentation architectures for the task of pixel-wise road detection. Max IoU scores of 0.9459 and 0.9621 were obtained on the train and test set. Such DL algorithms are called “black box models” as they are hard to interpret due to their highly complex structures. Integrating XAI enables us to interpret and comprehend the predictions of these abstract models. We applied various XAI methods and generated explanations for the proposed segmentation model for road detection in AVs.
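
For reference, the IoU metric reported in the abstract can be computed for a binary road mask as in the following sketch; the masks are toy arrays, not the paper's data or models.

```python
# Illustrative sketch (not from the paper): pixel-wise IoU for a binary road mask,
# the metric the abstract reports for the segmentation models. Masks are hypothetical.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union else 1.0

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(iou(pred, target), 4))  # 0.5
```
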
5

Ayoob, Mohamed, Oshan Nettasinghe, Vithushan Sylvester, Helmini Bowala, and Hamdaan Mohideen. "Peering into the Heart: A Comprehensive Exploration of Semantic Segmentation and Explainable AI on the MnMs-2 Cardiac MRI Dataset." Applied Computer Systems 30, no. 1 (January 1, 2025): 12–20. https://doi.org/10.2478/acss-2025-0002.

Abstract:
Accurate and interpretable segmentation of medical images is crucial for computer-aided diagnosis and image-guided interventions. This study explores the integration of semantic segmentation and explainable AI techniques on the MnMs-2 Cardiac MRI dataset. We propose a segmentation model that achieves competitive Dice scores (nearly 90%) and Hausdorff distance (less than 70), demonstrating its effectiveness for cardiac MRI analysis. Furthermore, we leverage Grad-CAM and Feature Ablation, explainable AI techniques, to visualise the regions of interest guiding the model predictions for a target class. This integration enhances interpretability, allowing us to gain insights into the model decision-making process and build trust in its predictions.
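
The two reported metrics can be illustrated with a small sketch. The masks are toy arrays, and using SciPy's directed_hausdorff for the boundary distance is an assumption about tooling rather than the study's actual pipeline.

```python
# Minimal sketch (assumed example, not the study's code) of the two metrics mentioned:
# the Dice score for a binary segmentation mask and the symmetric Hausdorff distance
# between foreground point sets, via SciPy's directed_hausdorff.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return float(2 * np.logical_and(pred, target).sum() / denom) if denom else 1.0

pred   = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 1], [0, 0, 0]])
print("Dice:", round(dice(pred, target), 3))

# Hausdorff distance between the foreground pixel coordinates of the two masks.
p_pts, t_pts = np.argwhere(pred), np.argwhere(target)
hd = max(directed_hausdorff(p_pts, t_pts)[0], directed_hausdorff(t_pts, p_pts)[0])
print("Hausdorff:", hd)
```
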
6

Terziyan, Vagan, and Oleksandra Vitko. "Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models." Procedia Computer Science 200 (2022): 216–26. http://dx.doi.org/10.1016/j.procs.2022.01.220.

7

Schorr, Christian, Payman Goodarzi, Fei Chen, and Tim Dahmen. "Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets." Applied Sciences 11, no. 5 (March 3, 2021): 2199. http://dx.doi.org/10.3390/app11052199.

Abstract:
Trust in artificial intelligence (AI) predictions is a crucial point for a widespread acceptance of new technologies, especially in sensitive areas like autonomous driving. The need for tools explaining AI for deep learning of images is thus eminent. Our proposed toolbox Neuroscope addresses this demand by offering state-of-the-art visualization algorithms for image classification and newly adapted methods for semantic segmentation of convolutional neural nets (CNNs). With its easy to use graphical user interface (GUI), it provides visualization on all layers of a CNN. Due to its open model-view-controller architecture, networks generated and trained with Keras and PyTorch are processable, with an interface allowing extension to additional frameworks. We demonstrate the explanation abilities provided by Neuroscope using the example of traffic scene analysis.
8

Futia, Giuseppe, and Antonio Vetrò. "On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research." Information 11, no. 2 (February 22, 2020): 122. http://dx.doi.org/10.3390/info11020122.

Abstract:
Deep learning models contributed to reaching unprecedented results in prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights on how a specific result was achieved. In contexts where the impact of AI on human life is relevant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property, but it is -or, in some cases, it will be soon-a legal requirement. Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions in deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI—while being less flexible and robust to noise compared to deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches existing in the literature, underlying their strengths and limitations, and we propose neural-symbolic integration as a cornerstone to design an AI which is closer to non-insiders comprehension. Within such a general direction, we identify three specific challenges for future research—knowledge matching, cross-disciplinary explanations and interactive explanations.
9

Hindennach, Susanne, Lei Shi, Filip Miletić, and Andreas Bulling. "Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research." Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–43. http://dx.doi.org/10.1145/3641009.

Abstract:
When users perceive AI systems as mindful, independent agents, they hold them responsible instead of the AI experts who created and designed these systems. So far, it has not been studied whether explanations support this shift in responsibility through the use of mind-attributing verbs like "to think". To better understand the prevalence of mind-attributing explanations we analyse AI explanations in 3,533 explainable AI (XAI) research articles from the Semantic Scholar Open Research Corpus (S2ORC). Using methods from semantic shift detection, we identify three dominant types of mind attribution: (1) metaphorical (e.g. "to learn" or "to predict"), (2) awareness (e.g. "to consider"), and (3) agency (e.g. "to make decisions"). We then analyse the impact of mind-attributing explanations on awareness and responsibility in a vignette-based experiment with 199 participants. We find that participants who were given a mind-attributing explanation were more likely to rate the AI system as aware of the harm it caused. Moreover, the mind-attributing explanation had a responsibility-concealing effect: Considering the AI experts' involvement lead to reduced ratings of AI responsibility for participants who were given a non-mind-attributing or no explanation. In contrast, participants who read the mind-attributing explanation still held the AI system responsible despite considering the AI experts' involvement. Taken together, our work underlines the need to carefully phrase explanations about AI systems in scientific writing to reduce mind attribution and clearly communicate human responsibility.
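
A toy sketch of the core idea, flagging mind-attributing verbs in explanation text, is given below. The verb lists and prefix-matching heuristic are invented for illustration and are far cruder than the semantic-shift methods used in the paper.

```python
# Toy sketch: flag mind-attributing verbs in an explanation sentence.
# The verb lists and prefix matching below are hypothetical, not the authors' lexicon.
import re

MIND_VERBS = {
    "metaphorical": {"learn", "predict"},
    "awareness": {"consider"},
    "agency": {"decide", "choose"},
}

def mind_attributions(sentence: str) -> dict:
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return {category: sorted(v for v in verbs if any(t.startswith(v) for t in tokens))
            for category, verbs in MIND_VERBS.items()}

print(mind_attributions("The AI system considered the input and decided to predict a label."))
# -> {'metaphorical': ['predict'], 'awareness': ['consider'], 'agency': ['decide']}
```
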
10

Silva, Vivian S., André Freitas, and Siegfried Handschuh. "Exploring Knowledge Graphs in an Interpretable Composite Approach for Text Entailment." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7023–30. http://dx.doi.org/10.1609/aaai.v33i01.33017023.

Abstract:
Recognizing textual entailment is a key task for many semantic applications, such as Question Answering, Text Summarization, and Information Extraction, among others. Entailment scenarios can range from a simple syntactic variation to more complex semantic relationships between pieces of text, but most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. We propose a composite approach for recognizing text entailment which analyzes the entailment pair to decide whether it must be resolved syntactically or semantically. We also make the answer interpretable: whenever an entailment is solved semantically, we explore a knowledge base composed of structured lexical definitions to generate natural language humanlike justifications, explaining the semantic relationship holding between the pieces of text. Besides outperforming well-established entailment algorithms, our composite approach gives an important step towards Explainable AI, using world knowledge to make the semantic reasoning process explicit and understandable.
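
The routing step of a composite approach can be sketched as below; the lexical-overlap heuristic and threshold are illustrative assumptions, not the decision procedure used by the authors.

```python
# Minimal sketch (hypothetical heuristic, not the authors' method) of the "composite"
# idea: route an entailment pair to a syntactic or a semantic solver depending on how
# much the hypothesis overlaps lexically with the text.
def route_entailment(text: str, hypothesis: str) -> str:
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    overlap = len(t & h) / len(h) if h else 0.0
    return "syntactic" if overlap > 0.8 else "semantic"

print(route_entailment("A man is playing a guitar", "A man plays a guitar"))       # semantic
print(route_entailment("A man is playing a guitar", "A man is playing a guitar"))  # syntactic
```
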

Dissertations / Theses on the topic "Semantic Explainable AI"

1

Gjeka, Mario. "Uno strumento per le spiegazioni di sistemi di Explainable Artificial Intelligence." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract:
The aim of this thesis is to show the importance of explanations in an intelligent system. The need for explainable and transparent artificial intelligence is growing considerably, a need highlighted by companies' efforts to develop transparent and explainable intelligent computer systems.
2

Futia, Giuseppe. "Neural Networks for Building Semantic Models and Knowledge Graphs." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2850594.

3

Naqvi, Syed Muhammad Raza. "Exploration des LLM et de l'XAI sémantique pour les capacités des robots industriels et les connaissances communes en matière de fabrication." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2025. http://www.theses.fr/2025TLSEP014.

Abstract:
In Industry 4.0, advanced manufacturing is vital in shaping future factories, enabling enhanced planning, scheduling, and control. The ability to adapt production lines swiftly in response to customer demands or unexpected situations is essential to enhance the future of manufacturing. While AI is emerging as a solution, industries still rely on human expertise due to trust issues and a lack of transparency in AI decisions. Explainable AI integrating commonsense knowledge related to manufacturing is crucial for making AI decisions understandable and trustworthy. Within this context, we propose the S-XAI framework, an integrated solution combining machine specifications with manufacturing commonsense knowledge (MCSK) to provide explainable and transparent decision-making. The focus is on providing real-time machine capabilities to ensure precise decision-making while simultaneously explaining the decision-making process to all involved stakeholders. Accordingly, the first objective was formalizing machine specifications, including capabilities, capacities, functions, quality, and process characteristics, focusing on robotics. To do so, we created a Robot Capability Ontology (RCO) formalizing all relevant aspects of machine specifications, such as Capability, Capacity, Function, Quality, and Process Characteristics. On top of this formalization, the RCO allows manufacturing stakeholders to capture robotic capabilities described in specification manuals (advertised capabilities) and compare them with real-world performance (operational capabilities). The RCO is based on the Machine Service Description Language, a domain reference ontology created for manufacturing services, and aligned with the Basic Formal Ontology, Industrial Foundry Ontology, Information Artifact Ontology, and Relations Ontology. The second objective was the formalization of MCSK. We introduce MCSK and present a methodology for identifying it, starting with recognizing different CSK patterns in manufacturing and aligning them with manufacturing concepts. Extracting MCSK in a usable form is challenging, so our approach structures MCSK into natural language (NL) statements utilizing LLMs to facilitate rule-based reasoning, thereby enhancing decision-making capabilities. The third and final objective is to propose an S-XAI framework utilizing the RCO and MCSK to assess if existing machines can perform specific tasks and generate understandable NL explanations. This was achieved by integrating the RCO, which provides operational capabilities like repeatability and precision, with MCSK, which outlines the process requirements. By utilizing MCSK-based semantic reasoning, the S-XAI system seamlessly provides NL explanations that detail each logic and outcome. In the S-XAI framework, a neural network (NN) predicts the operational capabilities of robots, while symbolic AI incorporates these predictions within an MCSK-based reasoning system grounded in the RCO. This hybrid setup maximizes the strengths of each AI system and ensures that predictions support a transparent decision-making process. Additionally, S-XAI enhances the interpretability of NN predictions through XAI techniques such as LIME, SHAP, and PDP, clarifying NN predictions and enabling detailed insights for better calibration and proactive management, ultimately fostering a resilient and informed manufacturing environment.
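
As a rough illustration of the hybrid idea described above, the sketch below checks predicted operational capabilities against process requirements and emits a natural-language explanation. The property names, units, and threshold rule are hypothetical; in the thesis the predictions come from a neural network and the requirements from MCSK.

```python
# Highly simplified sketch (hypothetical values and rule, not the thesis's S-XAI system):
# a learned model predicts a robot's operational capabilities, and a rule grounded in
# process requirements produces a decision plus a natural-language explanation.
def can_perform(task_requirements: dict, predicted_capabilities: dict):
    reasons, feasible = [], True
    for prop, required in task_requirements.items():
        actual = predicted_capabilities.get(prop)
        ok = actual is not None and actual <= required   # e.g. repeatability in mm
        feasible &= ok
        reasons.append(f"{prop}: required <= {required} mm, predicted {actual} mm -> "
                       f"{'OK' if ok else 'NOT OK'}")
    verdict = "can" if feasible else "cannot"
    return feasible, f"The robot {verdict} perform the task. " + "; ".join(reasons)

# In the thesis, predictions would come from an NN and requirements from MCSK;
# here both are made up.
ok, explanation = can_perform({"repeatability": 0.05}, {"repeatability": 0.03})
print(explanation)
```
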

Book chapters on the topic "Semantic Explainable AI"

1

Sarker, Md Kamruzzaman, Joshua Schwartz, Pascal Hitzler, Lu Zhou, Srikanth Nadella, Brandon Minnery, Ion Juvina, Michael L. Raymer, and William R. Aue. "Wikipedia Knowledge Graph for Explainable AI." In Knowledge Graphs and Semantic Web, 72–87. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65384-2_6.

2

Asuquo, Daniel Ekpenyong, Patience Usoro Usip, and Kingsley Friday Attai. "Explainable Machine Learning-Based Knowledge Graph for Modeling Location-Based Recreational Services from Users Profile." In Semantic AI in Knowledge Graphs, 141–62. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003313267-7.

3

Hofmarcher, Markus, Thomas Unterthiner, José Arjona-Medina, Günter Klambauer, Sepp Hochreiter, and Bernhard Nessler. "Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 285–96. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_15.

4

Sabbatini, Federico, Giovanni Ciatto, and Andrea Omicini. "Semantic Web-Based Interoperability for Intelligent Agents with PSyKE." In Explainable and Transparent AI and Multi-Agent Systems, 124–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15565-9_8.

5

Hong, Seunghoon, Dingdong Yang, Jongwook Choi, and Honglak Lee. "Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 77–95. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_5.

6

Sander, Jennifer, and Achim Kuwertz. "Supplementing Machine Learning with Knowledge Models Towards Semantic Explainable AI." In Advances in Intelligent Systems and Computing, 3–11. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74009-2_1.

7

Huang, Qi, Emanuele Mezzi, Osman Mutlu, Miltiadis Kofinas, Vidya Prasad, Shadnan Azwad Khan, Elena Ranguelova, and Niki van Stein. "Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI." In Communications in Computer and Information Science, 308–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63787-2_16.

8

Mikriukov, Georgii, Gesina Schwalbe, Christian Hellert, and Korinna Bade. "Revealing Similar Semantics Inside CNNs: An Interpretable Concept-Based Comparison of Feature Spaces." In Communications in Computer and Information Science, 3–20. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-74630-7_1.

Abstract:
Safety-critical applications require transparency in artificial intelligence (AI) components, but the convolutional neural networks (CNNs) widely used for perception tasks lack inherent interpretability. Hence, insights into what CNNs have learned are primarily based on performance metrics, because these allow, e.g., for cross-architecture CNN comparison. However, these neglect how knowledge is stored inside. To tackle this yet unsolved problem, our work proposes two methods for estimating the layer-wise similarity between semantic information inside CNN latent spaces. These allow insights into both the flow and likeness of semantic information within CNN layers, and into the degree of their similarity between different network architectures. As a basis, we use two renowned explainable artificial intelligence (XAI) techniques, which are used to obtain concept activation vectors, i.e., global vector representations in the latent space. These are compared with respect to their activation on test inputs. When applied to three diverse object detectors and two datasets, our methods reveal that (1) similar semantic concepts are learned regardless of the CNN architecture, and (2) similar concepts emerge in similar relative layer depth, independent of the total number of layers. Finally, our approach poses a promising step towards semantic model comparability and comprehension of how different CNNs process semantic information.
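
A toy sketch of the underlying comparison, projecting test activations onto concept activation vectors (CAVs) and comparing the resulting concept scores across layers, is shown below. All vectors are random stand-ins, not outputs of the XAI techniques used in the chapter.

```python
# Toy sketch (random data, not the chapter's method): obtain a concept activation
# vector (CAV) per layer, project test activations onto it, and compare the resulting
# concept-score patterns across layers with cosine similarity.
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

n_test, dim = 10, 64
cav_layer3 = rng.normal(size=dim)             # hypothetical CAV in layer 3
cav_layer7 = rng.normal(size=dim)             # hypothetical CAV in layer 7
acts_layer3 = rng.normal(size=(n_test, dim))  # hypothetical test activations
acts_layer7 = rng.normal(size=(n_test, dim))

# Per-test-sample concept activations in each layer ...
concept_scores_3 = acts_layer3 @ cav_layer3
concept_scores_7 = acts_layer7 @ cav_layer7
# ... and their similarity indicates how alike the concept is represented.
print(round(cosine(concept_scores_3, concept_scores_7), 3))
```
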
9

Mikriukov, Georgii, Gesina Schwalbe, Christian Hellert, and Korinna Bade. "Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability." In Communications in Computer and Information Science, 499–524. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0_26.

Abstract:
Analysis of how semantic concepts are represented within Convolutional Neural Networks (CNNs) is a widely used approach in Explainable Artificial Intelligence (XAI) for interpreting CNNs. A motivation is the need for transparency in safety-critical AI-based systems, as mandated in various domains like automated driving. However, to use the concept representations for safety-relevant purposes, like inspection or error retrieval, these must be of high quality and, in particular, stable. This paper focuses on two stability goals when working with concept representations in computer vision CNNs: stability of concept retrieval and of concept attribution. The guiding use-case is a post-hoc explainability framework for object detection (OD) CNNs, towards which existing concept analysis (CA) methods are successfully adapted. To address concept retrieval stability, we propose a novel metric that considers both concept separation and consistency, and is agnostic to layer and concept representation dimensionality. We then investigate impacts of concept abstraction level, number of concept training samples, CNN size, and concept representation dimensionality on stability. For concept attribution stability we explore the effect of gradient instability on gradient-based explainability methods. The results on various CNNs for classification and object detection yield the main findings that (1) the stability of concept retrieval can be enhanced through dimensionality reduction via data aggregation, and (2) in shallow layers where gradient instability is more pronounced, gradient smoothing techniques are advised. Finally, our approach provides valuable insights into selecting the appropriate layer and concept representation dimensionality, paving the way towards CA in safety-critical XAI applications.
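
The gradient-smoothing recommendation can be illustrated numerically with a SmoothGrad-style average over noisy inputs; the toy function below stands in for a model score and is not from the chapter.

```python
# Minimal numerical sketch (toy function, not the chapter's setup) of gradient
# smoothing in the SmoothGrad spirit: average gradients taken at noisy copies of
# the input to stabilise a gradient-based attribution.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                        # stand-in for a model output score
    return np.sin(5 * x[0]) + x[1] ** 2

def grad(x, eps=1e-5):           # finite-difference gradient of f at x
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def smooth_grad(x, sigma=0.1, n=50):
    return np.mean([grad(x + rng.normal(scale=sigma, size=x.shape)) for _ in range(n)], axis=0)

x = np.array([0.3, -1.0])
print("plain gradient:   ", np.round(grad(x), 3))
print("smoothed gradient:", np.round(smooth_grad(x), 3))
```
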
10

Reed, Stephen K. "Explainable AI." In Cognitive Skills You Need for the 21st Century, 170–79. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197529003.003.0015.

Abstract:
Deep connectionist learning has resulted in very impressive accomplishments, but it is unclear how it achieves its results. A dilemma in using the output of machine learning is that the best performing methods are the least explainable. Explainable artificial intelligence seeks to develop systems that can explain their reasoning to a human user. The application of IBM’s WatsonPaths to medicine includes a diagnostic network that infers a diagnosis from symptoms with a degree of confidence associated with each diagnosis. The Semanticscience Integrated Ontology uses categories such as objects, processes, attributes, and relations to create networks of biological knowledge. The same categories are fundamental in representing other types of knowledge such as cognition. Extending an ontology requires a consistent use of semantic terms across different domains of knowledge.

Conference papers on the topic "Semantic Explainable AI"

1

Schneider, Sarah, Doris Antensteiner, Daniel Soukup, and Matthias Scheutz. "Encoding Semantic Attributes - Towards Explainable AI in Industry." In PETRA '23: Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3594806.3596531.

2

Das, Devleena, and Sonia Chernova. "Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and Pairwise Ranking to Explain Robot Failures." In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021. http://dx.doi.org/10.1109/iros51168.2021.9635890.

3

Sarkar, Rajdeep, Mihael Arcan, and John McCrae. "KG-CRuSE: Recurrent Walks over Knowledge Graph for Explainable Conversation Reasoning using Semantic Embeddings." In Proceedings of the 4th Workshop on NLP for Conversational AI. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlp4convai-1.9.

4

Sampat, Shailaja. "Technical, Hard and Explainable Question Answering (THE-QA)." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/916.

Abstract:
The ability of an agent to rationally answer questions about a given task is the key measure of its intelligence. While we have obtained phenomenal performance over various language and vision tasks separately, 'Technical, Hard and Explainable Question Answering' (THE-QA) is a new challenging corpus which addresses them jointly. THE-QA is a question answering task involving diagram understanding and reading comprehension. We plan to establish benchmarks over this new corpus using deep learning models guided by knowledge representation methods. The proposed approach will envisage detailed semantic parsing of technical figures and text, which is robust against diverse formats. It will be aided by knowledge acquisition and reasoning module that categorizes different knowledge types, identify sources to acquire that knowledge and perform reasoning to answer the questions correctly. THE-QA data will present a strong challenge to the community for future research and will bridge the gap between state-of-the-art Artificial Intelligence (AI) and 'Human-level' AI.
5

Bardozzo, Francesco, Mattia Delli Priscoli, Toby Collins, Antonello Forgione, Alexandre Hostettler, and Roberto Tagliaferri. "Cross X-AI: Explainable Semantic Segmentation of Laparoscopic Images in Relation to Depth Estimation." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892345.

6

Davis, Eric, and Katrina Schleisman. "Integrating Episodic and Semantic Memory in Machine Teammates to Enable Explainable After-Action Review and Intervention Planning in HAA Operations." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005003.

Abstract:
A critical step to ensure that AI systems can function as effective teammates is to develop new modeling approaches for AI based on the full range of human memory processes and systems evidenced by cognitive sciences research. In this paper we introduce novel techniques that integrate episodic and semantic memory within Artificially Intelligent (AI) teammates. We draw inspiration from evidence that points to the key role of episodic memory in representing event-specific knowledge to enable simulation of future experiences, and evidence for a representational organization of conceptual semantic knowledge via self-organizing maps (SOMs). Together, we demonstrate that these two types of memory working in concert can improve machine capabilities in co-learning and co-training scenarios. We evaluate our system in the context of simulated helicopter air ambulance (HAA) trajectories and a formal model of performance and skill, with interventions to enable an AI teammate to improve its capabilities on joint HAA missions. Our modeling approach contrasts with traditional neural network training, in which specific training data is not preserved in the final trained model embedding. In contrast, the training data for our model consists of episodes containing spatial and temporal information that are preserved in the model’s embedding. The trained model creates a structure of relationships among key parameters of these episodes, allowing us to understand the similarity and differences between performers (both human and machine) in outcomes, performance, and trajectory. We further extend these capabilities by enhancing our semantic memory model to encode not just a series of episodes, but labeled directed edges between regions of semantic memory representing meta-episodes. These directed edges represent interventions applied by the performer to improve future episodic outcomes in response to identified gaps in capability. These interventions represent the application of specific co-training strategies as a labeled transition system, linking episodes representing pre-intervention and post-intervention performance. This allows us to represent the expected impact of interventions, simulating improvements and skill decay, providing the machine with team-aligned goals for self-improvement between episodes to positively impact future teamwork.
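
The episode-and-intervention structure described above can be sketched as a tiny labeled transition record; the mission names, scores, and intervention label are invented for illustration and are not the paper's system.

```python
# Sketch (invented data): episodes with outcomes, plus labeled directed edges recording
# which co-training intervention links a pre-intervention episode to a post-intervention one.
episodes = {
    "ep1": {"mission": "HAA-07", "outcome_score": 0.62},
    "ep2": {"mission": "HAA-09", "outcome_score": 0.81},
}
# Edge label = the intervention applied between the two episodes.
interventions = [("ep1", "ep2", "terrain-following co-training")]

for src, dst, label in interventions:
    delta = episodes[dst]["outcome_score"] - episodes[src]["outcome_score"]
    print(f"{src} --[{label}]--> {dst}: outcome change {delta:+.2f}")
```
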
7

Chen, Yu-Hsuan, Levant Burak Kara, and Jonathan Cagan. "Automating Style Analysis and Visualization With Explainable AI - Case Studies on Brand Recognition." In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-115150.

Abstract:
Incorporating style-related objectives into shape design has been centrally important to maximize product appeal. However, stylistic features such as aesthetics and semantic attributes are hard to codify even for experts. As such, algorithmic style capture and reuse have not fully benefited from automated data-driven methodologies due to the challenging nature of design describability. This paper proposes an AI-driven method to fully automate the discovery of brand-related features. Our approach introduces BIGNet, a two-tier Brand Identification Graph Neural Network (GNN) to classify and analyze scalar vector graphics (SVG). First, to tackle the scarcity of vectorized product images, this research proposes two data acquisition workflows: parametric modeling from small curve-based datasets, and vectorization from large pixel-based datasets. Secondly, this study constructs a novel hierarchical GNN architecture to learn from both SVG’s curve-level and chunk-level parameters. In the first case study, BIGNet not only classifies phone brands but also captures brand-related features across multiple scales, such as the location of the lens, the height-width ratio, and the screen-frame gap, as confirmed by AI evaluation. In the second study, this paper showcases the generalizability of BIGNet learning from a vectorized car image dataset and validates the consistency and robustness of its predictions given four scenarios. The results match the difference commonly observed in luxury vs. economy brands in the automobile market. Finally, this paper also visualizes the activation maps generated from a convolutional neural network and shows BIGNet’s advantage of being a more human-friendly, explainable, and explicit style-capturing agent.
8

Nguyen, Hung, Tobias Clement, Loc Nguyen, Nils Kemmerzell, Binh Truong, Khang Nguyen, Mohamed Abdelaal, and Hung Cao. "LangXAI: Integrating Large Vision Models for Generating Textual Explanations to Enhance Explainability in Visual Perception Tasks." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/1025.

Abstract:
LangXAI is a framework that integrates Explainable Artificial Intelligence (XAI) with advanced vision models to generate textual explanations for visual recognition tasks. Despite XAI advancements, an understanding gap persists for end-users with limited domain knowledge in artificial intelligence and computer vision. LangXAI addresses this by furnishing text-based explanations for classification, object detection, and semantic segmentation model outputs to end-users. Preliminary results demonstrate LangXAI's enhanced plausibility, with high BERTScore across tasks, fostering a more transparent and reliable AI framework on vision tasks for end-users. The code and demo of this work can be found at https://analytics-everywhere-lab.github.io/langxai.io/.
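
For readers who want to reproduce the kind of plausibility scoring mentioned (BERTScore between generated and reference explanations), a minimal sketch using the open-source bert-score package might look like this. The example sentences are invented, and installing the package (pip install bert-score) and downloading its default model are assumed.

```python
# Minimal sketch (hypothetical sentences): score a generated textual explanation
# against a reference explanation with BERTScore, the plausibility metric named above.
from bert_score import score  # assumes: pip install bert-score

candidates = ["The model labels this region as road because of its texture and position."]
references = ["This area is classified as road due to its surface texture and location."]

P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.3f}")
```
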
9

Davis, Eric, Sourya Dey, Adam Karvonen, Ethan Lew, Donya Quick, Panchapakesan Shyamshankar, Ted Hille, and Matt Lebeau. "Leveraging Manifold Learning and Relationship Equity Management for Symbiotic Explainable Artificial Intelligence." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003759.

Abstract:
Improvements in neural methods have led to the unprecedented adoption of AI in domains previously limited to human experts. As these technologies mature, especially in the area of neuro-symbolic intelligence, interest has increased in artificial cognitive capabilities that would allow an AI system to function less like an application and more like an interdependent teammate. In addition to improving language capabilities, next-generation AI systems need to support symbiotic, human-centered processes, including objective alignment, trust calibration, common ground, and the ability to build complex workflows that manage risks due to resources such as time, environmental constraints, and diverse computational settings from super computers to autonomous vehicles. In this paper we review current challenges in achieving Symbiotic Intelligence, and introduce novel capabilities in Artificial Executive Function we have developed towards solving these challenges. We present our work in the context of current literature on context-aware and self-aware computing and present basic building blocks of a novel, open-source, AI architecture for Symbiotic Intelligence. Our methods have been demonstrated effectively in both simulated crisis and during the pandemic. We argue our system meets the basic criteria outlined by DARPA and AFRL providing: (1) introspection via graph-based reasoning to establish expectations for both autonomous and team performance, to communicate expectations for interdependent co-performance, capability, an understanding of shared goals; (2) adaptivity through the use of automatic workflow generation using semantic labels to understand requirements, constraints, and expectations; (3) self healing capabilities using after-action review and co-training capabilities; (4) goal oriented reasoning via an awareness of machine, human, and team responsibilities and goals; (5) approximate, risk-aware, planning using a flexible workflow infrastructure with interchangeable units of computation capable of supporting both high fidelity, costly, reasoning suitable for traditional data centers, as well as in-the-field reasoning with highly performable surrogate models suitable for more constrained edge computing environments. Our framework provides unique symbiotic reasoning to support crisis response, allowing fast, flexible, analysis pipelines that can be responsive to changing resource and risk conditions in the field. We discuss the theory behind our methods, practical concerns, and our experimental results that provide evidence of their efficacy, especially in crisis decision making.
10

Basaj, Dominika, Witold Oleszkiewicz, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Tomasz Trzcinski, and Bartosz Zieliński. "Explaining Self-Supervised Image Representations with Visual Probing." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/82.

Abstract:
Recently introduced self-supervised methods for image representation learning provide on par or superior results to their fully supervised competitors, yet the corresponding efforts to explain the self-supervised approaches lag behind. Motivated by this observation, we introduce a novel visual probing framework for explaining the self-supervised models by leveraging probing tasks employed previously in natural language processing. The probing tasks require knowledge about semantic relationships between image parts. Hence, we propose a systematic approach to obtain analogs of natural language in vision, such as visual words, context, and taxonomy. We show the effectiveness and applicability of those analogs in the context of explaining self-supervised representations. Our key findings emphasize that relations between language and vision can serve as an effective yet intuitive tool for discovering how machine learning models work, independently of data modality. Our work opens a plethora of research pathways towards more explainable and transparent AI.