To view other types of publications on this topic, follow the link: Semantic Explainable AI.

Journal articles on the topic "Semantic Explainable AI"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Semantic Explainable AI".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, when these details are available in the work's metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Li, Ding, Yan Liu, and Jun Huang. "Assessment of Software Vulnerability Contributing Factors by Model-Agnostic Explainable AI." Machine Learning and Knowledge Extraction 6, no. 2 (May 16, 2024): 1087–113. http://dx.doi.org/10.3390/make6020050.

Full text of the source
Abstract:
Software vulnerability detection aims to proactively reduce the risk to software security and reliability. Despite advancements in deep-learning-based detection, a semantic gap still remains between learned features and human-understandable vulnerability semantics. In this paper, we present an XAI-based framework to assess program code in a graph context as feature representations and their effect on code vulnerability classification into multiple Common Weakness Enumeration (CWE) types. Our XAI framework is deep-learning-model-agnostic and programming-language-neutral. We rank the feature importance of 40 syntactic constructs for each of the top 20 distributed CWE types from three datasets in Java and C++. By means of four metrics of information retrieval, we measure the similarity of human-understandable CWE types using each CWE type’s feature contribution ranking learned from XAI methods. We observe that the subtle semantic difference between CWE types occurs after the variation in neighboring features’ contribution rankings. Our study shows that the XAI explanation results have approximately 78% Top-1 to 89% Top-5 similarity hit rates and a mean average precision of 0.70 compared with the baseline of CWE similarity identified by the open community experts. Our framework allows for code vulnerability patterns to be learned and contributing factors to be assessed at the same stage.
APA, Harvard, Vancouver, ISO, and other styles
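
The similarity measurement described in this abstract, comparing an XAI-derived ranking of related CWE types against an expert baseline using Top-k hit rates and mean average precision, can be illustrated with a small self-contained sketch. The CWE identifiers and function names below are invented for illustration; this is not the authors' implementation.

```python
"""Illustrative sketch: Top-k hit rate and mean average precision between
XAI-derived CWE similarity rankings and an expert baseline (toy data only)."""

def top_k_hit_rate(predicted, relevant, k):
    """Fraction of queries whose top-k predicted neighbours contain a relevant item."""
    hits = sum(1 for q, ranking in predicted.items()
               if set(ranking[:k]) & relevant.get(q, set()))
    return hits / len(predicted)

def average_precision(ranking, relevant):
    """Average precision of one ranked list against a set of relevant items."""
    score, num_hits = 0.0, 0
    for i, item in enumerate(ranking, start=1):
        if item in relevant:
            num_hits += 1
            score += num_hits / i
    return score / max(len(relevant), 1)

def mean_average_precision(predicted, relevant):
    return sum(average_precision(r, relevant.get(q, set()))
               for q, r in predicted.items()) / len(predicted)

# Toy data: hypothetical CWE similarity rankings (identifiers are examples only).
predicted = {"CWE-79": ["CWE-80", "CWE-89", "CWE-20"],
             "CWE-89": ["CWE-564", "CWE-79", "CWE-20"]}
expert    = {"CWE-79": {"CWE-80"}, "CWE-89": {"CWE-564", "CWE-943"}}

print(top_k_hit_rate(predicted, expert, k=1))      # Top-1 hit rate
print(mean_average_precision(predicted, expert))   # MAP
```
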
2

Turley, Jordan E., Jeffrey A. Dunne, and Zerotti Woods. "Explainable AI for trustworthy image analysis." Journal of the Acoustical Society of America 156, no. 4_Supplement (October 1, 2024): A109. https://doi.org/10.1121/10.0035277.

Full text of the source
Abstract:
The capabilities of convolutional neural networks to explore data in various fields has been documented extensively throughout the literature. One common challenge with adopting AI/ML solutions, however, is the issue of trust. Decision makers are rightfully hesitant to take action based solely on “the computer said so” even if the computer has great confidence that it is correct. There is obvious value in a system that can answer the question of why it made a given prediction and back this up with specific evidence. Basic models like regression or nearest neighbors can support such answers but have significant limitations in real-world applications, and more capable models like neural networks are much too complex to interpret. We have developed a prototype system that combines convolutional neural networks with semantic representations of reasonableness. We use logic similar to how humans justify conclusions, breaking objects into smaller pieces that we trust a neural network to identify. Leveraging a suite of machine learning algorithms, the tool provides not merely an output “conclusion,” but a supporting string of evidence that humans can use to better understand the conclusion, as well as explore potential weaknesses in the AI/ML components. This paper will provide an in-depth overview of the prototype and show some exemplar results. [Work supported by the Johns Hopkins University Applied Physics Laboratory.]
APA, Harvard, Vancouver, ISO, and other styles
3

Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson. "Explainable Artificial Intelligence for Developing Smart Cities Solutions." Smart Cities 3, no. 4 (November 13, 2020): 1353–82. http://dx.doi.org/10.3390/smartcities3040065.

Full text of the source
Abstract:
Traditional Artificial Intelligence (AI) technologies used in developing smart cities solutions, Machine Learning (ML) and recently Deep Learning (DL), rely more on utilising best representative training datasets and features engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome of solutions less explainable, i.e., it is often not possible to explain the results of the model. There is a growing concern among policymakers in cities with this lack of explainability of AI solutions, and this is considered a major hindrance in the wider acceptability and trust in such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart cities flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding issues is an important aspect of any flood monitoring solution. Typical solutions for this problem involve the use of cameras to capture images showing the affected areas in real-time with different objects such as leaves, plastic bottles etc., and building a DL-based classifier to detect such objects and classify blockages based on the presence and coverage of these objects in the images. In this work, we uniquely propose an Explainable AI solution using DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level and semantic rules designed with close consultation with experts carry out the classification. By using the expert knowledge in the flooding context, our hybrid classifier provides the flexibility on categorising the image using objects and their coverage relationships. The experimental results demonstrated with a real-world use case showed that this hybrid approach of image classification has on average 11% improvement (F-Measure) in image classification performance compared to DL-only classifier. It also has the distinct advantage of integrating experts’ knowledge on defining the decision-making rules to represent the complex circumstances and using such knowledge to explain the results.
APA, Harvard, Vancouver, ISO, and other styles
4

Mankodiya, Harsh, Dhairya Jadav, Rajesh Gupta, Sudeep Tanwar, Wei-Chiang Hong, and Ravi Sharma. "OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles." Applied Sciences 12, no. 11 (May 24, 2022): 5310. http://dx.doi.org/10.3390/app12115310.

Full text of the source
Abstract:
In recent years, artificial intelligence (AI) has become one of the most prominent fields in autonomous vehicles (AVs). With the help of AI, the stress levels of drivers have been reduced, as most of the work is executed by the AV itself. With the increasing complexity of models, explainable artificial intelligence (XAI) techniques work as handy tools that allow naive people and developers to understand the intricate workings of deep learning models. These techniques can be paralleled to AI to increase their interpretability. One essential task of AVs is to be able to follow the road. This paper attempts to justify how AVs can detect and segment the road on which they are moving using deep learning (DL) models. We trained and compared three semantic segmentation architectures for the task of pixel-wise road detection. Max IoU scores of 0.9459 and 0.9621 were obtained on the train and test set. Such DL algorithms are called “black box models” as they are hard to interpret due to their highly complex structures. Integrating XAI enables us to interpret and comprehend the predictions of these abstract models. We applied various XAI methods and generated explanations for the proposed segmentation model for road detection in AVs.
APA, Harvard, Vancouver, ISO, and other styles
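
The IoU (intersection over union) scores quoted above are a standard pixel-wise segmentation metric. A minimal NumPy sketch over toy binary road masks (illustrative arrays, not the paper's pipeline) looks like this:

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over union between two binary masks of the same shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

# Toy example: 4x4 predicted vs. ground-truth road masks (illustrative values).
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0]])
gt   = np.array([[0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0]])
print(f"IoU = {iou(pred, gt):.3f}")
```
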
5

Ayoob, Mohamed, Oshan Nettasinghe, Vithushan Sylvester, Helmini Bowala, and Hamdaan Mohideen. "Peering into the Heart: A Comprehensive Exploration of Semantic Segmentation and Explainable AI on the MnMs-2 Cardiac MRI Dataset." Applied Computer Systems 30, no. 1 (January 1, 2025): 12–20. https://doi.org/10.2478/acss-2025-0002.

Full text of the source
Abstract:
Accurate and interpretable segmentation of medical images is crucial for computer-aided diagnosis and image-guided interventions. This study explores the integration of semantic segmentation and explainable AI techniques on the MnMs-2 Cardiac MRI dataset. We propose a segmentation model that achieves competitive dice scores (nearly 90%) and Hausdorff distance (less than 70), demonstrating its effectiveness for cardiac MRI analysis. Furthermore, we leverage Grad-CAM, and Feature Ablation, explainable AI techniques, to visualise the regions of interest guiding the model predictions for a target class. This integration enhances interpretability, allowing us to gain insights into the model decision-making process and build trust in its predictions.
APA, Harvard, Vancouver, ISO, and other styles
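
The two metrics reported in this abstract, the Dice score and the Hausdorff distance, can both be computed from binary masks. The sketch below uses made-up masks and SciPy's directed_hausdorff; it illustrates the metrics only and is not the study's code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixel sets of two masks."""
    p_pts, g_pts = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p_pts, g_pts)[0],
               directed_hausdorff(g_pts, p_pts)[0])

# Toy cardiac-style masks (illustrative only).
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[22:42, 22:42] = 1
print(f"Dice = {dice(pred, gt):.3f}, Hausdorff = {hausdorff(pred, gt):.1f} px")
```
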
6

Terziyan, Vagan, and Oleksandra Vitko. "Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models." Procedia Computer Science 200 (2022): 216–26. http://dx.doi.org/10.1016/j.procs.2022.01.220.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Schorr, Christian, Payman Goodarzi, Fei Chen, and Tim Dahmen. "Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets." Applied Sciences 11, no. 5 (March 3, 2021): 2199. http://dx.doi.org/10.3390/app11052199.

Full text of the source
Abstract:
Trust in artificial intelligence (AI) predictions is a crucial point for a widespread acceptance of new technologies, especially in sensitive areas like autonomous driving. The need for tools explaining AI for deep learning of images is thus eminent. Our proposed toolbox Neuroscope addresses this demand by offering state-of-the-art visualization algorithms for image classification and newly adapted methods for semantic segmentation of convolutional neural nets (CNNs). With its easy to use graphical user interface (GUI), it provides visualization on all layers of a CNN. Due to its open model-view-controller architecture, networks generated and trained with Keras and PyTorch are processable, with an interface allowing extension to additional frameworks. We demonstrate the explanation abilities provided by Neuroscope using the example of traffic scene analysis.
APA, Harvard, Vancouver, ISO, and other styles
8

Futia, Giuseppe, and Antonio Vetrò. "On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research." Information 11, no. 2 (February 22, 2020): 122. http://dx.doi.org/10.3390/info11020122.

Full text of the source
Abstract:
Deep learning models contributed to reaching unprecedented results in prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights on how a specific result was achieved. In contexts where the impact of AI on human life is relevant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property, but it is -or, in some cases, it will be soon-a legal requirement. Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions in deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI—while being less flexible and robust to noise compared to deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches existing in the literature, underlying their strengths and limitations, and we propose neural-symbolic integration as a cornerstone to design an AI which is closer to non-insiders comprehension. Within such a general direction, we identify three specific challenges for future research—knowledge matching, cross-disciplinary explanations and interactive explanations.
APA, Harvard, Vancouver, ISO, and other styles
9

Hindennach, Susanne, Lei Shi, Filip Miletić, and Andreas Bulling. "Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research." Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–43. http://dx.doi.org/10.1145/3641009.

Full text of the source
Abstract:
When users perceive AI systems as mindful, independent agents, they hold them responsible instead of the AI experts who created and designed these systems. So far, it has not been studied whether explanations support this shift in responsibility through the use of mind-attributing verbs like "to think". To better understand the prevalence of mind-attributing explanations we analyse AI explanations in 3,533 explainable AI (XAI) research articles from the Semantic Scholar Open Research Corpus (S2ORC). Using methods from semantic shift detection, we identify three dominant types of mind attribution: (1) metaphorical (e.g. "to learn" or "to predict"), (2) awareness (e.g. "to consider"), and (3) agency (e.g. "to make decisions"). We then analyse the impact of mind-attributing explanations on awareness and responsibility in a vignette-based experiment with 199 participants. We find that participants who were given a mind-attributing explanation were more likely to rate the AI system as aware of the harm it caused. Moreover, the mind-attributing explanation had a responsibility-concealing effect: Considering the AI experts' involvement lead to reduced ratings of AI responsibility for participants who were given a non-mind-attributing or no explanation. In contrast, participants who read the mind-attributing explanation still held the AI system responsible despite considering the AI experts' involvement. Taken together, our work underlines the need to carefully phrase explanations about AI systems in scientific writing to reduce mind attribution and clearly communicate human responsibility.
APA, Harvard, Vancouver, ISO, and other styles
10

Silva, Vivian S., André Freitas, and Siegfried Handschuh. "Exploring Knowledge Graphs in an Interpretable Composite Approach for Text Entailment." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7023–30. http://dx.doi.org/10.1609/aaai.v33i01.33017023.

Full text of the source
Abstract:
Recognizing textual entailment is a key task for many semantic applications, such as Question Answering, Text Summarization, and Information Extraction, among others. Entailment scenarios can range from a simple syntactic variation to more complex semantic relationships between pieces of text, but most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. We propose a composite approach for recognizing text entailment which analyzes the entailment pair to decide whether it must be resolved syntactically or semantically. We also make the answer interpretable: whenever an entailment is solved semantically, we explore a knowledge base composed of structured lexical definitions to generate natural language humanlike justifications, explaining the semantic relationship holding between the pieces of text. Besides outperforming well-established entailment algorithms, our composite approach gives an important step towards Explainable AI, using world knowledge to make the semantic reasoning process explicit and understandable.
APA, Harvard, Vancouver, ISO, and other styles
11

Kerr, Alison Duncan, and Kevin Scharp. "The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence." Minds and Machines 32, no. 3 (September 2022): 585–611. http://dx.doi.org/10.1007/s11023-022-09609-7.

Full text of the source
Abstract:
Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.
APA, Harvard, Vancouver, ISO, and other styles
12

Erxuan Zeng, Yichi Long, Xiaoyao Wang, Yuting Xiao, and Yuxue Feng. "Literature Review: Personalized Learning Recommendation System in Educational Scenarios: XAI-Driven Student Behavior Understanding and Teacher Collaboration Mechanism." Frontiers in Interdisciplinary Applied Science 2, no. 01 (March 17, 2025): 78–92. https://doi.org/10.71465/fias.v2i01.17.

Full text of the source
Abstract:
This literature review delves into personalized learning recommendation systems (PLRSs) within educational contexts. It places a significant emphasis on the understanding of student behavior that is driven by Explainable AI (XAI). Additionally, it focuses on the mechanisms of teacher collaboration. The traditional educational models are not without their drawbacks. These limitations have instigated a transition towards personalized learning. This movement has, in turn, propelled the development of PLRSs. These systems are designed with the dual objectives of boosting learning efficiency and enhancing learning outcomes. To accomplish these goals, they offer customized learning resources and strategies. There are key trends. One trend is dynamic and adaptive recommendation strategies. Another trend is the use of explainable AI (XAI). XAI builds trust. XAI also builds transparency. In-depth student behavior understanding is a trend. Performance modeling is also a trend. Advanced content understanding is a trend. Semantic analysis is a trend as well. The application of collaborative filtering is a trend. The application of hybrid approaches is a trend. The emphasis on teacher collaboration is a trend. The emphasis on human-AI interaction is a trend too. Dynamic systems can adapt to students' changing needs. XAI makes AI-driven recommendations understandable and trustworthy. Precise student models improve the relevance of recommendations. Semantic analysis of educational content does the same. Hybrid approaches enhance the performance of collaborative filtering. Teacher-AI collaboration is important for the effective implementation of PLRSs. However, there are several challenges. Future research should focus on different things. It should develop more comprehensive student models. It should enhance XAI techniques for educational contexts. It should empirically study the impact of XAI on learning outcomes. It should design effective teacher collaboration mechanisms. It should create human-AI interaction strategies. It should address ethical issues and fairness in personalized learning. It should explore the integration of multimodal data and learning analytics. By dealing with these challenges, PLRSs can be optimized. This can create more intelligent, transparent, and human-centered personalized learning environments. In the end, this will enhance learning outcomes. It will also empower students and educators.
APA, Harvard, Vancouver, ISO, and other styles
13

Akula, Arjun, Shuai Wang, and Song-Chun Zhu. "CoCoX: Generating Conceptual and Counterfactual Explanations via Fault-Lines." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2594–601. http://dx.doi.org/10.1609/aaai.v34i03.5643.

Full text of the source
Abstract:
We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In Cognitive Psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class cpred, our fault-line based explanation identifies the minimal semantic-level features (e.g., stripes on zebra, pointed ears of dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class calt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms the state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX
APA, Harvard, Vancouver, ISO, and other styles
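
The core idea of a fault-line explanation, finding a minimal set of explainable concepts to add or delete so that the classifier's prediction flips to a chosen alternative class, can be mimicked with a brute-force toy search over binary concept flags. Everything below (concept names, the toy classifier) is hypothetical; CoCoX itself operates on CNN feature maps rather than hand-set flags.

```python
from itertools import combinations

def minimal_fault_line(concepts, classifier, target_class, max_edits=3):
    """Brute-force search for the smallest set of concept flips (add or delete)
    that moves the classifier's prediction to `target_class`.
    `concepts` maps concept names to 0/1 presence flags; `classifier` is any
    callable taking such a dict and returning a class label (toy stand-ins)."""
    names = list(concepts)
    for size in range(1, max_edits + 1):
        for subset in combinations(names, size):
            edited = dict(concepts)
            for name in subset:
                edited[name] = 1 - edited[name]  # add if absent, delete if present
            if classifier(edited) == target_class:
                return [(n, "add" if concepts[n] == 0 else "delete") for n in subset]
    return None

# Toy classifier: calls an image a "zebra" only if stripes are present and pointed ears absent.
def toy_classifier(c):
    return "zebra" if c["stripes"] and not c["pointed_ears"] else "horse"

image_concepts = {"stripes": 0, "pointed_ears": 0, "mane": 1}
print(minimal_fault_line(image_concepts, toy_classifier, target_class="zebra"))
# -> [('stripes', 'add')]
```
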
14

Schwegler, Markus, Christoph Müller, and Alexander Reiterer. "Integrated Gradients for Feature Assessment in Point Cloud-Based Data Sets." Algorithms 16, no. 7 (June 28, 2023): 316. http://dx.doi.org/10.3390/a16070316.

Full text of the source
Abstract:
Integrated gradients is an explainable AI technique that aims to explain the relationship between a model’s predictions in terms of its features. Adapting this technique to point clouds and semantic segmentation models allows a class-wise attribution of the predictions with respect to the input features. This allows better insight into how a model reached a prediction. Furthermore, it allows a quantitative analysis of how much each feature contributes to a prediction. To obtain these attributions, a baseline with high entropy is generated and interpolated with the point cloud to be visualized. These interpolated point clouds are then run through the network and their gradients are collected. By observing the change in gradients during each iteration an attribution can be found for each input feature. These can then be projected back onto the original point cloud and compared to the predictions and input point cloud. These attributions are generated using RandLA-Net due to it being an efficient semantic segmentation model that uses comparatively few parameters, therefore keeping the number of gradients that must be stored at a reasonable level. The attribution was run on the public Semantic3D dataset and the SVGEO large-scale urban dataset.
APA, Harvard, Vancouver, ISO, and other styles
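
Integrated gradients itself follows a simple recipe: interpolate between a baseline and the input, average the model's gradients along that path, and scale by the difference between input and baseline. The sketch below substitutes a toy analytic function for a point-cloud network such as RandLA-Net and uses a zero baseline for simplicity (the paper generates a high-entropy baseline), so it only illustrates the attribution mechanics.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Approximate integrated gradients: (x - baseline) times the mean gradient
    along the straight-line path from baseline to x."""
    alphas = np.linspace(0.0, 1.0, steps)
    total_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        point = baseline + a * (x - baseline)   # interpolated input
        total_grad += grad_fn(point)            # gradient of the model output at that point
    avg_grad = total_grad / steps
    return (x - baseline) * avg_grad            # per-feature attribution

# Toy "model": f(x) = sum(w * x**2), whose gradient is 2 * w * x (stand-in for a network).
w = np.array([1.0, 0.5, -2.0])
grad_fn = lambda x: 2.0 * w * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)   # zero baseline for simplicity; the paper uses a high-entropy one
attributions = integrated_gradients(x, baseline, grad_fn)
print(attributions, "sum:", attributions.sum(), "f(x) - f(baseline):", (w * x**2).sum())
```

A useful sanity check, visible in the printed output, is the completeness property: the attributions sum to the difference between the model output at the input and at the baseline.
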
15

Milella, Frida, Davide Donato Russo, and Stefania Bandini. "AI-Powered Solutions to Support Informal Caregivers in Their Decision-Making: A Systematic Review of the Literature." OBM Geriatrics 07, no. 04 (December 15, 2023): 1–11. http://dx.doi.org/10.21926/obm.geriatr.2304262.

Full text of the source
Abstract:
Due to aging demographics, prolonged life expectancy, and chronic diseases, European societies' increasing need for care services has led to a shift towards informal care supplied by family members, friends, or neighbors. However, the progressive decrease in the caregiver-to-patient ratio will result in a significant augmentation in incorporating intelligent aid within general care. This study aimed to build upon the authors' previous systematic literature review on technologies for informal caregivers. Specifically, it focused on analyzing AI-based solutions to understand the advantages and challenges of using AI in decision-making support for informal caregivers in elderly care. Three databases (Scopus, IEEE Xplore, ACM Digital Libraries) were searched. The search yielded 1002 articles, with 24 that met the inclusion and exclusion criteria. Within the scope of this study, we will exclusively concentrate on a subset of 11 papers on AI technologies. The study reveals that AI-based solutions have great potential for real-time analysis advancement, explainable AI enhancement, and meta-information semantic refinement. While digital assistants can personalize information for caregivers, security and privacy are key concerns. The rise of more integrated and complicated solutions reveals that these technologies suit aging monitoring and informal care coordination in emergencies or deviations from usual activities. Informal caregiver decision assistance can be improved in this scenario.
APA, Harvard, Vancouver, ISO, and other styles
16

Venkatesh Nagubathula. "Document Automation in Enterprise Integration: A Technical Framework for Cloud-Based SaaS Solutions." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 2 (March 4, 2025): 486–512. https://doi.org/10.32628/cseit25112385.

Full text of the source
Abstract:
Document automation within Enterprise Integration Systems (EIS) has emerged as a transformative force in modern business environments, enabling seamless data exchange and process optimization. This comprehensive framework addresses the multifaceted challenges of integrating automated document processing into cloud-based SaaS solutions. The architecture encompasses cloud infrastructure utilizing containerized deployment and edge computing, connectivity layers with API gateways and event-driven architectures, advanced document processing engines powered by machine learning, and robust security frameworks. Implementation methodologies focus on API-first strategies, microservices decomposition, and sophisticated AI model management. The framework delivers significant improvements across industry-specific implementations in financial services, healthcare, and legal technology sectors, with each vertical benefiting from specialized automation approaches. Performance benchmarks demonstrate substantial reductions in processing time, improvements in accuracy, and decreased compliance burdens. Looking forward, emerging technologies including quantum machine learning, federated learning, smart document formats, semantic search integration, and explainable AI promise to further revolutionize document automation capabilities, creating increasingly intelligent, secure, and efficient enterprise integration systems.
APA, Harvard, Vancouver, ISO, and other styles
17

Demertzis, Konstantinos, Konstantinos Rantos, Lykourgos Magafas, Charalabos Skianis, and Lazaros Iliadis. "A Secure and Privacy-Preserving Blockchain-Based XAI-Justice System." Information 14, no. 9 (August 28, 2023): 477. http://dx.doi.org/10.3390/info14090477.

Full text of the source
Abstract:
Pursuing “intelligent justice” necessitates an impartial, productive, and technologically driven methodology for judicial determinations. This scholarly composition proposes a framework that harnesses Artificial Intelligence (AI) innovations such as Natural Language Processing (NLP), ChatGPT, ontological alignment, and the semantic web, in conjunction with blockchain and privacy techniques, to examine, deduce, and proffer recommendations for the administration of justice. Specifically, through the integration of blockchain technology, the system affords a secure and transparent infrastructure for the management of legal documentation and transactions while preserving data confidentiality. Privacy approaches, including differential privacy and homomorphic encryption techniques, are further employed to safeguard sensitive data and uphold discretion. The advantages of the suggested framework encompass heightened efficiency and expediency, diminished error propensity, a more uniform approach to judicial determinations, and augmented security and privacy. Additionally, by utilizing explainable AI methodologies, the ethical and legal ramifications of deploying intelligent algorithms and blockchain technologies within the legal domain are scrupulously contemplated, ensuring a secure, efficient, and transparent justice system that concurrently protects sensitive information upholds privacy.
APA, Harvard, Vancouver, ISO, and other styles
18

Sauter, Daniel, Georg Lodde, Felix Nensa, Dirk Schadendorf, Elisabeth Livingstone, and Markus Kukuk. "Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology." Sensors 22, no. 14 (July 18, 2022): 5346. http://dx.doi.org/10.3390/s22145346.

Full text of the source
Abstract:
Digital histopathology poses several challenges such as label noise, class imbalance, limited availability of labelled data, and several latent biases to deep learning, negatively influencing transparency, reproducibility, and classification performance. In particular, biases are well known to cause poor generalization. Proposed tools from explainable artificial intelligence (XAI), bias detection, and bias discovery suffer from technical challenges, complexity, unintuitive usage, inherent biases, or a semantic gap. A promising XAI method, not studied in the context of digital histopathology is automated concept-based explanation (ACE). It automatically extracts visual concepts from image data. Our objective is to evaluate ACE’s technical validity following design science principals and to compare it to Guided Gradient-weighted Class Activation Mapping (Grad-CAM), a conventional pixel-wise explanation method. To that extent, we created and studied five convolutional neural networks (CNNs) in four different skin cancer settings. Our results demonstrate that ACE is a valid tool for gaining insights into the decision process of histopathological CNNs that can go beyond explanations from the control method. ACE validly visualized a class sampling ratio bias, measurement bias, sampling bias, and class-correlated bias. Furthermore, the complementary use with Guided Grad-CAM offers several benefits. Finally, we propose practical solutions for several technical challenges. In contradiction to results from the literature, we noticed lower intuitiveness in some dermatopathology scenarios as compared to concept-based explanations on real-world images.
APA, Harvard, Vancouver, ISO, and other styles
19

Vandana Kalra. "Coupling NLP for Intelligent Knowledge Management in Organizations: A Framework for AI-Powered Decision Support." Journal of Information Systems Engineering and Management 10, no. 10s (February 13, 2025): 23–28. https://doi.org/10.52783/jisem.v10i10s.1337.

Full text of the source
Abstract:
Knowledge management (KM) is crucial component for business development in modern enterprises and this type of management is facilitated through technology. Nevertheless, conventional knowledge management systems (KMS) face problems concerning, but not limited to, information silos, difficulty in accessing data, and the complexity in managing unstructured data. As new advancements are made towards Natural Language Processing (NLP), Artificial Intelligence (AI) technologies that allow for contextual knowledge discovery, intelligent search, automated summarization, and real time content classification become readily available. This research analyzes the application of NLP systems concerning their integration with knowledge systems in business, information retrieval, enterprise search, and knowledge recommendation systems. For these integrations to be successful, Name Entity Recognition (NER), semantic search, Retrieval-Augmented Generation (RAG), Optical Character Reader (OCR), and Explainable AI (XAI) technologies need to be utilized. This will assure that decision-making processes are secure and ethical. This paper also presents an NLP-Driven Knowledge Management Framework (NLP-KMF), which is a novel framework that helps manage knowledge. The paper discusses the real-world usage of NLP-powered knowledge management in corporate learning, customer service, and compliance with Google, Accenture, IBM, and JPMorgan Chase serving as the centers of case studies. Strategies to counter issues such as AI bias and misinformation alongside privacy threats are discussed as well. The last section of the paper analyzes the forthcoming research areas that could include topics such as multimodal AI for knowledge management, AI repositories that continuously learn, and decision intelligence driven by AI. This serves as a constructive and precise plan for organizations that wish to evolve from static knowledge databases to dynamic self-adapting AI systems.
APA, Harvard, Vancouver, ISO, and other styles
20

Dwivedi, Kshitij, Michael F. Bonner, Radoslaw Martin Cichy, and Gemma Roig. "Unveiling functions of the visual cortex using task-specific deep neural networks." PLOS Computational Biology 17, no. 8 (August 13, 2021): e1009267. http://dx.doi.org/10.1371/journal.pcbi.1009267.

Full text of the source
Abstract:
The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
APA, Harvard, Vancouver, ISO, and other styles
21

Nguyen, Anh Duy, Huy Hieu Pham, Huynh Thanh Trung, Quoc Viet Hung Nguyen, Thao Nguyen Truong, and Phi Le Nguyen. "High accurate and explainable multi-pill detection framework with graph neural network-assisted multimodal data fusion." PLOS ONE 18, no. 9 (September 28, 2023): e0291865. http://dx.doi.org/10.1371/journal.pone.0291865.

Full text of the source
Abstract:
Due to the significant resemblance in visual appearance, pill misuse is prevalent and has become a critical issue, responsible for one-third of all deaths worldwide. Pill identification, thus, is a crucial concern that needs to be investigated thoroughly. Recently, several attempts have been made to exploit deep learning to tackle the pill identification problem. However, most published works consider only single-pill identification and fail to distinguish hard samples with identical appearances. Also, most existing pill image datasets only feature single pill images captured in carefully controlled environments under ideal lighting conditions and clean backgrounds. In this work, we are the first to tackle the multi-pill detection problem in real-world settings, aiming at localizing and identifying pills captured by users during pill intake. Moreover, we also introduce a multi-pill image dataset taken in unconstrained conditions. To handle hard samples, we propose a novel method for constructing heterogeneous a priori graphs incorporating three forms of inter-pill relationships, including co-occurrence likelihood, relative size, and visual semantic correlation. We then offer a framework for integrating a priori with pills’ visual features to enhance detection accuracy. Our experimental results have proved the robustness, reliability, and explainability of the proposed framework. Experimentally, it outperforms all detection benchmarks in terms of all evaluation metrics. Specifically, our proposed framework improves COCO mAP metrics by 9.4% over Faster R-CNN and 12.0% compared to vanilla YOLOv5. Our study opens up new opportunities for protecting patients from medication errors using an AI-based pill identification solution.
APA, Harvard, Vancouver, ISO, and other styles
22

Kim, Tae Hoon, Moez Krichen, Stephen Ojo, Meznah A. Alamro, and Gabriel Avelino Sampedro. "TSSG-CNN: A Tuberculosis Semantic Segmentation-Guided Model for Detecting and Diagnosis Using the Adaptive Convolutional Neural Network." Diagnostics 14, no. 11 (June 1, 2024): 1174. http://dx.doi.org/10.3390/diagnostics14111174.

Full text of the source
Abstract:
Tuberculosis (TB) is an infectious disease caused by Mycobacterium. It primarily impacts the lungs but can also endanger other organs, such as the renal system, spine, and brain. When an infected individual sneezes, coughs, or speaks, the virus can spread through the air, which contributes to its high contagiousness. The goal is to enhance detection recognition with an X-ray image dataset. This paper proposed a novel approach, named the Tuberculosis Segmentation-Guided Diagnosis Model (TSSG-CNN) for Detecting Tuberculosis, using a combined semantic segmentation and adaptive convolutional neural network (CNN) architecture. The proposed approach is distinguished from most of the previously proposed approaches in that it uses the combination of a deep learning segmentation model with a follow-up classification model based on CNN layers to segment chest X-ray images more precisely as well as to improve the diagnosis of TB. It contrasts with other approaches like ILCM, which is optimized for sequential learning, and explainable AI approaches, which focus on explanations. Moreover, our model is beneficial for the simplified procedure of feature optimization from the perspectives of approach using the Mayfly Algorithm (MA). Other models, including simple CNN, Batch Normalized CNN (BN-CNN), and Dense CNN (DCNN), are also evaluated on this dataset to evaluate the effectiveness of the proposed approach. The performance of the TSSG-CNN model outperformed all the models with an impressive accuracy of 98.75% and an F1 score of 98.70%. The evaluation findings demonstrate how well the deep learning segmentation model works and the potential for further research. The results suggest that this is the most accurate strategy and highlight the potential of the TSSG-CNN Model as a useful technique for precise and early diagnosis of TB.
APA, Harvard, Vancouver, ISO, and other styles
23

Banawan, Michelle P., Jinnie Shin, Tracy Arner, Renu Balyan, Walter L. Leite, and Danielle S. McNamara. "Shared Language: Linguistic Similarity in an Algebra Discussion Forum." Computers 12, no. 3 (February 27, 2023): 53. http://dx.doi.org/10.3390/computers12030053.

Full text of the source
Abstract:
Academic discourse communities and learning circles are characterized by collaboration, sharing commonalities in terms of social interactions and language. The discourse of these communities is composed of jargon, common terminologies, and similarities in how they construe and communicate meaning. This study examines the extent to which discourse reveals “shared language” among its participants that can promote inclusion or affinity. Shared language is characterized in terms of linguistic features and lexical, syntactical, and semantic similarities. We leverage a multi-method approach, including (1) feature engineering using state-of-the-art natural language processing techniques to select the most appropriate features, (2) the bag-of-words classification model to predict linguistic similarity, (3) explainable AI using the local interpretable model-agnostic explanations to explain the model, and (4) a two-step cluster analysis to extract innate groupings between linguistic similarity and emotion. We found that linguistic similarity within and between the threaded discussions was significantly varied, revealing the dynamic and unconstrained nature of the discourse. Further, word choice moderately predicted linguistic similarity between posts within threaded discussions (accuracy = 0.73; F1-score = 0.67), revealing that discourse participants’ lexical choices effectively discriminate between posts in terms of similarity. Lastly, cluster analysis reveals profiles that are distinctly characterized in terms of linguistic similarity, trust, and affect. Our findings demonstrate the potential role of linguistic similarity in supporting social cohesion and affinity within online discourse communities.
APA, Harvard, Vancouver, ISO, and other styles
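
Lexical similarity of the kind used in this study can be approximated with a bag-of-words cosine similarity between posts. The snippet below uses scikit-learn and invented forum posts; it sketches the general idea only, not the authors' feature-engineering or LIME analysis.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical discussion-forum posts (illustrative text only).
posts = [
    "I solved the equation by isolating x on the left side.",
    "You can isolate x first, then solve the equation step by step.",
    "My favourite part of the course is the group project.",
]

# Bag-of-words representation; cosine similarity approximates lexical overlap.
vectors = CountVectorizer().fit_transform(posts)
similarity = cosine_similarity(vectors)

for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        print(f"similarity(post {i}, post {j}) = {similarity[i, j]:.2f}")
```
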
24

Kolekar, Suresh, Shilpa Gite, Biswajeet Pradhan, and Abdullah Alamri. "Explainable AI in Scene Understanding for Autonomous Vehicles in Unstructured Traffic Environments on Indian Roads Using the Inception U-Net Model with Grad-CAM Visualization." Sensors 22, no. 24 (December 10, 2022): 9677. http://dx.doi.org/10.3390/s22249677.

Full text of the source
Abstract:
The intelligent transportation system, especially autonomous vehicles, has seen a lot of interest among researchers owing to the tremendous work in modern artificial intelligence (AI) techniques, especially deep neural learning. As a result of increased road accidents over the last few decades, significant industries are moving to design and develop autonomous vehicles. Understanding the surrounding environment is essential for understanding the behavior of nearby vehicles to enable the safe navigation of autonomous vehicles in crowded traffic environments. Several datasets are available for autonomous vehicles focusing only on structured driving environments. To develop an intelligent vehicle that drives in real-world traffic environments, which are unstructured by nature, there should be an availability of a dataset for an autonomous vehicle that focuses on unstructured traffic environments. Indian Driving Lite dataset (IDD-Lite), focused on an unstructured driving environment, was released as an online competition in NCPPRIPG 2019. This study proposed an explainable inception-based U-Net model with Grad-CAM visualization for semantic segmentation that combines an inception-based module as an encoder for automatic extraction of features and passes to a decoder for the reconstruction of the segmentation feature map. The black-box nature of deep neural networks failed to build trust within consumers. Grad-CAM is used to interpret the deep-learning-based inception U-Net model to increase consumer trust. The proposed inception U-net with Grad-CAM model achieves 0.622 intersection over union (IoU) on the Indian Driving Dataset (IDD-Lite), outperforming the state-of-the-art (SOTA) deep neural-network-based segmentation models.
APA, Harvard, Vancouver, ISO, and other styles
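
Grad-CAM, used here to interpret the inception U-Net, follows a short recipe: backpropagate the score of interest to a chosen convolutional feature map, average the gradients spatially to obtain channel weights, and form a ReLU-clipped weighted sum of the feature maps. A generic PyTorch sketch is given below; the model, layer choice, and segmentation-style target score are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_index):
    """Generic Grad-CAM: heatmap of `target_layer`'s contribution to the summed
    score of `class_index` (for segmentation, the class logit summed over pixels)."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.zero_grad()
    output = model(image)                      # (1, classes, H, W) for a segmentation net
    score = output[:, class_index].sum()       # scalar score to explain
    score.backward()
    h1.remove(); h2.remove()

    acts = activations["value"]                # (1, C, h, w) feature maps
    grads = gradients["value"]                 # (1, C, h, w) gradients w.r.t. the maps
    weights = grads.mean(dim=(2, 3), keepdim=True)            # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))   # weighted, clipped sum
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()               # normalised heatmap
```

For a segmentation network, target_layer would typically be the deepest encoder convolution, and the returned heatmap is overlaid on the input image to show which regions drove the class prediction.
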
25

Rodríguez Oconitrillo, Luis Raúl, Juan José Vargas, Arturo Camacho, Álvaro Burgos, and Juan Manuel Corchado. "RYEL: An Experimental Study in the Behavioral Response of Judges Using a Novel Technique for Acquiring Higher-Order Thinking Based on Explainable Artificial Intelligence and Case-Based Reasoning." Electronics 10, no. 12 (June 21, 2021): 1500. http://dx.doi.org/10.3390/electronics10121500.

Full text of the source
Abstract:
The need for studies connecting machine explainability with human behavior is essential, especially for a detailed understanding of a human’s perspective, thoughts, and sensations according to a context. A novel system called RYEL was developed based on Subject-Matter Experts (SME) to investigate new techniques for acquiring higher-order thinking, the perception, the use of new computational explanatory techniques, support decision-making, and the judge’s cognition and behavior. Thus, a new spectrum is covered and promises to be a new area of study called Interpretation-Assessment/Assessment-Interpretation (IA-AI), consisting of explaining machine inferences and the interpretation and assessment from a human. It allows expressing a semantic, ontological, and hermeneutical meaning related to the psyche of a human (judge). The system has an interpretative and explanatory nature, and in the future, could be used in other domains of discourse. More than 33 experts in Law and Artificial Intelligence validated the functional design. More than 26 judges, most of them specializing in psychology and criminology from Colombia, Ecuador, Panama, Spain, Argentina, and Costa Rica, participated in the experiments. The results of the experimentation have been very positive. As a challenge, this research represents a paradigm shift in legal data processing.
APA, Harvard, Vancouver, ISO, and other styles
26

Ivanisenko, Timofey V., Pavel S. Demenkov, and Vladimir A. Ivanisenko. "An Accurate and Efficient Approach to Knowledge Extraction from Scientific Publications Using Structured Ontology Models, Graph Neural Networks, and Large Language Models." International Journal of Molecular Sciences 25, no. 21 (November 3, 2024): 11811. http://dx.doi.org/10.3390/ijms252111811.

Full text of the source
Abstract:
The rapid growth of biomedical literature makes it challenging for researchers to stay current. Integrating knowledge from various sources is crucial for studying complex biological systems. Traditional text-mining methods often have limited accuracy because they don’t capture semantic and contextual nuances. Deep-learning models can be computationally expensive and typically have low interpretability, though efforts in explainable AI aim to mitigate this. Furthermore, transformer-based models have a tendency to produce false or made-up information—a problem known as hallucination—which is especially prevalent in large language models (LLMs). This study proposes a hybrid approach combining text-mining techniques with graph neural networks (GNNs) and fine-tuned large language models (LLMs) to extend biomedical knowledge graphs and interpret predicted edges based on published literature. An LLM is used to validate predictions and provide explanations. Evaluated on a corpus of experimentally confirmed protein interactions, the approach achieved a Matthews correlation coefficient (MCC) of 0.772. Applied to insomnia, the approach identified 25 interactions between 32 human proteins absent in known knowledge bases, including regulatory interactions between MAOA and 5-HT2C, binding between ADAM22 and 14-3-3 proteins, which is implicated in neurological diseases, and a circadian regulatory loop involving RORB and NR1D1. The hybrid GNN-LLM method analyzes biomedical literature efficiency to uncover potential molecular interactions for complex disorders. It can accelerate therapeutic target discovery by focusing expert verification on the most relevant automatically extracted information.
APA, Harvard, Vancouver, ISO, and other styles
27

Adarsh, Shivam, Elliott Ash, Stefan Bechtold, Barton Beebe, and Jeanne Fromer. "Automating Abercrombie: Machine‐learning trademark distinctiveness." Journal of Empirical Legal Studies 21, no. 4 (November 17, 2024): 826–60. http://dx.doi.org/10.1111/jels.12398.

Full text of the source
Abstract:
Trademark law protects marks to enable firms to signal their products' qualities to consumers. To qualify for protection, a mark must be able to identify and distinguish goods. US courts typically locate a mark on a "spectrum of distinctiveness"—known as the Abercrombie spectrum—that categorizes marks as fanciful, arbitrary, or suggestive, and thus as "inherently distinctive," or as descriptive or generic, and thus as not inherently distinctive. This article explores whether locating trademarks on the Abercrombie spectrum can be automated using current natural‐language processing techniques. Using about 1.5 million US trademark registrations between 2012 and 2019 as well as 2.2 million related USPTO office actions, the article presents a machine‐learning model that learns semantic features of trademark applications and predicts whether a mark is inherently distinctive. Our model can predict trademark actions with 86% accuracy overall, and it can identify subsets of trademark applications where it is highly certain in its predictions of distinctiveness. Using an eXplainable AI (XAI) algorithm, we further analyze which features in trademark applications drive our model's predictions. We then explore the practical and normative implications of our approach. On a practical level, we outline a decision‐support system that could, as a "robot trademark clerk," assist trademark experts in their determination of a trademark's distinctiveness. Such a system could also help trademark experts understand which features of a trademark application contribute the most toward a trademark's distinctiveness. On a theoretical level, we discuss the normative limits of the Abercrombie spectrum and propose to move beyond Abercrombie for trademarks whose distinctiveness is uncertain. We discuss how machine‐learning projects in the law not only inform us about the aspects of the legal system that may be automated in the future, but also force us to tackle normative tradeoffs that may be invisible otherwise.
APA, Harvard, Vancouver, ISO, and other styles
28

Del Gaizo, John, Curry Sherard, Khaled Shorbaji, Brett Welch, Roshan Mathi, and Arman Kilic. "Prediction of coronary artery bypass graft outcomes using a single surgical note: An artificial intelligence-based prediction model study." PLOS ONE 19, no. 4 (April 25, 2024): e0300796. http://dx.doi.org/10.1371/journal.pone.0300796.

Full text of the source
Abstract:
Background Healthcare providers currently calculate risk of the composite outcome of morbidity or mortality associated with a coronary artery bypass grafting (CABG) surgery through manual input of variables into a logistic regression-based risk calculator. This study indicates that automated artificial intelligence (AI)-based techniques can instead calculate risk. Specifically, we present novel numerical embedding techniques that enable NLP (natural language processing) models to achieve higher performance than the risk calculator using a single preoperative surgical note. Methods The most recent preoperative surgical consult notes of 1,738 patients who received an isolated CABG from July 1, 2014 to November 1, 2022 at a single institution were analyzed. The primary outcome was the Society of Thoracic Surgeons defined composite outcome of morbidity or mortality (MM). We tested three numerical-embedding techniques on the widely used TextCNN classification model: 1a) Basic embedding, treat numbers as word tokens; 1b) Basic embedding with a dataloader that Replaces out-of-context (ROOC) numbers with a tag, where context is defined as within a number of tokens of specified keywords; 2) ScaleNum, an embedding technique that scales in-context numbers via a learned sigmoid-linear-log function; and 3) AttnToNum, a ScaleNum-derivative that updates the ScaleNum embeddings via multi-headed attention applied to local context. Predictive performance was measured via area under the receiver operating characteristic curve (AUC) on holdout sets from 10 random-split experiments. For eXplainable-AI (X-AI), we calculate SHapley Additive exPlanation (SHAP) values at an ngram resolution (SHAP-N). While the analyses focus on TextCNN, we execute an analogous performance pipeline with a long short-term memory (LSTM) model to test if the numerical embedding advantage is robust to model architecture. Results A total of 567 (32.6%) patients had MM following CABG. The embedding performances are as follows with the TextCNN architecture: 1a) Basic, mean AUC 0.788 [95% CI (confidence interval): 0.768–0.809]; 1b) ROOC, 0.801 [CI: 0.788–0.815]; 2) ScaleNum, 0.808 [CI: 0.785–0.821]; and 3) AttnToNum, 0.821 [CI: 0.806–0.834]. The LSTM architecture produced a similar trend. Permutation tests indicate that AttnToNum outperforms the other embedding techniques, though not statistically significant verse ScaleNum (p-value of .07). SHAP-N analyses indicate that the model learns to associate low blood urine nitrate (BUN) and creatinine values with survival. A correlation analysis of the attention-updated numerical embeddings indicates that AttnToNum learns to incorporate both number magnitude and local context to derive semantic similarities. Conclusion This research presents both quantitative and clinical novel contributions. Quantitatively, we contribute two new embedding techniques: AttnToNum and ScaleNum. Both can embed strictly positive and bounded numerical values, and both surpass basic embeddings in predictive performance. The results suggest AttnToNum outperforms ScaleNum. With regards to clinical research, we show that AI methods can predict outcomes after CABG using a single preoperative note at a performance that matches or surpasses the current risk calculator. These findings reveal the potential role of NLP in automated registry reporting and quality improvement.
APA, Harvard, Vancouver, ISO, and other styles
29

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Full text of the source
Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building added knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
30

Wyatt, Lucie S., Lennard M. van Karnenbeek, Mark Wijkhuizen, Freija Geldof, and Behdad Dashtbozorg. "Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review." Applied Sciences 14, no. 18 (September 10, 2024): 8108. http://dx.doi.org/10.3390/app14188108.

Full text of the source
Abstract:
This review provides an overview of explainable AI (XAI) methods for oncological ultrasound image analysis and compares their performance evaluations. A systematic search of Medline, Embase, and Scopus between 25 March and 14 April 2024 identified 17 studies describing 14 XAI methods, including visualization, semantics, example-based, and hybrid functions. These methods primarily provided specific, local, and post hoc explanations. Performance evaluations focused on AI model performance, with limited assessment of explainability impact. Standardized evaluations incorporating clinical end-users are generally lacking. Enhanced XAI transparency may facilitate AI integration into clinical workflows. Future research should develop real-time methodologies and standardized quantitative evaluative metrics.
31

Darwiche, Adnan, and Pierre Marquis. "On Quantifying Literals in Boolean Logic and its Applications to Explainable AI." Journal of Artificial Intelligence Research 72 (October 11, 2021): 285–328. http://dx.doi.org/10.1613/jair.1.12756.

Full text of the source
Abstract:
Quantified Boolean logic results from adding operators to Boolean logic for existentially and universally quantifying variables. This extends the reach of Boolean logic by enabling a variety of applications that have been explored over the decades. The existential quantification of literals (variable states) and its applications have also been studied in the literature. In this paper, we complement this by introducing and studying universal literal quantification and its applications, particularly to explainable AI. We also provide a novel semantics for quantification, discuss the interplay between variable/literal and existential/universal quantification, and identify some classes of Boolean formulas and circuits on which quantification can be done efficiently. Literal quantification is more fine-grained than variable quantification as the latter can be defined in terms of the former, leading to a refinement of quantified Boolean logic with literal quantification as its primitive.
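For readers who want the background notation: existential and universal quantification of a variable x in a Boolean function f follow the Shannon expansion, ∃x.f = f|x=1 ∨ f|x=0 and ∀x.f = f|x=1 ∧ f|x=0, and the paper's literal quantification is a finer-grained primitive from which variable quantification can be derived. A small Python sketch of the variable-level operators (illustrative background only, not the paper's algorithms or its treatment of literals):

def exists(f, var):
    # Existential quantification of a variable: f|var=1 OR f|var=0.
    return lambda a: f({**a, var: True}) or f({**a, var: False})

def forall(f, var):
    # Universal quantification of a variable: f|var=1 AND f|var=0.
    return lambda a: f({**a, var: True}) and f({**a, var: False})

# f(x, y) = x AND y; quantifying x away leaves a function of y alone.
f = lambda a: a["x"] and a["y"]
print([exists(f, "x")({"y": y}) for y in (False, True)])  # [False, True] -> equivalent to y
print([forall(f, "x")({"y": y}) for y in (False, True)])  # [False, False] -> unsatisfiable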
32

He, Gaole, Agathe Balayn, Stefan Buijsman, Jie Yang, and Ujwal Gadiraju. "Opening the Analogical Portal to Explainability: Can Analogies Help Laypeople in AI-assisted Decision Making?" Journal of Artificial Intelligence Research 81 (September 19, 2024): 117–62. http://dx.doi.org/10.1613/jair.1.15118.

Full text of the source
Abstract:
Concepts are an important construct in semantics, based on which humans understand the world with various levels of abstraction. With the recent advances in explainable artificial intelligence (XAI), concept-level explanations are receiving an increasing amount of attention from the broad research community. However, laypeople may find such explanations difficult to digest due to the potential knowledge gap and the concomitant cognitive load. Inspired by prior work that has explored analogies and sensemaking, we argue that augmenting concept-level explanations with analogical inference information from commonsense knowledge can be a potential solution to tackle this issue. To investigate the validity of our proposition, we first designed an effective analogy-based explanation generation method and collected 600 analogy-based explanations from 100 crowd workers. Next, we proposed a set of structured dimensions for the qualitative assessment of such explanations, and conducted an empirical evaluation of the generated analogies with experts. Our findings revealed significant positive correlations between the qualitative dimensions of analogies and the perceived helpfulness of analogy-based explanations, suggesting the effectiveness of the dimensions. To understand the practical utility and the effectiveness of analogy-based explanations in assisting human decision-making, we conducted a follow-up empirical study (N = 280) on a skin cancer detection task with non-expert humans and an imperfect AI system. To this end, we designed a between-subjects study spanning five different experimental conditions with varying types of explanations. The results of our study confirmed that a knowledge gap can prevent participants from understanding concept-level explanations. Consequently, when only the target domain of our designed analogy-based explanation was provided (in a specific experimental condition), participants demonstrated relatively more appropriate reliance on the AI system. In contrast to our expectations, we found that analogies were not effective in fostering appropriate reliance. We carried out a qualitative analysis of the open-ended responses from participants in the study regarding their perceived usefulness of explanations and analogies. Our findings suggest that human intuition and the perceived plausibility of analogies may have played a role in affecting user reliance on the AI system. We also found that the understanding of commonsense explanations varied with the varying experience of the recipient user, which points out the need for further work on personalization when leveraging commonsense explanations. In summary, although we did not find quantitative support for our hypotheses around the benefits of using analogies, we found considerable qualitative evidence suggesting the potential of high-quality analogies in aiding non-expert users in their decision-making with AI assistance. These insights can inform the design of future methods for the generation and use of effective analogy-based explanations.
33

U, Vignesh, and Tushar Moolchandani. "Revolutionizing Autonomous Parking: GNN-Powered Slot Detection for Enhanced Efficiency." Interdisciplinary Journal of Information, Knowledge, and Management 19 (2024): 019. http://dx.doi.org/10.28945/5334.

Full text of the source
Abstract:
Aim/Purpose: Accurate detection of vacant parking spaces is crucial for autonomous parking. Deep learning, particularly Graph Neural Networks (GNNs), holds promise for addressing the challenges of diverse parking lot appearances and complex visual environments. Our GNN-based approach leverages the spatial layout of detected marking points in around-view images to learn robust feature representations that are resilient to occlusions and lighting variations. We demonstrate significant accuracy improvements on benchmark datasets compared to existing methods, showcasing the effectiveness of our GNN-based solution. Further research is needed to explore the scalability and generalizability of this approach in real-world scenarios and to consider the potential ethical implications of autonomous parking technologies. Background: GNNs offer a number of advantages over traditional parking spot detection methods. Unlike methods that treat objects as discrete entities, GNNs may leverage the inherent connections among parking markers (lines, dots) inside an image. This ability to exploit spatial connections leads to more accurate parking space detection, even in challenging scenarios with shifting illumination. Real-time applications are another area where GNNs exhibit promise, which is critical for autonomous vehicles. Their ability to intuitively understand linkages across marking sites may further simplify the process compared to traditional deep-learning approaches that need complex feature development. Furthermore, the proposed GNN model streamlines parking space recognition by potentially combining slot inference and marking point recognition in a single step. All things considered, GNNs present a viable method for obtaining stronger and more precise parking slot recognition, opening the door for autonomous car self-parking technology developments. Methodology: The proposed research introduces a novel, end-to-end trainable method for parking slot detection using bird’s-eye images and GNNs. The approach involves a two-stage process. First, a marking-point detector network is employed to identify potential parking markers, extracting features such as confidence scores and positions. After refining these detections, a marking-point encoder network extracts and embeds location and appearance information. The enhanced data is then loaded into a fully linked network, with each node representing a marker. An attentional GNN is then utilized to leverage the spatial relationships between neighbors, allowing for selective information aggregation and capturing intricate interactions. Finally, a dedicated entrance line discriminator network, trained on GNN outputs, classifies pairs of markers as potential entry lines based on learned node attributes. This multi-stage approach, evaluated on benchmark datasets, aims to achieve robust and accurate parking slot detection even in diverse and challenging environments. Contribution: The present study makes a significant contribution to the parking slot detection domain by introducing an attentional GNN-based approach that capitalizes on the spatial relationships between marking points for enhanced robustness. Additionally, the paper offers a fully trainable end-to-end model that eliminates the need for manual post-processing, thereby streamlining the process. Furthermore, the study reduces training costs by dispensing with the need for detailed annotations of marking point properties, thereby making it more accessible and cost-effective. 
Findings: The goal of this research is to present a unique approach to parking space recognition using GNNs and bird’s-eye photos. The study’s findings demonstrated significant improvements over earlier algorithms, with accuracy on par with the state-of-the-art DMPR-PS method. Moreover, the suggested method provides a fully trainable solution with less reliance on manually specified rules and more economical training needs. One crucial component of this approach is the GNN’s performance. By making use of the spatial correlations between marking locations, the GNN delivers greater accuracy and recall than a completely linked baseline. The GNN successfully learns discriminative features by separating paired marking points (creating parking spots) from unpaired ones, according to further analysis using cosine similarity. There are restrictions, though, especially where there are unclear markings. Successful parking slot identification in various circumstances proves the recommended method’s usefulness, with occasional failures in poor visibility conditions. Future work addresses these limitations and explores adapting the model to different image formats (e.g., side-view) and scenarios without relying on prior entry line information. An ablation study is conducted to investigate the impact of different backbone architectures on image feature extraction. The results reveal that VGG16 is optimal for balancing accuracy and real-time processing requirements. Recommendations for Practitioners: Developers of parking systems are encouraged to incorporate GNN-based techniques into their autonomous parking systems, as these methods exhibit enhanced accuracy and robustness when handling a wide range of parking scenarios. Furthermore, attention mechanisms within deep learning models can provide significant advantages for tasks that involve spatial relationships and contextual information in other vision-based applications. Recommendation for Researchers: Further research is necessary to assess the effectiveness of GNN-based methods in real-world situations. To obtain accurate results, it is important to employ large-scale datasets that include diverse lighting conditions, parking layouts, and vehicle types. Incorporating semantic information such as parking signs and lane markings into GNN models can enhance their ability to interpret and understand context. Moreover, it is crucial to address ethical concerns, including privacy, potential biases, and responsible deployment, in the development of autonomous parking technologies. Impact on Society: Optimized utilization of parking spaces can help cities manage parking resources efficiently, thereby reducing traffic congestion and fuel consumption. Automating parking processes can also enhance accessibility and provide safer and more convenient parking experiences, especially for individuals with disabilities. The development of dependable parking capabilities for autonomous vehicles can also contribute to smoother traffic flow, potentially reducing accidents and positively impacting society. Future Research: Developing and optimizing graph neural network-based models for real-time deployment in autonomous vehicles with limited resources is a critical objective. Investigating the integration of GNNs with other deep learning techniques for multi-modal parking slot detection, radar, and other sensors is essential for enhancing the understanding of the environment. 
Lastly, it is crucial to develop explainable AI methods to elucidate the decision-making processes of GNN models in parking slot detection, ensuring fairness, transparency, and responsible utilization of this technology.
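To make the attentional aggregation step concrete, the sketch below builds a tiny graph of detected marking points and refines their features with graph attention layers. It assumes PyTorch Geometric and invented feature dimensions; it is not the paper's network, and the marking-point detector, encoder, and entrance-line discriminator stages are omitted.

import torch
from torch_geometric.nn import GATConv

class MarkingPointGNNSketch(torch.nn.Module):
    # Two attention layers let each marking point aggregate information from
    # its neighbours before pairs of points are classified as entrance lines.
    def __init__(self, in_dim=64, hidden=64, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)

    def forward(self, x, edge_index):
        x = torch.relu(self.gat1(x, edge_index))
        return self.gat2(x, edge_index)          # refined node embeddings

# Toy input: 4 detected marking points, fully connected to each other
x = torch.randn(4, 64)                            # position + appearance features
src = [i for i in range(4) for j in range(4) if i != j]
dst = [j for i in range(4) for j in range(4) if i != j]
edge_index = torch.tensor([src, dst])
print(MarkingPointGNNSketch()(x, edge_index).shape)   # torch.Size([4, 64])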
34

Rajabi, Enayat, and Kobra Etminani. "Knowledge-graph-based explainable AI: A systematic review." Journal of Information Science, September 24, 2022, 016555152211128. http://dx.doi.org/10.1177/01655515221112844.

Full text of the source
Abstract:
In recent years, knowledge graphs (KGs) have been widely applied in various domains for different purposes. The semantic model of KGs can represent knowledge through a hierarchical structure based on classes of entities, their properties, and their relationships. The construction of large KGs can enable the integration of heterogeneous information sources and help Artificial Intelligence (AI) systems be more explainable and interpretable. This systematic review examines a selection of recent publications to understand how KGs are currently being used in eXplainable AI systems. To achieve this goal, we design a framework and divide the use of KGs into four categories: extracting features, extracting relationships, constructing KGs, and KG reasoning. We also identify where KGs are mostly used in eXplainable AI systems (pre-model, in-model, and post-model) according to the aforementioned categories. Based on our analysis, KGs have been mainly used in pre-model XAI for feature and relation extraction. They were also utilised for inference and reasoning in post-model XAI. We found several studies that leveraged KGs to explain the XAI models in the healthcare domain.
35

Vassiliades, Alexandros, Nick Bassiliades, and Theodore Patkos. "Argumentation and explainable artificial intelligence: a survey." Knowledge Engineering Review 36 (2021). http://dx.doi.org/10.1017/s0269888921000011.

Full text of the source
Abstract:
Argumentation and eXplainable Artificial Intelligence (XAI) are closely related, as in recent years, Argumentation has been used for providing Explainability to AI. Argumentation can show step by step how an AI System reaches a decision; it can provide reasoning over uncertainty and can find solutions when conflicting information is faced. In this survey, we elaborate on the topics of Argumentation and XAI combined, by reviewing all the important methods and studies, as well as implementations that use Argumentation to provide Explainability in AI. More specifically, we show how Argumentation can enable Explainability for solving various types of problems in decision-making, justification of an opinion, and dialogues. Subsequently, we elaborate on how Argumentation can help in constructing explainable systems in various application domains, such as Medical Informatics, Law, the Semantic Web, Security, Robotics, and some general-purpose systems. Finally, we present approaches that combine Machine Learning and Argumentation Theory, toward more interpretable predictive models.
36

V, Prasanna, Umarani S, Suganthi B, Ranjani V, Manigandan Thangaraju, and Uma Maheswari P. "Advanced Explainable AI: Self Attention Deep Neural Network of Text Classification." Journal of Machine and Computing, July 5, 2024, 586–93. http://dx.doi.org/10.53759/7669/jmc202404056.

Full text of the source
Abstract:
The classification of texts is a crucial component of the data retrieval mechanism. By utilizing a semantic representation of the text, the text vector sequence is condensed, reducing the temporal and spatial demands of the memory pattern. This process helps to clarify the context of the text, extract crucial feature information, and fuse these features to determine the classification outcome. The approach represents the preprocessed text data using character-level vectors. A self-attention mechanism is used to capture the interdependence of words in a text, allowing internal structural information to be extracted. Furthermore, the semantic characteristics of the text data are extracted independently by a Deep Convolutional Neural Network (DCNN) and a Bi-directional Gated Recurrent Unit (BiGRU) with a Soft-Attention mechanism, and these two distinct feature extraction outcomes are then merged. A Softmax layer is employed to categorize the deep-extracted attributes, and the accuracy of the classification model is further enhanced by including a uniform distribution component in the cross-entropy loss function. Our results demonstrate that the proposed explainable method outperforms the compared models in terms of accuracy and computing efficiency. To assess the effectiveness of the approach, we developed several baseline models and evaluated them comparatively.
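The dual-branch design described above (character-level embeddings, self-attention, a CNN branch and a BiGRU branch whose features are merged and classified with a smoothed cross-entropy loss) might be sketched roughly as follows. Dimensions are assumptions, and mean pooling stands in for the soft-attention step; this is not the authors' code.

import torch
import torch.nn as nn

class DualBranchTextSketch(nn.Module):
    def __init__(self, vocab=5000, dim=128, classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)                      # character-level vectors
        self.self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cnn = nn.Conv1d(dim, dim, kernel_size=3, padding=1)   # one-layer stand-in for the DCNN
        self.bigru = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * dim, classes)

    def forward(self, tokens):                                     # tokens: (batch, seq)
        x = self.embed(tokens)
        x, _ = self.self_attn(x, x, x)                             # interdependence of tokens
        cnn_feat = self.cnn(x.transpose(1, 2)).max(dim=2).values   # (batch, dim)
        gru_feat, _ = self.bigru(x)
        gru_feat = gru_feat.mean(dim=1)                            # stand-in for soft attention
        return self.out(torch.cat([cnn_feat, gru_feat], dim=1))

model = DualBranchTextSketch()
loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)  # approximates the uniform-distribution term
logits = model(torch.randint(0, 5000, (2, 50)))
print(logits.shape)                                  # torch.Size([2, 4])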
37

Sun, Changqi, Hao Xu, Yuntian Chen, and Dongxiao Zhang. "AS‐XAI: Self‐Supervised Automatic Semantic Interpretation for CNN." Advanced Intelligent Systems, September 30, 2024. http://dx.doi.org/10.1002/aisy.202400359.

Full text of the source
Abstract:
Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for “black‐box” deep learning models. However, it remains difficult for existing methods to achieve the trade‐off of the three key criteria in interpretability, namely, reliability, understandability, and usability, which hinder their practical applications. In this article, we propose a self‐supervised automatic semantic interpretable explainable artificial intelligence (AS‐XAI) framework, which utilizes transparent orthogonal embedding semantic extraction spaces and row‐centered principal component analysis (PCA) for global semantic interpretation of model decisions in the absence of human interference, without additional computational costs. In addition, the invariance of filter feature high‐rank decomposition is used to evaluate model sensitivity to different semantic concepts. Extensive experiments demonstrate that robust and orthogonal semantic spaces can be automatically extracted by AS‐XAI, providing more effective global interpretability for convolutional neural networks (CNNs) and generating human‐comprehensible explanations. The proposed approach offers broad fine‐grained extensible practical applications, including shared semantic interpretation under out‐of‐distribution (OOD) categories, auxiliary explanations for species that are challenging to distinguish, and classification explanations from various perspectives. In a systematic evaluation by users with varying levels of AI knowledge, AS‐XAI demonstrated superior “glass box” characteristics.
38

Pekar, Viktor, Marina Candi, Ahmad Beltagui, Nikolaos Stylos, and Wei Liu. "Explainable text-based features in predictive models of crowdfunding campaigns." Annals of Operations Research, January 12, 2024. http://dx.doi.org/10.1007/s10479-023-05800-w.

Full text of the source
Abstract:
Reward-Based Crowdfunding offers an opportunity for innovative ventures that would not be supported through traditional financing. A key problem for those seeking funding is understanding which features of a crowdfunding campaign will sway the decisions of a sufficient number of funders. Predictive models of fund-raising campaigns used in combination with Explainable AI methods promise to provide such insights. However, previous work on Explainable AI has largely focused on quantitative structured data. In this study, our aim is to construct explainable models of human decisions based on analysis of natural language text, thus contributing to a fast-growing body of research on the use of Explainable AI for text analytics. We propose a novel method to construct predictions based on text via semantic clustering of sentences, which, compared with traditional methods using individual words and phrases, allows complex meaning contained in the text to be operationalised. Using experimental evaluation, we compare our proposed method to keyword extraction and topic modelling, which have traditionally been used in similar applications. Our results demonstrate that the sentence clustering method produces features with significant predictive power, compared to keyword-based methods and topic models, but which are much easier to interpret for human raters. We furthermore conduct a SHAP analysis of the models incorporating sentence clusters, demonstrating concrete insights into the types of natural language content that influence the outcome of crowdfunding campaigns.
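A stripped-down version of the sentence-clustering feature construction could look like the sketch below: embed every sentence, cluster the embeddings, and describe each campaign by how often each cluster occurs, so the resulting features stay human-readable. The embedding model, cluster count, and toy texts are assumptions; the study's full pipeline and its SHAP analysis are not reproduced here.

import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

campaigns = [["We ship worldwide.", "Stretch goals unlock new colours."],
             ["Our team has ten years of experience.", "Rewards ship in May."]]

encoder = SentenceTransformer("all-MiniLM-L6-v2")          # assumed embedding model
sentences = [s for doc in campaigns for s in doc]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(encoder.encode(sentences))

# One row per campaign, one column per semantic sentence cluster
features = np.zeros((len(campaigns), 2))
doc_ids = [d for d, doc in enumerate(campaigns) for _ in doc]
for doc_id, cluster in zip(doc_ids, labels):
    features[doc_id, cluster] += 1
print(features)

In a full experiment these cluster-frequency features would feed the predictive model of campaign success, and a SHAP analysis over them would point back to interpretable groups of sentences rather than isolated keywords.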
39

Chari, Shruthi, Oshani Seneviratne, Mohamed Ghalwash, Sola Shirai, Daniel M. Gruen, Pablo Meyer, Prithwish Chakraborty, and Deborah L. McGuinness. "Explanation Ontology: A general-purpose, semantic representation for supporting user-centered explanations." Semantic Web, May 18, 2023, 1–31. http://dx.doi.org/10.3233/sw-233282.

Full text of the source
Abstract:
In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone in this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to make black-box or opaque machine learning models and their results more understandable by experts and non-expert users. In our previous work, to address the goal of supporting user-centered explanations that make model recommendations more explainable, we developed an Explanation Ontology (EO). The EO is a general-purpose representation that was designed to help system designers connect explanations to their underlying data and knowledge. This paper addresses the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes contributing to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important to model a larger set of fifteen literature-backed explanation types that are supported within the expanded EO. We build on these explanation type descriptions to show how to utilize the EO model to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO’s capabilities to provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them exemplar queries to explore content in the EO represented use cases. We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo and updated our resource website, https://tetherless-world.github.io/explanation-ontology, with supporting documentation. Overall, through the EO model, we aim to help system designers be better informed about explanations and support these explanations that can be composed, given their systems’ outputs from various AI models, including a mix of machine learning, logical and explainer models, and different types of data and knowledge available to their systems.
40

Daga, Enrico, and Paul Groth. "Data journeys: Explaining AI workflows through abstraction." Semantic Web, June 15, 2023, 1–27. http://dx.doi.org/10.3233/sw-233407.

Full text of the source
Abstract:
Artificial intelligence systems are not simply built on a single dataset or trained model. Instead, they are made by complex data science workflows involving multiple datasets, models, preparation scripts, and algorithms. Given this complexity, in order to understand these AI systems, we need to provide explanations of their functioning at higher levels of abstraction. To tackle this problem, we focus on the extraction and representation of data journeys from these workflows. A data journey is a multi-layered semantic representation of data processing activity linked to data science code and assets. We propose an ontology to capture the essential elements of a data journey and an approach to extract such data journeys. Using a corpus of Python notebooks from Kaggle, we show that we are able to capture high-level semantic data flow that is more compact than using the code structure itself. Furthermore, we show that introducing an intermediate knowledge graph representation outperforms models that rely only on the code itself. Finally, we report on a user survey to reflect on the challenges and opportunities presented by computational data journeys for explainable AI.
41

Liu, Pengyuan, Yan Zhang, and Filip Biljecki. "Explainable spatially explicit geospatial artificial intelligence in urban analytics." Environment and Planning B: Urban Analytics and City Science, September 29, 2023. http://dx.doi.org/10.1177/23998083231204689.

Full text of the source
Abstract:
Geospatial artificial intelligence (GeoAI) is proliferating in urban analytics, where graph neural networks (GNNs) have become one of the most popular methods in recent years. However, along with the success of GNNs, the black box nature of AI models has led to various concerns (e.g. algorithmic bias and model misuse) regarding their adoption in urban analytics, particularly when studying socio-economics where high transparency is a crucial component of social justice. Therefore, the desire for increased model explainability and interpretability has attracted increasing research interest. This article proposes an explainable spatially explicit GeoAI-based analytical method that combines a graph convolutional network (GCN) and a graph-based explainable AI (XAI) method, called GNNExplainer. Here, we showcase the ability of our proposed method in two studies within urban analytics: traffic volume prediction and population estimation in the tasks of a node classification and a graph classification, respectively. For these tasks, we used Street View Imagery (SVI), a trending data source in urban analytics. We extracted semantic information from the images and assigned them as features of urban roads. The GCN first provided reasonable predictions related to these tasks by encoding roads as nodes and their connectivities and networks as graphs. The GNNExplainer then offered insights into how certain predictions are made. Through such a process, practical insights and conclusions can be derived from the urban phenomena studied here. In this paper we also set out a path for developing XAI in future urban studies.
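The GCN-plus-GNNExplainer pairing can be illustrated with a toy road graph as below. This assumes a recent PyTorch Geometric release and synthetic features standing in for the SVI-derived ones; it is a sketch of the general recipe, not the study's models or data.

import torch
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

class RoadGCN(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=32, classes=3):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, classes)

    def forward(self, x, edge_index):
        return self.conv2(torch.relu(self.conv1(x, edge_index)), edge_index)

x = torch.randn(6, 16)                                   # per-road features (e.g. from SVI)
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4], [1, 0, 2, 1, 4, 3]])
model = RoadGCN()

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type="model",
    node_mask_type="attributes",
    edge_mask_type="object",
    model_config=dict(mode="multiclass_classification", task_level="node", return_type="raw"),
)
explanation = explainer(x, edge_index, index=2)          # which features and edges drove node 2?
print(explanation.node_mask.shape, explanation.edge_mask.shape)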
42

Liu, Yang, Xingchen Ding, Shun Peng, and Chengzhi Zhang. "Leveraging ChatGPT to optimize depression intervention through explainable deep learning." Frontiers in Psychiatry 15 (June 6, 2024). http://dx.doi.org/10.3389/fpsyt.2024.1383648.

Full text of the source
Abstract:
Introduction: Mental health issues bring a heavy burden to individuals and societies around the world. Recently, the large language model ChatGPT has demonstrated potential in depression intervention. The primary objective of this study was to ascertain the viability of ChatGPT as a tool for aiding counselors in their interactions with patients while concurrently evaluating its comparability to human-generated content (HGC).

Methods: We propose a novel framework that integrates state-of-the-art AI technologies, including ChatGPT, BERT, and SHAP, to enhance the accuracy and effectiveness of mental health interventions. ChatGPT generates responses to user inquiries, which are then classified using BERT to ensure the reliability of the content. SHAP is subsequently employed to provide insights into the underlying semantic constructs of the AI-generated recommendations, enhancing the interpretability of the intervention.

Results: Remarkably, our proposed methodology consistently achieved an impressive accuracy rate of 93.76%. We discerned that ChatGPT always employs a polite and considerate tone in its responses. It refrains from using intricate or unconventional vocabulary and maintains an impersonal demeanor. These findings underscore the potential significance of AI-generated content (AIGC) as an invaluable complementary component in enhancing conventional intervention strategies.

Discussion: This study illuminates the considerable promise offered by the utilization of large language models in the realm of healthcare. It represents a pivotal step toward advancing the development of sophisticated healthcare systems capable of augmenting patient care and counseling practices.
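The classify-then-explain part of such a framework can be sketched as below, with a public sentiment model standing in for the study's fine-tuned BERT classifier and a single hand-written reply standing in for ChatGPT output. The model name and the top_k argument are placeholder assumptions; only the BERT-plus-SHAP steps are shown.

import shap
from transformers import pipeline

# Stand-in classifier; the study fine-tunes BERT on its own labelled data.
classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english",
                      top_k=None)                # return scores for every class, as SHAP expects

replies = ["I'm sorry you're going through this; talking with someone you trust may help."]
explainer = shap.Explainer(classifier)           # SHAP builds a text masker from the pipeline
shap_values = explainer(replies)
print(shap_values[0].values.shape)               # per-token attributions for each class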
43

Colucci, Simona, Francesco M. Donini, and Eugenio Di Sciascio. "A review of reasoning characteristics of RDF‐based Semantic Web systems." WIREs Data Mining and Knowledge Discovery, March 28, 2024. http://dx.doi.org/10.1002/widm.1537.

Full text of the source
Abstract:
Presented as a research challenge in 2001, the Semantic Web (SW) is now a mature technology, used in several cross-domain applications. One of its key benefits is a formal semantics of its RDF data format, which enables a system to validate data, infer implicit knowledge by automated reasoning, and explain it to a user; yet the analysis presented here of 71 RDF-based SW systems (of which 17 are reasoners) reveals that the exploitation of such semantics varies a lot among SW applications. Since the simple enumeration of systems, each one with its characteristics, might result in a clueless listing, we borrow from Software Engineering the idea of a maturity model, and organize our classification around it. Our model has three orthogonal dimensions: treatment of blank nodes, degree of deductive capabilities, and explanation of results. For each dimension, we define 3–4 levels of increasing exploitation of semantics, corresponding to an increasingly sophisticated output in that dimension. Each system is then classified in each dimension, based on its documentation and published articles. The distribution of systems along each dimension is depicted in the graphical abstract. We deliberately exclude resource consumption (time and space) since it is a dimension not peculiar to SW. This article is categorized under: Fundamental Concepts of Data and Knowledge > Knowledge Representation; Fundamental Concepts of Data and Knowledge > Explainable AI.
44

"Semantic NLP Technologies in Information Retrieval Systems for Legal Research." Advances in Machine Learning & Artificial Intelligence 2, no. 1 (August 5, 2021). http://dx.doi.org/10.33140/amlai.02.01.05.

Full text of the source
Abstract:
Companies involved in providing legal research services to lawyers, such as LexisNexis or Westlaw, have rapidly incorporated natural language processing (NLP) into their database systems to deal with the massive amounts of legal texts contained within them. These NLP techniques, which perform analysis on natural language texts by taking advantage of methods developed in the fields of computational linguistics and artificial intelligence, have potential applications ranging from text summarization all the way to the prediction of court judgments. However, a potential concern with the use of this technology is that professionals will come to depend on systems, over which they have little control or understanding, as a source of knowledge. While recent strides in AI and deep learning have led to increased effectiveness in NLP techniques, the decision-making processes of these algorithms have progressively become less intuitive for humans to understand. Concerns about the interpretability of patented legal services such as LexisNexis are more pertinent than ever. The following survey of current NLP techniques shows that one potential avenue to make algorithms in NLP more explainable is to incorporate symbol-based methods that take advantage of knowledge models generated for specific domains. An example of this can be seen in NLP techniques developed to facilitate the retrieval of inventive information from patent applications.
45

Ibrahim, Rami, and M. Omair Shafiq. "Explainable Convolutional Neural Networks: A Taxonomy, Review, and Future Directions." ACM Computing Surveys, September 21, 2022. http://dx.doi.org/10.1145/3563691.

Full text of the source
Abstract:
Convolutional neural networks (CNNs) have shown promising results and have outperformed classical machine learning techniques in tasks such as image classification and object recognition. Their human-brain-like structure enabled them to learn sophisticated features while passing images through their layers. However, their lack of explainability led to the demand for interpretations to justify their predictions. Research on Explainable AI or XAI has gained momentum to provide knowledge and insights into neural networks. This study summarizes the literature to gain more understanding of explainability in CNNs (i.e., Explainable Convolutional Neural Networks). We classify models that have made efforts to improve the interpretability of CNNs. We present and discuss taxonomies for XAI models that modify CNN architecture, simplify CNN representations, analyze feature relevance, and visualize interpretations. We review various metrics used to evaluate XAI interpretations. In addition, we discuss the applications and tasks of XAI models. This focused and extensive survey develops a perspective on this area by addressing suggestions for overcoming XAI interpretation challenges, such as model generalization, unifying evaluation criteria, building robust models, and providing interpretations with semantic descriptions. Our taxonomy can be a reference to motivate future research in interpreting neural networks.
46

Chatterjee, Ayan, Michael A. Riegler, K. Ganesh, and Pål Halvorsen. "Stress management with HRV following AI, semantic ontology, genetic algorithm and tree explainer." Scientific Reports 15, no. 1 (February 17, 2025). https://doi.org/10.1038/s41598-025-87510-w.

Full text of the source
Abstract:
Heart Rate Variability (HRV) serves as a vital marker of stress levels, with lower HRV indicating higher stress. It measures the variation in the time between heartbeats and offers insights into health. Artificial intelligence (AI) research aims to use HRV data for accurate stress level classification, aiding early detection and well-being approaches. This study's objective is to create a semantic model of HRV features in a knowledge graph and develop an accurate, reliable, explainable, and ethical AI model for predictive HRV analysis. The SWELL-KW dataset, containing labeled HRV data for stress conditions, is examined. Various techniques such as feature selection and dimensionality reduction are explored to improve classification accuracy while minimizing bias. Different machine learning (ML) algorithms, including traditional and ensemble methods, are employed for analyzing both imbalanced and balanced HRV datasets. To address imbalances, various data formats and oversampling techniques such as SMOTE and ADASYN are experimented with. Additionally, a Tree-Explainer, specifically SHAP, is used to interpret and explain the models' classifications. The combination of genetic algorithm-based feature selection and classification using a Random Forest Classifier yields effective results for both imbalanced and balanced datasets, especially in analyzing non-linear HRV features. These optimized features play a crucial role in developing a stress management system within a Semantic framework. Introducing a domain ontology enhances data representation and knowledge acquisition. The consistency and reliability of the Ontology model are assessed using HermiT reasoners, with reasoning time as a performance measure. HRV serves as a significant indicator of stress, offering insights into its correlation with mental well-being. While HRV is non-invasive, its interpretation must integrate other stress assessments for a holistic understanding of an individual's stress response. Monitoring HRV can help evaluate stress management strategies and interventions, aiding individuals in maintaining well-being.
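A compact, self-contained version of the feature-selection-plus-explanation recipe is sketched below on synthetic data: a tiny genetic algorithm searches binary feature masks scored by cross-validated Random Forest accuracy, and SHAP's TreeExplainer then attributes the final model's predictions to the selected features. The data, population size, and GA operators are deliberately simplified assumptions, not the study's configuration.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                      # stand-in for HRV features
y = (X[:, 3] + X[:, 7] > 0).astype(int)             # stand-in stress labels

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(10, X.shape[1]))      # population of binary feature masks
for _ in range(5):                                   # selection + mutation only
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-5:]]
    children = parents.copy()
    children[rng.random(children.shape) < 0.1] ^= 1  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[int(np.argmax([fitness(ind) for ind in pop]))].astype(bool)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, best], y)
shap_values = shap.TreeExplainer(clf).shap_values(X[:, best])
print(best.nonzero()[0], np.shape(shap_values))      # selected features and SHAP array shape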
47

Achuthan, Krishnashree, Sasangan Ramanathan, Sethuraman Srinivas, and Raghu Raman. "Advancing cybersecurity and privacy with artificial intelligence: current trends and future research directions." Frontiers in Big Data 7 (December 5, 2024). https://doi.org/10.3389/fdata.2024.1497535.

Full text of the source
Abstract:
Introduction: The rapid escalation of cyber threats necessitates innovative strategies to enhance cybersecurity and privacy measures. Artificial Intelligence (AI) has emerged as a promising tool poised to enhance the effectiveness of cybersecurity strategies by offering advanced capabilities for intrusion detection, malware classification, and privacy preservation. However, this work addresses the significant lack of a comprehensive synthesis of AI's use in cybersecurity and privacy across the vast literature, aiming to identify existing gaps and guide further progress.

Methods: This study employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework for a comprehensive literature review, analyzing over 9,350 publications from 2004 to 2023. Utilizing BERTopic modeling, 14 key themes in AI-driven cybersecurity were identified. Topics were clustered and validated through a combination of algorithmic and expert-driven evaluations, focusing on semantic relationships and coherence scores.

Results: AI applications in cybersecurity are concentrated around intrusion detection, malware classification, federated learning in privacy, IoT security, UAV systems, and DDoS mitigation. Emerging fields such as adversarial machine learning, blockchain, and deep learning are gaining traction. Analysis reveals that AI's adaptability and scalability are critical for addressing evolving threats. Global trends indicate significant contributions from the US, India, UK, and China, highlighting geographical diversity in research priorities.

Discussion: While AI enhances cybersecurity efficacy, challenges such as computational resource demands, adversarial vulnerabilities, and ethical concerns persist. More research is emphasized in trustworthy AI, the standardization of AI-driven methods, and legislation for robust privacy protection, among other areas. The study also highlights key current and future areas of focus, including quantum machine learning, explainable AI, humanized AI integration, and deepfakes.
48

Mustafa, Ahmad, Saja Nakhleh, Rama Irsheidat, and Raneem Alruosan. "Interpreting Arabic Transformer Models: A Study on XAI Interpretability for Quranic Semantic Search Models." Jordanian Journal of Computers and Information Technology, 2024, 1. http://dx.doi.org/10.5455/jjcit.71-1704878720.

Full text of the source
Abstract:
Transformers have shown their effectiveness in various machine learning tasks. However, their "black box" nature often obscures their decision-making processes, particularly in Arabic, posing a barrier to their broader adoption and trust. This study delves into the interpretability of three Arabic transformer models that have been fine-tuned for semantic search tasks. Through a focused case study, we employ these models for retrieving information from the Holy Qur'an, leveraging Explainable AI (XAI) techniques—namely, LIME and SHAP—to shed light on the decision-making processes of these models. The paper underscores the unique challenges posed by the Qur'anic text and demonstrates how XAI can significantly boost the transparency and interpretability of semantic search systems for such complex text. Our findings reveal that applying XAI techniques to Arabic transformer models for Qur'anic content not only demystifies the models' internal mechanics but also makes the insights derived from them more accessible to a broader audience. This contribution is twofold: It enriches the field of XAI within the context of Arabic semantic search and illustrates the utility of these techniques in deepening our understanding of intricate religious documents. By providing this nuanced approach to the interpretability of Arabic transformer models in the domain of semantic search, our study underscores the potential of XAI to bridge the gap between advanced machine learning technologies and the nuanced needs of users seeking to explore complex texts like the Holy Qur'an. Our code is available at https://gist.github.com/a-mustafa/51fcacf30ecdf0c13ac91ad16fecfa89
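The LIME side of such an analysis can be sketched as follows, with a small English sentence-embedding model standing in for the fine-tuned Arabic transformers and cosine similarity to a fixed query standing in for the semantic-search relevance score. The model, query, and verse translation are toy assumptions; the point is only how word-level LIME attributions are obtained for a retrieval decision.

import numpy as np
from lime.lime_text import LimeTextExplainer
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")               # toy stand-in model
query_vec = encoder.encode(["patience in hardship"])

def predict_proba(texts):
    vecs = encoder.encode(list(texts))
    sims = util.cos_sim(vecs, query_vec).numpy().reshape(-1)
    relevant = (sims + 1) / 2                                   # map [-1, 1] to [0, 1]
    return np.column_stack([1 - relevant, relevant])

explainer = LimeTextExplainer(class_names=["not relevant", "relevant"])
exp = explainer.explain_instance("Indeed, with hardship comes ease.",
                                 predict_proba, num_features=5, num_samples=500)
print(exp.as_list())                                            # word-level contributions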
49

Black, Elizabeth, Martim Brandão, Oana Cocarascu, Bart De Keijzer, Yali Du, Derek Long, Michael Luck, et al. "Reasoning and interaction for social artificial intelligence." AI Communications, September 12, 2022, 1–17. http://dx.doi.org/10.3233/aic-220133.

Full text of the source
Abstract:
Current work on multi-agent systems at King’s College London is extensive, though largely based in two research groups within the Department of Informatics: the Distributed Artificial Intelligence (DAI) thematic group and the Reasoning & Planning (RAP) thematic group. DAI combines AI expertise with political and economic theories and data, to explore social and technological contexts of interacting intelligent entities. It develops computational models for analysing social, political and economic phenomena to improve the effectiveness and fairness of policies and regulations, and combines intelligent agent systems, software engineering, norms, trust and reputation, agent-based simulation, communication and provenance of data, knowledge engineering, crowd computing and semantic technologies, and algorithmic game theory and computational social choice, to address problems arising in autonomous systems, financial markets, privacy and security, urban living and health. RAP conducts research in symbolic models for reasoning involving argumentation, knowledge representation, planning, and other related areas, including development of logical models of argumentation-based reasoning and decision-making, and their usage for explainable AI and integration of machine and human reasoning, as well as combining planning and argumentation methodologies for strategic argumentation.
50

Silva, Vivian, Siegfried Handschuh, and André Freitas. "Recognizing and Justifying Text Entailment Through Distributional Navigation on Definition Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 26, 2018). http://dx.doi.org/10.1609/aaai.v32i1.11914.

Full text of the source
Abstract:
Text entailment, the task of determining whether a piece of text logically follows from another piece of text, has become an important component for many natural language processing tasks, such as question answering and information retrieval. For entailments requiring world knowledge, most systems still work as a "black box," providing a yes/no answer that doesn't explain the reasoning behind it. We propose an interpretable text entailment approach that, given a structured definition graph, uses a navigation algorithm based on distributional semantic models to find a path in the graph which links text and hypothesis. If such a path is found, it is used to provide a human-readable justification explaining why the entailment holds. Experiments show that the proposed approach presents results comparable to some well-established entailment algorithms, while also meeting Explainable AI requirements, supplying clear explanations that allow interpretation of the inference model.
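The navigation idea can be illustrated with a toy definition graph and hand-made vectors in place of a real distributional model, as below; the graph, vectors, and greedy strategy are assumptions for illustration, not the paper's algorithm.

import numpy as np
import networkx as nx

# Nodes are terms; an edge links a term to a term used in its definition.
graph = nx.Graph([("violin", "instrument"), ("violin", "string"),
                  ("instrument", "music"), ("music", "art")])
emb = {"violin": np.array([1.0, 0.2, 0.0]), "instrument": np.array([0.9, 0.5, 0.1]),
       "music": np.array([0.7, 1.0, 0.2]), "string": np.array([0.8, -0.5, 0.9]),
       "art": np.array([0.3, 0.9, 0.6])}   # made-up stand-ins for distributional vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def navigate(start, goal, max_steps=5):
    # Greedy walk from a text term toward a hypothesis term; the path that is
    # found doubles as a human-readable justification of the entailment.
    path = [start]
    while path[-1] != goal and len(path) <= max_steps:
        options = [n for n in graph.neighbors(path[-1]) if n not in path]
        if not options:
            return None
        path.append(max(options, key=lambda n: cosine(emb[n], emb[goal])))
    return path if path[-1] == goal else None

print(navigate("violin", "music"))   # ['violin', 'instrument', 'music']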