Journal articles on the topic 'XAI Interpretability'




Consult the top 49 journal articles for your research on the topic 'XAI Interpretability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (April 13, 2022): 3926. http://dx.doi.org/10.3390/app12083926.

Abstract:
Fake media, generated by methods such as deepfakes, have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability on deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI on image classification, we suggest the use of a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation using attribution scores can be provided.
2

Zerilli, John. "Explaining Machine Learning Decisions." Philosophy of Science 89, no. 1 (January 2022): 1–19. http://dx.doi.org/10.1017/psa.2021.13.

Abstract:
The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.
3

Veitch, Erik, and Ole Andreas Alsos. "Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles." Journal of Marine Science and Engineering 9, no. 11 (November 6, 2021): 1227. http://dx.doi.org/10.3390/jmse9111227.

Abstract:
Explainable Artificial Intelligence (XAI) for Autonomous Surface Vehicles (ASVs) addresses developers’ needs for model interpretation, understandability, and trust. As ASVs approach wide-scale deployment, these needs are expanded to include end user interactions in real-world contexts. Despite recent successes of technology-centered XAI for enhancing the explainability of AI techniques to expert users, these approaches do not necessarily carry over to non-expert end users. Passengers, other vessels, and remote operators will have XAI needs distinct from those of expert users targeted in a traditional technology-centered approach. We formulate a concept called ‘human-centered XAI’ to address emerging end user interaction needs for ASVs. To structure the concept, we adopt a model-based reasoning method for concept formation consisting of three processes: analogy, visualization, and mental simulation, drawing from examples of recent ASV research at the Norwegian University of Science and Technology (NTNU). The examples show how current research activities point to novel ways of addressing XAI needs for distinct end user interactions and underpin the human-centered XAI approach. Findings show how representations of (1) usability, (2) trust, and (3) safety make up the main processes in human-centered XAI. The contribution is the formation of human-centered XAI to help advance the research community’s efforts to expand the agenda of interpretability, understandability, and trust to include end user ASV interactions.
4

Dindorf, Carlo, Wolfgang Teufl, Bertram Taetz, Gabriele Bleser, and Michael Fröhlich. "Interpretability of Input Representations for Gait Classification in Patients after Total Hip Arthroplasty." Sensors 20, no. 16 (August 6, 2020): 4385. http://dx.doi.org/10.3390/s20164385.

Abstract:
Many machine learning models show black box characteristics and, therefore, a lack of transparency, interpretability, and trustworthiness. This strongly limits their practical application in clinical contexts. For overcoming these limitations, Explainable Artificial Intelligence (XAI) has shown promising results. The current study examined the influence of different input representations on a trained model’s accuracy, interpretability, as well as clinical relevancy using XAI methods. The gait of 27 healthy subjects and 20 subjects after total hip arthroplasty (THA) was recorded with an inertial measurement unit (IMU)-based system. Three different input representations were used for classification. Local Interpretable Model-Agnostic Explanations (LIME) was used for model interpretation. The best accuracy was achieved with automatically extracted features (mean accuracy Macc = 100%), followed by features based on simple descriptive statistics (Macc = 97.38%) and waveform data (Macc = 95.88%). Globally seen, sagittal movement of the hip, knee, and pelvis as well as transversal movement of the ankle were especially important for this specific classification task. The current work shows that the type of input representation crucially determines interpretability as well as clinical relevance. A combined approach using different forms of representations seems advantageous. The results might assist physicians and therapists finding and addressing individual pathologic gait patterns.
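As an illustration of the kind of local, post hoc explanation described in this abstract, the following is a minimal sketch of applying LIME to a tabular classifier. It is not the authors' code: the random feature matrix, the RandomForest model, and the feature and class names are placeholders standing in for the IMU-derived gait features and the healthy/THA labels.

```python
# Minimal sketch: explaining one prediction of a tabular classifier with LIME.
# Data, model, and names are placeholders, not the IMU gait data from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                      # stand-in for extracted gait features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)      # stand-in for healthy vs. THA labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feature_{i}" for i in range(X.shape[1])],
    class_names=["healthy", "THA"],
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=4)
print(explanation.as_list())                       # local feature weights for this subject
```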
5

Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.

Abstract:
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
6

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
7

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Explainable Artificial Intelligence (XAI) has successfully incorporated the interpretability and explicability in the complex models. Furthermore, the post hoc XAI model has enabled the interpretability without affecting the performance of the models. This study aimed to propose an Explainable Artificial Intelligence (XAI) model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. In the current study, initially several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and Cat Boost (CB) algorithms, were implemented and found that XGB outperformed the other classifiers. Furthermore, the post hoc XAI global surrogate model (Shapley additive explanations) and local surrogate LIME were used to generate the explanation of the XGB prediction. Two sets of experiments were performed; initially the model was executed using a preprocessed dataset and later with selected features using the Sequential Forward Feature selection algorithm. The results demonstrate that ML algorithms were able to distinguish benign and malicious domains with overall accuracy ranging from 0.8479 to 0.9856. The ensemble classifier XGB achieved the highest result, with an AUC and accuracy of 0.9991 and 0.9856, respectively, before the feature selection algorithm, while there was an AUC of 0.999 and accuracy of 0.9818 after the feature selection algorithm. The proposed model outperformed the benchmark study.
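The post hoc workflow described here (train a boosted-tree classifier, then explain it with Shapley values) can be sketched in a few lines. The snippet below is a hypothetical illustration, not the study's implementation; the synthetic X and y stand in for the 45,000-domain feature set.

```python
# Minimal sketch: gradient-boosted classifier on tabular features plus global SHAP attributions.
# X and y are placeholders, not the malicious-domain dataset from the paper.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                                  # stand-in for lexical/DNS features
y = (X[:, 0] - X[:, 2] + rng.normal(size=500) > 0).astype(int)  # benign vs. malicious labels

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

explainer = shap.TreeExplainer(model)          # tree-specific Shapley value estimator
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, show=False)  # global importance and direction of effect
```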
8

Luo, Ru, Jin Xing, Lifu Chen, Zhouhao Pan, Xingmin Cai, Zengqi Li, Jielan Wang, and Alistair Ford. "Glassboxing Deep Learning to Enhance Aircraft Detection from SAR Imagery." Remote Sensing 13, no. 18 (September 13, 2021): 3650. http://dx.doi.org/10.3390/rs13183650.

Abstract:
Although deep learning has achieved great success in aircraft detection from SAR imagery, its blackbox behavior has been criticized for low comprehensibility and interpretability. Such challenges have impeded the trustworthiness and wide application of deep learning techniques in SAR image analytics. In this paper, we propose an innovative eXplainable Artificial Intelligence (XAI) framework to glassbox deep neural networks (DNN) by using aircraft detection as a case study. This framework is composed of three parts: hybrid global attribution mapping (HGAM) for backbone network selection, path aggregation network (PANet), and class-specific confidence scores mapping (CCSM) for visualization of the detector. HGAM integrates the local and global XAI techniques to evaluate the effectiveness of DNN feature extraction; PANet provides advanced feature fusion to generate multi-scale prediction feature maps; while CCSM relies on visualization methods to examine the detection performance with given DNN and input SAR images. This framework can select the optimal backbone DNN for aircraft detection and map the detection performance for better understanding of the DNN. We verify its effectiveness with experiments using Gaofen-3 imagery. Our XAI framework offers an explainable approach to design, develop, and deploy DNN for SAR image analytics.
9

Bogdanova, Alina, and Vitaly Romanov. "Explainable source code authorship attribution algorithm." Journal of Physics: Conference Series 2134, no. 1 (December 1, 2021): 012011. http://dx.doi.org/10.1088/1742-6596/2134/1/012011.

Abstract:
Source Code Authorship Attribution is a problem that has lately been studied more often due to improvements in Deep Learning techniques. Among existing solutions, two common issues are the inability to add new authors without retraining and a lack of interpretability. We address both of these problems. In our experiments, we were able to correctly classify 75% of authors across different programming languages. Additionally, we applied techniques of explainable AI (XAI) and found that our model seems to pay attention to distinctive features of source code.
10

Islam, Mir Riyanul, Mobyen Uddin Ahmed, Shaibal Barua, and Shahina Begum. "A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks." Applied Sciences 12, no. 3 (January 27, 2022): 1353. http://dx.doi.org/10.3390/app12031353.

Abstract:
Artificial intelligence (AI) and machine learning (ML) have recently been radically improved and are now being employed in almost every application domain to develop automated or semi-automated systems. To facilitate greater human acceptability of these systems, explainable artificial intelligence (XAI) has experienced significant growth over the last couple of years with the development of highly accurate models but with a paucity of explainability and interpretability. The literature shows evidence from numerous studies on the philosophy and methodologies of XAI. Nonetheless, there is an evident scarcity of secondary studies in connection with the application domains and tasks, let alone review studies following prescribed guidelines, that can enable researchers’ understanding of the current trends in XAI, which could lead to future research for domain- and application-specific method development. Therefore, this paper presents a systematic literature review (SLR) on the recent developments of XAI methods and evaluation metrics concerning different application domains and tasks. This study considers 137 articles published in recent years and identified through the prominent bibliographic databases. This systematic synthesis of research articles resulted in several analytical findings: XAI methods are mostly developed for safety-critical domains worldwide, deep learning and ensemble models are being exploited more than other types of AI/ML models, visual explanations are more acceptable to end-users and robust evaluation metrics are being developed to assess the quality of explanations. Research studies have been performed on the addition of explanations to widely used AI/ML models for expert users. However, more attention is required to generate explanations for general users from sensitive domains such as finance and the judicial system.
11

Linardatos, Pantelis, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. "Explainable AI: A Review of Machine Learning Interpretability Methods." Entropy 23, no. 1 (December 25, 2020): 18. http://dx.doi.org/10.3390/e23010018.

Abstract:
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
12

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract:
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in different areas, all in one place. This study clarifies how model interpretability and Explainable AI work together. This paper investigates Explainable artificial intelligence approaches and their applications in multiple domains. Specifically, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to solve the challenges faced by various businesses. The article thus illustrates the significance of explainable artificial intelligence across a vast number of disciplines.
13

Mankodiya, Harsh, Dhairya Jadav, Rajesh Gupta, Sudeep Tanwar, Wei-Chiang Hong, and Ravi Sharma. "OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles." Applied Sciences 12, no. 11 (May 24, 2022): 5310. http://dx.doi.org/10.3390/app12115310.

Abstract:
In recent years, artificial intelligence (AI) has become one of the most prominent fields in autonomous vehicles (AVs). With the help of AI, the stress levels of drivers have been reduced, as most of the work is executed by the AV itself. With the increasing complexity of models, explainable artificial intelligence (XAI) techniques work as handy tools that allow naive people and developers to understand the intricate workings of deep learning models. These techniques can be paralleled to AI to increase their interpretability. One essential task of AVs is to be able to follow the road. This paper attempts to justify how AVs can detect and segment the road on which they are moving using deep learning (DL) models. We trained and compared three semantic segmentation architectures for the task of pixel-wise road detection. Max IoU scores of 0.9459 and 0.9621 were obtained on the train and test set. Such DL algorithms are called “black box models” as they are hard to interpret due to their highly complex structures. Integrating XAI enables us to interpret and comprehend the predictions of these abstract models. We applied various XAI methods and generated explanations for the proposed segmentation model for road detection in AVs.
14

de Lange, Petter Eilif, Borger Melsom, Christian Bakke Vennerød, and Sjur Westgaard. "Explainable AI for Credit Assessment in Banks." Journal of Risk and Financial Management 15, no. 12 (November 28, 2022): 556. http://dx.doi.org/10.3390/jrfm15120556.

Abstract:
Banks’ credit scoring models are required by financial authorities to be explainable. This paper proposes an explainable artificial intelligence (XAI) model for predicting credit default on a unique dataset of unsecured consumer loans provided by a Norwegian bank. We combined a LightGBM model with SHAP, which enables the interpretation of explanatory variables affecting the predictions. The LightGBM model clearly outperforms the bank’s actual credit scoring model (Logistic Regression). We found that the most important explanatory variables for predicting default in the LightGBM model are the volatility of utilized credit balance, remaining credit in percentage of total credit and the duration of the customer relationship. Our main contribution is the implementation of XAI methods in banking, exploring how these methods can be applied to improve the interpretability and reliability of state-of-the-art AI models. We also suggest a method for analyzing the potential economic value of an improved credit scoring model.
15

Mankodiya, Harsh, Dhairya Jadav, Rajesh Gupta, Sudeep Tanwar, Abdullah Alharbi, Amr Tolba, Bogdan-Constantin Neagu, and Maria Simona Raboaca. "XAI-Fall: Explainable AI for Fall Detection on Wearable Devices Using Sequence Models and XAI Techniques." Mathematics 10, no. 12 (June 9, 2022): 1990. http://dx.doi.org/10.3390/math10121990.

Abstract:
A fall detection system is vital for the safety of older people, as it contacts emergency services when it detects a person has fallen. There have been various approaches to detect falls, such as using a single tri-axial accelerometer to detect falls or fixing sensors on the walls of a room to detect falls in a particular area. These approaches have two major drawbacks: either (i) they use a single sensor, which is insufficient to detect falls, or (ii) they are attached to a wall that does not detect a person falling outside its region. Hence, to provide a robust method for detecting falls, the proposed approach uses three different sensors for fall detection, which are placed at five different locations on the subject’s body to gather the data used for training purposes. The UMAFall dataset is used to attain sensor readings to train the models for fall detection. Five models are trained, corresponding to the five sensor locations, and a majority voting classifier is used to determine the output. Accuracies of 93.5%, 93.5%, 97.2%, 94.6%, and 93.1% are achieved by the five sensor models, and 92.54% is the overall accuracy achieved by the majority voting classifier. The XAI technique called LIME is incorporated into the system in order to explain the model’s outputs and improve the model’s interpretability.
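A minimal sketch of the per-sensor ensemble idea described above follows. It is hypothetical: random arrays replace the UMAFall windows and a RandomForest stands in for the sequence models, but it shows how five location-specific classifiers can be combined by majority vote.

```python
# Minimal sketch: one classifier per sensor location, combined by majority vote.
# Random arrays stand in for the UMAFall sensor data; this is not the paper's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples, n_sensors, n_feat = 300, 5, 12
X = rng.normal(size=(n_samples, n_sensors, n_feat))   # one feature block per body location
y = rng.integers(0, 2, size=n_samples)                # 1 = fall, 0 = activity of daily living

idx_train, idx_test = train_test_split(np.arange(n_samples), random_state=0)

# Train one classifier per sensor location.
models = [
    RandomForestClassifier(random_state=s).fit(X[idx_train, s, :], y[idx_train])
    for s in range(n_sensors)
]

# Majority vote over the five per-sensor predictions.
votes = np.stack([m.predict(X[idx_test, s, :]) for s, m in enumerate(models)])
majority = (votes.sum(axis=0) >= 3).astype(int)
print("ensemble accuracy:", (majority == y[idx_test]).mean())
```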
16

Guleria, Pratiyush, Parvathaneni Naga Srinivasu, Shakeel Ahmed, Naif Almusallam, and Fawaz Khaled Alarfaj. "XAI Framework for Cardiovascular Disease Prediction Using Classification Techniques." Electronics 11, no. 24 (December 8, 2022): 4086. http://dx.doi.org/10.3390/electronics11244086.

Abstract:
Machine intelligence models are robust in classifying the datasets for data analytics and for predicting the insights that would assist in making clinical decisions. The models would assist in the disease prognosis and preliminary disease investigation, which is crucial for effective treatment. There is a massive demand for the interpretability and explainability of decision models in the present day. The models’ trustworthiness can be attained through deploying the ensemble classification models in the eXplainable Artificial Intelligence (XAI) framework. In the current study, the role of ensemble classifiers over the XAI framework for predicting heart disease from the cardiovascular datasets is carried out. There are 303 instances and 14 attributes in the cardiovascular dataset taken for the proposed work. The attribute characteristics in the dataset are categorical, integer, and real type and the associated task related to the dataset is classification. The classification techniques, such as the support vector machine (SVM), AdaBoost, K-nearest neighbor (KNN), bagging, logistic regression (LR), and naive Bayes, are considered for classification purposes. The experimental outcome of each of those algorithms is compared to each other and with the conventional way of implementing the classification models. The efficiency of the XAI-based classification models is reasonably fair, compared to the other state-of-the-art models, which are assessed using the various evaluation metrics, such as area under curve (AUC), receiver operating characteristic (ROC), sensitivity, specificity, and the F1-score. The performances of the XAI-driven SVM, LR, and naive Bayes are robust, with an accuracy of 89%, which is assumed to be reasonably fair, compared to the existing models.
17

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence." Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Abstract:
Recently, Artificial Intelligence (AI) has emerged as a field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and the ubiquity of demanding applications in a variety of critical domains such as medicine, defense, justice, and autonomous vehicles, there is an increasing need to provide domain experts with sound explanations for the results. All of these elements have contributed to Explainable Artificial Intelligence (XAI).
18

Adak, Anirban, Biswajeet Pradhan, and Nagesh Shukla. "Sentiment Analysis of Customer Reviews of Food Delivery Services Using Deep Learning and Explainable Artificial Intelligence: Systematic Review." Foods 11, no. 10 (May 21, 2022): 1500. http://dx.doi.org/10.3390/foods11101500.

Abstract:
During the COVID-19 crisis, customers’ preference in having food delivered to their doorstep instead of waiting in a restaurant has propelled the growth of food delivery services (FDSs). With all restaurants going online and bringing FDSs onboard, such as UberEATS, Menulog or Deliveroo, customer reviews on online platforms have become an important source of information about the company’s performance. FDS organisations aim to gather complaints from customer feedback and effectively use the data to determine the areas for improvement to enhance customer satisfaction. This work aimed to review machine learning (ML) and deep learning (DL) models and explainable artificial intelligence (XAI) methods to predict customer sentiments in the FDS domain. A literature review revealed the wide usage of lexicon-based and ML techniques for predicting sentiments through customer reviews in FDS. However, limited studies applying DL techniques were found due to the lack of the model interpretability and explainability of the decisions made. The key findings of this systematic review are as follows: 77% of the models are non-interpretable in nature, and organisations can argue for the explainability and trust in the system. DL models in other domains perform well in terms of accuracy but lack explainability, which can be achieved with XAI implementation. Future research should focus on implementing DL models for sentiment analysis in the FDS domain and incorporating XAI techniques to bring out the explainability of the models.
19

Lorente, Maria Paz Sesmero, Elena Magán Lopez, Laura Alvarez Florez, Agapito Ledezma Espino, José Antonio Iglesias Martínez, and Araceli Sanchis de Miguel. "Explaining Deep Learning-Based Driver Models." Applied Sciences 11, no. 8 (April 7, 2021): 3321. http://dx.doi.org/10.3390/app11083321.

Abstract:
Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with “black box” AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) appears to be able to address and solve this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing according to the development of AI techniques. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of an explainable intelligence technique in the understanding of some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reason for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of emotions and distractions in drivers. The results obtained are promising and show the capacity of explainable artificial intelligence techniques in the different tasks of the proposed environments.
20

Lee, Dongchan, Sangyoung Byeon, and Keewhan Kim. "An Inspection of CNN Model for Citrus Canker Image Classification Based on XAI: Grad-CAM." Korean Data Analysis Society 24, no. 6 (December 30, 2022): 2133–42. http://dx.doi.org/10.37727/jkdas.2022.24.6.2133.

Abstract:
With the rapid development of hardware performance and information processing technology, interest in processing unstructured data and creating value is increasing. Various types of AI architectures are being developed, and as the number of decision-making junctions in the models has grown exponentially, their performance has improved. However, complex model structures are a major obstacle to researchers' ability to interpret results and, unlike the rapid progress in model performance, progress on explanatory ability is slow. Explainable artificial intelligence (XAI) has emerged to solve this problem; it decomposes the model's black box to an understandable level to help improve interpretability and reliability. In this research, we approach the citrus ulcer (canker) disease image classification problem using a CNN model, and the final model showed approximately 97% accuracy. Then, to improve the reliability of the model and to determine the specific area of the image that played a major role in the final judgment, Gradient-weighted Class Activation Mapping (Grad-CAM), one of the XAI techniques, was applied. The inspection revealed that shapes outside the object of interest were not distinguished from the object and strongly influenced the prediction, so the unique shape of a specific object was the main cause of misclassification.
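For readers unfamiliar with Grad-CAM, the following is a generic sketch of the technique referenced above, following the standard gradient-weighted class-activation recipe. The MobileNetV2 backbone and the "Conv_1" layer name are illustrative assumptions, not the citrus-canker model from the study.

```python
# Minimal Grad-CAM sketch for a Keras CNN classifier; model and layer name are hypothetical.
import numpy as np
import tensorflow as tf

def grad_cam_heatmap(model, image, last_conv_layer_name):
    """Return a class-activation heatmap for the model's top predicted class."""
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(last_conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        top_class = tf.argmax(preds[0])
        score = preds[:, top_class]
    grads = tape.gradient(score, conv_out)               # d(score) / d(feature maps)
    channel_weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out[0] * channel_weights, axis=-1)  # weighted sum of feature maps
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalise heatmap to [0, 1]

# Usage with any image classifier, e.g. an untrained MobileNetV2 stand-in:
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3), classes=2)
heatmap = grad_cam_heatmap(model, np.zeros((224, 224, 3), dtype="float32"), "Conv_1")
```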
21

Lin, Yu-Sheng, Zhe-Yu Liu, Yu-An Chen, Yu-Siang Wang, Ya-Liang Chang, and Winston H. Hsu. "xCos: An Explainable Cosine Metric for Face Verification Task." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3s (October 31, 2021): 1–16. http://dx.doi.org/10.1145/3469288.

Abstract:
We study the XAI (explainable AI) on the face recognition task, particularly the face verification. Face verification has become a crucial task in recent days and it has been deployed to plenty of applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy for the face verification task. Beyond exceptional performances, deep face verification models need more interpretability so that we can trust the results they generate. In this article, we propose a novel similarity metric, called explainable cosine (xCos), that comes with a learnable module that can be plugged into most of the verification models to provide meaningful explanations. With the help of xCos, we can see which parts of the two input faces are similar, where the model pays its attention to, and how the local similarities are weighted to form the output xCos score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, not only resulting in providing novel and desirable model interpretability for face verification but also ensuring the accuracy as plugging into existing face recognition models.
22

Monje, Leticia, Ramón A. Carrasco, Carlos Rosado, and Manuel Sánchez-Montañés. "Deep Learning XAI for Bus Passenger Forecasting: A Use Case in Spain." Mathematics 10, no. 9 (April 23, 2022): 1428. http://dx.doi.org/10.3390/math10091428.

Abstract:
Time series forecasting of passenger demand is crucial for optimal planning of limited resources. For smart cities, passenger transport in urban areas is an increasingly important problem, because the construction of infrastructure is not the solution and the use of public transport should be encouraged. One of the most sophisticated techniques for time series forecasting is Long Short Term Memory (LSTM) neural networks. These deep learning models are very powerful for time series forecasting but are not interpretable by humans (black-box models). Our goal was to develop a predictive and linguistically interpretable model, useful for decision making using large volumes of data from different sources. Our case study was one of the most in-demand bus lines in Madrid. We obtained an interpretable model from the LSTM neural network using a surrogate model and the 2-tuple fuzzy linguistic model, which improves the linguistic interpretability of the generated Explainable Artificial Intelligence (XAI) model without losing precision.
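The LSTM forecasting step described above can be illustrated with a short, self-contained sketch. The synthetic daily-cycle series below is a stand-in for the Madrid bus-line demand data, and the network size and training budget are arbitrary assumptions.

```python
# Minimal sketch: LSTM forecasting on a univariate demand series (placeholder data).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
series = 100 + 10 * np.sin(np.arange(1000) * 2 * np.pi / 24) + rng.normal(0, 2, 1000)

lag = 24                                            # use the previous 24 steps as input
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])[..., np.newaxis]
y = series[lag:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(lag, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_step = model.predict(X[-1:])                   # forecast the next time step
```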
23

Xie, Yibing, Nichakorn Pongsakornsathien, Alessandro Gardi, and Roberto Sabatini. "Explanation of Machine-Learning Solutions in Air-Traffic Management." Aerospace 8, no. 8 (August 12, 2021): 224. http://dx.doi.org/10.3390/aerospace8080224.

Abstract:
Advances in the trusted autonomy of air-traffic management (ATM) systems are currently being pursued to cope with the predicted growth in air-traffic densities in all classes of airspace. Highly automated ATM systems relying on artificial intelligence (AI) algorithms for anomaly detection, pattern identification, accurate inference, and optimal conflict resolution are technically feasible and demonstrably able to take on a wide variety of tasks currently accomplished by humans. However, the opaqueness and inexplicability of most intelligent algorithms restrict the usability of such technology. Consequently, AI-based ATM decision-support systems (DSS) are foreseen to integrate eXplainable AI (XAI) in order to increase interpretability and transparency of the system reasoning and, consequently, build the human operators’ trust in these systems. This research presents a viable solution to implement XAI in ATM DSS, providing explanations that can be appraised and analysed by the human air-traffic control operator (ATCO). The maturity of XAI approaches and their application in ATM operational risk prediction is investigated in this paper, which can support both existing ATM advisory services in uncontrolled airspace (Classes E and F) and also drive the inflation of avoidance volumes in emerging performance-driven autonomy concepts. In particular, aviation occurrences and meteorological databases are exploited to train a machine learning (ML)-based risk-prediction tool capable of real-time situation analysis and operational risk monitoring. The proposed approach is based on the XGBoost library, which is a gradient-boost decision tree algorithm for which post-hoc explanations are produced by SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). Results are presented and discussed, and considerations are made on the most promising strategies for evolving the human–machine interactions (HMI) to strengthen the mutual trust between ATCO and systems. The presented approach is not limited only to conventional applications but also suitable for UAS-traffic management (UTM) and other emerging applications.
24

Khan, Irfan Ullah, Nida Aslam, Rana AlShedayed, Dina AlFrayan, Rand AlEssa, Noura A. AlShuail, and Alhawra Al Safwan. "A Proactive Attack Detection for Heating, Ventilation, and Air Conditioning (HVAC) System Using Explainable Extreme Gradient Boosting Model (XGBoost)." Sensors 22, no. 23 (November 27, 2022): 9235. http://dx.doi.org/10.3390/s22239235.

Abstract:
The advent of Industry 4.0 has revolutionized life enormously. There is a growing trend towards the Internet of Things (IoT), which has made life easier on the one hand and improved services on the other. However, it is also vulnerable to cybersecurity attacks. Therefore, there is a need for intelligent and reliable security systems that can proactively analyze the data generated by these devices and detect cybersecurity attacks. This study proposed a proactive interpretable prediction model using ML and explainable artificial intelligence (XAI) to detect different types of security attacks using the log data generated by heating, ventilation, and air conditioning (HVAC) systems. Several ML algorithms were used, such as Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), Ada Boost (AB), Light Gradient Boosting (LGBM), Extreme Gradient Boosting (XGBoost), and CatBoost (CB). Furthermore, feature selection was performed using the stepwise forward feature selection (FFS) technique. To alleviate the data imbalance, SMOTE and Tomek links were used; SMOTE achieved the best results with the selected features. Empirical experiments were conducted, and the results showed that the XGBoost classifier produced the best results, with an Area Under the Curve (AUC) of 0.9999, accuracy (ACC) of 0.9998, recall of 0.9996, precision of 1.000, and F1 score of 0.9998. Additionally, XAI was applied to the best-performing model to add interpretability to the black-box model. Local and global explanations were generated using LIME and SHAP. The results of the proposed study confirm the effectiveness of ML for predicting cybersecurity attacks on IoT devices and Industry 4.0 systems.
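The class-imbalance handling mentioned above (SMOTE applied before boosting) is easy to illustrate. The sketch below uses placeholder data rather than the HVAC logs, and it omits Tomek links and feature selection for brevity.

```python
# Minimal sketch: oversample the rare attack class with SMOTE, then fit XGBoost.
# Data are placeholders, not the HVAC log dataset from the paper.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 8))                                   # stand-in for HVAC log features
y = ((X[:, 0] + rng.normal(size=1000)) > 1.8).astype(int)        # rare "attack" class (~10%)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)  # balance the classes
clf = XGBClassifier(n_estimators=300).fit(X_res, y_res)

print(classification_report(y_test, clf.predict(X_test)))
```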
25

Labaien Soto, Jokin, Ekhi Zugasti Uriguen, and Xabier De Carlos Garcia. "Real-Time, Model-Agnostic and User-Driven Counterfactual Explanations Using Autoencoders." Applied Sciences 13, no. 5 (February 24, 2023): 2912. http://dx.doi.org/10.3390/app13052912.

Abstract:
Explainable Artificial Intelligence (XAI) has gained significant attention in recent years due to concerns over the lack of interpretability of Deep Learning models, which hinders their decision-making processes. To address this issue, counterfactual explanations have been proposed to elucidate the reasoning behind a model’s decisions by providing what-if statements as explanations. However, generating counterfactuals traditionally involves solving an optimization problem for each input, making it impractical for real-time feedback. Moreover, counterfactuals must meet specific criteria, including being user-driven, causing minimal changes, and staying within the data distribution. To overcome these challenges, a novel model-agnostic approach called Real-Time Guided Counterfactual Explanations (RTGCEx) is proposed. This approach utilizes autoencoders to generate real-time counterfactual explanations that adhere to these criteria by optimizing a multiobjective loss function. The performance of RTGCEx has been evaluated on two datasets: MNIST and Gearbox, a synthetic time series dataset. The results demonstrate that RTGCEx outperforms traditional methods in terms of speed and efficacy on MNIST, while also effectively identifying and rectifying anomalies in the Gearbox dataset, highlighting its versatility across different scenarios.
26

Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "An Explanation Framework for Interpretable Credit Scoring." International Journal of Artificial Intelligence & Applications 12, no. 1 (January 31, 2021): 19–38. http://dx.doi.org/10.5121/ijaia.2021.12102.

Abstract:
With the recent surge of enthusiasm in Artificial Intelligence (AI) and Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. However, despite the ever-growing achievements, the biggest obstacle in most AI systems is their lack of interpretability. This deficiency of transparency limits their application in different domains including credit scoring. Credit scoring systems help financial experts make better decisions regarding whether or not to accept a loan application so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. A recently introduced concept is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations (i.e., global, local feature-based, and local instance-based) that are required by different people in different situations. Evaluation through the use of functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent as well as correct, effective, easy to understand, sufficiently detailed and trustworthy.
27

Zaman, Munawar, and Adnan Hassan. "Fuzzy Heuristics and Decision Tree for Classification of Statistical Feature-Based Control Chart Patterns." Symmetry 13, no. 1 (January 10, 2021): 110. http://dx.doi.org/10.3390/sym13010110.

Abstract:
Monitoring manufacturing process variation remains challenging, especially within a rapid and automated manufacturing environment. Problematic and unstable processes may produce distinct time series patterns that could be associated with assignable causes for diagnosis purposes. Various machine learning classification techniques such as artificial neural network (ANN), classification and regression tree (CART), and fuzzy inference system have been proposed to enhance the capability of the traditional Shewhart control chart for process monitoring and diagnosis. ANN classifiers are often opaque to the user, with limited interpretability of the classification procedures. However, fuzzy inference systems and CART are more transparent, and the internal steps are more comprehensible to users. There have been limited works comparing these two techniques in the control chart pattern recognition (CCPR) domain. As such, the aim of this paper is to demonstrate the development of fuzzy heuristics and the CART technique for CCPR and compare their classification performance. The results show that the heuristic Mamdani fuzzy classifier performed well in classification accuracy (95.76%), though slightly lower than the CART classifier (98.58%). This study opens opportunities for deeper investigation and provides a useful revisit to promote more studies into explainable artificial intelligence (XAI).
28

Amoroso, Nicola, Domenico Pomarico, Annarita Fanizzi, Vittorio Didonna, Francesco Giotta, Daniele La Forgia, Agnese Latorre, et al. "A Roadmap towards Breast Cancer Therapies Supported by Explainable Artificial Intelligence." Applied Sciences 11, no. 11 (May 26, 2021): 4881. http://dx.doi.org/10.3390/app11114881.

Abstract:
In recent years personalized medicine reached an increasing importance, especially in the design of oncological therapies. In particular, the development of patients’ profiling strategies suggests the possibility of promising rewards. In this work, we present an explainable artificial intelligence (XAI) framework based on an adaptive dimensional reduction which (i) outlines the most important clinical features for oncological patients’ profiling and (ii), based on these features, determines the profile, i.e., the cluster a patient belongs to. For these purposes, we collected a cohort of 267 breast cancer patients. The adopted dimensional reduction method determines the relevant subspace where distances among patients are used by a hierarchical clustering procedure to identify the corresponding optimal categories. Our results demonstrate how the molecular subtype is the most important feature for clustering. Then, we assessed the robustness of current therapies and guidelines; our findings show a striking correspondence between available patients’ profiles determined in an unsupervised way and either molecular subtypes or therapies chosen according to guidelines, which guarantees the interpretability characterizing explainable approaches to machine learning techniques. Accordingly, our work suggests the possibility to design data-driven therapies to emphasize the differences observed among the patients.
29

Hu, Hao, Marie-José Huguet, and Mohamed Siala. "Optimizing Binary Decision Diagrams with MaxSAT for Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 3767–75. http://dx.doi.org/10.1609/aaai.v36i4.20291.

Abstract:
The growing interest in explainable artificial intelligence (XAI) for critical decision making motivates the need for interpretable machine learning (ML) models. In fact, due to their structure (especially with small sizes), these models are inherently understandable by humans. Recently, several exact methods for computing such models have been proposed to overcome weaknesses of traditional heuristic methods by providing more compact models or better prediction quality. Despite their compressed representation of Boolean functions, Binary decision diagrams (BDDs) have not gained as much interest as other interpretable ML models. In this paper, we first propose SAT-based models for learning optimal BDDs (in terms of the number of features) that classify all input examples. Then, we lift the encoding to a MaxSAT model to learn optimal BDDs of limited depth that maximize the number of examples correctly classified. Finally, we tackle the fragmentation problem by introducing a method to merge compatible subtrees for the BDDs found via the MaxSAT model. Our empirical study shows clear benefits of the proposed approach in terms of prediction quality and interpretability (i.e., lighter size) compared to the state-of-the-art approaches.
30

Ali, Sikandar, Abdullah, Tagne Poupi Theodore Armand, Ali Athar, Ali Hussain, Maisam Ali, Muhammad Yaseen, Moon-Il Joo, and Hee-Cheol Kim. "Metaverse in Healthcare Integrated with Explainable AI and Blockchain: Enabling Immersiveness, Ensuring Trust, and Providing Patient Data Security." Sensors 23, no. 2 (January 4, 2023): 565. http://dx.doi.org/10.3390/s23020565.

Abstract:
Digitization and automation have always had an immense impact on healthcare. It embraces every new and advanced technology. Recently the world has witnessed the prominence of the metaverse which is an emerging technology in digital space. The metaverse has huge potential to provide a plethora of health services seamlessly to patients and medical professionals with an immersive experience. This paper proposes the amalgamation of artificial intelligence and blockchain in the metaverse to provide better, faster, and more secure healthcare facilities in digital space with a realistic experience. Our proposed architecture can be summarized as follows. It consists of three environments, namely the doctor’s environment, the patient’s environment, and the metaverse environment. The doctors and patients interact in a metaverse environment assisted by blockchain technology which ensures the safety, security, and privacy of data. The metaverse environment is the main part of our proposed architecture. The doctors, patients, and nurses enter this environment by registering on the blockchain and they are represented by avatars in the metaverse environment. All the consultation activities between the doctor and the patient will be recorded and the data, i.e., images, speech, text, videos, clinical data, etc., will be gathered, transferred, and stored on the blockchain. These data are used for disease prediction and diagnosis by explainable artificial intelligence (XAI) models. The GradCAM and LIME approaches of XAI provide logical reasoning for the prediction of diseases and ensure trust, explainability, interpretability, and transparency regarding the diagnosis and prediction of diseases. Blockchain technology provides data security for patients while enabling transparency, traceability, and immutability regarding their data. These features of blockchain ensure trust among the patients regarding their data. Consequently, this proposed architecture ensures transparency and trust regarding both the diagnosis of diseases and the data security of the patient. We also explored the building block technologies of the metaverse. Furthermore, we also investigated the advantages and challenges of a metaverse in healthcare.
31

Laios, Alexandros, Evangelos Kalampokis, Racheal Johnson, Amudha Thangavelu, Constantine Tarabanis, David Nugent, and Diederick De Jong. "Explainable Artificial Intelligence for Prediction of Complete Surgical Cytoreduction in Advanced-Stage Epithelial Ovarian Cancer." Journal of Personalized Medicine 12, no. 4 (April 10, 2022): 607. http://dx.doi.org/10.3390/jpm12040607.

Abstract:
Complete surgical cytoreduction (R0 resection) is the single most important prognosticator in epithelial ovarian cancer (EOC). Explainable Artificial Intelligence (XAI) could clarify the influence of static and real-time features in the R0 resection prediction. We aimed to develop an AI-based predictive model for the R0 resection outcome, apply a methodology to explain the prediction, and evaluate the interpretability by analysing feature interactions. The retrospective cohort finally assessed 571 consecutive advanced-stage EOC patients who underwent cytoreductive surgery. An eXtreme Gradient Boosting (XGBoost) algorithm was employed to develop the predictive model including mostly patient- and surgery-specific variables. The Shapley Additive explanations (SHAP) framework was used to provide global and local explainability for the predictive model. The XGBoost accurately predicted R0 resection (area under curve [AUC] = 0.866; 95% confidence interval [CI] = 0.8–0.93). We identified “turning points” that increased the probability of complete cytoreduction including Intraoperative Mapping of Ovarian Cancer Score and Peritoneal Carcinomatosis Index < 4 and <5, respectively, followed by Surgical Complexity Score > 4, patient’s age < 60 years, and largest tumour bulk < 5 cm in a surgical environment of optimized infrastructural support. We demonstrated high model accuracy for the R0 resection prediction in EOC patients and provided novel global and local feature explainability that can be used for quality control and internal audit.
32

Hussain, Sardar Mehboob, Domenico Buongiorno, Nicola Altini, Francesco Berloco, Berardino Prencipe, Marco Moschetta, Vitoantonio Bevilacqua, and Antonio Brunetti. "Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence." Applied Sciences 12, no. 12 (June 19, 2022): 6230. http://dx.doi.org/10.3390/app12126230.

Abstract:
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks including classification and staging of the various diseases. The 3D tomosynthesis imaging technique adds value to the CAD systems in diagnosis and classification of the breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify the lesion shapes to the respective classes using a similar imaging method. However, not only is the black box nature of these CNN models questionable in the healthcare domain, but so is the morphology-based cancer classification, which concerns clinicians. As a result, this study proposes both a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for the tomosynthesis breast lesion images. In this study, the authors exploit eight pretrained CNN architectures for the classification task on the previously extracted region-of-interest images containing the lesions. Additionally, the study also opens up the black box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, i.e., t-SNE and UMAP, are employed to investigate the pretrained models’ behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework by yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of the Grad-CAM and LIME methods, which can provide useful insights towards explainable CAD systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Eder, Matthias, Emanuel Moser, Andreas Holzinger, Claire Jean-Quartier, and Fleur Jeanquartier. "Interpretable Machine Learning with Brain Image and Survival Data." BioMedInformatics 2, no. 3 (September 6, 2022): 492–510. http://dx.doi.org/10.3390/biomedinformatics2030031.

Full text
Abstract:
Recent developments in research on artificial intelligence (AI) in medicine deal with the analysis of image data, such as Magnetic Resonance Imaging (MRI) scans, to support the decision-making of medical personnel. For this purpose, machine learning (ML) algorithms are often used which do not explain their internal decision-making process at all. Thus, it is often difficult to validate or interpret the results of the applied AI methods. This manuscript aims to overcome this problem by using methods of explainable AI (XAI) to interpret the decision-making of an ML algorithm in the use case of predicting the survival rate of patients with brain tumors based on MRI scans. We therefore explore the analysis of brain images together with survival data to predict survival in gliomas, with a focus on improving the interpretability of the results. Using the Brain Tumor Segmentation dataset BraTS 2020, we relied on a well-validated dataset for evaluation and on a convolutional neural network structure to improve the explainability of important features by adding Shapley overlays. The trained network models were used to evaluate SHapley Additive exPlanations (SHAP) directly and were not optimized for accuracy. The resulting overfitting of some network structures is therefore treated as a use case for the presented interpretation method. It is shown that the network structure can be validated by experts using visualizations, thus making the decision-making of the method interpretable. Our study highlights the feasibility of combining explainers with 3D voxels and the fact that the interpretation of prediction results significantly supports the evaluation of results. The Python implementation is available on GitLab as “XAIforBrainImgSurv”.
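The Shapley-overlay idea can be sketched with shap's gradient-based explainer. The toy example below uses random 2D "images" and a tiny untrained network as stand-ins for the BraTS 3D voxel data and the study's models; it only illustrates the mechanics of producing attribution maps that can be overlaid on scans.

```python
# Toy sketch of SHAP attribution maps for a CNN (random data and an untrained
# network stand in for the study's MRI volumes and trained models).
import matplotlib.pyplot as plt
import shap
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 2),
        )
    def forward(self, x):
        return self.net(x)

model = TinyCNN().eval()
images = torch.randn(32, 1, 64, 64)              # placeholder "scans"
background, to_explain = images[:24], images[24:28]

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(to_explain)  # attribution maps per output class

# Return format varies across shap versions: a list per class, or an array with a
# trailing class dimension. Pick the class-1 attributions for the first image.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
attr = sv[0, 0]

plt.imshow(to_explain[0, 0].numpy(), cmap="gray")
plt.imshow(attr, cmap="bwr", alpha=0.5)          # Shapley overlay on the scan
plt.colorbar(); plt.show()
```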
APA, Harvard, Vancouver, ISO, and other styles
35

Munkhdalai, Lkhagvadorj, Tsendsuren Munkhdalai, Pham Van Van Huy, Jang-Eui Hong, Keun Ho Ryu, and Nipon Theera-Umpon. "Neural Network-Augmented Locally Adaptive Linear Regression Model for Tabular Data." Sustainability 14, no. 22 (November 17, 2022): 15273. http://dx.doi.org/10.3390/su142215273.

Full text
Abstract:
Creating an interpretable model with high predictive performance is crucial in the eXplainable AI (XAI) field. In this study, we introduce an interpretable neural-network-based regression model for tabular data. Our proposed model uses ordinary least squares (OLS) regression as a base-learner and re-updates the base-learner’s parameters using a neural network, which acts as a meta-learner. The meta-learner updates the regression coefficients using the confidence interval formula. We extensively compared our proposed model to benchmark approaches on public datasets for the regression task. The results showed that our neural-network-based interpretable model outperformed the benchmark models. We also applied the proposed model to synthetic data to measure model interpretability and showed that it can explain the relationship between input and output variables by approximating the local linear function for each point. In addition, we trained our model on economic data to examine the relationship between the central bank policy rate and inflation over time. The results indicate that the effect of central bank policy rates on inflation tends to strengthen during a recession and weaken during an expansion. We also analysed CO2 emission data, and our model discovered some interesting relationships between input and target variables, such as a parabolic relationship between CO2 emissions and gross national product (GNP). Finally, these experiments showed that our proposed neural-network-based interpretable model is applicable to many real-world applications where the data are tabular and explainable models are required.
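The interpretability claim, approximating a local linear function at each point, can be illustrated generically. The sketch below is not the paper's neural meta-learner; it is a plain kernel-weighted local linear regression that yields a point-specific slope readable as a local explanation, with the bandwidth and data chosen arbitrarily.

```python
# Generic illustration (not the paper's method): kernel-weighted local linear
# regression returning a coefficient vector specific to a query point.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)     # nonlinear ground truth

def local_linear_coeffs(x0, X, y, bandwidth=0.5):
    """Weighted least squares around x0; returns (intercept, slope)."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xd = np.hstack([np.ones((len(X), 1)), X])             # design matrix with intercept
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

x0 = np.array([1.0])
intercept, slope = local_linear_coeffs(x0, X, y)
print(f"local slope at x = {x0[0]:.1f}: {slope:.3f} (true derivative cos(1) ~ {np.cos(1):.3f})")
```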
APA, Harvard, Vancouver, ISO, and other styles
36

Ali, Sikandar, Ali Hussain, Subrata Bhattacharjee, Ali Athar, Abdullah, and Hee-Cheol Kim. "Detection of COVID-19 in X-ray Images Using Densely Connected Squeeze Convolutional Neural Network (DCSCNN): Focusing on Interpretability and Explainability of the Black Box Model." Sensors 22, no. 24 (December 18, 2022): 9983. http://dx.doi.org/10.3390/s22249983.

Full text
Abstract:
The novel coronavirus (COVID-19), which emerged as a pandemic, has claimed many lives and affected millions of people across the world since December 2019. Although the disease is largely under control nowadays, it is still affecting people in many countries. The traditional way of diagnosis is time-consuming, less efficient, and has a low detection rate for this disease. Therefore, there is a need for an automatic system that expedites the diagnosis process while retaining performance and accuracy. Artificial intelligence (AI) technologies such as machine learning (ML) and deep learning (DL) can potentially provide powerful solutions to address this problem. In this study, a state-of-the-art CNN model, the densely connected squeeze convolutional neural network (DCSCNN), was developed for the classification of X-ray images of COVID-19, pneumonia, normal, and lung opacity patients. Data were collected from different sources. We applied different preprocessing techniques to enhance the quality of the images so that our model could learn accurately and give optimal performance. Moreover, the attention regions and decisions of the AI model were visualized using the Grad-CAM and LIME methods. The DCSCNN combines the strengths of the Dense and Squeeze networks. In our experiments, seven kinds of classification were performed: six binary classifications (COVID vs. normal, COVID vs. lung opacity, lung opacity vs. normal, COVID vs. pneumonia, pneumonia vs. lung opacity, pneumonia vs. normal) and one multiclass classification (COVID vs. pneumonia vs. lung opacity vs. normal). The main contributions of this paper are as follows. First, the development of the DCSCNN model, which is capable of performing binary as well as multiclass classification with excellent accuracy. Second, to ensure trust, transparency, and explainability, we applied two popular explainable AI (XAI) techniques, Grad-CAM and LIME, which helped to address the black-box nature of the model. Our proposed DCSCNN model achieved an accuracy of 98.8% for the classification of COVID-19 vs. normal, followed by COVID-19 vs. lung opacity: 98.2%, lung opacity vs. normal: 97.2%, COVID-19 vs. pneumonia: 96.4%, pneumonia vs. lung opacity: 95.8%, pneumonia vs. normal: 97.4%, and, for the multiclass classification of all four classes (COVID vs. pneumonia vs. lung opacity vs. normal), 94.7%. The DCSCNN model provides excellent classification performance, consequently helping doctors to diagnose diseases quickly and efficiently.
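Grad-CAM was sketched earlier in this list; LIME, the second XAI technique named here, can be sketched as follows. A torchvision ResNet-18 stands in for the DCSCNN (which is not publicly specified here), and "xray.png" is a placeholder path.

```python
# Sketch of LIME for an image classifier (assumption: ResNet-18 stands in for the
# DCSCNN described above; "xray.png" is a placeholder path).
import numpy as np
import torch
from lime import lime_image
from PIL import Image
from skimage.segmentation import mark_boundaries
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def predict_fn(images):
    """LIME passes a batch of HxWx3 arrays; return class probabilities."""
    batch = torch.stack([
        normalize(torch.tensor(im / 255.0, dtype=torch.float32).permute(2, 0, 1))
        for im in images
    ])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

image = np.array(Image.open("xray.png").convert("RGB").resize((224, 224)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn, top_labels=3,
                                         hide_color=0, num_samples=1000)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5,
                                           hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)   # superpixels supporting the top prediction
```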
APA, Harvard, Vancouver, ISO, and other styles
37

Tiwari, Rudra. "Explainable AI (XAI) and its Applications in Building Trust and Understanding in AI Decision Making." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 01 (January 27, 2023). http://dx.doi.org/10.55041/ijsrem17592.

Full text
Abstract:
In recent years, there has been a growing need for Explainable AI (XAI) to build trust and understanding in AI decision making. XAI is a field of AI research that focuses on developing algorithms and models that can be easily understood and interpreted by humans. The goal of XAI is to make the inner workings of AI systems transparent and explainable, which can help people to understand the reasoning behind the decisions made by AI and make better decisions. In this paper, we will explore the various applications of XAI in different domains such as healthcare, finance, autonomous vehicles, and legal and government decisions. We will also discuss the different techniques used in XAI such as feature importance analysis, model interpretability, and natural language explanations. Finally, we will examine the challenges and future directions of XAI research. This paper aims to provide an overview of the current state of XAI research and its potential impact on building trust and understanding in AI decision making.
APA, Harvard, Vancouver, ISO, and other styles
38

Nauta, Meike, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. "From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI." ACM Computing Surveys, February 24, 2023. http://dx.doi.org/10.1145/3583558.

Full text
Abstract:
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing black boxes has raised the question of how to evaluate explanations of machine learning (ML) models. While interpretability and explainability are often presented as subjectively validated binary properties, we consider explanation quality a multi-faceted concept. We identify 12 conceptual properties, such as Compactness and Correctness, that should be evaluated for comprehensively assessing the quality of an explanation. Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers published in the last 7 years at major AI and ML conferences that introduce an XAI method. We find that 1 in 3 papers evaluate exclusively with anecdotal evidence, and 1 in 5 papers evaluate with users. This survey also contributes to the call for objective, quantifiable evaluation methods by presenting an extensive overview of quantitative XAI evaluation methods. Our systematic collection of evaluation methods provides researchers and practitioners with concrete tools to thoroughly validate, benchmark, and compare new and existing XAI methods. The Co-12 categorization scheme and our identified evaluation methods open up opportunities to include quantitative metrics as optimization criteria during model training in order to optimize for accuracy and interpretability simultaneously.
APA, Harvard, Vancouver, ISO, and other styles
39

Xu, Fan, Li Jiang, Wenjing He, Guangyi Huang, Yiyi Hong, Fen Tang, Jian Lv, et al. "The Clinical Value of Explainable Deep Learning for Diagnosing Fungal Keratitis Using in vivo Confocal Microscopy Images." Frontiers in Medicine 8 (December 14, 2021). http://dx.doi.org/10.3389/fmed.2021.797616.

Full text
Abstract:
Background: Artificial intelligence (AI) has great potential to detect fungal keratitis using in vivo confocal microscopy (IVCM) images, but its clinical value remains unclarified. A major limitation of its clinical utility is the lack of explainability and interpretability. Methods: An explainable AI (XAI) system based on Gradient-weighted Class Activation Mapping (Grad-CAM) and Guided Grad-CAM was established. In this randomized controlled trial, nine ophthalmologists (three expert, three competent, and three novice ophthalmologists) read images in each of three conditions: unassisted, AI-assisted, or XAI-assisted. In the unassisted condition, only the original IVCM images were shown to the readers. AI assistance comprised a histogram of model prediction probability. For XAI assistance, explanatory maps were additionally shown. Accuracy, sensitivity, and specificity were calculated against an adjudicated reference standard, and the time spent was measured. Results: Both forms of algorithmic assistance significantly increased the accuracy and sensitivity of competent and novice ophthalmologists without reducing specificity. The improvement was more pronounced in the XAI-assisted condition than in the AI-assisted condition. Time spent with XAI assistance was not significantly different from that without assistance. Conclusion: AI has shown great promise in improving the accuracy of ophthalmologists. Inexperienced readers are more likely to benefit from the XAI system. With better interpretability and explainability, XAI assistance can boost ophthalmologist performance beyond what is achievable by the reader alone or with black-box AI assistance.
APA, Harvard, Vancouver, ISO, and other styles
40

Fleisher, Will. "Understanding, Idealization, and Explainable AI." Episteme, November 3, 2022, 1–27. http://dx.doi.org/10.1017/epi.2022.39.

Full text
Abstract:
Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. I argue for a unified account of these key concepts that treats the concept of understanding as fundamental. This allows resources from the philosophy of science and the epistemology of understanding to help guide opacity alleviation efforts. A first significant benefit of this understanding account is that it defuses one of the primary, in-principle objections to post hoc explainable AI (XAI) methods. This “rationalization objection” argues that XAI methods provide mere rationalizations rather than genuine explanations. This is because XAI methods involve using a separate “explanation” system to approximate the original black box system. These explanation systems function in a completely different way than the original system, yet XAI methods make inferences about the original system based on the behavior of the explanation system. I argue that, if we conceive of XAI methods as idealized scientific models, this rationalization worry is dissolved. Idealized scientific models misrepresent their target phenomena, yet are capable of providing significant and genuine understanding of their targets.
APA, Harvard, Vancouver, ISO, and other styles
41

Izumo, Takashi, and Yueh-Hsuan Weng. "Coarse ethics: how to ethically assess explainable artificial intelligence." AI and Ethics, September 12, 2021. http://dx.doi.org/10.1007/s43681-021-00091-y.

Full text
Abstract:
The integration of artificial intelligence (AI) into human society mandates that AI decision-making processes be explicable to users, as exemplified in Asimov’s Three Laws of Robotics. Such human interpretability calls for explainable AI (XAI), of which this paper cites various models. However, the relationship between computable accuracy and human interpretability can be a trade-off, requiring answers to questions about the negotiable conditions and the degree of AI prediction accuracy that may be sacrificed to enable user interpretability. The extant research has focussed on technical issues, but it is also desirable to apply a branch of ethics to deal with the trade-off problem. This scholarly domain is labelled coarse ethics in this study, which discusses two issues vis-à-vis AI prediction as a type of evaluation. First, which formal conditions would allow trade-offs? The study posits two minimal requisites: adequately high coverage and order preservation. The second issue concerns conditions that could justify the trade-off between computable accuracy and human interpretability, for which the study suggests two justification methods: impracticability and adjustment of perspective from machine-computable to human-interpretable. This study contributes by connecting ethics to autonomous systems for future regulation by formally assessing the adequacy of AI rationales.
APA, Harvard, Vancouver, ISO, and other styles
42

Lo, Shaw-Hwa, and Yiqiao Yin. "A novel interaction-based methodology towards explainable AI with better understanding of Pneumonia Chest X-ray Images." Discover Artificial Intelligence 1, no. 1 (December 2021). http://dx.doi.org/10.1007/s44163-021-00015-z.

Full text
Abstract:
In the field of eXplainable AI (XAI), robust “black-box” algorithms such as Convolutional Neural Networks (CNNs) are known for their high prediction performance. However, the ability to explain and interpret these algorithms still requires innovation in the understanding of influential and, more importantly, explainable features that directly or indirectly impact predictive performance. A number of existing methods in the literature focus on visualization techniques, but the concepts of explainability and interpretability still require rigorous definition. In view of the above needs, this paper proposes an interaction-based methodology, the Influence Score (I-score), to screen out noisy and non-informative variables in the images, thereby nourishing an environment with explainable and interpretable features that are directly associated with feature predictivity. The selected features with high I-score values can be considered a group of variables with interactive effects, hence the name interaction-based methodology. We apply the proposed method to a real-world application, the Pneumonia Chest X-ray Image dataset, and produce state-of-the-art results. We demonstrate how to apply the proposed approach to more general big data problems by improving explainability and interpretability without sacrificing prediction performance. The contribution of this paper opens a novel angle that moves the community closer to the future pipelines of XAI problems. In the investigation of the Pneumonia Chest X-ray Image data, the proposed method achieves 99.7% Area Under the Curve (AUC) using fewer than 20,000 parameters, while its peers, such as VGG16 and its upgraded versions, require at least millions of parameters to achieve on-par performance. Using I-score-selected explainable features allows a reduction of over 98% of parameters while delivering the same or even better prediction results.
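A schematic partition-based influence statistic in the spirit of the I-score can be sketched as below. The unnormalized form, the sum over partition cells of n_j^2 times the squared deviation of the cell mean response from the overall mean, is an assumption used here for illustration; the papers' exact standardization may differ. The toy data use an XOR interaction, which a marginal measure would miss.

```python
# Schematic partition-based influence statistic (the exact normalization used in the
# I-score literature may differ; this is an illustrative variant only).
import numpy as np

def influence_score(X_subset, y):
    """X_subset: discrete-valued feature columns; y: binary or continuous response."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    keys = [tuple(row) for row in np.asarray(X_subset)]   # partition by joint feature values
    score = 0.0
    for key in set(keys):
        idx = [i for i, k in enumerate(keys) if k == key]
        nj = len(idx)
        score += nj ** 2 * (y[idx].mean() - ybar) ** 2
    return score / (len(y) * y.var() + 1e-12)             # simple standardization

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 3))                     # binarized pixel-like features
y = (X[:, 0] ^ X[:, 1]).astype(float)                     # pure interaction effect
print("score of {X1, X2}:", round(influence_score(X[:, :2], y), 3))   # high: joint signal
print("score of {X3}    :", round(influence_score(X[:, 2:3], y), 3))  # near zero: noise
```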
APA, Harvard, Vancouver, ISO, and other styles
43

Joyce, Dan W., Andrey Kormilitzin, Katharine A. Smith, and Andrea Cipriani. "Explainable artificial intelligence for mental health through transparency and interpretability for understandability." npj Digital Medicine 6, no. 1 (January 18, 2023). http://dx.doi.org/10.1038/s41746-023-00751-9.

Full text
Abstract:
The literature on artificial intelligence (AI) or machine learning (ML) in mental health and psychiatry lacks consensus on what “explainability” means. In the more general XAI (eXplainable AI) literature, there has been some convergence on explainability meaning model-agnostic techniques that augment a complex model (with internal mechanics intractable for human understanding) with a simpler model argued to deliver results that humans can comprehend. Given the differing usage and intended meaning of the term “explainability” in AI and ML, we propose instead to approximate model/algorithm explainability by understandability, defined as a function of transparency and interpretability. These concepts are easier to articulate, to “ground” in our understanding of how algorithms and models operate, and are used more consistently in the literature. We describe the TIFU (Transparency and Interpretability For Understandability) framework and examine how this applies to the landscape of AI/ML in mental health research. We argue that the need for understandability is heightened in psychiatry because data describing the syndromes, outcomes, disorders and signs/symptoms possess probabilistic relationships to each other, as do the tentative aetiologies and multifactorial social and psychological determinants of disorders. If we develop and deploy AI/ML models, ensuring human understandability of the inputs, processes and outputs of these models is essential to develop trustworthy systems fit for deployment.
APA, Harvard, Vancouver, ISO, and other styles
44

Koo, Bon San, Seongho Eun, Kichul Shin, Hyemin Yoon, Chaelin Hong, Do-Hoon Kim, Seokchan Hong, et al. "Machine learning model for identifying important clinical features for predicting remission in patients with rheumatoid arthritis treated with biologics." Arthritis Research & Therapy 23, no. 1 (July 6, 2021). http://dx.doi.org/10.1186/s13075-021-02567-y.

Full text
Abstract:
Background: We developed a model to predict remissions in patients treated with biologic disease-modifying anti-rheumatic drugs (bDMARDs) and to identify important clinical features associated with remission using explainable artificial intelligence (XAI). Methods: We gathered the follow-up data of 1204 patients treated with bDMARDs (etanercept, adalimumab, golimumab, infliximab, abatacept, and tocilizumab) from the Korean College of Rheumatology Biologics and Targeted Therapy Registry. Remission was predicted at 1-year follow-up using baseline clinical data obtained at the time of enrollment. Machine learning methods (e.g., lasso, ridge, support vector machine, random forest, and XGBoost) were used for the predictions. The Shapley additive explanation (SHAP) value was used for interpretability of the predictions. Results: The ranges for accuracy and area under the receiver operating characteristic curve of the newly developed machine learning model for predicting remission were 52.8–72.9% and 0.511–0.694, respectively. The Shapley plot in XAI showed that the impacts of the variables on predicting remission differed for each bDMARD. The most important features were age for adalimumab, rheumatoid factor for etanercept, erythrocyte sedimentation rate for infliximab and golimumab, disease duration for abatacept, and C-reactive protein for tocilizumab, with mean SHAP values of −0.250, −0.234, −0.514, −0.227, −0.804, and 0.135, respectively. Conclusions: Our proposed machine learning model successfully identified clinical features that were predictive of remission for each of the bDMARDs. This approach may be useful for improving treatment outcomes by identifying clinical information related to remission in patients with rheumatoid arthritis.
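Comparing the model families listed in the Methods can be sketched with cross-validated AUC. The synthetic dataset and default settings below are placeholders for the registry data and the study's tuning; SHAP values could then be computed for the chosen model as in the XGBoost sketch earlier in this list.

```python
# Illustrative comparison of the model families named above on synthetic data
# (a synthetic dataset stands in for the registry; hyperparameters are defaults).
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1200, n_features=20, n_informative=8, random_state=0)

models = {
    "lasso (L1 logistic)": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    "ridge (L2 logistic)": LogisticRegression(penalty="l2", max_iter=1000),
    "support vector machine": SVC(probability=True),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "XGBoost": xgb.XGBClassifier(eval_metric="logloss", random_state=0),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:24s} mean AUC = {auc:.3f}")
```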
APA, Harvard, Vancouver, ISO, and other styles
45

ŞENGÖZ, Nilgün. "A Hybrid Approach for Detection and Classification of Sheep-Goat Pox Disease Using Deep Neural Networks." El-Cezeri Fen ve Mühendislik Dergisi, September 7, 2022. http://dx.doi.org/10.31202/ecjse.1159621.

Full text
Abstract:
Artificial intelligence and its sub-branches, machine learning and deep learning, have proven themselves in many different areas such as medical imaging systems, face recognition, and autonomous driving. Deep learning models in particular have become very popular today. Because deep learning models are very complex in nature, they are among the best examples of black-box models, which leaves the end user in doubt regarding interpretability and explainability. Therefore, methods for making such systems understandable through explainable artificial intelligence (XAI) have been widely developed in recent years. In this context, a hybrid method was developed in this study, and a classification study was carried out on a new and original dataset using different deep learning algorithms. Grad-CAM was applied to the VGG16 architecture, which achieved a classification accuracy of 99.643%, and heat maps were obtained for images pre-processed with the CLAHE method.
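The CLAHE preprocessing step mentioned here can be sketched with OpenCV. The file path, clip limit, and tile size below are illustrative assumptions, not the study's settings.

```python
# Sketch of CLAHE contrast enhancement as a preprocessing step
# ("lesion.jpg" is a placeholder path; parameters are illustrative).
import cv2

gray = cv2.imread("lesion.jpg", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)                     # locally equalized contrast
cv2.imwrite("lesion_clahe.jpg", enhanced)
# The enhanced image can then be fed to a pretrained CNN (e.g., VGG16) and
# inspected with Grad-CAM, as in the earlier sketch in this list.
```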
APA, Harvard, Vancouver, ISO, and other styles
46

"A Novel Approach to Adopt Explainable Artificial Intelligence in X-ray Image Classification." Advances in Machine Learning & Artificial Intelligence 3, no. 1 (January 25, 2022). http://dx.doi.org/10.33140/amlai.03.01.01.

Full text
Abstract:
Robust “black-box” algorithms such as Convolutional Neural Networks (CNNs) are known for their high prediction performance. However, the ability to explain and interpret these algorithms still requires innovation in the understanding of influential and, more importantly, explainable features that directly or indirectly impact predictive performance. In view of the above needs, this study proposes an interaction-based methodology, the Influence Score (I-score), to screen out noisy and non-informative variables in the images, thereby nourishing an environment with explainable and interpretable features that are directly associated with feature predictivity. We apply the proposed method to a real-world application, the Pneumonia Chest X-ray Image dataset, and produce state-of-the-art results. We demonstrate how to apply the proposed approach to more general big data problems by improving explainability and interpretability without sacrificing prediction performance. The contribution of this paper opens a novel angle that moves the community closer to the future pipelines of XAI problems.
APA, Harvard, Vancouver, ISO, and other styles
47

Hatwell, Julian, Mohamed Medhat Gaber, and R. Muhammad Atif Azad. "Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences." BMC Medical Informatics and Decision Making 20, no. 1 (October 2, 2020). http://dx.doi.org/10.1186/s12911-020-01201-2.

Full text
Abstract:
Background: Computer Aided Diagnostics (CAD) can support medical practitioners in making critical decisions about their patients’ disease conditions. Practitioners require access to the chain of reasoning behind CAD advice to build trust in it and to supplement their own expertise. Yet, CAD systems might be based on black-box machine learning models and high-dimensional data sources such as electronic health records, magnetic resonance imaging scans, cardiotocograms, etc. These foundations make interpretation and explanation of the CAD advice very challenging. This challenge is recognised throughout the machine learning research community. eXplainable Artificial Intelligence (XAI) is emerging as one of the most important research areas of recent years because it addresses the interpretability and trust concerns of critical decision makers, including those in clinical and medical practice. Methods: In this work, we focus on AdaBoost, a black-box model that has been widely adopted in the CAD literature. We address the challenge of explaining AdaBoost classification with a novel algorithm that extracts simple, logical rules from AdaBoost models. Our algorithm, Adaptive-Weighted High Importance Path Snippets (Ada-WHIPS), makes use of AdaBoost’s adaptive classifier weights. Using a novel formulation, Ada-WHIPS uniquely redistributes the weights among individual decision nodes of the internal decision trees of the AdaBoost model. A simple heuristic search of the weighted nodes then finds a single rule that dominated the model’s decision. We compare the explanations generated by our novel approach with the state of the art in an experimental study. We evaluate the derived explanations with simple statistical tests of well-known quality measures, precision and coverage, and a novel measure, stability, that is better suited to the XAI setting. Results: Experiments on 9 CAD-related datasets showed that Ada-WHIPS explanations consistently generalise better (mean coverage 15%–68%) than the state of the art while remaining competitive for specificity (mean precision 80%–99%). A very small trade-off in specificity is shown to guard against over-fitting, which is a known problem in the state-of-the-art methods. Conclusions: The experimental results demonstrate the benefits of using our novel algorithm for explaining the CAD AdaBoost classifiers widely found in the literature. Our tightly coupled, AdaBoost-specific approach outperforms model-agnostic explanation methods and should be considered by practitioners looking for an XAI solution for this class of models.
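The sketch below is not Ada-WHIPS itself; it only shows the AdaBoost internals the algorithm builds on, the per-round adaptive classifier weights and the internal decision trees whose paths can be turned into rules, using scikit-learn's standard attributes on a public dataset.

```python
# Not Ada-WHIPS: a look at the AdaBoost internals (per-round classifier weights and
# internal decision trees) that weight-redistributing rule extraction builds on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import export_text

data = load_breast_cancer()
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(data.data, data.target)

print("adaptive classifier weights (first 5 rounds):", ada.estimator_weights_[:5])
# Each base learner is a shallow decision tree whose paths can be read as rules
print(export_text(ada.estimators_[0], feature_names=list(data.feature_names)))
```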
APA, Harvard, Vancouver, ISO, and other styles
48

Beucher, Amélie, Christoffer B. Rasmussen, Thomas B. Moeslund, and Mogens H. Greve. "Interpretation of Convolutional Neural Networks for Acid Sulfate Soil Classification." Frontiers in Environmental Science 9 (January 19, 2022). http://dx.doi.org/10.3389/fenvs.2021.809995.

Full text
Abstract:
Convolutional neural networks (CNNs) were originally used for computer vision tasks such as image classification. While several digital soil mapping studies have assessed these deep learning algorithms for the prediction of soil properties, their potential for soil classification has not yet been explored. Moreover, the use of deep learning, and of neural networks in general, has often raised concerns because of their presumed low interpretability (i.e., the black-box pitfall). However, a recent and fast-developing sub-field of Artificial Intelligence (AI) called explainable AI (XAI) aims to clarify complex models such as CNNs in a systematic and interpretable manner. For example, model-agnostic interpretation methods can be applied to extract interpretations from any machine learning model. In particular, SHAP (SHapley Additive exPlanations) is a method for explaining individual predictions: SHAP values represent the contribution of a covariate to the final model predictions. The present study aimed at, first, evaluating the use of CNNs for the classification of potential acid sulfate soils located in the wetland areas of Jutland, Denmark (c. 6,500 km2), and second and most importantly, applying a model-agnostic interpretation method to the resulting CNN model. About 5,900 soil observations and 14 environmental covariates, including a digital elevation model and derived terrain attributes, were utilized as input data. The selected CNN model yielded slightly higher prediction accuracy than the random forest models using original or scaled covariates. These results can be explained by the use of a common variable selection method, namely recursive feature elimination, which was based on random forest and thus optimized the selection for that method. Notably, the SHAP method made it possible to clarify the CNN model predictions, in particular through the spatial interpretation of the most important covariates, which constitutes a crucial development for digital soil mapping.
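The covariate selection step mentioned here, recursive feature elimination driven by a random forest, can be sketched with scikit-learn. The synthetic data and the choice of 8 retained features below are placeholders for the study's 14 terrain-derived covariates and its selection settings.

```python
# Sketch of random-forest-based recursive feature elimination (synthetic covariates
# stand in for the digital elevation model and derived terrain attributes).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=1000, n_features=14, n_informative=6, random_state=0)

selector = RFE(estimator=RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=8, step=1)
selector.fit(X, y)
print("selected covariate indices:",
      [i for i, keep in enumerate(selector.support_) if keep])
```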
APA, Harvard, Vancouver, ISO, and other styles
49

Cui, Zhen, Yanlai Zhou, Shenglian Guo, Jun Wang, Huanhuan Ba, and Shaokun He. "A novel hybrid XAJ-LSTM model for multi-step-ahead flood forecasting." Hydrology Research, June 29, 2021. http://dx.doi.org/10.2166/nh.2021.016.

Full text
Abstract:
The conceptual hydrologic model has been widely used for flood forecasting, while the long short-term memory (LSTM) neural network has demonstrated a powerful ability to tackle time-series prediction. This study proposed a novel hybrid model combining the Xinanjiang (XAJ) conceptual model and the LSTM model (XAJ-LSTM) to achieve precise multi-step-ahead flood forecasts. The hybrid model takes flood forecasts of the XAJ model as input variables of the LSTM model to enhance the physical mechanism of hydrological modeling. Using the XAJ and LSTM models as benchmarks for comparison, the hybrid model was applied to the Lushui reservoir catchment in China. The results demonstrated that all three models could offer reasonable multi-step-ahead flood forecasts and that the XAJ-LSTM model not only effectively simulated the long-term dependence between precipitation and flood datasets, but also created more accurate forecasts than the XAJ and LSTM models. The hybrid model maintained similar forecast performance when fed with simulated flood values from the XAJ model across the forecast horizons. The study concludes that the XAJ-LSTM model, which integrates a conceptual model and machine learning, can raise the accuracy of multi-step-ahead flood forecasts while improving the interpretability of data-driven model internals.
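The hybrid idea, feeding a conceptual model's simulated discharge into an LSTM alongside precipitation, can be sketched generically. The XAJ model itself is not implemented below; its output is represented by a placeholder input channel, and the 24-step window and 6-step forecast horizon are illustrative assumptions, not the study's configuration.

```python
# Generic sketch of a hybrid conceptual-model + LSTM forecaster (the XAJ model is
# represented only by a placeholder input channel; window and horizon are illustrative).
import torch
import torch.nn as nn

class HybridLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=64, horizons=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizons)

    def forward(self, x):                     # x: (batch, timesteps, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # forecasts for t+1 ... t+horizons

# Features per time step: [precipitation, conceptual-model simulated discharge]
x = torch.randn(8, 24, 2)                     # placeholder batch of input sequences
model = HybridLSTM()
print(model(x).shape)                         # torch.Size([8, 6])
```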
APA, Harvard, Vancouver, ISO, and other styles
