To see the other types of publications on this topic, follow the link: Model-agnostic Explainability.

Journal articles on the topic 'Model-agnostic Explainability'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Model-agnostic Explainability.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Diprose, William K., Nicholas Buist, Ning Hua, Quentin Thurier, George Shand, and Reece Robinson. "Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator." Journal of the American Medical Informatics Association 27, no. 4 (February 27, 2020): 592–600. http://dx.doi.org/10.1093/jamia/ocz229.

Abstract:
Objective: Implementation of machine learning (ML) may be limited by patients’ right to “meaningful information about the logic involved” when ML influences healthcare decisions. Given the complexity of healthcare decisions, it is likely that ML outputs will need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods. Materials and Methods: We designed a survey for physicians with a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians were asked to rate their understanding, explainability, and trust in response to 3 different ML outputs. One ML output had no explanation of its logic (the control) and 2 ML outputs used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association. Results: The survey was sent to 1315 physicians, and 170 (13%) provided completed surveys. There were significant associations between physician understanding and explainability (P < .001), between physician understanding and trust (P < .001), and between explainability and trust (P < .001). ML outputs that used model-agnostic explainability methods were preferred by 88% of physicians when compared with the control condition; however, no particular ML explainability method had a greater influence on intended physician behavior. Conclusions: Physician understanding, explainability, and trust in ML risk calculators are related. Physicians preferred ML outputs accompanied by model-agnostic explanations but the explainability method did not alter intended physician behavior.
2

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where for the same prediction, different explanations can be generated. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster of the new instance that is being explained. After finding the relevant cluster, a simple model (i.e., linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority for Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
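
For readers who want to experiment with the deterministic-neighbourhood idea summarised above, the minimal Python sketch below uses scikit-learn's AgglomerativeClustering, NearestNeighbors and Ridge as stand-ins for the paper's AHC + KNN + linear-surrogate pipeline. The synthetic data, the cluster count and the dlime_style_explanation helper are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.neighbors import NearestNeighbors


def dlime_style_explanation(black_box, X_train, x, n_clusters=8):
    """Deterministic LIME-style surrogate: no random perturbation.

    1. Group the training data with agglomerative hierarchical clustering.
    2. Pick the cluster of the training point nearest to the instance x.
    3. Fit a linear surrogate on that cluster against the black-box outputs.
    """
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    _, idx = NearestNeighbors(n_neighbors=1).fit(X_train).kneighbors(x.reshape(1, -1))
    cluster_members = X_train[labels == labels[idx[0, 0]]]
    local_targets = black_box.predict_proba(cluster_members)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(cluster_members, local_targets)
    return surrogate.coef_  # per-feature local importance weights


# Toy data and black-box model (assumptions for the sketch).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(dlime_style_explanation(rf, X, X[0]).round(3))
```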
3

Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example." Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.

Abstract:
Aim: Machine learning tools have various applications in healthcare. However, the implementation of developed models is still limited because of various challenges. One of the most important problems is the lack of explainability of machine learning models. Explainability refers to the capacity to reveal the reasoning and logic behind the decisions made by AI systems, making it straightforward for human users to understand the process and how the system arrived at a specific outcome. The study aimed to compare the performance of different model-agnostic explanation methods using two different ML models created for HbA1c classification. Material and Method: The H2O AutoML engine was used for the development of two ML models (Gradient boosting machine (GBM) and default random forests (DRF)) using 3,036 records from NHANES open data set. Both global and local model-agnostic explanation methods, including performance metrics, feature important analysis and Partial dependence, Breakdown and Shapley additive explanation plots were utilized for the developed models. Results: While both GBM and DRF models have similar performance metrics, such as mean per class error and area under the receiver operating characteristic curve, they had slightly different variable importance. Local explainability methods also showed different contributions to the features. Conclusion: This study evaluated the significance of explainable machine learning techniques for comprehending complicated models and their role in incorporating AI in healthcare. The results indicate that although there are limitations to current explainability methods, particularly for clinical use, both global and local explanation models offer a glimpse into evaluating the model and can be used to enhance or compare models.
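
The study above pairs global feature-importance analysis with local Shapley-value plots. The hedged sketch below reproduces that workflow with a scikit-learn GradientBoostingClassifier and synthetic data standing in for the paper's H2O AutoML models and NHANES records; the shap and scikit-learn calls are standard, but the model choice and data are placeholder assumptions.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for the NHANES HbA1c features used in the paper.
X, y = make_classification(n_samples=3000, n_features=8, n_informative=5, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global explanation: the model's own impurity-based feature importances.
print("global importances:", gbm.feature_importances_.round(3))

# Local explanation: SHAP values for a single record (TreeExplainer is exact for tree ensembles).
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X[:1])
print("per-feature SHAP contributions for record 0:", shap_values.round(3))
```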
4

Ullah, Ihsan, Andre Rios, Vaibhav Gala, and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation." Applied Sciences 12, no. 1 (December 23, 2021): 136. http://dx.doi.org/10.3390/app12010136.

Abstract:
Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself for relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP with tabular datasets containing mixed data (categorical and numerical) using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show how LRP is more effective than traditional explainability concepts of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) for explainability. This effectiveness is both local to a sample level and holistic over the whole testing set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, thus opening up a new area of research of using XAI as an approach for feature subset selection.
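
Layer-wise Relevance Propagation itself is easy to demonstrate on a toy network. The sketch below applies the epsilon rule to a two-layer dense network with random weights on tabular input; it shows how an output score is redistributed back to input features, but it is not the authors' 1D-CNN pipeline, and the weights, epsilon value and network shape are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network on tabular input (random weights stand in for a trained model).
W1, b1 = rng.normal(size=(10, 8)), np.zeros(8)   # input (10 features) -> hidden (8 units)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # hidden -> single output score


def lrp_epsilon(x, eps=1e-6):
    """Epsilon-rule LRP: propagate the output score back onto the input features."""
    z1 = x @ W1 + b1                     # hidden pre-activations
    a1 = np.maximum(0.0, z1)             # ReLU activations
    z2 = a1 @ W2 + b2                    # output score (also the relevance at the output)

    # Output -> hidden layer: share relevance in proportion to each unit's contribution.
    s2 = z2 / (z2 + eps * np.where(z2 >= 0, 1.0, -1.0))
    r_hidden = a1 * (W2 @ s2)

    # Hidden -> input layer.
    s1 = r_hidden / (z1 + eps * np.where(z1 >= 0, 1.0, -1.0))
    return x * (W1 @ s1)                 # one relevance value per input feature


x = rng.normal(size=10)
print(lrp_epsilon(x).round(3))
```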
5

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are considered black box models that lack explainability, there is an increasing trend of attempting to create medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. Various XAI-driven healthcare approaches and their performances in the current study are discussed. The toolkits used in local and global post hoc explainability and the multiple techniques for explainability pertaining the Rational, Data, and Performance explainability are discussed in the current study. Methods. The explainability of the artificial intelligence model in the healthcare domain is implemented through the Local Interpretable Model-Agnostic Explanations and Shapley Additive Explanations for better comprehensibility of the internal working mechanism of the original AI models and the correlation among the feature set that influences decision of the model. Results. The current state-of-the-art XAI-based and future technologies through XAI are reported on research findings in various implementation aspects, including research challenges and limitations of existing models. The role of XAI in the healthcare domain ranging from the earlier prediction of future illness to the disease’s smart diagnosis is discussed. The metrics considered in evaluating the model’s explainability are presented, along with various explainability tools. Three case studies about the role of XAI in the healthcare domain with their performances are incorporated for better comprehensibility. Conclusion. The future perspective of XAI in healthcare will assist in obtaining research insight in the healthcare domain.
6

Lv, Ge, Chen Jason Zhang, and Lei Chen. "HENCE-X: Toward Heterogeneity-Agnostic Multi-Level Explainability for Deep Graph Networks." Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 2990–3003. http://dx.doi.org/10.14778/3611479.3611503.

Abstract:
Deep graph networks (DGNs) have demonstrated their outstanding effectiveness on both heterogeneous and homogeneous graphs. However their black-box nature does not allow human users to understand their working mechanisms. Recently, extensive efforts have been devoted to explaining DGNs' prediction, yet heterogeneity-agnostic multi-level explainability is still less explored. Since the two types of graphs are both irreplaceable in real-life applications, having a more general and end-to-end explainer becomes a natural and inevitable choice. In the meantime, feature-level explanation is often ignored by existing techniques, while topological-level explanation alone can be incomplete and deceptive. Thus, we propose a heterogeneity-agnostic multi-level explainer in this paper, named HENCE-X, which is a causality-guided method that can capture the non-linear dependencies of model behavior on the input using conditional probabilities. We theoretically prove that HENCE-X is guaranteed to find the Markov blanket of the explained prediction, meaning that all information that the prediction is dependent on is identified. Experiments on three real-world datasets show that HENCE-X outperforms state-of-the-art (SOTA) methods in generating faithful factual and counterfactual explanations of DGNs.
7

Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.

Abstract:
Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS classification. XCM is a new compact convolutional neural network which extracts information relative to the observed variables and time directly from the input data. Thus, XCM architecture enables a good generalization ability on both large and small datasets, while allowing the full exploitation of a faithful post hoc model-specific explainability method (Gradient-weighted Class Activation Mapping) by precisely identifying the observed variables and timestamps of the input data that are important for predictions. We first show that XCM outperforms the state-of-the-art MTS classifiers on both the large and small public UEA datasets. Then, we illustrate how XCM reconciles performance and explainability on a synthetic dataset and show that XCM enables a more precise identification of the regions of the input data that are important for predictions compared to the current deep learning MTS classifier also providing faithful explainability. Finally, we present how XCM can outperform the current most accurate state-of-the-art algorithm on a real-world application while enhancing explainability by providing faithful and more informative explanations.
8

Hassan, Fayaz, Jianguo Yu, Zafi Sherhan Syed, Nadeem Ahmed, Mana Saleh Al Reshan, and Asadullah Shaikh. "Achieving model explainability for intrusion detection in VANETs with LIME." PeerJ Computer Science 9 (June 22, 2023): e1440. http://dx.doi.org/10.7717/peerj-cs.1440.

Abstract:
Vehicular ad hoc networks (VANETs) are intelligent transport subsystems in which vehicles communicate over a wireless medium. VANETs support many applications, such as traffic safety and accident prevention, but their communication is affected by many attacks, including denial of service (DoS) and distributed denial of service (DDoS). Because the number of DoS attacks has been increasing in recent years, network security and protection of communication systems are challenging topics, and intrusion detection systems need to be improved to identify these attacks effectively and efficiently. Many researchers are currently working on enhancing the security of VANETs. In this study, machine learning (ML) techniques were employed within an intrusion detection system (IDS) to develop high-security capabilities, using a massive dataset of application-layer network traffic. The local interpretable model-agnostic explanations (LIME) technique is applied for better interpretation of the model's functionality and accuracy. Experimental results demonstrate that a random forest (RF) classifier achieves 100% accuracy, demonstrating its capability to identify intrusion-based threats in a VANET setting. In addition, LIME is applied to the RF model to explain and interpret its classifications, and the performance of the ML models is evaluated in terms of accuracy, recall, and F1 score.
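
A minimal sketch of the LIME-over-random-forest setup described above is given below. Synthetic features stand in for the VANET application-layer traffic, and the feature and class names are illustrative assumptions; the lime and scikit-learn calls themselves are standard.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the application-layer VANET traffic features.
X, y = make_classification(n_samples=5000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)

explainer = LimeTabularExplainer(
    X_tr,
    feature_names=[f"feat_{i}" for i in range(X.shape[1])],
    class_names=["benign", "DoS"],        # illustrative label names
    mode="classification",
)
exp = explainer.explain_instance(X_te[0], rf.predict_proba, num_features=5)
print(exp.as_list())  # the top features behind this one classification
```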
9

Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM." Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.

Abstract:
The technologies supporting Artificial Intelligence (AI) have advanced rapidly over the past few years, and AI is becoming commonplace in every aspect of life, from self-driving cars to earlier health diagnosis. For this to happen soon, the entire community must overcome the barrier of explainability, an inherent problem of the latest models (e.g., Deep Neural Networks) that was not present in the previous wave of AI (linear and rule-based models). Most of these recent models are used as black boxes, with only partial or no understanding of how different features influence the model prediction, which undermines algorithmic transparency. In this paper, we focus on how much we can understand the decisions made by an SVM classifier in a post-hoc, model-agnostic approach. Furthermore, we train a tree-based model (inherently interpretable) on labels produced by the SVM, called secondary training data, to provide explanations, and we compare the permutation importance method with more commonly used measures such as accuracy, showing that our methods are more reliable and meaningful to use. We also outline the main challenges for such methods and conclude that model-agnostic interpretability is a key component in making machine learning more trustworthy.
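
The "secondary training data" idea can be sketched in a few lines: train a shallow decision tree on the SVM's own predictions and measure how faithfully it mimics the SVM, alongside permutation importance of the original model. The breast-cancer dataset, tree depth and kernel choice below are assumptions for illustration, not the paper's exact setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

# "Secondary training data": the SVM's predictions become the labels for an
# interpretable tree, which then mimics (and thereby explains) the SVM.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, svm.predict(X_tr))
print("fidelity to the SVM on held-out data:",
      accuracy_score(svm.predict(X_te), tree.predict(X_te)))

# Permutation importance of the original SVM, for comparison with accuracy alone.
pi = permutation_importance(svm, X_te, y_te, n_repeats=10, random_state=0)
print("most important feature index:", int(pi.importances_mean.argmax()))
```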
10

Nguyen, Hung Viet, and Haewon Byeon. "Prediction of Out-of-Hospital Cardiac Arrest Survival Outcomes Using a Hybrid Agnostic Explanation TabNet Model." Mathematics 11, no. 9 (April 25, 2023): 2030. http://dx.doi.org/10.3390/math11092030.

Abstract:
Survival after out-of-hospital cardiac arrest (OHCA) is contingent on time-sensitive interventions taken by onlookers, emergency call operators, first responders, emergency medical services (EMS) personnel, and hospital healthcare staff. By building integrated cardiac resuscitation systems of care, measurement systems, and techniques for assuring the correct execution of evidence-based treatments by bystanders, EMS professionals, and hospital employees, survival results can be improved. To aid in OHCA prognosis and treatment, we develop a hybrid agnostic explanation TabNet (HAE-TabNet) model to predict OHCA patient survival. According to the results, the HAE-TabNet model has an “Area under the receiver operating characteristic curve value” (ROC AUC) score of 0.9934 (95% confidence interval 0.9933–0.9935), which outperformed other machine learning models in the previous study, such as XGBoost, k-nearest neighbors, random forest, decision trees, and logistic regression. In order to achieve model prediction explainability for a non-expert in the artificial intelligence field, we combined the HAE-TabNet model with a LIME-based explainable model. This HAE-TabNet model may assist medical professionals in the prognosis and treatment of OHCA patients effectively.
11

Szepannek, Gero, and Karsten Lübke. "How much do we see? On the explainability of partial dependence plots for credit risk scoring." Argumenta Oeconomica 2023, no. 2 (2023): 137–50. http://dx.doi.org/10.15611/aoe.2023.1.07.

Abstract:
Risk prediction models in credit scoring have to fulfil regulatory requirements, one of which consists in the interpretability of the model. Unfortunately, many popular modern machine learning algorithms result in models that do not satisfy this business need, whereas the research activities in the field of explainable machine learning have strongly increased in recent years. Partial dependence plots denote one of the most popular methods for model-agnostic interpretation of a feature’s effect on the model outcome, but in practice they are usually applied without answering the question of how much can actually be seen in such plots. For this purpose, in this paper a methodology is presented in order to analyse to what extent arbitrary machine learning models are explainable by partial dependence plots. The proposed framework provides both a visualisation, as well as a measure to quantify the explainability of a model on an understandable scale. A corrected version of the German credit data, one of the most popular data sets of this application domain, is used to demonstrate the proposed methodology.
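
The paper's exact explainability measure is not reproduced here, but the idea can be approximated: compute one-dimensional partial dependence curves by hand and report how much of the model's prediction variance their additive combination captures. The sketch below does this for a random forest on synthetic data; the grid choice, the R²-style score and the data are assumptions, not the authors' methodology or the corrected German credit set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
pred = model.predict_proba(X)[:, 1]


def pdp_1d(j, grid):
    """Partial dependence of the predicted probability on feature j."""
    curve = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, j] = v              # clamp feature j to the grid value
        curve.append(model.predict_proba(X_mod)[:, 1].mean())
    return np.array(curve)


# Additive reconstruction of the model from centred one-dimensional PDP curves.
additive = np.full(len(X), pred.mean())
for j in range(X.shape[1]):
    grid = np.quantile(X[:, j], np.linspace(0.05, 0.95, 20))
    curve = pdp_1d(j, grid)
    additive += np.interp(X[:, j], grid, curve) - curve.mean()

r2 = 1 - np.sum((pred - additive) ** 2) / np.sum((pred - pred.mean()) ** 2)
print("share of prediction variance captured by 1-D PDPs:", round(r2, 3))
```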
12

Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani, and Fabio Vitali. "Metrics, Explainability and the European AI Act Proposal." J 5, no. 1 (February 18, 2022): 126–38. http://dx.doi.org/10.3390/j5010010.

Abstract:
On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act considers not only machine learning, but expert systems and statistical models long in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms to ensure quality at launch and throughout the whole life cycle of AI-based systems, thus ensuring legal certainty that encourages innovation and investments on AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets some new minimum requirements of explicability (transparency and explainability) for a list of AI systems labelled as “high-risk” listed in Annex III. These requirements include a plethora of technical explanations capable of covering the right amount of information, in a meaningful way. This paper aims to investigate how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, with this paper we propose an analysis of the AI Act, aiming to understand (1) what specific explicability obligations are set and who shall comply with them and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way. More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Therefore, we discuss the extent to which these requirements are met by the metrics currently under discussion.
13

Kaplun, Dmitry, Alexander Krasichkov, Petr Chetyrbok, Nikolay Oleinikov, Anupam Garg, and Husanbir Singh Pannu. "Cancer Cell Profiling Using Image Moments and Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database." Mathematics 9, no. 20 (October 17, 2021): 2616. http://dx.doi.org/10.3390/math9202616.

Abstract:
With the evolution of modern digital pathology, examining cancer cell tissues has paved the way to quantify subtle symptoms, for example, by means of image staining procedures using Eosin and Hematoxylin. Cancer tissues in the case of breast and lung cancer are quite challenging to examine by manual expert analysis of patients suffering from cancer. Merely relying on the observable characteristics by histopathologists for cell profiling may under-constrain the scale and diagnostic quality due to tedious repetition with constant concentration. Thus, automatic analysis of cancer cells has been proposed with algorithmic and soft-computing techniques to leverage speed and reliability. The paper’s novelty lies in the utility of Zernike image moments to extract complex features from cancer cell images and using simple neural networks for classification, followed by explainability on the test results using the Local Interpretable Model-Agnostic Explanations (LIME) technique and Explainable Artificial Intelligence (XAI). The general workflow of the proposed high throughput strategy involves acquiring the BreakHis public dataset, which consists of microscopic images, followed by the application of image processing and machine learning techniques. The recommended technique has been mathematically substantiated and compared with the state-of-the-art to justify the empirical basis in the pursuit of our algorithmic discovery. The proposed system is able to classify malignant and benign cancer cell images of 40× resolution with 100% recognition rate. XAI interprets and reasons the test results obtained from the machine learning model, making it reliable and transparent for analysis and parameter tuning.
14

Ibrahim, Muhammad Amien, Samsul Arifin, I. Gusti Agung Anom Yudistira, Rinda Nariswari, Abdul Azis Abdillah, Nerru Pranuta Murnaka, and Puguh Wahyu Prasetyo. "An Explainable AI Model for Hate Speech Detection on Indonesian Twitter." CommIT (Communication and Information Technology) Journal 16, no. 2 (June 8, 2022): 175–82. http://dx.doi.org/10.21512/commit.v16i2.8343.

Abstract:
To avoid citizen disputes, hate speech on social media, such as Twitter, must be automatically detected. The current research in Indonesian Twitter focuses on developing better hate speech detection models. However, there is limited study on the explainability aspects of hate speech detection. The research aims to explain issues that previous researchers have not detailed and attempt to answer the shortcomings of previous researchers. There are 13,169 tweets in the dataset with labels like “hate speech” and “abusive language”. The dataset also provides binary labels on whether hate speech is directed to individual, group, religion, race, physical disability, and gender. In the research, classification is performed by using traditional machine learning models, and the predictions are evaluated using an Explainable AI model, such as Local Interpretable Model-Agnostic Explanations (LIME), to allow users to comprehend why a tweet is regarded as a hateful message. Moreover, models that perform well in classification perceive incorrect words as contributing to hate speech. As a result, such models are unsuitable for deployment in the real world. In the investigation, the combination of XGBoost and logical LIME explanations produces the most logical results. The use of the Explainable AI model highlights the importance of choosing the ideal model while maintaining users’ trust in the deployed model.
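
The study's Indonesian tweet corpus and XGBoost classifier are not reproduced here, so the sketch below swaps in a tiny toy corpus and a TF-IDF + logistic-regression pipeline to show how LimeTextExplainer attributes a "hate" prediction to individual words; the texts, labels and class names are illustrative assumptions.

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy corpus (the paper's 13,169 labelled Indonesian tweets are not bundled here).
texts = [
    "i love this community",
    "you people are all terrible",
    "have a wonderful day everyone",
    "those people are awful and stupid",
]
labels = [0, 1, 0, 1]  # 1 = hateful (toy labels)

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

explainer = LimeTextExplainer(class_names=["neutral", "hate"])
exp = explainer.explain_instance(texts[1], pipe.predict_proba, num_features=4)
print(exp.as_list())  # words with signed contributions to the 'hate' prediction
```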
15

Manikis, Georgios C., Georgios S. Ioannidis, Loizos Siakallis, Katerina Nikiforaki, Michael Iv, Diana Vozlic, Katarina Surlan-Popovic, Max Wintermark, Sotirios Bisdas, and Kostas Marias. "Multicenter DSC–MRI-Based Radiomics Predict IDH Mutation in Gliomas." Cancers 13, no. 16 (August 5, 2021): 3965. http://dx.doi.org/10.3390/cancers13163965.

Abstract:
To address the current lack of dynamic susceptibility contrast magnetic resonance imaging (DSC–MRI)-based radiomics to predict isocitrate dehydrogenase (IDH) mutations in gliomas, we present a multicenter study that featured an independent exploratory set for radiomics model development and external validation using two independent cohorts. The maximum performance of the IDH mutation status prediction on the validation set had an accuracy of 0.544 (Cohen’s kappa: 0.145, F1-score: 0.415, area under the curve-AUC: 0.639, sensitivity: 0.733, specificity: 0.491), which significantly improved to an accuracy of 0.706 (Cohen’s kappa: 0.282, F1-score: 0.474, AUC: 0.667, sensitivity: 0.6, specificity: 0.736) when dynamic-based standardization of the images was performed prior to the radiomics. Model explainability using local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) revealed potential intuitive correlations between the IDH–wildtype increased heterogeneity and the texture complexity. These results strengthened our hypothesis that DSC–MRI radiogenomics in gliomas hold the potential to provide increased predictive performance from models that generalize well and provide understandable patterns between IDH mutation status and the extracted features toward enabling the clinical translation of radiogenomics in neuro-oncology.
16

Oubelaid, Adel, Abdelhameed Ibrahim, and Ahmed M. Elshewey. "Bridging the Gap: An Explainable Methodology for Customer Churn Prediction in Supply Chain Management." Journal of Artificial Intelligence and Metaheuristics 4, no. 1 (2023): 16–23. http://dx.doi.org/10.54216/jaim.040102.

Abstract:
Customer churn prediction is a critical task for businesses aiming to retain their valuable customers. Nevertheless, the lack of transparency and interpretability in machine learning models hinders their implementation in real-world applications. In this paper, we introduce a novel methodology for customer churn prediction in supply chain management that addresses the need for explainability. Our approach takes advantage of XGBoost as the underlying predictive model. We recognize the importance of not only accurately predicting churn but also providing actionable insights into the key factors driving customer attrition. To achieve this, we employ Local Interpretable Model-agnostic Explanations (LIME), a state-of-the-art technique for generating intuitive and understandable explanations. By applying LIME to the predictions made by XGBoost, we enable decision-makers to gain insight into the model's decision process and the reasons behind churn predictions. Through a comprehensive case study on customer churn data, we demonstrate the effectiveness of our explainable ML approach. Our methodology not only achieves high prediction accuracy but also offers interpretable explanations that highlight the underlying drivers of customer churn. These insights provide valuable guidance for decision-making processes within supply chain management.
17

Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.

Abstract:
This paper presents the use of two popular explainability tools called Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP) to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The neural network is used to classify the masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained neural network model. The explanations provide further insights into the relationship between the input features and the predictions. SHAP methodology additionally provides a more holistic view of the effect of the inputs on the output predictions. The results also present the commonalities between the insights gained using LIME and SHAP. Although this paper focuses on the use of deep neural networks trained on UCI Breast Cancer Wisconsin dataset, the methodology can be applied to other neural networks and architectures trained on other applications. The deep neural network trained in this work provides a high level of accuracy. Analyzing the model using LIME and SHAP adds the much desired benefit of providing explanations for the recommendations made by the trained model.
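
A hedged sketch of the SHAP side of this workflow: a scikit-learn MLP stands in for the paper's deep neural network, trained on the same 30-feature Wisconsin dataset, and the model-agnostic KernelExplainer attributes individual predictions using a small background sample. The network size and sample counts are assumptions, not the authors' configuration.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()                      # the 30-feature Wisconsin dataset
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

scaler = StandardScaler().fit(X_tr)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(scaler.transform(X_tr), y_tr)


def predict_proba(X):
    # Wrap scaling and prediction so the explainer only needs raw feature values.
    return mlp.predict_proba(scaler.transform(X))


# Model-agnostic SHAP: KernelExplainer needs only the prediction function,
# plus a small background sample used to integrate out "absent" features.
explainer = shap.KernelExplainer(predict_proba, shap.sample(X_tr, 50))
shap_values = explainer.shap_values(X_te[:3], nsamples=200)
print("SHAP attribution shape:", np.shape(shap_values))
```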
18

Ahmed, Md Sabbir, Md Tasin Tazwar, Haseen Khan, Swadhin Roy, Junaed Iqbal, Md Golam Rabiul Alam, Md Rafiul Hassan, and Mohammad Mehedi Hassan. "Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression." Complexity 2022 (September 19, 2022): 1–20. http://dx.doi.org/10.1155/2022/5305353.

Abstract:
The food security of more than half of the world’s population depends on rice production which is one of the key objectives of precision agriculture. The traditional rice almanac used astronomical and climate factors to estimate yield response. However, this research integrated meteorological, agro-chemical, and soil physiographic factors for yield response prediction. Besides, the impact of those factors on the production of three major rice ecotypes has also been studied in this research. Moreover, this study found a different set of those factors with respect to the yield response of different rice ecotypes. Machine learning algorithms named Extreme Gradient Boosting (XGBoost) and Support Vector Regression (SVR) have been used for predicting the yield response. The SVR shows better results than XGBoost for predicting the yield of the Aus rice ecotype, whereas XGBoost performs better for forecasting the yield of the Aman and Boro rice ecotypes. The result shows that the root mean squared error (RMSE) of three different ecotypes are in between 9.38% and 24.37% and that of R-squared values are between 89.74% and 99.13% on two different machine learning algorithms. Moreover, the explainability of the models is also shown in this study with the help of the explainable artificial intelligence (XAI) model called Local Interpretable Model-Agnostic Explanations (LIME).
19

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
20

Mehta, Harshkumar, and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)." Algorithms 15, no. 8 (August 17, 2022): 291. http://dx.doi.org/10.3390/a15080291.

Abstract:
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. Interpreting and explaining decisions made by complex artificial intelligence (AI) models to understand the decision-making process of these model were the aims of this research. As a part of this research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean data of any inconsistencies, clean the text of the tweets, tokenize and lemmatize the text, etc. Categorical variables were also simplified in order to generate a clean dataset for training purposes. Exploratory data analysis was performed on the datasets to uncover various patterns and insights. Various pre-existing models were applied to the Google Jigsaw dataset such as decision trees, k-nearest neighbors, multinomial naïve Bayes, random forest, logistic regression, and long short-term memory (LSTM), among which LSTM achieved an accuracy of 97.6%. Explainable methods such as LIME (local interpretable model—agnostic explanations) were applied to the HateXplain dataset. Variants of BERT (bidirectional encoder representations from transformers) model such as BERT + ANN (artificial neural network) with an accuracy of 93.55% and BERT + MLP (multilayer perceptron) with an accuracy of 93.67% were created to achieve a good performance in terms of explainability using the ERASER (evaluating rationales and simple English reasoning) benchmark.
21

Lu, Haohui, and Shahadat Uddin. "Explainable Stacking-Based Model for Predicting Hospital Readmission for Diabetic Patients." Information 13, no. 9 (September 15, 2022): 436. http://dx.doi.org/10.3390/info13090436.

Abstract:
Artificial intelligence is changing the practice of healthcare. While it is essential to employ such solutions, making them transparent to medical experts is more critical. Most of the previous work presented disease prediction models, but did not explain them. Many healthcare stakeholders do not have a solid foundation in these models. Treating these models as ‘black box’ diminishes confidence in their predictions. The development of explainable artificial intelligence (XAI) methods has enabled us to change the models into a ‘white box’. XAI allows human users to comprehend the results from machine learning algorithms by making them easy to interpret. For instance, the expenditures of healthcare services associated with unplanned readmissions are enormous. This study proposed a stacking-based model to predict 30-day hospital readmission for diabetic patients. We employed Random Under-Sampling to solve the imbalanced class issue, then utilised SelectFromModel for feature selection and constructed a stacking model with base and meta learners. Compared with the different machine learning models, performance analysis showed that our model can better predict readmission than other existing models. This proposed model is also explainable and interpretable. Based on permutation feature importance, the strong predictors were the number of inpatients, the primary diagnosis, discharge to home with home service, and the number of emergencies. The local interpretable model-agnostic explanations method was also employed to demonstrate explainability at the individual level. The findings for the readmission of diabetic patients could be helpful in medical practice and provide valuable recommendations to stakeholders for minimising readmission and reducing public healthcare costs.
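
The pipeline described above (random under-sampling, SelectFromModel feature selection, a stacking model, then permutation importance) can be sketched with imbalanced-learn and scikit-learn as below. The synthetic imbalanced data and the particular base and meta learners are assumptions, not the study's exact configuration.

```python
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Imbalanced synthetic data standing in for the diabetic-readmission records.
X, y = make_classification(n_samples=5000, n_features=12, weights=[0.9, 0.1], random_state=0)

# 1. Random under-sampling to balance the classes.
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X, y)

# 2. Feature selection with SelectFromModel.
selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0)).fit(X_bal, y_bal)
X_sel = selector.transform(X_bal)

# 3. Stacking: tree-based base learners with a logistic-regression meta learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
).fit(X_sel, y_bal)

# 4. Global explanation via permutation feature importance.
pi = permutation_importance(stack, X_sel, y_bal, n_repeats=5, random_state=0)
print("permutation importances:", pi.importances_mean.round(3))
```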
22

Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, Waleed Ali, and Shahab Ul Hassan. "B-LIME: An Improvement of LIME for Interpretable Deep Learning Classification of Cardiac Arrhythmia from ECG Signals." Processes 11, no. 2 (February 16, 2023): 595. http://dx.doi.org/10.3390/pr11020595.

Abstract:
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque technique that is regarded as a black box. To ensure the validity of a model's predictions, it is necessary to explain how they are made. The well-known local interpretable model-agnostic explanations (LIME) method uses surrogate techniques to approximate a given ML model with reasonable precision and provide explanations for its predictions. However, LIME explanations are limited to tabular, textual, and image data. They cannot be provided for signal data whose features are temporally interdependent. Moreover, LIME suffers from critical problems such as instability and poor local fidelity that prevent its implementation in real-world environments. In this work, we propose Bootstrap-LIME (B-LIME), an improvement of LIME, to generate meaningful explanations for ECG signal data. B-LIME combines heartbeat segmentation and bootstrapping techniques to improve the model's explainability, considering the temporal dependencies between features. Furthermore, we investigate the main causes of instability and the lack of local fidelity in LIME. We then propose modifications to the functionality of LIME, including the data generation technique, the explanation method, and the representation technique, to generate stable and locally faithful explanations. Finally, the performance of B-LIME in a hybrid deep-learning model for arrhythmia classification was investigated and validated in comparison with LIME. The results show that the proposed B-LIME provides more meaningful and credible explanations than LIME for cardiac arrhythmia signal data, considering the temporal dependencies between features.
23

Merone, Mario, Alessandro Graziosi, Valerio Lapadula, Lorenzo Petrosino, Onorato d’Angelis, and Luca Vollero. "A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems." Sensors 22, no. 20 (October 14, 2022): 7807. http://dx.doi.org/10.3390/s22207807.

Abstract:
The exponential increase in internet data poses several challenges to cloud systems and data centers, such as scalability, power overheads, network load, and data security. To overcome these limitations, research is focusing on the development of edge computing systems, i.e., based on a distributed computing model in which data processing occurs as close as possible to where the data are collected. Edge computing, indeed, mitigates the limitations of cloud computing, implementing artificial intelligence algorithms directly on the embedded devices enabling low latency responses without network overhead or high costs, and improving solution scalability. Today, the hardware improvements of the edge devices make them capable of performing, even if with some constraints, complex computations, such as those required by Deep Neural Networks. Nevertheless, to efficiently implement deep learning algorithms on devices with limited computing power, it is necessary to minimize the production time and to quickly identify, deploy, and, if necessary, optimize the best Neural Network solution. This study focuses on developing a universal method to identify and port the best Neural Network on an edge system, valid regardless of the device, Neural Network, and task typology. The method is based on three steps: a trade-off step to obtain the best Neural Network within different solutions under investigation; an optimization step to find the best configurations of parameters under different acceleration techniques; finally, an explainability step using local interpretable model-agnostic explanations (LIME), which provides a global approach to quantify the goodness of the classifier decision criteria. We evaluated several MobileNets on the Fudan-ShanghaiTech dataset to test the proposed approach.
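
The explainability step can be illustrated with the lime image explainer. In the sketch below a generic ImageNet MobileNetV2 and a random placeholder frame stand in for the optimised embedded model and the Fudan-ShanghaiTech frames (the first call downloads pretrained weights); the sample count and the placeholder image are assumptions.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image

model = tf.keras.applications.MobileNetV2(weights="imagenet")


def predict_fn(images):
    # LIME passes a batch of perturbed images; preprocess and score them.
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.asarray(images, dtype="float32"))
    return model.predict(x, verbose=0)


# Placeholder frame; in practice use a real 224x224 RGB image from the target camera.
image = np.random.randint(0, 255, size=(224, 224, 3)).astype("double")

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=3, hide_color=0, num_samples=200)
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
print("pixels covered by the top supporting superpixels:", int(mask.sum()))
```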
24

Kim, Jaehun. "Increasing trust in complex machine learning systems." ACM SIGIR Forum 55, no. 1 (June 2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.

Abstract:
Machine learning (ML) has become a core technology for many real-world applications. Modern ML models are applied to unprecedentedly complex and difficult challenges, including very large and subjective problems. For instance, applications towards multimedia understanding have been advanced substantially. Here, it is already prevalent that cultural/artistic objects such as music and videos are analyzed and served to users according to their preference, enabled through ML techniques. One of the most recent breakthroughs in ML is Deep Learning (DL), which has been immensely adopted to tackle such complex problems. DL allows for higher learning capacity, making end-to-end learning possible, which reduces the need for substantial engineering effort, while achieving high effectiveness. At the same time, this also makes DL models more complex than conventional ML models. Reports in several domains indicate that such more complex ML models may have potentially critical hidden problems: various biases embedded in the training data can emerge in the prediction, extremely sensitive models can make unaccountable mistakes. Furthermore, the black-box nature of the DL models hinders the interpretation of the mechanisms behind them. Such unexpected drawbacks result in a significant impact on the trustworthiness of the systems in which the ML models are equipped as the core apparatus. In this thesis, a series of studies investigates aspects of trustworthiness for complex ML applications, namely the reliability and explainability. Specifically, we focus on music as the primary domain of interest, considering its complexity and subjectivity. Due to this nature of music, ML models for music are necessarily complex for achieving meaningful effectiveness. As such, the reliability and explainability of music ML models are crucial in the field. The first main chapter of the thesis investigates the transferability of the neural network in the Music Information Retrieval (MIR) context. Transfer learning, where the pre-trained ML models are used as off-the-shelf modules for the task at hand, has become one of the major ML practices. It is helpful since a substantial amount of the information is already encoded in the pre-trained models, which allows the model to achieve high effectiveness even when the amount of the dataset for the current task is scarce. However, this may not always be true if the "source" task which pre-trained the model shares little commonality with the "target" task at hand. An experiment including multiple "source" tasks and "target" tasks was conducted to examine the conditions which have a positive effect on the transferability. The result of the experiment suggests that the number of source tasks is a major factor of transferability. Simultaneously, it is less evident that there is a single source task that is universally effective on multiple target tasks. Overall, we conclude that considering multiple pre-trained models or pre-training a model employing heterogeneous source tasks can increase the chance for successful transfer learning. The second major work investigates the robustness of the DL models in the transfer learning context. The hypothesis is that the DL models can be susceptible to imperceptible noise on the input. This may drastically shift the analysis of similarity among inputs, which is undesirable for tasks such as information retrieval. Several DL models pre-trained in MIR tasks are examined for a set of plausible perturbations in a real-world setup. 
Based on a proposed sensitivity measure, the experimental results indicate that all the DL models were substantially vulnerable to perturbations, compared to a traditional feature encoder. They also suggest that the experimental framework can be used to test the pre-trained DL models for measuring robustness. In the final main chapter, the explainability of black-box ML models is discussed. In particular, the chapter focuses on the evaluation of the explanation derived from model-agnostic explanation methods. With black-box ML models having become common practice, model-agnostic explanation methods have been developed to explain a prediction. However, the evaluation of such explanations is still an open problem. The work introduces an evaluation framework that measures the quality of the explanations employing fidelity and complexity. Fidelity refers to the explained mechanism's coherence to the black-box model, while complexity is the length of the explanation. Throughout the thesis, we gave special attention to the experimental design, such that robust conclusions can be reached. Furthermore, we focused on delivering machine learning framework and evaluation frameworks. This is crucial, as we intend that the experimental design and results will be reusable in general ML practice. As it implies, we also aim our findings to be applicable beyond the music applications such as computer vision or natural language processing. Trustworthiness in ML is not a domain-specific problem. Thus, it is vital for both researchers and practitioners from diverse problem spaces to increase awareness of complex ML systems' trustworthiness. We believe the research reported in this thesis provides meaningful stepping stones towards the trustworthiness of ML.
25

Du, Yuhan, Anthony R. Rafferty, Fionnuala M. McAuliffe, John Mehegan, and Catherine Mooney. "Towards an explainable clinical decision support system for large-for-gestational-age births." PLOS ONE 18, no. 2 (February 21, 2023): e0281821. http://dx.doi.org/10.1371/journal.pone.0281821.

Abstract:
A myriad of maternal and neonatal complications can result from delivery of a large-for-gestational-age (LGA) infant. LGA birth rates have increased in many countries since the late 20th century, partially due to a rise in maternal body mass index, which is associated with LGA risk. The objective of the current study was to develop LGA prediction models for women with overweight and obesity for the purpose of clinical decision support in a clinical setting. Maternal characteristics, serum biomarkers and fetal anatomy scan measurements for 465 pregnant women with overweight and obesity before and at approximately 21 weeks gestation were obtained from the PEARS (Pregnancy Exercise and Nutrition with smart phone application support) study data. Random forest, support vector machine, adaptive boosting and extreme gradient boosting algorithms were applied with synthetic minority over-sampling technique to develop probabilistic prediction models. Two models were developed for use in different settings: a clinical setting for white women (AUC-ROC of 0.75); and a clinical setting for women of all ethnicity and regions (AUC-ROC of 0.57). Maternal age, mid upper arm circumference, white cell count at the first antenatal visit, fetal biometry and gestational age at fetal anatomy scan were found to be important predictors of LGA. Pobal HP deprivation index and fetal biometry centiles, which are population-specific, are also important. Moreover, we explained our models with Local Interpretable Model-agnostic Explanations (LIME) to improve explainability, which was proven effective by case studies. Our explainable models can effectively predict the probability of an LGA birth for women with overweight and obesity, and are anticipated to be useful to support clinical decision-making and for the development of early pregnancy intervention strategies to reduce pregnancy complications related to LGA.
26

Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, Lan Wei, Claudia Mazo, Brett A. Becker, and Catherine Mooney. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review." Applied Sciences 11, no. 11 (May 31, 2021): 5088. http://dx.doi.org/10.3390/app11115088.

Abstract:
Machine Learning and Artificial Intelligence (AI) more broadly have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI applications has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output. The output can then be interpreted within a given context. One area that is in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinic decision-making and in the absence of explainability may lead to issues of under or over-reliance. Providing explanations for how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases, life-saving decisions. The need for XAI in CDSS, and the medical field in general, is amplified by the need for ethical and fair decision-making and the fact that AI trained with historical data can be a reinforcement agent of historical actions and biases that should be uncovered. We performed a systematic literature review of work to-date in the application of XAI in CDSS. Tabular data processing XAI-enabled systems are the most common, while XAI-enabled CDSS for text analysis are the least common in literature. There is more interest in developers for the provision of local explanations, while there was almost a balance between post-hoc and ante-hoc explanations, as well as between model-specific and model-agnostic techniques. Studies reported benefits of the use of XAI such as the fact that it could enhance decision confidence for clinicians, or generate the hypothesis about causality, which ultimately leads to increased trustworthiness and acceptability of the system and potential for its incorporation in the clinical workflow. However, we found an overall distinct lack of application of XAI in the context of CDSS and, in particular, a lack of user studies exploring the needs of clinicians. We propose some guidelines for the implementation of XAI in CDSS and explore some opportunities, challenges, and future research needs.
27

Kim, Kipyo, Hyeonsik Yang, Jinyeong Yi, Hyung-Eun Son, Ji-Young Ryu, Yong Chul Kim, Jong Cheol Jeong, et al. "Real-Time Clinical Decision Support Based on Recurrent Neural Networks for In-Hospital Acute Kidney Injury: External Validation and Model Interpretation." Journal of Medical Internet Research 23, no. 4 (April 16, 2021): e24120. http://dx.doi.org/10.2196/24120.

Abstract:
Background: Acute kidney injury (AKI) is commonly encountered in clinical practice and is associated with poor patient outcomes and increased health care costs. Despite it posing significant challenges for clinicians, effective measures for AKI prediction and prevention are lacking. Previously published AKI prediction models mostly have a simple design without external validation. Furthermore, little is known about the process of linking model output and clinical decisions due to the black-box nature of neural network models. Objective: We aimed to present an externally validated recurrent neural network (RNN)–based continuous prediction model for in-hospital AKI and show applicable model interpretations in relation to clinical decision support. Methods: Study populations were all patients aged 18 years or older who were hospitalized for more than 48 hours between 2013 and 2017 in 2 tertiary hospitals in Korea (Seoul National University Bundang Hospital and Seoul National University Hospital). All demographic data, laboratory values, vital signs, and clinical conditions of patients were obtained from electronic health records of each hospital. We developed 2-stage hierarchical prediction models (model 1 and model 2) using RNN algorithms. The outcome variable for model 1 was the occurrence of AKI within 7 days from the present. Model 2 predicted the future trajectory of creatinine values up to 72 hours. The performance of each developed model was evaluated using the internal and external validation data sets. For the explainability of our models, different model-agnostic interpretation methods were used, including Shapley Additive Explanations, partial dependence plots, individual conditional expectation, and accumulated local effects plots. Results: We included 69,081 patients in the training, 7675 in the internal validation, and 72,352 in the external validation cohorts for model development after excluding cases with missing data and those with an estimated glomerular filtration rate less than 15 mL/min/1.73 m² or end-stage kidney disease. Model 1 predicted any AKI development with an area under the receiver operating characteristic curve (AUC) of 0.88 (internal validation) and 0.84 (external validation), and stage 2 or higher AKI development with an AUC of 0.93 (internal validation) and 0.90 (external validation). Model 2 predicted the future creatinine values within 3 days with mean-squared errors of 0.04-0.09 for patients with higher risks of AKI and 0.03-0.08 for those with lower risks. Based on the developed models, we showed AKI probability according to feature values in total patients and each individual with partial dependence, accumulated local effects, and individual conditional expectation plots. We also estimated the effects of feature modifications such as nephrotoxic drug discontinuation on future creatinine levels. Conclusions: We developed and externally validated a continuous AKI prediction model using RNN algorithms. Our model could provide real-time assessment of future AKI occurrences and individualized risk factors for AKI in general inpatient cohorts; thus, we suggest approaches to support clinical decisions based on prediction models for in-hospital AKI.
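
The partial dependence and individual conditional expectation plots mentioned above are model-agnostic and can be sketched with any estimator. Below, a scikit-learn gradient boosting classifier on synthetic data stands in for the RNN; kind="both" overlays the averaged partial dependence on per-record ICE curves, mimicking the "what if this feature changed" reading used for nephrotoxic-drug discontinuation. The data and feature indices are assumptions.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Synthetic stand-in for the AKI cohort; features 0 and 3 play the role of
# laboratory values such as baseline creatinine.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# kind="both" draws the averaged partial dependence plus one ICE curve per record,
# showing how the predicted risk would change if a single feature were modified.
PartialDependenceDisplay.from_estimator(model, X[:200], features=[0, 3], kind="both")
plt.savefig("pdp_ice.png")
```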
APA, Harvard, Vancouver, ISO, and other styles
28

Abir, Wahidul Hasan, Md Fahim Uddin, Faria Rahman Khanam, Tahia Tazin, Mohammad Monirujjaman Khan, Mehedi Masud, and Sultan Aljahdali. "Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method." Computational Intelligence and Neuroscience 2022 (April 27, 2022): 1–14. http://dx.doi.org/10.1155/2022/5140148.

Full text
Abstract:
White blood cells (WBCs) are blood cells that fight infections and diseases as part of the immune system. They are also known as “defender cells.” However, an imbalance in the number of WBCs in the blood can be hazardous. Leukemia is the most common blood cancer, caused by an overabundance of WBCs in the immune system. Acute lymphocytic leukemia (ALL) usually occurs when the bone marrow creates many immature WBCs that destroy healthy cells. People of all ages, including children and adolescents, can be affected by ALL. The rapid proliferation of atypical lymphocytes can cause a reduction in new blood cells and increase the chance of death in patients. Therefore, early and precise cancer detection can lead to better therapy and a higher survival probability in the case of leukemia. However, diagnosing ALL is time-consuming and complicated, and manual analysis is expensive, with subjective and error-prone outcomes. Thus, detecting normal and malignant cells reliably and accurately is crucial. For this reason, automatic detection using computer-aided diagnostic models can help doctors detect leukemia early. The entire approach may be automated using image processing techniques, reducing physicians’ workload and increasing diagnosis accuracy. The impact of deep learning (DL) on medical research has recently proven quite beneficial, offering new avenues and possibilities for diagnostic techniques in the healthcare domain. However, for DL to deliver on this promise, the community must first overcome the limits of explainability: because artificial intelligence (AI) models reach their decisions as black boxes, there is a lack of accountability and trust in the outcomes. Explainable artificial intelligence (XAI) can address this problem by interpreting the predictions of AI systems. This study focuses on leukemia, specifically ALL. The proposed strategy automates the recognition of acute lymphoblastic leukemia by applying different transfer learning models to classify ALL and, using local interpretable model-agnostic explanations (LIME) to assure validity and reliability, also explains the cause of a specific classification. The proposed method achieved 98.38% accuracy with the InceptionV3 model. Experimental results were compared across different transfer learning methods, including ResNet101V2, VGG19, and InceptionResNetV2, and later verified with the LIME algorithm for XAI, with the proposed method performing best. The obtained results and their reliability demonstrate that the approach can be preferred for identifying ALL, which will assist medical examiners.
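The following is a minimal sketch of the LIME-on-images workflow described above, assuming a Keras-style classifier (for example an InceptionV3-based model) whose predict method returns class probabilities for a batch of images. The function name, input range and preprocessing are hypothetical assumptions, not the authors' code.

```python
# Minimal sketch: explaining one image prediction with LIME, assuming a Keras
# classifier whose predict(images) returns class probabilities for a batch.
from lime import lime_image
from skimage.segmentation import mark_boundaries

def explain_cell_image(model, image):
    """image: float array of shape (H, W, 3) scaled to [0, 1] and already
    resized to whatever input size `model` expects (an assumption)."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image.astype("double"),
        model.predict,          # batch of perturbed images -> class probabilities
        top_labels=2,
        hide_color=0,
        num_samples=1000,       # more perturbations = slower but stabler explanations
    )
    # Highlight the superpixels that most support the top predicted class.
    temp, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
    )
    return mark_boundaries(temp, mask)
```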
APA, Harvard, Vancouver, ISO, and other styles
29

Wikle, Christopher K., Abhirup Datta, Bhava Vyasa Hari, Edward L. Boone, Indranil Sahoo, Indulekha Kavila, Stefano Castruccio, Susan J. Simmons, Wesley S. Burr, and Won Chang. "An illustration of model agnostic explainability methods applied to environmental data." Environmetrics, October 25, 2022. http://dx.doi.org/10.1002/env.2772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Xu, Zhichao, Hansi Zeng, Juntao Tan, Zuohui Fu, Yongfeng Zhang, and Qingyao Ai. "A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability." ACM Transactions on Information Systems, June 18, 2023. http://dx.doi.org/10.1145/3605357.

Full text
Abstract:
State-of-the-art industrial-level recommender system applications mostly adopt complicated model structures such as deep neural networks. While this helps with model performance, the lack of system explainability caused by these nearly black-box models also raises concerns and potentially weakens the users’ trust in the system. Existing work on explainable recommendation mostly focuses on designing interpretable model structures to generate model-intrinsic explanations. However, most of these have complex structures, and it is difficult to directly apply them to existing recommendation applications due to effectiveness and efficiency concerns. On the other hand, while there have been some studies on explaining recommendation models without knowing their internal structures (i.e., model-agnostic explanations), these methods have been criticized for not reflecting the actual reasoning process of the recommendation model, or, in other words, for not being faithful. How to develop model-agnostic explanation methods and evaluate them in terms of faithfulness is mostly unknown. In this work, we propose a reusable evaluation pipeline for model-agnostic explainable recommendation. Our pipeline evaluates the quality of model-agnostic explanations from the perspectives of faithfulness and scrutability. We further propose a model-agnostic explanation framework for recommendation and verify it with the proposed evaluation pipeline. Extensive experiments on public datasets demonstrate that our model-agnostic framework is able to generate explanations that are faithful to the recommendation model. We additionally provide a quantitative and qualitative study to show that our explanation framework can enhance the scrutability of a black-box recommendation model. With proper modification, our evaluation pipeline and model-agnostic explanation framework can be easily migrated to existing applications. Through this work, we hope to encourage the community to focus more on faithfulness evaluation of explainable recommender systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Joyce, Dan W., Andrey Kormilitzin, Katharine A. Smith, and Andrea Cipriani. "Explainable artificial intelligence for mental health through transparency and interpretability for understandability." npj Digital Medicine 6, no. 1 (January 18, 2023). http://dx.doi.org/10.1038/s41746-023-00751-9.

Full text
Abstract:
The literature on artificial intelligence (AI) or machine learning (ML) in mental health and psychiatry lacks consensus on what “explainability” means. In the more general XAI (eXplainable AI) literature, there has been some convergence on explainability meaning model-agnostic techniques that augment a complex model (with internal mechanics intractable for human understanding) with a simpler model argued to deliver results that humans can comprehend. Given the differing usage and intended meaning of the term “explainability” in AI and ML, we propose instead to approximate model/algorithm explainability by understandability, defined as a function of transparency and interpretability. These concepts are easier to articulate, to “ground” in our understanding of how algorithms and models operate, and are used more consistently in the literature. We describe the TIFU (Transparency and Interpretability For Understandability) framework and examine how this applies to the landscape of AI/ML in mental health research. We argue that the need for understandability is heightened in psychiatry because data describing the syndromes, outcomes, disorders and signs/symptoms possess probabilistic relationships to each other, as do the tentative aetiologies and multifactorial social and psychological determinants of disorders. If we develop and deploy AI/ML models, ensuring human understandability of the inputs, processes and outputs of these models is essential to develop trustworthy systems fit for deployment.
APA, Harvard, Vancouver, ISO, and other styles
32

Nakashima, Heitor Hoffman, Daielly Mantovani, and Celso Machado Junior. "Users’ trust in black-box machine learning algorithms." Revista de Gestão, October 25, 2022. http://dx.doi.org/10.1108/rege-06-2022-0100.

Full text
Abstract:
Purpose: This paper aims to investigate whether professional data analysts’ trust of black-box systems is increased by explainability artifacts. Design/methodology/approach: The study was developed in two phases. First, a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were estimated using local interpretable model-agnostic explanations (LIME) algorithms. In the second phase, the model and explainability outcomes were presented to a sample of data analysts from the financial market and their trust of the models was measured. Finally, interviews were conducted in order to understand their perceptions regarding black-box models. Findings: The data suggest that users’ trust of black-box systems is high and explainability artifacts do not influence this behavior. The interviews reveal that the nature and complexity of the problem a black-box model addresses influences the users’ perceptions, trust being reduced in situations that represent a threat (e.g. autonomous cars). Concerns about the models’ ethics were also mentioned by the interviewees. Research limitations/implications: The study considered a small sample of professional analysts from the financial market, which traditionally employs data analysis techniques for credit and risk analysis. Research with personnel in other sectors might reveal different perceptions. Originality/value: Other studies regarding trust in black-box models and explainability artifacts have focused on ordinary users, with little or no knowledge of data analysis. The present research focuses on expert users, which provides a different perspective and shows that, for them, trust is related to the quality of data and the nature of the problem being solved, as well as the practical consequences. Explanation of the algorithm mechanics itself is not significantly relevant.
APA, Harvard, Vancouver, ISO, and other styles
33

Szepannek, Gero, and Karsten Lübke. "Explaining Artificial Intelligence with Care." KI - Künstliche Intelligenz, May 16, 2022. http://dx.doi.org/10.1007/s13218-022-00764-8.

Full text
Abstract:
In the recent past, several popular failures of black-box AI systems and regulatory requirements have increased research interest in explainable and interpretable machine learning. Among the different available approaches to model explanation, partial dependence plots (PDP) represent one of the most famous methods for model-agnostic assessment of a feature’s effect on the model response. Although PDPs are commonly used and easy to apply, they only provide a simplified view of the model and thus risk being misleading. Relying on a model interpretation given by a PDP can have dramatic consequences in an application area such as forensics, where decisions may directly affect people’s lives. For this reason, in this paper the degree of model explainability is investigated on a popular real-world data set from the field of forensics: the glass identification database. By means of this example, the paper aims to illustrate two important aspects of machine learning model development from the practical point of view in the context of forensics: (1) the importance of a proper process for model selection, hyperparameter tuning and validation, as well as (2) the careful use of explainable artificial intelligence. For this purpose, the concept of explainability is extended to multiclass classification problems such as the one given by the glass data.
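For readers who want to see the multiclass aspect in practice, the sketch below uses scikit-learn's partial dependence display, whose target argument selects the class whose predicted probability is plotted. The OpenML dataset name "glass" and the feature names RI and Mg are assumptions about the publicly available copy of the glass identification data, not the authors' own setup.

```python
# Minimal sketch: partial dependence for a multiclass classifier, where `target`
# selects which class probability the PDP describes.
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

# Assumed: the UCI glass identification data under the OpenML name "glass",
# with refractive index (RI) and magnesium (Mg) among its feature columns.
X, y = fetch_openml("glass", version=1, as_frame=True, return_X_y=True)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# One plot per class of interest: in the multiclass case the apparent effect of a
# feature can differ greatly depending on which class probability is examined.
for glass_type in model.classes_[:2]:
    PartialDependenceDisplay.from_estimator(model, X, features=["RI", "Mg"], target=glass_type)
plt.show()
```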
APA, Harvard, Vancouver, ISO, and other styles
34

Sharma, Jeetesh, Murari Lal Mittal, Gunjan Soni, and Arvind Keprate. "Explainable Artificial Intelligence (XAI) Approaches in Predictive Maintenance: A Review." Recent Patents on Engineering 18 (April 17, 2023). http://dx.doi.org/10.2174/1872212118666230417084231.

Full text
Abstract:
Background: Predictive maintenance (PdM) is a technique that keeps track of the condition and performance of equipment during normal operation to reduce the possibility of failures. Accurate anomaly detection, fault diagnosis, and fault prognosis form the basis of a PdM procedure. Objective: This paper aims to explore and discuss research addressing PdM using machine learning and its complications, using explainable artificial intelligence (XAI) techniques. Methods: While machine learning and artificial intelligence techniques have gained great interest in recent years, the absence of model interpretability or explainability in several machine learning models, due to their black-box nature, requires further research. Explainable artificial intelligence (XAI) investigates the explainability of machine learning models. This article overviews the maintenance strategies, post-hoc explanations, model-specific explanations, and model-agnostic explanations currently being used. Conclusion: Even though machine learning-based PdM has gained considerable attention, less emphasis has been placed on XAI approaches in PdM. Based on our findings, XAI techniques can bring new insights and opportunities for addressing critical maintenance issues, resulting in more informed decisions. The analysis of the results suggests a viable path for future studies.
APA, Harvard, Vancouver, ISO, and other styles
35

Szczepański, Mateusz, Marek Pawlicki, Rafał Kozik, and Michał Choraś. "New explainability method for BERT-based model in fake news detection." Scientific Reports 11, no. 1 (December 2021). http://dx.doi.org/10.1038/s41598-021-03100-6.

Full text
Abstract:
The ubiquity of social media and their deep integration in contemporary society have opened new ways to interact, exchange information, form groups, and earn money, all on a scale never seen before. These possibilities, paired with their widespread popularity, contribute to the level of impact that social media have. Unfortunately, these benefits come at a cost. Social media can be employed by various entities to spread disinformation, so-called ‘Fake News’, either to make a profit or to influence the behaviour of society. To reduce the impact and spread of Fake News, a diverse array of countermeasures has been devised. These include linguistic-based approaches, which often utilise Natural Language Processing (NLP) and Deep Learning (DL). However, as the latest advancements in the Artificial Intelligence (AI) domain show, a model’s high performance is no longer enough. The explainability of the system’s decision is equally crucial in real-life scenarios. Therefore, the objective of this paper is to present a novel explainability approach for BERT-based fake news detectors. This approach does not require extensive changes to the system and can be attached as an extension to operating detectors. For this purpose, two Explainable Artificial Intelligence (xAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Anchors, will be used and evaluated on fake news data, i.e., short pieces of text forming tweets or headlines. The focus of this paper is on the explainability approach for fake news detectors, as the detectors themselves were covered in previous works of the authors.
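A minimal sketch of how LIME can be attached to an operating text classifier through its probability function is given below. The bert_predict_proba wrapper is a hypothetical stand-in for a fine-tuned BERT model, not the authors' implementation, and the Anchors step is omitted here.

```python
# Minimal sketch: attaching LIME to an existing text classifier through its
# probability function. `bert_predict_proba` is a hypothetical wrapper that
# takes a list of strings and returns an (n, 2) array of class probabilities.
from lime.lime_text import LimeTextExplainer

class_names = ["credible", "fake"]
explainer = LimeTextExplainer(class_names=class_names)

def explain_headline(headline, bert_predict_proba, num_features=6):
    exp = explainer.explain_instance(headline, bert_predict_proba, num_features=num_features)
    # (token, weight) pairs for the local surrogate; by default the explanation is
    # for label 1, i.e. the "fake" class in this binary setup.
    return exp.as_list()
```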
APA, Harvard, Vancouver, ISO, and other styles
36

ÖZTOPRAK, Samet, and Zeynep ORMAN. "A New Model-Agnostic Method and Implementation for Explaining the Prediction on Finance Data." European Journal of Science and Technology, June 29, 2022. http://dx.doi.org/10.31590/ejosat.1079145.

Full text
Abstract:
Artificial neural networks (ANNs) are widely used in mission-critical systems that directly affect human life, such as healthcare, self-driving vehicles and the military, and in predicting data related to these systems. However, the black-box nature of ANN algorithms makes their use in mission-critical applications difficult, while raising ethical and forensic concerns that lead to a lack of trust. As artificial intelligence (AI) develops day by day and takes up more space in our lives, it has become clear that the results obtained from these algorithms should be more explainable and understandable. Explainable Artificial Intelligence (XAI) is a field of AI comprising a set of tools, techniques, and algorithms that can create high-quality, interpretable, intuitive, human-understandable explanations of artificial intelligence decisions. In this study, a new model-agnostic method that can be used in the financial sector is developed, using stock market data, for explainability. This method enables us to understand the relationship between the inputs given to the created model and the outputs obtained from it. All inputs were evaluated individually and in combination, and the evaluation results are shown with tables and graphs. This method will also help create an explainability layer for different machine learning algorithms and application areas.
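The paper's own formalism is not reproduced here. As a rough analogue of evaluating inputs individually, the sketch below varies one input at a time around a reference point and records how much the model output moves; all names and the scoring choice are illustrative assumptions.

```python
# Generic one-at-a-time sensitivity probe (not the authors' method): vary each
# input feature of a fitted model across its observed range while holding the
# other features at a reference row, and record how far the prediction moves.
import numpy as np

def one_at_a_time_sensitivity(predict, X, reference=None, n_grid=20):
    """predict: callable mapping an (n, d) array to n predictions.
    X: (n, d) array of historical inputs (e.g. stock-market features)."""
    X = np.asarray(X, dtype=float)
    reference = X.mean(axis=0) if reference is None else np.asarray(reference, dtype=float)
    spans = {}
    for j in range(X.shape[1]):
        grid = np.linspace(X[:, j].min(), X[:, j].max(), n_grid)
        rows = np.tile(reference, (n_grid, 1))
        rows[:, j] = grid
        preds = np.asarray(predict(rows))
        spans[j] = float(preds.max() - preds.min())  # output range attributable to feature j
    return spans  # larger span = prediction more sensitive to that input
```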
APA, Harvard, Vancouver, ISO, and other styles
37

Bachoc, François, Fabrice Gamboa, Max Halford, Jean-Michel Loubes, and Laurent Risser. "Explaining machine learning models using entropic variable projection." Information and Inference: A Journal of the IMA 12, no. 3 (April 27, 2023). http://dx.doi.org/10.1093/imaiai/iaad010.

Full text
Abstract:
In this paper, we present a new explainability formalism designed to shed light on how each input variable of a test set impacts the predictions of machine learning models. Hence, we propose a group explainability formalism for trained machine learning decision rules, based on their response to the variability of the input variables distribution. In order to emphasize the impact of each input variable, this formalism uses an information theory framework that quantifies the influence of all input–output observations based on entropic projections. This is thus the first unified and model-agnostic formalism enabling data scientists to interpret the dependence between the input variables, their impact on the prediction errors and their influence on the output predictions. Convergence rates of the entropic projections are provided in the large sample case. Most importantly, we prove that computing an explanation in our framework has a low algorithmic complexity, making it scalable to real-life large datasets. We illustrate our strategy by explaining complex decision rules learned using XGBoost, Random Forest or Deep Neural Network classifiers on various datasets such as Adult Income, MNIST, CelebA, Boston Housing, Iris, as well as synthetic ones. We finally make clear its differences with the explainability strategies LIME and SHAP, which are based on single observations. Results can be reproduced using the freely distributed Python toolbox https://gems-ai.aniti.fr/.
APA, Harvard, Vancouver, ISO, and other styles
38

Loveleen, Gaur, Bhandari Mohan, Bhadwal Singh Shikhar, Jhanjhi Nz, Mohammad Shorfuzzaman, and Mehedi Masud. "Explanation-driven HCI Model to Examine the Mini-Mental State for Alzheimer’s Disease." ACM Transactions on Multimedia Computing, Communications, and Applications, April 2022. http://dx.doi.org/10.1145/3527174.

Full text
Abstract:
Directing research on Alzheimer’s towards only early prediction and accuracy cannot be considered a feasible approach to tackling a ubiquitous degenerative disease today. Applying deep learning (DL) and explainable artificial intelligence (XAI), and advancing towards the human-computer interface (HCI) model, can be a leap forward in medical research. This research aims to propose a robust explainable HCI model using Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME) and DL algorithms. The use of logistic regression (80.87%), support vector machine (85.8%), k-nearest neighbour (87.24%), multilayer perceptron (91.94%) and decision tree (100%) classifiers, combined with explainability, can help explore untapped avenues for research in medical sciences that can mould the future of HCI models. The outcomes of the proposed model show higher prediction accuracy, bringing an efficient computer interface to decision making, and suggest a high level of relevance in the field of medical and clinical research.
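As an illustration of the SHAP side of such a pipeline, the sketch below applies the model-agnostic KernelExplainer to a fitted classifier's probability function. The model and data are placeholders rather than the authors' setup, and the exact return format of the attributions depends on the installed shap version.

```python
# Minimal sketch: model-agnostic SHAP values for a fitted classifier via
# KernelExplainer, which only needs a probability function and background data.
import shap

def shap_for_classifier(model, X_train, X_explain, background_size=100):
    background = shap.sample(X_train, background_size)   # keeps KernelSHAP tractable
    explainer = shap.KernelExplainer(model.predict_proba, background)
    # Per-class attributions for each row of X_explain (format varies by shap version).
    return explainer.shap_values(X_explain)
```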
APA, Harvard, Vancouver, ISO, and other styles
39

Vilone, Giulia, and Luca Longo. "A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods." Frontiers in Artificial Intelligence 4 (November 3, 2021). http://dx.doi.org/10.3389/frai.2021.717899.

Full text
Abstract:
Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their input and output. These relationships consist of, and can be represented as, a set of inference rules. However, the models usually do not make these rules explicit to their end-users who, subsequently, perceive them as black-boxes and might not trust their predictions. Therefore, scholars have proposed several methods for extracting rules from data-driven machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of these methods. This study proposes a novel comparative approach to evaluate and compare the rulesets produced by five model-agnostic, post-hoc rule extractors by employing eight quantitative metrics. Eventually, the Friedman test was employed to check whether a method consistently performed better than the others, in terms of the selected metrics, and could be considered superior. Findings demonstrate that these metrics do not provide sufficient evidence to identify methods superior to the others. However, when used together, these metrics form a tool, applicable to every rule-extraction method and machine-learned model, that is suitable for highlighting the strengths and weaknesses of the rule-extractors in various applications in an objective and straightforward manner, without any human intervention. Thus, they are capable of successfully modelling distinct aspects of explainability, providing researchers and practitioners with vital insights on what a model has learned during its training process and how it makes its predictions.
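The Friedman test used in this kind of comparison can be run with SciPy as sketched below; the score matrix is made up purely for illustration (rows are datasets, columns are rule-extraction methods), not data from the study.

```python
# Minimal sketch: Friedman test over per-dataset scores of several explanation
# methods (rows = datasets, columns = methods). The numbers below are invented.
import numpy as np
from scipy.stats import friedmanchisquare

scores = np.array([
    # method A, method B, method C  (e.g. one fidelity metric per dataset)
    [0.81, 0.79, 0.75],
    [0.77, 0.80, 0.72],
    [0.85, 0.83, 0.78],
    [0.68, 0.71, 0.66],
    [0.90, 0.88, 0.84],
])
stat, p_value = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.3f}")
# A large p-value gives no evidence that any method consistently ranks above the others.
```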
APA, Harvard, Vancouver, ISO, and other styles
40

Alabi, Rasheed Omobolaji, Mohammed Elmusrati, Ilmo Leivo, Alhadi Almangush, and Antti A. Mäkitie. "Machine learning explainability in nasopharyngeal cancer survival using LIME and SHAP." Scientific Reports 13, no. 1 (June 2, 2023). http://dx.doi.org/10.1038/s41598-023-35795-0.

Full text
Abstract:
Nasopharyngeal cancer (NPC) has a unique histopathology compared with other head and neck cancers. Individual NPC patients may attain different outcomes. This study aims to build a prognostic system by combining a highly accurate machine learning (ML) model with explainable artificial intelligence to stratify NPC patients into low and high chance of survival groups. Explainability is provided using Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) techniques. A total of 1094 NPC patients were retrieved from the Surveillance, Epidemiology, and End Results (SEER) database for model training and internal validation. We combined five different ML algorithms to form a uniquely stacked algorithm. The predictive performance of the stacked algorithm was compared with that of a state-of-the-art algorithm, extreme gradient boosting (XGBoost), to stratify the NPC patients into chance of survival groups. We validated our model with temporal validation (n = 547) and geographic external validation (Helsinki University Hospital NPC cohort, n = 60). The developed stacked predictive ML model showed an accuracy of 85.9%, while XGBoost reached 84.5% after the training and testing phases, demonstrating that the two models performed comparably. External geographic validation of the XGBoost model showed a c-index of 0.74, accuracy of 76.7%, and area under the curve of 0.76. The SHAP technique revealed that age of the patient at diagnosis, T-stage, ethnicity, M-stage, marital status, and grade were among the prominent input variables, in decreasing order of significance, for the overall survival of NPC patients. LIME showed the degree of reliability of the predictions made by the model. In addition, both techniques showed how each feature contributed to the prediction made by the model. LIME and SHAP provided personalized protective and risk factors for each NPC patient and unraveled some novel non-linear relationships between input features and survival chance. The examined ML approach showed the ability to predict the chance of overall survival of NPC patients. This is important for effective treatment planning, care, and informed clinical decisions. To enhance outcome results, including survival in NPC, ML may aid in planning individualized therapy for this patient population.
APA, Harvard, Vancouver, ISO, and other styles
41

Bogdanova, Anna, Akira Imakura, and Tetsuya Sakurai. "DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning." Human-Centric Intelligent Systems, July 6, 2023. http://dx.doi.org/10.1007/s44230-023-00032-4.

Full text
Abstract:
Ensuring the transparency of machine learning models is vital for their ethical application in various industries. There has been a concurrent trend of distributed machine learning designed to limit access to training data out of privacy concerns. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have a biased view of background data or a partial view of the feature space. As a result, explanations obtained from different participants of distributed machine learning might not be consistent with one another, undermining trust in the product. This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm (KernelSHAP) and the Data Collaboration method of privacy-preserving distributed machine learning. In particular, we present three algorithms for different scenarios of explainability in Data Collaboration and verify their consistency with experiments on open-access datasets. Our results demonstrated a significant (by at least a factor of 1.75) decrease in feature attribution discrepancies among the users of distributed machine learning. The proposed method improves consistency among explanations obtained from different participants, which can enhance trust in the product and enable ethical application in various industries.
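The DC-SHAP algorithm itself is not reproduced here; the sketch below only illustrates the consistency problem it targets: two participants explaining the same model against different background samples can obtain different KernelSHAP attributions, and the discrepancy can be measured directly. A single-output (regression-style) model and all variable names are assumptions for illustration.

```python
# Illustration of the consistency issue (not the DC-SHAP algorithm itself): the same
# model explained against two different background samples can yield different
# attributions; sharing a common background removes that source of drift.
import numpy as np
import shap

def attribution_discrepancy(model, background_a, background_b, X_explain):
    """Mean absolute difference between SHAP values computed by two 'parties'
    that use different background data for the same model and instances."""
    exp_a = shap.KernelExplainer(model.predict, background_a)
    exp_b = shap.KernelExplainer(model.predict, background_b)
    phi_a = np.asarray(exp_a.shap_values(X_explain))
    phi_b = np.asarray(exp_b.shap_values(X_explain))
    return float(np.abs(phi_a - phi_b).mean())
```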
APA, Harvard, Vancouver, ISO, and other styles
42

Zini, Julia El, and Mariette Awad. "On the Explainability of Natural Language Processing Deep Models." ACM Computing Surveys, July 19, 2022. http://dx.doi.org/10.1145/3529755.

Full text
Abstract:
Despite their success, deep networks are used as black-box models with outputs that are not easily explainable during the learning and the prediction phases. This lack of interpretability is significantly limiting the adoption of such models in domains where decisions are critical, such as the medical and legal fields. Recently, researchers have been interested in developing methods that help explain individual decisions and decipher the hidden representations of machine learning models in general and deep networks specifically. While there has been a recent explosion of work on Explainable Artificial Intelligence (ExAI) for deep models that operate on imagery and tabular data, textual datasets present new challenges to the ExAI community. Such challenges can be attributed to the lack of input structure in textual data, the use of word embeddings that add to the opacity of the models and the difficulty of visualizing the inner workings of deep models when they are trained on textual data. Lately, methods have been developed to address the aforementioned challenges and present satisfactory explanations for Natural Language Processing (NLP) models. However, such methods are yet to be studied in a comprehensive framework where common challenges are properly stated and rigorous evaluation practices and metrics are proposed. Motivated to democratize ExAI methods in the NLP field, we present in this work a survey that studies model-agnostic as well as model-specific explainability methods on NLP models. Such methods can either develop inherently interpretable NLP models or operate on pre-trained models in a post-hoc manner. We make this distinction and we further decompose the methods into three categories according to what they explain: (1) word embeddings (input-level), (2) inner workings of NLP models (processing-level) and (3) models’ decisions (output-level). We also detail the different approaches for evaluating interpretability methods in the NLP field. Finally, we present a case study on the well-known neural machine translation task in an appendix and we propose promising future research directions for ExAI in the NLP field.
APA, Harvard, Vancouver, ISO, and other styles
43

Esam Noori, Worood, and A. S. Albahri. "Towards Trustworthy Myopia Detection: Integration Methodology of Deep Learning Approach, XAI Visualization, and User Interface System." Applied Data Science and Analysis, February 23, 2023, 1–15. http://dx.doi.org/10.58496/adsa/2023/001.

Full text
Abstract:
Myopia, a prevalent vision disorder with potential complications if untreated, requires early and accurate detection for effective treatment. However, traditional diagnostic methods often lack trustworthiness and explainability, leading to biases and mistrust. This study presents a four-phase methodology to develop a robust myopia detection system. In the initial phase, the dataset containing training and testing images is located, preprocessed, and balanced. Subsequently, two models are deployed: a pre-trained VGG16 model renowned for image classification tasks, and a sequential CNN with convolution layers. Performance evaluation metrics such as accuracy, recall, F1-Score, sensitivity, and logloss are utilized to assess the models' effectiveness. The third phase integrates explainability, trustworthiness, and transparency through the application of Explainable Artificial Intelligence (XAI) techniques. Specifically, Local Interpretable Model-Agnostic Explanations (LIME) are employed to provide insights into the decision-making process of the deep learning model, offering explanations for the classification of images as myopic or normal. In the final phase, a user interface is implemented for the myopia detection and XAI model, bringing together the aforementioned phases. The outcomes of this study contribute to the advancement of objective and explainable diagnostic methods in the field of myopia detection. Notably, the VGG16 model achieves an impressive accuracy of 96%, highlighting its efficacy in diagnosing myopia. The LIME results provide valuable interpretations for myopia cases. The proposed methodology enhances transparency, interpretability, and trust in the myopia detection process.
APA, Harvard, Vancouver, ISO, and other styles
44

Filho, Renato Miranda, Anísio M. Lacerda, and Gisele L. Pappa. "Explainable regression via prototypes." ACM Transactions on Evolutionary Learning and Optimization, December 15, 2022. http://dx.doi.org/10.1145/3576903.

Full text
Abstract:
Model interpretability/explainability is increasingly a concern when applying machine learning to real-world problems. In this paper, we are interested in explaining regression models by exploiting prototypes, which are exemplar cases in the problem domain. Previous works focused on finding prototypes that are representative of all training data but ignored the model predictions, i.e., they explain the data distribution but not necessarily the predictions. We propose a two-level model-agnostic method that considers prototypes to provide global and local explanations for regression problems and that accounts for both the input features and the model output. M-PEER (Multiobjective Prototype-basEd Explanation for Regression) is based on a multi-objective evolutionary method that optimizes both the error of the explainable model and two other “semantics”-based measures of interpretability adapted from the context of classification, namely model fidelity and stability. We compare the proposed method with the state-of-the-art method based on prototypes for explanation, ProtoDash, and with other methods widely used in correlated areas of machine learning, such as instance selection and clustering. We conduct experiments on 25 datasets, and results demonstrate significant gains of M-PEER over other strategies, with an average of 12% improvement in the proposed metrics (i.e., model fidelity and stability) and 17% in root mean squared error (RMSE) when compared to ProtoDash.
APA, Harvard, Vancouver, ISO, and other styles
45

Ahmed, Zia U., Kang Sun, Michael Shelly, and Lina Mu. "Explainable artificial intelligence (XAI) for exploring spatial variability of lung and bronchus cancer (LBC) mortality rates in the contiguous USA." Scientific Reports 11, no. 1 (December 2021). http://dx.doi.org/10.1038/s41598-021-03198-8.

Full text
Abstract:
Machine learning (ML) has demonstrated promise in predicting mortality; however, understanding spatial variation in risk factor contributions to mortality rates requires explainability. We applied explainable artificial intelligence (XAI) to a stack-ensemble machine learning framework to explore and visualize the spatial distribution of the contributions of known risk factors to lung and bronchus cancer (LBC) mortality rates in the conterminous United States. We used five base learners, namely a generalized linear model (GLM), random forest (RF), gradient boosting machine (GBM), extreme gradient boosting machine (XGBoost), and deep neural network (DNN), to develop stack-ensemble models. Then we applied several model-agnostic approaches to interpret and visualize the stack-ensemble model's output at global and local scales (at the county level). The stack ensemble generally performs better than all the base learners and three spatial regression models. A permutation-based feature importance technique ranked smoking prevalence as the most important predictor, followed by poverty and elevation. However, the impact of these risk factors on LBC mortality rates varies spatially. This is the first study to use ensemble machine learning with explainable algorithms to explore and visualize the spatial heterogeneity of the relationships between LBC mortality and risk factors in the contiguous USA.
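A minimal sketch of permutation-based feature importance with scikit-learn is shown below, using a stand-in random forest regressor and synthetic data rather than the authors' stack-ensemble framework; everything here is an illustrative assumption.

```python
# Minimal sketch: permutation-based feature importance on a held-out set,
# applied to a stand-in regressor rather than the paper's stacked ensemble.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1500, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Each feature is shuffled n_repeats times; the drop in R^2 measures its importance.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for j in result.importances_mean.argsort()[::-1]:
    print(f"feature {j}: {result.importances_mean[j]:.3f} +/- {result.importances_std[j]:.3f}")
```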
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Tao, Meng Song, Hongxun Hui, and Huan Long. "Battery Electrode Mass Loading Prognostics and Analysis for Lithium-Ion Battery–Based Energy Storage Systems." Frontiers in Energy Research 9 (October 5, 2021). http://dx.doi.org/10.3389/fenrg.2021.754317.

Full text
Abstract:
With the rapid development of renewable energy, the lithium-ion battery has become one of the most important means of storing energy for many applications such as electric vehicles and smart grids. As battery performance is highly and directly affected by the electrode manufacturing process, it is vital to design an effective solution for achieving accurate battery electrode mass loading prognostics at early manufacturing stages and for analyzing the effects of manufacturing parameters of interest. To achieve this, this study proposes a hybrid data analysis solution, which integrates a kernel-based support vector machine (SVM) regression model and the linear model–based local interpretable model-agnostic explanation (LIME) method, to predict battery electrode mass loading and quantify the effects of four manufacturing parameters from the mixing and coating stages of the battery manufacturing chain. Illustrative results demonstrate that the derived hybrid data analysis solution is capable of not only providing satisfactory battery electrode mass loading prognostics, with an R-squared value above 0.98, but also effectively quantifying the effects of four key parameters (active material mass content, solid-to-liquid ratio, viscosity, and comma-gap) on determining battery electrode properties. Owing to its explainability and data-driven nature, the designed data-driven solution could assist engineers in obtaining battery electrode information at early production stages and in understanding strongly coupled parameters for producing batteries, further benefiting the improvement of battery performance for wider energy storage applications.
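A minimal sketch of the SVM-plus-LIME pairing described above is given below, with LIME run in regression mode. The feature names follow the four parameters listed in the abstract, while the data, function name and hyperparameters are hypothetical assumptions rather than the authors' implementation.

```python
# Minimal sketch: kernel SVM regression plus LIME in regression mode, mirroring
# the SVM + LIME pairing described above. X is a hypothetical (n, 4) array of
# manufacturing parameters and y the measured electrode mass loading.
from sklearn.svm import SVR
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["active_material_mass", "solid_to_liquid_ratio", "viscosity", "comma_gap"]

def explain_mass_loading(X, y, row_to_explain):
    model = SVR(kernel="rbf").fit(X, y)
    explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
    exp = explainer.explain_instance(row_to_explain, model.predict, num_features=4)
    return exp.as_list()   # (condition, weight) pairs of the local linear surrogate
```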
APA, Harvard, Vancouver, ISO, and other styles
48

Javed, Abdul Rehman, Habib Ullah Khan, Mohammad Kamel Bader Alomari, Muhammad Usman Sarwar, Muhammad Asim, Ahmad S. Almadhor, and Muhammad Zahid Khan. "Toward explainable AI-empowered cognitive health assessment." Frontiers in Public Health 11 (March 9, 2023). http://dx.doi.org/10.3389/fpubh.2023.1024195.

Full text
Abstract:
Explainable artificial intelligence (XAI) is of paramount importance to various domains, including healthcare, fitness, skill assessment, and personal assistants, to understand and explain the decision-making process of the artificial intelligence (AI) model. Smart homes embedded with smart devices and sensors have enabled many context-aware applications to recognize physical activities. This study presents XAI-HAR, a novel XAI-empowered human activity recognition (HAR) approach based on key features identified from data collected from sensors located at different places in a smart home. XAI-HAR identifies a set of new features (i.e., the total number of sensors used in a specific activity) via physical key features selection (PKFS) based on weighting criteria. Next, it applies statistical key features selection (SKFS) (i.e., mean and standard deviation) to handle outliers and higher class variance. The proposed XAI-HAR is evaluated using machine learning models, namely, random forest (RF), K-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), naive Bayes (NB), and deep learning models such as deep neural network (DNN), convolutional neural network (CNN), and CNN-based long short-term memory (CNN-LSTM). Experiments demonstrate the superior performance of XAI-HAR using the RF classifier over all other machine learning and deep learning models. For explainability, XAI-HAR uses Local Interpretable Model-Agnostic Explanations (LIME) with an RF classifier. XAI-HAR achieves an F-score of 0.96 for health and dementia classification and 0.95 and 0.97 for activity recognition of dementia and healthy individuals, respectively.
APA, Harvard, Vancouver, ISO, and other styles
49

Mustafa, Ahmad, Klaas Koster, and Ghassan AlRegib. "Explainable Machine Learning for Hydrocarbon Risk Assessment." GEOPHYSICS, July 13, 2023, 1–52. http://dx.doi.org/10.1190/geo2022-0594.1.

Full text
Abstract:
Hydrocarbon prospect risk assessment is an important process in oil and gas exploration involving the integrated analysis of various geophysical data modalities, including seismic data, well logs, and geological information, to estimate the likelihood of drilling success for a given drill location. Over the years, geophysicists have attempted to understand the various factors at play influencing the probability of success for hydrocarbon prospects. Towards this end, a large database of prospect drill outcomes and associated attributes has been collected and analyzed via correlation-based techniques to determine the features that contribute the most to deciding the final outcome. Machine learning has the potential to model complex feature interactions and learn input-output mappings for complicated, high-dimensional datasets. However, in many instances, machine learning models are not interpretable to end users, limiting their utility both for understanding the underlying scientific principles of the problem domain and for deployment to assist in the risk assessment process. In this context, we leverage the concept of explainable machine learning to interpret various black-box machine learning models trained on the aforementioned prospect database for risk assessment. Using various case studies on real data, we demonstrate that this model-agnostic explainability analysis for prospect risking can (1) reveal novel scientific insights into the interplay of various features in deciding prospect outcomes, (2) assist with performing feature engineering for machine learning models, (3) detect bias in datasets involving spurious correlations, and (4) build a global picture of a model's understanding of the data by aggregating local explanations on individual data points.
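Point (4) above, building a global picture by aggregating local explanations, can be sketched as follows. The abstract does not name a specific attribution method, so KernelSHAP is an assumption made purely for illustration, as are the function and variable names.

```python
# Sketch of aggregating per-sample (local) attributions into a global feature
# ranking. SHAP is assumed here for illustration; the abstract does not name
# the attribution method actually used.
import numpy as np
import shap

def global_ranking_from_local(model, X_background, X_prospects, feature_names):
    explainer = shap.KernelExplainer(model.predict, shap.sample(X_background, 100))
    phi = np.asarray(explainer.shap_values(X_prospects))   # (n_prospects, n_features)
    global_importance = np.abs(phi).mean(axis=0)            # mean |attribution| per feature
    order = global_importance.argsort()[::-1]
    return [(feature_names[j], float(global_importance[j])) for j in order]
```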
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Darrion Bo-Yun, Alexander Smith, Emily J. Smith, Anant Naik, Mika Janbahan, Charee M. Thompson, Lav R. Varshney, and Wael Hassaneen. "The State of Machine Learning in Outcomes Prediction of Transsphenoidal Surgery: A Systematic Review." Journal of Neurological Surgery Part B: Skull Base, September 12, 2022. http://dx.doi.org/10.1055/a-1941-3618.

Full text
Abstract:
The purpose of this analysis is to assess the use of machine learning (ML) algorithms in the prediction of post-operative outcomes, including complications, recurrence, and death, in transsphenoidal surgery. Following PRISMA guidelines, we systematically reviewed all papers that used at least one ML algorithm to predict outcomes after transsphenoidal surgery. We searched the Scopus, PubMed, and Web of Science databases for studies published prior to May 12th, 2021. We identified 13 studies enrolling 5048 patients. We extracted the general characteristics of each study and the sensitivity, specificity, and AUC of the ML models developed, as well as the features identified as important by the ML models. We identified 12 studies with 5048 patients that included ML algorithms for adenomas, three with 1807 patients specifically for acromegaly, and five with 2105 patients specifically for Cushing’s disease. Nearly all were single-institution studies. The studies used a heterogeneous mix of ML algorithms and features to build predictive models. All papers reported an AUC greater than 0.7, which indicates clinical utility. ML algorithms have the potential to predict post-operative outcomes of transsphenoidal surgery and can improve patient care. Ensemble algorithms and neural networks were often top performers when compared to other ML algorithms. Biochemical and pre-operative features were most likely to be selected as important by ML models. Inexplicability remains a challenge, but algorithms such as local interpretable model-agnostic explanations (LIME) or Shapley values can increase the explainability of ML algorithms. Our analysis shows that ML algorithms have the potential to greatly assist surgeons in clinical decision making.
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography