Academic literature on the topic 'Local-interpretable-model-agnostic'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Local-interpretable-model-agnostic.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Local-interpretable-model-agnostic"

1

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where for the same prediction, different explanations can be generated. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster of the new instance that is being explained. After finding the relevant cluster, a simple model (i.e., linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
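The clustering-based neighbourhood selection described in this abstract can be illustrated with a minimal sketch; this is an approximation under assumed inputs (a `black_box_predict` function, a training matrix `X_train`, and an instance `x`), not the authors' DLIME implementation.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

def deterministic_local_explanation(black_box_predict, X_train, x, n_clusters=8):
    # 1. Group the training data with agglomerative hierarchical clustering.
    cluster_ids = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    # 2. Use KNN to decide which cluster the instance being explained belongs to.
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, cluster_ids)
    members = cluster_ids == knn.predict(x.reshape(1, -1))[0]
    X_local = X_train[members]
    # 3. Fit an interpretable surrogate on that cluster against the black-box
    #    predictions; its coefficients serve as local feature importances.
    surrogate = Ridge(alpha=1.0).fit(X_local, black_box_predict(X_local))
    return surrogate.coef_
```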
2

Neves, Inês, Duarte Folgado, Sara Santos, Marília Barandas, Andrea Campagner, Luca Ronzio, Federico Cabitza, and Hugo Gamboa. "Interpretable heartbeat classification using local model-agnostic explanations on ECGs." Computers in Biology and Medicine 133 (June 2021): 104393. http://dx.doi.org/10.1016/j.compbiomed.2021.104393.

3

Palatnik de Sousa, Iam, Marley Maria Bernardes Rebuzzi Vellasco, and Eduardo Costa da Silva. "Local Interpretable Model-Agnostic Explanations for Classification of Lymph Node Metastases." Sensors 19, no. 13 (July 5, 2019): 2969. http://dx.doi.org/10.3390/s19132969.

Abstract:
An application of explainable artificial intelligence on medical data is presented. There is an increasing demand in machine learning literature for such explainable models in health-related applications. This work aims to generate explanations on how a Convolutional Neural Network (CNN) detects tumor tissue in patches extracted from histology whole slide images. This is achieved using the “locally-interpretable model-agnostic explanations” methodology. Two publicly-available convolutional neural networks trained on the Patch Camelyon Benchmark are analyzed. Three common segmentation algorithms are compared for superpixel generation, and a fourth simpler parameter-free segmentation algorithm is proposed. The main characteristics of the explanations are discussed, as well as the key patterns identified in true positive predictions. The results are compared to medical annotations and literature and suggest that the CNN predictions follow at least some aspects of human expert knowledge.
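The superpixel-based explanation workflow mentioned in this abstract roughly corresponds to the following hedged sketch using the `lime` package; the CNN (`cnn`), the histology patch (`patch`), and the segmentation settings are illustrative assumptions, not the paper's exact configuration.

```python
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

explainer = lime_image.LimeImageExplainer()
segmenter = SegmentationAlgorithm("slic", n_segments=50, compactness=10)  # superpixel algorithm

explanation = explainer.explain_instance(
    patch.astype("double"),       # one histology patch as an H x W x 3 array (assumed)
    classifier_fn=cnn.predict,    # assumed CNN returning class probabilities per image
    segmentation_fn=segmenter,
    top_labels=1,
    num_samples=1000,
)
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
```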
4

Jiang, Enshuo. "UniformLIME: A Uniformly Perturbed Local Interpretable Model-Agnostic Explanations Approach for Aerodynamics." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012025. http://dx.doi.org/10.1088/1742-6596/2171/1/012025.

Abstract:
Machine learning and deep learning are widely used in the field of aerodynamics, but most models are often seen as black boxes due to a lack of interpretability. Local Interpretable Model-agnostic Explanations (LIME) is a popular method that uses a local surrogate model to explain a single instance of machine learning. Its main disadvantages are the instability of the explanations and low local fidelity. In this paper, we propose an original modification to LIME by employing a new perturbed sample generation method for aerodynamic tabular data in a regression model, which makes the differences between perturbed samples and the input instance vary over a larger range. We make several comparisons with three subtasks and show that our proposed method results in better metrics.
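For context, the baseline that such perturbation modifications build on is standard tabular LIME run in regression mode; the sketch below assumes a fitted regression model `model`, matrices `X_train` and `X_test`, and a `feature_names` list, and is not the paper's UniformLIME sampler.

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                      # assumed training matrix (numpy array)
    feature_names=feature_names,  # assumed list of column names
    mode="regression",            # explain a regression output rather than class scores
)
exp = explainer.explain_instance(X_test[0], model.predict, num_features=8)
print(exp.as_list())              # (feature condition, local weight) pairs
```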
5

Nguyen, Hai Thanh, Cham Ngoc Thi Nguyen, Thao Minh Nguyen Phan, and Tinh Cong Dao. "Pleural Effusion Diagnosis using Local Interpretable Model-agnostic Explanations and Convolutional Neural Network." IEIE Transactions on Smart Processing & Computing 10, no. 2 (April 30, 2021): 101–8. http://dx.doi.org/10.5573/ieiespc.2021.10.2.101.

6

Admassu, Tsehay. "Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection." Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 48–59. http://dx.doi.org/10.46604/peti.2023.10101.

Abstract:
This study aims to investigate the effectiveness of local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP is evaluated by analyzing the diagnostic results of the XGBoost model and the stability and quality of counterfactual explanations. Firstly, 1025 heart disease samples are collected from the University of California Irvine. Then, the performance of LIME and SHAP is compared by using the XGBoost model with various measures, such as consistency and proximity. Finally, the Python 3.7 programming language with the Jupyter Notebook integrated development environment is used for simulation. The simulation result shows that the XGBoost model achieves 99.79% accuracy, indicating that the counterfactual explanation of the XGBoost model describes the smallest changes in the feature values for changing the diagnosis outcome to the predefined output.
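The LIME-versus-SHAP comparison on an XGBoost classifier can be outlined as follows; this is a hedged sketch that assumes a pandas feature matrix `X` and label vector `y` for the heart-disease data, not the study's exact pipeline.

```python
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer

model = xgb.XGBClassifier().fit(X, y)       # X, y: assumed heart-disease features and labels

# Local LIME explanation for one patient record.
lime_exp = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), class_names=["no disease", "disease"]
).explain_instance(X.values[0], model.predict_proba, num_features=5)

# SHAP attributions for the same model; TreeExplainer is exact for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)
```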
7

Rajapaksha, Dilini, and Christoph Bergmeir. "LIMREF: Local Interpretable Model Agnostic Rule-Based Explanations for Forecasting, with an Application to Electricity Smart Meter Data." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12098–107. http://dx.doi.org/10.1609/aaai.v36i11.21469.

Abstract:
Accurate electricity demand forecasts play a key role in sustainable power systems. To enable better decision-making, especially for demand flexibility of the end-user, it is necessary to provide not only accurate but also understandable and actionable forecasts. To provide accurate forecasts, Global Forecasting Models (GFM) that are trained across time series have recently shown superior results in many demand forecasting competitions and real-world applications, compared with univariate forecasting approaches. We aim to fill the gap between accuracy and interpretability in global forecasting approaches. In order to explain the global model forecasts, we propose Local Interpretable Model-agnostic Rule-based Explanations for Forecasting (LIMREF), a local explainer framework that produces k-optimal impact rules for a particular forecast, considering the global forecasting model as a black-box model, in a model-agnostic way. It provides different types of rules which explain the forecast of the global model, as well as counterfactual rules, which provide actionable insights into potential changes to obtain different outputs for given instances. We conduct experiments using a large-scale electricity demand dataset with exogenous features such as temperature and calendar effects. Here, we evaluate the quality of the explanations produced by the LIMREF framework in terms of both qualitative and quantitative aspects such as accuracy, fidelity and comprehensibility, and benchmark those against other local explainers.
8

Singh, Devesh. "Interpretable Machine-Learning Approach in Estimating FDI Inflow: Visualization of ML Models with LIME and H2O." TalTech Journal of European Studies 11, no. 1 (May 1, 2021): 133–52. http://dx.doi.org/10.2478/bjes-2021-0009.

Abstract:
In advancement of interpretable machine learning (IML), this research proposes local interpretable model-agnostic explanations (LIME) as a new visualization technique in a novel informative way to analyze the foreign direct investment (FDI) inflow. This article examines the determinants of FDI inflow through IML with a supervised learning method to analyze the foreign investment determinants in Hungary by using the open-source artificial intelligence H2O platform. The author used three ML algorithms—general linear model (GLM), gradient boosting machine (GBM), and random forest (RF) classifier—to analyze the FDI inflow from 2001 to 2018. The result of this study shows that among the three classifiers, GBM performs best in analyzing FDI inflow determinants. The variable value of production in a region is the most influential determinant of the inflow of FDI in Hungarian regions. Explanatory visualizations are presented from the analyzed dataset, which leads to their use in decision-making.
9

GhoshRoy, Debasmita, Parvez Ahmad Alvi, and KC Santosh. "Explainable AI to Predict Male Fertility Using Extreme Gradient Boosting Algorithm with SMOTE." Electronics 12, no. 1 (December 21, 2022): 15. http://dx.doi.org/10.3390/electronics12010015.

Abstract:
Infertility is a common problem across the world. Infertility distribution due to male factors ranges from 40% to 50%. Existing artificial intelligence (AI) systems are not often human interpretable. Further, clinicians are unaware of how data analytical tools make decisions, and as a result, they have limited exposure to healthcare. Using explainable AI tools makes AI systems transparent and traceable, enhancing users’ trust and confidence in decision-making. The main contribution of this study is to introduce an explainable model for investigating male fertility prediction. Nine features related to lifestyle and environmental factors are utilized to develop a male fertility prediction model. Five AI tools, namely support vector machine, adaptive boosting, conventional extreme gradient boost (XGB), random forest, and extra tree algorithms are deployed with a balanced and imbalanced dataset. To produce our model in a trustworthy way, an explainable AI is applied. The techniques are (1) local interpretable model-agnostic explanations (LIME) and (2) Shapley additive explanations (SHAP). Additionally, ELI5 is utilized to inspect the feature’s importance. Finally, XGB outperformed and obtained an AUC of 0.98, which is optimal compared to existing AI systems.
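A hedged sketch of the balanced-training part of this pipeline (SMOTE oversampling followed by XGBoost and SHAP attributions) is shown below; the feature matrix `X` and labels `y` are assumed placeholders, and the LIME and ELI5 steps are analogous and omitted.

```python
import shap
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)          # balance the two classes
X_tr, X_te, y_tr, y_te = train_test_split(X_res, y_res, test_size=0.2, random_state=0)

model = XGBClassifier().fit(X_tr, y_tr)
shap_values = shap.TreeExplainer(model).shap_values(X_te)        # per-feature attributions
```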
10

Toğaçar, Mesut, Nedim Muzoğlu, Burhan Ergen, Bekir Sıddık Binboğa Yarman, and Ahmet Mesrur Halefoğlu. "Detection of COVID-19 findings by the local interpretable model-agnostic explanations method of types-based activations extracted from CNNs." Biomedical Signal Processing and Control 71 (January 2022): 103128. http://dx.doi.org/10.1016/j.bspc.2021.103128.


Dissertations / Theses on the topic "Local-interpretable-model-agnostic"

1

Fjellström, Lisa. "The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184671.

Abstract:
Videos manipulated by machine learning have rapidly increased online in the past years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted where 3 multimedia forensics specialists evaluated the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet answers showed that LIME may be used to indicate areas at which to start examining. LIME was however not considered to provide sufficient proof of why a frame was classified as 'fake', and would, if introduced, be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
2

Norrie, Christian. "Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19845.

Abstract:
Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques to some domains is an inability for AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or there are legal requirements for justifying decisions that are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics field. These techniques are evaluated through a user study. User study results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation and research utilizing a user study survey or interview is suggested to increase interpretability and explainability of machine learning results.
3

Malmberg, Jacob, Marcus Nystad Öhman, and Alexandra Hotti. "Implementing Machine Learning in the Credit Process of a Learning Organization While Maintaining Transparency Using LIME." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232579.

Abstract:
To determine whether a credit limit for a corporate client should be changed, a financial institution writes a PM containing text and financial data that is then assessed by a credit committee, which decides whether to increase the limit or not. To make this process more efficient, machine learning algorithms were used to classify the credit PMs instead of a committee. Since most machine learning algorithms are black boxes, the LIME framework was used to find the most important features driving the classification. The results of this study show that credit memos can be classified with high accuracy and that LIME can be used to indicate which parts of the memo had the biggest impact. This implies that the credit process could be improved by utilizing machine learning while maintaining transparency. However, machine learning may disrupt learning processes within the organization, so the introduction of these algorithms should be weighed against the importance of preserving and developing knowledge within the organization.
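Explaining a text classifier over credit memos with LIME follows the pattern sketched below; the TF-IDF plus logistic regression pipeline, the `memo_texts` list, and the class names are assumptions for illustration, not the thesis's actual model.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(memo_texts, labels)            # memo_texts: assumed list of credit-PM strings

explainer = LimeTextExplainer(class_names=["keep limit", "change limit"])
exp = explainer.explain_instance(memo_texts[0], pipeline.predict_proba, num_features=10)
print(exp.as_list())                        # words that pushed the prediction most
```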
4

Saluja, Rohit. "Interpreting Multivariate Time Series for an Organization Health Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289465.

Abstract:
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems just making predictions using machine learning algorithms only solves the problem partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored as compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to be able to quantify the quality of explanations produced after interpreting machine learning models. Due to this reason, evaluation of interpretability is extremely important. The evaluation of interpretability for models built on time series seems completely unexplored in research circles. This thesis work focused on achieving and evaluating model agnostic interpretability in a time series forecasting problem.  The use case discussed in this thesis work focused on finding a solution to a problem faced by a digital consultancy company. The digital consultancy wants to take a data-driven approach to understand the effect of various sales related activities in the company on the sales deals closed by the company. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. The interpretability was achieved using two novel model agnostic interpretability techniques, Local interpretable model- agnostic explanations (LIME) and Shapley additive explanations (SHAP). The explanations produced after achieving interpretability were evaluated using human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model. The human evaluation study results also indicated that LIME and SHAP explanations were almost equally understandable with LIME performing better but with a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, this work can offer a very good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
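One common way to make a forecasting model compatible with a tabular explainer such as LIME, in the spirit of the approach described above, is to flatten each forecast origin into a row of lagged features; the sketch below assumes a 1-D `sales_series` array and a `forecaster` whose `predict` accepts such rows, and is not the thesis's exact setup.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def make_lagged_matrix(series, n_lags=12):
    # Each row holds the recent history that the forecaster sees at one origin.
    return np.asarray([series[i - n_lags:i] for i in range(n_lags, len(series))])

X_lagged = make_lagged_matrix(sales_series, n_lags=12)   # sales_series: assumed 1-D array
explainer = LimeTabularExplainer(
    X_lagged,
    feature_names=[f"lag_{k}" for k in range(12, 0, -1)],
    mode="regression",
)
exp = explainer.explain_instance(X_lagged[-1], forecaster.predict, num_features=6)
```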
5

Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.

Abstract:
The past couple of decades have seen a new flourishing season for Artificial Intelligence, in particular for Machine Learning (ML). This is reflected in the great number of fields that are employing ML solutions to overcome a broad spectrum of problems. However, most of the ML models employed recently have a black-box behavior. This means that, given a certain input, we are not able to understand why one of these models produced a certain output or made a certain decision. Most of the time we are not interested in knowing what and how the model is thinking, but if we think of a model which makes extremely critical decisions or takes decisions that have a heavy impact on people's lives, in these cases explainability is a duty. A great variety of techniques to perform global or local explanations are available. One of the most widespread is Local Interpretable Model-Agnostic Explanations (LIME), which creates a local linear model in the proximity of an input to understand in which way each feature contributes to the final output. However, LIME is not immune to instability problems and sometimes to incoherent predictions. Furthermore, as a local explainability technique, LIME needs to be performed for each different input that we want to explain. In this work, we have been inspired by the LIME approach for linear models to craft a novel technique. In combination with Model-based Recursive Partitioning (MOB), a brand-new score function to assess the quality of a partition, and the usage of Sobol quasi-Monte Carlo sampling, we developed a new global model-agnostic explainability technique we called Global-Lime. Global-Lime is capable of giving a global understanding of the original ML model through an ensemble of spatially non-overlapping hyperplanes, plus a local explanation for a certain output considering only the corresponding linear approximation. The idea is to train the black-box model and then supply, along with it, its explainable version.
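The Sobol quasi-Monte Carlo sampling ingredient mentioned above can be sketched as follows: low-discrepancy samples cover the feature space evenly and are used to query the black box before fitting a surrogate. The `X_train` matrix and `black_box_predict` function are assumed placeholders, and the recursive partitioning step of Global-Lime is not reproduced here.

```python
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression

d = X_train.shape[1]                                   # X_train: assumed training matrix
sampler = qmc.Sobol(d=d, scramble=True)
unit_samples = sampler.random(1024)                    # low-discrepancy points in [0, 1]^d
samples = qmc.scale(unit_samples, X_train.min(axis=0), X_train.max(axis=0))

# Query the black box on the Sobol design and fit one global linear surrogate.
surrogate = LinearRegression().fit(samples, black_box_predict(samples))
```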

Book chapters on the topic "Local-interpretable-model-agnostic"

1

Davagdorj, Khishigsuren, Meijing Li, and Keun Ho Ryu. "Local Interpretable Model-Agnostic Explanations of Predictive Models for Hypertension." In Advances in Intelligent Information Hiding and Multimedia Signal Processing, 426–33. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6757-9_53.

2

Thanh-Hai, Nguyen, Toan Bao Tran, An Cong Tran, and Nguyen Thai-Nghe. "Feature Selection Using Local Interpretable Model-Agnostic Explanations on Metagenomic Data." In Future Data and Security Engineering. Big Data, Security and Privacy, Smart City and Industry 4.0 Applications, 340–57. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4370-2_24.

3

ElShawi, Radwa, Youssef Sherif, Mouaz Al-Mallah, and Sherif Sakr. "ILIME: Local and Global Interpretable Model-Agnostic Explainer of Black-Box Decision." In Advances in Databases and Information Systems, 53–68. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28730-6_4.

4

Graziani, Mara, Iam Palatnik de Sousa, Marley M. B. R. Vellasco, Eduardo Costa da Silva, Henning Müller, and Vincent Andrearczyk. "Sharpening Local Interpretable Model-Agnostic Explanations for Histopathology: Improved Understandability and Reliability." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 540–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87199-4_51.

5

Schlegel, Udo, Duy Lam Vo, Daniel A. Keim, and Daniel Seebacher. "TS-MULE: Local Interpretable Model-Agnostic Explanations for Time Series Forecast Models." In Communications in Computer and Information Science, 5–14. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93736-2_1.

6

Recio-García, Juan A., Belén Díaz-Agudo, and Victor Pino-Castilla. "CBR-LIME: A Case-Based Reasoning Approach to Provide Specific Local Interpretable Model-Agnostic Explanations." In Case-Based Reasoning Research and Development, 179–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58342-2_12.

7

Biecek, Przemyslaw, and Tomasz Burzykowski. "Local Interpretable Model-agnostic Explanations (LIME)." In Explanatory Model Analysis, 107–23. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9780429027192-11.


Conference papers on the topic "Local-interpretable-model-agnostic"

1

Shi, Peichang, Aryya Gangopadhyay, and Ping Yu. "LIVE: A Local Interpretable model-agnostic Visualizations and Explanations." In 2022 IEEE 10th International Conference on Healthcare Informatics (ICHI). IEEE, 2022. http://dx.doi.org/10.1109/ichi54592.2022.00045.

2

Barr Kumarakulasinghe, Nesaretnam, Tobias Blomberg, Jintai Liu, Alexandra Saraiva Leao, and Panagiotis Papapetrou. "Evaluating Local Interpretable Model-Agnostic Explanations on Clinical Machine Learning Classification Models." In 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS). IEEE, 2020. http://dx.doi.org/10.1109/cbms49503.2020.00009.

3

Farhood, Helia, Morteza Saberi, and Mohammad Najafi. "Improving Object Recognition in Crime Scenes via Local Interpretable Model-Agnostic Explanations." In 2021 IEEE 25th International Enterprise Distributed Object Computing Workshop (EDOCW). IEEE, 2021. http://dx.doi.org/10.1109/edocw52865.2021.00037.

4

Wang, Bin, Wenbin Pei, Bing Xue, and Mengjie Zhang. "Evolving local interpretable model-agnostic explanations for deep neural networks in image classification." In GECCO '21: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3449726.3459452.

5

Freitas da Cruz, Harry, Frederic Schneider, and Matthieu-P. Schapranow. "Prediction of Acute Kidney Injury in Cardiac Surgery Patients: Interpretation using Local Interpretable Model-agnostic Explanations." In 12th International Conference on Health Informatics. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007399203800387.

6

Sousa, Iam, Marley Vellasco, and Eduardo Silva. "Classificações Explicáveis para Imagens de Células Infectadas por Malária." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/eniac.2020.12116.

Abstract:
This work presents the development of an explainable image classifier trained to determine whether a cell has been infected by malaria. The classifier consists of a residual neural network, with a classification accuracy of 96%, trained on the National Health Institute Malaria dataset. Explainable Artificial Intelligence techniques were applied to make the classifications more interpretable. The explanations are generated using two methodologies: Local Interpretable Model-Agnostic Explanations (LIME) and SquareGrid. The explanations provide new and important perspectives on the decision patterns of models such as this one, which perform well on medical tasks.
7

Borges, Fernando Elias Melo, Danton Diego Ferreira, and Antônio Carlos de Sousa Couto Júnior. "Classificação e Interpretação de dados do Cadastro Ambiental Rural utilizando técnicas de Aprendizagem de Máquina." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-108.

Abstract:
The Rural Environmental Registry (CAR) is a mandatory public electronic registry for all rural properties in Brazilian territory; it integrates environmental information about the properties and assists in monitoring them and in combating deforestation. However, a large number of registrations are carried out erroneously, generating inconsistent data and leading them to be canceled and/or sent back for correction. Automatic verification of these records is therefore important to improve their processing. This paper proposes an automatic classification method to approve or cancel CAR registers, with interpretation of the classifications performed. For this, four machine learning-based classifiers were tested and the results were evaluated. The model with the best performance was used to interpret the classification using the Local Interpretable Model-agnostic Explanations (LIME) algorithm. The results showed the potential of the method for future real applications.
8

Protopapadakis, Giorgois, Asteris Apostolidis, and Anestis I. Kalfas. "Explainable and Interpretable AI-Assisted Remaining Useful Life Estimation for Aeroengines." In ASME Turbo Expo 2022: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/gt2022-80777.

Abstract:
Remaining Useful Life (RUL) estimation is directly related to the application of predictive maintenance. When RUL estimation is performed via data-driven methods and Artificial Intelligence algorithms, explainability and interpretability of the model are necessary for trusted predictions. This is especially important when predictive maintenance is applied to gas turbines or aeroengines, as they have high operational and maintenance costs, while their safety standards are strict and highly regulated. The objective of this work is to study the explainability of a Deep Neural Network (DNN) RUL prediction model. An open-source database is used, which is composed of measurements computed through a thermodynamic model for a given turbofan engine, considering non-linear degradation and data points for every second of a full flight cycle. First, the necessary data pre-processing is performed, and a DNN is used for the regression model. The selection of its hyper-parameters is done using random search and Bayesian optimisation. Tests concerning the feature selection and the requirements for additional virtual sensors are discussed. The generalisability of the model is assessed, showing that the type of faults as well as the dominant degradation has an important effect on the overall accuracy of the model. The explainability and interpretability aspects are studied following the Local Interpretable Model-agnostic Explanations (LIME) method. The outcomes show that, for simple data sets, the model can better understand the physics, and LIME can give a good explanation. However, as the complexity of the data increases, not only does the accuracy of the model drop, but LIME also seems to have difficulties in giving satisfactory explanations.
9

Yao, Chen, Xi Yueyun, Chen Jinwei, and Zhang Huisheng. "A Novel Gas Path Fault Diagnostic Model for Gas Turbine Based on Explainable Convolutional Neural Network With LIME Method." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-59289.

Abstract:
Gas turbines are widely used in the aviation and energy industries. Gas path fault diagnosis is an important task for gas turbine operation and maintenance. With the development of information technology, especially deep learning methods, data-driven approaches for gas path diagnosis have been developing rapidly in recent years. However, the mechanisms of most data-driven models are difficult to explain, resulting in a lack of credibility of the data-driven methods. In this paper, a novel explainable data-driven model for gas path fault diagnosis based on a Convolutional Neural Network (CNN) using the Local Interpretable Model-agnostic Explanations (LIME) method is proposed. The input matrix of the CNN model is established by considering the mechanism information of gas turbine fault modes and their effects. The relationship between the measurement parameters and fault modes is considered when arranging their relative positions in the input matrix. The key parameters which contribute to fault recognition can be obtained with the LIME method, and the mechanism information is used to verify the fault diagnosis process and improve the measurement sensor matrix arrangement. A double-shaft gas turbine model is used to generate healthy and fault data, including 12 typical faults, to test the model. The accuracy and interpretability of the CNN diagnosis model built with prior mechanism knowledge and the one built from a parameter correlation matrix are compared, with accuracies of 96.34% and 89.46% respectively. The result indicates that the CNN diagnosis model built with prior mechanism knowledge shows better accuracy and interpretability. This method can express the relevance of the failure mode and its highly correlated measurement parameters in the model, which can greatly improve the interpretability and application value.
10

Aditama, Prihandono, Tina Koziol, and Dr Meindert Dillen. "Development of an Artificial Intelligence-Based Well Integrity Monitoring Solution." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211093-ms.

Abstract:
The question of how to safeguard well integrity is one of the most important problems faced by oil and gas companies today. With the rise of Artificial Intelligence (AI) and Machine Learning (ML), many companies explore new technologies to improve well integrity and avoid catastrophic events. This paper presents the Proof of Concept (PoC) of an AI-based well integrity monitoring solution for gas lift, natural flow, and water injector wells. AI model prototypes were built to detect annulus leakage as incident-relevant anomalies from time series sensor data. The historical well annulus leakage incidents were classified based on well type, and the incident-relevant anomalies were categorized as short- and long-term. The objective of the PoC is to build generalized AI models that can detect historical events and future events in unseen wells. The success criteria are discussed and agreed with the Subject Matter Experts (SMEs). Two statistical metrics were defined (Detected Event Rate – DER – and False Alarm Rate – FAR) to quantitatively evaluate the model performance and decide if it could be used for the next phase. The high-frequency sensor data were retrieved from the production historian. The importance of the sensors was aligned with the SMEs, and only a small number of sensors was used as input variables. The raw data was pre-processed and resampled to improve model performance and increase computational efficiency. Throughout the PoC, the authors learnt that specific AI models needed to be implemented for different well types, as generalization across well types could not be achieved. Depending on the number of available labels in the training dataset, either unsupervised or supervised ML models were developed. Deep learning models, based on an LSTM (Long Short-Term Memory) autoencoder and classifier, were used to detect complex anomalies. In cases where limited data were available and simplistic anomaly patterns were present, deterministic rules were implemented to detect well integrity-relevant incidents. The LIME (Local Interpretable Model-Agnostic Explanations) framework was used to derive the most important sensors causing the anomaly prediction, to enable the users to critically validate the AI suggestion. The AI models for gas lift and natural flow wells achieved a sufficient level of performance, with a minimum of 75% of historical events detected and fewer than one false positive per month per well.
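A hedged sketch of the kind of LSTM-autoencoder anomaly detector the abstract describes is shown below: the network is trained to reconstruct windows of sensor readings from healthy operation, and a high reconstruction error flags a potential anomaly. The window size, layer widths, threshold, and the `X_normal`/`X_new` arrays are assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

T, F = 60, 8                                  # assumed: 60 time steps per window, 8 sensors
model = Sequential([
    LSTM(32, input_shape=(T, F)),             # encode the sensor window into a latent vector
    RepeatVector(T),                          # repeat the latent vector for every time step
    LSTM(32, return_sequences=True),          # decode the sequence
    TimeDistributed(Dense(F)),                # reconstruct all sensor channels
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_normal, X_normal, epochs=20, batch_size=64)    # X_normal: healthy-operation windows

# Flag windows whose reconstruction error is unusually high as potential anomalies.
errors = np.mean((model.predict(X_new) - X_new) ** 2, axis=(1, 2))
anomalies = errors > np.percentile(errors, 99)
```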