A selection of scholarly literature on the topic "Model-agnostic Explainability"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Model-agnostic Explainability".

Next to every work in the list you will find an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided such details are available in the source's metadata.

Journal articles on the topic "Model-agnostic Explainability"

1

Diprose, William K., Nicholas Buist, Ning Hua, Quentin Thurier, George Shand, and Reece Robinson. "Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator." Journal of the American Medical Informatics Association 27, no. 4 (February 27, 2020): 592–600. http://dx.doi.org/10.1093/jamia/ocz229.

Abstract:
Objective: Implementation of machine learning (ML) may be limited by patients’ right to “meaningful information about the logic involved” when ML influences healthcare decisions. Given the complexity of healthcare decisions, it is likely that ML outputs will need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods. Materials and Methods: We designed a survey for physicians with a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians were asked to rate their understanding, explainability, and trust in response to 3 different ML outputs. One ML output had no explanation of its logic (the control) and 2 ML outputs used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association. Results: The survey was sent to 1315 physicians, and 170 (13%) provided completed surveys. There were significant associations between physician understanding and explainability (P < .001), between physician understanding and trust (P < .001), and between explainability and trust (P < .001). ML outputs that used model-agnostic explainability methods were preferred by 88% of physicians when compared with the control condition; however, no particular ML explainability method had a greater influence on intended physician behavior. Conclusions: Physician understanding, explainability, and trust in ML risk calculators are related. Physicians preferred ML outputs accompanied by model-agnostic explanations but the explainability method did not alter intended physician behavior.
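For readers who want to see what this kind of stratified association analysis looks like in code, a minimal sketch of a Cochran-Mantel-Haenszel-style test on 2x2 tables with statsmodels is given below. The counts are purely hypothetical (the survey data are not public), and the authors' exact test variant for ordinal ratings may differ.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical 2x2 tables (e.g., understanding high/low vs. trust high/low),
# one table per stratum; the counts are illustrative only.
tables = [np.array([[34, 12], [9, 28]]),
          np.array([[25, 15], [11, 30]])]
result = StratifiedTable(tables).test_null_odds(correction=True)
print(result.statistic, result.pvalue)
```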
2

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where for the same prediction, different explanations can be generated. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster of the new instance that is being explained. After finding the relevant cluster, a simple model (i.e., linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority for Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
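A minimal sketch of the deterministic-neighbourhood idea described in the abstract, written with scikit-learn rather than the authors' released code; the cluster count, the ridge surrogate, and the prediction interface are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import Ridge

def dlime_style_explanation(predict_fn, X_train, x, n_clusters=8):
    """Explain predict_fn(x) with a linear surrogate fitted on the training
    cluster closest to x, so the explanation involves no random perturbation."""
    # 1. Group the training data with agglomerative hierarchical clustering.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    # 2. Find the cluster of the instance via its nearest training neighbour.
    _, idx = NearestNeighbors(n_neighbors=1).fit(X_train).kneighbors(x.reshape(1, -1))
    X_local = X_train[labels == labels[idx[0, 0]]]
    # 3. Fit an interpretable surrogate (here ridge regression on the model's scores).
    surrogate = Ridge(alpha=1.0).fit(X_local, predict_fn(X_local))
    return surrogate.coef_  # per-feature contribution around x
```

Calling the function twice on the same instance returns identical coefficients, which is the stability property the paper quantifies.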
3

Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example." Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.

Abstract:
Aim: Machine learning tools have various applications in healthcare. However, the implementation of developed models is still limited because of various challenges. One of the most important problems is the lack of explainability of machine learning models. Explainability refers to the capacity to reveal the reasoning and logic behind the decisions made by AI systems, making it straightforward for human users to understand the process and how the system arrived at a specific outcome. The study aimed to compare the performance of different model-agnostic explanation methods using two different ML models created for HbA1c classification. Material and Method: The H2O AutoML engine was used for the development of two ML models (Gradient boosting machine (GBM) and default random forests (DRF)) using 3,036 records from NHANES open data set. Both global and local model-agnostic explanation methods, including performance metrics, feature important analysis and Partial dependence, Breakdown and Shapley additive explanation plots were utilized for the developed models. Results: While both GBM and DRF models have similar performance metrics, such as mean per class error and area under the receiver operating characteristic curve, they had slightly different variable importance. Local explainability methods also showed different contributions to the features. Conclusion: This study evaluated the significance of explainable machine learning techniques for comprehending complicated models and their role in incorporating AI in healthcare. The results indicate that although there are limitations to current explainability methods, particularly for clinical use, both global and local explanation models offer a glimpse into evaluating the model and can be used to enhance or compare models.
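The global explanation methods named above (feature importance and partial dependence) can be reproduced in a few lines with scikit-learn; the snippet below uses a synthetic stand-in for the NHANES data and a gradient boosting classifier instead of the H2O AutoML models, so treat it only as an illustration of the workflow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance, partial_dependence

# Synthetic stand-in for the HbA1c feature set used in the study.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global explanation 1: permutation feature importance.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(imp.importances_mean)

# Global explanation 2: partial dependence of the prediction on feature 0.
pd_curve = partial_dependence(model, X, features=[0])["average"]
print(pd_curve)
```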
4

Ullah, Ihsan, Andre Rios, Vaibhav Gala, and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation." Applied Sciences 12, no. 1 (December 23, 2021): 136. http://dx.doi.org/10.3390/app12010136.

Abstract:
Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself for relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP with tabular datasets containing mixed data (categorical and numerical) using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show how LRP is more effective than traditional explainability concepts of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) for explainability. This effectiveness is both local to a sample level and holistic over the whole testing set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, thus opening up a new area of research of using XAI as an approach for feature subset selection.
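At its core, LRP redistributes the output score backwards through the network layer by layer. A bare-bones NumPy sketch of the epsilon rule for fully connected ReLU layers is given below; the two-layer toy network and its random weights are placeholders, not the 1D-CNN used in the paper.

```python
import numpy as np

def lrp_epsilon(weights, biases, activations, relevance_out, eps=1e-6):
    """LRP-epsilon for a dense layer: R_i = a_i * sum_j w_ij * R_j / (z_j + eps)."""
    z = activations @ weights + biases                  # pre-activations of the layer
    s = relevance_out / (z + eps * np.where(z >= 0, 1.0, -1.0))
    return activations * (weights @ s)                  # relevance of the layer's inputs

# Toy two-layer ReLU network with placeholder parameters.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
x = rng.normal(size=10)
h = np.maximum(x @ W1 + b1, 0)
out = h @ W2 + b2

# Backward relevance pass from the output score down to the input features.
R_h = lrp_epsilon(W2, b2, h, out)
R_x = lrp_epsilon(W1, b1, x, R_h)
print(R_x, R_x.sum(), out)   # per-feature relevance; the sum roughly conserves the score
```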
5

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are considered black box models that lack explainability, there is an increasing trend of attempting to create medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. Various XAI-driven healthcare approaches and their performances in the current study are discussed. The toolkits used in local and global post hoc explainability and the multiple techniques for explainability pertaining the Rational, Data, and Performance explainability are discussed in the current study. Methods. The explainability of the artificial intelligence model in the healthcare domain is implemented through the Local Interpretable Model-Agnostic Explanations and Shapley Additive Explanations for better comprehensibility of the internal working mechanism of the original AI models and the correlation among the feature set that influences decision of the model. Results. The current state-of-the-art XAI-based and future technologies through XAI are reported on research findings in various implementation aspects, including research challenges and limitations of existing models. The role of XAI in the healthcare domain ranging from the earlier prediction of future illness to the disease’s smart diagnosis is discussed. The metrics considered in evaluating the model’s explainability are presented, along with various explainability tools. Three case studies about the role of XAI in the healthcare domain with their performances are incorporated for better comprehensibility. Conclusion. The future perspective of XAI in healthcare will assist in obtaining research insight in the healthcare domain.
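As an illustration of the model-agnostic SHAP usage the survey refers to, the sketch below explains a random forest with KernelExplainer; the breast cancer dataset is only a convenient tabular stand-in for the clinical cohorts discussed in the paper.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)           # stand-in clinical tabular data
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic Shapley estimates: only a prediction function and a background set are needed.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X[:5])            # local explanations for five patients
print(shap_values[0])                                 # per-feature contributions, first instance
```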
6

Lv, Ge, Chen Jason Zhang, and Lei Chen. "HENCE-X: Toward Heterogeneity-Agnostic Multi-Level Explainability for Deep Graph Networks." Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 2990–3003. http://dx.doi.org/10.14778/3611479.3611503.

Abstract:
Deep graph networks (DGNs) have demonstrated their outstanding effectiveness on both heterogeneous and homogeneous graphs. However their black-box nature does not allow human users to understand their working mechanisms. Recently, extensive efforts have been devoted to explaining DGNs' prediction, yet heterogeneity-agnostic multi-level explainability is still less explored. Since the two types of graphs are both irreplaceable in real-life applications, having a more general and end-to-end explainer becomes a natural and inevitable choice. In the meantime, feature-level explanation is often ignored by existing techniques, while topological-level explanation alone can be incomplete and deceptive. Thus, we propose a heterogeneity-agnostic multi-level explainer in this paper, named HENCE-X, which is a causality-guided method that can capture the non-linear dependencies of model behavior on the input using conditional probabilities. We theoretically prove that HENCE-X is guaranteed to find the Markov blanket of the explained prediction, meaning that all information that the prediction is dependent on is identified. Experiments on three real-world datasets show that HENCE-X outperforms state-of-the-art (SOTA) methods in generating faithful factual and counterfactual explanations of DGNs.
7

Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.

Abstract:
Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS classification. XCM is a new compact convolutional neural network which extracts information relative to the observed variables and time directly from the input data. Thus, XCM architecture enables a good generalization ability on both large and small datasets, while allowing the full exploitation of a faithful post hoc model-specific explainability method (Gradient-weighted Class Activation Mapping) by precisely identifying the observed variables and timestamps of the input data that are important for predictions. We first show that XCM outperforms the state-of-the-art MTS classifiers on both the large and small public UEA datasets. Then, we illustrate how XCM reconciles performance and explainability on a synthetic dataset and show that XCM enables a more precise identification of the regions of the input data that are important for predictions compared to the current deep learning MTS classifier also providing faithful explainability. Finally, we present how XCM can outperform the current most accurate state-of-the-art algorithm on a real-world application while enhancing explainability by providing faithful and more informative explanations.
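The model-specific explanation XCM exploits is Gradient-weighted Class Activation Mapping. A generic PyTorch sketch of Grad-CAM over the time axis of a 1D-CNN is shown below; the tiny network, its shapes, and the random input are placeholders rather than the XCM architecture.

```python
import torch
import torch.nn as nn

# Placeholder 1D-CNN for multivariate time series (channels = observed variables).
model = nn.Sequential(
    nn.Conv1d(6, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 3),
)

def grad_cam_1d(model, conv_layer, x, target_class):
    """Gradient-weighted class activation map over the time axis."""
    store = {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    model.zero_grad()
    model(x)[0, target_class].backward()
    h1.remove(); h2.remove()
    weights = store["grad"].mean(dim=-1, keepdim=True)      # channel weights (GAP of gradients)
    cam = torch.relu((weights * store["act"]).sum(dim=1))    # weighted sum over channels
    return cam.squeeze(0)                                    # importance per timestamp

x = torch.randn(1, 6, 100)       # one sample: 6 variables, 100 timestamps
print(grad_cam_1d(model, model[0], x, target_class=0))
```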
8

Hassan, Fayaz, Jianguo Yu, Zafi Sherhan Syed, Nadeem Ahmed, Mana Saleh Al Reshan, and Asadullah Shaikh. "Achieving model explainability for intrusion detection in VANETs with LIME." PeerJ Computer Science 9 (June 22, 2023): e1440. http://dx.doi.org/10.7717/peerj-cs.1440.

Abstract:
Vehicular ad hoc networks (VANETs) are intelligent transport subsystems; vehicles can communicate through a wireless medium in this system. There are many applications of VANETs such as traffic safety and preventing the accident of vehicles. Many attacks affect VANETs communication such as denial of service (DoS) and distributed denial of service (DDoS). In the past few years the number of DoS attacks has been increasing, so network security and protection of the communication systems are challenging topics; intrusion detection systems need to be improved to identify these attacks effectively and efficiently. Many researchers are currently interested in enhancing the security of VANETs. Based on intrusion detection systems (IDS), machine learning (ML) techniques were employed to develop high-security capabilities. A massive dataset containing application layer network traffic is deployed for this purpose. The Local Interpretable Model-Agnostic Explanations (LIME) interpretability technique is applied for better interpretation of model functionality and accuracy. Experimental results demonstrate that utilizing a random forest (RF) classifier achieves 100% accuracy, demonstrating its capability to identify intrusion-based threats in a VANET setting. In addition, LIME is applied to the RF machine learning model to explain and interpret the classification, and the performance of machine learning models is evaluated in terms of accuracy, recall, and F1 score.
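A small sketch of the LIME step described above, applied to a random-forest classifier with the lime package; the synthetic two-class data stand in for the application-layer VANET traffic features, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for the application-layer traffic features.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"f{i}" for i in range(X.shape[1])],
    class_names=["benign", "attack"],
    discretize_continuous=True,
)
exp = explainer.explain_instance(X[0], rf.predict_proba, num_features=5)
print(exp.as_list())   # top feature contributions for this single flow
```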
9

Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM." Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.

Abstract:
The technologies supporting Artificial Intelligence (AI) have advanced rapidly over the past few years and AI is becoming a commonplace in every aspect of life like the future of self-driving cars or earlier health diagnosis. For this to occur shortly, the entire community stands in front of the barrier of explainability, an inherent problem of latest models (e.g. Deep Neural Networks) that were not present in the previous hype of AI (linear and rule-based models). Most of these recent models are used as black boxes without understanding partially or even completely how different features influence the model prediction avoiding algorithmic transparency. In this paper, we focus on how much we can understand the decisions made by an SVM Classifier in a post-hoc model agnostic approach. Furthermore, we train a tree-based model (inherently interpretable) using labels from the SVM, called secondary training data to provide explanations and compare permutation importance method to the more commonly used measures such as accuracy and show that our methods are both more reliable and meaningful techniques to use. We also outline the main challenges for such methods and conclude that model-agnostic interpretability is a key component in making machine learning more trustworthy.
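The post-hoc recipe sketched in the abstract (train a tree on the SVM's own predictions, the "secondary training data", and contrast it with permutation importance) can be written compactly with scikit-learn; the synthetic data and tree depth are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)   # placeholder data
svm = SVC().fit(X, y)

# "Secondary training data": the SVM's predictions become labels for an interpretable tree.
y_svm = svm.predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_svm)
print("fidelity to the SVM:", accuracy_score(y_svm, tree.predict(X)))
print(export_text(tree))                                    # human-readable surrogate rules

# Model-agnostic permutation importance of the original SVM, for comparison.
imp = permutation_importance(svm, X, y, n_repeats=10, random_state=0)
print(imp.importances_mean)
```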
10

Nguyen, Hung Viet, and Haewon Byeon. "Prediction of Out-of-Hospital Cardiac Arrest Survival Outcomes Using a Hybrid Agnostic Explanation TabNet Model." Mathematics 11, no. 9 (April 25, 2023): 2030. http://dx.doi.org/10.3390/math11092030.

Abstract:
Survival after out-of-hospital cardiac arrest (OHCA) is contingent on time-sensitive interventions taken by onlookers, emergency call operators, first responders, emergency medical services (EMS) personnel, and hospital healthcare staff. By building integrated cardiac resuscitation systems of care, measurement systems, and techniques for assuring the correct execution of evidence-based treatments by bystanders, EMS professionals, and hospital employees, survival results can be improved. To aid in OHCA prognosis and treatment, we develop a hybrid agnostic explanation TabNet (HAE-TabNet) model to predict OHCA patient survival. According to the results, the HAE-TabNet model has an “Area under the receiver operating characteristic curve value” (ROC AUC) score of 0.9934 (95% confidence interval 0.9933–0.9935), which outperformed other machine learning models in the previous study, such as XGBoost, k-nearest neighbors, random forest, decision trees, and logistic regression. In order to achieve model prediction explainability for a non-expert in the artificial intelligence field, we combined the HAE-TabNet model with a LIME-based explainable model. This HAE-TabNet model may assist medical professionals in the prognosis and treatment of OHCA patients effectively.

Dissertations on the topic "Model-agnostic Explainability"

1

Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.

Abstract:
The last couple of past decades have seen a new flourishing season for the Artificial Intelligence, in particular for Machine Learning (ML). This is reflected in the great number of fields that are employing ML solutions to overcome a broad spectrum of problems. However, most of the last employed ML models have a black-box behavior. This means that given a certain input, we are not able to understand why one of these models produced a certain output or made a certain decision. Most of the time, we are not interested in knowing what and how the model is thinking, but if we think of a model which makes extremely critical decisions or takes decisions that have a heavy result on people’s lives, in these cases explainability is a duty. A great variety of techniques to perform global or local explanations are available. One of the most widespread is Local Interpretable Model-Agnostic Explanations (LIME), which creates a local linear model in the proximity of an input to understand in which way each feature contributes to the final output. However, LIME is not immune from instability problems and sometimes to incoherent predictions. Furthermore, as a local explainability technique, LIME needs to be performed for each different input that we want to explain. In this work, we have been inspired by the LIME approach for linear models to craft a novel technique. In combination with the Model-based Recursive Partitioning (MOB), a brand-new score function to assess the quality of a partition and the usage of Sobol quasi-Montecarlo sampling, we developed a new global model-agnostic explainability technique we called Global-Lime. Global-Lime is capable of giving a global understanding of the original ML model, through an ensemble of spatially not overlapped hyperplanes, plus a local explanation for a certain output considering only the corresponding linear approximation. The idea is to train the black-box model and then supply along with it its explainable version.
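A much-simplified sketch of the idea outlined above: cover the input space with non-overlapping regions, each carrying its own linear approximation of the black box, using Sobol quasi-Monte-Carlo samples. A shallow regression tree stands in for model-based recursive partitioning (MOB), and the black box, bounds, and leaf count are assumptions for illustration, not the thesis' actual implementation.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def global_piecewise_surrogate(black_box, bounds, n_samples=1024, max_leaves=8):
    """Approximate black_box globally by one linear model per region of a partition."""
    # Sobol quasi-Monte-Carlo sample of the input domain.
    sampler = qmc.Sobol(d=len(bounds), scramble=True, seed=0)
    lows, highs = zip(*bounds)
    X = qmc.scale(sampler.random(n_samples), lows, highs)
    y = black_box(X)
    # Partition the space (a plain tree stands in for model-based recursive partitioning).
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves, random_state=0).fit(X, y)
    leaves = tree.apply(X)
    local_models = {leaf: LinearRegression().fit(X[leaves == leaf], y[leaves == leaf])
                    for leaf in np.unique(leaves)}
    return tree, local_models   # the tree routes a query, the local model explains it linearly

# Usage with a hypothetical black box on [0, 1]^3.
tree, local_models = global_piecewise_surrogate(
    lambda X: np.sin(3 * X[:, 0]) + X[:, 1] ** 2, bounds=[(0.0, 1.0)] * 3)
```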
2

Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.

Abstract:
Deep neural networks are the most widely used examples of sophisticated mapping functions from feature space to class labels. In the recent years, several high impact decisions in domains such as finance, healthcare, law and autonomous driving, are made with deep models. In these tasks, the model decisions lack interpretability, and pose difficulties in making the models accountable. Hence, there is a strong demand for developing explainable approaches which can elicit how the deep neural architecture, despite the astounding performance improvements observed in all fields, including computer vision, natural language processing, generates the output decisions. The current frameworks for explainability of deep models are based on gradients (eg. GradCAM, guided-gradCAM, Integrated gradients etc) or based on locally linear assumptions (eg. LIME). Some of these approaches require the knowledge of the deep model architecture, which may be restrictive in many applications. Further, most of the prior works in the literature highlight the results on a set of small number of examples to illustrate the performance of these XAI methods, often lacking statistical evaluation. This thesis proposes a new approach for explainability based on mask estimation approaches, called the Distillation Approach for Model-agnostic Explainability (DAME). The DAME is a saliency-based explainability model that is post-hoc, model-agnostic (applicable to any black box architecture), and requires only query access to black box. The DAME is a student-teacher modeling approach, where the teacher model is the original model for which the explainability is sought, while the student model is the mask estimation model. The input sample is augmented with various data augmentation techniques to produce numerous samples in the immediate vicinity of the input. Using these samples, the mask estimation model is learnt to generate the saliency map of the input sample for predicting the labels. A distillation loss is used to train the DAME model, and the student model tries to locally approximate the original model. Once the DAME model is trained, the DAME generates a region of the input (either in space or in time domain for images and audio samples, respectively) that best explains the model predictions. We also propose an evaluation framework, for both image and audio tasks, where the XAI models are evaluated in a statistical framework on a set of held-out of examples with the Intersection-over-Union (IoU) metric. We have validated the DAME model for vision, audio and biomedical tasks. Firstly, we deploy the DAME for explaining a ResNet-50 classifier pre-trained on ImageNet dataset for the object recognition task. Secondly, we explain the predictions made by ResNet-50 classifier fine-tuned on Environmental Sound Classification (ESC-10) dataset for the audio event classification task. Finally, we validate the DAME model on the COVID-19 classification task using cough audio recordings. In these tasks, the DAME model is shown to outperform existing benchmarks for explainable modeling. The thesis concludes with a discussion on the limitations of the DAME approach along with the potential future directions.
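A heavily simplified PyTorch sketch of the mask-estimation-by-distillation idea described above: a student network predicts a saliency mask, and a distillation loss keeps the teacher's prediction on the masked input close to its prediction on the original, plus a sparsity penalty. The architectures, loss weights, and the random batch standing in for augmented inputs are placeholders; this is not the DAME implementation from the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Teacher: any black-box classifier we can only query (placeholder CNN here).
teacher = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Student: predicts a saliency mask in [0, 1] for each input.
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 1), nn.Sigmoid())
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.rand(16, 3, 64, 64)      # stand-in for augmented copies around one input

for step in range(100):
    mask = student(x)                                    # (16, 1, 64, 64)
    target = F.softmax(teacher(x), dim=1)                # teacher posterior, full input
    pred = F.log_softmax(teacher(x * mask), dim=1)       # teacher posterior, masked input
    # Distillation term keeps the prediction; sparsity term keeps the mask small.
    loss = F.kl_div(pred, target, reduction="batchmean") + 0.01 * mask.mean()
    opt.zero_grad(); loss.backward(); opt.step()
```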

Book chapters on the topic "Model-agnostic Explainability"

1

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures (e.g. neural networks) with post-processing to calculate feature importance. In this paper, a comprehensive overview of predictive models with varying intrinsic complexity are measured based on explainability with model-agnostic quantitative evaluation metrics. To this end, explainability is designed as a symbiosis between interpretability and faithfulness and thereby allowing to compare inherently created explanations (e.g. decision tree rules) with post-hoc explainability techniques (e.g. Shapley values) on top of AI models. Moreover, two improved versions of the logistic regression model capable of capturing non-linear interactions and both inherently generating their own explanations are proposed in the OOPPM context. These models are benchmarked with two common state-of-the-art models with post-hoc explanation techniques in the explainability-performance space.
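One widely used way to quantify the faithfulness half of that definition is to correlate each feature's attributed importance with the change in prediction when that feature is removed; a minimal, model-agnostic sketch is given below (the baseline replacement strategy is an assumption, and the chapter's exact metric may differ).

```python
import numpy as np

def faithfulness(predict_fn, x, attributions, baseline):
    """Correlation between per-feature attributions and the drop in the model's
    output when each feature is individually replaced by a baseline value."""
    full_pred = predict_fn(x.reshape(1, -1))[0]
    drops = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]                     # remove feature i
        drops.append(full_pred - predict_fn(x_pert.reshape(1, -1))[0])
    return np.corrcoef(attributions, drops)[0, 1]
```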
2

Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.

Abstract:
Many methods have been developed to understand complex predictive models and high expectations are placed on post-hoc model explainability. It turns out that such explanations are not robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which are among the most popular methods of explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability became a must-have trait supporting black-box machine learning. The fooling is performed via poisoning the data to bend and shift explanations in the desired direction using genetic and gradient algorithms. We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.
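The attack only needs to perturb the background data on which the explanation is computed, not the model. A crude random-search stand-in for the paper's genetic algorithm is sketched below using scikit-learn's partial dependence; the regression task, the attacker target (a flat curve), and the step sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def pd_curve(data, feature=0):
    return partial_dependence(model, data, features=[feature])["average"].ravel()

# Poison only the data used to compute the explanation; the model stays untouched.
target = np.zeros_like(pd_curve(X))       # attacker goal: a flat partial dependence
X_poison, best = X.copy(), np.inf
for _ in range(200):                      # random search as a stand-in for the genetic algorithm
    candidate = X_poison + rng.normal(scale=0.05, size=X.shape)
    loss = np.mean((pd_curve(candidate) - target) ** 2)
    if loss < best:
        X_poison, best = candidate, loss

print("original PD:", pd_curve(X).round(2)[:5])
print("poisoned PD:", pd_curve(X_poison).round(2)[:5])
```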
3

Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani, and Fabio Vitali. "A Survey on Methods and Metrics for the Assessment of Explainability Under the Proposed AI Act." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2021. http://dx.doi.org/10.3233/faia210342.

Abstract:
This study discusses the interplay between metrics used to measure the explainability of the AI systems and the proposed EU Artificial Intelligence Act. A standardisation process is ongoing: several entities (e.g. ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act and explainability metrics play a significant role. This study identifies the requirements that such a metric should possess to ease compliance with the AI Act. It does so according to an interdisciplinary approach, i.e. by departing from the philosophical concept of explainability and discussing some metrics proposed by scholars and standardisation entities through the lenses of the explainability obligations set by the proposed AI Act. Our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible & accessible. This is why we discuss the extent to which these requirements are met by the metrics currently under discussion.

Conference papers on the topic "Model-agnostic Explainability"

1

Prentzas, Nicoletta, Marios Pitsiali, Efthyvoulos Kyriacou, Andrew Nicolaides, Antonis Kakas, and Constantinos S. Pattichis. "Model Agnostic Explainability Techniques in Ultrasound Image Analysis." In 2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE). IEEE, 2021. http://dx.doi.org/10.1109/bibe52308.2021.9635199.

2

Hamilton, Nicholas, Adam Webb, Matt Wilder, Ben Hendrickson, Matt Blanck, Erin Nelson, Wiley Roemer, and Timothy C. Havens. "Enhancing Visualization and Explainability of Computer Vision Models with Local Interpretable Model-Agnostic Explanations (LIME)." In 2022 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2022. http://dx.doi.org/10.1109/ssci51031.2022.10022096.

3

Protopapadakis, Giorgois, Asteris Apostolidis, and Anestis I. Kalfas. "Explainable and Interpretable AI-Assisted Remaining Useful Life Estimation for Aeroengines." In ASME Turbo Expo 2022: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/gt2022-80777.

Abstract:
Abstract Remaining Useful Life (RUL) estimation is directly related with the application of predictive maintenance. When RUL estimation is performed via data-driven methods and Artificial Intelligence algorithms, explainability and interpretability of the model are necessary for trusted predictions. This is especially important when predictive maintenance is applied to gas turbines or aeroengines, as they have high operational and maintenance costs, while their safety standards are strict and highly regulated. The objective of this work is to study the explainability of a Deep Neural Network (DNN) RUL prediction model. An open-source database is used, which is composed by computed measurements through a thermodynamic model for a given turbofan engine, considering non-linear degradation and data points for every second of a full flight cycle. First, the necessary data pre-processing is performed, and a DNN is used for the regression model. The selection of its hyper-parameters is done using random search and Bayesian optimisation. Tests considering the feature selection and the requirements of additional virtual sensors are discussed. The generalisability of the model is performed, showing that the type of faults as well as the dominant degradation has an important effect on the overall accuracy of the model. The explainability and interpretability aspects are studied, following the Local Interpretable Model-agnostic Explanations (LIME) method. The outcomes are showing that for simple data sets, the model can better understand physics, and LIME can give a good explanation. However, as the complexity of the data increases, both the accuracy of the model drops but also LIME seems to have difficulties in giving satisfactory explanations.
4

Stang, Marco, Marc Schindewolf, and Eric Sax. "Unraveling Scenario-Based Behavior of a Self-Learning Function with User Interaction." In 10th International Conference on Human Interaction and Emerging Technologies (IHIET 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1004028.

Abstract:
In recent years, the field of Artificial Intelligence (AI) and Machine Learning (ML) has witnessed remarkable advancements, revolutionizing various industries and domains. The proliferation of data availability, computational power, and algorithmic innovations has propelled the development of highly sophisticated AI models, particularly in the realm of Deep Learning (DL). These DL models have demonstrated unprecedented levels of accuracy and performance across a wide range of tasks, including image recognition, natural language processing, and complex decision-making. However, amidst these impressive achievements, a critical challenge has emerged - the lack of interpretability.Highly accurate AI models, including DL models, are often referred to as black boxes because their internal workings and decision-making processes are not readily understandable to humans. While these models excel in generating accurate predictions or classifications, they do not provide clear explanations for their reasoning, leaving users and stakeholders in the dark about how and why specific decisions are made. This lack of interpretability raises concerns and limits the trust that humans can place in these models, particularly in safety-critical or high-stakes applications where accountability, transparency, and understanding are paramount.To address the challenge of interpretability, Explainable AI (xAI) has emerged as a multidisciplinary field that aims to bridge the gap in understanding between machines and humans. xAI encompasses a collection of methods and techniques designed to shed light on the decision-making processes of AI models, making their outputs more transparent, interpretable, and comprehensible to human users.The main objective of this paper is to enhance the explainability of AI-based systems that involve user interaction by employing various xAI methods. The proposed approach revolves around a comprehensive ML workflow, beginning with the utilization of real-world data to train a machine learning model that learns the behavior of a simulated driver. The training process encompasses a diverse range of real-world driving scenarios, ensuring that the model captures the intricacies and nuances of different driving situations. This training data serves as the foundation for the subsequent phases of the workflow, where the model's predictive performance is evaluated.Following the training and testing phases, the predictions generated by the ML model are subjected to explanation using different xAI methods, such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). These xAI methods operate at both the global and local levels, providing distinct perspectives on the model's decision-making process. Global explanations offer insights into the overall behavior of the ML model, enabling a broader understanding of the patterns, relationships, and features that the model deems significant across different instances. These global explanations contribute to a deeper comprehension of the decision-making process employed by the model, allowing users to gain insights into the underlying factors driving its predictions.In contrast, local explanations offer detailed insights into specific instances or predictions made by the model. By analyzing these local explanations, users can better understand why the model made a particular prediction in a given case. 
This granular analysis facilitates the identification of potential weaknesses, biases, or areas for improvement in the model's performance. By pinpointing the specific features or factors that contribute to the model's decision in individual instances, local explanations offer valuable insights for refining the model and enhancing its accuracy and reliability.In conclusion, the lack of explainability in AI models, particularly in the realm of DL, presents a significant challenge that hinders trust and understanding between machines and humans. Explainable AI (xAI) has emerged as a vital field of research and practice, aiming to address this challenge by providing methods and techniques to enhance the interpretability and transparency of AI models. This paper focuses on enhancing the explainability of AI-based systems involving user interaction by employing various xAI methods. The proposed ML workflow, coupled with global and local explanations, offers valuable insights into the decision-making processes of the model. By unraveling the scenario-based behavior of a self-learning function with user interaction, this paper aims to contribute to the understanding and interpretability of AI-based systems. The insights gained from this research can pave the way for enhanced user trust, improved model performance, and further advancements in the field of explainable AI.
5

Haid, Charlotte, Alicia Lang, and Johannes Fottner. "Explaining algorithmic decisions: design guidelines for explanations in User Interfaces." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003764.

Abstract:
Artificial Intelligence (AI)-based decision support is becoming a growing issue in manufacturing and logistics. Users of AI-based systems have the claim to understand the decisions made by the systems. In addition, users like workers or managers, but also works councils in companies, demand transparency in the use of AI. Given this background, AI research faces the challenge of making the decisions of algorithmic systems explainable. Algorithms, especially in the field of AI, but also classical algorithms do not provide an explanation for their decision. To generate such explanations, new algorithms have been designed to explain the decisions of the other algorithms post hoc. This subfield is called explainable artificial intelligence (XAI). Methods like local interpretable model-agnostic explanations (LIME), shapley additive explanations (SHAP) or layer-wise relevance propagation (LRP) can be applied. LIME is an algorithm that can explain the predictions of any classifier by learning an interpretable model around the prediction locally. In the case of image recognition, for example, a LIME algorithm can highlight the image areas based on which the algorithm arrived at its decision. They even show that the algorithm can also come to a result based on the image caption. SHAP, a game theoretic approach that can be applied to the output of any machine learning model, connects optimal credit allocation with local explanations. It uses Shapley values as in game theory for the allocation. In the research of XAI, explanatory user interfaces and user interactions have hardly been studied. One of the most crucial factors to make a model understandable through explanations is the involvement of users in XAI. Human-computer interaction skills are needed in addition to technical expertise. According to Miller and Molnar, good explanations should be designed contrastively to explain why event A happened instead of another event B, rather than just emphasizing why event A occurred. In addition, it is important that explanations are limited to only one or two causes and are thus formulated selectively. In literature, four guidelines to be respected for explanations are formulated: use a natural language, use various methods to explain, adapt to mental models of users and be responsive, so a user can ask follow-up questions. The explanations are often very mathematical and a deep knowledge of details is needed to understand the explanations. In this paper, we present design guidelines to help make explanations of algorithms understandable and user-friendly. We use the example of AI-based algorithmic scheduling in logistics and show the importance of a comprehensive user interface in explaining decisions. In our use case, AI-based shift scheduling in logistics, where workers are assigned to workplaces based on their preferences, we designed a user interface to support transparency as well as explainability of the underlying algorithm and then evaluated it with various users and two different user interfaces. We show excerpts from the user interface and our explanations for the users and give recommendations for the creation of explanations in user interfaces.
