Journal articles on the topic 'Algorithm explainability'

Consult the top 50 journal articles for your research on the topic 'Algorithm explainability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Nuobu, Gengpan. "Transformer model: Explainability and prospectiveness." Applied and Computational Engineering 20, no. 1 (October 23, 2023): 88–99. http://dx.doi.org/10.54254/2755-2721/20/20231079.

Abstract:
The purpose of Artificial Intelligence (AI) is to simulate the learning process of the human brain through strong computing power and appropriate algorithms, so that a machine can develop human-like judgement at work. Current AI mainly relies on deep learning models based on artificial neural networks, such as the Convolutional Neural Network (CNN) in computer vision, but these also come with defects. This paper introduces the defects of CNNs, discusses how the Transformer model addresses the unexplainability of traditional CNN algorithms, and examines why the Transformer model and its attention mechanism are considered a path toward AI intelligibility.
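As an illustration of the attention mechanism the abstract credits with improved intelligibility, here is a minimal scaled dot-product attention sketch in NumPy; the toy embeddings are invented for the example and are not from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V.

    The attention-weight matrix is what makes Transformers comparatively
    inspectable: each row shows how strongly every input position
    contributed to one output position.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Toy example: 4 tokens with 8-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, attn = scaled_dot_product_attention(x, x, x)
print(attn.round(2))  # each row sums to 1 and can be read as an explanation
```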
2

Hwang, Hyunseung, and Steven Euijong Whang. "XClusters: Explainability-First Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7962–70. http://dx.doi.org/10.1609/aaai.v37i7.25963.

Abstract:
We study the problem of explainability-first clustering where explainability becomes a first-class citizen for clustering. Previous clustering approaches use decision trees for explanation, but only after the clustering is completed. In contrast, our approach is to perform clustering and decision tree training holistically where the decision tree's performance and size also influence the clustering results. We assume the attributes for clustering and explaining are distinct, although this is not necessary. We observe that our problem is a monotonic optimization where the objective function is a difference of monotonic functions. We then propose an efficient branch-and-bound algorithm for finding the best parameters that lead to a balance of clustering accuracy and decision tree explainability. Our experiments show that our method can improve the explainability of any clustering that fits in our framework.
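For contrast with the holistic approach above, here is a minimal sketch of the post-hoc baseline the authors argue against (cluster first, fit an explaining decision tree afterwards), using scikit-learn and the public Iris data; the split into clustering versus explaining attributes is arbitrary and only for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score
from sklearn.tree import DecisionTreeClassifier

X, _ = load_iris(return_X_y=True)
clustering_feats, explaining_feats = X[:, :2], X[:, 2:]  # distinct attribute sets, as in the abstract

for k in (2, 3, 4, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(clustering_feats)
    # A small tree explains the clusters; its fidelity and size measure explainability.
    tree = DecisionTreeClassifier(max_leaf_nodes=k, random_state=0).fit(explaining_feats, labels)
    print(f"k={k}  cluster quality (silhouette)={silhouette_score(clustering_feats, labels):.3f}  "
          f"tree fidelity={tree.score(explaining_feats, labels):.3f}  leaves={tree.get_n_leaves()}")
```

XClusters differs in that it searches for the clustering and the decision tree jointly (via branch and bound) rather than fitting the tree after the fact.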
3

Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.

Abstract:
Machine learning is increasingly and ubiquitously being used in the medical domain. Evaluation metrics like accuracy, precision, and recall may indicate the performance of the models but not necessarily the reliability of their outcomes. This paper assesses the effectiveness of a number of machine learning algorithms applied to an important dataset in the medical domain, specifically, mental health, by employing explainability methodologies. Using multiple machine learning algorithms and model explainability techniques, this work provides insights into the models’ workings to help determine the reliability of the machine learning algorithm predictions. The results are not intuitive. It was found that the models were focusing significantly on less relevant features and, at times, relying on an unsound ranking of the features to make the predictions. This paper therefore argues that it is important for research in applied machine learning to provide insights into the explainability of models in addition to other performance metrics like accuracy. This is particularly important for applications in critical domains such as healthcare.
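A minimal sketch of the kind of sanity check the abstract advocates, using permutation importance on synthetic data in which only the first three features carry signal; the dataset and model are placeholders, not the mental-health data or models used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: only the first 3 of 10 features carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + 0.3 * rng.normal(size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# If high importance lands on the noise features, accuracy alone is misleading.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```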
4

Loreti, Daniela, and Giorgio Visani. "Parallel approaches for a decision tree-based explainability algorithm." Future Generation Computer Systems 158 (September 2024): 308–22. http://dx.doi.org/10.1016/j.future.2024.04.044.

5

Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.

Abstract:
A plethora of fair graph neural networks (GNNs) have been proposed to promote algorithmic fairness in high-stakes real-life contexts. Meanwhile, explainability is generally proposed to help machine learning practitioners debug models by providing human-understandable explanations. However, little work on explainability has been done to generate explanations for fairness diagnosis in GNNs. From the explainability perspective, this paper explores two questions: what subgraph patterns cause the biased behavior of GNNs, and what actions can practitioners take to rectify the bias? By answering these two questions, this paper aims to produce compact, diagnostic, and actionable explanations that are responsible for discriminatory behavior. Specifically, we formulate the problem of generating diagnostic and actionable explanations as a multi-objective combinatorial optimization problem. To solve the problem, a dedicated multi-objective evolutionary algorithm is presented to ensure GNNs' explainability and fairness in one go. In particular, an influenced nodes-based gradient approximation is developed to boost the computational efficiency of the evolutionary algorithm. We provide a theoretical analysis to illustrate the effectiveness of the proposed framework. Extensive experiments have been conducted to demonstrate the superiority of the proposed method in terms of classification performance, fairness, and interpretability.
6

Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning." Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.

Abstract:
Artificial intelligence holds great promise in medical imaging, especially histopathological imaging. However, artificial intelligence algorithms cannot fully explain the thought processes during decision-making. This situation has brought the problem of explainability, i.e., the black box problem, of artificial intelligence applications to the agenda: an algorithm simply responds without stating the reasons for the given images. To overcome this problem and improve explainability, explainable artificial intelligence (XAI) has come to the fore and piqued the interest of many researchers. Against this backdrop, this study examines a new and original dataset using a deep learning algorithm and visualizes the output with gradient-weighted class activation mapping (Grad-CAM), one of the XAI applications. Afterwards, a detailed questionnaire survey was conducted with pathologists on these images. Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested. The research results greatly help pathologists in the diagnosis of paratuberculosis.
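A minimal Grad-CAM sketch using PyTorch forward/backward hooks on a torchvision ResNet; the backbone, target layer, and random input image are placeholder assumptions, not the network or histopathology data used in the study.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # placeholder backbone; pretrained weights optional
target_layer = model.layer4[-1]                # last convolutional block
activations, gradients = {}, {}

def _save_activation(module, inputs, output):
    activations["a"] = output

def _save_gradient(module, grad_input, grad_output):
    gradients["g"] = grad_output[0]

target_layer.register_forward_hook(_save_activation)
target_layer.register_full_backward_hook(_save_gradient)

def grad_cam(image, class_idx=None):
    """Return a [0, 1] heat map of the regions driving the chosen class score."""
    logits = model(image)
    class_idx = int(logits.argmax(dim=1)) if class_idx is None else class_idx
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)          # global-average-pooled gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # dummy image standing in for a histopathology slide
print(heatmap.shape)                             # torch.Size([224, 224])
```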
7

Powell, Alison B. "Explanations as governance? Investigating practices of explanation in algorithmic system design." European Journal of Communication 36, no. 4 (August 2021): 362–75. http://dx.doi.org/10.1177/02673231211028376.

Abstract:
The algorithms underpinning many everyday communication processes are now complex enough that rendering them explainable has become a key governance objective. This article examines the question of 'who should be required to explain what, to whom, in platform environments'. By working with algorithm designers and using design methods to extrapolate existing capacities to explain algorithmic functioning, the article discusses the power relationships underpinning explanation of algorithmic function. Reviewing how key concepts of transparency and accountability connect with explainability, the paper argues that reliance on explainability as a governance mechanism can generate a dangerous paradox which legitimates increased reliance on programmable infrastructure as expert stakeholders are reassured by their ability to perform or receive explanations, while displacing responsibility for understandings of social context and definitions of public interest.
8

Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization." Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.

Abstract:
A recommendation system is a technology that can mine users' preferences for items. Explainable recommendation produces recommendations for target users and, at the same time, gives the reasons behind them. The explainability of recommendations can improve their transparency and the probability that users choose the recommended items. The merits of explainable recommendation are obvious, but it is not enough to focus solely on explainability in this field. Therefore, it is essential to construct an explainable recommendation framework that improves the explainability of recommended items while maintaining accuracy and diversity. An explainable recommendation framework based on a knowledge graph and multi-objective optimization is proposed that can optimize the precision, diversity, and explainability of recommendations at the same time. The knowledge graph connects users and items through different relationships to obtain an explainable candidate list for the target user, and the path between the target user and a recommended item is used as the basis of the explanation. The explainable candidate list is then optimized with a multi-objective optimization algorithm to obtain the final recommendation list. The experimental results show that the presented explainable recommendation framework provides high-quality recommendations with high accuracy, diversity, and explainability.
9

Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (April 9, 2024): 1797. http://dx.doi.org/10.3390/en17081797.

Abstract:
The prediction of building energy consumption is beneficial to utility companies, users, and facility managers to reduce energy waste. However, due to various drawbacks of prediction algorithms, such as non-transparent output, ad hoc explanation by post hoc tools, low accuracy, and the inability to deal with data uncertainties, such prediction has limited applicability in this domain. As a result, domain knowledge-based explainability with high accuracy is critical for making energy predictions trustworthy. Motivated by this, we propose an advanced explainable Belief Rule-Based Expert System (eBRBES) with domain knowledge-based explanations for the accurate prediction of energy consumption. We optimize BRBES’s parameters and structure to improve prediction accuracy while dealing with data uncertainties using its inference engine. To predict energy consumption, we take into account floor area, daylight, indoor occupancy, and building heating method. We also describe how a counterfactual output on energy consumption could have been achieved. Furthermore, we propose a novel Belief Rule-Based adaptive Balance Determination (BRBaBD) algorithm for determining the optimal balance between explainability and accuracy. To validate the proposed eBRBES framework, a case study based on Skellefteå, Sweden, is used. BRBaBD results show that our proposed eBRBES framework outperforms state-of-the-art machine learning algorithms in terms of optimal balance between explainability and accuracy by 85.08%.
10

Bulitko, Vadim, Shuwei Wang, Justin Stevens, and Levi H. S. Lelis. "Portability and Explainability of Synthesized Formula-based Heuristics." Proceedings of the International Symposium on Combinatorial Search 15, no. 1 (July 17, 2022): 29–37. http://dx.doi.org/10.1609/socs.v15i1.21749.

Abstract:
Heuristic search is a key component of automated planning and pathfinding. It is guided by a heuristic function which estimates remaining solution cost. Traditionally heuristic functions for pathfinding have been human-designed or pre-computed for a specific search graph. The former tend to be compact, human-readable but generic. The latter offer better guidance but require per-graph pre-computation and have a substantial memory cost. We aim to retain compactness and readability of human-designed heuristics and increase their performance. We adopt the recently published approach of representing heuristic functions as algebraic formulae and automatically synthesizing them for video-game maps. Whereas published work merely randomly sampled the space of formula-based heuristic functions, we implement and evaluate a parameterized synthesis algorithm that unifies and generalizes the stochastic sampling, simulated annealing and a basic genetic algorithm. We tune the parameters for better synthesis performance and then, using maps from multiple video games, show that heuristics synthesized for maps from one game still outperform the baseline search (A* with weighted Manhattan distance) on maps from a different game. We analyze a frequently synthesized formula and explain how, despite having a higher error than the Manhattan distance, it takes advantage of the structure in video-game pathfinding problems and speeds up A*.
11

Gräßer, Felix, Hagen Malberg, and Sebastian Zaunseder. "Neighborhood Optimization for Therapy Decision Support." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 1–4. http://dx.doi.org/10.1515/cdbme-2019-0001.

Abstract:
This work targets the development of a neighborhood-based Collaborative Filtering therapy recommender system for clinical decision support. The proposed algorithm estimates outcome of pharmaceutical therapy options in order to derive recommendations. Two approaches, namely a Relief-based algorithm and a metric learning approach are investigated. Both adapt similarity functions to the underlying data in order to determine the neighborhood incorporated into the filtering process. The implemented approaches are evaluated regarding the accuracy of the outcome estimations. The metric learning approach can outperform the Relief-based algorithms. It is, however, inferior regarding explainability of the generated recommendations.
12

Kottinger, Justin, Shaull Almagor, and Morteza Lahijanian. "Conflict-Based Search for Explainable Multi-Agent Path Finding." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 692–700. http://dx.doi.org/10.1609/icaps.v32i1.19859.

Abstract:
The goal of the Multi-Agent Path Finding (MAPF) problem is to find non-colliding paths for agents in an environment, such that each agent reaches its goal from its initial location. In safety-critical applications, a human supervisor may want to verify that the plan is indeed collision-free. To this end, a recent work introduces a notion of explainability for MAPF based on a visualization of the plan as a short sequence of images representing time segments, where in each time segment the trajectories of the agents are disjoint. Then, the problem of Explainable MAPF via Segmentation asks for a set of non-colliding paths that admit a short-enough explanation. Explainable MAPF adds a new difficulty to MAPF, in that it is NP-hard with respect to the size of the environment, and not just the number of agents. Thus, traditional MAPF algorithms are not equipped to directly handle Explainable MAPF. In this work, we adapt Conflict Based Search (CBS), a well-studied algorithm for MAPF, to handle Explainable MAPF. We show how to add explainability constraints on top of the standard CBS tree and its underlying A* search. We examine the usefulness of this approach and, in particular, the trade-off between planning time and explainability.
13

Monsarrat, Paul, David Bernard, Mathieu Marty, Chiara Cecchin-Albertoni, Emmanuel Doumard, Laure Gez, Julien Aligon, Jean-Noël Vergnes, Louis Casteilla, and Philippe Kemoun. "Systemic Periodontal Risk Score Using an Innovative Machine Learning Strategy: An Observational Study." Journal of Personalized Medicine 12, no. 2 (February 4, 2022): 217. http://dx.doi.org/10.3390/jpm12020217.

Abstract:
Early diagnosis is crucial for individuals who are susceptible to tooth-supporting tissue diseases (e.g., periodontitis) that may lead to tooth loss, so as to prevent systemic implications and maintain quality of life. The aim of this study was to propose a personalized explainable machine learning algorithm, solely based on non-invasive predictors that can easily be collected in a clinic, to identify subjects at risk of developing periodontal diseases. To this end, the individual data and periodontal health of 532 subjects was assessed. A machine learning pipeline combining a feature selection step, multilayer perceptron, and SHapley Additive exPlanations (SHAP) explainability, was used to build the algorithm. The prediction scores for healthy periodontium and periodontitis gave final F1-scores of 0.74 and 0.68, respectively, while gingival inflammation was harder to predict (F1-score of 0.32). Age, body mass index, smoking habits, systemic pathologies, diet, alcohol, educational level, and hormonal status were found to be the most contributive variables for periodontal health prediction. The algorithm clearly shows different risk profiles before and after 35 years of age and suggests transition ages in the predisposition to developing gingival inflammation or periodontitis. This innovative approach to systemic periodontal disease risk profiles, combining both ML and up-to-date explainability algorithms, paves the way for new periodontal health prediction strategies.
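A minimal sketch of an MLP-plus-SHAP pipeline of the sort described, on synthetic tabular data; the features, model size, and background sample are illustrative assumptions, not the study's 532-subject dataset or tuned pipeline.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for non-invasive clinical predictors (age, BMI, smoking habits, ...).
X, y = make_classification(n_samples=400, n_features=8, n_informative=4, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)).fit(X, y)

# Model-agnostic SHAP values: how much each feature pushes the predicted risk up or down.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(lambda data: model.predict_proba(data)[:, 1], background)
shap_values = explainer.shap_values(X[:5])
print(np.round(shap_values, 3))  # one row of per-feature contributions per explained subject
```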
14

Lv, Ge, and Lei Chen. "On Data-Aware Global Explainability of Graph Neural Networks." Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 3447–60. http://dx.doi.org/10.14778/3611479.3611538.

Abstract:
Graph Neural Networks (GNNs) have significantly boosted the performance of many graph-based applications, yet they serve as black-box models. To understand how GNNs make decisions, explainability techniques have been extensively studied. While the majority of existing methods focus on local explainability, we propose DAG-Explainer in this work aiming for global explainability. Specifically, we observe three properties of superior explanations for a pretrained GNN: they should be highly recognized by the model, compliant with the data distribution and discriminative among all the classes. The first property entails an explanation to be faithful to the model, as the other two require the explanation to be convincing regarding the data distribution. Guided by these properties, we design metrics to quantify the quality of each single explanation and formulate the problem of finding data-aware global explanations for a pretrained GNN as an optimization problem. We prove that the problem is NP-hard and adopt a randomized greedy algorithm to find a near optimal solution. Furthermore, we derive an improved bound of the approximation algorithm in our problem over the state-of-the-art (SOTA) best. Experimental results show that DAG-Explainer can efficiently produce meaningful and trustworthy explanations while preserving comparable quantitative evaluation results to the SOTA methods.
15

Li, Tong, Jiale Deng, Yanyan Shen, Luyu Qiu, Huang Yongxiang, and Caleb Chen Cao. "Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8640–47. http://dx.doi.org/10.1609/aaai.v37i7.26040.

Abstract:
Heterogeneous graph neural networks (HGNs) are prominent approaches to node classification tasks on heterogeneous graphs. Despite the superior performance, insights about the predictions made from HGNs are obscure to humans. Existing explainability techniques are mainly proposed for GNNs on homogeneous graphs. They focus on highlighting salient graph objects to the predictions whereas the problem of how these objects affect the predictions remains unsolved. Given heterogeneous graphs with complex structures and rich semantics, it is imperative that salient objects can be accompanied with their influence paths to the predictions, unveiling the reasoning process of HGNs. In this paper, we develop xPath, a new framework that provides fine-grained explanations for black-box HGNs specifying a cause node with its influence path to the target node. In xPath, we differentiate the influence of a node on the prediction w.r.t. every individual influence path, and measure the influence by perturbing graph structure via a novel graph rewiring algorithm. Furthermore, we introduce a greedy search algorithm to find the most influential fine-grained explanations efficiently. Empirical results on various HGNs and heterogeneous graphs show that xPath yields faithful explanations efficiently, outperforming the adaptations of advanced GNN explanation approaches.
16

Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research." Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.

Abstract:
Sandstone-hosted uranium deposits are indeed significant sources of uranium resources globally. They are typically found in sedimentary basins and have been extensively explored and exploited in various countries. They play a significant role in meeting global uranium demand and are considered important resources for nuclear energy production. Erlian Basin, as one of the sedimentary basins in northern China, is known for its uranium mineralization hosted within sandstone formations. In this research, machine learning (ML) methodology was applied to mineral prospectivity mapping (MPM) of the metallogenic zone in the Manite depression of the Erlian Basin. An ML model of 92% accuracy was implemented with the random forest algorithm. Additionally, the confusion matrix and receiver operating characteristic curve were used as model evaluation indicators. Furthermore, the model explainability research with post hoc interpretability algorithms bridged the gap between complex opaque (black-box) models and geological cognition, enabling the effective and responsible use of AI technologies. The MPM results shown in QGIS provided vivid geological insights for ML-based metallogenic prediction. With the favorable prospective targets delineated, geologists can make decisions for further uranium exploration.
17

Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.

Abstract:
Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS classification. XCM is a new compact convolutional neural network which extracts information relative to the observed variables and time directly from the input data. Thus, XCM architecture enables a good generalization ability on both large and small datasets, while allowing the full exploitation of a faithful post hoc model-specific explainability method (Gradient-weighted Class Activation Mapping) by precisely identifying the observed variables and timestamps of the input data that are important for predictions. We first show that XCM outperforms the state-of-the-art MTS classifiers on both the large and small public UEA datasets. Then, we illustrate how XCM reconciles performance and explainability on a synthetic dataset and show that XCM enables a more precise identification of the regions of the input data that are important for predictions compared to the current deep learning MTS classifier also providing faithful explainability. Finally, we present how XCM can outperform the current most accurate state-of-the-art algorithm on a real-world application while enhancing explainability by providing faithful and more informative explanations.
18

Huang, Xuanxiang, Yacine Izza, and Joao Marques-Silva. "Solving Explainability Queries with Quantification: The Case of Feature Relevancy." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 3996–4006. http://dx.doi.org/10.1609/aaai.v37i4.25514.

Abstract:
Trustable explanations of machine learning (ML) models are vital in high-risk uses of artificial intelligence (AI). Apart from the computation of trustable explanations, a number of explainability queries have been identified and studied in recent work. Some of these queries involve solving quantification problems, either in propositional or in more expressive logics. This paper investigates one of these quantification problems, namely the feature relevancy problem (FRP), i.e., to decide whether a (possibly sensitive) feature can occur in some explanation of a prediction. In contrast with earlier work, which studied FRP for specific classifiers, this paper proposes a novel algorithm for the FRP quantification problem which is applicable to any ML classifier that meets minor requirements. Furthermore, the paper shows that the novel algorithm is efficient in practice. The experimental results, obtained using random forests (RFs) induced from well-known publicly available datasets, demonstrate that the proposed solution outperforms existing state-of-the-art solvers for Quantified Boolean Formulas (QBF) by orders of magnitude. Finally, the paper also identifies a novel family of formulas that are challenging for current state-of-the-art QBF solvers.
19

Patel, Sagar, Sangeetha Abdu Jyothi, and Nina Narodytska. "CrystalBox: Future-Based Explanations for Input-Driven Deep RL Systems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14563–71. http://dx.doi.org/10.1609/aaai.v38i13.29372.

Abstract:
We present CrystalBox, a novel, model-agnostic, post hoc explainability framework for Deep Reinforcement Learning (DRL) controllers in the large family of input-driven environments which includes computer systems. We combine the natural decomposability of reward functions in input-driven environments with the explanatory power of decomposed returns. We propose an efficient algorithm to generate future-based explanations across both discrete and continuous control environments. Using applications such as adaptive bitrate streaming and congestion control, we demonstrate CrystalBox's capability to generate high-fidelity explanations. We further illustrate its higher utility across three practical use cases: contrastive explanations, network observability, and guided reward design, as opposed to prior explainability techniques that identify salient features.
20

Arous, Ines, Ljiljana Dolamic, Jie Yang, Akansha Bhardwaj, Giuseppe Cuccu, and Philippe Cudré-Mauroux. "MARTA: Leveraging Human Rationales for Explainable Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5868–76. http://dx.doi.org/10.1609/aaai.v35i7.16734.

Abstract:
Explainability is a key requirement for text classification in many application domains ranging from sentiment analysis to medical diagnosis or legal reviews. Existing methods often rely on "attention" mechanisms for explaining classification results by estimating the relative importance of input units. However, recent studies have shown that such mechanisms tend to mis-identify irrelevant input units in their explanation. In this work, we propose a hybrid human-AI approach that incorporates human rationales into attention-based text classification models to improve the explainability of classification results. Specifically, we ask workers to provide rationales for their annotation by selecting relevant pieces of text. We introduce MARTA, a Bayesian framework that jointly learns an attention-based model and the reliability of workers while injecting human rationales into model training. We derive a principled optimization algorithm based on variational inference with efficient updating rules for learning MARTA parameters. Extensive validation on real-world datasets shows that our framework significantly improves the state of the art both in terms of classification explainability and accuracy.
21

Tsiami, Lydia, and Christos Makropoulos. "Cyber—Physical Attack Detection in Water Distribution Systems with Temporal Graph Convolutional Neural Networks." Water 13, no. 9 (April 29, 2021): 1247. http://dx.doi.org/10.3390/w13091247.

Abstract:
Prompt detection of cyber–physical attacks (CPAs) on a water distribution system (WDS) is critical to avoid irreversible damage to the network infrastructure and disruption of water services. However, the complex interdependencies of the water network’s components make CPA detection challenging. To better capture the spatiotemporal dimensions of these interdependencies, we represented the WDS as a mathematical graph and approached the problem by utilizing graph neural networks. We presented an online, one-stage, prediction-based algorithm that implements the temporal graph convolutional network and makes use of the Mahalanobis distance. The algorithm exhibited strong detection performance and was capable of localizing the targeted network components for several benchmark attacks. We suggested that an important property of the proposed algorithm was its explainability, which allowed the extraction of useful information about how the model works and as such it is a step towards the creation of trustworthy AI algorithms for water applications. Additional insights into metrics commonly used to rank algorithm performance were also presented and discussed.
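A minimal sketch of the Mahalanobis-distance step on prediction residuals, with synthetic sensor residuals standing in for the water-network benchmark; the 99th-percentile threshold is an illustrative choice, not the paper's calibration.

```python
import numpy as np

def fit_residual_detector(residuals):
    """Fit mean and inverse covariance of residuals observed under normal operation."""
    mu = residuals.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(residuals, rowvar=False) + 1e-6 * np.eye(residuals.shape[1]))
    return mu, cov_inv

def mahalanobis(residual, mu, cov_inv):
    d = residual - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Normal-operation residuals for 5 hypothetical sensors, then one suspicious observation.
rng = np.random.default_rng(0)
normal = rng.normal(scale=0.1, size=(1000, 5))
mu, cov_inv = fit_residual_detector(normal)
threshold = np.quantile([mahalanobis(r, mu, cov_inv) for r in normal], 0.99)

attack_like = np.array([0.5, -0.4, 0.3, 0.0, 0.1])
print(mahalanobis(attack_like, mu, cov_inv) > threshold)  # flag as a possible cyber-physical attack
```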
22

Botana, Iñigo López-Riobóo, Carlos Eiras-Franco, and Amparo Alonso-Betanzos. "Regression Tree Based Explanation for Anomaly Detection Algorithm." Proceedings 54, no. 1 (August 18, 2020): 7. http://dx.doi.org/10.3390/proceedings2020054007.

Abstract:
This work presents EADMNC (Explainable Anomaly Detection on Mixed Numerical and Categorical spaces), a novel approach to address explanation using an anomaly detection algorithm, ADMNC, which provides accurate detections on mixed numerical and categorical input spaces. Our improved algorithm leverages the formulation of the ADMNC model to offer pre-hoc explainability based on CART (Classification and Regression Trees). The explanation is presented as a segmentation of the input data into homogeneous groups that can be described with a few variables, offering supervisors novel information for justifications. To prove scalability and interpretability, we report experimental results on real-world large datasets focusing on the network intrusion detection domain.
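A minimal sketch of the tree-based explanation idea on synthetic data, using scikit-learn's IsolationForest as a stand-in anomaly detector (ADMNC itself is not assumed to be available here) and a small regression tree whose splits describe homogeneous groups with a few variables.

```python
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeRegressor, export_text

# Stand-in anomaly detector instead of ADMNC.
X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
detector = IsolationForest(random_state=0).fit(X)
scores = detector.score_samples(X)  # higher = more normal

# A shallow CART approximates the detector's scores with a few human-readable splits.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, scores)
print(export_text(surrogate, feature_names=["feature_0", "feature_1"]))
print("surrogate fidelity (R^2):", round(surrogate.score(X, scores), 3))
```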
23

Gao, Jingyue, Xiting Wang, Yasha Wang, and Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.

Abstract:
Recommender systems have been playing an increasingly important role in our daily life due to the explosive growth of information. Accuracy and explainability are two core aspects when we evaluate a recommendation model and have become one of the fundamental trade-offs in machine learning. In this paper, we propose to alleviate the trade-off between accuracy and explainability by developing an explainable deep model that combines the advantages of deep learning-based models and existing explainable methods. The basic idea is to build an initial network based on an explainable deep hierarchy (e.g., Microsoft Concept Graph) and improve the model accuracy by optimizing key variables in the hierarchy (e.g., node importance and relevance). To ensure accurate rating prediction, we propose an attentive multi-view learning framework. The framework enables us to handle sparse and noisy data by co-regularizing among different feature levels and combining predictions attentively. To mine readable explanations from the hierarchy, we formulate personalized explanation generation as a constrained tree node selection problem and propose a dynamic programming algorithm to solve it. Experimental results show that our model outperforms state-of-the-art methods in terms of both accuracy and explainability.
24

Lv, Ting, Zhenkuan Pan, Weibo Wei, Guangyu Yang, Jintao Song, Xuqing Wang, Lu Sun, Qian Li, and Xiatao Sun. "Iterative deep neural networks based on proximal gradient descent for image restoration." PLOS ONE 17, no. 11 (November 4, 2022): e0276373. http://dx.doi.org/10.1371/journal.pone.0276373.

Abstract:
Algorithm unfolding networks, which combine the explainability of classical algorithms with the efficiency of Deep Neural Networks (DNNs), have received considerable attention in solving ill-posed inverse problems. Under the algorithm unfolding network framework, we propose a novel end-to-end iterative deep neural network and its fast variant for image restoration. The first is designed using the proximal gradient descent algorithm of variational models and consists of denoiser and reconstruction sub-networks. The second is its accelerated version with momentum factors. For the denoiser sub-network, we embed the Convolutional Block Attention Module (CBAM) in the previous U-Net for adaptive feature refinement. Experiments on image denoising and deblurring demonstrate that competitive performance in quality and efficiency is achieved compared with several state-of-the-art networks for image restoration. The proposed unfolding DNN can be easily extended to solve other similar image restoration tasks, such as image super-resolution, image demosaicking, etc.
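A minimal NumPy sketch of the proximal gradient descent (ISTA) iteration that such unfolding networks are built around, on a synthetic sparse-recovery problem; unfolded networks replace the fixed gradient and soft-threshold steps below with small learned sub-networks, one per iteration.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, step=None, iters=200):
    """Proximal gradient descent for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

# Recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 120))
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.05 * rng.normal(size=80)
print(np.flatnonzero(np.abs(ista(A, y)) > 0.5))  # indices of the recovered non-zeros
```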
25

Chatterjee, Soumick, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck, and Andreas Nürnberger. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (February 10, 2022): 1834. http://dx.doi.org/10.3390/app12041834.

Abstract:
Clinicians are often very sceptical about applying automatic image processing approaches, especially deep learning-based methods, in practice. One main reason for this is the black-box nature of these approaches and the inherent problem of missing insights of the automatically derived decisions. In order to increase trust in these methods, this paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that influence the decision of the algorithm most. Moreover, this research presents a unified framework, TorchEsegeta, for applying various interpretability and explainability techniques for deep learning models and generates visual interpretations and explanations for clinicians to corroborate their clinical findings. In addition, this will aid in gaining confidence in such methods. The framework builds on existing interpretability and explainability techniques that are currently focusing on classification models, extending them to segmentation tasks. In addition, these methods have been adapted to 3D models for volumetric analysis. The proposed framework provides methods to quantitatively compare visual explanations using infidelity and sensitivity metrics. This framework can be used by data scientists to perform post hoc interpretations and explanations of their models, develop more explainable tools, and present the findings to clinicians to increase their faith in such models. The proposed framework was evaluated based on a use case scenario of vessel segmentation models trained on Time-of-Flight (TOF) Magnetic Resonance Angiogram (MRA) images of the human brain. Quantitative and qualitative results of a comparative study of different models and interpretability methods are presented. Furthermore, this paper provides an extensive overview of several existing interpretability and explainability methods.
26

Banditwattanawong, Thepparit, and Masawee Masdisornchote. "On Characterization of Norm-Referenced Achievement Grading Schemes toward Explainability and Selectability." Applied Computational Intelligence and Soft Computing 2021 (February 18, 2021): 1–14. http://dx.doi.org/10.1155/2021/8899649.

Abstract:
Grading is the process of interpreting learning competence to inform learners and instructors of current learning ability levels and necessary improvement. For norm-referenced grading, instructors use a conventional statistical method, the z score. It is difficult for such a method to achieve explainable grade discrimination to resolve disputes between learners and instructors. To solve this difficulty, this paper proposes a simple and efficient algorithm for explainable norm-referenced grading. Moreover, the rise of artificial intelligence nowadays makes machine learning techniques attractive for norm-referenced grading in general. This paper also investigates two popular clustering methods, K-means and partitioning around medoids. The experiment relied on data sets of various score distributions and a metric, namely, the Davies–Bouldin index. The comparative evaluation reveals that our algorithm overall outperforms the other three methods and is appropriate for all kinds of data sets in almost all cases. Our findings, however, lead to a practically useful guideline for the selection of appropriate grading methods, including both clustering methods and the z score.
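A minimal sketch of clustering-based norm-referenced grading with K-means and the Davies-Bouldin index; the score distribution and the five-grade setup are invented for illustration, and partitioning around medoids would slot in the same way.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Hypothetical class scores; grade boundaries come from clusters rather than fixed z-score cut-offs.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(55, 6, 40), rng.normal(72, 5, 35), rng.normal(88, 4, 25)])
X = scores.reshape(-1, 1)

n_grades = 5
labels = KMeans(n_clusters=n_grades, n_init=10, random_state=0).fit_predict(X)
print("Davies-Bouldin index:", round(davies_bouldin_score(X, labels), 3))  # lower = better-separated grades

# Each cluster becomes a grade; its minimum score is an explainable cut-off.
cutoffs = sorted(scores[labels == k].min() for k in range(n_grades))
print("grade cut-offs:", [round(c, 1) for c in cutoffs])
```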
27

Rudzite, Liva. "Algorithmic Explainability and the Sufficient-Disclosure Requirement under the European Patent Convention." Juridica International 31 (October 25, 2022): 125–35. http://dx.doi.org/10.12697/ji.2022.31.09.

Abstract:
Artificial intelligence and its subfield machine learning differ from traditional programming. For this reason, coupled with its potential benefits to society in many arenas, artificial intelligence has been articulated as one of the key priorities in the European Union. Characteristics specific to artificial intelligence, such as models with increased accuracy and generalisation power, may accentuate issues of algorithmic explainability that can defy patentability. Accordingly, the article focuses on the legal requirements related to the 'sufficient disclosure' criterion under the legal framework for patents as one facet of deciding on the patentability of an invention, and it addresses potential solutions for overcoming issues of algorithmic explainability. The author argues that solutions introducing a system involving deposit of the algorithm, training data, or both might not be as effective a mechanism for tackling those issues as implementing a recognised certification system.
28

Lizzi, Francesca, Camilla Scapicchio, Francesco Laruina, Alessandra Retico, and Maria Evelina Fantacci. "Convolutional Neural Networks for Breast Density Classification: Performance and Explanation Insights." Applied Sciences 12, no. 1 (December 24, 2021): 148. http://dx.doi.org/10.3390/app12010148.

Abstract:
We propose and evaluate a procedure for the explainability of a breast density deep learning based classifier. A total of 1662 mammography exams labeled according to the BI-RADS categories of breast density were used. We built a residual Convolutional Neural Network, trained it and studied the responses of the model to input changes, such as different distributions of class labels in training and test sets and suitable image pre-processing. The aim was to identify the steps of the analysis with a relevant impact on the classifier performance and on the model explainability. We used the grad-CAM algorithm for CNN to produce saliency maps and computed the Spearman’s rank correlation between input images and saliency maps as a measure of explanation accuracy. We found that pre-processing is critical not only for accuracy, precision and recall of a model but also to have a reasonable explanation of the model itself. Our CNN reaches good performance compared to the state of the art, and it considers the dense pattern to make the classification. Saliency maps strongly correlate with the dense pattern. This work is a starting point towards the implementation of a standard framework to evaluate both CNN performances and the explainability of their predictions in medical image classification problems.
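A minimal sketch of the Spearman-correlation check between an input image and its saliency map, on placeholder arrays rather than the mammography data or grad-CAM maps from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder arrays standing in for a mammogram and its grad-CAM saliency map.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
saliency = 0.7 * image + 0.3 * rng.random((64, 64))  # a map that partly tracks the image

rho, pvalue = spearmanr(image.ravel(), saliency.ravel())
print(f"Spearman rank correlation: {rho:.3f} (p={pvalue:.1e})")
# A high rho indicates the explanation concentrates on the same (dense) regions as the image.
```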
29

Fang, Xue, Lin Li, and Zheng Wei. "Design of Recommendation Algorithm Based on Knowledge Graph." Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012025. http://dx.doi.org/10.1088/1742-6596/2425/1/012025.

Abstract:
The commodity data in a shopping system contain a wealth of feature information, and different commodities are related through these features. Traditional collaborative filtering algorithms represent commodity data in a structured way, but they have issues such as low accuracy in commodity similarity calculation, poor recommendation effect, and unfriendly presentation of recommendation results. The commodity recommendation algorithm based on the knowledge graph put forward in this paper first automatically extracts the entities and entity relationships in the commodity data as the vertices and edges of the graph and then stores the commodity data in the Neo4j graph database. Finally, the similarity between commodities is calculated based on the graph’s path similarity, and the list of TOP-K commodities with the highest similarity is chosen for recommendation. The experimental findings indicate that the recommendation algorithm based on the knowledge graph not only represents the similarity between commodities more accurately, but also displays the recommended path in a more user-friendly manner through the graph structure, and offers better recommendation explainability as well as higher recommendation trust.
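A toy sketch of path-based similarity and path-as-explanation on a tiny in-memory graph built with networkx; the products, attributes, and scoring rule are invented stand-ins for the Neo4j-backed graph described in the abstract.

```python
import networkx as nx

# Tiny hypothetical product knowledge graph (users, items, shared attributes).
G = nx.Graph()
G.add_edges_from([
    ("user_1", "phone_A"), ("phone_A", "brand_X"), ("brand_X", "phone_B"),
    ("phone_A", "5G"), ("5G", "phone_C"), ("user_1", "case_A"), ("case_A", "phone_B"),
])

def path_similarity(item_a, item_b):
    """Shorter connecting paths through shared attributes -> more similar items."""
    try:
        return 1.0 / nx.shortest_path_length(G, item_a, item_b)
    except nx.NetworkXNoPath:
        return 0.0

candidates = ["phone_B", "phone_C"]
ranked = sorted(candidates, key=lambda c: path_similarity("phone_A", c), reverse=True)
print(ranked)                                    # TOP-K style ranking
print(nx.shortest_path(G, "user_1", ranked[0]))  # the connecting path doubles as the explanation
```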
30

Krishna Adithya, Venkatesh, Bryan M. Williams, Silvester Czanner, Srinivasan Kavitha, David S. Friedman, Colin E. Willoughby, Rengaraj Venkatesh, and Gabriela Czanner. "EffUnet-SpaGen: An Efficient and Spatial Generative Approach to Glaucoma Detection." Journal of Imaging 7, no. 6 (May 30, 2021): 92. http://dx.doi.org/10.3390/jimaging7060092.

Abstract:
Current research in automated disease detection focuses on making algorithms “slimmer”, reducing the need for large training datasets and accelerating recalibration for new data while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to the geometry. We propose a novel algorithm for cup and disc segmentation, “EffUnet”, with an efficient convolution block and combine this with an extended spatial generative approach for geometry modelling and classification, termed “SpaGen”. We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, “EffUnet-SpaGen”, is optimized to significantly reduce the computational burden while at the same time surpassing the current state of the art in glaucoma detection algorithms with AUROC 0.997 and 0.969 on the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
31

Adithyaram, N. "Early Detection of Lung Disease Using Deep Learning Algorithms on Image Data." International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (July 31, 2023): 466–69. http://dx.doi.org/10.22214/ijraset.2023.53802.

Abstract:
This research paper presents a deep learning-based algorithm for the early detection of lung diseases using medical image data. The algorithm demonstrates high accuracy, sensitivity, specificity, precision, and AUC-ROC values, outperforming existing methods. By leveraging deep learning techniques, the algorithm provides a valuable tool for accurate disease identification, enabling timely interventions and improving patient outcomes. The study discusses the algorithm's performance, generalizability, and clinical relevance, highlighting its potential impact on clinical practice. Future work includes integrating multi-modal data, exploring model explainability, conducting external validation, and continuous model improvement to enhance the algorithm's diagnostic capabilities and real-world applicability. Overall, the proposed algorithm shows promise in advancing the early detection of lung diseases, contributing to improved healthcare outcomes.
32

Satoni Kurniawansyah, Arius. "EXPLAINABLE ARTIFICIAL INTELLIGENCE THEORY IN DECISION MAKING TREATMENT OF ARITHMIA PATIENTS WITH USING DEEP LEARNING MODELS." Jurnal Rekayasa Sistem Informasi dan Teknologi 1, no. 1 (August 29, 2022): 26–41. http://dx.doi.org/10.59407/jrsit.v1i1.75.

Abstract:
In the context of Explainable Artificial Intelligence, there are two important keywords: interpretability and explainability. Interpretability is the extent to which humans can understand the causes of decisions. The better the interpretability of an AI/ML model, the easier it is for someone to understand why certain decisions or predictions have been made. Some cases of AI/ML implementation may not require explanation, because they are used in a low-risk environment, meaning mistakes will not have serious consequences. The need for interpretability and explainability arises when an AI system is used for certain high-risk problems or tasks, so it is not enough just to obtain predictive/classification decision outputs; explanations are also needed to convince users that the AI (1: Model Explainability) is working the right way and (2: Decision Explainability) has made the right decision (Hotma, 2022). This research contributes to knowledge about implementing Explainable AI with a deep learning model to assist doctors' decision making for patients with cardiac arrhythmias. It shows how a deep learning algorithm can be used in machine learning to read ECG results, and how the accuracy of the explainable AI application can be improved to support doctors' decision making for these patients. The use of Explainable Artificial Intelligence in the management of arrhythmia patients can provide interpretations that help doctors treat patients more optimally. The results of the AI machine's decisions can increase doctors' confidence in treating arrhythmia patients optimally, effectively, and efficiently. Treatment will also be faster because it is assisted by these tools, so patients can be treated more quickly, which in turn will reduce the mortality rate in arrhythmia patients.
33

Shalev, Yuval, and Irad Ben-Gal. "Context Based Predictive Information." Entropy 21, no. 7 (June 29, 2019): 645. http://dx.doi.org/10.3390/e21070645.

Abstract:
We propose a new algorithm called the context-based predictive information (CBPI) for estimating the predictive information (PI) between time series, by utilizing a lossy compression algorithm. The advantage of this approach over existing methods resides in the case of sparse predictive information (SPI) conditions, where the ratio between the number of informative sequences to uninformative sequences is small. It is shown that the CBPI achieves a better PI estimation than benchmark methods by ignoring uninformative sequences while improving explainability by identifying the informative sequences. We also provide an implementation of the CBPI algorithm on a real dataset of large banks’ stock prices in the U.S. In the last part of this paper, we show how the CBPI algorithm is related to the well-known information bottleneck in its deterministic version.
34

Samaras, Agorastos-Dimitrios, Serafeim Moustakidis, Ioannis D. Apostolopoulos, Elpiniki Papageorgiou, and Nikolaos Papandrianos. "Uncovering the Black Box of Coronary Artery Disease Diagnosis: The Significance of Explainability in Predictive Models." Applied Sciences 13, no. 14 (July 12, 2023): 8120. http://dx.doi.org/10.3390/app13148120.

Abstract:
In recent times, coronary artery disease (CAD) prediction and diagnosis have been the subject of many Medical decision support systems (MDSS) that make use of machine learning (ML) and deep learning (DL) algorithms. The common ground of most of these applications is that they function as black boxes. They reach a conclusion/diagnosis using multiple features as input; however, the user is oftentimes oblivious to the prediction process and the feature weights leading to the eventual prediction. The primary objective of this study is to enhance the transparency and comprehensibility of a black-box prediction model designed for CAD. The dataset employed in this research comprises biometric and clinical information obtained from 571 patients, encompassing 21 different features. Among the instances, 43% of cases of CAD were confirmed through invasive coronary angiography (ICA). Furthermore, a prediction model utilizing the aforementioned dataset and the CatBoost algorithm is analyzed to highlight its prediction making process and the significance of each input datum. State-of-the-art explainability mechanics are employed to highlight the significance of each feature, and common patterns and differences with the medical bibliography are then discussed. Moreover, the findings are compared with common risk factors for CAD, to offer an evaluation of the prediction process from the medical expert’s point of view. By depicting how the algorithm weights the information contained in features, we shed light on the black-box mechanics of ML prediction models; by analyzing the findings, we explore their validity in accordance with the medical literature on the matter.
35

Silva-Aravena, Fabián, Hugo Núñez Delafuente, Jimmy H. Gutiérrez-Bahamondes, and Jenny Morales. "A Hybrid Algorithm of ML and XAI to Prevent Breast Cancer: A Strategy to Support Decision Making." Cancers 15, no. 9 (April 25, 2023): 2443. http://dx.doi.org/10.3390/cancers15092443.

Abstract:
Worldwide, the coronavirus has intensified the management problems of health services, significantly harming patients. Some of the most affected processes have been cancer patients’ prevention, diagnosis, and treatment. Breast cancer is the most affected, with more than 20 million cases and at least 10 million deaths by 2020. Various studies have been carried out to support the management of this disease globally. This paper presents a decision support strategy for health teams based on machine learning (ML) tools and explainability algorithms (XAI). The main methodological contributions are: first, the evaluation of different ML algorithms that allow classifying patients with and without cancer from the available dataset; and second, an ML methodology mixed with an XAI algorithm, which makes it possible to predict the disease and interpret the variables and how they affect the health of patients. The results show that first, the XGBoost Algorithm has a better predictive capacity, with an accuracy of 0.813 for the train data and 0.81 for the test data; and second, with the SHAP algorithm, it is possible to know the relevant variables and their level of significance in the prediction, and to quantify the impact on the clinical condition of the patients, which will allow health teams to offer early and personalized alerts for each patient.
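A minimal sketch of the XGBoost-plus-SHAP pattern the abstract describes, run on the public Wisconsin breast-cancer dataset rather than the study's own data; hyperparameters are illustrative only.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Public Wisconsin breast-cancer data as a stand-in for the study's dataset.
data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))

# SHAP quantifies how each variable pushes an individual prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
top = np.abs(shap_values).mean(axis=0).argsort()[::-1][:5]
print("most influential features:", [data.feature_names[i] for i in top])
```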
36

BUITEN, Miriam C. "Towards Intelligent Regulation of Artificial Intelligence." European Journal of Risk Regulation 10, no. 1 (March 2019): 41–59. http://dx.doi.org/10.1017/err.2019.8.

Abstract:
Artificial intelligence (AI) is becoming a part of our daily lives at a fast pace, offering myriad benefits for society. At the same time, there is concern about the unpredictability and uncontrollability of AI. In response, legislators and scholars call for more transparency and explainability of AI. This article considers what it would mean to require transparency of AI. It advocates looking beyond the opaque concept of AI, focusing on the concrete risks and biases of its underlying technology: machine-learning algorithms. The article discusses the biases that algorithms may produce through the input data, the testing of the algorithm and the decision model. Any transparency requirement for algorithms should result in explanations of these biases that are both understandable for the prospective recipients, and technically feasible for producers. Before asking how much transparency the law should require from algorithms, we should therefore consider if the explanation that programmers could offer is useful in specific legal contexts.
37

Agarwal, Piyush, Melih Tamer, and Hector Budman. "Explainability: Relevance based dynamic deep learning algorithm for fault detection and diagnosis in chemical processes." Computers & Chemical Engineering 154 (November 2021): 107467. http://dx.doi.org/10.1016/j.compchemeng.2021.107467.

38

Zhao, Yuying, Yu Wang, and Tyler Derr. "Fairness and Explainability: Bridging the Gap towards Fair Model Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11363–71. http://dx.doi.org/10.1609/aaai.v37i9.26344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
While machine learning models have achieved unprecedented success in real-world applications, they might make biased/unfair decisions for specific demographic groups and hence result in discriminative outcomes. Although research efforts have been devoted to measuring and mitigating bias, they mainly study bias from the result-oriented perspective while neglecting the bias encoded in the decision-making procedure. This results in their inability to capture procedure-oriented bias, which in turn limits the possibility of fully debiasing a model. Fortunately, with the rapid development of explainable machine learning, explanations for predictions are now available to gain insights into the procedure. In this work, we bridge the gap between fairness and explainability by presenting a novel perspective of procedure-oriented fairness based on explanations. We identify procedure-based bias by measuring the gap in explanation quality between different groups with Ratio-based and Value-based Explanation Fairness. The new metrics further motivate us to design an optimization objective to mitigate the procedure-based bias, where we observe that it also mitigates bias in the prediction. Based on our designed optimization objective, we propose a Comprehensive Fairness Algorithm (CFA), which simultaneously fulfills multiple objectives: improving traditional fairness, satisfying explanation fairness, and maintaining utility performance. Extensive experiments on real-world datasets demonstrate the effectiveness of our proposed CFA and highlight the importance of considering fairness from the explainability perspective. Our code: https://github.com/YuyingZhao/FairExplanations-CFA.
39

Choi, Insu, and Woo Chang Kim. "Enhancing Exchange-Traded Fund Price Predictions: Insights from Information-Theoretic Networks and Node Embeddings." Entropy 26, no. 1 (January 12, 2024): 70. http://dx.doi.org/10.3390/e26010070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study presents a novel approach to predicting price fluctuations for U.S. sector index ETFs. By leveraging information-theoretic measures like mutual information and transfer entropy, we constructed threshold networks highlighting nonlinear dependencies between log returns and trading volume rate changes. We derived centrality measures and node embeddings from these networks, offering unique insights into the ETFs’ dynamics. By integrating these features into gradient-boosting algorithm-based models, we significantly enhanced the predictive accuracy. Our approach offers improved forecast performance for U.S. sector index futures and adds a layer of explainability to the existing literature.
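The pipeline sketched in this abstract, information-theoretic threshold networks whose node-level features feed a gradient-boosting model, can be illustrated roughly as follows. The tickers, the mutual-information estimator, and the 75th-percentile threshold are illustrative assumptions; the transfer-entropy networks and node embeddings from the paper are omitted for brevity.

```python
# Toy sketch; tickers, threshold, and data are placeholders, not the authors' setup.
import numpy as np
import pandas as pd
import networkx as nx
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))                                   # shared market factor
returns = pd.DataFrame(factor + 0.5 * rng.normal(size=(500, 5)),
                       columns=["XLF", "XLK", "XLE", "XLV", "XLU"])  # stand-in log returns

# Step 1: pairwise mutual information between return series.
n = returns.shape[1]
mi = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            mi[i, j] = mutual_info_regression(
                returns.iloc[:, [i]], returns.iloc[:, j], random_state=0)[0]

# Step 2: keep only the strongest dependencies to form a threshold network.
off_diag = mi[~np.eye(n, dtype=bool)]
threshold = np.percentile(off_diag, 75)
G = nx.Graph()
G.add_nodes_from(returns.columns)
for i in range(n):
    for j in range(i + 1, n):
        if max(mi[i, j], mi[j, i]) >= threshold:
            G.add_edge(returns.columns[i], returns.columns[j])

# Step 3: derive per-node features from the network.
features = pd.DataFrame({
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
})
print(features)
# These features would then be joined with the price data and fed into a
# gradient-boosting model to predict the direction of price changes.
```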
40

Blomerus, Nicholas, Jacques Cilliers, Willie Nel, Erik Blasch, and Pieter de Villiers. "Feedback-Assisted Automatic Target and Clutter Discrimination Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications." Remote Sensing 14, no. 23 (December 1, 2022): 6096. http://dx.doi.org/10.3390/rs14236096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, a feedback training approach for efficiently dealing with distribution shift in synthetic aperture radar target detection using a Bayesian convolutional neural network is proposed. After training the network on in-distribution data, it is tested on out-of-distribution data. Samples that are classified incorrectly with high certainty are fed back for a second round of training. This results in the reduction of false positives in the out-of-distribution dataset. False positive target detections challenge human attention, sensor resource management, and mission engagement. In these types of applications, a reduction in false positives thus often takes precedence over target detection and classification performance. The classifier is used to discriminate the targets from the clutter and to classify the target type in a single step as opposed to the traditional approach of having a sequential chain of functions for target detection and localisation before the machine learning algorithm. Another aspect of automated synthetic aperture radar detection and recognition problems addressed here is the fact that human users of the output of traditional classification systems are presented with decisions made by “black box” algorithms. Consequently, the decisions are not explainable, even to an expert in the sensor domain. This paper makes use of the concept of explainable artificial intelligence via uncertainty heat maps that are overlaid onto synthetic aperture radar imagery to furnish the user with additional information about classification decisions. These uncertainty heat maps facilitate trust in the machine learning algorithm and are derived from the uncertainty estimates of the classifications from the Bayesian convolutional neural network. These uncertainty overlays further enhance the users’ ability to interpret the reasons why certain decisions were made by the algorithm. Further, it is demonstrated that feeding back the high-certainty, incorrectly classified out-of-distribution data results in an average improvement in detection performance and a reduction in uncertainty for all synthetic aperture radar images processed. Compared to the baseline method, an improvement in recall of 11.8%, and a reduction in the false positive rate of 7.08% were demonstrated using the Feedback-assisted Bayesian Convolutional Neural Network or FaBCNN.
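The abstract does not describe the Bayesian CNN's architecture or how its uncertainty is obtained, so the sketch below uses Monte Carlo dropout, a common approximation to a Bayesian CNN, purely to illustrate the feedback step of collecting confidently misclassified out-of-distribution samples for a second round of training. The network, image sizes, and thresholds are all assumptions.

```python
# Minimal sketch; architecture, data, and thresholds are assumptions.
import torch
import torch.nn as nn

class SmallBayesianCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.25),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.25),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def mc_predict(model, x, n_samples=30):
    """Keep dropout active at test time and average the softmax outputs."""
    model.train()  # enables dropout for Monte Carlo sampling
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # predictive mean and per-class uncertainty

model = SmallBayesianCNN()
ood_images = torch.randn(8, 1, 64, 64)       # stand-in SAR image chips
labels = torch.randint(0, 2, (8,))           # stand-in ground truth
mean_probs, uncertainty = mc_predict(model, ood_images)
pred = mean_probs.argmax(dim=1)

# Feedback step: samples that are wrong yet confidently predicted
# (low uncertainty) are selected for retraining.
confident = uncertainty.gather(1, pred.unsqueeze(1)).squeeze(1) < 0.05
feedback_set = ood_images[(pred != labels) & confident]
print("samples fed back for retraining:", feedback_set.shape[0])
```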
41

Chin, Marshall H., Nasim Afsar-Manesh, Arlene S. Bierman, Christine Chang, Caleb J. Colón-Rodríguez, Prashila Dullabh, Deborah Guadalupe Duran, et al. "Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care." JAMA Network Open 6, no. 12 (December 15, 2023): e2345050. http://dx.doi.org/10.1001/jamanetworkopen.2023.45050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Importance: Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations such as individuals with lower income. Objective: To provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity. Evidence Review: The Agency for Healthcare Research and Quality and the National Institute for Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback. Findings: The panel developed a conceptual framework to apply guiding principles across an algorithm’s life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms. Conclusions and Relevance: Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.
42

Klettke, Meike, Adrian Lutsch, and Uta Störl. "Kurz erklärt: Measuring Data Changes in Data Engineering and their Impact on Explainability and Algorithm Fairness." Datenbank-Spektrum 21, no. 3 (October 27, 2021): 245–49. http://dx.doi.org/10.1007/s13222-021-00392-w.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Data engineering is an integral part of any data science and ML process. It consists of several subtasks that are performed to improve data quality and to transform data into a target format suitable for analysis. The quality and correctness of the data engineering steps are therefore important to ensure the quality of the overall process. In machine learning processes, requirements such as fairness and explainability are essential, and the data engineering subtasks must also help to meet them. In this article, we will show how this can be achieved by logging, monitoring and controlling the data changes in order to evaluate their correctness. However, since data preprocessing algorithms are part of any machine learning pipeline, they must obviously also guarantee that they do not produce data biases. In this article we briefly introduce three classes of methods for measuring data changes in data engineering and present which research questions still remain unanswered in this area.
43

Chetoui, Mohamed, Moulay A. Akhloufi, Bardia Yousefi, and El Mostafa Bouattane. "Explainable COVID-19 Detection on Chest X-rays Using an End-to-End Deep Convolutional Neural Network Architecture." Big Data and Cognitive Computing 5, no. 4 (December 7, 2021): 73. http://dx.doi.org/10.3390/bdcc5040073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The coronavirus pandemic is spreading around the world. Medical imaging modalities such as radiography play an important role in the fight against COVID-19. Deep learning (DL) techniques have been able to improve medical imaging tools and help radiologists to make clinical decisions for the diagnosis, monitoring and prognosis of different diseases. Computer-Aided Diagnostic (CAD) systems can improve work efficiency by precisely delineating infections in chest X-ray (CXR) images, thus facilitating subsequent quantification. CAD can also help automate the scanning process and reshape the workflow with minimal patient contact, providing the best protection for imaging technicians. The objective of this study is to develop a deep learning algorithm to detect COVID-19, pneumonia and normal cases on CXR images. We propose two classification problems: (i) a binary classification to classify COVID-19 and normal cases, and (ii) a multiclass classification for COVID-19, pneumonia and normal cases. Nine datasets and more than 3200 COVID-19 CXR images are used to assess the efficiency of the proposed technique. The model is trained on a subset of the National Institutes of Health (NIH) dataset using swish activation, thus improving the training accuracy to detect COVID-19 and other pneumonia. The models are tested on eight merged datasets and on individual test sets in order to confirm the degree of generalization of the proposed algorithms. An explainability algorithm is also developed to visually show the location of the lung-infected areas detected by the model. Moreover, we provide a detailed analysis of the misclassified images. The obtained results achieve high performances with an Area Under Curve (AUC) of 0.97 for multi-class classification (COVID-19 vs. other pneumonia vs. normal) and 0.98 for the binary model (COVID-19 vs. normal). The average sensitivity and specificity are 0.97 and 0.98, respectively. The sensitivity of the COVID-19 class achieves 0.99. The results outperformed the comparable state-of-the-art models for the detection of COVID-19 on CXR images. The explainability model shows that our model is able to efficiently identify the signs of COVID-19.
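The abstract does not name the explainability algorithm used to localise the infected lung regions; the sketch below shows one common way such heat maps are produced for CNNs (Grad-CAM), with an untrained torchvision ResNet-18 standing in for the authors' network and a random tensor standing in for a chest X-ray.

```python
# Hedged illustration of a CNN heat-map explanation (Grad-CAM); the backbone,
# input, and target layer are placeholders, not the paper's model.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)   # untrained placeholder backbone
model.eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4[-1]
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)              # stand-in CXR image tensor
logits = model(x)
logits[0, logits.argmax()].backward()        # gradient of the predicted class score

# Grad-CAM: weight each feature map by its average gradient, then ReLU.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
# `cam` can now be overlaid on the X-ray to highlight the regions that drove
# the prediction, analogous to the localisation described in the abstract.
```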
44

Schober, Sebastian A., Yosra Bahri, Cecilia Carbonelli, and Robert Wille. "Neural Network Robustness Analysis Using Sensor Simulations for a Graphene-Based Semiconductor Gas Sensor." Chemosensors 10, no. 5 (April 21, 2022): 152. http://dx.doi.org/10.3390/chemosensors10050152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Despite their advantages regarding production costs and flexibility, chemiresistive gas sensors often show drawbacks in reproducibility, signal drift and ageing. As pattern recognition algorithms, such as neural networks, are operating on top of raw sensor signals, assessing the impact of these technological drawbacks on the prediction performance is essential for ensuring a suitable measuring accuracy. In this work, we propose a characterization scheme to analyze the robustness of different machine learning models for a chemiresistive gas sensor based on a sensor simulation model. Our investigations are structured into four separate studies: in three studies, the impact of different sensor instabilities on the concentration prediction performance of the algorithms is investigated, including sensor-to-sensor variations, sensor drift and sensor ageing. In a further study, the explainability of the machine learning models is analyzed by applying a state-of-the-art feature ranking method called SHAP. Our results show the feasibility of model-based algorithm testing and substantiate the need for the thorough characterization of chemiresistive sensor algorithms before sensor deployment in order to ensure robust measurement performance.
45

Höller, Sonja, Thomas Dilger, Teresa Spiess, Christian Ploder, and Reinhard Bernsteiner. "Awareness of Unethical Artificial Intelligence and its Mitigation Measures." European Journal of Interdisciplinary Studies 15, no. 2 (December 22, 2023): 67–89. http://dx.doi.org/10.24818/ejis.2023.17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The infrastructure of the Internet is based on algorithms that enable the use of search engines, social networks, and much more. Algorithms themselves may vary in functionality, but many of them have the potential to reinforce, accentuate, and systematize age-old prejudices, biases, and implicit assumptions of society. Awareness of algorithms thus becomes an issue of agency, public life, and democracy. Nonetheless, as research has shown, people lack algorithm awareness. Therefore, this paper aims to investigate the extent to which people are aware of unethical artificial intelligence and what actions they can take against it (mitigation measures). A survey addressing these factors yielded 291 valid responses. To examine the data and the relationships between the constructs in the model, partial least squares structural equation modeling (PLS-SEM) was applied using the SmartPLS 3 tool. The empirical results demonstrate that awareness of mitigation measures is influenced by the self-efficacy of the user, whereas trust in the algorithmic platform has no significant influence. In addition, the explainability of an algorithmic platform has a significant influence on the user's self-efficacy and should therefore be considered when setting up the platform. The most frequently mentioned mitigation measures by survey participants are laws and regulations, various types of algorithm audits, and education and training. This work thus provides new empirical insights for researchers and practitioners in the field of ethical artificial intelligence.
46

Zeng, Wenhuan, and Daniel H. Huson. "Leverage the Explainability of Transformer Models to Improve the DNA 5-Methylcytosine Identification (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23703–4. http://dx.doi.org/10.1609/aaai.v38i21.30533.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
DNA methylation is an epigenetic mechanism for regulating gene expression, and it plays an important role in many biological processes. While methylation sites can be identified using laboratory techniques, much work is being done on developing computational approaches using machine learning. Here, we present a deep-learning algorithm for determining the 5-methylcytosine status of a DNA sequence. We propose an ensemble framework that treats the self-attention score as an explicit feature that is added to the encoder layer generated by fine-tuned language models. We evaluate the performance of the model under different data distribution scenarios.
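To make the idea of treating the self-attention score as an explicit feature more concrete, the sketch below extracts attention weights from a generic Hugging Face BERT checkpoint and concatenates a pooled attention vector with the encoder output. The checkpoint, tokenisation, and pooling are placeholders; the paper fine-tunes DNA language models, which are not reproduced here.

```python
# Illustrative only; "bert-base-uncased" and the pooling scheme are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # placeholder checkpoint
model = AutoModel.from_pretrained("bert-base-uncased")

sequence = "ACG TCA GGT ACC"                 # stand-in for a k-mer tokenised DNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one (batch, heads, seq_len, seq_len)
# tensor per layer; here the last layer's heads are averaged and the row of
# attention paid by the [CLS] token is kept as a feature vector.
attention_feature = outputs.attentions[-1].mean(dim=1)[:, 0, :]
cls_embedding = outputs.last_hidden_state[:, 0, :]

# The abstract's idea: add the attention-derived feature to the encoder
# representation before the final classification layer.
combined = torch.cat([cls_embedding, attention_feature], dim=-1)
print(combined.shape)
```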
47

Mollaei, Nafiseh, Carlos Fujao, Luis Silva, Joao Rodrigues, Catia Cepeda, and Hugo Gamboa. "Human-Centered Explainable Artificial Intelligence: Automotive Occupational Health Protection Profiles in Prevention Musculoskeletal Symptoms." International Journal of Environmental Research and Public Health 19, no. 15 (August 3, 2022): 9552. http://dx.doi.org/10.3390/ijerph19159552.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In automotive and industrial settings, occupational physicians are responsible for monitoring workers’ health protection profiles. Workers’ Functional Work Ability (FWA) status is used to create Occupational Health Protection Profiles (OHPP). This is a novel longitudinal study in comparison with previous research, which has predominantly relied on the causality and explainability of human-understandable models for industrial technical teams such as ergonomists. The application of artificial intelligence can support decision-making that goes from a worker’s Functional Work Ability to explanations, by integrating explainability into medical (restriction) decisions and support in contexts of individual, work-related, and organizational risk conditions. A sample of 7857 records (in Portuguese) for the prognosis part of OHPP, based on Functional Work Ability, was taken from the automotive industry between 2019 and 2021. The most suitable regression models to predict the next medical appointment for the protection of workers’ body parts were those based on CatBoost regression, with an RMSLE of 0.84 and a mean error of 1.23 weeks, respectively. The CatBoost algorithm is also used to predict the next body part severity in the OHPP. This information can help our understanding of potential risk factors for OHPP and identify warning signs of the early stages of musculoskeletal symptoms and work-related absenteeism.
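A minimal sketch of the modelling step described here, under stated assumptions: a CatBoost regressor predicts the number of weeks until the next medical appointment and is scored with RMSLE. The feature names and synthetic data are invented placeholders, not the study's OHPP variables.

```python
# Hedged sketch; features, data, and hyperparameters are assumptions.
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_log_error

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(20, 60, 1000),
    "workstation": rng.choice(["assembly", "paint", "logistics"], 1000),
    "restriction_level": rng.integers(0, 4, 1000),
})
weeks_to_next_visit = rng.gamma(shape=2.0, scale=6.0, size=1000)   # synthetic target

X_train, X_test, y_train, y_test = train_test_split(
    df, weeks_to_next_visit, test_size=0.2, random_state=1)

model = CatBoostRegressor(iterations=300, depth=6,
                          cat_features=["workstation"], verbose=False)
model.fit(X_train, y_train)

pred = np.clip(model.predict(X_test), 0, None)           # RMSLE needs non-negative values
rmsle = np.sqrt(mean_squared_log_error(y_test, pred))
print(f"RMSLE: {rmsle:.2f}")
```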
48

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Intrusion detection systems (IDS) are widely utilized in the cyber security field to prevent and mitigate threats and to keep threats and vulnerabilities out of computer networks. To develop effective intrusion detection systems, a range of machine learning methods is available, and ensemble methods in particular have a well-proven track record. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICIDS-2017 dataset were selected. The proposed IDS uses machine learning algorithms such as decision trees, random forests, and SVM. After training these models, an ensemble voting classifier was added and achieved an accuracy of 96.25%. Furthermore, the proposed model also incorporates the XAI algorithm LIME for better explainability and understanding of the black-box approach to reliable intrusion detection. Our experimental results confirmed that the XAI method LIME is more explanation-friendly and more responsive.
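The setup reported in this abstract can be sketched as follows, under assumptions: a soft-voting ensemble of a decision tree, a random forest, and an SVM, explained per instance with LIME, on a synthetic dataset standing in for CICIDS-2017.

```python
# Rough sketch; the dataset, feature names, and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=8)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("svm", SVC(probability=True)),      # probability=True enables soft voting
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("accuracy:", ensemble.score(X_test, y_test))

# LIME explains one prediction at a time by fitting a local surrogate model
# around the instance of interest.
explainer = LimeTabularExplainer(
    X_train, feature_names=[f"f{i}" for i in range(20)],
    class_names=["benign", "attack"], discretize_continuous=True)
explanation = explainer.explain_instance(X_test[0], ensemble.predict_proba, num_features=5)
print(explanation.as_list())
```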
49

Trabassi, Dante, Mariano Serrao, Tiwana Varrecchia, Alberto Ranavolo, Gianluca Coppola, Roberto De Icco, Cristina Tassorelli, and Stefano Filippo Castiglia. "Machine Learning Approach to Support the Detection of Parkinson’s Disease in IMU-Based Gait Analysis." Sensors 22, no. 10 (May 12, 2022): 3700. http://dx.doi.org/10.3390/s22103700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this study was to determine which supervised machine learning (ML) algorithm can most accurately classify people with Parkinson’s disease (pwPD) from speed-matched healthy subjects (HS) based on a selected minimum set of IMU-derived gait features. Twenty-two gait features were extrapolated from the trunk acceleration patterns of 81 pwPD and 80 HS, including spatiotemporal, pelvic kinematics, and acceleration-derived gait stability indexes. After a three-level feature selection procedure, seven gait features were considered for implementing five ML algorithms: support vector machine (SVM), artificial neural network, decision trees (DT), random forest (RF), and K-nearest neighbors. Accuracy, precision, recall, and F1 score were calculated. SVM, DT, and RF showed the best classification performances, with prediction accuracy higher than 80% on the test set. The conceptual model of approaching ML that we proposed could reduce the risk of overrepresenting multicollinear gait features in the model, reducing the risk of overfitting in the test performances while fostering the explainability of the results.
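As a rough illustration of the comparison reported above, the sketch below trains SVM, decision tree, and random forest classifiers on a synthetic table of seven "selected" gait features and prints precision, recall, and F1 per class. The data, class balance, and hyperparameters are placeholders, not the study's IMU-derived features.

```python
# Generic comparison sketch; data and settings are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(161, 7)),
                 columns=[f"gait_feature_{i}" for i in range(7)])   # 7 selected features
y = rng.integers(0, 2, 161)     # 0 = healthy subject, 1 = person with PD (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=3)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "DecisionTree": DecisionTreeClassifier(max_depth=4),
    "RandomForest": RandomForestClassifier(n_estimators=200),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```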
50

Gutierrez-Rojas, Daniel, Ioannis T. Christou, Daniel Dantas, Arun Narayanan, Pedro H. J. Nardelli, and Yongheng Yang. "Performance evaluation of machine learning for fault selection in power transmission lines." Knowledge and Information Systems 64, no. 3 (February 19, 2022): 859–83. http://dx.doi.org/10.1007/s10115-022-01657-w.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Learning methods have been increasingly used in power engineering to perform various tasks. In this paper, a fault selection procedure for double-circuit transmission lines employing different learning methods is proposed. In the proposed procedure, the discrete Fourier transform (DFT) is used to pre-process raw data from the transmission line before it is fed into the learning algorithm, which will detect and classify any fault based on a training period. The performance of different machine learning algorithms is then numerically compared through simulations. The comparison indicates that an artificial neural network (ANN) achieves a remarkable accuracy of 98.47%. As a drawback, the ANN method cannot provide explainable results and is also not robust against noisy measurements. Subsequently, it is demonstrated that explainable results can be obtained with high accuracy by using rule-based learners such as the recently developed quantitative association rule mining algorithm (QARMA). The QARMA algorithm outperforms other explainable schemes, while attaining an accuracy of 98%. Besides, it was shown that QARMA leads to a very high accuracy of 97% for highly noisy data. The proposed method was also validated using data from an actual transmission line fault. In summary, the proposed two-step procedure using the DFT combined with either deep learning or rule-based algorithms can accurately and successfully perform fault selection tasks, with QARMA showing remarkable advantages due to its explainability and robustness against noise. These aspects are extremely important if machine learning and other data-driven methods are to be employed in critical engineering applications.
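The two-step procedure (DFT pre-processing of the raw line measurements, followed by a learning algorithm) can be sketched as follows. The sampling rate, phasor extraction, fault classes, and the small MLP standing in for the paper's ANN are all assumptions used only for illustration; QARMA itself is not shown.

```python
# Hedged sketch of DFT-based feature extraction feeding a classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

FS, F0 = 3840, 60                  # assumed sampling rate and system frequency
N = FS // F0                       # samples per fundamental cycle

def fundamental_phasor(window):
    """Full-cycle DFT: complex phasor of the fundamental-frequency component."""
    n = np.arange(N)
    return (2 / N) * np.sum(window * np.exp(-2j * np.pi * n / N))

def features_from_currents(ia, ib, ic):
    phasors = [fundamental_phasor(x[-N:]) for x in (ia, ib, ic)]
    return np.concatenate([[abs(p), np.angle(p)] for p in phasors])

# Synthetic three-phase windows; in practice, simulated or recorded relay data
# would be used, with labels denoting the fault type on each circuit.
t = np.arange(N) / FS
rng = np.random.default_rng(0)
X = np.array([features_from_currents(
        np.sin(2 * np.pi * F0 * t + rng.normal(0, 0.1)) * rng.uniform(1, 10),
        np.sin(2 * np.pi * F0 * t - 2.09) * rng.uniform(1, 10),
        np.sin(2 * np.pi * F0 * t + 2.09) * rng.uniform(1, 10))
      for _ in range(200)])
y = rng.integers(0, 4, 200)        # placeholder fault classes

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```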
