Journal articles on the topic 'Interpretable deep learning'

Consult the top 50 journal articles for your research on the topic 'Interpretable deep learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Gangopadhyay, Tryambak, Sin Yong Tan, Anthony LoCurto, James B. Michael, and Soumik Sarkar. "Interpretable Deep Learning for Monitoring Combustion Instability." IFAC-PapersOnLine 53, no. 2 (2020): 832–37. http://dx.doi.org/10.1016/j.ifacol.2020.12.839.

2

Zheng, Hong, Yinglong Dai, Fumin Yu, and Yuezhen Hu. "Interpretable Saliency Map for Deep Reinforcement Learning." Journal of Physics: Conference Series 1757, no. 1 (January 1, 2021): 012075. http://dx.doi.org/10.1088/1742-6596/1757/1/012075.

3

Ruffolo, Jeffrey A., Jeremias Sulam, and Jeffrey J. Gray. "Antibody structure prediction using interpretable deep learning." Patterns 3, no. 2 (February 2022): 100406. http://dx.doi.org/10.1016/j.patter.2021.100406.

4

Arik, Sercan Ö., and Tomas Pfister. "TabNet: Attentive Interpretable Tabular Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6679–87. http://dx.doi.org/10.1609/aaai.v35i8.16826.

Abstract:
We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into its global behavior. Finally, we demonstrate self-supervised learning for tabular data, significantly improving performance when unlabeled data is abundant.
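To make the sequential-attention idea concrete, here is a minimal, hypothetical PyTorch sketch of step-wise feature masking in the spirit of TabNet; softmax stands in for the paper's sparsemax, and all layer names and sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StepwiseFeatureSelector(nn.Module):
    """Toy TabNet-flavoured model: at each decision step a learned mask
    selects which input features to reason from."""
    def __init__(self, n_features, n_steps=3):
        super().__init__()
        self.mask_nets = nn.ModuleList(nn.Linear(n_features, n_features)
                                       for _ in range(n_steps))
        self.head = nn.Linear(n_features, 1)

    def forward(self, x):
        out, masks = 0.0, []
        prior = torch.ones_like(x)                        # discourages reusing features
        for net in self.mask_nets:
            m = torch.softmax(net(x) + torch.log(prior + 1e-6), dim=-1)
            prior = prior * (1.0 - m)                     # relax already-used features
            masks.append(m)                               # per-step feature attributions
            out = out + self.head(x * m)                  # decision from selected features only
        return out, masks

logits, masks = StepwiseFeatureSelector(n_features=10)(torch.randn(8, 10))
```

The returned masks can be inspected per decision step, which is the sense in which the feature selection is interpretable.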
5

Bhambhoria, Rohan, Hui Liu, Samuel Dahan, and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.

Abstract:
Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision making areas, the requirement for interpretability is of crucial importance. Current models utilized by legal practitioners are mostly of the conventional machine learning type: they are inherently interpretable, yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically, we introduce a model-agnostic interpretable intermediate layer, a technique which proves to be effective for legal documents. Furthermore, we utilize weakly supervised learning by means of a curriculum learning strategy, effectively demonstrating the improved performance of a deep learning model. This is in contrast to conventional models, which are only able to utilize the limited number of expensive samples manually annotated by legal experts. Although the methods presented in this work tackle the task of risk of confusion for trademarks, it is straightforward to extend them to other fields of law, or more generally, to other similar high-stakes application scenarios.
6

Lin, Chih-Hsu, and Olivier Lichtarge. "Using interpretable deep learning to model cancer dependencies." Bioinformatics 37, no. 17 (May 27, 2021): 2675–81. http://dx.doi.org/10.1093/bioinformatics/btab137.

Abstract:
Motivation: Cancer dependencies provide potential drug targets. Unfortunately, dependencies differ among cancers and even individuals. To this end, visible neural networks (VNNs) are promising due to robust performance and the interpretability required for the biomedical field. Results: We design a biological visible neural network (BioVNN) using pathway knowledge to predict cancer dependencies. Despite having fewer parameters, BioVNN marginally outperforms traditional neural networks (NNs) and converges faster. BioVNN also outperforms an NN based on randomized pathways. More importantly, dependency predictions can be explained by correlating with the neuron output states of relevant pathways, which suggest dependency mechanisms. In feature importance analysis, BioVNN recapitulates known reaction partners and proposes new ones. Such robust and interpretable VNNs may facilitate the understanding of cancer dependency and the development of targeted therapies. Availability and implementation: Code and data are available at https://github.com/LichtargeLab/BioVNN. Supplementary information: Supplementary data are available at Bioinformatics online.
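A rough sketch of the "visible" idea, assuming a hypothetical binary gene-to-pathway membership matrix; this is a generic pathway-masked layer, not the BioVNN code from the linked repository.

```python
import torch
import torch.nn as nn

class PathwayMaskedLinear(nn.Module):
    """Each pathway neuron is wired only to its member genes, so its
    activation can be read off as a pathway state."""
    def __init__(self, membership):                 # (n_pathways, n_genes) 0/1 matrix
        super().__init__()
        self.register_buffer("mask", membership.float())
        self.weight = nn.Parameter(torch.randn(membership.shape) * 0.01)
        self.bias = nn.Parameter(torch.zeros(membership.shape[0]))

    def forward(self, x):                           # x: (batch, n_genes)
        return x @ (self.weight * self.mask).t() + self.bias

membership = (torch.rand(50, 2000) < 0.02)          # toy pathway annotation
layer = PathwayMaskedLinear(membership)
pathway_states = torch.relu(layer(torch.randn(4, 2000)))   # (batch, n_pathways)
```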
7

Liao, WangMin, BeiJi Zou, RongChang Zhao, YuanQiong Chen, ZhiYou He, and MengJie Zhou. "Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis." IEEE Journal of Biomedical and Health Informatics 24, no. 5 (May 2020): 1405–12. http://dx.doi.org/10.1109/jbhi.2019.2949075.

8

Matsubara, Takashi. "Bayesian deep learning: A model-based interpretable approach." Nonlinear Theory and Its Applications, IEICE 11, no. 1 (2020): 16–35. http://dx.doi.org/10.1587/nolta.11.16.

9

Liu, Yi, Kenneth Barr, and John Reinitz. "Fully interpretable deep learning model of transcriptional control." Bioinformatics 36, Supplement_1 (July 1, 2020): i499–i507. http://dx.doi.org/10.1093/bioinformatics/btaa506.

Abstract:
Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent works in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a ‘black box’ approach in which the internal structure of the model used is set purely by machine learning considerations with little consideration of representing the internal structure of the biological system by the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modeling of transcriptional control, in which mRNA production is controlled by the binding of specific transcription factors to DNA, in part because such models are in part formulated in terms of specific chemical equations that appear different in form from those used in neural networks. Results: In this paper, we give an example of a DNN which can model the detailed control of transcription in a precise and predictive manner. Its internal structure is fully interpretable and is faithful to the underlying chemistry of transcription factor binding to DNA. We derive our DNN from a systems biology model that was not previously recognized as having a DNN structure. Although we apply our DNN to data from the early embryo of the fruit fly Drosophila, this system serves as a test bed for analysis of much larger data sets obtained by systems biology studies on a genomic scale. Availability and implementation: The implementation and data for the models used in this paper are in a zip file in the supplementary material. Supplementary information: Supplementary data are available at Bioinformatics online.
10

Brinkrolf, Johannes, and Barbara Hammer. "Interpretable machine learning with reject option." at - Automatisierungstechnik 66, no. 4 (April 25, 2018): 283–90. http://dx.doi.org/10.1515/auto-2017-0123.

Abstract:
Classification by means of machine learning models constitutes one relevant technology in process automation and predictive maintenance. However, common techniques such as deep networks or random forests suffer from their black-box characteristics and possible adversarial examples. In this contribution, we give an overview of a popular alternative technology from machine learning, namely modern variants of learning vector quantization, which, due to their combined discriminative and generative nature, incorporate interpretability and the possibility of explicit reject options for irregular samples. We give an explicit bound on the minimum changes required for a change of the classification in the case of LVQ networks with reject option, and we demonstrate the efficiency of reject options in two examples.
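As a generic illustration (not the explicit bound derived in the paper), a distance-based reject option for an LVQ-style classifier can be sketched as follows, using the common relative-similarity confidence; prototypes, labels and the threshold are toy placeholders.

```python
import numpy as np

def lvq_predict_with_reject(x, prototypes, labels, theta=0.2):
    """Return the label of the closest prototype, or None (reject) when the
    closest prototype and the closest prototype of another class are
    nearly equidistant."""
    d = np.linalg.norm(prototypes - x, axis=1)
    order = np.argsort(d)
    best = order[0]
    other = next(i for i in order[1:] if labels[i] != labels[best])
    relsim = (d[other] - d[best]) / (d[other] + d[best])   # confidence in [0, 1]
    return labels[best] if relsim >= theta else None

prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
labels = np.array([0, 1, 1])
print(lvq_predict_with_reject(np.array([0.9, 1.1]), prototypes, labels))
```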
11

Zinemanas, Pablo, Martín Rocamora, Marius Miron, Frederic Font, and Xavier Serra. "An Interpretable Deep Learning Model for Automatic Sound Classification." Electronics 10, no. 7 (April 2, 2021): 850. http://dx.doi.org/10.3390/electronics10070850.

Abstract:
Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as being susceptible to adversarial attacks or the reinforcement of biases. There is still a lack of research in the audio domain, despite the increasing interest in developing deep learning models that provide explanations of their decisions. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification, which explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space. We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results that are comparable to those of state-of-the-art methods in three different sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods to prune the proposed model that exploit its interpretability. Our system is open source and is accompanied by a web application for the manual editing of the model, which allows for a human-in-the-loop debugging approach.
12

Gagne II, David John, Sue Ellen Haupt, Douglas W. Nychka, and Gregory Thompson. "Interpretable Deep Learning for Spatial Analysis of Severe Hailstorms." Monthly Weather Review 147, no. 8 (July 17, 2019): 2827–45. http://dx.doi.org/10.1175/mwr-d-18-0316.1.

Abstract:
Deep learning models, such as convolutional neural networks, utilize multiple specialized layers to encode spatial patterns at different scales. In this study, deep learning models are compared with standard machine learning approaches on the task of predicting the probability of severe hail based on upper-air dynamic and thermodynamic fields from a convection-allowing numerical weather prediction model. The data for this study come from patches surrounding storms identified in NCAR convection-allowing ensemble runs from 3 May to 3 June 2016. The machine learning models are trained to predict whether the simulated surface hail size from the Thompson hail size diagnostic exceeds 25 mm over the hour following storm detection. A convolutional neural network is compared with logistic regressions using input variables derived from either the spatial means of each field or principal component analysis. The convolutional neural network statistically significantly outperforms all other methods in terms of Brier skill score and area under the receiver operating characteristic curve. Interpretation of the convolutional neural network through feature importance and feature optimization reveals that the network synthesized information about the environment and storm morphology that is consistent with our understanding of hail growth, including large lapse rates and a wind shear profile that favors wide updrafts. Different neurons in the network also record different storm modes, and the magnitude of the output of those neurons is used to analyze the spatiotemporal distributions of different storm modes in the NCAR ensemble.
13

Abdel-Basset, Mohamed, Hossam Hawash, Khalid Abdulaziz Alnowibet, Ali Wagdy Mohamed, and Karam M. Sallam. "Interpretable Deep Learning for Discriminating Pneumonia from Lung Ultrasounds." Mathematics 10, no. 21 (November 6, 2022): 4153. http://dx.doi.org/10.3390/math10214153.

Abstract:
Lung ultrasound images have shown great promise to be an operative point-of-care test for the diagnosis of COVID-19 because of the ease of procedure with negligible individual protection equipment, together with relaxed disinfection. Deep learning (DL) is a robust tool for modeling infection patterns from medical images; however, the existing COVID-19 detection models are complex and thereby are hard to deploy in frequently used mobile platforms in point-of-care testing. Moreover, most of the COVID-19 detection models in the existing literature on DL are implemented as a black box; hence they are hard for the healthcare community to interpret or trust. This paper presents a novel interpretable DL framework discriminating COVID-19 infection from other cases of pneumonia and normal cases using ultrasound data of patients. In the proposed framework, novel transformer modules are introduced to model the pathological information from ultrasound frames using an improved window-based multi-head self-attention layer. A convolutional patching module is introduced to transform input frames into latent space rather than partitioning input into patches. A weighted pooling module is presented to score the embeddings of the disease representations obtained from the transformer modules to attend to information that is most valuable for the screening decision. Experimental analysis of the public three-class lung ultrasound dataset (PCUS dataset) demonstrates the discriminative power (Accuracy: 93.4%, F1-score: 93.1%, AUC: 97.5%) of the proposed solution, outperforming the competing approaches while maintaining low complexity. The proposed model obtained very promising results in comparison with the rival models. More importantly, it gives explainable outputs; therefore, it can serve as a candidate tool for empowering the sustainable diagnosis of COVID-19-like diseases in smart healthcare.
14

Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.

Abstract:
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information-theoretic principle, the information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness) and informative about a decision made by a black-box system on that input (comprehensiveness). We evaluate VIBI on three datasets and compare with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity evaluated by human and quantitative metrics.
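For reference, the generic information bottleneck criterion the abstract appeals to can be written as below, where Z is the selected explanation, X the input and Y the black-box output; the exact variational objective optimized by VIBI is given in the paper.

```latex
\max_{p(z \mid x)} \; I(Z; Y) \;-\; \beta \, I(Z; X)
```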
15

Xu, Lingfeng, Julie Liss, and Visar Berisha. "Dysarthria detection based on a deep learning model with a clinically-interpretable layer." JASA Express Letters 3, no. 1 (January 2023): 015201. http://dx.doi.org/10.1121/10.0016833.

Abstract:
Studies have shown deep neural networks (DNN) as a potential tool for classifying dysarthric speakers and controls. However, representations used to train DNNs are largely not clinically interpretable, which limits clinical value. Here, a model with a bottleneck layer is trained to jointly learn a classification label and four clinically-interpretable features. Evaluation of two dysarthria subtypes shows that the proposed method can flexibly trade-off between improved classification accuracy and discovery of clinically-interpretable deficit patterns. The analysis using Shapley additive explanation shows the model learns a representation consistent with the disturbances that define the two dysarthria subtypes considered in this work.
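The Shapley-based analysis mentioned above can be reproduced in spirit with the shap package; the sketch below uses synthetic stand-ins for the four clinically interpretable features and a logistic-regression classifier, not the authors' data or model.

```python
import numpy as np
import shap                                      # assumes the shap package is installed
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # stand-ins for 4 interpretable features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # toy classification label
clf = LogisticRegression().fit(X, y)

explainer = shap.KernelExplainer(clf.predict_proba, X[:50])   # model-agnostic explainer
shap_values = explainer.shap_values(X[:5])                    # per-feature attributions
```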
16

An, Junkang, Yiwan Zhang, and Inwhee Joe. "Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models." Applied Sciences 13, no. 15 (July 29, 2023): 8782. http://dx.doi.org/10.3390/app13158782.

Abstract:
Deep learning researchers believe that as deep learning models evolve, they can perform well on many tasks. However, the complex parameters of deep learning models make it difficult for users to understand how deep learning models make predictions. In this paper, we propose the specific-input local interpretable model-agnostic explanations (LIME) model, a novel explainable artificial intelligence (XAI) method that interprets deep learning models of tabular data. The specific-input process uses feature importance and partial dependence plots (PDPs) to select the “what” and “how”. In our experiments, we first obtain a basic interpretation of the data by simulating user behaviour. Second, we use our approach to understand “which” features deep learning models focus on and how these features affect the model’s predictions. From the experimental results, we find that this approach improves the stability of LIME interpretations, compensates for the problem of LIME only focusing on local interpretations, and achieves a balance between global and local interpretations.
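For orientation, plain LIME for tabular data (the building block this paper extends) is typically invoked as below with the lime package; the random-forest model and synthetic data are placeholders, not the paper's setup.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer   # assumes the lime package
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 1] - X[:, 4] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=[f"f{i}" for i in range(6)],
                                 class_names=["neg", "pos"], mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())                                  # local feature weights
```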
17

Wei, Kaihua, Bojian Chen, Jingcheng Zhang, Shanhui Fan, Kaihua Wu, Guangyu Liu, and Dongmei Chen. "Explainable Deep Learning Study for Leaf Disease Classification." Agronomy 12, no. 5 (April 26, 2022): 1035. http://dx.doi.org/10.3390/agronomy12051035.

Abstract:
Explainable artificial intelligence has been extensively studied recently. However, interpretable methods have not been systematically studied in the agricultural field. We studied the interpretability of deep learning models in different agricultural classification tasks based on the fruit leaves dataset. The purpose is to explore whether the classification model is more inclined to extract the appearance characteristics of leaves or the texture characteristics of leaf lesions during the feature extraction process. The dataset was arranged into three experiments with different categories. In each experiment, the VGG, GoogLeNet, and ResNet models were used, and the ResNet-attention model was applied with three interpretable methods. The results show that the ResNet model has the highest accuracy rate in the three experiments, at 99.11%, 99.4%, and 99.89%, respectively. It is also found that the attention module could improve the feature extraction of the model and clarify the focus of the model in different experiments when extracting features. These results will help agricultural practitioners better apply deep learning models to solve more practical problems.
20

Monje, Leticia, Ramón A. Carrasco, Carlos Rosado, and Manuel Sánchez-Montañés. "Deep Learning XAI for Bus Passenger Forecasting: A Use Case in Spain." Mathematics 10, no. 9 (April 23, 2022): 1428. http://dx.doi.org/10.3390/math10091428.

Abstract:
Time series forecasting of passenger demand is crucial for optimal planning of limited resources. For smart cities, passenger transport in urban areas is an increasingly important problem, because the construction of infrastructure is not the solution and the use of public transport should be encouraged. One of the most sophisticated techniques for time series forecasting is Long Short-Term Memory (LSTM) neural networks. These deep learning models are very powerful for time series forecasting but are not interpretable by humans (black-box models). Our goal was to develop a predictive and linguistically interpretable model, useful for decision making using large volumes of data from different sources. Our case study was one of the most demanded bus lines of Madrid. We obtained an interpretable model from the LSTM neural network using a surrogate model and the 2-tuple fuzzy linguistic model, which improves the linguistic interpretability of the generated Explainable Artificial Intelligence (XAI) model without losing precision.
21

Zhang, Dongdong, Samuel Yang, Xiaohui Yuan, and Ping Zhang. "Interpretable deep learning for automatic diagnosis of 12-lead electrocardiogram." iScience 24, no. 4 (April 2021): 102373. http://dx.doi.org/10.1016/j.isci.2021.102373.

22

Fisher, Thomas, Harry Gibson, Yunzhe Liu, Moloud Abdar, Marius Posa, Gholamreza Salimi-Khorshidi, Abdelaali Hassaine, Yutong Cai, Kazem Rahimi, and Mohammad Mamouei. "Uncertainty-Aware Interpretable Deep Learning for Slum Mapping and Monitoring." Remote Sensing 14, no. 13 (June 26, 2022): 3072. http://dx.doi.org/10.3390/rs14133072.

Abstract:
Over a billion people live in slums, with poor sanitation, education, property rights and working conditions having a direct impact on current residents and future generations. Slum mapping is one of the key problems concerning slums. Policymakers need to delineate slum settlements to make informed decisions about infrastructure development and allocation of aid. A wide variety of machine learning and deep learning methods have been applied to multispectral satellite images to map slums with outstanding performance. Since the physical and visual manifestation of slums significantly varies with geographical region and comprehensive slum maps are rare, it is important to quantify the uncertainty of predictions for reliable and confident application of models to downstream tasks. In this study, we train a U-Net model with Monte Carlo Dropout (MCD) on 13-band Sentinel-2 images, allowing us to calculate pixelwise uncertainty in the predictions. The obtained outcomes show that the proposed model outperforms the previous state-of-the-art model, having both higher AUPRC and lower uncertainty when tested on unseen geographical regions of Mumbai using the regional testing framework introduced in this study. We also use SHapley Additive exPlanations (SHAP) values to investigate how the different features contribute to our model’s predictions, which indicates that a certain shortwave infrared image band is a powerful feature for determining the locations of slums within images. With our results, we demonstrate the usefulness of including an uncertainty quantification approach in detecting slum area changes over time.
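The Monte Carlo Dropout recipe amounts to keeping dropout stochastic at test time and summarising repeated forward passes; a minimal generic sketch follows (any segmentation network with dropout layers, not the authors' U-Net; the 13 input channels merely echo the Sentinel-2 bands).

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Mean prediction and pixelwise uncertainty from stochastic forward passes."""
    model.eval()
    # Switch only the dropout layers back to training mode so they stay stochastic.
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)    # prediction map, uncertainty map

net = torch.nn.Sequential(torch.nn.Conv2d(13, 8, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.Dropout2d(0.5), torch.nn.Conv2d(8, 1, 1))
mean_map, std_map = mc_dropout_predict(net, torch.randn(1, 13, 64, 64))
```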
23

Zokaeinikoo, M., X. Li, and M. Yang. "An interpretable deep learning model to predict symptomatic knee osteoarthritis." Osteoarthritis and Cartilage 29 (April 2021): S354. http://dx.doi.org/10.1016/j.joca.2021.02.459.

24

Wang, Jilong, Rui Li, Renfa Li, Bin Fu, and Danny Z. Chen. "HMCKRAutoEncoder: An Interpretable Deep Learning Framework for Time Series Analysis." IEEE Transactions on Emerging Topics in Computing 10, no. 1 (January 1, 2022): 99–111. http://dx.doi.org/10.1109/tetc.2022.3143154.

25

de la Torre, Jordi, Aida Valls, and Domenec Puig. "A deep learning interpretable classifier for diabetic retinopathy disease grading." Neurocomputing 396 (July 2020): 465–76. http://dx.doi.org/10.1016/j.neucom.2018.07.102.

26

Zhang, Zizhao, Pingjun Chen, Mason McGough, Fuyong Xing, Chunbao Wang, Marilyn Bui, Yuanpu Xie, et al. "Pathologist-level interpretable whole-slide cancer diagnosis with deep learning." Nature Machine Intelligence 1, no. 5 (May 2019): 236–45. http://dx.doi.org/10.1038/s42256-019-0052-1.

27

Rampal, Neelesh, Tom Shand, Adam Wooler, and Christo Rautenbach. "Interpretable Deep Learning Applied to Rip Current Detection and Localization." Remote Sensing 14, no. 23 (November 29, 2022): 6048. http://dx.doi.org/10.3390/rs14236048.

Abstract:
A rip current is a strong, localized current of water which moves along and away from the shore. Recent studies have suggested that drownings due to rip currents are still a major threat to beach safety. Identification of rip currents is important for lifeguards when making decisions on where to designate patrolled areas. The public also require information while deciding where to swim when lifeguards are not on patrol. In the present study we present an artificial intelligence (AI) algorithm that both identifies whether a rip current exists in images/video, and also localizes where that rip current occurs. While there have been some significant advances in AI for rip current detection and localization, there is a lack of research ensuring that an AI algorithm can generalize well to a diverse range of coastal environments and marine conditions. The present study made use of an interpretable AI method, gradient-weighted class-activation maps (Grad-CAM), which is a novel approach for amorphous rip current detection. The training data/images were diverse and encompassed rip currents in a wide variety of environmental settings, ensuring model generalization. An open-access aerial catalogue of rip currents was used for model training. Here, the aerial imagery was also augmented by applying a wide variety of randomized image transformations (e.g., perspective, rotational transforms, and additive noise), which dramatically improves model performance through generalization. To account for diverse environmental settings, a synthetically generated training set, containing fog, shadows, and rain, was also added to the rip current images, thus increasing the training dataset approximately 10-fold. Interpretable AI has dramatically improved the accuracy of unbounded rip current detection, which can correctly classify and localize rip currents about 89% of the time when validated on independent videos from surf-cameras at oblique angles. The novelty also lies in the ability to capture some shape characteristics of the amorphous rip current structure without the need of a predefined bounding box, therefore enabling the use of remote technology like drones. A comparison with well-established coastal image processing techniques is also presented via a short discussion and easy reference table. The strengths and weaknesses of both methods are highlighted and discussed.
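Grad-CAM itself is generic; a minimal, hypothetical PyTorch sketch of the class-activation-map computation is shown below, with a stock ResNet-18 backbone standing in for the detector used in the study (requires a recent torchvision).

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}
# Hooks capture the last conv block's activations and their gradients.
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                         # placeholder image tensor
score = model(x)[0].max()                               # score of the top class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)     # global-average-pooled gradients
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
```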
28

Hua, Xinyun, Lei Cheng, Ting Zhang, and Jianlong Li. "Interpretable deep dictionary learning for sound speed profiles with uncertainties." Journal of the Acoustical Society of America 153, no. 2 (February 2023): 877–94. http://dx.doi.org/10.1121/10.0017099.

Abstract:
Uncertainties abound in sound speed profiles (SSPs) measured/estimated by modern ocean observing systems, which impede the knowledge acquisition and downstream underwater applications. To reduce the SSP uncertainties and draw insights into specific ocean processes, an interpretable deep dictionary learning model is proposed to cater for uncertain SSP processing. In particular, two kinds of SSP uncertainties are considered: measurement errors, which generally exist in the form of Gaussian noises; and the disturbances/anomalies caused by potential ocean dynamics, which occur at some specific depths and durations. To learn the generative patterns of these uncertainties while maintaining the interpretability of the resulting deep model, the adopted scheme first unrolls the classical K-singular value decomposition algorithm into a neural network, and trains this neural network in a supervised learning manner. The training data and model initializations are judiciously designed to incorporate the environmental properties of ocean SSPs. Experimental results demonstrate the superior performance of the proposed method over the classical baseline in mitigating noise corruptions, detecting, and localizing SSP disturbances/anomalies.
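For context, the classical sparse dictionary-learning objective that K-SVD alternates over, and whose iterations the paper unrolls into network layers, is (standard textbook form):

```latex
\min_{\mathbf{D},\,\mathbf{X}} \; \lVert \mathbf{Y} - \mathbf{D}\mathbf{X} \rVert_F^2
\quad \text{subject to} \quad \lVert \mathbf{x}_i \rVert_0 \le T_0 \;\; \text{for all } i ,
```

where, as an assumption for illustration, Y would hold the observed SSPs, D the learned dictionary, and X the sparse codes.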
29

Schmid, Ute, and Bettina Finzel. "Mutual Explanations for Cooperative Decision Making in Medicine." KI - Künstliche Intelligenz 34, no. 2 (January 10, 2020): 227–33. http://dx.doi.org/10.1007/s13218-020-00633-2.

Abstract:
Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine deep learning black-box approaches with interpretable machine learning for the classification of different types of medical images, bringing together the predictive accuracy of deep learning and the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph to allow for interactive learning. Medical experts can ask for verbal explanations. They can correct classification decisions and in addition can also correct the explanations. Thereby, expert knowledge can be taken into account in the form of constraints for model adaptation.
30

Sieusahai, Alexander, and Matthew Guzdial. "Explaining Deep Reinforcement Learning Agents in the Atari Domain through a Surrogate Model." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 17, no. 1 (October 4, 2021): 82–90. http://dx.doi.org/10.1609/aiide.v17i1.18894.

Abstract:
One major barrier to applications of deep Reinforcement Learning (RL) both inside and outside of games is the lack of explainability. In this paper, we describe a lightweight and effective method to derive explanations for deep RL agents, which we evaluate in the Atari domain. Our method relies on a transformation of the pixel-based input of the RL agent to a symbolic, interpretable input representation. We then train a surrogate model, which is itself interpretable, to replicate the behavior of the target, deep RL agent. Our experiments demonstrate that we can learn an effective surrogate that accurately approximates the underlying decision making of a target agent on a suite of Atari games.
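The surrogate-model step is conceptually simple: fit an interpretable model to imitate the agent's action choices on symbolic features. A toy sketch with placeholder data follows (not the Atari feature extractor or agent from the paper).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
symbolic_features = rng.normal(size=(5000, 12))            # e.g. object positions/velocities
agent_actions = (symbolic_features[:, 0] > 0).astype(int)  # stand-in for logged agent actions

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(symbolic_features, agent_actions)
fidelity = surrogate.score(symbolic_features, agent_actions)  # agreement with the black-box agent
print(f"fidelity = {fidelity:.3f}")
print(export_text(surrogate))                                  # human-readable decision rules
```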
31

Deshpande, R. S., and P. V. Ambatkar. "Interpretable Deep Learning Models: Enhancing Transparency and Trustworthiness in Explainable AI." Proceeding International Conference on Science and Engineering 11, no. 1 (February 18, 2023): 1352–63. http://dx.doi.org/10.52783/cienceng.v11i1.286.

Abstract:
Explainable AI (XAI) aims to address the opacity of deep learning models, which can limit their adoption in critical decision-making applications. This paper presents a novel framework that integrates interpretable components and visualization techniques to enhance the transparency and trustworthiness of deep learning models. We propose a hybrid explanation method combining saliency maps, feature attribution, and local interpretable model-agnostic explanations (LIME) to provide comprehensive insights into the model's decision-making process. Our experiments with convolutional neural networks (CNNs) and transformers demonstrate that our approach improves interpretability without compromising performance. User studies with domain experts indicate that our visualization dashboard facilitates better understanding and trust in AI systems. This research contributes to developing more transparent and trustworthy deep learning models, paving the way for broader adoption in sensitive applications where human users need to understand and trust AI decisions.
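Of the ingredients listed, a vanilla gradient saliency map is the simplest to sketch; the toy CNN below is illustrative and unrelated to the models evaluated in the paper.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
x = torch.randn(1, 3, 32, 32, requires_grad=True)
model(x)[0].max().backward()                      # backprop the top-class score
saliency = x.grad.abs().max(dim=1)[0]             # (1, 32, 32) per-pixel importance
```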
32

Li, Wentian, Xidong Feng, Haotian An, Xiang Yao Ng, and Yu-Jin Zhang. "MRI Reconstruction with Interpretable Pixel-Wise Operations Using Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 792–99. http://dx.doi.org/10.1609/aaai.v34i01.5423.

Abstract:
Compressed sensing magnetic resonance imaging (CS-MRI) is a technique aimed at accelerating the data acquisition of MRI. While down-sampling in k-space proportionally reduces the data acquisition time, it results in images corrupted by aliasing artifacts and blur. To reconstruct images from the down-sampled k-space, recent deep-learning based methods have shown better performance compared with classical optimization-based CS-MRI methods. However, they usually use deep neural networks as a black-box, which directly maps the corrupted images to the target images from fully-sampled k-space data. This lack of transparency may impede practical usage of such methods. In this work, we propose a deep reinforcement learning based method to reconstruct the corrupted images with meaningful pixel-wise operations (e.g. edge enhancing filters), so that the reconstruction process is transparent to users. Specifically, MRI reconstruction is formulated as Markov Decision Process with discrete actions and continuous action parameters. We conduct experiments on MICCAI dataset of brain tissues and fastMRI dataset of knee images. Our proposed method performs favorably against previous approaches. Our trained model learns to select pixel-wise operations that correspond to the anatomical structures in the MR images. This makes the reconstruction process more interpretable, which would be helpful for further medical analysis.
33

Verma, Abhinav. "Verifiable and Interpretable Reinforcement Learning through Program Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9902–3. http://dx.doi.org/10.1609/aaai.v33i01.33019902.

Abstract:
We study the problem of generating interpretable and verifiable policies for Reinforcement Learning (RL). Unlike the popular Deep Reinforcement Learning (DRL) paradigm, in which the policy is represented by a neural network, the aim of this work is to find policies that can be represented in high-level programming languages. Such programmatic policies have several benefits, including being more easily interpreted than neural networks, and being amenable to verification by scalable symbolic methods. The generation methods for programmatic policies also provide a mechanism for systematically using domain knowledge for guiding the policy search. The interpretability and verifiability of these policies provides the opportunity to deploy RL-based solutions in safety-critical environments. This thesis draws on, and extends, work from both the machine learning and formal methods communities.
34

Lyu, Daoming, Fangkai Yang, Bo Liu, and Steven Gustafson. "SDRL: Interpretable and Data-Efficient Deep Reinforcement Learning Leveraging Symbolic Planning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2970–77. http://dx.doi.org/10.1609/aaai.v33i01.33012970.

Abstract:
Deep reinforcement learning (DRL) has gained great success by learning directly from high-dimensional sensory inputs, yet is notorious for the lack of interpretability. Interpretability of the subtasks is critical in hierarchical decision-making, as it increases the transparency of the black-box-style DRL approach and helps the RL practitioners to understand the high-level behavior of the system better. In this paper, we introduce symbolic planning into DRL and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can handle both high-dimensional sensory inputs and symbolic planning. The task-level interpretability is enabled by relating symbolic actions to options. This framework features a planner – controller – meta-controller architecture, which takes charge of subtask scheduling, data-driven subtask learning, and subtask evaluation, respectively. The three components cross-fertilize each other and eventually converge to an optimal symbolic plan along with the learned subtasks, bringing together the advantages of long-term planning capability with symbolic knowledge and end-to-end reinforcement learning directly from a high-dimensional sensory input. Experimental results validate the interpretability of subtasks, along with improved data efficiency compared with state-of-the-art approaches.
35

Zhang, Ting-He, Md Musaddaqul Hasib, Yu-Chiao Chiu, Zhi-Feng Han, Yu-Fang Jin, Mario Flores, Yidong Chen, and Yufei Huang. "Transformer for Gene Expression Modeling (T-GEM): An Interpretable Deep Learning Model for Gene Expression-Based Phenotype Predictions." Cancers 14, no. 19 (September 29, 2022): 4763. http://dx.doi.org/10.3390/cancers14194763.

Abstract:
Deep learning has been applied in precision oncology to address a variety of gene expression-based phenotype predictions. However, gene expression data’s unique characteristics challenge the computer vision-inspired design of popular Deep Learning (DL) models such as Convolutional Neural Networks (CNNs) and call for the development of interpretable DL models tailored for transcriptomics studies. To address the current challenges in developing an interpretable DL model for modeling gene expression data, we propose a novel interpretable deep learning architecture called T-GEM, or Transformer for Gene Expression Modeling. We provided the detailed T-GEM model for modeling gene–gene interactions and demonstrated its utility for gene expression-based predictions of cancer-related phenotypes, including cancer type prediction and immune cell type classification. We carefully analyzed the learning mechanism of T-GEM and showed that the first layer has broader attention while higher layers focus more on phenotype-related genes. We also showed that T-GEM’s self-attention could capture important biological functions associated with the predicted phenotypes. We further devised a method to extract the regulatory network that T-GEM learns by exploiting the attributions of self-attention weights for classifications and showed that the network hub genes were likely markers for the predicted phenotypes.
36

Michau, Gabriel, Chi-Ching Hsu, and Olga Fink. "Interpretable Detection of Partial Discharge in Power Lines with Deep Learning." Sensors 21, no. 6 (March 19, 2021): 2154. http://dx.doi.org/10.3390/s21062154.

Abstract:
Partial discharge (PD) is a common indication of faults in power systems, such as generators and cables. These PDs can eventually result in costly repairs and substantial power outages. PD detection traditionally relies on hand-crafted features and domain expertise to identify very specific pulses in the electrical current, and the performance declines in the presence of noise or of superposed pulses. In this paper, we propose a novel end-to-end framework based on convolutional neural networks. The framework has two contributions: First, it does not require any feature extraction and enables robust PD detection. Second, we devise the pulse activation map. It provides interpretability of the results for the domain experts with the identification of the pulses that led to the detection of the PDs. The performance is evaluated on a public dataset for the detection of damaged power lines. An ablation study demonstrates the benefits of each part of the proposed framework.
37

Monga, Vishal, Yuelong Li, and Yonina C. Eldar. "Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing." IEEE Signal Processing Magazine 38, no. 2 (March 2021): 18–44. http://dx.doi.org/10.1109/msp.2020.3016905.

38

Isleyen, Ergin, Sebnem Duzgun, and R. McKell Carter. "Interpretable deep learning for roof fall hazard detection in underground mines." Journal of Rock Mechanics and Geotechnical Engineering 13, no. 6 (December 2021): 1246–55. http://dx.doi.org/10.1016/j.jrmge.2021.09.005.

39

Vinuesa, Ricardo, and Beril Sirmacek. "Interpretable deep-learning models to help achieve the Sustainable Development Goals." Nature Machine Intelligence 3, no. 11 (November 2021): 926. http://dx.doi.org/10.1038/s42256-021-00414-y.

40

Hammelman, Jennifer, and David K. Gifford. "Discovering differential genome sequence activity with interpretable and efficient deep learning." PLOS Computational Biology 17, no. 8 (August 9, 2021): e1009282. http://dx.doi.org/10.1371/journal.pcbi.1009282.

Abstract:
Discovering sequence features that differentially direct cells to alternate fates is key to understanding both cellular development and the consequences of disease-related mutations. We introduce Expected Pattern Effect and Differential Expected Pattern Effect, two black-box methods that can interpret genome regulatory sequences for cell-type-specific or condition-specific patterns. We show that these methods identify relevant transcription factor motifs and spacings that are predictive of cell state-specific chromatin accessibility. Finally, we integrate these methods into a framework that is readily accessible to non-experts and available for download as a binary or installed via PyPI or bioconda at https://cgs.csail.mit.edu/deepaccess-package/.
41

Zia, Tehseen, Nauman Bashir, Mirza Ahsan Ullah, and Shakeeb Murtaza. "SoFTNet: A concept-controlled deep learning architecture for interpretable image classification." Knowledge-Based Systems 240 (March 2022): 108066. http://dx.doi.org/10.1016/j.knosys.2021.108066.

42

Gao, Xinjian, Tingting Mu, John Yannis Goulermas, Jeyarajan Thiyagalingam, and Meng Wang. "An Interpretable Deep Architecture for Similarity Learning Built Upon Hierarchical Concepts." IEEE Transactions on Image Processing 29 (2020): 3911–26. http://dx.doi.org/10.1109/tip.2020.2965275.

43

Caicedo-Torres, William, and Jairo Gutierrez. "ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU." Journal of Biomedical Informatics 98 (October 2019): 103269. http://dx.doi.org/10.1016/j.jbi.2019.103269.

44

Atutxa, Aitziber, Arantza Díaz de Ilarraza, Koldo Gojenola, Maite Oronoz, and Olatz Perez-de-Viñaspre. "Interpretable deep learning to map diagnostic texts to ICD-10 codes." International Journal of Medical Informatics 129 (September 2019): 49–59. http://dx.doi.org/10.1016/j.ijmedinf.2019.05.015.

45

Abid, Firas Ben, Marwen Sallem, and Ahmed Braham. "Robust Interpretable Deep Learning for Intelligent Fault Diagnosis of Induction Motors." IEEE Transactions on Instrumentation and Measurement 69, no. 6 (June 2020): 3506–15. http://dx.doi.org/10.1109/tim.2019.2932162.

46

Jha, Manoj, Akshay Kumar Kawale, and Chandan Kumar Verma. "Interpretable Model for Antibiotic Resistance Prediction in Bacteria using Deep Learning." Biomedical and Pharmacology Journal 10, no. 4 (December 25, 2017): 1963–68. http://dx.doi.org/10.13005/bpj/1316.

47

Shamsuzzaman, Md. "Explainable and Interpretable Deep Learning Models." Global Journal of Engineering Sciences 5, no. 5 (June 9, 2020). http://dx.doi.org/10.33552/gjes.2020.05.000621.

48

Ahsan, Md Manjurul, Md Shahin Ali, Md Mehedi Hassan, Tareque Abu Abdullah, Kishor Datta Gupta, Ulas Bagci, Chetna Kaushal, and Naglaa F. Soliman. "Monkeypox Diagnosis with Interpretable Deep Learning." IEEE Access, 2023, 1. http://dx.doi.org/10.1109/access.2023.3300793.

49

Delaunay, Antoine, and Hannah M. Christensen. "Interpretable Deep Learning for Probabilistic MJO Prediction." Geophysical Research Letters, August 24, 2022. http://dx.doi.org/10.1029/2022gl098566.

50

Ahn, Daehwan, Dokyun Lee, and Kartik Hosanagar. "Interpretable Deep Learning Approach to Churn Management." SSRN Electronic Journal, 2020. http://dx.doi.org/10.2139/ssrn.3981160.
