Journal articles on the topic "Interpretable deep learning"

Consult the top 50 journal articles for your research on the topic "Interpretable deep learning".

You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles from a wide variety of disciplines and organize your bibliography correctly.

1

Gangopadhyay, Tryambak, Sin Yong Tan, Anthony LoCurto, James B. Michael, and Soumik Sarkar. "Interpretable Deep Learning for Monitoring Combustion Instability". IFAC-PapersOnLine 53, no. 2 (2020): 832–37. http://dx.doi.org/10.1016/j.ifacol.2020.12.839.
2

Zheng, Hong, Yinglong Dai, Fumin Yu, and Yuezhen Hu. "Interpretable Saliency Map for Deep Reinforcement Learning". Journal of Physics: Conference Series 1757, no. 1 (January 1, 2021): 012075. http://dx.doi.org/10.1088/1742-6596/1757/1/012075.
3

Ruffolo, Jeffrey A., Jeremias Sulam, and Jeffrey J. Gray. "Antibody structure prediction using interpretable deep learning". Patterns 3, no. 2 (February 2022): 100406. http://dx.doi.org/10.1016/j.patter.2021.100406.
4

Arik, Sercan Ö., and Tomas Pfister. "TabNet: Attentive Interpretable Tabular Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6679–87. http://dx.doi.org/10.1609/aaai.v35i8.16826.

Abstract
We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into its global behavior. Finally, we demonstrate self-supervised learning for tabular data, significantly improving performance when unlabeled data is abundant.
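For readers who want a concrete picture of the mechanism, the following is a minimal sketch of one TabNet-style decision step in PyTorch. The class and variable names are illustrative, and softmax stands in for the sparsemax mask used in the paper; this is not the authors' implementation.

```python
# Minimal sketch of one TabNet-style decision step: an attentive transformer
# produces a per-feature mask that both selects features for this step and
# serves as a feature-importance explanation. TabNet itself uses sparsemax and
# a prior-scale term; softmax is used here as a simpler stand-in.
import torch
import torch.nn as nn

class AttentiveStep(nn.Module):
    def __init__(self, n_features, n_hidden):
        super().__init__()
        self.attentive = nn.Linear(n_hidden, n_features)   # produces mask logits
        self.feature_transform = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU()
        )

    def forward(self, x, state):
        mask = torch.softmax(self.attentive(state), dim=-1)  # feature-selection mask
        masked_x = x * mask                                   # attend to selected features
        new_state = self.feature_transform(masked_x)
        return new_state, mask                                # mask doubles as explanation

x = torch.randn(8, 16)                 # batch of 8 rows, 16 tabular features
step = AttentiveStep(n_features=16, n_hidden=32)
state, mask = step(x, torch.zeros(8, 32))
print(mask.sum(dim=-1))                # each row's mask sums to 1
```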
5

Bhambhoria, Rohan, Hui Liu, Samuel Dahan, and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.

Abstract
Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision making areas, the requirement for interpretability is of crucial importance. Current models utilized by legal practitioners are more of the conventional machine learning type, wherein they are inherently interpretable, yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically, we introduce a model-agnostic interpretable intermediate layer, a technique which proves to be effective for legal documents. Furthermore, we utilize weakly supervised learning by means of a curriculum learning strategy, effectively demonstrating the improved performance of a deep learning model. This is in contrast to the conventional models which are only able to utilize the limited number of expensive manually-annotated samples by legal experts. Although the methods presented in this work tackles the task of risk of confusion for trademarks, it is straightforward to extend them to other fields of law, or more generally, to other similar high-stakes application scenarios.
6

Lin, Chih-Hsu, and Olivier Lichtarge. "Using interpretable deep learning to model cancer dependencies". Bioinformatics 37, no. 17 (May 27, 2021): 2675–81. http://dx.doi.org/10.1093/bioinformatics/btab137.

Abstract
Motivation: Cancer dependencies provide potential drug targets. Unfortunately, dependencies differ among cancers and even individuals. To this end, visible neural networks (VNNs) are promising due to robust performance and the interpretability required for the biomedical field. Results: We design Biological visible neural network (BioVNN) using pathway knowledge to predict cancer dependencies. Despite having fewer parameters, BioVNN marginally outperforms traditional neural networks (NNs) and converges faster. BioVNN also outperforms an NN based on randomized pathways. More importantly, dependency predictions can be explained by correlating with the neuron output states of relevant pathways, which suggest dependency mechanisms. In feature importance analysis, BioVNN recapitulates known reaction partners and proposes new ones. Such robust and interpretable VNNs may facilitate the understanding of cancer dependency and the development of targeted therapies. Availability and implementation: Code and data are available at https://github.com/LichtargeLab/BioVNN. Supplementary information: Supplementary data are available at Bioinformatics online.
7

Liao, WangMin, BeiJi Zou, RongChang Zhao, YuanQiong Chen, ZhiYou He, and MengJie Zhou. "Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis". IEEE Journal of Biomedical and Health Informatics 24, no. 5 (May 2020): 1405–12. http://dx.doi.org/10.1109/jbhi.2019.2949075.
8

Matsubara, Takashi. "Bayesian deep learning: A model-based interpretable approach". Nonlinear Theory and Its Applications, IEICE 11, no. 1 (2020): 16–35. http://dx.doi.org/10.1587/nolta.11.16.
9

Liu, Yi, Kenneth Barr, and John Reinitz. "Fully interpretable deep learning model of transcriptional control". Bioinformatics 36, Supplement_1 (July 1, 2020): i499–i507. http://dx.doi.org/10.1093/bioinformatics/btaa506.

Abstract
Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent works in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a 'black box' approach in which the internal structure of the model used is set purely by machine learning considerations with little consideration of representing the internal structure of the biological system by the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modeling of transcriptional control in which mRNA production is controlled by the binding of specific transcription factors to DNA, in part because such models are in part formulated in terms of specific chemical equations that appear different in form from those used in neural networks. Results: In this paper, we give an example of a DNN which can model the detailed control of transcription in a precise and predictive manner. Its internal structure is fully interpretable and is faithful to underlying chemistry of transcription factor binding to DNA. We derive our DNN from a systems biology model that was not previously recognized as having a DNN structure. Although we apply our DNN to data from the early embryo of the fruit fly Drosophila, this system serves as a test bed for analysis of much larger data sets obtained by systems biology studies on a genomic scale. Availability and implementation: The implementation and data for the models used in this paper are in a zip file in the supplementary material. Supplementary information: Supplementary data are available at Bioinformatics online.
10

Brinkrolf, Johannes, and Barbara Hammer. "Interpretable machine learning with reject option". at - Automatisierungstechnik 66, no. 4 (April 25, 2018): 283–90. http://dx.doi.org/10.1515/auto-2017-0123.

Abstract
Classification by means of machine learning models constitutes one relevant technology in process automation and predictive maintenance. However, common techniques such as deep networks or random forests suffer from their black box characteristics and possible adversarial examples. In this contribution, we give an overview about a popular alternative technology from machine learning, namely modern variants of learning vector quantization, which, due to their combined discriminative and generative nature, incorporate interpretability and the possibility of explicit reject options for irregular samples. We give an explicit bound on minimum changes required for a change of the classification in case of LVQ networks with reject option, and we demonstrate the efficiency of reject options in two examples.
11

Zinemanas, Pablo, Martín Rocamora, Marius Miron, Frederic Font, and Xavier Serra. "An Interpretable Deep Learning Model for Automatic Sound Classification". Electronics 10, no. 7 (April 2, 2021): 850. http://dx.doi.org/10.3390/electronics10070850.

Abstract
Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as being susceptible to adversarial attacks or the reinforcement of biases. There is still a lack of research in the audio domain, despite the increasing interest in developing deep learning models that provide explanations of their decisions. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification, which explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space. We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results that are comparable to that of the state-of-the-art methods in three different sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods to prune the proposed model that exploit its interpretability. Our system is open source and it is accompanied by a web application for the manual editing of the model, which allows for a human-in-the-loop debugging approach.
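A minimal sketch of the prototype idea described in the abstract, assuming a generic PyTorch encoder; the frequency-dependent similarity measure and multi-resolution features of the published model are omitted, and all names are illustrative.

```python
# Sketch of a prototype layer: the classifier scores an input by its similarity
# to learned prototypes in a latent space, so each prediction can be explained
# by its closest prototypes. Not the published model's code.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, latent_dim, n_prototypes, n_classes):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, z):
        # squared Euclidean distance from each embedding to each prototype
        d2 = torch.cdist(z, self.prototypes).pow(2)
        sim = torch.exp(-d2)              # distances -> similarities in (0, 1]
        return self.classifier(sim), sim  # similarities are the explanation

z = torch.randn(4, 64)                    # embeddings from an audio encoder (stand-in)
layer = PrototypeLayer(latent_dim=64, n_prototypes=10, n_classes=5)
logits, sim = layer(z)
print(sim.argmax(dim=-1))                 # index of the closest prototype per input
```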
12

Gagne II, David John, Sue Ellen Haupt, Douglas W. Nychka, and Gregory Thompson. "Interpretable Deep Learning for Spatial Analysis of Severe Hailstorms". Monthly Weather Review 147, no. 8 (July 17, 2019): 2827–45. http://dx.doi.org/10.1175/mwr-d-18-0316.1.

Abstract
Deep learning models, such as convolutional neural networks, utilize multiple specialized layers to encode spatial patterns at different scales. In this study, deep learning models are compared with standard machine learning approaches on the task of predicting the probability of severe hail based on upper-air dynamic and thermodynamic fields from a convection-allowing numerical weather prediction model. The data for this study come from patches surrounding storms identified in NCAR convection-allowing ensemble runs from 3 May to 3 June 2016. The machine learning models are trained to predict whether the simulated surface hail size from the Thompson hail size diagnostic exceeds 25 mm over the hour following storm detection. A convolutional neural network is compared with logistic regressions using input variables derived from either the spatial means of each field or principal component analysis. The convolutional neural network statistically significantly outperforms all other methods in terms of Brier skill score and area under the receiver operator characteristic curve. Interpretation of the convolutional neural network through feature importance and feature optimization reveals that the network synthesized information about the environment and storm morphology that is consistent with our understanding of hail growth, including large lapse rates and a wind shear profile that favors wide updrafts. Different neurons in the network also record different storm modes, and the magnitude of the output of those neurons is used to analyze the spatiotemporal distributions of different storm modes in the NCAR ensemble.
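For reference, a small sketch of the Brier score and Brier skill score used to compare the models above, computed on made-up values rather than the study's data.

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Brier skill score: skill relative to a climatological (base-rate) forecast.
import numpy as np

def brier_score(p, o):
    return np.mean((p - o) ** 2)

def brier_skill_score(p, o):
    bs = brier_score(p, o)
    bs_climo = brier_score(np.full_like(p, o.mean()), o)  # climatology reference
    return 1.0 - bs / bs_climo

p = np.array([0.9, 0.2, 0.7, 0.1, 0.6])   # forecast probabilities of severe hail
o = np.array([1.0, 0.0, 1.0, 0.0, 0.0])   # observed outcomes (hail / no hail)
print(brier_score(p, o), brier_skill_score(p, o))
```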
13

Abdel-Basset, Mohamed, Hossam Hawash, Khalid Abdulaziz Alnowibet, Ali Wagdy Mohamed, and Karam M. Sallam. "Interpretable Deep Learning for Discriminating Pneumonia from Lung Ultrasounds". Mathematics 10, no. 21 (November 6, 2022): 4153. http://dx.doi.org/10.3390/math10214153.

Abstract
Lung ultrasound images have shown great promise to be an operative point-of-care test for the diagnosis of COVID-19 because of the ease of procedure with negligible individual protection equipment, together with relaxed disinfection. Deep learning (DL) is a robust tool for modeling infection patterns from medical images; however, the existing COVID-19 detection models are complex and thereby are hard to deploy in frequently used mobile platforms in point-of-care testing. Moreover, most of the COVID-19 detection models in the existing literature on DL are implemented as a black box, hence, they are hard to be interpreted or trusted by the healthcare community. This paper presents a novel interpretable DL framework discriminating COVID-19 infection from other cases of pneumonia and normal cases using ultrasound data of patients. In the proposed framework, novel transformer modules are introduced to model the pathological information from ultrasound frames using an improved window-based multi-head self-attention layer. A convolutional patching module is introduced to transform input frames into latent space rather than partitioning input into patches. A weighted pooling module is presented to score the embeddings of the disease representations obtained from the transformer modules to attend to information that is most valuable for the screening decision. Experimental analysis of the public three-class lung ultrasound dataset (PCUS dataset) demonstrates the discriminative power (Accuracy: 93.4%, F1-score: 93.1%, AUC: 97.5%) of the proposed solution overcoming the competing approaches while maintaining low complexity. The proposed model obtained very promising results in comparison with the rival models. More importantly, it gives explainable outputs therefore, it can serve as a candidate tool for empowering the sustainable diagnosis of COVID-19-like diseases in smart healthcare.
14

Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.

Abstract
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information theoretic principle, information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness), and informative about a decision made by a black-box system on that input (comprehensive). We evaluate VIBI on three datasets and compare with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity evaluated by human and quantitative metrics.
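A minimal sketch of the generic variational information-bottleneck trade-off that VIBI builds on, not the paper's exact implementation; all tensors below are placeholders.

```python
# Generic VIB-style objective: keep the explanation z informative about the
# black-box output y (cross-entropy) while compressing z about the input x
# (KL term against a standard normal prior). Sketch only.
import torch
import torch.nn.functional as F

def vib_loss(logits, y, mu, logvar, beta=1e-3):
    ce = F.cross_entropy(logits, y)                       # informativeness about y
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # briefness about x
    return ce + beta * kl

logits = torch.randn(8, 3)                 # predictions decoded from sampled z
y = torch.randint(0, 3, (8,))              # black-box decisions to be explained
mu, logvar = torch.randn(8, 16), torch.randn(8, 16)
print(vib_loss(logits, y, mu, logvar))
```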
15

Xu, Lingfeng, Julie Liss, and Visar Berisha. "Dysarthria detection based on a deep learning model with a clinically-interpretable layer". JASA Express Letters 3, no. 1 (January 2023): 015201. http://dx.doi.org/10.1121/10.0016833.

Abstract
Studies have shown deep neural networks (DNN) as a potential tool for classifying dysarthric speakers and controls. However, representations used to train DNNs are largely not clinically interpretable, which limits clinical value. Here, a model with a bottleneck layer is trained to jointly learn a classification label and four clinically-interpretable features. Evaluation of two dysarthria subtypes shows that the proposed method can flexibly trade-off between improved classification accuracy and discovery of clinically-interpretable deficit patterns. The analysis using Shapley additive explanation shows the model learns a representation consistent with the disturbances that define the two dysarthria subtypes considered in this work.
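A hedged sketch of the kind of Shapley-value analysis mentioned above, using the public shap package on a stand-in classifier; the four features and the model are placeholders, not the paper's bottleneck network.

```python
# Attribute a model's predictions to four clinically motivated features with
# KernelSHAP. The data, labels, and model here are synthetic stand-ins.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # stand-in for 4 clinical features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)       # stand-in labels
model = LogisticRegression().fit(X, y)

explainer = shap.KernelExplainer(model.predict_proba, X[:50])  # background sample
shap_values = explainer.shap_values(X[:5])           # per-feature attributions
print(np.asarray(shap_values).shape)                 # layout depends on shap version
```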
16

An, Junkang, Yiwan Zhang, and Inwhee Joe. "Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models". Applied Sciences 13, no. 15 (July 29, 2023): 8782. http://dx.doi.org/10.3390/app13158782.

Abstract
Deep learning researchers believe that as deep learning models evolve, they can perform well on many tasks. However, the complex parameters of deep learning models make it difficult for users to understand how deep learning models make predictions. In this paper, we propose the specific-input local interpretable model-agnostic explanations (LIME) model, a novel interpretable artificial intelligence (XAI) method that interprets deep learning models of tabular data. The specific-input process uses feature importance and partial dependency plots (PDPs) to select the “what” and “how”. In our experiments, we first obtain a basic interpretation of the data by simulating user behaviour. Second, we use our approach to understand “which” features deep learning models focus on and how these features affect the model’s predictions. From the experimental results, we find that this approach improves the stability of LIME interpretations, compensates for the problem of LIME only focusing on local interpretations, and achieves a balance between global and local interpretations.
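A hedged sketch of the underlying LIME call on tabular data using the public lime package; the paper's specific-input step (pre-selecting features with feature importance and PDPs) is not reproduced, and the dataset and model below are placeholders.

```python
# Local explanation of a single tabular prediction with LIME. A random forest
# stands in for the deep model being explained.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 1] - X[:, 4] > 0).astype(int)              # stand-in tabular task
model = RandomForestClassifier(n_estimators=50).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=[f"f{i}" for i in range(6)], mode="classification"
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())                         # local feature contributions
```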
17

Wei, Kaihua, Bojian Chen, Jingcheng Zhang, Shanhui Fan, Kaihua Wu, Guangyu Liu, and Dongmei Chen. "Explainable Deep Learning Study for Leaf Disease Classification". Agronomy 12, no. 5 (April 26, 2022): 1035. http://dx.doi.org/10.3390/agronomy12051035.

Abstract
Explainable artificial intelligence has been extensively studied recently. However, the research of interpretable methods in the agricultural field has not been systematically studied. We studied the interpretability of deep learning models in different agricultural classification tasks based on the fruit leaves dataset. The purpose is to explore whether the classification model is more inclined to extract the appearance characteristics of leaves or the texture characteristics of leaf lesions during the feature extraction process. The dataset was arranged into three experiments with different categories. In each experiment, the VGG, GoogLeNet, and ResNet models were used and the ResNet-attention model was applied with three interpretable methods. The results show that the ResNet model has the highest accuracy rate in the three experiments, which are 99.11%, 99.4%, and 99.89%, respectively. It is also found that the attention module could improve the feature extraction of the model, and clarify the focus of the model in different experiments when extracting features. These results will help agricultural practitioners better apply deep learning models to solve more practical problems.
20

Monje, Leticia, Ramón A. Carrasco, Carlos Rosado, and Manuel Sánchez-Montañés. "Deep Learning XAI for Bus Passenger Forecasting: A Use Case in Spain". Mathematics 10, no. 9 (April 23, 2022): 1428. http://dx.doi.org/10.3390/math10091428.

Abstract
Time series forecasting of passenger demand is crucial for optimal planning of limited resources. For smart cities, passenger transport in urban areas is an increasingly important problem, because the construction of infrastructure is not the solution and the use of public transport should be encouraged. One of the most sophisticated techniques for time series forecasting is Long Short Term Memory (LSTM) neural networks. These deep learning models are very powerful for time series forecasting but are not interpretable by humans (black-box models). Our goal was to develop a predictive and linguistically interpretable model, useful for decision making using large volumes of data from different sources. Our case study was one of the most demanded bus lines of Madrid. We obtained an interpretable model from the LSTM neural network using a surrogate model and the 2-tuple fuzzy linguistic model, which improves the linguistic interpretability of the generated Explainable Artificial Intelligent (XAI) model without losing precision.
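A minimal sketch of the surrogate-model idea, assuming scikit-learn is available: fit an interpretable model to the black box's own predictions and report fidelity. The paper's 2-tuple fuzzy linguistic step is not reproduced, and the features and forecasts below are invented.

```python
# Global surrogate: distill a black-box forecaster's outputs into a shallow,
# readable decision tree and check how faithfully the tree reproduces them.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # e.g. lagged demand, weekday, weather
black_box_pred = np.sin(X[:, 0]) + 0.3 * X[:, 1]   # stand-in for LSTM forecasts

surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box_pred)
print(export_text(surrogate, feature_names=["lag_demand", "weekday", "weather"]))
print("fidelity R^2:", surrogate.score(X, black_box_pred))
```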
21

Zhang, Dongdong, Samuel Yang, Xiaohui Yuan, and Ping Zhang. "Interpretable deep learning for automatic diagnosis of 12-lead electrocardiogram". iScience 24, no. 4 (April 2021): 102373. http://dx.doi.org/10.1016/j.isci.2021.102373.
22

Fisher, Thomas, Harry Gibson, Yunzhe Liu, Moloud Abdar, Marius Posa, Gholamreza Salimi-Khorshidi, Abdelaali Hassaine, Yutong Cai, Kazem Rahimi, and Mohammad Mamouei. "Uncertainty-Aware Interpretable Deep Learning for Slum Mapping and Monitoring". Remote Sensing 14, no. 13 (June 26, 2022): 3072. http://dx.doi.org/10.3390/rs14133072.

Abstract
Over a billion people live in slums, with poor sanitation, education, property rights and working conditions having a direct impact on current residents and future generations. Slum mapping is one of the key problems concerning slums. Policymakers need to delineate slum settlements to make informed decisions about infrastructure development and allocation of aid. A wide variety of machine learning and deep learning methods have been applied to multispectral satellite images to map slums with outstanding performance. Since the physical and visual manifestation of slums significantly varies with geographical region and comprehensive slum maps are rare, it is important to quantify the uncertainty of predictions for reliable and confident application of models to downstream tasks. In this study, we train a U-Net model with Monte Carlo Dropout (MCD) on 13-band Sentinel-2 images, allowing us to calculate pixelwise uncertainty in the predictions. The obtained outcomes show that the proposed model outperforms the previous state-of-the-art model, having both higher AUPRC and lower uncertainty when tested on unseen geographical regions of Mumbai using the regional testing framework introduced in this study. We also use SHapley Additive exPlanations (SHAP) values to investigate how the different features contribute to our model’s predictions which indicate a certain shortwave infrared image band is a powerful feature for determining the locations of slums within images. With our results, we demonstrate the usefulness of including an uncertainty quantification approach in detecting slum area changes over time.
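A minimal sketch of Monte Carlo Dropout inference as described above, with a toy PyTorch network standing in for the paper's U-Net; layer sizes, band count, and sample counts are illustrative.

```python
# MC Dropout: keep dropout active at test time, run several stochastic forward
# passes, and use the per-pixel variance across passes as an uncertainty map.
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    model.eval()
    for m in model.modules():                 # re-enable only the dropout layers
        if isinstance(m, nn.Dropout2d):
            m.train()
    with torch.no_grad():
        samples = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)   # prediction, pixelwise uncertainty

# Tiny stand-in segmentation network with dropout (not the paper's U-Net)
model = nn.Sequential(
    nn.Conv2d(13, 8, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
    nn.Conv2d(8, 1, 1),
)
x = torch.randn(1, 13, 64, 64)                # 13-band Sentinel-2 patch (random data)
mean, var = mc_dropout_predict(model, x)
print(mean.shape, var.max().item())
```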
23

Zokaeinikoo, M., X. Li, and M. Yang. "An interpretable deep learning model to predict symptomatic knee osteoarthritis". Osteoarthritis and Cartilage 29 (April 2021): S354. http://dx.doi.org/10.1016/j.joca.2021.02.459.
24

Wang, Jilong, Rui Li, Renfa Li, Bin Fu, and Danny Z. Chen. "HMCKRAutoEncoder: An Interpretable Deep Learning Framework for Time Series Analysis". IEEE Transactions on Emerging Topics in Computing 10, no. 1 (January 1, 2022): 99–111. http://dx.doi.org/10.1109/tetc.2022.3143154.
25

de la Torre, Jordi, Aida Valls, and Domenec Puig. "A deep learning interpretable classifier for diabetic retinopathy disease grading". Neurocomputing 396 (July 2020): 465–76. http://dx.doi.org/10.1016/j.neucom.2018.07.102.
26

Zhang, Zizhao, Pingjun Chen, Mason McGough, Fuyong Xing, Chunbao Wang, Marilyn Bui, Yuanpu Xie, et al. "Pathologist-level interpretable whole-slide cancer diagnosis with deep learning". Nature Machine Intelligence 1, no. 5 (May 2019): 236–45. http://dx.doi.org/10.1038/s42256-019-0052-1.
27

Rampal, Neelesh, Tom Shand, Adam Wooler, and Christo Rautenbach. "Interpretable Deep Learning Applied to Rip Current Detection and Localization". Remote Sensing 14, no. 23 (November 29, 2022): 6048. http://dx.doi.org/10.3390/rs14236048.

Abstract
A rip current is a strong, localized current of water which moves along and away from the shore. Recent studies have suggested that drownings due to rip currents are still a major threat to beach safety. Identification of rip currents is important for lifeguards when making decisions on where to designate patrolled areas. The public also require information while deciding where to swim when lifeguards are not on patrol. In the present study we present an artificial intelligence (AI) algorithm that both identifies whether a rip current exists in images/video, and also localizes where that rip current occurs. While there have been some significant advances in AI for rip current detection and localization, there is a lack of research ensuring that an AI algorithm can generalize well to a diverse range of coastal environments and marine conditions. The present study made use of an interpretable AI method, gradient-weighted class-activation maps (Grad-CAM), which is a novel approach for amorphous rip current detection. The training data/images were diverse and encompass rip currents in a wide variety of environmental settings, ensuring model generalization. An open-access aerial catalogue of rip currents were used for model training. Here, the aerial imagery was also augmented by applying a wide variety of randomized image transformations (e.g., perspective, rotational transforms, and additive noise), which dramatically improves model performance through generalization. To account for diverse environmental settings, a synthetically generated training set, containing fog, shadows, and rain, was also added to the rip current images, thus increased the training dataset approximately 10-fold. Interpretable AI has dramatically improved the accuracy of unbounded rip current detection, which can correctly classify and localize rip currents about 89% of the time when validated on independent videos from surf-cameras at oblique angles. The novelty also lies in the ability to capture some shape characteristics of the amorphous rip current structure without the need of a predefined bounding box, therefore enabling the use of remote technology like drones. A comparison with well-established coastal image processing techniques is also presented via a short discussion and easy reference table. The strengths and weaknesses of both methods are highlighted and discussed.
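A minimal Grad-CAM sketch in PyTorch on a generic, untrained ResNet backbone, not the study's trained detector; it shows the gradient-weighted combination of convolutional activations that produces the localization heat map described above.

```python
# Grad-CAM: pool the gradients of the class score over each channel of a chosen
# convolutional layer, weight that layer's activations by them, and keep the
# positive part as a coarse localization map.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()                  # untrained backbone, illustration only
feats, grads = {}, {}
layer = model.layer4                                   # last convolutional block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)    # stand-in beach image
score = model(x)[0].max()                              # score of the predicted class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)    # pooled gradients per channel
cam = F.relu((weights * feats["a"]).sum(dim=1))        # weighted sum of activations
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False)
print(cam.shape)                                       # (1, 1, 224, 224) heat map
```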
28

Hua, Xinyun, Lei Cheng, Ting Zhang, and Jianlong Li. "Interpretable deep dictionary learning for sound speed profiles with uncertainties". Journal of the Acoustical Society of America 153, no. 2 (February 2023): 877–94. http://dx.doi.org/10.1121/10.0017099.

Abstract
Uncertainties abound in sound speed profiles (SSPs) measured/estimated by modern ocean observing systems, which impede the knowledge acquisition and downstream underwater applications. To reduce the SSP uncertainties and draw insights into specific ocean processes, an interpretable deep dictionary learning model is proposed to cater for uncertain SSP processing. In particular, two kinds of SSP uncertainties are considered: measurement errors, which generally exist in the form of Gaussian noises; and the disturbances/anomalies caused by potential ocean dynamics, which occur at some specific depths and durations. To learn the generative patterns of these uncertainties while maintaining the interpretability of the resulting deep model, the adopted scheme first unrolls the classical K-singular value decomposition algorithm into a neural network, and trains this neural network in a supervised learning manner. The training data and model initializations are judiciously designed to incorporate the environmental properties of ocean SSPs. Experimental results demonstrate the superior performance of the proposed method over the classical baseline in mitigating noise corruptions, detecting, and localizing SSP disturbances/anomalies.
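As an illustration of the unrolling idea, the following is a generic LISTA-style unrolled sparse-coding layer with learnable thresholds; it is a sketch only, not the paper's K-SVD unrolling, and the dimensions are invented.

```python
# Algorithm unrolling: turn a fixed number of ISTA iterations into network
# layers whose matrices and thresholds are learned, while each intermediate
# quantity keeps its sparse-coding meaning.
import torch
import torch.nn as nn

class UnrolledSparseCoder(nn.Module):
    def __init__(self, n_obs, n_atoms, n_layers=5):
        super().__init__()
        self.W = nn.Linear(n_obs, n_atoms, bias=False)     # input projection
        self.S = nn.Linear(n_atoms, n_atoms, bias=False)   # iteration matrix
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))  # learned thresholds
        self.n_layers = n_layers

    def forward(self, y):
        z = torch.zeros(y.shape[0], self.S.in_features, device=y.device)
        for k in range(self.n_layers):
            u = self.W(y) + self.S(z)
            # one unrolled ISTA step: linear update followed by soft-thresholding
            z = torch.sign(u) * torch.clamp(u.abs() - self.theta[k], min=0)
        return z                                            # sparse code over dictionary atoms

y = torch.randn(4, 50)            # noisy sound-speed-profile measurements (stand-in)
coder = UnrolledSparseCoder(n_obs=50, n_atoms=32)
print(coder(y).abs().gt(0).float().mean())   # fraction of active atoms
```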
29

Schmid, Ute, and Bettina Finzel. "Mutual Explanations for Cooperative Decision Making in Medicine". KI - Künstliche Intelligenz 34, no. 2 (January 10, 2020): 227–33. http://dx.doi.org/10.1007/s13218-020-00633-2.

Abstract
Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. Focus of the project is to combine deep learning black box approaches with interpretable machine learning for classification of different types of medical images to combine the predictive accuracy of deep learning and the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph to allow for interactive learning. Medical experts can ask for verbal explanations. They can correct classification decisions and in addition can also correct the explanations. Thereby, expert knowledge can be taken into account in form of constraints for model adaption.
30

Sieusahai, Alexander, and Matthew Guzdial. "Explaining Deep Reinforcement Learning Agents in the Atari Domain through a Surrogate Model". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 17, no. 1 (October 4, 2021): 82–90. http://dx.doi.org/10.1609/aiide.v17i1.18894.

Abstract
One major barrier to applications of deep Reinforcement Learning (RL) both inside and outside of games is the lack of explainability. In this paper, we describe a lightweight and effective method to derive explanations for deep RL agents, which we evaluate in the Atari domain. Our method relies on a transformation of the pixel-based input of the RL agent to a symbolic, interpretable input representation. We then train a surrogate model, which is itself interpretable, to replicate the behavior of the target, deep RL agent. Our experiments demonstrate that we can learn an effective surrogate that accurately approximates the underlying decision making of a target agent on a suite of Atari games.
31

Deshpande, R. S., and P. V. Ambatkar. "Interpretable Deep Learning Models: Enhancing Transparency and Trustworthiness in Explainable AI". Proceeding International Conference on Science and Engineering 11, no. 1 (February 18, 2023): 1352–63. http://dx.doi.org/10.52783/cienceng.v11i1.286.

Abstract
Explainable AI (XAI) aims to address the opacity of deep learning models, which can limit their adoption in critical decision-making applications. This paper presents a novel framework that integrates interpretable components and visualization techniques to enhance the transparency and trustworthiness of deep learning models. We propose a hybrid explanation method combining saliency maps, feature attribution, and local interpretable model-agnostic explanations (LIME) to provide comprehensive insights into the model's decision-making process. Our experiments with convolutional neural networks (CNNs) and transformers demonstrate that our approach improves interpretability without compromising performance. User studies with domain experts indicate that our visualization dashboard facilitates better understanding and trust in AI systems. This research contributes to developing more transparent and trustworthy deep learning models, paving the way for broader adoption in sensitive applications where human users need to understand and trust AI decisions.
32

Li, Wentian, Xidong Feng, Haotian An, Xiang Yao Ng, and Yu-Jin Zhang. "MRI Reconstruction with Interpretable Pixel-Wise Operations Using Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 792–99. http://dx.doi.org/10.1609/aaai.v34i01.5423.

Abstract
Compressed sensing magnetic resonance imaging (CS-MRI) is a technique aimed at accelerating the data acquisition of MRI. While down-sampling in k-space proportionally reduces the data acquisition time, it results in images corrupted by aliasing artifacts and blur. To reconstruct images from the down-sampled k-space, recent deep-learning based methods have shown better performance compared with classical optimization-based CS-MRI methods. However, they usually use deep neural networks as a black-box, which directly maps the corrupted images to the target images from fully-sampled k-space data. This lack of transparency may impede practical usage of such methods. In this work, we propose a deep reinforcement learning based method to reconstruct the corrupted images with meaningful pixel-wise operations (e.g. edge enhancing filters), so that the reconstruction process is transparent to users. Specifically, MRI reconstruction is formulated as Markov Decision Process with discrete actions and continuous action parameters. We conduct experiments on MICCAI dataset of brain tissues and fastMRI dataset of knee images. Our proposed method performs favorably against previous approaches. Our trained model learns to select pixel-wise operations that correspond to the anatomical structures in the MR images. This makes the reconstruction process more interpretable, which would be helpful for further medical analysis.
33

Verma, Abhinav. "Verifiable and Interpretable Reinforcement Learning through Program Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9902–3. http://dx.doi.org/10.1609/aaai.v33i01.33019902.

Abstract
We study the problem of generating interpretable and verifiable policies for Reinforcement Learning (RL). Unlike the popular Deep Reinforcement Learning (DRL) paradigm, in which the policy is represented by a neural network, the aim of this work is to find policies that can be represented in highlevel programming languages. Such programmatic policies have several benefits, including being more easily interpreted than neural networks, and being amenable to verification by scalable symbolic methods. The generation methods for programmatic policies also provide a mechanism for systematically using domain knowledge for guiding the policy search. The interpretability and verifiability of these policies provides the opportunity to deploy RL based solutions in safety critical environments. This thesis draws on, and extends, work from both the machine learning and formal methods communities.
34

Lyu, Daoming, Fangkai Yang, Bo Liu, and Steven Gustafson. "SDRL: Interpretable and Data-Efficient Deep Reinforcement Learning Leveraging Symbolic Planning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2970–77. http://dx.doi.org/10.1609/aaai.v33i01.33012970.

Abstract
Deep reinforcement learning (DRL) has gained great success by learning directly from high-dimensional sensory inputs, yet is notorious for the lack of interpretability. Interpretability of the subtasks is critical in hierarchical decision-making as it increases the transparency of black-box-style DRL approach and helps the RL practitioners to understand the high-level behavior of the system better. In this paper, we introduce symbolic planning into DRL and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can handle both high-dimensional sensory inputs and symbolic planning. The task-level interpretability is enabled by relating symbolic actions to options.This framework features a planner – controller – meta-controller architecture, which takes charge of subtask scheduling, data-driven subtask learning, and subtask evaluation, respectively. The three components cross-fertilize each other and eventually converge to an optimal symbolic plan along with the learned subtasks, bringing together the advantages of long-term planning capability with symbolic knowledge and end-to-end reinforcement learning directly from a high-dimensional sensory input. Experimental results validate the interpretability of subtasks, along with improved data efficiency compared with state-of-the-art approaches.
35

Zhang, Ting-He, Md Musaddaqul Hasib, Yu-Chiao Chiu, Zhi-Feng Han, Yu-Fang Jin, Mario Flores, Yidong Chen, and Yufei Huang. "Transformer for Gene Expression Modeling (T-GEM): An Interpretable Deep Learning Model for Gene Expression-Based Phenotype Predictions". Cancers 14, no. 19 (September 29, 2022): 4763. http://dx.doi.org/10.3390/cancers14194763.

Abstract
Deep learning has been applied in precision oncology to address a variety of gene expression-based phenotype predictions. However, gene expression data’s unique characteristics challenge the computer vision-inspired design of popular Deep Learning (DL) models such as Convolutional Neural Network (CNN) and ask for the need to develop interpretable DL models tailored for transcriptomics study. To address the current challenges in developing an interpretable DL model for modeling gene expression data, we propose a novel interpretable deep learning architecture called T-GEM, or Transformer for Gene Expression Modeling. We provided the detailed T-GEM model for modeling gene–gene interactions and demonstrated its utility for gene expression-based predictions of cancer-related phenotypes, including cancer type prediction and immune cell type classification. We carefully analyzed the learning mechanism of T-GEM and showed that the first layer has broader attention while higher layers focus more on phenotype-related genes. We also showed that T-GEM’s self-attention could capture important biological functions associated with the predicted phenotypes. We further devised a method to extract the regulatory network that T-GEM learns by exploiting the attributions of self-attention weights for classifications and showed that the network hub genes were likely markers for the predicted phenotypes.
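A minimal sketch of the single-head self-attention computation whose weights such models expose for interpretation; this is generic attention code, not the T-GEM architecture, and the gene count and dimensions are invented.

```python
# Scaled dot-product self-attention over per-gene embeddings. The attention
# matrix can be inspected as learned gene-gene interaction scores.
import torch
import torch.nn.functional as F

def self_attention(x, wq, wk, wv):
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = F.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v, attn                      # outputs and inspectable weights

n_genes, d = 100, 32
x = torch.randn(1, n_genes, d)                 # per-gene embeddings for one sample
wq, wk, wv = (torch.randn(d, d) * 0.1 for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
print(attn[0].topk(5, dim=-1).indices.shape)   # top attended genes per gene
```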
36

Michau, Gabriel, Chi-Ching Hsu, and Olga Fink. "Interpretable Detection of Partial Discharge in Power Lines with Deep Learning". Sensors 21, no. 6 (March 19, 2021): 2154. http://dx.doi.org/10.3390/s21062154.

Abstract
Partial discharge (PD) is a common indication of faults in power systems, such as generators and cables. These PDs can eventually result in costly repairs and substantial power outages. PD detection traditionally relies on hand-crafted features and domain expertise to identify very specific pulses in the electrical current, and the performance declines in the presence of noise or of superposed pulses. In this paper, we propose a novel end-to-end framework based on convolutional neural networks. The framework has two contributions: First, it does not require any feature extraction and enables robust PD detection. Second, we devise the pulse activation map. It provides interpretability of the results for the domain experts with the identification of the pulses that led to the detection of the PDs. The performance is evaluated on a public dataset for the detection of damaged power lines. An ablation study demonstrates the benefits of each part of the proposed framework.
37

Monga, Vishal, Yuelong Li, and Yonina C. Eldar. "Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing". IEEE Signal Processing Magazine 38, no. 2 (March 2021): 18–44. http://dx.doi.org/10.1109/msp.2020.3016905.
38

Isleyen, Ergin, Sebnem Duzgun, and R. McKell Carter. "Interpretable deep learning for roof fall hazard detection in underground mines". Journal of Rock Mechanics and Geotechnical Engineering 13, no. 6 (December 2021): 1246–55. http://dx.doi.org/10.1016/j.jrmge.2021.09.005.
39

Vinuesa, Ricardo, and Beril Sirmacek. "Interpretable deep-learning models to help achieve the Sustainable Development Goals". Nature Machine Intelligence 3, no. 11 (November 2021): 926. http://dx.doi.org/10.1038/s42256-021-00414-y.
40

Hammelman, Jennifer, and David K. Gifford. "Discovering differential genome sequence activity with interpretable and efficient deep learning". PLOS Computational Biology 17, no. 8 (August 9, 2021): e1009282. http://dx.doi.org/10.1371/journal.pcbi.1009282.

Abstract
Discovering sequence features that differentially direct cells to alternate fates is key to understanding both cellular development and the consequences of disease related mutations. We introduce Expected Pattern Effect and Differential Expected Pattern Effect, two black-box methods that can interpret genome regulatory sequences for cell type-specific or condition specific patterns. We show that these methods identify relevant transcription factor motifs and spacings that are predictive of cell state-specific chromatin accessibility. Finally, we integrate these methods into framework that is readily accessible to non-experts and available for download as a binary or installed via PyPI or bioconda at https://cgs.csail.mit.edu/deepaccess-package/.
41

Zia, Tehseen, Nauman Bashir, Mirza Ahsan Ullah, and Shakeeb Murtaza. "SoFTNet: A concept-controlled deep learning architecture for interpretable image classification". Knowledge-Based Systems 240 (March 2022): 108066. http://dx.doi.org/10.1016/j.knosys.2021.108066.
42

Gao, Xinjian, Tingting Mu, John Yannis Goulermas, Jeyarajan Thiyagalingam, and Meng Wang. "An Interpretable Deep Architecture for Similarity Learning Built Upon Hierarchical Concepts". IEEE Transactions on Image Processing 29 (2020): 3911–26. http://dx.doi.org/10.1109/tip.2020.2965275.
43

Caicedo-Torres, William, and Jairo Gutierrez. "ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU". Journal of Biomedical Informatics 98 (October 2019): 103269. http://dx.doi.org/10.1016/j.jbi.2019.103269.
44

Atutxa, Aitziber, Arantza Díaz de Ilarraza, Koldo Gojenola, Maite Oronoz, and Olatz Perez-de-Viñaspre. "Interpretable deep learning to map diagnostic texts to ICD-10 codes". International Journal of Medical Informatics 129 (September 2019): 49–59. http://dx.doi.org/10.1016/j.ijmedinf.2019.05.015.
45

Abid, Firas Ben, Marwen Sallem, and Ahmed Braham. "Robust Interpretable Deep Learning for Intelligent Fault Diagnosis of Induction Motors". IEEE Transactions on Instrumentation and Measurement 69, no. 6 (June 2020): 3506–15. http://dx.doi.org/10.1109/tim.2019.2932162.
46

Jha, Manoj, Akshay Kumar Kawale, and Chandan Kumar Verma. "Interpretable Model for Antibiotic Resistance Prediction in Bacteria using Deep Learning". Biomedical and Pharmacology Journal 10, no. 4 (December 25, 2017): 1963–68. http://dx.doi.org/10.13005/bpj/1316.
47

Shamsuzzaman, Md. "Explainable and Interpretable Deep Learning Models". Global Journal of Engineering Sciences 5, no. 5 (June 9, 2020). http://dx.doi.org/10.33552/gjes.2020.05.000621.
48

Ahsan, Md Manjurul, Md Shahin Ali, Md Mehedi Hassan, Tareque Abu Abdullah, Kishor Datta Gupta, Ulas Bagci, Chetna Kaushal, and Naglaa F. Soliman. "Monkeypox Diagnosis with Interpretable Deep Learning". IEEE Access, 2023, 1. http://dx.doi.org/10.1109/access.2023.3300793.
49

Delaunay, Antoine, and Hannah M. Christensen. "Interpretable Deep Learning for Probabilistic MJO Prediction". Geophysical Research Letters, August 24, 2022. http://dx.doi.org/10.1029/2022gl098566.
50

Ahn, Daehwan, Dokyun Lee, and Kartik Hosanagar. "Interpretable Deep Learning Approach to Churn Management". SSRN Electronic Journal, 2020. http://dx.doi.org/10.2139/ssrn.3981160.