Academic literature on the topic "Interpretable deep learning"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Interpretable deep learning".

Next to each source in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Interpretable deep learning"

1. Gangopadhyay, Tryambak, Sin Yong Tan, Anthony LoCurto, James B. Michael, and Soumik Sarkar. "Interpretable Deep Learning for Monitoring Combustion Instability". IFAC-PapersOnLine 53, no. 2 (2020): 832–37. http://dx.doi.org/10.1016/j.ifacol.2020.12.839.

2. Zheng, Hong, Yinglong Dai, Fumin Yu, and Yuezhen Hu. "Interpretable Saliency Map for Deep Reinforcement Learning". Journal of Physics: Conference Series 1757, no. 1 (January 1, 2021): 012075. http://dx.doi.org/10.1088/1742-6596/1757/1/012075.

3. Ruffolo, Jeffrey A., Jeremias Sulam, and Jeffrey J. Gray. "Antibody structure prediction using interpretable deep learning". Patterns 3, no. 2 (February 2022): 100406. http://dx.doi.org/10.1016/j.patter.2021.100406.

4. Arik, Sercan Ö., and Tomas Pfister. "TabNet: Attentive Interpretable Tabular Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6679–87. http://dx.doi.org/10.1609/aaai.v35i8.16826.

Abstract:
We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into its global behavior. Finally, we demonstrate self-supervised learning for tabular data, significantly improving performance when unlabeled data is abundant.
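
What follows is a minimal, illustrative sketch of the core idea in the TabNet abstract above: at each decision step an attentive mask selects which features the model may reason from, and the per-step masks can be aggregated into feature attributions. It is not the authors' implementation; the weights are untrained, the mask uses softmax instead of TabNet's sparsemax, and all names and shapes are assumptions.

```python
# Toy illustration of TabNet-style sequential attention over tabular features.
# Assumptions: untrained random weights, softmax masks (TabNet uses sparsemax),
# no prior-scale term, and a single example.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_steps = 8, 3
x = rng.normal(size=n_features)                          # one tabular example

attributions = np.zeros(n_features)
hidden = x.copy()
for step in range(n_steps):
    W_att = rng.normal(size=(n_features, n_features))    # stand-in for the attentive transformer
    logits = W_att @ hidden
    mask = np.exp(logits - logits.max())
    mask /= mask.sum()                                   # softmax mask over features
    masked_input = mask * x                              # only "attended" features pass through
    hidden = np.tanh(masked_input)                       # stand-in for the feature transformer
    attributions += mask                                 # aggregate masks as rough attributions

print("per-feature attribution:", np.round(attributions / n_steps, 3))
```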

5. Bhambhoria, Rohan, Hui Liu, Samuel Dahan, and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.

Abstract:
Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision making areas, the requirement for interpretability is of crucial importance. Current models utilized by legal practitioners are more of the conventional machine learning type, wherein they are inherently interpretable, yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically, we introduce a model-agnostic interpretable intermediate layer, a technique which proves to be effective for legal documents. Furthermore, we utilize weakly supervised learning by means of a curriculum learning strategy, effectively demonstrating the improved performance of a deep learning model. This is in contrast to the conventional models, which are only able to utilize the limited number of expensive samples manually annotated by legal experts. Although the methods presented in this work tackle the task of risk of confusion for trademarks, it is straightforward to extend them to other fields of law, or more generally, to other similar high-stakes application scenarios.

6. Lin, Chih-Hsu, and Olivier Lichtarge. "Using interpretable deep learning to model cancer dependencies". Bioinformatics 37, no. 17 (May 27, 2021): 2675–81. http://dx.doi.org/10.1093/bioinformatics/btab137.

Abstract:
Motivation: Cancer dependencies provide potential drug targets. Unfortunately, dependencies differ among cancers and even individuals. To this end, visible neural networks (VNNs) are promising due to robust performance and the interpretability required for the biomedical field. Results: We design Biological visible neural network (BioVNN) using pathway knowledge to predict cancer dependencies. Despite having fewer parameters, BioVNN marginally outperforms traditional neural networks (NNs) and converges faster. BioVNN also outperforms an NN based on randomized pathways. More importantly, dependency predictions can be explained by correlating with the neuron output states of relevant pathways, which suggest dependency mechanisms. In feature importance analysis, BioVNN recapitulates known reaction partners and proposes new ones. Such robust and interpretable VNNs may facilitate the understanding of cancer dependency and the development of targeted therapies. Availability and implementation: Code and data are available at https://github.com/LichtargeLab/BioVNN. Supplementary information: Supplementary data are available at Bioinformatics online.
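
The BioVNN abstract above hinges on a "visible" architecture in which network connections follow pathway membership. The sketch below is an assumption-laden illustration of that single idea, not the BioVNN code: gene and pathway names are made up, weights are untrained, and a binary membership mask zeroes out biologically meaningless connections so each hidden unit corresponds to a named pathway whose activation can be inspected.

```python
# Toy illustration of a "visible" (pathway-masked) layer.
# Assumptions: invented gene/pathway names, untrained random weights.
import numpy as np

genes = ["TP53", "BRCA1", "KRAS", "EGFR"]             # illustrative gene names
pathways = {"DNA_repair": ["TP53", "BRCA1"],           # illustrative pathway membership
            "RAS_signaling": ["KRAS", "EGFR"]}

mask = np.array([[1.0 if g in members else 0.0 for g in genes]
                 for members in pathways.values()])    # shape: (n_pathways, n_genes)

rng = np.random.default_rng(1)
W = rng.normal(size=mask.shape) * mask                 # weights only where biology allows
x = rng.normal(size=len(genes))                        # one expression profile

pathway_activity = np.tanh(W @ x)                      # each hidden unit is a named pathway
for name, act in zip(pathways, pathway_activity):
    print(f"{name}: {act:+.3f}")
```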

7. Liao, WangMin, BeiJi Zou, RongChang Zhao, YuanQiong Chen, ZhiYou He, and MengJie Zhou. "Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis". IEEE Journal of Biomedical and Health Informatics 24, no. 5 (May 2020): 1405–12. http://dx.doi.org/10.1109/jbhi.2019.2949075.

8. Matsubara, Takashi. "Bayesian deep learning: A model-based interpretable approach". Nonlinear Theory and Its Applications, IEICE 11, no. 1 (2020): 16–35. http://dx.doi.org/10.1587/nolta.11.16.

9. Liu, Yi, Kenneth Barr, and John Reinitz. "Fully interpretable deep learning model of transcriptional control". Bioinformatics 36, Supplement_1 (July 1, 2020): i499–i507. http://dx.doi.org/10.1093/bioinformatics/btaa506.

Abstract:
Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent works in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a 'black box' approach in which the internal structure of the model used is set purely by machine learning considerations, with little consideration of representing the internal structure of the biological system by the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modeling of transcriptional control, in which mRNA production is controlled by the binding of specific transcription factors to DNA, in part because such models are formulated in terms of specific chemical equations that appear different in form from those used in neural networks. Results: In this paper, we give an example of a DNN which can model the detailed control of transcription in a precise and predictive manner. Its internal structure is fully interpretable and is faithful to the underlying chemistry of transcription factor binding to DNA. We derive our DNN from a systems biology model that was not previously recognized as having a DNN structure. Although we apply our DNN to data from the early embryo of the fruit fly Drosophila, this system serves as a test bed for analysis of much larger datasets obtained by systems biology studies on a genomic scale. Availability and implementation: The implementation and data for the models used in this paper are in a zip file in the supplementary material. Supplementary information: Supplementary data are available at Bioinformatics online.

10. Brinkrolf, Johannes, and Barbara Hammer. "Interpretable machine learning with reject option". at - Automatisierungstechnik 66, no. 4 (April 25, 2018): 283–90. http://dx.doi.org/10.1515/auto-2017-0123.

Abstract:
Classification by means of machine learning models constitutes one relevant technology in process automation and predictive maintenance. However, common techniques such as deep networks or random forests suffer from their black box characteristics and possible adversarial examples. In this contribution, we give an overview of a popular alternative technology from machine learning, namely modern variants of learning vector quantization, which, due to their combined discriminative and generative nature, incorporate interpretability and the possibility of explicit reject options for irregular samples. We give an explicit bound on the minimum changes required for a change of the classification in the case of LVQ networks with reject option, and we demonstrate the efficiency of reject options in two examples.
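
The abstract above combines prototype-based (LVQ) classification with an explicit reject option. The snippet below is a generic sketch of that combination, not the paper's specific certainty measure or bound: classify by the nearest prototype and abstain when the relative distance margin between the two closest prototypes is too small. Prototypes and the threshold are arbitrary assumptions.

```python
# Toy LVQ-style classifier with a reject option based on a relative distance margin.
import numpy as np

prototypes = np.array([[0.0, 0.0], [3.0, 3.0]])    # one hand-placed prototype per class
labels = np.array([0, 1])

def classify_with_reject(x, threshold=0.2):
    d = np.linalg.norm(prototypes - x, axis=1)
    best, second = np.sort(d)[:2]
    margin = (second - best) / (second + best)      # relative margin in [0, 1]
    if margin < threshold:
        return None                                 # reject: sample too close to the boundary
    return int(labels[np.argmin(d)])

print(classify_with_reject(np.array([0.2, 0.1])))   # confident -> 0
print(classify_with_reject(np.array([1.5, 1.5])))   # ambiguous -> None (rejected)
```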

Theses on the topic "Interpretable deep learning"

1. Ferrone, Lorenzo. "On interpretable information in deep learning: encoding and decoding of distributed structures". Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2016. http://hdl.handle.net/2108/202245.

2. Xie, Ning. "Towards Interpretable and Reliable Deep Neural Networks for Visual Intelligence". Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1596208422672732.

3. Emschwiller, Matt V. "Understanding neural network sample complexity and interpretable convergence-guaranteed deep learning with polynomial regression". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127290.

Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May 2020.
Abstract:
We first study the sample complexity of one-layer neural networks, namely the number of examples that are needed in the training set for such models to be able to learn meaningful information out-of-sample. We empirically derive quantitative relationships between the sample complexity and the parameters of the network, such as its input dimension and its width. Then, we introduce polynomial regression as a proxy for neural networks through a polynomial approximation of their activation function. This method operates in the lifted space of tensor products of input variables, and is trained by simply optimizing a standard least squares objective in this space. We study the scalability of polynomial regression, and design a bagging-type algorithm to successfully train it. The method achieves competitive accuracy on simple image datasets while being simpler. We also demonstrate that it is more robust and more interpretable than existing approaches. It also offers stronger convergence guarantees during training. Finally, we empirically show that the widely-used Stochastic Gradient Descent algorithm makes the weights of the trained neural networks converge to the optimal polynomial regression weights.
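
The thesis abstract above proposes polynomial regression as an interpretable, convergence-guaranteed proxy for a neural network. The sketch below illustrates only the basic mechanism under stated assumptions (synthetic data, an arbitrary stand-in target, and a naive feature lifting rather than the thesis's bagging-type training): lift the inputs into a space of polynomial terms and solve an ordinary least-squares problem there.

```python
# Toy illustration of polynomial regression as a proxy model.
# Assumptions: synthetic data, degree-3 lifting, a tanh "network" as the target.
import numpy as np
from itertools import combinations_with_replacement

def lift(X, degree=2):
    """Map rows of X to [1, x_i, x_i*x_j, ...] up to the given degree."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for combo in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(combo)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))         # stand-in for a one-layer network's output

Phi = lift(X, degree=3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # convex least squares: convergence guaranteed
print("train MSE:", float(np.mean((Phi @ w - y) ** 2)))
```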

4. Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.

Abstract:
The goal of this thesis is to provide algorithms and models for classification, gesture recognition and anomaly detection, with a partial focus on human activity. In applications where humans are involved, it is of paramount importance to provide robust and understandable algorithms and models. One way to accomplish this requirement is to use relatively simple and robust approaches, especially when devices are resource-constrained. The second approach, when a large amount of data is present, is to adopt complex algorithms and models and make them robust and interpretable from a human-like point of view. This motivates our thesis, which is divided in two parts. The first part of this thesis is devoted to the development of parsimonious algorithms for action/gesture recognition in human-centric applications such as sports and anomaly detection for the artificial pancreas. The data sources employed for the validation of our approaches consist of a collection of time-series data coming from sensors, such as accelerometers or glycemic sensors. The main challenge in this context is to discard (i.e., be invariant to) the many nuisance factors that make the recognition task difficult, especially when many different users are involved. Moreover, in some cases, data cannot be easily labelled, making supervised approaches not viable. Thus, we present the mathematical tools and the background with a focus on the recognition problems, and then we derive novel methods for: (i) gesture/action recognition using sparse representations for a sport application; (ii) gesture/action recognition using symbolic representations and its extension to the multivariate case; (iii) model-free and unsupervised anomaly detection for detecting faults in the artificial pancreas. These algorithms are well suited to be deployed in resource-constrained devices, such as wearables. In the second part, we investigate the feasibility of deep learning frameworks where human interpretation is crucial. Standard deep learning models are not robust and, unfortunately, literature approaches that ensure robustness are typically detrimental to accuracy. However, real-world applications often require a minimum amount of accuracy to be employed. In view of this, after reviewing some results present in the recent literature, we formulate a new algorithm that is able to semantically trade off between accuracy and robustness, where a cost-sensitive classification problem is provided and a given threshold of accuracy is required. In addition, we provide a link between robustness to input perturbations and interpretability, guided by a physical minimum-energy principle: in fact, leveraging optimal transport tools, we show that robust training is connected to the optimal transport problem. Thanks to these theoretical insights, we develop a new algorithm that provides robust, interpretable and more transferable representations.

5. Repetto, Marco. "Black-box supervised learning and empirical assessment: new perspectives in credit risk modeling". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/402366.

Abstract:
Recent highly performant Machine Learning algorithms are compelling but opaque, so it is often hard to understand how they arrive at their predictions giving rise to interpretability issues. Such issues are particularly relevant in supervised learning, where such black-box models are not easily understandable by the stakeholders involved. A growing body of work focuses on making Machine Learning, particularly Deep Learning models, more interpretable. The currently proposed approaches rely on post-hoc interpretation, using methods such as saliency mapping and partial dependencies. Despite the advances that have been made, interpretability is still an active area of research, and there is no silver bullet solution. Moreover, in high-stakes decision-making, post-hoc interpretability may be sub-optimal. An example is the field of enterprise credit risk modeling. In such fields, classification models discriminate between good and bad borrowers. As a result, lenders can use these models to deny loan requests. Loan denial can be especially harmful when the borrower cannot appeal or have the decision explained and grounded by fundamentals. Therefore in such cases, it is crucial to understand why these models produce a given output and steer the learning process toward predictions based on fundamentals. This dissertation focuses on the concept of Interpretable Machine Learning, with particular attention to the context of credit risk modeling. In particular, the dissertation revolves around three topics: model agnostic interpretability, post-hoc interpretation in credit risk, and interpretability-driven learning. More specifically, the first chapter is a guided introduction to the model-agnostic techniques shaping today’s landscape of Machine Learning and their implementations. The second chapter focuses on an empirical analysis of the credit risk of Italian Small and Medium Enterprises. It proposes an analytical pipeline in which post-hoc interpretability plays a crucial role in finding the relevant underpinnings that drive a firm into bankruptcy. The third and last paper proposes a novel multicriteria knowledge injection methodology. The methodology is based on double backpropagation and can improve model performance, especially in the case of scarce data. The essential advantage of such methodology is that it allows the decision maker to impose his previous knowledge at the beginning of the learning process, making predictions that align with the fundamentals.

6. Sheikhalishahi, Seyedmostafa. "Machine learning applications in Intensive Care Unit". Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/339274.

Abstract:
The rapid digitalization of the healthcare domain in recent years has highlighted the need for advanced predictive methods, particularly those based upon deep learning. Deep learning methods capable of dealing with time-series data have recently emerged in various fields such as natural language processing, machine translation, and the Intensive Care Unit (ICU). Recent applications of deep learning in the ICU have increasingly received attention and have shown promising results for different clinical tasks; however, there is still a need for benchmark models, since only a handful of public ICU datasets are available. In this thesis, a novel benchmark of four clinical tasks on a multi-center, publicly available dataset is presented; we employed deep learning models for these clinical prediction tasks. We believe this benchmark can facilitate and accelerate research in the ICU by allowing other researchers to build on top of it. Moreover, we investigated the effectiveness of the proposed method to predict the risk of delirium over varying observation and prediction windows, and a variable ranking is provided to ease the implementation of a screening tool for helping caregivers at the bedside. Ultimately, an attention-based interpretable neural network is proposed to predict the outcome and rank the most influential variables in the model's predictions. Our experimental findings show the effectiveness of the proposed approaches in improving the application of deep learning models in daily ICU practice.

7. Mao, Wen-Jui (毛文瑞). "Towards Interpretable Deep Extreme Multi-label Learning". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/t7hq7r.

Master's thesis, National Sun Yat-sen University, Department of Information Management, academic year 107.
Abstract:
Extreme multi-label learning seeks the most relevant subset of labels from an extremely large label space. The problems of scalability and sparsity make extreme multi-label learning hard. In this paper, we propose a framework to deal with these problems. Our approach makes it possible to handle enormous datasets efficiently. Moreover, most algorithms nowadays are criticized for the "black box" problem: the model cannot explain how it decides to make predictions. Through a special non-negative constraint, our proposed approach is able to provide interpretable explanations. Experiments show that our method achieves both high prediction accuracy and understandable explanations.

8. Kuo, Bo-Wen (郭博文). "Interpretable representation learning based on Deep Rule Forests". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/7wqrk4.

Master's thesis, National Sun Yat-sen University, Department of Information Management, academic year 106.
Abstract:
The spirit of tree-based methods is to learn rules. A large number of machine learning techniques are tree-based. More complicated tree learners may yield more predictive models, but may sacrifice model interpretability. On the other hand, the spirit of representation learning is to extract abstract concepts from manifestations of the data. For instance, Deep Neural Networks (DNNs) are the most popular method in representation learning. However, unaccountable feature representations are a shortcoming of DNNs. In this paper, we propose an approach, Deep Rule Forest (DRF), to learn region representations based on random forests in deep layer-wise structures. The learned interpretable rule-based region representations can be combined with other machine learning algorithms. We trained CART models on DRF region representations and found that the prediction accuracies are sometimes better than those of ensemble learning methods.
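
The sketch below is one possible reading of the Deep Rule Forest idea described above, not the thesis code: each layer is a random forest whose leaf ("region") assignments become one-hot features for the next layer, and a single CART tree is finally trained on the learned region representation so the end model remains rule-based. Dataset, depths, and layer count are arbitrary assumptions, and scikit-learn is assumed to be available.

```python
# Toy layered rule-forest pipeline: forest leaves as learned region features.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rep_tr, rep_te = X_tr, X_te
for layer in range(2):                                      # two rule-forest layers
    rf = RandomForestClassifier(n_estimators=20, max_depth=3, random_state=layer)
    rf.fit(rep_tr, y_tr)
    enc = OneHotEncoder(handle_unknown="ignore")
    rep_tr = enc.fit_transform(rf.apply(rep_tr))            # leaf indices -> region representation
    rep_te = enc.transform(rf.apply(rep_te))

cart = DecisionTreeClassifier(max_depth=4, random_state=0)  # readable final rule learner
cart.fit(rep_tr, y_tr)
print("test accuracy:", cart.score(rep_te, y_te))
```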

9. Würfel, Max. "Online advertising revenue forecasting: an interpretable deep learning approach". Master's thesis, 2021. http://hdl.handle.net/10362/122676.

Abstract:
This paper investigates whether publishers' Google AdSense online advertising revenues can be predicted from peekd's proprietary database using deep learning methodologies. Peekd is a Berlin (Germany) based data science company which primarily provides e-retailers with sales and shopper intelligence. I find that, using a single deep learning model, AdSense revenues can be predicted across publishers. Additionally, using unsupervised clustering, publishers were grouped and related time series were fed as covariates when making predictions. No performance improvement was found in relation to this technique. Finally, I find that in the short term, publishers' AdSense revenues embed temporal patterns similar to those of web traffic.

10. Huang, Sheng-Tai (黃升泰). "Interpretable Logic Representation Learning based on Deep Rule Forest". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hybs2q.

Master's thesis, National Sun Yat-sen University, Department of Information Management, academic year 107.
Abstract:
Compared to traditional machine learning algorithms, most contemporary algorithms offer a prominent improvement in accuracy, but this also complicates the model architecture, which prevents humans from understanding how the predictions are generated. This makes latent discrimination in the data difficult to discover, and thus legislation now requires that models be interpretable. However, current interpretable models (e.g., decision trees, linear models) are too simple to produce sufficiently accurate predictions when dealing with large and complex datasets. Therefore, we extract rules from the decision tree components of a random forest, which not only makes the random forest, usually regarded as a black-box model, interpretable, but also exploits ensemble learning to boost accuracy. Moreover, inspired by the concept of representation learning in deep learning, we add a multilayer structure to enable the random forest to learn more complicated representations. In this paper, we propose Deep Rule Forest, which combines interpretability with a deep model architecture and outperforms several complex models, such as random forest, on accuracy. Nevertheless, this structure makes the rules too complicated for humans to understand and hence loses interpretability. Finally, via logic optimization, we retain interpretability by simplifying the rules and making them readable and understandable to humans.

Books on the topic "Interpretable deep learning"

1. Thakoor, Kaveri Anil. Robust, Interpretable, and Portable Deep Learning Systems for Detection of Ophthalmic Diseases. [New York, N.Y.?]: [publisher not identified], 2022.


Book chapters on the topic "Interpretable deep learning"

1. Kamath, Uday, and John Liu. "Explainable Deep Learning". In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 217–60. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_6.

2. Preuer, Kristina, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, and Thomas Unterthiner. "Interpretable Deep Learning in Drug Discovery". In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 331–45. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_18.

3. Wüthrich, Mario V., and Michael Merz. "Selected Topics in Deep Learning". In Springer Actuarial, 453–535. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_11.

Abstract:
This chapter presents a selection of different topics. We discuss forecasting under model uncertainty, deep quantile regression, deep composite regression and the LocalGLMnet, which is an interpretable FN network architecture. Moreover, we provide a bootstrap example to assess prediction uncertainty, we discuss mixture density networks, and we give an outlook to studying variational inference.
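
The chapter abstract above mentions the LocalGLMnet, an interpretable network whose prediction keeps a GLM-like form. The snippet below is a minimal structural sketch under stated assumptions (untrained random weights, one example, identity link), not the book's implementation: a small network produces example-dependent regression attentions beta(x), and the prediction is beta_0 + sum_j beta_j(x) * x_j, so each beta_j(x) reads as a local regression coefficient.

```python
# Toy LocalGLMnet-style skip connection: network-produced local coefficients.
# Assumptions: untrained weights, identity link, a single feature vector.
import numpy as np

rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)                          # one feature vector

W1, b1 = rng.normal(size=(8, d)), np.zeros(8)   # stand-in for the attention network
W2, b2 = rng.normal(size=(d, 8)), np.zeros(d)
beta = W2 @ np.tanh(W1 @ x + b1) + b2           # local regression attentions beta(x)

beta0 = 0.1
prediction = beta0 + beta @ x                   # GLM-like linear predictor
print("beta(x):", np.round(beta, 3), " prediction:", round(float(prediction), 3))
```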

4. Rodrigues, Mark, Michael Mayo, and Panos Patros. "Interpretable Deep Learning for Surgical Tool Management". In Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data, 3–12. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87444-5_1.

5. Batra, Reenu, and Manish Mahajan. "Deep Learning Models: An Understandable Interpretable Approach". In Deep Learning for Security and Privacy Preservation in IoT, 169–79. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6186-0_10.

6. Lu, Yu, Deliang Wang, Qinggang Meng, and Penghe Chen. "Towards Interpretable Deep Learning Models for Knowledge Tracing". In Lecture Notes in Computer Science, 185–90. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52240-7_34.

7. Pasquini, Dario, Giuseppe Ateniese, and Massimo Bernaschi. "Interpretable Probabilistic Password Strength Meters via Deep Learning". In Computer Security – ESORICS 2020, 502–22. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58951-6_25.

8. Abdukhamidov, Eldor, Mohammed Abuhamad, Firuz Juraev, Eric Chan-Tin, and Tamer AbuHmed. "AdvEdge: Optimizing Adversarial Perturbations Against Interpretable Deep Learning". In Computational Data and Social Networks, 93–105. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-91434-9_9.

9. Shinde, Swati V., and Sagar Lahade. "Deep Learning for Tea Leaf Disease Classification". In Applied Computer Vision and Soft Computing with Interpretable AI, 293–314. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003359456-20.

10. Schütt, Kristof T., Michael Gastegger, Alexandre Tkatchenko, and Klaus-Robert Müller. "Quantum-Chemical Insights from Interpretable Atomistic Neural Networks". In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 311–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_17.


Conference proceedings on the topic "Interpretable deep learning"

1. Ouzounis, Athanasios, George Sidiropoulos, George Papakostas, Ilias Sarafis, Andreas Stamkos, and George Solakis. "Interpretable Deep Learning for Marble Tiles Sorting". In 2nd International Conference on Deep Learning Theory and Applications. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010517001010108.

2. Ouzounis, Athanasios, George Sidiropoulos, George Papakostas, Ilias Sarafis, Andreas Stamkos, and George Solakis. "Interpretable Deep Learning for Marble Tiles Sorting". In 2nd International Conference on Deep Learning Theory and Applications. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010517000002996.

3. Do, Cuong M., and Cory Wang. "Interpretable deep learning-based risk evaluation approach". In Artificial Intelligence and Machine Learning in Defense Applications II, edited by Judith Dijk. SPIE, 2020. http://dx.doi.org/10.1117/12.2583972.

4. Karatekin, Tamer, Selim Sancak, Gokhan Celik, Sevilay Topcuoglu, Guner Karatekin, Pinar Kirci, and Ali Okatan. "Interpretable Machine Learning in Healthcare through Generalized Additive Model with Pairwise Interactions (GA2M): Predicting Severe Retinopathy of Prematurity". In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00020.

5. Kang, Yihuang, I.-Ling Cheng, Wenjui Mao, Bowen Kuo, and Pei-Ju Lee. "Towards Interpretable Deep Extreme Multi-Label Learning". In 2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2019. http://dx.doi.org/10.1109/iri.2019.00024.

6. Baranyi, Máté, Marcell Nagy, and Roland Molontay. "Interpretable Deep Learning for University Dropout Prediction". In SIGITE '20: The 21st Annual Conference on Information Technology Education. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3368308.3415382.

7. White, Andrew. "Interpretable Deep Learning for Molecules and Materials". In 2022 International Symposium on Molecular Spectroscopy. Urbana, Illinois: University of Illinois at Urbana-Champaign, 2022. http://dx.doi.org/10.15278/isms.2022.wk01.

8. Yao, Liuyi, Zijun Yao, Jianying Hu, Jing Gao, and Zhaonan Sun. "Deep Staging: An Interpretable Deep Learning Framework for Disease Staging". In 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI). IEEE, 2021. http://dx.doi.org/10.1109/ichi52183.2021.00030.

9. Jang, Hyeju, Seojin Bang, Wen Xiao, Giuseppe Carenini, Raymond Ng, and Young ji Lee. "KW-ATTN: Knowledge Infused Attention for Accurate and Interpretable Text Classification". In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.deelio-1.10.

10. Liu, Xuan, Xiaoguang Wang, and Stan Matwin. "Interpretable Deep Convolutional Neural Networks via Meta-learning". In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489172.


Reports on the topic "Interpretable deep learning"

1. Jiang, Peishi, Xingyuan Chen, Maruti Mudunuru, Praveen Kumar, Pin Shuai, Kyongho Son, and Alexander Sun. Towards Trustworthy and Interpretable Deep Learning-assisted Ecohydrological Models. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769787.

2. Begeman, Carolyn, Marian Anghel, and Ishanu Chattopadhyay. Interpretable Deep Learning for the Earth System with Fractal Nets. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769730.

