A selection of scholarly literature on the topic "Interpretable deep learning"
Format your source according to APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Interpretable deep learning".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the work's metadata.
Journal articles on the topic "Interpretable deep learning"
Gangopadhyay, Tryambak, Sin Yong Tan, Anthony LoCurto, James B. Michael, and Soumik Sarkar. "Interpretable Deep Learning for Monitoring Combustion Instability." IFAC-PapersOnLine 53, no. 2 (2020): 832–37. http://dx.doi.org/10.1016/j.ifacol.2020.12.839.
Zheng, Hong, Yinglong Dai, Fumin Yu, and Yuezhen Hu. "Interpretable Saliency Map for Deep Reinforcement Learning." Journal of Physics: Conference Series 1757, no. 1 (January 1, 2021): 012075. http://dx.doi.org/10.1088/1742-6596/1757/1/012075.
Ruffolo, Jeffrey A., Jeremias Sulam, and Jeffrey J. Gray. "Antibody structure prediction using interpretable deep learning." Patterns 3, no. 2 (February 2022): 100406. http://dx.doi.org/10.1016/j.patter.2021.100406.
Arik, Sercan Ö., and Tomas Pfister. "TabNet: Attentive Interpretable Tabular Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6679–87. http://dx.doi.org/10.1609/aaai.v35i8.16826.
Bhambhoria, Rohan, Hui Liu, Samuel Dahan, and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.
Lin, Chih-Hsu, and Olivier Lichtarge. "Using interpretable deep learning to model cancer dependencies." Bioinformatics 37, no. 17 (May 27, 2021): 2675–81. http://dx.doi.org/10.1093/bioinformatics/btab137.
Liao, WangMin, BeiJi Zou, RongChang Zhao, YuanQiong Chen, ZhiYou He, and MengJie Zhou. "Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis." IEEE Journal of Biomedical and Health Informatics 24, no. 5 (May 2020): 1405–12. http://dx.doi.org/10.1109/jbhi.2019.2949075.
Matsubara, Takashi. "Bayesian deep learning: A model-based interpretable approach." Nonlinear Theory and Its Applications, IEICE 11, no. 1 (2020): 16–35. http://dx.doi.org/10.1587/nolta.11.16.
Liu, Yi, Kenneth Barr, and John Reinitz. "Fully interpretable deep learning model of transcriptional control." Bioinformatics 36, Supplement_1 (July 1, 2020): i499–i507. http://dx.doi.org/10.1093/bioinformatics/btaa506.
Brinkrolf, Johannes, and Barbara Hammer. "Interpretable machine learning with reject option." at - Automatisierungstechnik 66, no. 4 (April 25, 2018): 283–90. http://dx.doi.org/10.1515/auto-2017-0123.
Повний текст джерелаДисертації з теми "Interpretable deep learning"
FERRONE, LORENZO. "On interpretable information in deep learning: encoding and decoding of distributed structures." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2016. http://hdl.handle.net/2108/202245.
Xie, Ning. "Towards Interpretable and Reliable Deep Neural Networks for Visual Intelligence." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1596208422672732.
Emschwiller, Matt V. "Understanding neural network sample complexity and interpretable convergence-guaranteed deep learning with polynomial regression." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127290.
Повний текст джерелаCataloged from PDF version of thesis.
Includes bibliographical references (pages 83-89).
We first study the sample complexity of one-layer neural networks, namely the number of examples that are needed in the training set for such models to be able to learn meaningful information out-of-sample. We empirically derive quantitative relationships between the sample complexity and the parameters of the network, such as its input dimension and its width. Then, we introduce polynomial regression as a proxy for neural networks through a polynomial approximation of their activation function. This method operates in the lifted space of tensor products of input variables and is trained by simply optimizing a standard least-squares objective in this space. We study the scalability of polynomial regression and design a bagging-type algorithm to train it successfully. The method achieves competitive accuracy on simple image datasets while being simpler, and we demonstrate that it is more robust and more interpretable than existing approaches, with stronger convergence guarantees during training. Finally, we empirically show that the widely used Stochastic Gradient Descent algorithm makes the weights of the trained neural networks converge to the optimal polynomial regression weights. (A brief code sketch of the lifted-space idea follows this entry.)
by Matt V. Emschwiller.
S.M.
S.M. Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
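To make the lifted-space construction from the abstract above concrete, here is a minimal Python sketch, assuming scikit-learn and its bundled digits dataset; the degree-2 lifting, the ridge regularization, and the dataset choice are illustrative assumptions, not the thesis's actual configuration.

```python
# Sketch: approximating a one-layer network's task with polynomial regression
# in the lifted space of input tensor products (degree 2 here).
from sklearn.datasets import load_digits
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PolynomialFeatures builds the tensor-product (lifted) representation;
# the linear model on top is then fit with a least-squares-type objective.
model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2),
    RidgeClassifier(alpha=1.0),  # ridge keeps the high-dimensional fit well-posed
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Because the model is linear in the lifted features, each monomial's coefficient can be read off directly, which is the source of the interpretability claim.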
Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.
Повний текст джерелаREPETTO, MARCO. "Black-box supervised learning and empirical assessment: new perspectives in credit risk modeling." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/402366.
Recent highly performant machine learning algorithms are compelling but opaque, so it is often hard to understand how they arrive at their predictions, giving rise to interpretability issues. Such issues are particularly relevant in supervised learning, where such black-box models are not easily understandable by the stakeholders involved. A growing body of work focuses on making machine learning models, particularly deep learning models, more interpretable. The approaches proposed so far rely on post-hoc interpretation, using methods such as saliency mapping and partial dependence. Despite the advances that have been made, interpretability is still an active area of research, and there is no silver-bullet solution. Moreover, in high-stakes decision-making, post-hoc interpretability may be sub-optimal. An example is the field of enterprise credit risk modeling, where classification models discriminate between good and bad borrowers, and lenders can use these models to deny loan requests. Loan denial can be especially harmful when the borrower cannot appeal or have the decision explained and grounded in fundamentals. In such cases it is therefore crucial to understand why these models produce a given output and to steer the learning process toward predictions based on fundamentals. This dissertation focuses on Interpretable Machine Learning, with particular attention to credit risk modeling, and revolves around three topics: model-agnostic interpretability, post-hoc interpretation in credit risk, and interpretability-driven learning. The first chapter is a guided introduction to the model-agnostic techniques shaping today's machine learning landscape and their implementations. The second chapter is an empirical analysis of the credit risk of Italian small and medium enterprises; it proposes an analytical pipeline in which post-hoc interpretability plays a crucial role in finding the relevant underpinnings that drive a firm into bankruptcy. The third and last paper proposes a novel multicriteria knowledge-injection methodology based on double backpropagation, which can improve model performance, especially when data are scarce. The essential advantage of this methodology is that it allows the decision maker to impose prior knowledge at the beginning of the learning process, yielding predictions that align with the fundamentals.
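The double-backpropagation mechanism mentioned in the abstract can be pictured with a short PyTorch sketch; the toy data, the tiny network, the sign-encoded prior, and the penalty weight are all hypothetical stand-ins for the dissertation's actual multicriteria formulation.

```python
# Sketch (assumptions flagged above): inject prior knowledge of feature-effect
# signs by penalizing input gradients that contradict it.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 5)                     # toy borrower features
y = torch.randint(0, 2, (256, 1)).float()   # toy default labels

# Hypothetical prior: +1 = feature should raise risk, -1 = lower it, 0 = no prior.
prior_signs = torch.tensor([1.0, -1.0, 0.0, 0.0, 1.0])

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(200):
    X.requires_grad_(True)
    logits = model(X)
    task_loss = bce(logits, y)
    # First pass: gradients of predictions w.r.t. inputs. create_graph=True makes
    # the penalty itself differentiable, so backprop runs through it -- the
    # "double" backpropagation.
    grads, = torch.autograd.grad(logits.sum(), X, create_graph=True)
    violation = torch.relu(-grads * prior_signs)  # zero where the prior is met or absent
    loss = task_loss + 1.0 * violation.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    X = X.detach()
```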
Sheikhalishahi, Seyedmostafa. "Machine learning applications in Intensive Care Unit." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/339274.
jui, mao wen, and 毛文瑞. "Towards Interpretable Deep Extreme Multi-label Learning." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/t7hq7r.
National Sun Yat-sen University
Department of Information Management
107 (academic year, Republic of China calendar)
Extreme multi-label learning seeks the most relevant subset of labels from an extremely large label space. Scalability and sparsity make extreme multi-label problems hard to learn. In this paper, we propose a framework to deal with these problems; our approach handles enormous datasets efficiently. Moreover, many algorithms today are criticized for the "black-box" problem, in which a model cannot show how it decides on its predictions. Through a special non-negative constraint, our proposed approach is able to provide interpretable explanations. Experiments show that our method achieves both high prediction accuracy and understandable explanations.
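As one way to picture how a non-negative constraint yields inspectable predictions, here is a small PyTorch sketch; the single linear scoring layer, the projected-gradient clamp, and the toy data are my assumptions, standing in for the thesis's deeper architecture.

```python
# Sketch: non-negative weights make each label score an additive sum of
# non-negative feature contributions, which can be ranked as an explanation.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features, n_labels = 100, 1000               # toy sizes; "extreme" in practice
X = torch.rand(64, n_features)
Y = (torch.rand(64, n_labels) < 0.01).float()  # sparse multi-label targets

W = nn.Parameter(torch.rand(n_features, n_labels) * 0.01)
opt = torch.optim.Adam([W], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):
    loss = bce(X @ W, Y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        W.clamp_(min=0.0)  # projected gradient step keeps the weights non-negative

# Every feature contributes a non-negative amount x_i * W[i, j] to label j's score.
contributions = X[0].unsqueeze(1) * W          # shape: (n_features, n_labels)
print("top features for label 0:", contributions[:, 0].topk(5).indices.tolist())
```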
Kuo, Bo-Wen, and 郭博文. "Interpretable representation learning based on Deep Rule Forests." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/7wqrk4.
National Sun Yat-sen University
Department of Information Management
106 (academic year, Republic of China calendar)
The spirit of tree-based methods is to learn rules, and a large number of machine learning techniques are tree-based. More complicated tree learners may yield more predictive models but sacrifice interpretability. On the other hand, the spirit of representation learning is to extract abstract concepts from manifestations of the data; deep neural networks (DNNs), for instance, are the most popular method in representation learning, but unaccountable feature representations are their shortcoming. In this paper, we propose an approach, Deep Rule Forest (DRF), that learns region representations with random forests arranged in deep, layer-wise structures. The learned interpretable rule-region representations can be combined with other machine learning algorithms. We trained CART models on DRF region representations and found that their prediction accuracy is sometimes better than that of ensemble learning methods.
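A minimal scikit-learn sketch of this layer-wise idea, as I read it from the abstract: each forest's leaf regions become the input representation for the next layer, and a single CART is fit on the final representation. The dataset, depths, and two-layer stacking are illustrative assumptions rather than the thesis's exact setup.

```python
# Sketch: forest leaf indices as learned "rule region" representations,
# stacked layer-wise, with an interpretable CART trained on top.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rep_tr, rep_te = X_tr, X_te
for layer in range(2):  # two forest "layers"
    forest = RandomForestClassifier(n_estimators=20, max_depth=4, random_state=layer)
    forest.fit(rep_tr, y_tr)
    # apply() returns, per tree, the leaf (rule region) each sample falls into.
    enc = OneHotEncoder(handle_unknown="ignore")
    rep_tr = enc.fit_transform(forest.apply(rep_tr))
    rep_te = enc.transform(forest.apply(rep_te))

cart = DecisionTreeClassifier(max_depth=5, random_state=0).fit(rep_tr, y_tr)
print("CART on DRF-style representations:", cart.score(rep_te, y_te))
```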
Würfel, Max. "Online advertising revenue forecasting: an interpretable deep learning approach." Master's thesis, 2021. http://hdl.handle.net/10362/122676.
Huang, Sheng-Tai, and 黃升泰. "Interpretable Logic Representation Learning based on Deep Rule Forest." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hybs2q.
National Sun Yat-sen University
Department of Information Management
107 (academic year, Republic of China calendar)
Compared to traditional machine learning algorithms, most contemporary algorithms offer a prominent improvement in accuracy, but this also complicates the model architecture, which prevents humans from understanding how the predictions are generated. That makes latent discrimination in the data difficult for humans to discover, and legislation consequently requires models to be interpretable. However, current interpretable models (e.g., decision trees, linear models) are too simple to produce sufficiently accurate predictions on large and complex datasets. Therefore, we extract rules from the decision-tree components of a random forest, which not only makes the random forest, usually regarded as a black-box model, interpretable, but also exploits ensemble learning to boost accuracy. Moreover, inspired by the concept of representation learning in deep learning, we add a multilayer structure that enables the random forest to learn more complicated representations. In this paper, we propose Deep Rule Forest, which combines interpretability with a deep model architecture and outperforms several complex models, such as random forests, in accuracy. Nevertheless, this structure can make the rules too complicated for humans to understand, and interpretability is lost. Finally, via logic optimization, we retain interpretability by simplifying the rules and making them readable and understandable to humans.
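The concluding logic-optimization step can be pictured with Boolean minimization; in this Python sketch the literals and rules are hypothetical, and sympy's simplify_logic stands in for whatever logic optimizer the thesis actually employs.

```python
# Sketch: merging and simplifying overlapping rules extracted from trees.
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

# Hypothetical binarized rule literals, e.g. a = "age > 40", b = "income > 50k".
a, b, c = symbols("a b c")

# Union of three overlapping rules extracted from different trees.
rules = (a & b & c) | (a & b & ~c) | (a & ~b & c)
print(simplify_logic(rules, form="dnf"))  # prints: (a & b) | (a & c)
```

Two of the three conjunctions collapse, leaving a shorter, human-readable rule set that covers exactly the same cases.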
Books on the topic "Interpretable deep learning"
Thakoor, Kaveri Anil. Robust, Interpretable, and Portable Deep Learning Systems for Detection of Ophthalmic Diseases. [New York, N.Y.?]: [publisher not identified], 2022.
Book chapters on the topic "Interpretable deep learning"
Kamath, Uday, and John Liu. "Explainable Deep Learning." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 217–60. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_6.
Preuer, Kristina, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, and Thomas Unterthiner. "Interpretable Deep Learning in Drug Discovery." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 331–45. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_18.
Wüthrich, Mario V., and Michael Merz. "Selected Topics in Deep Learning." In Springer Actuarial, 453–535. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_11.
Rodrigues, Mark, Michael Mayo, and Panos Patros. "Interpretable Deep Learning for Surgical Tool Management." In Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data, 3–12. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87444-5_1.
Batra, Reenu, and Manish Mahajan. "Deep Learning Models: An Understandable Interpretable Approach." In Deep Learning for Security and Privacy Preservation in IoT, 169–79. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6186-0_10.
Lu, Yu, Deliang Wang, Qinggang Meng, and Penghe Chen. "Towards Interpretable Deep Learning Models for Knowledge Tracing." In Lecture Notes in Computer Science, 185–90. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52240-7_34.
Pasquini, Dario, Giuseppe Ateniese, and Massimo Bernaschi. "Interpretable Probabilistic Password Strength Meters via Deep Learning." In Computer Security – ESORICS 2020, 502–22. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58951-6_25.
Abdukhamidov, Eldor, Mohammed Abuhamad, Firuz Juraev, Eric Chan-Tin, and Tamer AbuHmed. "AdvEdge: Optimizing Adversarial Perturbations Against Interpretable Deep Learning." In Computational Data and Social Networks, 93–105. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-91434-9_9.
Shinde, Swati V., and Sagar Lahade. "Deep Learning for Tea Leaf Disease Classification." In Applied Computer Vision and Soft Computing with Interpretable AI, 293–314. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003359456-20.
Schütt, Kristof T., Michael Gastegger, Alexandre Tkatchenko, and Klaus-Robert Müller. "Quantum-Chemical Insights from Interpretable Atomistic Neural Networks." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 311–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_17.
Повний текст джерелаТези доповідей конференцій з теми "Interpretable deep learning"
Ouzounis, Athanasios, George Sidiropoulos, George Papakostas, Ilias Sarafis, Andreas Stamkos, and George Solakis. "Interpretable Deep Learning for Marble Tiles Sorting." In 2nd International Conference on Deep Learning Theory and Applications. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010517001010108.
Ouzounis, Athanasios, George Sidiropoulos, George Papakostas, Ilias Sarafis, Andreas Stamkos, and George Solakis. "Interpretable Deep Learning for Marble Tiles Sorting." In 2nd International Conference on Deep Learning Theory and Applications. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010517000002996.
Do, Cuong M., and Cory Wang. "Interpretable deep learning-based risk evaluation approach." In Artificial Intelligence and Machine Learning in Defense Applications II, edited by Judith Dijk. SPIE, 2020. http://dx.doi.org/10.1117/12.2583972.
Karatekin, Tamer, Selim Sancak, Gokhan Celik, Sevilay Topcuoglu, Guner Karatekin, Pinar Kirci, and Ali Okatan. "Interpretable Machine Learning in Healthcare through Generalized Additive Model with Pairwise Interactions (GA2M): Predicting Severe Retinopathy of Prematurity." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00020.
Kang, Yihuang, I.-Ling Cheng, Wenjui Mao, Bowen Kuo, and Pei-Ju Lee. "Towards Interpretable Deep Extreme Multi-Label Learning." In 2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2019. http://dx.doi.org/10.1109/iri.2019.00024.
Baranyi, Máté, Marcell Nagy, and Roland Molontay. "Interpretable Deep Learning for University Dropout Prediction." In SIGITE '20: The 21st Annual Conference on Information Technology Education. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3368308.3415382.
White, Andrew. "INTERPRETABLE DEEP LEARNING FOR MOLECULES AND MATERIALS." In 2022 International Symposium on Molecular Spectroscopy. Urbana, Illinois: University of Illinois at Urbana-Champaign, 2022. http://dx.doi.org/10.15278/isms.2022.wk01.
Yao, Liuyi, Zijun Yao, Jianying Hu, Jing Gao, and Zhaonan Sun. "Deep Staging: An Interpretable Deep Learning Framework for Disease Staging." In 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI). IEEE, 2021. http://dx.doi.org/10.1109/ichi52183.2021.00030.
Jang, Hyeju, Seojin Bang, Wen Xiao, Giuseppe Carenini, Raymond Ng, and Young ji Lee. "KW-ATTN: Knowledge Infused Attention for Accurate and Interpretable Text Classification." In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.deelio-1.10.
Liu, Xuan, Xiaoguang Wang, and Stan Matwin. "Interpretable Deep Convolutional Neural Networks via Meta-learning." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489172.
Повний текст джерелаЗвіти організацій з теми "Interpretable deep learning"
Jiang, Peishi, Xingyuan Chen, Maruti Mudunuru, Praveen Kumar, Pin Shuai, Kyongho Son, and Alexander Sun. Towards Trustworthy and Interpretable Deep Learning-assisted Ecohydrological Models. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769787.
Begeman, Carolyn, Marian Anghel, and Ishanu Chattopadhyay. Interpretable Deep Learning for the Earth System with Fractal Nets. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769730.