Academic literature on the topic 'Interpretable ML'
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Interpretable ML.' For each source, a bibliographic reference can be generated in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc. Where the metadata makes it available, you can also download the publication's full text as a PDF and read its abstract online.
Journal articles on the topic "Interpretable ML"
Zytek, Alexandra, Ignacio Arnaldo, Dongyu Liu, Laure Berti-Equille, and Kalyan Veeramachaneni. "The Need for Interpretable Features." ACM SIGKDD Explorations Newsletter 24, no. 1 (June 2, 2022): 1–13. http://dx.doi.org/10.1145/3544903.3544905.
Wu, Bozhi, Sen Chen, Cuiyun Gao, Lingling Fan, Yang Liu, Weiping Wen, and Michael R. Lyu. "Why an Android App Is Classified as Malware." ACM Transactions on Software Engineering and Methodology 30, no. 2 (March 2021): 1–29. http://dx.doi.org/10.1145/3423096.
Yang, Ziduo, Weihe Zhong, Lu Zhao, and Calvin Yu-Chian Chen. "ML-DTI: Mutual Learning Mechanism for Interpretable Drug–Target Interaction Prediction." Journal of Physical Chemistry Letters 12, no. 17 (April 27, 2021): 4247–61. http://dx.doi.org/10.1021/acs.jpclett.1c00867.
Lin, Zhiqing. "A Methodological Review of Machine Learning in Applied Linguistics." English Language Teaching 14, no. 1 (December 23, 2020): 74. http://dx.doi.org/10.5539/elt.v14n1p74.
Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, and Waleed Ali. "A Review of Interpretable ML in Healthcare: Taxonomy, Applications, Challenges, and Future Directions." Symmetry 13, no. 12 (December 17, 2021): 2439. http://dx.doi.org/10.3390/sym13122439.
Sajid, Mirza Rizwan, Arshad Ali Khan, Haitham M. Albar, Noryanti Muhammad, Waqas Sami, Syed Ahmad Chan Bukhari, and Iram Wajahat. "Exploration of Black Boxes of Supervised Machine Learning Models: A Demonstration on Development of Predictive Heart Risk Score." Computational Intelligence and Neuroscience 2022 (May 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/5475313.
Singh, Devesh. "Interpretable Machine-Learning Approach in Estimating FDI Inflow: Visualization of ML Models with LIME and H2O." TalTech Journal of European Studies 11, no. 1 (May 1, 2021): 133–52. http://dx.doi.org/10.2478/bjes-2021-0009.
Carreiro Pinasco, Gustavo, Eduardo Moreno Júdice de Mattos Farina, Fabiano Novaes Barcellos Filho, Willer França Fiorotti, Matheus Coradini Mariano Ferreira, Sheila Cristina de Souza Cruz, Andre Louzada Colodette, et al. "An interpretable machine learning model for covid-19 screening." Journal of Human Growth and Development 32, no. 2 (June 23, 2022): 268–74. http://dx.doi.org/10.36311/jhgd.v32.13324.
Menon, P. Archana, and R. Gunasundari. "Study of Interpretability in ML Algorithms for Disease Prognosis." Revista Gestão Inovação e Tecnologias 11, no. 4 (August 19, 2021): 4735–49. http://dx.doi.org/10.47059/revistageintec.v11i4.2500.
Dawid, Anna, Patrick Huembeli, Michał Tomza, Maciej Lewenstein, and Alexandre Dauphin. "Hessian-based toolbox for reliable and interpretable machine learning in physics." Machine Learning: Science and Technology 3, no. 1 (November 24, 2021): 015002. http://dx.doi.org/10.1088/2632-2153/ac338d.
Full textDissertations / Theses on the topic "Interpretable ML"
Gustafsson, Sebastian. "Interpretable serious event forecasting using machine learning and SHAP." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444363.
Accurate forecasts are important in several areas of economic, scientific, commercial, and industrial activity. There are few previous studies in which forecasting methods have been used to predict serious events. This thesis aims to investigate two things: first, whether machine learning models can be used to forecast serious events, and second, whether those models can be made interpretable. Given these goals, the approach was to formulate two forecasting tasks for the models and then use the Python framework SHAP to make them interpretable. The first task was to predict whether a serious event will occur within the next eight hours; the second was to predict how many serious events will occur within the next six hours. GBDT and LSTM models were implemented, evaluated, and compared on both tasks. Given the inherent difficulty of predicting the future, the results match those of earlier related research. On the classification task, the best-performing model achieved an accuracy of 71.6%, and on the regression task it was off by less than 1, on average, in the predicted number of serious events.
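The abstract above relies on SHAP, which attributes a model's prediction to its individual input features via Shapley values. As a minimal, self-contained sketch of the underlying idea (not the thesis's actual pipeline), the following computes exact Shapley values by brute force for a hypothetical three-feature linear "risk" model; the SHAP library estimates the same quantities efficiently for real GBDT and LSTM models.

```python
# Brute-force Shapley values for feature attribution: the quantity that SHAP
# approximates efficiently for large models. The model, instance, and
# background values below are illustrative assumptions, not real data.
from itertools import combinations
from math import factorial

def model(x):
    # Toy linear "risk" model standing in for a trained predictor.
    return 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2]

background = [0.0, 0.0, 0.0]   # reference input (e.g. feature means)
x = [1.0, 2.0, 4.0]            # instance being explained
n = len(x)

def value(coalition):
    # Evaluate the model with features outside the coalition set to background.
    masked = [x[i] if i in coalition else background[i] for i in range(n)]
    return model(masked)

def shapley(i):
    # Weighted average of feature i's marginal contribution over all
    # coalitions of the other features.
    others = [j for j in range(n) if j != i]
    total = 0.0
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (value(set(S) | {i}) - value(set(S)))
    return total

phi = [shapley(i) for i in range(n)]
# For a linear model, each phi_i reduces to coefficient * (x_i - background_i),
# i.e. [2.0, 2.0, -2.0] up to float rounding.
```

For tree ensembles such as GBDTs, SHAP's TreeExplainer computes these attributions in polynomial time rather than by enumerating coalitions.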
Gilmore, Eugene M. "Learning Interpretable Decision Tree Classifiers with Human in the Loop Learning and Parallel Coordinates." Thesis, Griffith University, 2022. http://hdl.handle.net/10072/418633.
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), School of Info & Comm Tech, Science, Environment, Engineering and Technology.
REPETTO, MARCO. "Black-box supervised learning and empirical assessment: new perspectives in credit risk modeling." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/402366.
Recent highly performant Machine Learning algorithms are compelling but opaque, so it is often hard to understand how they arrive at their predictions, giving rise to interpretability issues. Such issues are particularly relevant in supervised learning, where black-box models are not easily understood by the stakeholders involved. A growing body of work focuses on making Machine Learning models, particularly Deep Learning models, more interpretable. Currently proposed approaches rely on post-hoc interpretation, using methods such as saliency mapping and partial dependence. Despite the advances that have been made, interpretability is still an active area of research, and there is no silver-bullet solution. Moreover, in high-stakes decision-making, post-hoc interpretability may be sub-optimal. An example is the field of enterprise credit risk modeling, where classification models discriminate between good and bad borrowers and lenders can use these models to deny loan requests. Loan denial can be especially harmful when the borrower cannot appeal or have the decision explained and grounded in fundamentals. In such cases it is therefore crucial to understand why these models produce a given output and to steer the learning process toward predictions based on fundamentals. This dissertation focuses on the concept of Interpretable Machine Learning, with particular attention to the context of credit risk modeling. It revolves around three topics: model-agnostic interpretability, post-hoc interpretation in credit risk, and interpretability-driven learning. The first chapter is a guided introduction to the model-agnostic techniques shaping today's landscape of Machine Learning and their implementations. The second chapter presents an empirical analysis of the credit risk of Italian Small and Medium Enterprises, proposing an analytical pipeline in which post-hoc interpretability plays a crucial role in finding the relevant underpinnings that drive a firm into bankruptcy. The third and last chapter proposes a novel multicriteria knowledge-injection methodology based on double backpropagation, which can improve model performance, especially when data are scarce. The essential advantage of this methodology is that it allows decision makers to impose their prior knowledge at the beginning of the learning process, yielding predictions that align with the fundamentals.
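The dissertation abstract above leans on partial dependence as a post-hoc interpretation tool, and the computation is simple enough to hand-roll: sweep one feature over a grid while averaging the model's predictions over the data. The sketch below uses a hypothetical logistic "bankruptcy score" in place of a trained black-box model; the data and model are illustrative assumptions, not the dissertation's.

```python
# Hand-rolled partial dependence: clamp one feature to each grid value and
# average the model's predictions over the empirical distribution of the
# remaining features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-in for firm-level features

def predict(X):
    # Toy logistic "bankruptcy score" standing in for a black-box classifier.
    return 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 0.8 * X[:, 1])))

def partial_dependence(X, feature, grid):
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v              # force the feature to the grid value
        pd_values.append(predict(Xv).mean())
    return np.array(pd_values)

grid = np.linspace(-2.0, 2.0, 5)
pd0 = partial_dependence(X, 0, grid)
# pd0 rises monotonically across the grid: feature 0 pushes predicted risk up.
```

scikit-learn ships the same computation for fitted estimators as `sklearn.inspection.partial_dependence`.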
Book chapters on the topic "Interpretable ML"
Nandi, Anirban, and Aditya Kumar Pal. "Interpretable ML and Explainable ML Differences." In Interpreting Machine Learning Models, 83–95. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_7.
Full textQiu, Waishan, Wenjing Li, Xun Liu, and Xiaokai Huang. "Subjectively Measured Streetscape Qualities for Shanghai with Large-Scale Application of Computer Vision and Machine Learning." In Proceedings of the 2021 DigitalFUTURES, 242–51. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5983-6_23.
Full textBagci Das, Duygu, and Derya Birant. "XHAC." In Emerging Trends in IoT and Integration with Data Science, Cloud Computing, and Big Data Analytics, 146–64. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-4186-9.ch008.
Full textSajjadinia, Seyed Shayan, Bruno Carpentieri, and Gerhard A. Holzapfel. "A Pointwise Evaluation Metric to Visualize Errors in Machine Learning Surrogate Models." In Proceedings of CECNet 2021. IOS Press, 2021. http://dx.doi.org/10.3233/faia210386.
Full textKatsuragi, Miki, and Kenji Tanaka. "Dropout Prediction by Interpretable Machine Learning Model Towards Preventing Student Dropout." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220700.
Full textConference papers on the topic "Interpretable ML"
Ignatiev, Alexey, Joao Marques-Silva, Nina Narodytska, and Peter J. Stuckey. "Reasoning-Based Learning of Interpretable ML Models." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/608.
Full textNair, Rahul, Massimiliano Mattetti, Elizabeth Daly, Dennis Wei, Oznur Alkan, and Yunfeng Zhang. "What Changed? Interpretable Model Comparison." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/393.
Full textPreece, Alun, Dan Harborne, Ramya Raghavendra, Richard Tomsett, and Dave Braines. "Provisioning Robust and Interpretable AI/ML-Based Service Bundles." In MILCOM 2018 - IEEE Military Communications Conference. IEEE, 2018. http://dx.doi.org/10.1109/milcom.2018.8599838.
Full textKaratekin, Tamer, Selim Sancak, Gokhan Celik, Sevilay Topcuoglu, Guner Karatekin, Pinar Kirci, and Ali Okatan. "Interpretable Machine Learning in Healthcare through Generalized Additive Model with Pairwise Interactions (GA2M): Predicting Severe Retinopathy of Prematurity." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00020.
Full textSim, Rachael Hwee Ling, Xinyi Xu, and Bryan Kian Hsiang Low. "Data Valuation in Machine Learning: "Ingredients", Strategies, and Open Challenges." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/782.
Full textIzza, Yacine, and Joao Marques-Silva. "On Explaining Random Forests with SAT." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/356.
Full textAglin, Gaël, Siegfried Nijssen, and Pierre Schaus. "PyDL8.5: a Library for Learning Optimal Decision Trees." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/750.
Full textKurasova, Olga, Virginijus Marcinkevičius, and Birutė Mikulskienė. "Enhanced Visualization of Customized Manufacturing Data." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.12.
Full textAditama, Prihandono, Tina Koziol, and Dr Meindert Dillen. "Development of an Artificial Intelligence-Based Well Integrity Monitoring Solution." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211093-ms.
Full textCoutinho, Emilio J. R., and Marcelo J. Aqua and Eduardo Gildin. "Physics-Aware Deep-Learning-Based Proxy Reservoir Simulation Model Equipped with State and Well Output Prediction." In SPE Reservoir Simulation Conference. SPE, 2021. http://dx.doi.org/10.2118/203994-ms.
Full textReports on the topic "Interpretable ML"
Zhu, Qing, William Riley, and James Randerson. Improve wildfire predictability driven by extreme water cycle with interpretable physically-guided ML/AI. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769720.