Journal articles on the topic 'Post-hoc interpretability'
Consult the top 50 journal articles for your research on the topic 'Post-hoc interpretability.'
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability." Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review." Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning." Entropy 25, no. 12 (November 22, 2023): 1574. http://dx.doi.org/10.3390/e25121574.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, Chrysoula Garefa, Lukas S. Keller, Reto Boehm, Domenico Ciancone, et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”." Imaging 14, no. 2 (December 23, 2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study." PHM Society European Conference 5, no. 1 (July 22, 2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, Himar Fabelo, Inger Torhild Gram, Maja-Lisa Løchen, Conceição Granja, and Cristina Soguero-Ruiz. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors." Applied Sciences 13, no. 7 (March 23, 2023): 4119. http://dx.doi.org/10.3390/app13074119.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck, and Andreas Nürnberger. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (February 10, 2022): 1834. http://dx.doi.org/10.3390/app12041834.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (October 16, 2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "Explain It to Me – Facing Remote Sensing Challenges in the Bio- and Geosciences with Explainable Machine Learning." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Guo, Jiaxing, Zhiyi Tang, Changxing Zhang, Wei Xu, and Yonghong Wu. "An Interpretable Deep Learning Method for Identifying Extreme Events under Faulty Data Interference." Applied Sciences 13, no. 9 (May 4, 2023): 5659. http://dx.doi.org/10.3390/app13095659.
Okajima, Yuzuru, and Kunihiko Sadamasa. "Deep Neural Networks Constrained by Decision Rules." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2496–505. http://dx.doi.org/10.1609/aaai.v33i01.33012496.
Qian, Wei, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, and Mengdi Huai. "Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14651–59. http://dx.doi.org/10.1609/aaai.v38i13.29382.
Huai, Mengdi, Jinduo Liu, Chenglin Miao, Liuyi Yao, and Aidong Zhang. "Towards Automating Model Explanations with Certified Robustness Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6935–43. http://dx.doi.org/10.1609/aaai.v36i6.20651.
Xue, Mufan, Xinyu Wu, Jinlong Li, Xuesong Li, and Guoyuan Yang. "A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6413–21. http://dx.doi.org/10.1609/aaai.v38i6.28461.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Fan, Yongxian, Meng Liu, and Guicong Sun. "An interpretable machine learning framework for diagnosis and prognosis of COVID-19." PLOS ONE 18, no. 9 (September 21, 2023): e0291961. http://dx.doi.org/10.1371/journal.pone.0291961.
Shen, Yifan, Li Liu, Zhihao Tang, Zongyi Chen, Guixiang Ma, Jiyan Dong, Xi Zhang, Lin Yang, and Qingfeng Zheng. "Explainable Survival Analysis with Convolution-Involved Vision Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2207–15. http://dx.doi.org/10.1609/aaai.v36i2.20118.
Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour, and Ibrahim Almosallam. "Intrinsically Interpretable Gaussian Mixture Model." Information 14, no. 3 (March 3, 2023): 164. http://dx.doi.org/10.3390/info14030164.
Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research." Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.
Chen, Qian, Taolin Zhang, Dongyang Li, and Xiaofeng He. "CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17763–71. http://dx.doi.org/10.1609/aaai.v38i16.29729.
Collazos-Huertas, Diego Fabian, Andrés Marino Álvarez-Meza, David Augusto Cárdenas-Peña, Germán Albeiro Castaño-Duque, and César Germán Castellanos-Domínguez. "Posthoc Interpretability of Neural Responses by Grouping Subject Motor Imagery Skills Using CNN-Based Connectivity." Sensors 23, no. 5 (March 2, 2023): 2750. http://dx.doi.org/10.3390/s23052750.
Olatunji, Iyiola E., Mandeep Rathee, Thorben Funke, and Megha Khosla. "Private Graph Extraction via Feature Explanations." Proceedings on Privacy Enhancing Technologies 2023, no. 2 (April 2023): 59–78. http://dx.doi.org/10.56553/popets-2023-0041.
Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM." Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.
Maree, Charl, and Christian W. Omlin. "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?" AI 3, no. 2 (June 13, 2022): 526–37. http://dx.doi.org/10.3390/ai3020030.
Gu, Jindong. "Interpretable Graph Capsule Networks for Object Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1469–77. http://dx.doi.org/10.1609/aaai.v35i2.16237.
Nguyen, Hung Viet, and Haewon Byeon. "Predicting Depression during the COVID-19 Pandemic Using Interpretable TabNet: A Case Study in South Korea." Mathematics 11, no. 14 (July 17, 2023): 3145. http://dx.doi.org/10.3390/math11143145.
Tulsani, Vijya, Prashant Sahatiya, Jignasha Parmar, and Jayshree Parmar. "XAI Applications in Medical Imaging: A Survey of Methods and Challenges." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 27, 2023): 181–86. http://dx.doi.org/10.17762/ijritcc.v11i9.8332.
Zhong, Xian, Zohaib Salahuddin, Yi Chen, Henry C. Woodruff, Haiyi Long, Jianyun Peng, Xiaoyan Xie, Manxia Lin, and Philippe Lambin. "An Interpretable Radiomics Model Based on Two-Dimensional Shear Wave Elastography for Predicting Symptomatic Post-Hepatectomy Liver Failure in Patients with Hepatocellular Carcinoma." Cancers 15, no. 21 (November 6, 2023): 5303. http://dx.doi.org/10.3390/cancers15215303.
Singh, Rajeev Kumar, Rohan Gorantla, Sai Giridhar Rao Allada, and Pratap Narra. "SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability." PLOS ONE 17, no. 10 (October 31, 2022): e0276836. http://dx.doi.org/10.1371/journal.pone.0276836.
Arnold, Thomas, Daniel Kasenberg, and Matthias Scheutz. "Explaining in Time." ACM Transactions on Human-Robot Interaction 10, no. 3 (July 2021): 1–23. http://dx.doi.org/10.1145/3457183.
Tursunalieva, Ainura, David L. J. Alexander, Rob Dunne, Jiaming Li, Luis Riera, and Yanchang Zhao. "Making Sense of Machine Learning: A Review of Interpretation Techniques and Their Applications." Applied Sciences 14, no. 2 (January 5, 2024): 496. http://dx.doi.org/10.3390/app14020496.
Mokhtari, Ayoub, Roberto Casale, Zohaib Salahuddin, Zelda Paquier, Thomas Guiot, Henry C. Woodruff, Philippe Lambin, Jean-Luc Van Laethem, Alain Hendlisz, and Maria Antonietta Bali. "Development of Clinical Radiomics-Based Models to Predict Survival Outcome in Pancreatic Ductal Adenocarcinoma: A Multicenter Retrospective Study." Diagnostics 14, no. 7 (March 28, 2024): 712. http://dx.doi.org/10.3390/diagnostics14070712.
Xie, Yibing, Nichakorn Pongsakornsathien, Alessandro Gardi, and Roberto Sabatini. "Explanation of Machine-Learning Solutions in Air-Traffic Management." Aerospace 8, no. 8 (August 12, 2021): 224. http://dx.doi.org/10.3390/aerospace8080224.
Xie, Huafang, Lin Liu, and Han Yue. "Modeling the Effect of Streetscape Environment on Crime Using Street View Images and Interpretable Machine-Learning Technique." International Journal of Environmental Research and Public Health 19, no. 21 (October 24, 2022): 13833. http://dx.doi.org/10.3390/ijerph192113833.
Turbé, Hugues, Mina Bjelogrlic, Christian Lovis, and Gianmarco Mengaldo. "Evaluation of post-hoc interpretability methods in time-series classification." Nature Machine Intelligence, March 13, 2023. http://dx.doi.org/10.1038/s42256-023-00620-w.
Madsen, Andreas, Siva Reddy, and Sarath Chandar. "Post-hoc Interpretability for Neural NLP: A Survey." ACM Computing Surveys, July 9, 2022. http://dx.doi.org/10.1145/3546577.
Chen, Changdong, Allen Ding Tian, and Ruochen Jiang. "When Post Hoc Explanation Knocks: Consumer Responses to Explainable AI Recommendations." Journal of Interactive Marketing, December 7, 2023. http://dx.doi.org/10.1177/10949968231200221.
Luo, Zijing, Renguang Zuo, Yihui Xiong, and Bao Zhou. "Metallogenic-Factor Variational Autoencoder for Geochemical Anomaly Detection by Ad-Hoc and Post-Hoc Interpretability Algorithms." Natural Resources Research, April 12, 2023. http://dx.doi.org/10.1007/s11053-023-10200-9.
Marton, Sascha, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, and Heiner Stuckenschmidt. "Explaining neural networks without access to training data." Machine Learning, January 10, 2024. http://dx.doi.org/10.1007/s10994-023-06428-4.
Yang, Fanfan, Renguang Zuo, Yihui Xiong, Ying Xu, Jiaxin Nie, and Gubin Zhang. "Dual-Branch Convolutional Neural Network and Its Post Hoc Interpretability for Mapping Mineral Prospectivity." Mathematical Geosciences, March 22, 2024. http://dx.doi.org/10.1007/s11004-024-10137-6.
Velmurugan, Mythreyi, Chun Ouyang, Renuka Sindhgatta, and Catarina Moreira. "Through the looking glass: evaluating post hoc explanations using transparent models." International Journal of Data Science and Analytics, September 12, 2023. http://dx.doi.org/10.1007/s41060-023-00445-1.
Björklund, Anton, Andreas Henelius, Emilia Oikarinen, Kimmo Kallonen, and Kai Puolamäki. "Explaining any black box model using real data." Frontiers in Computer Science 5 (August 8, 2023). http://dx.doi.org/10.3389/fcomp.2023.1143904.
Xiao, Li-Ming, Yun-Qi Wan, and Zhen-Ran Jiang. "AttCRISPR: a spacetime interpretable model for prediction of sgRNA on-target activity." BMC Bioinformatics 22, no. 1 (December 2021). http://dx.doi.org/10.1186/s12859-021-04509-6.
Júnior, Jorge S. S., Jérôme Mendes, Francisco Souza, and Cristiano Premebida. "Survey on Deep Fuzzy Systems in Regression Applications: A View on Interpretability." International Journal of Fuzzy Systems, June 5, 2023. http://dx.doi.org/10.1007/s40815-023-01544-8.
Tiwari, Devisha Arunadevi, and Bhaskar Mondal. "A Unified Framework for Cyber Oriented Digital Engineering using Integration of Explainable Chaotic Cryptology on Pervasive Systems." Qeios, May 3, 2024. http://dx.doi.org/10.32388/60nk7h.