Journal articles on the topic "Post-hoc interpretability"
Below are 50 journal articles on the topic "Post-hoc interpretability".
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability". Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review". Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing". Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning". Entropy 25, no. 12 (November 22, 2023): 1574. http://dx.doi.org/10.3390/e25121574.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, Chrysoula Garefa, Lukas S. Keller, Reto Boehm, Domenico Ciancone et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”". Imaging 14, no. 2 (December 23, 2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study". PHM Society European Conference 5, no. 1 (July 22, 2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, Himar Fabelo, Inger Torhild Gram, Maja-Lisa Løchen, Conceição Granja, and Cristina Soguero-Ruiz. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors". Applied Sciences 13, no. 7 (March 23, 2023): 4119. http://dx.doi.org/10.3390/app13074119.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck, and Andreas Nürnberger. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models". Applied Sciences 12, no. 4 (February 10, 2022): 1834. http://dx.doi.org/10.3390/app12041834.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning". Proceedings of the National Academy of Sciences 116, no. 44 (October 16, 2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)". Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "EXPLAIN IT TO ME – FACING REMOTE SENSING CHALLENGES IN THE BIO- AND GEOSCIENCES WITH EXPLAINABLE MACHINE LEARNING". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Guo, Jiaxing, Zhiyi Tang, Changxing Zhang, Wei Xu, and Yonghong Wu. "An Interpretable Deep Learning Method for Identifying Extreme Events under Faulty Data Interference". Applied Sciences 13, no. 9 (May 4, 2023): 5659. http://dx.doi.org/10.3390/app13095659.
Okajima, Yuzuru, and Kunihiko Sadamasa. "Deep Neural Networks Constrained by Decision Rules". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2496–505. http://dx.doi.org/10.1609/aaai.v33i01.33012496.
Qian, Wei, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, and Mengdi Huai. "Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14651–59. http://dx.doi.org/10.1609/aaai.v38i13.29382.
Huai, Mengdi, Jinduo Liu, Chenglin Miao, Liuyi Yao, and Aidong Zhang. "Towards Automating Model Explanations with Certified Robustness Guarantees". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6935–43. http://dx.doi.org/10.1609/aaai.v36i6.20651.
Xue, Mufan, Xinyu Wu, Jinlong Li, Xuesong Li, and Guoyuan Yang. "A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6413–21. http://dx.doi.org/10.1609/aaai.v38i6.28461.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues". Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Fan, Yongxian, Meng Liu, and Guicong Sun. "An interpretable machine learning framework for diagnosis and prognosis of COVID-19". PLOS ONE 18, no. 9 (September 21, 2023): e0291961. http://dx.doi.org/10.1371/journal.pone.0291961.
Shen, Yifan, Li Liu, Zhihao Tang, Zongyi Chen, Guixiang Ma, Jiyan Dong, Xi Zhang, Lin Yang, and Qingfeng Zheng. "Explainable Survival Analysis with Convolution-Involved Vision Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2207–15. http://dx.doi.org/10.1609/aaai.v36i2.20118.
Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour, and Ibrahim Almosallam. "Intrinsically Interpretable Gaussian Mixture Model". Information 14, no. 3 (March 3, 2023): 164. http://dx.doi.org/10.3390/info14030164.
Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research". Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.
Chen, Qian, Taolin Zhang, Dongyang Li, and Xiaofeng He. "CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17763–71. http://dx.doi.org/10.1609/aaai.v38i16.29729.
Collazos-Huertas, Diego Fabian, Andrés Marino Álvarez-Meza, David Augusto Cárdenas-Peña, Germán Albeiro Castaño-Duque, and César Germán Castellanos-Domínguez. "Posthoc Interpretability of Neural Responses by Grouping Subject Motor Imagery Skills Using CNN-Based Connectivity". Sensors 23, no. 5 (March 2, 2023): 2750. http://dx.doi.org/10.3390/s23052750.
Olatunji, Iyiola E., Mandeep Rathee, Thorben Funke, and Megha Khosla. "Private Graph Extraction via Feature Explanations". Proceedings on Privacy Enhancing Technologies 2023, no. 2 (April 2023): 59–78. http://dx.doi.org/10.56553/popets-2023-0041.
Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM". Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.
Maree, Charl, and Christian W. Omlin. "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?" AI 3, no. 2 (June 13, 2022): 526–37. http://dx.doi.org/10.3390/ai3020030.
Gu, Jindong. "Interpretable Graph Capsule Networks for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1469–77. http://dx.doi.org/10.1609/aaai.v35i2.16237.
Nguyen, Hung Viet, and Haewon Byeon. "Predicting Depression during the COVID-19 Pandemic Using Interpretable TabNet: A Case Study in South Korea". Mathematics 11, no. 14 (July 17, 2023): 3145. http://dx.doi.org/10.3390/math11143145.
Tulsani, Vijya, Prashant Sahatiya, Jignasha Parmar, and Jayshree Parmar. "XAI Applications in Medical Imaging: A Survey of Methods and Challenges". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 27, 2023): 181–86. http://dx.doi.org/10.17762/ijritcc.v11i9.8332.
Zhong, Xian, Zohaib Salahuddin, Yi Chen, Henry C. Woodruff, Haiyi Long, Jianyun Peng, Xiaoyan Xie, Manxia Lin, and Philippe Lambin. "An Interpretable Radiomics Model Based on Two-Dimensional Shear Wave Elastography for Predicting Symptomatic Post-Hepatectomy Liver Failure in Patients with Hepatocellular Carcinoma". Cancers 15, no. 21 (November 6, 2023): 5303. http://dx.doi.org/10.3390/cancers15215303.
Singh, Rajeev Kumar, Rohan Gorantla, Sai Giridhar Rao Allada, and Pratap Narra. "SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability". PLOS ONE 17, no. 10 (October 31, 2022): e0276836. http://dx.doi.org/10.1371/journal.pone.0276836.
Arnold, Thomas, Daniel Kasenberg, and Matthias Scheutz. "Explaining in Time". ACM Transactions on Human-Robot Interaction 10, no. 3 (July 2021): 1–23. http://dx.doi.org/10.1145/3457183.
Tursunalieva, Ainura, David L. J. Alexander, Rob Dunne, Jiaming Li, Luis Riera, and Yanchang Zhao. "Making Sense of Machine Learning: A Review of Interpretation Techniques and Their Applications". Applied Sciences 14, no. 2 (January 5, 2024): 496. http://dx.doi.org/10.3390/app14020496.
Mokhtari, Ayoub, Roberto Casale, Zohaib Salahuddin, Zelda Paquier, Thomas Guiot, Henry C. Woodruff, Philippe Lambin, Jean-Luc Van Laethem, Alain Hendlisz, and Maria Antonietta Bali. "Development of Clinical Radiomics-Based Models to Predict Survival Outcome in Pancreatic Ductal Adenocarcinoma: A Multicenter Retrospective Study". Diagnostics 14, no. 7 (March 28, 2024): 712. http://dx.doi.org/10.3390/diagnostics14070712.
Xie, Yibing, Nichakorn Pongsakornsathien, Alessandro Gardi, and Roberto Sabatini. "Explanation of Machine-Learning Solutions in Air-Traffic Management". Aerospace 8, no. 8 (August 12, 2021): 224. http://dx.doi.org/10.3390/aerospace8080224.
Xie, Huafang, Lin Liu, and Han Yue. "Modeling the Effect of Streetscape Environment on Crime Using Street View Images and Interpretable Machine-Learning Technique". International Journal of Environmental Research and Public Health 19, no. 21 (October 24, 2022): 13833. http://dx.doi.org/10.3390/ijerph192113833.
Turbé, Hugues, Mina Bjelogrlic, Christian Lovis, and Gianmarco Mengaldo. "Evaluation of post-hoc interpretability methods in time-series classification". Nature Machine Intelligence, March 13, 2023. http://dx.doi.org/10.1038/s42256-023-00620-w.
Madsen, Andreas, Siva Reddy, and Sarath Chandar. "Post-hoc Interpretability for Neural NLP: A Survey". ACM Computing Surveys, July 9, 2022. http://dx.doi.org/10.1145/3546577.
Chen, Changdong, Allen Ding Tian, and Ruochen Jiang. "When Post Hoc Explanation Knocks: Consumer Responses to Explainable AI Recommendations". Journal of Interactive Marketing, December 7, 2023. http://dx.doi.org/10.1177/10949968231200221.
Luo, Zijing, Renguang Zuo, Yihui Xiong, and Bao Zhou. "Metallogenic-Factor Variational Autoencoder for Geochemical Anomaly Detection by Ad-Hoc and Post-Hoc Interpretability Algorithms". Natural Resources Research, April 12, 2023. http://dx.doi.org/10.1007/s11053-023-10200-9.
Marton, Sascha, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, and Heiner Stuckenschmidt. "Explaining neural networks without access to training data". Machine Learning, January 10, 2024. http://dx.doi.org/10.1007/s10994-023-06428-4.
Yang, Fanfan, Renguang Zuo, Yihui Xiong, Ying Xu, Jiaxin Nie, and Gubin Zhang. "Dual-Branch Convolutional Neural Network and Its Post Hoc Interpretability for Mapping Mineral Prospectivity". Mathematical Geosciences, March 22, 2024. http://dx.doi.org/10.1007/s11004-024-10137-6.
Velmurugan, Mythreyi, Chun Ouyang, Renuka Sindhgatta, and Catarina Moreira. "Through the looking glass: evaluating post hoc explanations using transparent models". International Journal of Data Science and Analytics, September 12, 2023. http://dx.doi.org/10.1007/s41060-023-00445-1.
Björklund, Anton, Andreas Henelius, Emilia Oikarinen, Kimmo Kallonen, and Kai Puolamäki. "Explaining any black box model using real data". Frontiers in Computer Science 5 (August 8, 2023). http://dx.doi.org/10.3389/fcomp.2023.1143904.
Xiao, Li-Ming, Yun-Qi Wan, and Zhen-Ran Jiang. "AttCRISPR: a spacetime interpretable model for prediction of sgRNA on-target activity". BMC Bioinformatics 22, no. 1 (December 2021). http://dx.doi.org/10.1186/s12859-021-04509-6.
S. S. Júnior, Jorge, Jérôme Mendes, Francisco Souza, and Cristiano Premebida. "Survey on Deep Fuzzy Systems in Regression Applications: A View on Interpretability". International Journal of Fuzzy Systems, June 5, 2023. http://dx.doi.org/10.1007/s40815-023-01544-8.
Tiwari, Devisha Arunadevi, and Bhaskar Mondal. "A Unified Framework for Cyber Oriented Digital Engineering using Integration of Explainable Chaotic Cryptology on Pervasive Systems". Qeios, May 3, 2024. http://dx.doi.org/10.32388/60nk7h.