Journal articles on the topic "Post-hoc interpretability"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Post-hoc interpretability".
Next to each source in the list of references, there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if available in the metadata.
Browse journal articles from a wide variety of scientific disciplines and compile an accurate bibliography.
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability". Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review". Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing". Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning". Entropy 25, no. 12 (November 22, 2023): 1574. http://dx.doi.org/10.3390/e25121574.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, Chrysoula Garefa, Lukas S. Keller, Reto Boehm, Domenico Ciancone et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”". Imaging 14, no. 2 (December 23, 2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study". PHM Society European Conference 5, no. 1 (July 22, 2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, Himar Fabelo, Inger Torhild Gram, Maja-Lisa Løchen, Conceição Granja, and Cristina Soguero-Ruiz. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors". Applied Sciences 13, no. 7 (March 23, 2023): 4119. http://dx.doi.org/10.3390/app13074119.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck, and Andreas Nürnberger. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models". Applied Sciences 12, no. 4 (February 10, 2022): 1834. http://dx.doi.org/10.3390/app12041834.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning". Proceedings of the National Academy of Sciences 116, no. 44 (October 16, 2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)". Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "EXPLAIN IT TO ME – FACING REMOTE SENSING CHALLENGES IN THE BIO- AND GEOSCIENCES WITH EXPLAINABLE MACHINE LEARNING". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Guo, Jiaxing, Zhiyi Tang, Changxing Zhang, Wei Xu, and Yonghong Wu. "An Interpretable Deep Learning Method for Identifying Extreme Events under Faulty Data Interference". Applied Sciences 13, no. 9 (May 4, 2023): 5659. http://dx.doi.org/10.3390/app13095659.
Okajima, Yuzuru, and Kunihiko Sadamasa. "Deep Neural Networks Constrained by Decision Rules". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2496–505. http://dx.doi.org/10.1609/aaai.v33i01.33012496.
Qian, Wei, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, and Mengdi Huai. "Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14651–59. http://dx.doi.org/10.1609/aaai.v38i13.29382.
Huai, Mengdi, Jinduo Liu, Chenglin Miao, Liuyi Yao, and Aidong Zhang. "Towards Automating Model Explanations with Certified Robustness Guarantees". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6935–43. http://dx.doi.org/10.1609/aaai.v36i6.20651.
Xue, Mufan, Xinyu Wu, Jinlong Li, Xuesong Li, and Guoyuan Yang. "A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6413–21. http://dx.doi.org/10.1609/aaai.v38i6.28461.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues". Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Fan, Yongxian, Meng Liu, and Guicong Sun. "An interpretable machine learning framework for diagnosis and prognosis of COVID-19". PLOS ONE 18, no. 9 (September 21, 2023): e0291961. http://dx.doi.org/10.1371/journal.pone.0291961.
Shen, Yifan, Li Liu, Zhihao Tang, Zongyi Chen, Guixiang Ma, Jiyan Dong, Xi Zhang, Lin Yang, and Qingfeng Zheng. "Explainable Survival Analysis with Convolution-Involved Vision Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2207–15. http://dx.doi.org/10.1609/aaai.v36i2.20118.
Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour, and Ibrahim Almosallam. "Intrinsically Interpretable Gaussian Mixture Model". Information 14, no. 3 (March 3, 2023): 164. http://dx.doi.org/10.3390/info14030164.
Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research". Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.
Chen, Qian, Taolin Zhang, Dongyang Li, and Xiaofeng He. "CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17763–71. http://dx.doi.org/10.1609/aaai.v38i16.29729.
Collazos-Huertas, Diego Fabian, Andrés Marino Álvarez-Meza, David Augusto Cárdenas-Peña, Germán Albeiro Castaño-Duque, and César Germán Castellanos-Domínguez. "Posthoc Interpretability of Neural Responses by Grouping Subject Motor Imagery Skills Using CNN-Based Connectivity". Sensors 23, no. 5 (March 2, 2023): 2750. http://dx.doi.org/10.3390/s23052750.
Olatunji, Iyiola E., Mandeep Rathee, Thorben Funke, and Megha Khosla. "Private Graph Extraction via Feature Explanations". Proceedings on Privacy Enhancing Technologies 2023, no. 2 (April 2023): 59–78. http://dx.doi.org/10.56553/popets-2023-0041.
Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Articial Intelligence: using decision tree to explain SVM". Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.
Maree, Charl, and Christian W. Omlin. "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?" AI 3, no. 2 (June 13, 2022): 526–37. http://dx.doi.org/10.3390/ai3020030.
Gu, Jindong. "Interpretable Graph Capsule Networks for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1469–77. http://dx.doi.org/10.1609/aaai.v35i2.16237.
Nguyen, Hung Viet, and Haewon Byeon. "Predicting Depression during the COVID-19 Pandemic Using Interpretable TabNet: A Case Study in South Korea". Mathematics 11, no. 14 (July 17, 2023): 3145. http://dx.doi.org/10.3390/math11143145.
Tulsani, Vijya, Prashant Sahatiya, Jignasha Parmar, and Jayshree Parmar. "XAI Applications in Medical Imaging: A Survey of Methods and Challenges". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 27, 2023): 181–86. http://dx.doi.org/10.17762/ijritcc.v11i9.8332.
Zhong, Xian, Zohaib Salahuddin, Yi Chen, Henry C. Woodruff, Haiyi Long, Jianyun Peng, Xiaoyan Xie, Manxia Lin, and Philippe Lambin. "An Interpretable Radiomics Model Based on Two-Dimensional Shear Wave Elastography for Predicting Symptomatic Post-Hepatectomy Liver Failure in Patients with Hepatocellular Carcinoma". Cancers 15, no. 21 (November 6, 2023): 5303. http://dx.doi.org/10.3390/cancers15215303.
Singh, Rajeev Kumar, Rohan Gorantla, Sai Giridhar Rao Allada, and Pratap Narra. "SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability". PLOS ONE 17, no. 10 (October 31, 2022): e0276836. http://dx.doi.org/10.1371/journal.pone.0276836.
Arnold, Thomas, Daniel Kasenberg, and Matthias Scheutz. "Explaining in Time". ACM Transactions on Human-Robot Interaction 10, no. 3 (July 2021): 1–23. http://dx.doi.org/10.1145/3457183.
Tursunalieva, Ainura, David L. J. Alexander, Rob Dunne, Jiaming Li, Luis Riera, and Yanchang Zhao. "Making Sense of Machine Learning: A Review of Interpretation Techniques and Their Applications". Applied Sciences 14, no. 2 (January 5, 2024): 496. http://dx.doi.org/10.3390/app14020496.
Mokhtari, Ayoub, Roberto Casale, Zohaib Salahuddin, Zelda Paquier, Thomas Guiot, Henry C. Woodruff, Philippe Lambin, Jean-Luc Van Laethem, Alain Hendlisz, and Maria Antonietta Bali. "Development of Clinical Radiomics-Based Models to Predict Survival Outcome in Pancreatic Ductal Adenocarcinoma: A Multicenter Retrospective Study". Diagnostics 14, no. 7 (March 28, 2024): 712. http://dx.doi.org/10.3390/diagnostics14070712.
Xie, Yibing, Nichakorn Pongsakornsathien, Alessandro Gardi, and Roberto Sabatini. "Explanation of Machine-Learning Solutions in Air-Traffic Management". Aerospace 8, no. 8 (August 12, 2021): 224. http://dx.doi.org/10.3390/aerospace8080224.
Xie, Huafang, Lin Liu, and Han Yue. "Modeling the Effect of Streetscape Environment on Crime Using Street View Images and Interpretable Machine-Learning Technique". International Journal of Environmental Research and Public Health 19, no. 21 (October 24, 2022): 13833. http://dx.doi.org/10.3390/ijerph192113833.
Turbé, Hugues, Mina Bjelogrlic, Christian Lovis, and Gianmarco Mengaldo. "Evaluation of post-hoc interpretability methods in time-series classification". Nature Machine Intelligence, March 13, 2023. http://dx.doi.org/10.1038/s42256-023-00620-w.
Madsen, Andreas, Siva Reddy, and Sarath Chandar. "Post-hoc Interpretability for Neural NLP: A Survey". ACM Computing Surveys, July 9, 2022. http://dx.doi.org/10.1145/3546577.
Chen, Changdong, Allen Ding Tian, and Ruochen Jiang. "When Post Hoc Explanation Knocks: Consumer Responses to Explainable AI Recommendations". Journal of Interactive Marketing, December 7, 2023. http://dx.doi.org/10.1177/10949968231200221.
Luo, Zijing, Renguang Zuo, Yihui Xiong, and Bao Zhou. "Metallogenic-Factor Variational Autoencoder for Geochemical Anomaly Detection by Ad-Hoc and Post-Hoc Interpretability Algorithms". Natural Resources Research, April 12, 2023. http://dx.doi.org/10.1007/s11053-023-10200-9.
Marton, Sascha, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, and Heiner Stuckenschmidt. "Explaining neural networks without access to training data". Machine Learning, January 10, 2024. http://dx.doi.org/10.1007/s10994-023-06428-4.
Yang, Fanfan, Renguang Zuo, Yihui Xiong, Ying Xu, Jiaxin Nie, and Gubin Zhang. "Dual-Branch Convolutional Neural Network and Its Post Hoc Interpretability for Mapping Mineral Prospectivity". Mathematical Geosciences, March 22, 2024. http://dx.doi.org/10.1007/s11004-024-10137-6.
Velmurugan, Mythreyi, Chun Ouyang, Renuka Sindhgatta, and Catarina Moreira. "Through the looking glass: evaluating post hoc explanations using transparent models". International Journal of Data Science and Analytics, September 12, 2023. http://dx.doi.org/10.1007/s41060-023-00445-1.
Björklund, Anton, Andreas Henelius, Emilia Oikarinen, Kimmo Kallonen, and Kai Puolamäki. "Explaining any black box model using real data". Frontiers in Computer Science 5 (August 8, 2023). http://dx.doi.org/10.3389/fcomp.2023.1143904.
Xiao, Li-Ming, Yun-Qi Wan, and Zhen-Ran Jiang. "AttCRISPR: a spacetime interpretable model for prediction of sgRNA on-target activity". BMC Bioinformatics 22, no. 1 (December 2021). http://dx.doi.org/10.1186/s12859-021-04509-6.
S. S. Júnior, Jorge, Jérôme Mendes, Francisco Souza, and Cristiano Premebida. "Survey on Deep Fuzzy Systems in Regression Applications: A View on Interpretability". International Journal of Fuzzy Systems, June 5, 2023. http://dx.doi.org/10.1007/s40815-023-01544-8.
Tiwari, Devisha Arunadevi, and Bhaskar Mondal. "A Unified Framework for Cyber Oriented Digital Engineering using Integration of Explainable Chaotic Cryptology on Pervasive Systems". Qeios, May 3, 2024. http://dx.doi.org/10.32388/60nk7h.