Journal articles on the topic "Model-agnostic Explainability"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Model-agnostic Explainability".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.
Diprose, William K., Nicholas Buist, Ning Hua, Quentin Thurier, George Shand and Reece Robinson. "Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator". Journal of the American Medical Informatics Association 27, no. 4 (February 27, 2020): 592–600. http://dx.doi.org/10.1093/jamia/ocz229.
Zafar, Muhammad Rehman and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability". Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.
Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example". Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Ullah, Ihsan, Andre Rios, Vaibhav Gala and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation". Applied Sciences 12, no. 1 (December 23, 2021): 136. http://dx.doi.org/10.3390/app12010136.
Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies". Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.
Lv, Ge, Chen Jason Zhang and Lei Chen. "HENCE-X: Toward Heterogeneity-Agnostic Multi-Level Explainability for Deep Graph Networks". Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 2990–3003. http://dx.doi.org/10.14778/3611479.3611503.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification". Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.
Hassan, Fayaz, Jianguo Yu, Zafi Sherhan Syed, Nadeem Ahmed, Mana Saleh Al Reshan and Asadullah Shaikh. "Achieving model explainability for intrusion detection in VANETs with LIME". PeerJ Computer Science 9 (June 22, 2023): e1440. http://dx.doi.org/10.7717/peerj-cs.1440.
Vieira, Carla Piazzon Ramos and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM". Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.
Nguyen, Hung Viet and Haewon Byeon. "Prediction of Out-of-Hospital Cardiac Arrest Survival Outcomes Using a Hybrid Agnostic Explanation TabNet Model". Mathematics 11, no. 9 (April 25, 2023): 2030. http://dx.doi.org/10.3390/math11092030.
Szepannek, Gero and Karsten Lübke. "How much do we see? On the explainability of partial dependence plots for credit risk scoring". Argumenta Oeconomica 2023, no. 2 (2023): 137–50. http://dx.doi.org/10.15611/aoe.2023.1.07.
Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani and Fabio Vitali. "Metrics, Explainability and the European AI Act Proposal". J 5, no. 1 (February 18, 2022): 126–38. http://dx.doi.org/10.3390/j5010010.
Kaplun, Dmitry, Alexander Krasichkov, Petr Chetyrbok, Nikolay Oleinikov, Anupam Garg and Husanbir Singh Pannu. "Cancer Cell Profiling Using Image Moments and Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database". Mathematics 9, no. 20 (October 17, 2021): 2616. http://dx.doi.org/10.3390/math9202616.
Ibrahim, Muhammad Amien, Samsul Arifin, I. Gusti Agung Anom Yudistira, Rinda Nariswari, Abdul Azis Abdillah, Nerru Pranuta Murnaka and Puguh Wahyu Prasetyo. "An Explainable AI Model for Hate Speech Detection on Indonesian Twitter". CommIT (Communication and Information Technology) Journal 16, no. 2 (June 8, 2022): 175–82. http://dx.doi.org/10.21512/commit.v16i2.8343.
Manikis, Georgios C., Georgios S. Ioannidis, Loizos Siakallis, Katerina Nikiforaki, Michael Iv, Diana Vozlic, Katarina Surlan-Popovic, Max Wintermark, Sotirios Bisdas and Kostas Marias. "Multicenter DSC–MRI-Based Radiomics Predict IDH Mutation in Gliomas". Cancers 13, no. 16 (August 5, 2021): 3965. http://dx.doi.org/10.3390/cancers13163965.
Oubelaid, Adel, Abdelhameed Ibrahim and Ahmed M. Elshewey. "Bridging the Gap: An Explainable Methodology for Customer Churn Prediction in Supply Chain Management". Journal of Artificial Intelligence and Metaheuristics 4, no. 1 (2023): 16–23. http://dx.doi.org/10.54216/jaim.040102.
Sathyan, Anoop, Abraham Itzhak Weinberg and Kelly Cohen. "Interpretable AI for bio-medical applications". Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.
Ahmed, Md Sabbir, Md Tasin Tazwar, Haseen Khan, Swadhin Roy, Junaed Iqbal, Md Golam Rabiul Alam, Md Rafiul Hassan and Mohammad Mehedi Hassan. "Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression". Complexity 2022 (September 19, 2022): 1–20. http://dx.doi.org/10.1155/2022/5305353.
Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications". Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.
Mehta, Harshkumar and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)". Algorithms 15, no. 8 (August 17, 2022): 291. http://dx.doi.org/10.3390/a15080291.
Lu, Haohui and Shahadat Uddin. "Explainable Stacking-Based Model for Predicting Hospital Readmission for Diabetic Patients". Information 13, no. 9 (September 15, 2022): 436. http://dx.doi.org/10.3390/info13090436.
Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, Waleed Ali and Shahab Ul Hassan. "B-LIME: An Improvement of LIME for Interpretable Deep Learning Classification of Cardiac Arrhythmia from ECG Signals". Processes 11, no. 2 (February 16, 2023): 595. http://dx.doi.org/10.3390/pr11020595.
Merone, Mario, Alessandro Graziosi, Valerio Lapadula, Lorenzo Petrosino, Onorato d’Angelis and Luca Vollero. "A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems". Sensors 22, no. 20 (October 14, 2022): 7807. http://dx.doi.org/10.3390/s22207807.
Kim, Jaehun. "Increasing trust in complex machine learning systems". ACM SIGIR Forum 55, no. 1 (June 2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.
Du, Yuhan, Anthony R. Rafferty, Fionnuala M. McAuliffe, John Mehegan and Catherine Mooney. "Towards an explainable clinical decision support system for large-for-gestational-age births". PLOS ONE 18, no. 2 (February 21, 2023): e0281821. http://dx.doi.org/10.1371/journal.pone.0281821.
Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, Lan Wei, Claudia Mazo, Brett A. Becker and Catherine Mooney. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review". Applied Sciences 11, no. 11 (May 31, 2021): 5088. http://dx.doi.org/10.3390/app11115088.
Kim, Kipyo, Hyeonsik Yang, Jinyeong Yi, Hyung-Eun Son, Ji-Young Ryu, Yong Chul Kim, Jong Cheol Jeong et al. "Real-Time Clinical Decision Support Based on Recurrent Neural Networks for In-Hospital Acute Kidney Injury: External Validation and Model Interpretation". Journal of Medical Internet Research 23, no. 4 (April 16, 2021): e24120. http://dx.doi.org/10.2196/24120.
Abir, Wahidul Hasan, Md Fahim Uddin, Faria Rahman Khanam, Tahia Tazin, Mohammad Monirujjaman Khan, Mehedi Masud and Sultan Aljahdali. "Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method". Computational Intelligence and Neuroscience 2022 (April 27, 2022): 1–14. http://dx.doi.org/10.1155/2022/5140148.
Wikle, Christopher K., Abhirup Datta, Bhava Vyasa Hari, Edward L. Boone, Indranil Sahoo, Indulekha Kavila, Stefano Castruccio, Susan J. Simmons, Wesley S. Burr and Won Chang. "An illustration of model agnostic explainability methods applied to environmental data". Environmetrics, October 25, 2022. http://dx.doi.org/10.1002/env.2772.
Xu, Zhichao, Hansi Zeng, Juntao Tan, Zuohui Fu, Yongfeng Zhang and Qingyao Ai. "A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability". ACM Transactions on Information Systems, June 18, 2023. http://dx.doi.org/10.1145/3605357.
Joyce, Dan W., Andrey Kormilitzin, Katharine A. Smith and Andrea Cipriani. "Explainable artificial intelligence for mental health through transparency and interpretability for understandability". npj Digital Medicine 6, no. 1 (January 18, 2023). http://dx.doi.org/10.1038/s41746-023-00751-9.
Nakashima, Heitor Hoffman, Daielly Mantovani and Celso Machado Junior. "Users’ trust in black-box machine learning algorithms". Revista de Gestão, October 25, 2022. http://dx.doi.org/10.1108/rege-06-2022-0100.
Szepannek, Gero and Karsten Lübke. "Explaining Artificial Intelligence with Care". KI - Künstliche Intelligenz, May 16, 2022. http://dx.doi.org/10.1007/s13218-022-00764-8.
Sharma, Jeetesh, Murari Lal Mittal, Gunjan Soni and Arvind Keprate. "Explainable Artificial Intelligence (XAI) Approaches in Predictive Maintenance: A Review". Recent Patents on Engineering 18 (April 17, 2023). http://dx.doi.org/10.2174/1872212118666230417084231.
Szczepański, Mateusz, Marek Pawlicki, Rafał Kozik and Michał Choraś. "New explainability method for BERT-based model in fake news detection". Scientific Reports 11, no. 1 (December 2021). http://dx.doi.org/10.1038/s41598-021-03100-6.
Öztoprak, Samet and Zeynep Orman. "A New Model-Agnostic Method and Implementation for Explaining the Prediction on Finance Data". European Journal of Science and Technology, June 29, 2022. http://dx.doi.org/10.31590/ejosat.1079145.
Bachoc, François, Fabrice Gamboa, Max Halford, Jean-Michel Loubes and Laurent Risser. "Explaining machine learning models using entropic variable projection". Information and Inference: A Journal of the IMA 12, no. 3 (April 27, 2023). http://dx.doi.org/10.1093/imaiai/iaad010.
Loveleen, Gaur, Bhandari Mohan, Bhadwal Singh Shikhar, Jhanjhi Nz, Mohammad Shorfuzzaman and Mehedi Masud. "Explanation-driven HCI Model to Examine the Mini-Mental State for Alzheimer’s Disease". ACM Transactions on Multimedia Computing, Communications, and Applications, April 2022. http://dx.doi.org/10.1145/3527174.
Vilone, Giulia and Luca Longo. "A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods". Frontiers in Artificial Intelligence 4 (November 3, 2021). http://dx.doi.org/10.3389/frai.2021.717899.
Alabi, Rasheed Omobolaji, Mohammed Elmusrati, Ilmo Leivo, Alhadi Almangush and Antti A. Mäkitie. "Machine learning explainability in nasopharyngeal cancer survival using LIME and SHAP". Scientific Reports 13, no. 1 (June 2, 2023). http://dx.doi.org/10.1038/s41598-023-35795-0.
Bogdanova, Anna, Akira Imakura and Tetsuya Sakurai. "DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning". Human-Centric Intelligent Systems, July 6, 2023. http://dx.doi.org/10.1007/s44230-023-00032-4.
Zini, Julia El and Mariette Awad. "On the Explainability of Natural Language Processing Deep Models". ACM Computing Surveys, July 19, 2022. http://dx.doi.org/10.1145/3529755.
Esam Noori, Worood and A. S. Albahri. "Towards Trustworthy Myopia Detection: Integration Methodology of Deep Learning Approach, XAI Visualization, and User Interface System". Applied Data Science and Analysis, February 23, 2023, 1–15. http://dx.doi.org/10.58496/adsa/2023/001.
Filho, Renato Miranda, Anísio M. Lacerda and Gisele L. Pappa. "Explainable regression via prototypes". ACM Transactions on Evolutionary Learning and Optimization, December 15, 2022. http://dx.doi.org/10.1145/3576903.
Ahmed, Zia U., Kang Sun, Michael Shelly and Lina Mu. "Explainable artificial intelligence (XAI) for exploring spatial variability of lung and bronchus cancer (LBC) mortality rates in the contiguous USA". Scientific Reports 11, no. 1 (December 2021). http://dx.doi.org/10.1038/s41598-021-03198-8.
Chen, Tao, Meng Song, Hongxun Hui and Huan Long. "Battery Electrode Mass Loading Prognostics and Analysis for Lithium-Ion Battery–Based Energy Storage Systems". Frontiers in Energy Research 9 (October 5, 2021). http://dx.doi.org/10.3389/fenrg.2021.754317.
Texto completoChen, Tao, Meng Song, Hongxun Hui y Huan Long. "Battery Electrode Mass Loading Prognostics and Analysis for Lithium-Ion Battery–Based Energy Storage Systems". Frontiers in Energy Research 9 (5 de octubre de 2021). http://dx.doi.org/10.3389/fenrg.2021.754317.
Javed, Abdul Rehman, Habib Ullah Khan, Mohammad Kamel Bader Alomari, Muhammad Usman Sarwar, Muhammad Asim, Ahmad S. Almadhor and Muhammad Zahid Khan. "Toward explainable AI-empowered cognitive health assessment". Frontiers in Public Health 11 (March 9, 2023). http://dx.doi.org/10.3389/fpubh.2023.1024195.
Mustafa, Ahmad, Klaas Koster and Ghassan AlRegib. "Explainable Machine Learning for Hydrocarbon Risk Assessment". GEOPHYSICS, July 13, 2023, 1–52. http://dx.doi.org/10.1190/geo2022-0594.1.
Yang, Darrion Bo-Yun, Alexander Smith, Emily J. Smith, Anant Naik, Mika Janbahan, Charee M. Thompson, Lav R. Varshney and Wael Hassaneen. "The State of Machine Learning in Outcomes Prediction of Transsphenoidal Surgery: A Systematic Review". Journal of Neurological Surgery Part B: Skull Base, September 12, 2022. http://dx.doi.org/10.1055/a-1941-3618.