Journal articles on the topic "Model-agnostic Explainability"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 scholarly journal articles on the topic "Model-agnostic Explainability".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever the relevant details are available in the work's metadata.
Browse journal articles from a wide range of disciplines and compile accurate bibliographies.
Diprose, William K., Nicholas Buist, Ning Hua, Quentin Thurier, George Shand, and Reece Robinson. "Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator". Journal of the American Medical Informatics Association 27, no. 4 (February 27, 2020): 592–600. http://dx.doi.org/10.1093/jamia/ocz229.
Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability". Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.
Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example". Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Ullah, Ihsan, Andre Rios, Vaibhav Gala, and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation". Applied Sciences 12, no. 1 (December 23, 2021): 136. http://dx.doi.org/10.3390/app12010136.
Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies". Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.
Lv, Ge, Chen Jason Zhang, and Lei Chen. "HENCE-X: Toward Heterogeneity-Agnostic Multi-Level Explainability for Deep Graph Networks". Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 2990–3003. http://dx.doi.org/10.14778/3611479.3611503.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification". Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.
Hassan, Fayaz, Jianguo Yu, Zafi Sherhan Syed, Nadeem Ahmed, Mana Saleh Al Reshan, and Asadullah Shaikh. "Achieving model explainability for intrusion detection in VANETs with LIME". PeerJ Computer Science 9 (June 22, 2023): e1440. http://dx.doi.org/10.7717/peerj-cs.1440.
Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM". Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.
Nguyen, Hung Viet, and Haewon Byeon. "Prediction of Out-of-Hospital Cardiac Arrest Survival Outcomes Using a Hybrid Agnostic Explanation TabNet Model". Mathematics 11, no. 9 (April 25, 2023): 2030. http://dx.doi.org/10.3390/math11092030.
Szepannek, Gero, and Karsten Lübke. "How much do we see? On the explainability of partial dependence plots for credit risk scoring". Argumenta Oeconomica 2023, no. 2 (2023): 137–50. http://dx.doi.org/10.15611/aoe.2023.1.07.
Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani, and Fabio Vitali. "Metrics, Explainability and the European AI Act Proposal". J 5, no. 1 (February 18, 2022): 126–38. http://dx.doi.org/10.3390/j5010010.
Kaplun, Dmitry, Alexander Krasichkov, Petr Chetyrbok, Nikolay Oleinikov, Anupam Garg, and Husanbir Singh Pannu. "Cancer Cell Profiling Using Image Moments and Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database". Mathematics 9, no. 20 (October 17, 2021): 2616. http://dx.doi.org/10.3390/math9202616.
Ibrahim, Muhammad Amien, Samsul Arifin, I. Gusti Agung Anom Yudistira, Rinda Nariswari, Abdul Azis Abdillah, Nerru Pranuta Murnaka, and Puguh Wahyu Prasetyo. "An Explainable AI Model for Hate Speech Detection on Indonesian Twitter". CommIT (Communication and Information Technology) Journal 16, no. 2 (June 8, 2022): 175–82. http://dx.doi.org/10.21512/commit.v16i2.8343.
Manikis, Georgios C., Georgios S. Ioannidis, Loizos Siakallis, Katerina Nikiforaki, Michael Iv, Diana Vozlic, Katarina Surlan-Popovic, Max Wintermark, Sotirios Bisdas, and Kostas Marias. "Multicenter DSC–MRI-Based Radiomics Predict IDH Mutation in Gliomas". Cancers 13, no. 16 (August 5, 2021): 3965. http://dx.doi.org/10.3390/cancers13163965.
Oubelaid, Adel, Abdelhameed Ibrahim, and Ahmed M. Elshewey. "Bridging the Gap: An Explainable Methodology for Customer Churn Prediction in Supply Chain Management". Journal of Artificial Intelligence and Metaheuristics 4, no. 1 (2023): 16–23. http://dx.doi.org/10.54216/jaim.040102.
Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications". Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.
Ahmed, Md Sabbir, Md Tasin Tazwar, Haseen Khan, Swadhin Roy, Junaed Iqbal, Md Golam Rabiul Alam, Md Rafiul Hassan, and Mohammad Mehedi Hassan. "Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression". Complexity 2022 (September 19, 2022): 1–20. http://dx.doi.org/10.1155/2022/5305353.
Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications". Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.
Mehta, Harshkumar, and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)". Algorithms 15, no. 8 (August 17, 2022): 291. http://dx.doi.org/10.3390/a15080291.
Lu, Haohui, and Shahadat Uddin. "Explainable Stacking-Based Model for Predicting Hospital Readmission for Diabetic Patients". Information 13, no. 9 (September 15, 2022): 436. http://dx.doi.org/10.3390/info13090436.
Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, Waleed Ali, and Shahab Ul Hassan. "B-LIME: An Improvement of LIME for Interpretable Deep Learning Classification of Cardiac Arrhythmia from ECG Signals". Processes 11, no. 2 (February 16, 2023): 595. http://dx.doi.org/10.3390/pr11020595.
Merone, Mario, Alessandro Graziosi, Valerio Lapadula, Lorenzo Petrosino, Onorato d’Angelis, and Luca Vollero. "A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems". Sensors 22, no. 20 (October 14, 2022): 7807. http://dx.doi.org/10.3390/s22207807.
Kim, Jaehun. "Increasing trust in complex machine learning systems". ACM SIGIR Forum 55, no. 1 (June 2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.
Du, Yuhan, Anthony R. Rafferty, Fionnuala M. McAuliffe, John Mehegan, and Catherine Mooney. "Towards an explainable clinical decision support system for large-for-gestational-age births". PLOS ONE 18, no. 2 (February 21, 2023): e0281821. http://dx.doi.org/10.1371/journal.pone.0281821.
Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, Lan Wei, Claudia Mazo, Brett A. Becker, and Catherine Mooney. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review". Applied Sciences 11, no. 11 (May 31, 2021): 5088. http://dx.doi.org/10.3390/app11115088.
Kim, Kipyo, Hyeonsik Yang, Jinyeong Yi, Hyung-Eun Son, Ji-Young Ryu, Yong Chul Kim, Jong Cheol Jeong, et al. "Real-Time Clinical Decision Support Based on Recurrent Neural Networks for In-Hospital Acute Kidney Injury: External Validation and Model Interpretation". Journal of Medical Internet Research 23, no. 4 (April 16, 2021): e24120. http://dx.doi.org/10.2196/24120.
Abir, Wahidul Hasan, Md Fahim Uddin, Faria Rahman Khanam, Tahia Tazin, Mohammad Monirujjaman Khan, Mehedi Masud, and Sultan Aljahdali. "Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method". Computational Intelligence and Neuroscience 2022 (April 27, 2022): 1–14. http://dx.doi.org/10.1155/2022/5140148.
Wikle, Christopher K., Abhirup Datta, Bhava Vyasa Hari, Edward L. Boone, Indranil Sahoo, Indulekha Kavila, Stefano Castruccio, Susan J. Simmons, Wesley S. Burr, and Won Chang. "An illustration of model agnostic explainability methods applied to environmental data". Environmetrics, October 25, 2022. http://dx.doi.org/10.1002/env.2772.
Xu, Zhichao, Hansi Zeng, Juntao Tan, Zuohui Fu, Yongfeng Zhang, and Qingyao Ai. "A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability". ACM Transactions on Information Systems, June 18, 2023. http://dx.doi.org/10.1145/3605357.
Joyce, Dan W., Andrey Kormilitzin, Katharine A. Smith, and Andrea Cipriani. "Explainable artificial intelligence for mental health through transparency and interpretability for understandability". npj Digital Medicine 6, no. 1 (January 18, 2023). http://dx.doi.org/10.1038/s41746-023-00751-9.
Nakashima, Heitor Hoffman, Daielly Mantovani, and Celso Machado Junior. "Users’ trust in black-box machine learning algorithms". Revista de Gestão, October 25, 2022. http://dx.doi.org/10.1108/rege-06-2022-0100.
Szepannek, Gero, and Karsten Lübke. "Explaining Artificial Intelligence with Care". KI - Künstliche Intelligenz, May 16, 2022. http://dx.doi.org/10.1007/s13218-022-00764-8.
Sharma, Jeetesh, Murari Lal Mittal, Gunjan Soni, and Arvind Keprate. "Explainable Artificial Intelligence (XAI) Approaches in Predictive Maintenance: A Review". Recent Patents on Engineering 18 (April 17, 2023). http://dx.doi.org/10.2174/1872212118666230417084231.
Szczepański, Mateusz, Marek Pawlicki, Rafał Kozik, and Michał Choraś. "New explainability method for BERT-based model in fake news detection". Scientific Reports 11, no. 1 (December 2021). http://dx.doi.org/10.1038/s41598-021-03100-6.
Öztoprak, Samet, and Zeynep Orman. "A New Model-Agnostic Method and Implementation for Explaining the Prediction on Finance Data". European Journal of Science and Technology, June 29, 2022. http://dx.doi.org/10.31590/ejosat.1079145.
Bachoc, François, Fabrice Gamboa, Max Halford, Jean-Michel Loubes, and Laurent Risser. "Explaining machine learning models using entropic variable projection". Information and Inference: A Journal of the IMA 12, no. 3 (April 27, 2023). http://dx.doi.org/10.1093/imaiai/iaad010.
Loveleen, Gaur, Bhandari Mohan, Bhadwal Singh Shikhar, Jhanjhi Nz, Mohammad Shorfuzzaman, and Mehedi Masud. "Explanation-driven HCI Model to Examine the Mini-Mental State for Alzheimer’s Disease". ACM Transactions on Multimedia Computing, Communications, and Applications, April 2022. http://dx.doi.org/10.1145/3527174.
Vilone, Giulia, and Luca Longo. "A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods". Frontiers in Artificial Intelligence 4 (November 3, 2021). http://dx.doi.org/10.3389/frai.2021.717899.
Alabi, Rasheed Omobolaji, Mohammed Elmusrati, Ilmo Leivo, Alhadi Almangush, and Antti A. Mäkitie. "Machine learning explainability in nasopharyngeal cancer survival using LIME and SHAP". Scientific Reports 13, no. 1 (June 2, 2023). http://dx.doi.org/10.1038/s41598-023-35795-0.
Bogdanova, Anna, Akira Imakura, and Tetsuya Sakurai. "DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning". Human-Centric Intelligent Systems, July 6, 2023. http://dx.doi.org/10.1007/s44230-023-00032-4.
Zini, Julia El, and Mariette Awad. "On the Explainability of Natural Language Processing Deep Models". ACM Computing Surveys, July 19, 2022. http://dx.doi.org/10.1145/3529755.
Esam Noori, Worood, and A. S. Albahri. "Towards Trustworthy Myopia Detection: Integration Methodology of Deep Learning Approach, XAI Visualization, and User Interface System". Applied Data Science and Analysis, February 23, 2023, 1–15. http://dx.doi.org/10.58496/adsa/2023/001.
Filho, Renato Miranda, Anísio M. Lacerda, and Gisele L. Pappa. "Explainable regression via prototypes". ACM Transactions on Evolutionary Learning and Optimization, December 15, 2022. http://dx.doi.org/10.1145/3576903.
Ahmed, Zia U., Kang Sun, Michael Shelly, and Lina Mu. "Explainable artificial intelligence (XAI) for exploring spatial variability of lung and bronchus cancer (LBC) mortality rates in the contiguous USA". Scientific Reports 11, no. 1 (December 2021). http://dx.doi.org/10.1038/s41598-021-03198-8.
Chen, Tao, Meng Song, Hongxun Hui, and Huan Long. "Battery Electrode Mass Loading Prognostics and Analysis for Lithium-Ion Battery–Based Energy Storage Systems". Frontiers in Energy Research 9 (October 5, 2021). http://dx.doi.org/10.3389/fenrg.2021.754317.
Javed, Abdul Rehman, Habib Ullah Khan, Mohammad Kamel Bader Alomari, Muhammad Usman Sarwar, Muhammad Asim, Ahmad S. Almadhor, and Muhammad Zahid Khan. "Toward explainable AI-empowered cognitive health assessment". Frontiers in Public Health 11 (March 9, 2023). http://dx.doi.org/10.3389/fpubh.2023.1024195.
Mustafa, Ahmad, Klaas Koster, and Ghassan AlRegib. "Explainable Machine Learning for Hydrocarbon Risk Assessment". GEOPHYSICS, July 13, 2023, 1–52. http://dx.doi.org/10.1190/geo2022-0594.1.
Yang, Darrion Bo-Yun, Alexander Smith, Emily J. Smith, Anant Naik, Mika Janbahan, Charee M. Thompson, Lav R. Varshney, and Wael Hassaneen. "The State of Machine Learning in Outcomes Prediction of Transsphenoidal Surgery: A Systematic Review". Journal of Neurological Surgery Part B: Skull Base, September 12, 2022. http://dx.doi.org/10.1055/a-1941-3618.