Journal articles on the topic "Explainability of machine learning models"
Cite a source in APA, MLA, Chicago, Harvard, or any other citation style
Get acquainted with the top 50 journal articles for research on the topic "Explainability of machine learning models".
Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Browse journal articles covering a wide range of disciplines and compile your bibliography correctly.
S, Akshay, and Manu Madhavan. "COMPARISON OF EXPLAINABILITY OF MACHINE LEARNING BASED MALAYALAM TEXT CLASSIFICATION". ICTACT Journal on Soft Computing 15, no. 1 (July 1, 2024): 3386–91. http://dx.doi.org/10.21917/ijsc.2024.0476.
Park, Min Sue, Hwijae Son, Chongseok Hyun and Hyung Ju Hwang. "Explainability of Machine Learning Models for Bankruptcy Prediction". IEEE Access 9 (2021): 124887–99. http://dx.doi.org/10.1109/access.2021.3110270.
Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design". Journal of Industrial Engineering and Applied Science 2, no. 6 (December 1, 2024): 65–70. https://doi.org/10.70393/6a69656173.323337.
Bozorgpanah, Aso, Vicenç Torra and Laya Aliahmadipour. "Privacy and Explainability: The Effects of Data Protection on Shapley Values". Technologies 10, no. 6 (December 1, 2022): 125. http://dx.doi.org/10.3390/technologies10060125.
Zhang, Xueting. "Traffic Flow Prediction Based on Explainable Machine Learning". Highlights in Science, Engineering and Technology 56 (July 14, 2023): 56–64. http://dx.doi.org/10.54097/hset.v56i.9816.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI". Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Kim, Dong-sup, and Seungwoo Shin. "THE ECONOMIC EXPLAINABILITY OF MACHINE LEARNING AND STANDARD ECONOMETRIC MODELS-AN APPLICATION TO THE U.S. MORTGAGE DEFAULT RISK". International Journal of Strategic Property Management 25, no. 5 (July 13, 2021): 396–412. http://dx.doi.org/10.3846/ijspm.2021.15129.
TOPCU, Deniz. "How to explain a machine learning model: HbA1c classification example". Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Rodríguez Mallma, Mirko Jerber, Luis Zuloaga-Rotta, Rubén Borja-Rosales, Josef Renato Rodríguez Mallma, Marcos Vilca-Aguilar, María Salas-Ojeda and David Mauricio. "Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review". Neurology International 16, no. 6 (October 29, 2024): 1285–307. http://dx.doi.org/10.3390/neurolint16060098.
Bhagyashree D Shendkar. "Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity". Panamerican Mathematical Journal 35, no. 1s (November 13, 2024): 264–75. http://dx.doi.org/10.52783/pmj.v35.i1s.2313.
Chen, Yinhe. "Enhancing stability and explainability in reinforcement learning with machine learning". Applied and Computational Engineering 101, no. 1 (November 8, 2024): 25–34. http://dx.doi.org/10.54254/2755-2721/101/20240943.
Borch, Christian, and Bo Hee Min. "Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading". Big Data & Society 9, no. 2 (July 2022): 205395172211113. http://dx.doi.org/10.1177/20539517221111361.
Kolarik, Michal, Martin Sarnovsky, Jan Paralic and Frantisek Babic. "Explainability of deep learning models in medical video analysis: a survey". PeerJ Computer Science 9 (March 14, 2023): e1253. http://dx.doi.org/10.7717/peerj-cs.1253.
Pezoa, R., L. Salinas and C. Torres. "Explainability of High Energy Physics events classification using SHAP". Journal of Physics: Conference Series 2438, no. 1 (February 1, 2023): 012082. http://dx.doi.org/10.1088/1742-6596/2438/1/012082.
Mukendi, Christian Mulomba, Asser Kasai Itakala and Pierrot Muteba Tibasima. "Beyond Accuracy: Building Trustworthy Extreme Events Predictions Through Explainable Machine Learning". European Journal of Theoretical and Applied Sciences 2, no. 1 (January 1, 2024): 199–218. http://dx.doi.org/10.59324/ejtas.2024.2(1).15.
Wang, Liyang, Yu Cheng, Ningjing Sang and You Yao. "Explainability and Stability of Machine Learning Applications — A Financial Risk Management Perspective". Modern Economics & Management Forum 5, no. 5 (November 6, 2024): 956. http://dx.doi.org/10.32629/memf.v5i5.2902.
Gupta, Gopal, Huaduo Wang, Kinjal Basu, Farahad Shakerin, Parth Padalkar, Elmer Salazar, Sarat Chandra Varanasi and Sopam Dasgupta. "Logic-Based Explainable and Incremental Machine Learning". Proceedings of the AAAI Symposium Series 2, no. 1 (January 22, 2024): 230–32. http://dx.doi.org/10.1609/aaaiss.v2i1.27678.
Collin, Adele, Adrián Ayuso-Muñoz, Paloma Tejera-Nevado, Lucía Prieto-Santamaría, Antonio Verdejo-García, Carmen Díaz-Batanero, Fermín Fernández-Calderón, Natalia Albein-Urios, Óscar M. Lozano and Alejandro Rodríguez-González. "Analyzing Dropout in Alcohol Recovery Programs: A Machine Learning Approach". Journal of Clinical Medicine 13, no. 16 (August 15, 2024): 4825. http://dx.doi.org/10.3390/jcm13164825.
Aas, Kjersti, Arthur Charpentier, Fei Huang and Ronald Richman. "Insurance analytics: prediction, explainability, and fairness". Annals of Actuarial Science 18, no. 3 (November 2024): 535–39. https://doi.org/10.1017/s1748499524000289.
Tocchetti, Andrea, and Marco Brambilla. "The Role of Human Knowledge in Explainable AI". Data 7, no. 7 (July 6, 2022): 93. http://dx.doi.org/10.3390/data7070093.
Keçeli, Tarık, Nevruz İlhanlı and Kemal Hakan Gülkesen. "Prediction of retinopathy through machine learning in diabetes mellitus". Journal of Health Sciences and Medicine 7, no. 4 (July 30, 2024): 467–71. http://dx.doi.org/10.32322/jhsm.1502050.
Burkart, Nadia, and Marco F. Huber. "A Survey on the Explainability of Supervised Machine Learning". Journal of Artificial Intelligence Research 70 (January 19, 2021): 245–317. http://dx.doi.org/10.1613/jair.1.12228.
Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models". Human Computer Interaction 8, no. 1 (December 6, 2024): 91. https://doi.org/10.62802/z3pde490.
Nagahisarchoghaei, Mohammad, Nasheen Nur, Logan Cummins, Nashtarin Nur, Mirhossein Mousavi Karimi, Shreya Nandanwar, Siddhartha Bhattacharyya and Shahram Rahimi. "An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories from Technical and Application Perspectives". Electronics 12, no. 5 (February 22, 2023): 1092. http://dx.doi.org/10.3390/electronics12051092.
Przybył, Krzysztof. "Explainable AI: Machine Learning Interpretation in Blackcurrant Powders". Sensors 24, no. 10 (May 17, 2024): 3198. http://dx.doi.org/10.3390/s24103198.
Zubair, Md, Helge Janicke, Ahmad Mohsin, Leandros Maglaras and Iqbal H. Sarker. "Automated Sensor Node Malicious Activity Detection with Explainability Analysis". Sensors 24, no. 12 (June 7, 2024): 3712. http://dx.doi.org/10.3390/s24123712.
Ullah, Ihsan, Andre Rios, Vaibhav Gala and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation". Applied Sciences 12, no. 1 (December 23, 2021): 136. http://dx.doi.org/10.3390/app12010136.
Alsubhi, Bashayer, Basma Alharbi, Nahla Aljojo, Ameen Banjar, Araek Tashkandi, Abdullah Alghoson and Anas Al-Tirawi. "Effective Feature Prediction Models for Student Performance". Engineering, Technology & Applied Science Research 13, no. 5 (October 13, 2023): 11937–44. http://dx.doi.org/10.48084/etasr.6345.
BARAJAS ARANDA, DANIEL ALEJANDRO, MIGUEL ANGEL SICILIA URBAN, MARIA DOLORES TORRES SOTO and AURORA TORRES SOTO. "COMPARISON AND EXPLANABILITY OF MACHINE LEARNING MODELS IN PREDICTIVE SUICIDE ANALYSIS". DYNA NEW TECHNOLOGIES 11, no. 1 (February 28, 2024): [10P.]. http://dx.doi.org/10.6036/nt11028.
Chen, Tianjie, and Md Faisal Kabir. "Explainable machine learning approach for cancer prediction through binarilization of RNA sequencing data". PLOS ONE 19, no. 5 (May 10, 2024): e0302947. http://dx.doi.org/10.1371/journal.pone.0302947.
Van Der Laan, Jake. "Explainability of Artificial Intelligence Models: Technical Foundations and Legal Principles". Vietnamese Journal of Legal Sciences 7, no. 2 (December 1, 2022): 1–38. http://dx.doi.org/10.2478/vjls-2022-0006.
Kong, Weihao, Jianping Chen and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research". Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.
Pathan, Refat Khan, Israt Jahan Shorna, Md Sayem Hossain, Mayeen Uddin Khandaker, Huda I. Almohammed and Zuhal Y. Hamd. "The efficacy of machine learning models in lung cancer risk prediction with explainability". PLOS ONE 19, no. 6 (June 13, 2024): e0305035. http://dx.doi.org/10.1371/journal.pone.0305035.
Satoni Kurniawansyah, Arius. "EXPLAINABLE ARTIFICIAL INTELLIGENCE THEORY IN DECISION MAKING TREATMENT OF ARITHMIA PATIENTS WITH USING DEEP LEARNING MODELS". Jurnal Rekayasa Sistem Informasi dan Teknologi 1, no. 1 (August 29, 2022): 26–41. http://dx.doi.org/10.59407/jrsit.v1i1.75.
Chen, Xingqian, Honghui Fan, Wenhe Chen, Yaoxin Zhang, Dingkun Zhu and Shuangbao Song. "Explaining a Logic Dendritic Neuron Model by Using the Morphology of Decision Trees". Electronics 13, no. 19 (October 3, 2024): 3911. http://dx.doi.org/10.3390/electronics13193911.
Sarder Abdulla Al Shiam, Md Mahdi Hasan, Md Jubair Pantho, Sarmin Akter Shochona, Md Boktiar Nayeem, M Tazwar Hossain Choudhury and Tuan Ngoc Nguyen. "Credit Risk Prediction Using Explainable AI". Journal of Business and Management Studies 6, no. 2 (March 18, 2024): 61–66. http://dx.doi.org/10.32996/jbms.2024.6.2.6.
Hong, Xianbin, Sheng-Uei Guan, Nian Xue, Zhen Li, Ka Lok Man, Prudence W. H. Wong and Dawei Liu. "Dual-Track Lifelong Machine Learning-Based Fine-Grained Product Quality Analysis". Applied Sciences 13, no. 3 (January 17, 2023): 1241. http://dx.doi.org/10.3390/app13031241.
Brito, João, and Hugo Proença. "A Short Survey on Machine Learning Explainability: An Application to Periocular Recognition". Electronics 10, no. 15 (August 3, 2021): 1861. http://dx.doi.org/10.3390/electronics10151861.
Ghadge, Nikhil. "Leveraging Machine Learning to Enhance Information Exploration". Machine Learning and Applications: An International Journal 11, no. 2 (June 28, 2024): 17–27. http://dx.doi.org/10.5121/mlaij.2024.11203.
Vilain, Matthieu, and Stéphane Aris-Brosou. "Machine Learning Algorithms Associate Case Numbers with SARS-CoV-2 Variants Rather Than with Impactful Mutations". Viruses 15, no. 6 (May 24, 2023): 1226. http://dx.doi.org/10.3390/v15061226.
Cao, Xuenan, and Roozbeh Yousefzadeh. "Extrapolation and AI transparency: Why machine learning models should reveal when they make decisions beyond their training". Big Data & Society 10, no. 1 (January 2023): 205395172311697. http://dx.doi.org/10.1177/20539517231169731.
Soliman, Amira, Björn Agvall, Kobra Etminani, Omar Hamed and Markus Lingman. "The Price of Explainability in Machine Learning Models for 100-Day Readmission Prediction in Heart Failure: Retrospective, Comparative, Machine Learning Study". Journal of Medical Internet Research 25 (October 27, 2023): e46934. http://dx.doi.org/10.2196/46934.
Kim, Jaehun. "Increasing trust in complex machine learning systems". ACM SIGIR Forum 55, no. 1 (June 2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.
Adak, Anirban, Biswajeet Pradhan and Nagesh Shukla. "Sentiment Analysis of Customer Reviews of Food Delivery Services Using Deep Learning and Explainable Artificial Intelligence: Systematic Review". Foods 11, no. 10 (May 21, 2022): 1500. http://dx.doi.org/10.3390/foods11101500.
Kolluru, Vinothkumar, Yudhisthir Nuthakki, Sudeep Mungara, Sonika Koganti, Advaitha Naidu Chintakunta and Charan Sundar Telaganeni. "Healthcare Through AI: Integrating Deep Learning, Federated Learning, and XAI for Disease Management". International Journal of Soft Computing and Engineering 13, no. 6 (January 30, 2024): 21–27. http://dx.doi.org/10.35940/ijsce.d3646.13060124.
Matara, Caroline, Simpson Osano, Amir Okeyo Yusuf and Elisha Ochungo Aketch. "Prediction of Vehicle-induced Air Pollution based on Advanced Machine Learning Models". Engineering, Technology & Applied Science Research 14, no. 1 (February 8, 2024): 12837–43. http://dx.doi.org/10.48084/etasr.6678.
Radiuk, Pavlo, Olexander Barmak, Eduard Manziuk and Iurii Krak. "Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices". Mathematics 12, no. 7 (March 29, 2024): 1024. http://dx.doi.org/10.3390/math12071024.
Gao, Jingyue, Xiting Wang, Yasha Wang and Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.
Lohaj, Oliver, Ján Paralič, Peter Bednár, Zuzana Paraličová and Matúš Huba. "Unraveling COVID-19 Dynamics via Machine Learning and XAI: Investigating Variant Influence and Prognostic Classification". Machine Learning and Knowledge Extraction 5, no. 4 (September 25, 2023): 1266–81. http://dx.doi.org/10.3390/make5040064.
Akgüller, Ömer, Mehmet Ali Balcı and Gabriela Cioca. "Functional Brain Network Disruptions in Parkinson's Disease: Insights from Information Theory and Machine Learning". Diagnostics 14, no. 23 (December 4, 2024): 2728. https://doi.org/10.3390/diagnostics14232728.