Journal articles on the topic "Post-hoc Explainability"
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 45 journal articles for your research on the topic "Post-hoc Explainability".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if these are available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification". Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.
Mochaourab, Rami, Arun Venkitaraman, Isak Samsten, Panagiotis Papapetrou, and Cristian R. Rojas. "Post Hoc Explainability for Time Series Classification: Toward a signal processing perspective". IEEE Signal Processing Magazine 39, no. 4 (July 2022): 119–29. http://dx.doi.org/10.1109/msp.2022.3155955.
Lee, Gin Chong, and Chu Kiong Loo. "On the Post Hoc Explainability of Optimized Self-Organizing Reservoir Network for Action Recognition". Sensors 22, no. 5 (March 1, 2022): 1905. http://dx.doi.org/10.3390/s22051905.
Maree, Charl, and Christian Omlin. "Reinforcement Learning Your Way: Agent Characterization through Policy Regularization". AI 3, no. 2 (March 24, 2022): 250–59. http://dx.doi.org/10.3390/ai3020015.
Yan, Fei, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao. "An Explainable Brain Tumor Detection Framework for MRI Analysis". Applied Sciences 13, no. 6 (March 8, 2023): 3438. http://dx.doi.org/10.3390/app13063438.
Maarten Schraagen, Jan, Sabin Kerwien Lopez, Carolin Schneider, Vivien Schneider, Stephanie Tönjes, and Emma Wiechmann. "The Role of Transparency and Explainability in Automated Systems". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 27–31. http://dx.doi.org/10.1177/1071181321651063.
Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies". Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.
Cho, Hyeoncheol, Youngrock Oh, and Eunjoo Jeon. "SEEN: Sharpening Explanations for Graph Neural Networks Using Explanations From Neighborhoods". Advances in Artificial Intelligence and Machine Learning 03, no. 02 (2023): 1165–79. http://dx.doi.org/10.54364/aaiml.2023.1168.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck, and Andreas Nürnberger. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models". Applied Sciences 12, no. 4 (February 10, 2022): 1834. http://dx.doi.org/10.3390/app12041834.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "EXPLAIN IT TO ME – FACING REMOTE SENSING CHALLENGES IN THE BIO- AND GEOSCIENCES WITH EXPLAINABLE MACHINE LEARNING". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Apostolopoulos, Ioannis D., Ifigeneia Athanasoula, Mpesi Tzani, and Peter P. Groumpos. "An Explainable Deep Learning Framework for Detecting and Localising Smoke and Fire Incidents: Evaluation of Grad-CAM++ and LIME". Machine Learning and Knowledge Extraction 4, no. 4 (December 6, 2022): 1124–35. http://dx.doi.org/10.3390/make4040057.
Antoniadi, Anna Markella, Miriam Galvin, Mark Heverin, Lan Wei, Orla Hardiman, and Catherine Mooney. "A Clinical Decision Support System for the Prediction of Quality of Life in ALS". Journal of Personalized Medicine 12, no. 3 (March 10, 2022): 435. http://dx.doi.org/10.3390/jpm12030435.
Moustakidis, Serafeim, Christos Kokkotis, Dimitrios Tsaopoulos, Petros Sfikakis, Sotirios Tsiodras, Vana Sypsa, Theoklis E. Zaoutis, and Dimitrios Paraskevis. "Identifying Country-Level Risk Factors for the Spread of COVID-19 in Europe Using Machine Learning". Viruses 14, no. 3 (March 17, 2022): 625. http://dx.doi.org/10.3390/v14030625.
Sudars, Kaspars, Ivars Namatēvs, and Kaspars Ozols. "Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach". Journal of Imaging 8, no. 2 (January 30, 2022): 30. http://dx.doi.org/10.3390/jimaging8020030.
Hong, Jung-Ho, Woo-Jeoung Nam, Kyu-Sung Jeon, and Seong-Whan Lee. "Towards Better Visualizing the Decision Basis of Networks via Unfold and Conquer Attribution Guidance". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7884–92. http://dx.doi.org/10.1609/aaai.v37i7.25954.
Singh, Rajeev Kumar, Rohan Gorantla, Sai Giridhar Rao Allada, and Pratap Narra. "SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability". PLOS ONE 17, no. 10 (October 31, 2022): e0276836. http://dx.doi.org/10.1371/journal.pone.0276836.
Ntakolia, Charis, Christos Kokkotis, Patrik Karlsson, and Serafeim Moustakidis. "An Explainable Machine Learning Model for Material Backorder Prediction in Inventory Management". Sensors 21, no. 23 (November 27, 2021): 7926. http://dx.doi.org/10.3390/s21237926.
Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM". Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.
Maree, Charl, and Christian W. Omlin. "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?" AI 3, no. 2 (June 13, 2022): 526–37. http://dx.doi.org/10.3390/ai3020030.
J. Thiagarajan, Jayaraman, Vivek Narayanaswamy, Rushil Anirudh, Peer-Timo Bremer, and Andreas Spanias. "Accurate and Robust Feature Importance Estimation under Distribution Shifts". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7891–98. http://dx.doi.org/10.1609/aaai.v35i9.16963.
Rguibi, Zakaria, Abdelmajid Hajami, Dya Zitouni, Amine Elqaraoui, and Anas Bedraoui. "CXAI: Explaining Convolutional Neural Networks for Medical Imaging Diagnostic". Electronics 11, no. 11 (June 2, 2022): 1775. http://dx.doi.org/10.3390/electronics11111775.
Xu, Qian, Wenzhao Xie, Bolin Liao, Chao Hu, Lu Qin, Zhengzijin Yang, Huan Xiong, Yi Lyu, Yue Zhou, and Aijing Luo. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review". Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Bai, Xi, Zhibo Zhou, Yunyun Luo, Hongbo Yang, Huijuan Zhu, Shi Chen, and Hui Pan. "Development and Evaluation of a Machine Learning Prediction Model for Small-for-Gestational-Age Births in Women Exposed to Radiation before Pregnancy". Journal of Personalized Medicine 12, no. 4 (March 31, 2022): 550. http://dx.doi.org/10.3390/jpm12040550.
Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, Lan Wei, Claudia Mazo, Brett A. Becker, and Catherine Mooney. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review". Applied Sciences 11, no. 11 (May 31, 2021): 5088. http://dx.doi.org/10.3390/app11115088.
Arnold, Thomas, Daniel Kasenberg, and Matthias Scheutz. "Explaining in Time". ACM Transactions on Human-Robot Interaction 10, no. 3 (July 2021): 1–23. http://dx.doi.org/10.1145/3457183.
Hamm, Pascal, Michael Klesel, Patricia Coberger, and H. Felix Wittmann. "Explanation matters: An experimental study on explainable AI". Electronic Markets 33, no. 1 (May 10, 2023). http://dx.doi.org/10.1007/s12525-023-00640-9.
Lenatti, Marta, Pedro A. Moreno-Sánchez, Edoardo M. Polo, Maximiliano Mollura, Riccardo Barbieri, and Alessia Paglialonga. "Evaluation of Machine Learning Algorithms and Explainability Techniques to Detect Hearing Loss From a Speech-in-Noise Screening Test". American Journal of Audiology, July 25, 2022, 1–19. http://dx.doi.org/10.1044/2022_aja-21-00194.
Moreno-Sánchez, Pedro A. "Improvement of a prediction model for heart failure survival through explainable artificial intelligence". Frontiers in Cardiovascular Medicine 10 (August 1, 2023). http://dx.doi.org/10.3389/fcvm.2023.1219586.
Vale, Daniel, Ali El-Sharif, and Muhammed Ali. "Explainable artificial intelligence (XAI) post-hoc explainability methods: risks and limitations in non-discrimination law". AI and Ethics, March 15, 2022. http://dx.doi.org/10.1007/s43681-022-00142-y.
Corbin, Adam, and Oge Marques. "Assessing Bias in Skin Lesion Classifiers with Contemporary Deep Learning and Post-Hoc Explainability Techniques". IEEE Access, 2023, 1. http://dx.doi.org/10.1109/access.2023.3289320.
Dervakos, Edmund, Natalia Kotsani, and Giorgos Stamou. "Genre Recognition from Symbolic Music with CNNs: Performance and Explainability". SN Computer Science 4, no. 2 (December 17, 2022). http://dx.doi.org/10.1007/s42979-022-01490-6.
Papagni, Guglielmo, Jesse de Pagter, Setareh Zafari, Michael Filzmoser, and Sabine T. Koeszegi. "Artificial agents’ explainability to support trust: considerations on timing and context". AI & SOCIETY, June 27, 2022. http://dx.doi.org/10.1007/s00146-022-01462-7.
Barredo Arrieta, Alejandro, Sergio Gil-Lopez, Ibai Laña, Miren Nekane Bilbao, and Javier Del Ser. "On the post-hoc explainability of deep echo state networks for time series forecasting, image and video classification". Neural Computing and Applications, August 6, 2021. http://dx.doi.org/10.1007/s00521-021-06359-y.
Sharma, Jeetesh, Murari Lal Mittal, Gunjan Soni, and Arvind Keprate. "Explainable Artificial Intelligence (XAI) Approaches in Predictive Maintenance: A Review". Recent Patents on Engineering 18 (April 17, 2023). http://dx.doi.org/10.2174/1872212118666230417084231.
Vilone, Giulia, and Luca Longo. "A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods". Frontiers in Artificial Intelligence 4 (November 3, 2021). http://dx.doi.org/10.3389/frai.2021.717899.
Zini, Julia El, and Mariette Awad. "On the Explainability of Natural Language Processing Deep Models". ACM Computing Surveys, July 19, 2022. http://dx.doi.org/10.1145/3529755.
Weber, Patrick, K. Valerie Carl, and Oliver Hinz. "Applications of Explainable Artificial Intelligence in Finance—a systematic review of Finance, Information Systems, and Computer Science literature". Management Review Quarterly, February 28, 2023. http://dx.doi.org/10.1007/s11301-023-00320-0.
Karim, Muhammad Monjurul, Yu Li, and Ruwen Qin. "Toward Explainable Artificial Intelligence for Early Anticipation of Traffic Accidents". Transportation Research Record: Journal of the Transportation Research Board, February 18, 2022, 036119812210761. http://dx.doi.org/10.1177/03611981221076121.
Baptista, Delora, João Correia, Bruno Pereira, and Miguel Rocha. "Evaluating molecular representations in machine learning models for drug response prediction and interpretability". Journal of Integrative Bioinformatics, August 26, 2022. http://dx.doi.org/10.1515/jib-2022-0006.
Norton, Adam, Henny Admoni, Jacob Crandall, Tesca Fitzgerald, Alvika Gautam, Michael Goodrich, Amy Saretsky, et al. "Metrics for Robot Proficiency Self-Assessment and Communication of Proficiency in Human-Robot Teams". ACM Transactions on Human-Robot Interaction, April 14, 2022. http://dx.doi.org/10.1145/3522579.
Kucklick, Jan-Peter, and Oliver Müller. "Tackling the Accuracy–Interpretability Trade-off: Interpretable Deep Learning Models for Satellite Image-based Real Estate Appraisal". ACM Transactions on Management Information Systems, October 10, 2022. http://dx.doi.org/10.1145/3567430.
Fleisher, Will. "Understanding, Idealization, and Explainable AI". Episteme, November 3, 2022, 1–27. http://dx.doi.org/10.1017/epi.2022.39.
Chen, Ruoyu, Jingzhi Li, Hua Zhang, Changchong Sheng, Li Liu, and Xiaochun Cao. "Sim2Word: Explaining Similarity with Representative Attribute Words via Counterfactual Explanations". ACM Transactions on Multimedia Computing, Communications, and Applications, September 8, 2022. http://dx.doi.org/10.1145/3563039.
Samarasinghe, Dilini. "Counterfactual learning in enhancing resilience in autonomous agent systems". Frontiers in Artificial Intelligence 6 (July 28, 2023). http://dx.doi.org/10.3389/frai.2023.1212336.
Belle, Vaishak, and Ioannis Papantonis. "Principles and Practice of Explainable Machine Learning". Frontiers in Big Data 4 (July 1, 2021). http://dx.doi.org/10.3389/fdata.2021.688969.