Journal articles on the topic "Algorithm explainability"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 50 journal articles on the topic "Algorithm explainability".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, if the relevant parameters are available in the metadata.
Browse journal articles from a wide range of disciplines and compile accurate bibliographies.
Nuobu, Gengpan. "Transformer model: Explainability and prospectiveness". Applied and Computational Engineering 20, no. 1 (October 23, 2023): 88–99. http://dx.doi.org/10.54254/2755-2721/20/20231079.
Hwang, Hyunseung, and Steven Euijong Whang. "XClusters: Explainability-First Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7962–70. http://dx.doi.org/10.1609/aaai.v37i7.25963.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI". Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Loreti, Daniela, and Giorgio Visani. "Parallel approaches for a decision tree-based explainability algorithm". Future Generation Computer Systems 158 (September 2024): 308–22. http://dx.doi.org/10.1016/j.future.2024.04.044.
Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.
Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning". Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.
Powell, Alison B. "Explanations as governance? Investigating practices of explanation in algorithmic system design". European Journal of Communication 36, no. 4 (August 2021): 362–75. http://dx.doi.org/10.1177/02673231211028376.
Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization". Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.
Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings". Energies 17, no. 8 (April 9, 2024): 1797. http://dx.doi.org/10.3390/en17081797.
Bulitko, Vadim, Shuwei Wang, Justin Stevens, and Levi H. S. Lelis. "Portability and Explainability of Synthesized Formula-based Heuristics". Proceedings of the International Symposium on Combinatorial Search 15, no. 1 (July 17, 2022): 29–37. http://dx.doi.org/10.1609/socs.v15i1.21749.
Gräßer, Felix, Hagen Malberg, and Sebastian Zaunseder. "Neighborhood Optimization for Therapy Decision Support". Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 1–4. http://dx.doi.org/10.1515/cdbme-2019-0001.
Kottinger, Justin, Shaull Almagor, and Morteza Lahijanian. "Conflict-Based Search for Explainable Multi-Agent Path Finding". Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 692–700. http://dx.doi.org/10.1609/icaps.v32i1.19859.
Monsarrat, Paul, David Bernard, Mathieu Marty, Chiara Cecchin-Albertoni, Emmanuel Doumard, Laure Gez, Julien Aligon, Jean-Noël Vergnes, Louis Casteilla, and Philippe Kemoun. "Systemic Periodontal Risk Score Using an Innovative Machine Learning Strategy: An Observational Study". Journal of Personalized Medicine 12, no. 2 (February 4, 2022): 217. http://dx.doi.org/10.3390/jpm12020217.
Lv, Ge, and Lei Chen. "On Data-Aware Global Explainability of Graph Neural Networks". Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 3447–60. http://dx.doi.org/10.14778/3611479.3611538.
Li, Tong, Jiale Deng, Yanyan Shen, Luyu Qiu, Huang Yongxiang, and Caleb Chen Cao. "Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8640–47. http://dx.doi.org/10.1609/aaai.v37i7.26040.
Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research". Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification". Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.
Huang, Xuanxiang, Yacine Izza, and Joao Marques-Silva. "Solving Explainability Queries with Quantification: The Case of Feature Relevancy". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 3996–4006. http://dx.doi.org/10.1609/aaai.v37i4.25514.
Patel, Sagar, Sangeetha Abdu Jyothi, and Nina Narodytska. "CrystalBox: Future-Based Explanations for Input-Driven Deep RL Systems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14563–71. http://dx.doi.org/10.1609/aaai.v38i13.29372.
Arous, Ines, Ljiljana Dolamic, Jie Yang, Akansha Bhardwaj, Giuseppe Cuccu, and Philippe Cudré-Mauroux. "MARTA: Leveraging Human Rationales for Explainable Text Classification". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5868–76. http://dx.doi.org/10.1609/aaai.v35i7.16734.
Tsiami, Lydia, and Christos Makropoulos. "Cyber—Physical Attack Detection in Water Distribution Systems with Temporal Graph Convolutional Neural Networks". Water 13, no. 9 (April 29, 2021): 1247. http://dx.doi.org/10.3390/w13091247.
Botana, Iñigo López-Riobóo, Carlos Eiras-Franco, and Amparo Alonso-Betanzos. "Regression Tree Based Explanation for Anomaly Detection Algorithm". Proceedings 54, no. 1 (August 18, 2020): 7. http://dx.doi.org/10.3390/proceedings2020054007.
Gao, Jingyue, Xiting Wang, Yasha Wang, and Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.
Lv, Ting, Zhenkuan Pan, Weibo Wei, Guangyu Yang, Jintao Song, Xuqing Wang, Lu Sun, Qian Li, and Xiatao Sun. "Iterative deep neural networks based on proximal gradient descent for image restoration". PLOS ONE 17, no. 11 (November 4, 2022): e0276373. http://dx.doi.org/10.1371/journal.pone.0276373.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck, and Andreas Nürnberger. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models". Applied Sciences 12, no. 4 (February 10, 2022): 1834. http://dx.doi.org/10.3390/app12041834.
Banditwattanawong, Thepparit, and Masawee Masdisornchote. "On Characterization of Norm-Referenced Achievement Grading Schemes toward Explainability and Selectability". Applied Computational Intelligence and Soft Computing 2021 (February 18, 2021): 1–14. http://dx.doi.org/10.1155/2021/8899649.
Rudzite, Liva. "Algorithmic Explainability and the Sufficient-Disclosure Requirement under the European Patent Convention". Juridica International 31 (October 25, 2022): 125–35. http://dx.doi.org/10.12697/ji.2022.31.09.
Lizzi, Francesca, Camilla Scapicchio, Francesco Laruina, Alessandra Retico, and Maria Evelina Fantacci. "Convolutional Neural Networks for Breast Density Classification: Performance and Explanation Insights". Applied Sciences 12, no. 1 (December 24, 2021): 148. http://dx.doi.org/10.3390/app12010148.
Fang, Xue, Lin Li, and Zheng Wei. "Design of Recommendation Algorithm Based on Knowledge Graph". Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012025. http://dx.doi.org/10.1088/1742-6596/2425/1/012025.
Krishna Adithya, Venkatesh, Bryan M. Williams, Silvester Czanner, Srinivasan Kavitha, David S. Friedman, Colin E. Willoughby, Rengaraj Venkatesh, and Gabriela Czanner. "EffUnet-SpaGen: An Efficient and Spatial Generative Approach to Glaucoma Detection". Journal of Imaging 7, no. 6 (May 30, 2021): 92. http://dx.doi.org/10.3390/jimaging7060092.
Adithyaram, N. "Early Detection of Lung Disease Using Deep Learning Algorithms on Image Data". International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (July 31, 2023): 466–69. http://dx.doi.org/10.22214/ijraset.2023.53802.
Satoni Kurniawansyah, Arius. "Explainable Artificial Intelligence Theory in Decision Making Treatment of Arithmia Patients with Using Deep Learning Models". Jurnal Rekayasa Sistem Informasi dan Teknologi 1, no. 1 (August 29, 2022): 26–41. http://dx.doi.org/10.59407/jrsit.v1i1.75.
Shalev, Yuval, and Irad Ben-Gal. "Context Based Predictive Information". Entropy 21, no. 7 (June 29, 2019): 645. http://dx.doi.org/10.3390/e21070645.
Samaras, Agorastos-Dimitrios, Serafeim Moustakidis, Ioannis D. Apostolopoulos, Elpiniki Papageorgiou, and Nikolaos Papandrianos. "Uncovering the Black Box of Coronary Artery Disease Diagnosis: The Significance of Explainability in Predictive Models". Applied Sciences 13, no. 14 (July 12, 2023): 8120. http://dx.doi.org/10.3390/app13148120.
Silva-Aravena, Fabián, Hugo Núñez Delafuente, Jimmy H. Gutiérrez-Bahamondes, and Jenny Morales. "A Hybrid Algorithm of ML and XAI to Prevent Breast Cancer: A Strategy to Support Decision Making". Cancers 15, no. 9 (April 25, 2023): 2443. http://dx.doi.org/10.3390/cancers15092443.
Buiten, Miriam C. "Towards Intelligent Regulation of Artificial Intelligence". European Journal of Risk Regulation 10, no. 1 (March 2019): 41–59. http://dx.doi.org/10.1017/err.2019.8.
Agarwal, Piyush, Melih Tamer, and Hector Budman. "Explainability: Relevance based dynamic deep learning algorithm for fault detection and diagnosis in chemical processes". Computers & Chemical Engineering 154 (November 2021): 107467. http://dx.doi.org/10.1016/j.compchemeng.2021.107467.
Zhao, Yuying, Yu Wang, and Tyler Derr. "Fairness and Explainability: Bridging the Gap towards Fair Model Explanations". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11363–71. http://dx.doi.org/10.1609/aaai.v37i9.26344.
Choi, Insu, and Woo Chang Kim. "Enhancing Exchange-Traded Fund Price Predictions: Insights from Information-Theoretic Networks and Node Embeddings". Entropy 26, no. 1 (January 12, 2024): 70. http://dx.doi.org/10.3390/e26010070.
Blomerus, Nicholas, Jacques Cilliers, Willie Nel, Erik Blasch, and Pieter de Villiers. "Feedback-Assisted Automatic Target and Clutter Discrimination Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications". Remote Sensing 14, no. 23 (December 1, 2022): 6096. http://dx.doi.org/10.3390/rs14236096.
Chin, Marshall H., Nasim Afsar-Manesh, Arlene S. Bierman, Christine Chang, Caleb J. Colón-Rodríguez, Prashila Dullabh, Deborah Guadalupe Duran, et al. "Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care". JAMA Network Open 6, no. 12 (December 15, 2023): e2345050. http://dx.doi.org/10.1001/jamanetworkopen.2023.45050.
Klettke, Meike, Adrian Lutsch, and Uta Störl. "Kurz erklärt: Measuring Data Changes in Data Engineering and their Impact on Explainability and Algorithm Fairness". Datenbank-Spektrum 21, no. 3 (October 27, 2021): 245–49. http://dx.doi.org/10.1007/s13222-021-00392-w.
Chetoui, Mohamed, Moulay A. Akhloufi, Bardia Yousefi, and El Mostafa Bouattane. "Explainable COVID-19 Detection on Chest X-rays Using an End-to-End Deep Convolutional Neural Network Architecture". Big Data and Cognitive Computing 5, no. 4 (December 7, 2021): 73. http://dx.doi.org/10.3390/bdcc5040073.
Schober, Sebastian A., Yosra Bahri, Cecilia Carbonelli, and Robert Wille. "Neural Network Robustness Analysis Using Sensor Simulations for a Graphene-Based Semiconductor Gas Sensor". Chemosensors 10, no. 5 (April 21, 2022): 152. http://dx.doi.org/10.3390/chemosensors10050152.
Höller, Sonja, Thomas Dilger, Teresa Spiess, Christian Ploder, and Reinhard Bernsteiner. "Awareness of Unethical Artificial Intelligence and its Mitigation Measures". European Journal of Interdisciplinary Studies 15, no. 2 (December 22, 2023): 67–89. http://dx.doi.org/10.24818/ejis.2023.17.
Zeng, Wenhuan, and Daniel H. Huson. "Leverage the Explainability of Transformer Models to Improve the DNA 5-Methylcytosine Identification (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23703–4. http://dx.doi.org/10.1609/aaai.v38i21.30533.
Mollaei, Nafiseh, Carlos Fujao, Luis Silva, Joao Rodrigues, Catia Cepeda, and Hugo Gamboa. "Human-Centered Explainable Artificial Intelligence: Automotive Occupational Health Protection Profiles in Prevention Musculoskeletal Symptoms". International Journal of Environmental Research and Public Health 19, no. 15 (August 3, 2022): 9552. http://dx.doi.org/10.3390/ijerph19159552.
Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System". Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.
Trabassi, Dante, Mariano Serrao, Tiwana Varrecchia, Alberto Ranavolo, Gianluca Coppola, Roberto De Icco, Cristina Tassorelli, and Stefano Filippo Castiglia. "Machine Learning Approach to Support the Detection of Parkinson’s Disease in IMU-Based Gait Analysis". Sensors 22, no. 10 (May 12, 2022): 3700. http://dx.doi.org/10.3390/s22103700.
Gutierrez-Rojas, Daniel, Ioannis T. Christou, Daniel Dantas, Arun Narayanan, Pedro H. J. Nardelli, and Yongheng Yang. "Performance evaluation of machine learning for fault selection in power transmission lines". Knowledge and Information Systems 64, no. 3 (February 19, 2022): 859–83. http://dx.doi.org/10.1007/s10115-022-01657-w.