Journal articles on the topic "Algorithm explainability"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for research on the topic "Algorithm explainability."
Next to every work in the list of references, an "Add to bibliography" option is available. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, whenever the relevant parameters are available in the metadata.
Browse journal articles from a wide range of disciplines and compile your bibliography correctly.
Nuobu, Gengpan. "Transformer model: Explainability and prospectiveness." Applied and Computational Engineering 20, no. 1 (October 23, 2023): 88–99. http://dx.doi.org/10.54254/2755-2721/20/20231079.
Hwang, Hyunseung, and Steven Euijong Whang. "XClusters: Explainability-First Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7962–70. http://dx.doi.org/10.1609/aaai.v37i7.25963.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Loreti, Daniela, and Giorgio Visani. "Parallel approaches for a decision tree-based explainability algorithm." Future Generation Computer Systems 158 (September 2024): 308–22. http://dx.doi.org/10.1016/j.future.2024.04.044.
Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.
Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning." Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.
Powell, Alison B. "Explanations as governance? Investigating practices of explanation in algorithmic system design." European Journal of Communication 36, no. 4 (August 2021): 362–75. http://dx.doi.org/10.1177/02673231211028376.
Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization." Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.
Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (April 9, 2024): 1797. http://dx.doi.org/10.3390/en17081797.
Bulitko, Vadim, Shuwei Wang, Justin Stevens, and Levi H. S. Lelis. "Portability and Explainability of Synthesized Formula-based Heuristics." Proceedings of the International Symposium on Combinatorial Search 15, no. 1 (July 17, 2022): 29–37. http://dx.doi.org/10.1609/socs.v15i1.21749.
Gräßer, Felix, Hagen Malberg, and Sebastian Zaunseder. "Neighborhood Optimization for Therapy Decision Support." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 1–4. http://dx.doi.org/10.1515/cdbme-2019-0001.
Kottinger, Justin, Shaull Almagor, and Morteza Lahijanian. "Conflict-Based Search for Explainable Multi-Agent Path Finding." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 692–700. http://dx.doi.org/10.1609/icaps.v32i1.19859.
Monsarrat, Paul, David Bernard, Mathieu Marty, Chiara Cecchin-Albertoni, Emmanuel Doumard, Laure Gez, Julien Aligon, Jean-Noël Vergnes, Louis Casteilla, and Philippe Kemoun. "Systemic Periodontal Risk Score Using an Innovative Machine Learning Strategy: An Observational Study." Journal of Personalized Medicine 12, no. 2 (February 4, 2022): 217. http://dx.doi.org/10.3390/jpm12020217.
Lv, Ge, and Lei Chen. "On Data-Aware Global Explainability of Graph Neural Networks." Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 3447–60. http://dx.doi.org/10.14778/3611479.3611538.
Li, Tong, Jiale Deng, Yanyan Shen, Luyu Qiu, Huang Yongxiang, and Caleb Chen Cao. "Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8640–47. http://dx.doi.org/10.1609/aaai.v37i7.26040.
Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research." Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.
Huang, Xuanxiang, Yacine Izza, and Joao Marques-Silva. "Solving Explainability Queries with Quantification: The Case of Feature Relevancy." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 3996–4006. http://dx.doi.org/10.1609/aaai.v37i4.25514.
Patel, Sagar, Sangeetha Abdu Jyothi, and Nina Narodytska. "CrystalBox: Future-Based Explanations for Input-Driven Deep RL Systems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14563–71. http://dx.doi.org/10.1609/aaai.v38i13.29372.
Arous, Ines, Ljiljana Dolamic, Jie Yang, Akansha Bhardwaj, Giuseppe Cuccu, and Philippe Cudré-Mauroux. "MARTA: Leveraging Human Rationales for Explainable Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5868–76. http://dx.doi.org/10.1609/aaai.v35i7.16734.
Tsiami, Lydia, and Christos Makropoulos. "Cyber—Physical Attack Detection in Water Distribution Systems with Temporal Graph Convolutional Neural Networks." Water 13, no. 9 (April 29, 2021): 1247. http://dx.doi.org/10.3390/w13091247.
Botana, Iñigo López-Riobóo, Carlos Eiras-Franco, and Amparo Alonso-Betanzos. "Regression Tree Based Explanation for Anomaly Detection Algorithm." Proceedings 54, no. 1 (August 18, 2020): 7. http://dx.doi.org/10.3390/proceedings2020054007.
Gao, Jingyue, Xiting Wang, Yasha Wang, and Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.
Lv, Ting, Zhenkuan Pan, Weibo Wei, Guangyu Yang, Jintao Song, Xuqing Wang, Lu Sun, Qian Li, and Xiatao Sun. "Iterative deep neural networks based on proximal gradient descent for image restoration." PLOS ONE 17, no. 11 (November 4, 2022): e0276373. http://dx.doi.org/10.1371/journal.pone.0276373.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck, and Andreas Nürnberger. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (February 10, 2022): 1834. http://dx.doi.org/10.3390/app12041834.
Banditwattanawong, Thepparit, and Masawee Masdisornchote. "On Characterization of Norm-Referenced Achievement Grading Schemes toward Explainability and Selectability." Applied Computational Intelligence and Soft Computing 2021 (February 18, 2021): 1–14. http://dx.doi.org/10.1155/2021/8899649.
Rudzite, Liva. "Algorithmic Explainability and the Sufficient-Disclosure Requirement under the European Patent Convention." Juridica International 31 (October 25, 2022): 125–35. http://dx.doi.org/10.12697/ji.2022.31.09.
Lizzi, Francesca, Camilla Scapicchio, Francesco Laruina, Alessandra Retico, and Maria Evelina Fantacci. "Convolutional Neural Networks for Breast Density Classification: Performance and Explanation Insights." Applied Sciences 12, no. 1 (December 24, 2021): 148. http://dx.doi.org/10.3390/app12010148.
Fang, Xue, Lin Li, and Zheng Wei. "Design of Recommendation Algorithm Based on Knowledge Graph." Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012025. http://dx.doi.org/10.1088/1742-6596/2425/1/012025.
Krishna Adithya, Venkatesh, Bryan M. Williams, Silvester Czanner, Srinivasan Kavitha, David S. Friedman, Colin E. Willoughby, Rengaraj Venkatesh, and Gabriela Czanner. "EffUnet-SpaGen: An Efficient and Spatial Generative Approach to Glaucoma Detection." Journal of Imaging 7, no. 6 (May 30, 2021): 92. http://dx.doi.org/10.3390/jimaging7060092.
Adithyaram, N. "Early Detection of Lung Disease Using Deep Learning Algorithms on Image Data." International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (July 31, 2023): 466–69. http://dx.doi.org/10.22214/ijraset.2023.53802.
Satoni Kurniawansyah, Arius. "Explainable Artificial Intelligence Theory in Decision Making Treatment of Arithmia Patients with Using Deep Learning Models." Jurnal Rekayasa Sistem Informasi dan Teknologi 1, no. 1 (August 29, 2022): 26–41. http://dx.doi.org/10.59407/jrsit.v1i1.75.
Shalev, Yuval, and Irad Ben-Gal. "Context Based Predictive Information." Entropy 21, no. 7 (June 29, 2019): 645. http://dx.doi.org/10.3390/e21070645.
Samaras, Agorastos-Dimitrios, Serafeim Moustakidis, Ioannis D. Apostolopoulos, Elpiniki Papageorgiou, and Nikolaos Papandrianos. "Uncovering the Black Box of Coronary Artery Disease Diagnosis: The Significance of Explainability in Predictive Models." Applied Sciences 13, no. 14 (July 12, 2023): 8120. http://dx.doi.org/10.3390/app13148120.
Silva-Aravena, Fabián, Hugo Núñez Delafuente, Jimmy H. Gutiérrez-Bahamondes, and Jenny Morales. "A Hybrid Algorithm of ML and XAI to Prevent Breast Cancer: A Strategy to Support Decision Making." Cancers 15, no. 9 (April 25, 2023): 2443. http://dx.doi.org/10.3390/cancers15092443.
Buiten, Miriam C. "Towards Intelligent Regulation of Artificial Intelligence." European Journal of Risk Regulation 10, no. 1 (March 2019): 41–59. http://dx.doi.org/10.1017/err.2019.8.
Agarwal, Piyush, Melih Tamer, and Hector Budman. "Explainability: Relevance based dynamic deep learning algorithm for fault detection and diagnosis in chemical processes." Computers & Chemical Engineering 154 (November 2021): 107467. http://dx.doi.org/10.1016/j.compchemeng.2021.107467.
Zhao, Yuying, Yu Wang, and Tyler Derr. "Fairness and Explainability: Bridging the Gap towards Fair Model Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11363–71. http://dx.doi.org/10.1609/aaai.v37i9.26344.
Choi, Insu, and Woo Chang Kim. "Enhancing Exchange-Traded Fund Price Predictions: Insights from Information-Theoretic Networks and Node Embeddings." Entropy 26, no. 1 (January 12, 2024): 70. http://dx.doi.org/10.3390/e26010070.
Blomerus, Nicholas, Jacques Cilliers, Willie Nel, Erik Blasch, and Pieter de Villiers. "Feedback-Assisted Automatic Target and Clutter Discrimination Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications." Remote Sensing 14, no. 23 (December 1, 2022): 6096. http://dx.doi.org/10.3390/rs14236096.
Chin, Marshall H., Nasim Afsar-Manesh, Arlene S. Bierman, Christine Chang, Caleb J. Colón-Rodríguez, Prashila Dullabh, Deborah Guadalupe Duran, et al. "Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care." JAMA Network Open 6, no. 12 (December 15, 2023): e2345050. http://dx.doi.org/10.1001/jamanetworkopen.2023.45050.
Klettke, Meike, Adrian Lutsch, and Uta Störl. "Kurz erklärt: Measuring Data Changes in Data Engineering and their Impact on Explainability and Algorithm Fairness." Datenbank-Spektrum 21, no. 3 (October 27, 2021): 245–49. http://dx.doi.org/10.1007/s13222-021-00392-w.
Chetoui, Mohamed, Moulay A. Akhloufi, Bardia Yousefi, and El Mostafa Bouattane. "Explainable COVID-19 Detection on Chest X-rays Using an End-to-End Deep Convolutional Neural Network Architecture." Big Data and Cognitive Computing 5, no. 4 (December 7, 2021): 73. http://dx.doi.org/10.3390/bdcc5040073.
Schober, Sebastian A., Yosra Bahri, Cecilia Carbonelli, and Robert Wille. "Neural Network Robustness Analysis Using Sensor Simulations for a Graphene-Based Semiconductor Gas Sensor." Chemosensors 10, no. 5 (April 21, 2022): 152. http://dx.doi.org/10.3390/chemosensors10050152.
Höller, Sonja, Thomas Dilger, Teresa Spiess, Christian Ploder, and Reinhard Bernsteiner. "Awareness of Unethical Artificial Intelligence and its Mitigation Measures." European Journal of Interdisciplinary Studies 15, no. 2 (December 22, 2023): 67–89. http://dx.doi.org/10.24818/ejis.2023.17.
Zeng, Wenhuan, and Daniel H. Huson. "Leverage the Explainability of Transformer Models to Improve the DNA 5-Methylcytosine Identification (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23703–4. http://dx.doi.org/10.1609/aaai.v38i21.30533.
Mollaei, Nafiseh, Carlos Fujao, Luis Silva, Joao Rodrigues, Catia Cepeda, and Hugo Gamboa. "Human-Centered Explainable Artificial Intelligence: Automotive Occupational Health Protection Profiles in Prevention Musculoskeletal Symptoms." International Journal of Environmental Research and Public Health 19, no. 15 (August 3, 2022): 9552. http://dx.doi.org/10.3390/ijerph19159552.
Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.
Trabassi, Dante, Mariano Serrao, Tiwana Varrecchia, Alberto Ranavolo, Gianluca Coppola, Roberto De Icco, Cristina Tassorelli, and Stefano Filippo Castiglia. "Machine Learning Approach to Support the Detection of Parkinson's Disease in IMU-Based Gait Analysis." Sensors 22, no. 10 (May 12, 2022): 3700. http://dx.doi.org/10.3390/s22103700.
Gutierrez-Rojas, Daniel, Ioannis T. Christou, Daniel Dantas, Arun Narayanan, Pedro H. J. Nardelli, and Yongheng Yang. "Performance evaluation of machine learning for fault selection in power transmission lines." Knowledge and Information Systems 64, no. 3 (February 19, 2022): 859–83. http://dx.doi.org/10.1007/s10115-022-01657-w.