Journal articles on the topic "Post-hoc explainability"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.
Consult the top 50 journal articles for your research on the topic "Post-hoc explainability."
Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.
Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.
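For illustration, the first entry in the list below, rendered in APA style (7th edition), would read approximately as follows:
de-la-Rica-Escudero, A., Garrido-Merchán, E. C., & Coronado-Vaca, M. (2025). Explainable post hoc portfolio management financial policy of a deep reinforcement learning agent. PLOS ONE, 20(1), Article e0315528. https://doi.org/10.1371/journal.pone.0315528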
de-la-Rica-Escudero, Alejandra, Eduardo C. Garrido-Merchán, and María Coronado-Vaca. "Explainable post hoc portfolio management financial policy of a Deep Reinforcement Learning agent." PLOS ONE 20, no. 1 (2025): e0315528. https://doi.org/10.1371/journal.pone.0315528.
Viswan, Vimbi, Shaffi Noushath, and Mahmud Mufti. "Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection." Brain Informatics 11 (April 5, 2024): A10. https://doi.org/10.1186/s40708-024-00222-1.
Alvanpour, Aneseh, Cagla Acun, Kyle Spurlock, et al. "Comparative Analysis of Post Hoc Explainable Methods for Robotic Grasp Failure Prediction." Electronics 14, no. 9 (2025): 1868. https://doi.org/10.3390/electronics14091868.
Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.
Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.
Metsch, Jacqueline Michelle, and Anne-Christin Hauschild. "BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data." Computers in Biology and Medicine 191 (June 2025): 110124. https://doi.org/10.1016/j.compbiomed.2025.110124.
Arjunan, Gopalakrishnan. "Implementing Explainable AI in Healthcare: Techniques for Interpretable Machine Learning Models in Clinical Decision-Making." International Journal of Scientific Research and Management (IJSRM) 9, no. 05 (2021): 597–603. http://dx.doi.org/10.18535/ijsrm/v9i05.ec03.
Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.
Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.
Yang, Huijin, Seon Ha Baek, and Sejoong Kim. "Explainable Prediction of Overcorrection in Severe Hyponatremia: A Post Hoc Analysis of the SALSA Trial." Journal of the American Society of Nephrology 32, no. 10S (2021): 377. http://dx.doi.org/10.1681/asn.20213210s1377b.
Zhang, Xiaopu, Wubing Miao, and Guodong Liu. "Explainable Data Mining Framework of Identifying Root Causes of Rocket Engine Anomalies Based on Knowledge and Physics-Informed Feature Selection." Machines 13, no. 8 (2025): 640. https://doi.org/10.3390/machines13080640.
Yan, Fei, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao. "An Explainable Brain Tumor Detection Framework for MRI Analysis." Applied Sciences 13, no. 6 (2023): 3438. http://dx.doi.org/10.3390/app13063438.
Ayaz, Hamail, Esra Sümer-Arpak, Esin Ozturk-Isik, et al. "Post-hoc eXplainable AI methods for analyzing medical images of gliomas — A review for clinical applications." Computers in Biology and Medicine 196 (September 2025): 110649. https://doi.org/10.1016/j.compbiomed.2025.110649.
Amarasinghe, Kasun, Kit T. Rodolfa, Sérgio Jesus, et al. "On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (2024): 20921–29. http://dx.doi.org/10.1609/aaai.v38i19.30082.
Boppiniti, Sai Teja. "A Survey on Explainable AI: Techniques and Challenges." International Journal of Innovations in Engineering Research and Technology 7, no. 3 (2020): 57–66. http://dx.doi.org/10.26662/ijiert.v7i3.pp57-66.
Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (2021): 3137. http://dx.doi.org/10.3390/math9233137.
Yu, Jinqiang, Alexey Ignatiev, Peter J. Stuckey, Nina Narodytska, and Joao Marques-Silva. "Eliminating the Impossible, Whatever Remains Must Be True: On Extracting and Applying Background Knowledge in the Context of Formal Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 4123–31. http://dx.doi.org/10.1609/aaai.v37i4.25528.
Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "Explain It to Me – Facing Remote Sensing Challenges in the Bio- and Geosciences with Explainable Machine Learning." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Boggess, Kayla. "Explanations for Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29245–46. https://doi.org/10.1609/aaai.v39i28.35200.
Nogueira, Caio, Luís Fernandes, João N. D. Fernandes, and Jaime S. Cardoso. "Explaining Bounding Boxes in Deep Object Detectors Using Post Hoc Methods for Autonomous Driving Systems." Sensors 24, no. 2 (2024): 516. http://dx.doi.org/10.3390/s24020516.
O'Loughlin, Ryan J., Dan Li, Richard Neale, and Travis A. O'Brien. "Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling." Geoscientific Model Development 18, no. 3 (2025): 787–802. https://doi.org/10.5194/gmd-18-787-2025.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (2020): 137. http://dx.doi.org/10.3390/info11030137.
Gadzinski, Gregory, and Alessio Castello. "Combining white box models, black box machines and human interventions for interpretable decision strategies." Judgment and Decision Making 17, no. 3 (2022): 598–627. http://dx.doi.org/10.1017/s1930297500003594.
Wyatt, Lucie S., Lennard M. van Karnenbeek, Mark Wijkhuizen, Freija Geldof, and Behdad Dashtbozorg. "Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review." Applied Sciences 14, no. 18 (2024): 8108. http://dx.doi.org/10.3390/app14188108.
Baek, Myunghun, Taeho An, Seungkuk Kuk, and Kyongtae Park. "14‐3: Drop Resistance Optimization Through Post‐Hoc Analysis of Chemically Strengthened Glass." SID Symposium Digest of Technical Papers 54, no. 1 (2023): 174–77. http://dx.doi.org/10.1002/sdtp.16517.
Pillai, Vinayak. "Enhancing the transparency of data and ML models using explainable AI (XAI)." World Journal of Advanced Engineering Technology and Sciences 13, no. 1 (2024): 397–406. http://dx.doi.org/10.30574/wjaets.2024.13.1.0428.
Kadioglu, Serdar, Elton Yechao Zhu, Gili Rosenberg, et al. "BoolXAI: Explainable AI Using Expressive Boolean Formulas." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 28900–28906. https://doi.org/10.1609/aaai.v39i28.35157.
Shen, Yifan, Li Liu, Zhihao Tang, et al. "Explainable Survival Analysis with Convolution-Involved Vision Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2207–15. http://dx.doi.org/10.1609/aaai.v36i2.20118.
Gunasekara, Sachini, and Mirka Saarela. "Explainable AI in Education: Techniques and Qualitative Assessment." Applied Sciences 15, no. 3 (2025): 1239. https://doi.org/10.3390/app15031239.
Grozdanovski, Ljupcho. "The Explanations One Needs for the Explanations One Gives—The Necessity of Explainable AI (XAI) for Causal Explanations of AI-Related Harm: Deconstructing the ‘Refuge of Ignorance’ in the EU’s AI Liability Regulation." International Journal of Law, Ethics, and Technology 2024, no. 2 (2024): 155–262. http://dx.doi.org/10.55574/tqcg5204.
Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (2024): 1797. http://dx.doi.org/10.3390/en17081797.
Bourgais, Mathieu, Franco Giustozzi, Laurent Vercouter, and Cecilia Zanni-Merk. "Towards the use of post-hoc explainable methods to define and detect semantic situations of importance in medical data." Procedia Computer Science 225 (2023): 2312–21. http://dx.doi.org/10.1016/j.procs.2023.10.222.
Zdravkovic, Milan. "On the global feature importance for interpretable and trustworthy heat demand forecasting." Thermal Science, no. 00 (2025): 48. https://doi.org/10.2298/tsci241223048z.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, et al. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (2022): 7375. http://dx.doi.org/10.3390/su14127375.
Mikołajczyk, Agnieszka, Michał Grochowski, and Arkadiusz Kwasigroch. "Towards Explainable Classifiers Using the Counterfactual Approach - Global Explanations for Discovering Bias in Data." Journal of Artificial Intelligence and Soft Computing Research 11, no. 1 (2021): 51–67. http://dx.doi.org/10.2478/jaiscr-2021-0004.
Aryal, Saugat. "Semi-factual Explanations in AI." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23379–80. http://dx.doi.org/10.1609/aaai.v38i21.30390.
Aramide, Oluwatosin Oladayo. "Explainable AI (XAI) for Network Operations and Troubleshooting." International Journal for Research Publication and Seminar 16, no. 1 (2025): 533–54. https://doi.org/10.36676/jrps.v16.i1.286.
Pendyala, Vishnu S., Neha Bais Thakur, and Radhika Agarwal. "Explainable Use of Foundation Models for Job Hiring." Electronics 14, no. 14 (2025): 2787. https://doi.org/10.3390/electronics14142787.
Gopalan, Ranjith, Dileesh Onniyil, Ganesh Viswanathan, and Gaurav Samdani. "Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1388–402. https://doi.org/10.30574/wjaets.2025.15.2.0635.
한, 애라 [Han, Aera]. "민사소송에서의 AI 알고리즘 심사 [Examination of AI Algorithms in Civil Litigation]." Korea Association of the Law of Civil Procedure 27, no. 1 (2023): 185–233. http://dx.doi.org/10.30639/cp.2023.2.27.1.185.
Knapič, Samanta, Avleen Malhi, Rohit Saluja, and Kary Främling. "Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 740–70. http://dx.doi.org/10.3390/make3030037.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Li, Lu, Jiale Liu, Xingyu Ji, Maojun Wang, and Zeyu Zhang. "Self-Explainable Graph Transformer for Link Sign Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12084–92. https://doi.org/10.1609/aaai.v39i11.33316.
Madhukar E. "Multi-Level Feature Selection and Transfer Learning Framework for Scalable and Explainable Machine Learning Systems in Real-Time Applications." Journal of Information Systems Engineering and Management 10, no. 46s (2025): 1091–101. https://doi.org/10.52783/jisem.v10i46s.9242.
Methuku, Vijayalaxmi, Sharath Chandra Kondaparthy, and Direesh Reddy Aunugu. "Explainability and Transparency in Artificial Intelligence: Ethical Imperatives and Practical Challenges." International Journal of Electrical, Electronics and Computers 8, no. 3 (2023): 7–12. https://doi.org/10.22161/eec.84.2.
Gaurav, Kashyap. "Explainable AI (XAI): Methods and Techniques to Make Deep Learning Models More Interpretable and Their Real-World Implications." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 11, no. 4 (2023): 1–7. https://doi.org/10.5281/zenodo.14382747.
Abdelaal, Yasmin, Michaël Aupetit, Abdelkader Baggag, and Dena Al-Thani. "Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review." Journal of Medical Internet Research 26 (December 24, 2024): e53863. https://doi.org/10.2196/53863.
Jeon, Minseok, Jihyeok Park, and Hakjoo Oh. "PL4XGL: A Programming Language Approach to Explainable Graph Learning." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 2148–73. http://dx.doi.org/10.1145/3656464.