Scientific literature on the topic "Post-hoc explainability"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles
Consult the topical lists of journal articles, books, theses, conference papers, and other academic sources on the topic "Post-hoc explainability."
Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Post-hoc explainability"
Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "EXPLAIN IT TO ME – FACING REMOTE SENSING CHALLENGES IN THE BIO- AND GEOSCIENCES WITH EXPLAINABLE MACHINE LEARNING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Gadzinski, Gregory, and Alessio Castello. "Combining White Box Models, Black Box Machines and Human Interventions for Interpretable Decision Strategies." Judgment and Decision Making 17, no. 3 (May 2022): 598–627. http://dx.doi.org/10.1017/s1930297500003594.
Shen, Yifan, Li Liu, Zhihao Tang, Zongyi Chen, Guixiang Ma, Jiyan Dong, Xi Zhang, Lin Yang, and Qingfeng Zheng. "Explainable Survival Analysis with Convolution-Involved Vision Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2207–15. http://dx.doi.org/10.1609/aaai.v36i2.20118.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.
Mikołajczyk, Agnieszka, Michał Grochowski, and Arkadiusz Kwasigroch. "Towards Explainable Classifiers Using the Counterfactual Approach – Global Explanations for Discovering Bias in Data." Journal of Artificial Intelligence and Soft Computing Research 11, no. 1 (January 1, 2021): 51–67. http://dx.doi.org/10.2478/jaiscr-2021-0004.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Knapič, Samanta, Avleen Malhi, Rohit Saluja, and Kary Främling. "Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain." Machine Learning and Knowledge Extraction 3, no. 3 (September 19, 2021): 740–70. http://dx.doi.org/10.3390/make3030037.
Texte intégralThèses sur le sujet "Post-hoc explainabil"
Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible to their end-users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that aim to approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their behaviour in contrast with other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions with respect to past outcomes.

In many practical situations, human decision-makers deal with more than one machine learning model, so it is increasingly important to understand how two models work beyond their predictive performance: their behaviour, their differences, and their similarities. To date, interpretable models are synthesised to explain black boxes and their predictions; they can also be valuable for formally representing and measuring the differences in a retrained model's behaviour when dealing with new and different data. Capturing and understanding such differences is crucial, as trust is key in any application supporting human-Artificial Intelligence (AI) decision-making.

This is the idea behind ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and then (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time. We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and on a real-life application, showing through a user study that it is effective at identifying the classes whose treatment changed most and at explaining their variation. The approach has been implemented and is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
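The abstract describes the model-contrastive idea only at a high level. The sketch below is a minimal illustration of that idea, not the ContrXT implementation: the helper surrogate_rules, the names model_t0/model_t1, and the use of shallow surrogate decision trees in place of ContrXT's Binary Decision Diagrams are all assumptions introduced here for illustration.

    # Illustrative sketch only (not the ContrXT API): approximate two versions
    # of a black-box model with shallow surrogate trees, then diff the decision
    # criteria each version relies on.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    def surrogate_rules(black_box, X, feature_names, max_depth=3):
        """Fit a shallow surrogate tree on the black box's predictions and
        return the set of features the surrogate actually splits on."""
        surrogate = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        surrogate.fit(X, black_box.predict(X))
        used = set(surrogate.tree_.feature[surrogate.tree_.feature >= 0])
        return {feature_names[i] for i in used}

    # Two "versions" of the same classifier: before and after retraining on
    # shifted data (the shift is simulated by perturbing one feature).
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    names = [f"f{i}" for i in range(X.shape[1])]

    model_t0 = RandomForestClassifier(random_state=0).fit(X, y)
    X_shifted = X.copy()
    X_shifted[:, 3] += 2.0 * y  # feature f3 becomes highly predictive
    model_t1 = RandomForestClassifier(random_state=0).fit(X_shifted, y)

    rules_t0 = surrogate_rules(model_t0, X, names)
    rules_t1 = surrogate_rules(model_t1, X_shifted, names)

    # A crude "M-contrast" summary: which decision criteria appeared or vanished.
    print("criteria added:  ", sorted(rules_t1 - rules_t0))
    print("criteria dropped:", sorted(rules_t0 - rules_t1))
    print("criteria shared: ", sorted(rules_t0 & rules_t1))

The actual approach described in the thesis encodes per-class decision logic as Binary Decision Diagrams and verbalises the diff in natural language; the feature-set comparison above only mimics that contrast at the coarsest level.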
Book chapters on the topic "Post-hoc explainability"
Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 167–216. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.
Deshpande, Saurabh, Rahee Walambe, Ketan Kotecha, and Marina Marjanović Jakovljević. "Post-hoc Explainable Reinforcement Learning Using Probabilistic Graphical Models." In Communications in Computer and Information Science, 362–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95502-1_28.
Cánovas-Segura, Bernardo, Antonio Morales, Antonio López Martínez-Carrasco, Manuel Campos, Jose M. Juarez, Lucía López Rodríguez, and Francisco Palacios. "Exploring Antimicrobial Resistance Prediction Using Post-hoc Interpretable Methods." In Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems, 93–107. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37446-4_8.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Texte intégralActes de conférences sur le sujet "Post-hoc explainabil"
Xu, Kerui, Jun Xu, Sheng Gao, Si Li, Jun Guo, and Ji-Rong Wen. "A Tag-Based Post-Hoc Framework for Explainable Conversational Recommendation." In ICTIR '22: The 2022 ACM SIGIR International Conference on the Theory of Information Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3539813.3545120.
Čyras, Kristijonas, Antonio Rago, Emanuele Albini, Pietro Baroni, and Francesca Toni. "Argumentative XAI: A Survey." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/600.
Sattarzadeh, Sam, Mahesh Sudhakar, and Konstantinos N. Plataniotis. "SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00462.