Academic literature on the topic 'Post-hoc explainability'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Post-hoc explainability.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of an academic publication as a PDF and read its abstract online whenever they are available in the metadata.
Journal articles on the topic "Post-hoc explainability"
Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (December 5, 2021): 3137. http://dx.doi.org/10.3390/math9233137.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "Explain It to Me – Facing Remote Sensing Challenges in the Bio- and Geosciences with Explainable Machine Learning." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Gadzinski, Gregory, and Alessio Castello. "Combining White Box Models, Black Box Machines and Human Interventions for Interpretable Decision Strategies." Judgment and Decision Making 17, no. 3 (May 2022): 598–627. http://dx.doi.org/10.1017/s1930297500003594.
Shen, Yifan, Li Liu, Zhihao Tang, Zongyi Chen, Guixiang Ma, Jiyan Dong, Xi Zhang, Lin Yang, and Qingfeng Zheng. "Explainable Survival Analysis with Convolution-Involved Vision Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2207–15. http://dx.doi.org/10.1609/aaai.v36i2.20118.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (February 29, 2020): 137. http://dx.doi.org/10.3390/info11030137.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.
Mikołajczyk, Agnieszka, Michał Grochowski, and Arkadiusz Kwasigroch. "Towards Explainable Classifiers Using the Counterfactual Approach – Global Explanations for Discovering Bias in Data." Journal of Artificial Intelligence and Soft Computing Research 11, no. 1 (January 1, 2021): 51–67. http://dx.doi.org/10.2478/jaiscr-2021-0004.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Knapič, Samanta, Avleen Malhi, Rohit Saluja, and Kary Främling. "Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain." Machine Learning and Knowledge Extraction 3, no. 3 (September 19, 2021): 740–70. http://dx.doi.org/10.3390/make3030037.
Full textDissertations / Theses on the topic "Post-hoc explainabil"
Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible to their end-users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, little attention has been paid to explaining how models change their behaviour in contrast with earlier versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions with respect to past outcomes. In several practical situations, human decision-makers deal with more than one machine learning model, so it is increasingly important to understand how two models work beyond their predictive performance: their behaviour, their differences, and their likeness. To date, interpretable models are synthesised to explain black boxes and their predictions; they can also be useful for formally representing and measuring the differences in a retrained model's behaviour when dealing with new and different data. Capturing and understanding such differences is crucial, as trust is key in any application that supports human-Artificial Intelligence (AI) decision-making. This is the idea behind ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and then (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time. We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and in a real-life application, showing through a user study that it is effective at catching the classes that changed most and at explaining their variation. The approach has been implemented and is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
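To make the abstract's core idea concrete, here is a minimal sketch, not ContrXT's actual API, of contrasting two versions of a classifier's decision logic over a binarized feature space. It finds the inputs on which the models disagree (the symmetric difference of their decision regions); ContrXT encodes these regions compactly as Binary Decision Diagrams rather than enumerating them. All model and feature names below are invented for illustration.

```python
from itertools import product

# Hypothetical stand-ins for two versions of a classifier's decision
# logic over binarized features (e.g., "document contains word w").
FEATURES = ["has_refund", "has_urgent", "has_link"]

def model_v1(x):
    # Original model: flags messages that are urgent and contain a link.
    return x["has_urgent"] and x["has_link"]

def model_v2(x):
    # Retrained model: additionally flags refund requests.
    return (x["has_urgent"] and x["has_link"]) or x["has_refund"]

def contrast(m_old, m_new, features):
    """Enumerate the binarized input space and collect the inputs on
    which the two models disagree, i.e. the symmetric difference of
    their decision regions."""
    changed = []
    for bits in product([False, True], repeat=len(features)):
        x = dict(zip(features, bits))
        if m_old(x) != m_new(x):
            changed.append((x, m_old(x), m_new(x)))
    return changed

diffs = contrast(model_v1, model_v2, FEATURES)
total = 2 ** len(FEATURES)
print(f"{len(diffs)}/{total} binarized inputs changed class between versions")
for x, old, new in diffs:
    active = [f for f, v in x.items() if v] or ["<none>"]
    print(f"  {', '.join(active)}: {old} -> {new}")
```

Exhaustive enumeration is exponential in the number of features, which is precisely why the thesis relies on Binary Decision Diagrams: they represent and compare the two Boolean decision functions symbolically, so the changed region can be computed and summarised without visiting every input.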
Book chapters on the topic "Post-hoc explainability"
Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 167–216. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.
Deshpande, Saurabh, Rahee Walambe, Ketan Kotecha, and Marina Marjanović Jakovljević. "Post-hoc Explainable Reinforcement Learning Using Probabilistic Graphical Models." In Communications in Computer and Information Science, 362–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95502-1_28.
Cánovas-Segura, Bernardo, Antonio Morales, Antonio López Martínez-Carrasco, Manuel Campos, Jose M. Juarez, Lucía López Rodríguez, and Francisco Palacios. "Exploring Antimicrobial Resistance Prediction Using Post-hoc Interpretable Methods." In Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems, 93–107. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37446-4_8.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Full textConference papers on the topic "Post-hoc explainabil"
Xu, Kerui, Jun Xu, Sheng Gao, Si Li, Jun Guo, and Ji-Rong Wen. "A Tag-Based Post-Hoc Framework for Explainable Conversational Recommendation." In ICTIR '22: The 2022 ACM SIGIR International Conference on the Theory of Information Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3539813.3545120.
Čyras, Kristijonas, Antonio Rago, Emanuele Albini, Pietro Baroni, and Francesca Toni. "Argumentative XAI: A Survey." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/600.
Sattarzadeh, Sam, Mahesh Sudhakar, and Konstantinos N. Plataniotis. "SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00462.