Academic literature on the topic 'XAI Interpretability'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'XAI Interpretability.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "XAI Interpretability"
Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (April 13, 2022): 3926. http://dx.doi.org/10.3390/app12083926.
Zerilli, John. "Explaining Machine Learning Decisions." Philosophy of Science 89, no. 1 (January 2022): 1–19. http://dx.doi.org/10.1017/psa.2021.13.
Veitch, Erik, and Ole Andreas Alsos. "Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles." Journal of Marine Science and Engineering 9, no. 11 (November 6, 2021): 1227. http://dx.doi.org/10.3390/jmse9111227.
Dindorf, Carlo, Wolfgang Teufl, Bertram Taetz, Gabriele Bleser, and Michael Fröhlich. "Interpretability of Input Representations for Gait Classification in Patients after Total Hip Arthroplasty." Sensors 20, no. 16 (August 6, 2020): 4385. http://dx.doi.org/10.3390/s20164385.
Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.
Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.
Luo, Ru, Jin Xing, Lifu Chen, Zhouhao Pan, Xingmin Cai, Zengqi Li, Jielan Wang, and Alistair Ford. "Glassboxing Deep Learning to Enhance Aircraft Detection from SAR Imagery." Remote Sensing 13, no. 18 (September 13, 2021): 3650. http://dx.doi.org/10.3390/rs13183650.
Bogdanova, Alina, and Vitaly Romanov. "Explainable Source Code Authorship Attribution Algorithm." Journal of Physics: Conference Series 2134, no. 1 (December 1, 2021): 012011. http://dx.doi.org/10.1088/1742-6596/2134/1/012011.
Islam, Mir Riyanul, Mobyen Uddin Ahmed, Shaibal Barua, and Shahina Begum. "A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks." Applied Sciences 12, no. 3 (January 27, 2022): 1353. http://dx.doi.org/10.3390/app12031353.
Full textDissertations / Theses on the topic "XAI Interpretability"
SEVESO, ANDREA. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible to their end users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that approximate the decision function of a black-box algorithm. Though several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their behaviour in contrast with other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions with respect to past outcomes. In several practical situations, human decision-makers deal with more than one machine learning model, so the importance of understanding how two machine learning models work beyond their prediction performance is growing: understanding their behaviour, their differences, and their similarities. To date, interpretable models have been synthesised to explain black boxes and their predictions; they can also be used to formally represent and measure the differences in a retrained model's behaviour when dealing with new and different data. Capturing and understanding such differences is crucial, as trust is key in any application that supports human-Artificial Intelligence (AI) decision-making processes. This is the idea behind ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time. We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and a real-life application, showing that it is effective at catching substantially changed classes and at explaining their variation through a user study. The approach has been implemented and is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
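The abstract describes the contrastive-explanation idea only at a high level. Purely as a rough, hypothetical illustration (not the ContrXT package's actual API), the Python sketch below approximates two versions of a black-box classifier with shallow surrogate decision trees and diffs their extracted rules to show where the decision logic changed after retraining. The dataset, model choices, and line-level rule diff are assumptions made for this example; ContrXT itself encodes logic changes as Binary Decision Diagrams and verbalizes them in natural language, which this sketch does not do.

```python
# Hypothetical sketch of a model-contrastive explanation workflow:
# approximate two versions of a black-box model with interpretable surrogates
# and compare their decision rules. Not the ContrXT implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Two training snapshots: the second simulates a data shift via extra label noise.
X_old, y_old = make_classification(n_samples=1000, n_features=5, random_state=0)
X_new, y_new = make_classification(n_samples=1000, n_features=5, flip_y=0.2, random_state=1)
feature_names = [f"f{i}" for i in range(X_old.shape[1])]

def surrogate_rules(black_box, X):
    """Fit a shallow tree on the black box's predictions and return its rule text."""
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    return export_text(surrogate, feature_names=feature_names)

model_t0 = RandomForestClassifier(random_state=0).fit(X_old, y_old)  # before retraining
model_t1 = RandomForestClassifier(random_state=0).fit(X_new, y_new)  # after retraining

rules_t0 = surrogate_rules(model_t0, X_old)
rules_t1 = surrogate_rules(model_t1, X_old)  # same reference data, new model

# A crude contrastive report: rule paths present in one surrogate but not the other.
added = sorted(set(rules_t1.splitlines()) - set(rules_t0.splitlines()))
removed = sorted(set(rules_t0.splitlines()) - set(rules_t1.splitlines()))
print("Decision-logic paths added after retraining:", *added, sep="\n")
print("Decision-logic paths removed after retraining:", *removed, sep="\n")
```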
Matz, Filip, and Yuxiang Luo. "Explaining Automated Decisions in Practice : Insights from the Swedish Credit Scoring Industry." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300897.
In recent years, the number of AI implementations has steadily increased across several industries. These implementations have revealed several challenges with current AI systems, specifically regarding discrimination, opacity, and data security, which has led to an interest in explainable artificial intelligence (XAI). XAI aims to develop AI systems that are fair, transparent, and comprehensible. Several conceptual frameworks have been introduced for XAI, presenting ethical as well as political perspectives and goals. In addition, technical methods have been developed that have made progress toward explainability in research contexts. However, studies that examine implementations of these concepts and techniques in practice are still lacking. This study aims to bridge the gap between the latest theory in the field and practice through a case study of a company in the Swedish credit scoring industry, by proposing a framework for implementing local explanations in practice and by developing three explanation prototypes. The report also evaluates the prototypes with consumers along the following dimensions: trust, system understanding, usefulness, and persuasive power. The proposed framework was validated through the case study and highlighted a number of challenges and trade-offs that arise when XAI systems are developed for use in practice. Furthermore, the evaluation of the prototypes shows that the majority of consumers prefer rule-based explanations, but also indicates that preferences vary between consumers. Recommendations for future research are, first, a longer study in which an XAI model is introduced to and evaluated by the open market, and second, research that combines different XAI methods to generate more personalized explanations for consumers.
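The abstract reports that consumers preferred rule-based explanations of credit decisions. As a minimal, invented illustration of what such a local, rule-based explanation can look like (the features, thresholds, and wording below are assumptions and are not taken from the thesis's prototypes), a short Python sketch:

```python
# Hypothetical rule-based local explanation for a single credit decision.
# All rules and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float          # monthly income
    debt_ratio: float      # debt-to-income ratio
    missed_payments: int   # missed payments in the last 12 months

REJECTION_RULES = [
    # (predicate, reason shown to the consumer when the predicate triggers)
    (lambda a: a.debt_ratio > 0.45, "your debt-to-income ratio exceeds 45%"),
    (lambda a: a.missed_payments >= 2, "you missed two or more payments in the last year"),
    (lambda a: a.income < 1500, "your monthly income is below the minimum threshold"),
]

def explain_decision(applicant: Applicant) -> str:
    """Return a plain-language, rule-based explanation of the decision."""
    reasons = [text for predicate, text in REJECTION_RULES if predicate(applicant)]
    if not reasons:
        return "Application approved: no rejection rule was triggered."
    return "Application rejected because " + " and ".join(reasons) + "."

print(explain_decision(Applicant(income=1200, debt_ratio=0.5, missed_payments=1)))
```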
Book chapters on the topic "XAI Interpretability"
Dinu, Marius-Constantin, Markus Hofmarcher, Vihang P. Patil, Matthias Dorfer, Patrick M. Blies, Johannes Brandstetter, Jose A. Arjona-Medina, and Sepp Hochreiter. "XAI and Strategy Extraction via Reward Redistribution." In xxAI - Beyond Explainable AI, 177–205. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_10.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Virgolin, Marco, Andrea De Lorenzo, Eric Medvet, and Francesca Randone. "Learning a Formula of Interpretability to Learn Interpretable Formulas." In Parallel Problem Solving from Nature – PPSN XVI, 79–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58115-2_6.
Singh, Chandan, Wooseok Ha, and Bin Yu. "Interpreting and Improving Deep-Learning Models with Reality Checks." In xxAI - Beyond Explainable AI, 229–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_12.
Mittelstadt, Brent. "Interpretability and Transparency in Artificial Intelligence." In The Oxford Handbook of Digital Ethics. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780198857815.013.20.
Kavila, Selvani Deepthi, Rajesh Bandaru, Tanishk Venkat Mahesh Babu Gali, and Jana Shafi. "Analysis of Cardiovascular Disease Prediction Using Model-Agnostic Explainable Artificial Intelligence Techniques." In Advances in Medical Technologies and Clinical Practice, 27–54. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3791-9.ch002.
Daglarli, Evren. "Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models for Cyber-Physical Systems." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 42–67. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5101-1.ch003.
Dağlarli, Evren. "Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models." In Advances and Applications in Deep Learning. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.92172.
Full textConference papers on the topic "XAI Interpretability"
Alibekov, M. R. "Diagnosis of Plant Biotic Stress by Methods of Explainable Artificial Intelligence." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-728-739.
Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "Explainable AI for Interpretable Credit Scoring." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101516.
Fryskowska, Anna, Michal Kedzierski, Damian Wierzbicki, Marcin Gorka, and Natalia Berlinska. "Analysis of Imagery Interpretability of Open Sources Radar Satellite Imagery." In XII Conference on Reconnaissance and Electronic Warfare Systems, edited by Piotr Kaniewski. SPIE, 2019. http://dx.doi.org/10.1117/12.2525013.
Walczykowski, Piotr, Marcin Gorka, Michal Kedzierski, Aleksandra Sekrecka, and Marcin Walkowiak. "Evaluation of the Interpretability of Satellite Imagery Obtained from Open Sources of Information." In XII Conference on Reconnaissance and Electronic Warfare Systems, edited by Piotr Kaniewski. SPIE, 2019. http://dx.doi.org/10.1117/12.2525019.