Academic literature on the topic 'Interpretable AI'
Below are lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Interpretable AI.'
Journal articles on the topic "Interpretable AI"
Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.
Jia, Xun, Lei Ren, and Jing Cai. "Clinical implementation of AI technologies will require interpretable AI models." Medical Physics 47, no. 1 (November 19, 2019): 1–4. http://dx.doi.org/10.1002/mp.13891.
Xu, Wei, Jianshan Sun, and Mengxiang Li. "Guest editorial: Interpretable AI-enabled online behavior analytics." Internet Research 32, no. 2 (March 15, 2022): 401–5. http://dx.doi.org/10.1108/intr-04-2022-683.
Skirzyński, Julian, Frederic Becker, and Falk Lieder. "Automatic discovery of interpretable planning strategies." Machine Learning 110, no. 9 (April 9, 2021): 2641–83. http://dx.doi.org/10.1007/s10994-021-05963-2.
Tomsett, Richard, Alun Preece, Dave Braines, Federico Cerutti, Supriyo Chakraborty, Mani Srivastava, Gavin Pearson, and Lance Kaplan. "Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI." Patterns 1, no. 4 (July 2020): 100049. http://dx.doi.org/10.1016/j.patter.2020.100049.
Herzog, Christian. "On the risk of confusing interpretability with explicability." AI and Ethics 2, no. 1 (December 9, 2021): 219–25. http://dx.doi.org/10.1007/s43681-021-00121-9.
Schmidt Nordmo, Tor-Arne, Ove Kvalsvik, Svein Ove Kvalsund, Birte Hansen, and Michael A. Riegler. "Fish AI." Nordic Machine Intelligence 2, no. 2 (June 2, 2022): 1–3. http://dx.doi.org/10.5617/nmi.9657.
Park, Sungjoon, Akshat Singhal, Erica Silva, Jason F. Kreisberg, and Trey Ideker. "Abstract 1159: Predicting clinical drug responses using a few-shot learning-based interpretable AI." Cancer Research 82, no. 12_Supplement (June 15, 2022): 1159. http://dx.doi.org/10.1158/1538-7445.am2022-1159.
Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.
Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "An Explanation Framework for Interpretable Credit Scoring." International Journal of Artificial Intelligence & Applications 12, no. 1 (January 31, 2021): 19–38. http://dx.doi.org/10.5121/ijaia.2021.12102.
Dissertations / Theses on the topic "Interpretable AI"
Gustafsson, Sebastian. "Interpretable serious event forecasting using machine learning and SHAP." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444363.
Accurate forecasts are important in several areas of economic, scientific, commercial, and industrial activity. Few previous studies have applied forecasting methods to the prediction of serious events. This thesis aims to investigate two things: first, whether machine learning models can be used to forecast serious events; second, whether those models can be made interpretable. Given these goals, the approach was to formulate two forecasting tasks for the models and then use the Python framework SHAP to make them interpretable. The first task was to predict whether a serious event will occur in the coming eight hours. The second task was to predict how many serious events will occur in the coming six hours. GBDT and LSTM models were implemented, evaluated, and compared on both tasks. Given the complexity of predicting the future, the results match those of earlier related research. On the classification task, the best-performing model achieved an accuracy of 71.6%, and on the regression task it was off, on average, by less than 1 in the number of predicted serious events.
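The thesis above uses the SHAP framework to interpret GBDT and LSTM forecasts. As a minimal illustration of the idea SHAP builds on, the sketch below computes exact Shapley values for a toy scoring function; the model, feature values, and baseline are all hypothetical, and real SHAP usage would instead call the `shap` library on a trained model, since exact enumeration is feasible only for a handful of features.

```python
from itertools import combinations
from math import factorial

# Toy "model": a hand-written score over three binary features.
# (Purely illustrative; stands in for a trained GBDT/LSTM.)
def model(x):
    x1, x2, x3 = x
    return 2.0 * x1 + 1.0 * x2 + 0.5 * x1 * x2 + 0.0 * x3

def shapley_values(model, instance, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over all orderings, with absent features set to baseline."""
    n = len(instance)
    values = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Weight of a coalition of this size in the Shapley formula.
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [instance[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [instance[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi += weight * (model(with_i) - model(without_i))
        values.append(phi)
    return values

phi = shapley_values(model, instance=[1, 1, 1], baseline=[0, 0, 0])
# Efficiency property: attributions sum to f(instance) - f(baseline).
assert abs(sum(phi) - (model([1, 1, 1]) - model([0, 0, 0]))) < 1e-9
```

Here the inert third feature receives an attribution of exactly zero, while the interaction term is split between the two features that produce it, which is the behavior SHAP approximates for real models.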
Viklund, Joel. "Explaining the output of a black box model and a white box model: an illustrative comparison." Thesis, Uppsala universitet, Filosofiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420889.
Norrie, Christian. "Explainable AI techniques for sepsis diagnosis: Evaluating LIME and SHAP through a user study." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19845.
Fjellström, Lisa. "The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video: An Evaluation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184671.
Full textGridelli, Eleonora. "Interpretabilità nel Machine Learning tramite modelli di ottimizzazione discreta." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23216/.
Balayan, Vladimir. "Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection." Master's thesis, 2020. http://hdl.handle.net/10362/130774.
Machine learning (ML) is increasingly used to help humans make high-stakes decisions in a wide range of areas, from politics to criminal justice, education, healthcare, and financial services. However, it is very difficult for humans to understand the reason behind an ML model's decision, which undermines trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to address this problem, aiming to develop methods that make "black boxes" more interpretable, though without a major breakthrough so far. Moreover, the most popular explanation methods, LIME and SHAP, produce very low-level explanations that are of limited use to people without ML expertise. This work was carried out at Feedzai, a fintech that uses ML to prevent financial crime. One of Feedzai's products is a case-management application used by fraud analysts. These are domain experts trained to look for suspicious evidence in financial transactions, but since they lack ML expertise, current XAI methods do not meet their information needs. To address this, we present JOEL, a neural-network-based framework that jointly learns the decision-making task and the associated explanations. JOEL is aimed at domain experts without deep technical knowledge of ML, providing high-level insights into the model's predictions that closely resemble the experts' own reasoning. Furthermore, by collecting feedback from certified experts (human teaching), we promote continuous, higher-quality explanations. Finally, we use semantic mappings between legacy systems and domain taxonomies to automatically annotate a dataset, overcoming the absence of human concept-based annotations. We validate JOEL empirically on a real-world fraud-detection dataset at Feedzai. We show that JOEL can generalize the explanations learned on the initial dataset, and that human teaching is able to improve the quality of the predicted explanations.
Books on the topic "Interpretable AI"
Guerrini, Mauro. De bibliothecariis. Edited by Tiziana Stagi. Florence: Firenze University Press, 2017. http://dx.doi.org/10.36253/978-88-6453-559-3.
Thampi, Ajay. Interpretable AI: Building Explainable Machine Learning Systems. Manning Publications Co. LLC, 2022.
Cappelen, Herman, and Josh Dever. Making AI Intelligible. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192894724.001.0001.
Explainable Fuzzy Systems: Paving the Way from Interpretable Fuzzy Systems to Explainable AI Systems. Springer International Publishing AG, 2021.
Explainable Fuzzy Systems: Paving the Way from Interpretable Fuzzy Systems to Explainable AI Systems. Springer International Publishing AG, 2022.
Bellodi Ansaloni, Anna. L'arte dell'avvocato, actor veritatis. Bononia University Press, 2021. http://dx.doi.org/10.30682/sg279.
Book chapters on the topic "Interpretable AI"
Elton, Daniel C. "Self-explaining AI as an Alternative to Interpretable AI." In Artificial General Intelligence, 95–106. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52152-3_10.
Bastani, Osbert, Jeevana Priya Inala, and Armando Solar-Lezama. "Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis." In xxAI - Beyond Explainable AI, 207–28. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_11.
Preuer, Kristina, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, and Thomas Unterthiner. "Interpretable Deep Learning in Drug Discovery." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 331–45. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_18.
Bewley, Tom, Jonathan Lawry, and Arthur Richards. "Modelling Agent Policies with Interpretable Imitation Learning." In Trustworthy AI - Integrating Learning, Optimization and Reasoning, 180–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73959-1_16.
Schütt, Kristof T., Michael Gastegger, Alexandre Tkatchenko, and Klaus-Robert Müller. "Quantum-Chemical Insights from Interpretable Atomistic Neural Networks." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 311–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_17.
MacDonald, Samual, Kaiah Steven, and Maciej Trzaskowski. "Interpretable AI in Healthcare: Enhancing Fairness, Safety, and Trust." In Artificial Intelligence in Medicine, 241–58. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1223-8_11.
Mallia, Natalia, Alexiei Dingli, and Foaad Haddod. "MIRAI: A Modifiable, Interpretable, and Rational AI Decision System." In Studies in Computational Intelligence, 127–41. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-61045-6_10.
Dinu, Marius-Constantin, Markus Hofmarcher, Vihang P. Patil, Matthias Dorfer, Patrick M. Blies, Johannes Brandstetter, Jose A. Arjona-Medina, and Sepp Hochreiter. "XAI and Strategy Extraction via Reward Redistribution." In xxAI - Beyond Explainable AI, 177–205. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_10.
Hong, Seunghoon, Dingdong Yang, Jongwook Choi, and Honglak Lee. "Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 77–95. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_5.
Adadi, Amina, and Mohammed Berrada. "Explainable AI for Healthcare: From Black Box to Interpretable Models." In Embedded Systems and Artificial Intelligence, 327–37. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0947-6_31.
Conference papers on the topic "Interpretable AI"
Sengoz, Nilgun, and Tuncay Yigit. "Towards Third Generation AI: Explainable and Interpretable AI." In 2022 7th International Conference on Computer Science and Engineering (UBMK). IEEE, 2022. http://dx.doi.org/10.1109/ubmk55850.2022.9919510.
Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "Explainable AI for Interpretable Credit Scoring." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101516.
Custode, Leonardo Lucio, and Giovanni Iacca. "Interpretable AI for policy-making in pandemics." In GECCO '22: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3520304.3533959.
Guidotti, Riccardo, and Anna Monreale. "Designing Shapelets for Interpretable Data-Agnostic Classification." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462553.
Zhang, Wei, Brian Barr, and John Paisley. "An Interpretable Deep Classifier for Counterfactual Generation." In ICAIF '22: 3rd ACM International Conference on AI in Finance. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3533271.3561722.
Ignatiev, Alexey, Joao Marques-Silva, Nina Narodytska, and Peter J. Stuckey. "Reasoning-Based Learning of Interpretable ML Models." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/608.
Verma, Pulkit, Shashank Rao Marpally, and Siddharth Srivastava. "Discovering User-Interpretable Capabilities of Black-Box Planning Agents." In 19th International Conference on Principles of Knowledge Representation and Reasoning {KR-2022}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/kr.2022/36.
Kim, Tae Wan, and Bryan R. Routledge. "Informational Privacy, A Right to Explanation, and Interpretable AI." In 2018 IEEE Symposium on Privacy-Aware Computing (PAC). IEEE, 2018. http://dx.doi.org/10.1109/pac.2018.00013.
Preece, Alun, Dan Harborne, Ramya Raghavendra, Richard Tomsett, and Dave Braines. "Provisioning Robust and Interpretable AI/ML-Based Service Bundles." In MILCOM 2018 - IEEE Military Communications Conference. IEEE, 2018. http://dx.doi.org/10.1109/milcom.2018.8599838.
Pitroda, Vidhi, Mostafa M. Fouda, and Zubair Md Fadlullah. "An Explainable AI Model for Interpretable Lung Disease Classification." In 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS). IEEE, 2021. http://dx.doi.org/10.1109/iotais53735.2021.9628573.
Reports on the topic "Interpretable AI"
Chen, Thomas, Biprateep Dey, Aishik Ghosh, Michael Kagan, Brian Nord, and Nesar Ramachandra. Interpretable Uncertainty Quantification in AI for HEP. Office of Scientific and Technical Information (OSTI), August 2022. http://dx.doi.org/10.2172/1886020.
Zhu, Qing, William Riley, and James Randerson. Improve wildfire predictability driven by extreme water cycle with interpretable physically-guided ML/AI. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769720.