Academic literature on the topic 'Interpretable AI'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Interpretable AI.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of an academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Interpretable AI"

1

Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.

Abstract:
This paper presents the use of two popular explainability tools called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The neural network is used to classify the masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained neural network model. The explanations provide further insights into the relationship between the input features and the predictions. The SHAP methodology additionally provides a more holistic view of the effect of the inputs on the output predictions. The results also present the commonalities between the insights gained using LIME and SHAP. Although this paper focuses on the use of deep neural networks trained on the UCI Breast Cancer Wisconsin dataset, the methodology can be applied to other neural networks and architectures trained for other applications. The deep neural network trained in this work provides a high level of accuracy. Analyzing the model using LIME and SHAP adds the much-desired benefit of providing explanations for the recommendations made by the trained model.
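To make the workflow described in this abstract concrete, here is a minimal, hypothetical sketch (not the authors' code) of applying LIME and SHAP to a small neural network trained on the scikit-learn copy of the UCI Breast Cancer Wisconsin data; the network size, sample counts, and all parameter choices are illustrative assumptions.

```python
# Hypothetical sketch only -- not the paper's code or model.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

# Small stand-in for the paper's deep network (assumed architecture).
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)

# LIME: local explanation for a single prediction.
lime_explainer = LimeTabularExplainer(
    X_tr, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(X_te[0], model.predict_proba, num_features=10)
print(lime_exp.as_list())

# SHAP: model-agnostic Kernel SHAP with a small background sample.
background = shap.sample(X_tr, 50)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_te[:5])  # per-class feature attributions
```

The two explainers answer the same local question ("which of the 30 features pushed this prediction toward benign or malignant?"), which is what allows the paper to compare the commonalities between their outputs.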
2

Jia, Xun, Lei Ren, and Jing Cai. "Clinical implementation of AI technologies will require interpretable AI models." Medical Physics 47, no. 1 (November 19, 2019): 1–4. http://dx.doi.org/10.1002/mp.13891.

3

Xu, Wei, Jianshan Sun, and Mengxiang Li. "Guest editorial: Interpretable AI-enabled online behavior analytics." Internet Research 32, no. 2 (March 15, 2022): 401–5. http://dx.doi.org/10.1108/intr-04-2022-683.

4

Skirzyński, Julian, Frederic Becker, and Falk Lieder. "Automatic discovery of interpretable planning strategies." Machine Learning 110, no. 9 (April 9, 2021): 2641–83. http://dx.doi.org/10.1007/s10994-021-05963-2.

Abstract:
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people’s planning strategies and decisions across three different classes of sequential decision problems. Moreover, our fourth experiment revealed that this approach is significantly more effective at improving human decision-making than training people by giving them performance feedback. Finally, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making. The code for our algorithm and the experiments is available at https://github.com/RationalityEnhancement/InterpretableStrategyDiscovery.
5

Tomsett, Richard, Alun Preece, Dave Braines, Federico Cerutti, Supriyo Chakraborty, Mani Srivastava, Gavin Pearson, and Lance Kaplan. "Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI." Patterns 1, no. 4 (July 2020): 100049. http://dx.doi.org/10.1016/j.patter.2020.100049.

6

Herzog, Christian. "On the risk of confusing interpretability with explicability." AI and Ethics 2, no. 1 (December 9, 2021): 219–25. http://dx.doi.org/10.1007/s43681-021-00121-9.

Abstract:
This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable to facilitate responsible use.
7

Schmidt Nordmo, Tor-Arne, Ove Kvalsvik, Svein Ove Kvalsund, Birte Hansen, and Michael A. Riegler. "Fish AI." Nordic Machine Intelligence 2, no. 2 (June 2, 2022): 1–3. http://dx.doi.org/10.5617/nmi.9657.

Abstract:
Sustainable Commercial Fishing is the second challenge at the Nordic AI Meet, following the successful MedAI, which focused on medical image segmentation and transparency in machine learning (ML)-based systems. FishAI focuses on a new domain, namely commercial fishing, and how to make it more sustainable with the help of machine learning. A range of publicly available datasets is used to tackle three specific tasks. The first is to predict fishing coordinates to optimize the catching of specific fish, the second is to create a report that can be used by experienced fishermen, and the third is to make a sustainable fishing plan that provides a route for a week. The second and third tasks require, to some extent, explainable and interpretable models that can provide explanations. A development dataset is provided, and all methods will be tested on a concealed test dataset and assessed by an expert jury.
8

Park, Sungjoon, Akshat Singhal, Erica Silva, Jason F. Kreisberg, and Trey Ideker. "Abstract 1159: Predicting clinical drug responses using a few-shot learning-based interpretable AI." Cancer Research 82, no. 12_Supplement (June 15, 2022): 1159. http://dx.doi.org/10.1158/1538-7445.am2022-1159.

Abstract:
High-throughput screens have generated large amounts of data characterizing how thousands of cell lines respond to hundreds of anti-cancer therapies. However, predictive drug response models trained using data from cell lines often fail to translate to clinical applications. Here, we focus on two key issues to improve clinical performance: 1) Transferability: the ability of predictive models to quickly adapt to clinical contexts even with a limited number of samples from patients, and 2) Interpretability: the ability to explain how drug-response predictions are being made given an individual patient’s genotype. Notably, an interpretable AI model can also help to identify biomarkers of treatment response in individual patients. By leveraging new developments in meta-learning and interpretable AI, we have developed an interpretable drug response prediction model that is trained on large amounts of data from experiments using cell lines and then transferred to clinical applications. We assessed our model’s clinical utility using AACR Project GENIE data, which contains mutational profiles from tumors and the patient’s therapeutic responses. We have demonstrated the feasibility of applying our AI-driven predictive model to clinical settings and shown how this model can support clinical decision making for tumor boards. Citation Format: Sungjoon Park, Akshat Singhal, Erica Silva, Jason F. Kreisberg, Trey Ideker. Predicting clinical drug responses using a few-shot learning-based interpretable AI [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1159.
9

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
10

Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "An Explanation Framework for Interpretable Credit Scoring." International Journal of Artificial Intelligence & Applications 12, no. 1 (January 31, 2021): 19–38. http://dx.doi.org/10.5121/ijaia.2021.12102.

Abstract:
With the recent boosted enthusiasm in Artificial Intelligence (AI) and Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. However, despite the ever-growing achievements, the biggest obstacle in most AI systems is their lack of interpretability. This deficiency of transparency limits their application in different domains including credit scoring. Credit scoring systems help financial experts make better decisions regarding whether or not to accept a loan application so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. A recently introduced concept is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations (i.e. global, local feature-based and local instance-based) that are required by different people in different situations. Evaluation through the use of functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent as well as correct, effective, easy to understand, sufficiently detailed and trustworthy.
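As a rough illustration of the kind of pipeline this abstract describes, the hedged sketch below pairs an XGBoost classifier with SHAP to produce a global feature ranking and a local, per-applicant explanation. It uses synthetic data and invented parameters rather than the HELOC or Lending Club datasets, and it is not the authors' 360-degree framework.

```python
# Hedged sketch with synthetic data -- not the authors' framework.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Placeholder for a preprocessed, numeric credit-scoring table.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                          eval_metric="logloss")
model.fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)        # shape: (n_samples, n_features)

# Global explanation: rank features by mean absolute SHAP value.
global_ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]

# Local feature-based explanation: contributions for one applicant.
applicant_contributions = dict(enumerate(shap_values[0]))
```

The split between the aggregated ranking and the single-row attributions mirrors the paper's distinction between global and local feature-based explanations for different audiences.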

Dissertations / Theses on the topic "Interpretable AI"

1

Gustafsson, Sebastian. "Interpretable serious event forecasting using machine learning and SHAP." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444363.

Abstract:
Accurate forecasts are vital in multiple areas of economic, scientific, commercial, and industrial activity. There are few previous studies on using forecasting methods for predicting serious events. This thesis set out to investigate two things, firstly whether machine learning models could be applied to the objective of forecasting serious events. Secondly, if the models could be made interpretable. Given these objectives, the approach was to formulate two forecasting tasks for the models and then use the Python framework SHAP to make them interpretable. The first task was to predict if a serious event will happen in the coming eight hours. The second task was to forecast how many serious events that will happen in the coming six hours. GBDT and LSTM models were implemented, evaluated, and compared on both tasks. Given the problem complexity of forecasting, the results match those of previous related research. On the classification task, the best performing model achieved an accuracy of 71.6%, and on the regression task, it missed by less than 1 on average.
2

Joel, Viklund. "Explaining the output of a black box model and a white box model: an illustrative comparison." Thesis, Uppsala universitet, Filosofiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420889.

Abstract:
The thesis investigates how one should determine the appropriate transparency of an information processing system from a receiver perspective. Research in the past has suggested that the model should be maximally transparent for what is labeled as ”high stake decisions”. Instead of motivating the choice of a model’s transparency on the non-rigorous criterion that the model contributes to a high stake decision, this thesis explores an alternative method. The suggested method involves that one should let the transparency depend on how well an explanation of the model’s output satisfies the purpose of an explanation. As a result, we do not have to bother if it is a high stake decision, we should instead make sure the model is sufficiently transparent to provide an explanation that satisfies the expressed purpose of an explanation.
3

Norrie, Christian. "Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19845.

Abstract:
Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques to some domains is an inability for AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or there are legal requirements for justifying decisions that are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics field. These techniques are evaluated through a user study. User study results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation and research utilizing a user study survey or interview is suggested to increase interpretability and explainability of machine learning results.
4

Fjellström, Lisa. "The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184671.

Abstract:
Videos manipulated by machine learning have rapidly increased online in recent years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method Local Interpretable Model-Agnostic Explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted in which three multimedia forensics experts assessed the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet answers showed that LIME may be used to indicate areas in which to start examining. LIME was, however, not considered to provide sufficient proof of why a frame was classified as 'fake', and would, if introduced, be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
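For readers unfamiliar with how LIME operates on images, the following hedged sketch shows the general superpixel-explanation call pattern of the lime package; the classifier stub, the random frame, and all parameters are placeholders rather than the thesis's deepfake detector.

```python
# Hedged sketch -- the detector, frame, and parameters are placeholders.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def detector(images):
    # Hypothetical stand-in for a deepfake detector: must return an
    # (n_samples, n_classes) array of class probabilities.
    fake_prob = np.full(len(images), 0.7)
    return np.column_stack([1.0 - fake_prob, fake_prob])

frame = np.random.rand(224, 224, 3)  # placeholder for one video frame

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(frame, detector,
                                         top_labels=1, num_samples=200)
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True,
                                           num_features=5, hide_rest=False)
highlighted = mark_boundaries(img, mask)  # superpixels LIME deems most influential
```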
5

Gridelli, Eleonora. "Interpretabilità nel Machine Learning tramite modelli di ottimizzazione discreta." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23216/.

Abstract:
Since its origins, machine learning has been influenced by mathematical optimization; indeed, many methods are implemented as problems of minimizing objective functions. Nevertheless, finding optimal solutions to real-size problems is hard, so machine learning algorithms often simplify the problem and settle for a sub-optimal solution. This way of proceeding, although in theory less accurate, yields solutions that are computationally cheaper yet still competitive. With technological progress, solvers for mixed-integer optimization (MIO) have become increasingly efficient, and this has enabled the development of a new library, called Interpretable AI. It collects several algorithms for building optimal binary trees, obtained by solving the underlying MIO problem, as an alternative to traditional trees generated with a greedy approach. This thesis therefore analyzes and evaluates the Interpretable AI library's algorithm for building optimal binary regression trees. The authors claim that it can match black-box methods in performance while, being a tree, guaranteeing a high level of interpretability.
6

Balayan, Vladimir. "Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection." Master's thesis, 2020. http://hdl.handle.net/10362/130774.

Abstract:
Machine Learning (ML) has been increasingly used to aid humans making high-stakes decisions in a wide range of areas, from public policy to criminal justice, education, healthcare, or financial services. However, it is very hard for humans to grasp the rationale behind every ML model’s prediction, hindering trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to tackle this problem, aiming to research and develop methods to make those “black-boxes” more interpretable, but there is still no major breakthrough. Additionally, the most popular explanation methods — LIME and SHAP — produce very low-level feature attribution explanations, being of limited usefulness to personas without any ML knowledge. This work was developed at Feedzai, a fintech company that uses ML to prevent financial crime. One of the main Feedzai products is a case management application used by fraud analysts to review suspicious financial transactions flagged by the ML models. Fraud analysts are domain experts trained to look for suspicious evidence in transactions but they do not have ML knowledge, and consequently, current XAI methods do not suit their information needs. To address this, we present JOEL, a neural network-based framework to jointly learn a decision-making task and associated domain knowledge explanations. JOEL is tailored to human-in-the-loop domain experts that lack deep technical ML knowledge, providing high-level insights about the model’s predictions that very much resemble the experts’ own reasoning. Moreover, by collecting the domain feedback from a pool of certified experts (human teaching), we promote seamless and better quality explanations. Lastly, we resort to semantic mappings between legacy expert systems and domain taxonomies to automatically annotate a bootstrap training set, overcoming the absence of concept-based human annotations. We validate JOEL empirically on a real-world fraud detection dataset, at Feedzai. We show that JOEL can generalize the explanations from the bootstrap dataset. Furthermore, obtained results indicate that human teaching is able to further improve the explanations prediction quality.

Books on the topic "Interpretable AI"

1

Guerrini, Mauro. De bibliothecariis. Edited by Tiziana Stagi. Florence: Firenze University Press, 2017. http://dx.doi.org/10.36253/978-88-6453-559-3.

Abstract:
In the librarian's work, the technical dimension, essential for working competently, cannot be separated from commitment, from attention to civil rights and to the way these are experienced and practised within one's community. Guaranteeing access to information cannot be limited to 'our' library; it must be a responsibility that concerns the territory where we live and work, looking to colleagues who may find themselves in more difficult situations than ours and, above all, to people who struggle to exercise their rights. The hope is that the transmission of recorded knowledge will contribute ever more to freedom, to rights, and to the well-being of all. When will it be understood that investing in libraries means investing in democracy, economic development, and quality of life? The frame of reference for understanding and interpreting the problems of libraries is, as always, comparison with international library traditions, starting with the European continent, precisely because the profession today has a theoretical framework and an operational dimension of global scope.
2

Thampi, Ajay. Interpretable AI: Building Explainable Machine Learning Systems. Manning Publications Co. LLC, 2022.

3

Cappelen, Herman, and Josh Dever. Making AI Intelligible. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192894724.001.0001.

Abstract:
Can humans and artificial intelligences share concepts and communicate? One aim of Making AI Intelligible is to show that philosophical work on the metaphysics of meaning can help answer these questions. Cappelen and Dever use the externalist tradition in the philosophy of language to create models of how AIs and humans can understand each other. In doing so, they also show ways in which that philosophical tradition can be improved: our linguistic encounters with AIs reveal that our theories of meaning have been excessively anthropocentric. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. Many important decisions about human life are now influenced by AI. In giving that power to AI, we presuppose that AIs can track features of the world that we care about (e.g., creditworthiness, recidivism, cancer, and combatants). If AIs can share our concepts, that will go some way towards justifying this reliance on AI. The book can be read as a proposal for how to take some first steps towards achieving interpretable AI. Making AI Intelligible is of interest to both philosophers of language and anyone who follows current events or interacts with AI systems. It illustrates how philosophy can help us understand and improve our interactions with AI.
4

Explainable Fuzzy Systems: Paving the Way from Interpretable Fuzzy Systems to Explainable AI Systems. Springer International Publishing AG, 2021.

5

Explainable Fuzzy Systems: Paving the Way from Interpretable Fuzzy Systems to Explainable AI Systems. Springer International Publishing AG, 2022.

6

Bellodi Ansaloni, Anna. L’arte dell’avvocato, actor veritatis. Bononia University Press, 2021. http://dx.doi.org/10.30682/sg279.

Abstract:
The volume retraces Roman rhetorical doctrine in order to outline the technique of persuasive speech to be used in framing cases and composing courtroom pleadings. The precepts intertwine with the lessons of the theatre, returning a composite portrait of the forensic orator: on the one hand indebted, in pathos and gesture, to the actor's stratagems, to the point of producing a kind of 'spectacularisation' of justice; on the other, an actor veritatis, bearer of a truth to be interpreted and promoted in the courtroom. The meeting point that allows a skilful and effective mediation between action, acting, and truth is firm adherence to ethical and moral principles in fulfilling one's officium in the service of justice, principles that the tradition of forensic ethics has handed down to the present day. The book proposes a reading path that reveals the enduring relevance of those techniques and values in the education of the modern jurist called to tread the stage of the trial arena.

Book chapters on the topic "Interpretable AI"

1

Elton, Daniel C. "Self-explaining AI as an Alternative to Interpretable AI." In Artificial General Intelligence, 95–106. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52152-3_10.

2

Bastani, Osbert, Jeevana Priya Inala, and Armando Solar-Lezama. "Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis." In xxAI - Beyond Explainable AI, 207–28. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_11.

Abstract:
Reinforcement learning is a promising strategy for automatically training policies for challenging control tasks. However, state-of-the-art deep reinforcement learning algorithms focus on training deep neural network (DNN) policies, which are black box models that are hard to interpret and reason about. In this chapter, we describe recent progress towards learning policies in the form of programs. Compared to DNNs, such programmatic policies are significantly more interpretable, easier to formally verify, and more robust. We give an overview of algorithms designed to learn programmatic policies, and describe several case studies demonstrating their various advantages.
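To illustrate what a "programmatic policy" looks like in contrast to a DNN policy, here is a toy, hand-written example for a CartPole-style control task; the rule and its thresholds are invented for illustration and are not produced by the program-synthesis methods the chapter surveys.

```python
# Toy illustration of a programmatic policy; thresholds are invented,
# not synthesized by the methods described in the chapter.
def cartpole_policy(observation):
    """Readable rule-based policy for a CartPole-style task.

    observation = (cart_position, cart_velocity, pole_angle, pole_velocity)
    Returns 1 to push right, 0 to push left.
    """
    _, _, pole_angle, pole_velocity = observation
    # Push toward the side the pole is falling, anticipating its velocity.
    if pole_angle + 0.1 * pole_velocity > 0.0:
        return 1
    return 0
```

Unlike a neural policy, every decision this function makes can be read, verified against a specification, and edited directly, which is the advantage the chapter emphasizes.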
3

Preuer, Kristina, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, and Thomas Unterthiner. "Interpretable Deep Learning in Drug Discovery." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 331–45. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_18.

4

Bewley, Tom, Jonathan Lawry, and Arthur Richards. "Modelling Agent Policies with Interpretable Imitation Learning." In Trustworthy AI - Integrating Learning, Optimization and Reasoning, 180–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73959-1_16.

5

Schütt, Kristof T., Michael Gastegger, Alexandre Tkatchenko, and Klaus-Robert Müller. "Quantum-Chemical Insights from Interpretable Atomistic Neural Networks." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 311–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_17.

6

MacDonald, Samual, Kaiah Steven, and Maciej Trzaskowski. "Interpretable AI in Healthcare: Enhancing Fairness, Safety, and Trust." In Artificial Intelligence in Medicine, 241–58. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1223-8_11.

7

Mallia, Natalia, Alexiei Dingli, and Foaad Haddod. "MIRAI: A Modifiable, Interpretable, and Rational AI Decision System." In Studies in Computational Intelligence, 127–41. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-61045-6_10.

8

Dinu, Marius-Constantin, Markus Hofmarcher, Vihang P. Patil, Matthias Dorfer, Patrick M. Blies, Johannes Brandstetter, Jose A. Arjona-Medina, and Sepp Hochreiter. "XAI and Strategy Extraction via Reward Redistribution." In xxAI - Beyond Explainable AI, 177–205. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_10.

Abstract:
In reinforcement learning, an agent interacts with an environment from which it receives rewards, that are then used to learn a task. However, it is often unclear what strategies or concepts the agent has learned to solve the task. Thus, interpretability of the agent’s behavior is an important aspect in practical applications, next to the agent’s performance at the task itself. However, with the increasing complexity of both tasks and agents, interpreting the agent’s behavior becomes much more difficult. Therefore, developing new interpretable RL agents is of high importance. To this end, we propose to use Align-RUDDER as an interpretability method for reinforcement learning. Align-RUDDER is a method based on the recently introduced RUDDER framework, which relies on contribution analysis of an LSTM model, to redistribute rewards to key events. From these key events a strategy can be derived, guiding the agent’s decisions in order to solve a certain task. More importantly, the key events are in general interpretable by humans, and are often sub-tasks; where solving these sub-tasks is crucial for solving the main task. Align-RUDDER enhances the RUDDER framework with methods from multiple sequence alignment (MSA) to identify key events from demonstration trajectories. MSA needs only a few trajectories in order to perform well, and is much better understood than deep learning models such as LSTMs. Consequently, strategies and concepts can be learned from a few expert demonstrations, where the expert can be a human or an agent trained by reinforcement learning. By substituting RUDDER’s LSTM with a profile model that is obtained from MSA of demonstration trajectories, we are able to interpret an agent at three stages: First, by extracting common strategies from demonstration trajectories with MSA. Second, by encoding the most prevalent strategy via the MSA profile model and therefore explaining the expert’s behavior. And third, by allowing the interpretation of an arbitrary agent’s behavior based on its demonstration trajectories.
9

Hong, Seunghoon, Dingdong Yang, Jongwook Choi, and Honglak Lee. "Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 77–95. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_5.

10

Adadi, Amina, and Mohammed Berrada. "Explainable AI for Healthcare: From Black Box to Interpretable Models." In Embedded Systems and Artificial Intelligence, 327–37. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0947-6_31.


Conference papers on the topic "Interpretable AI"

1

Sengoz, Nilgun, and Tuncay Yigit. "Towards Third Generation AI: Explainable and Interpretable AI." In 2022 7th International Conference on Computer Science and Engineering (UBMK). IEEE, 2022. http://dx.doi.org/10.1109/ubmk55850.2022.9919510.

2

Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "Explainable AI for Interpretable Credit Scoring." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101516.

Abstract:
With the ever-growing achievements in Artificial Intelligence (AI) and the recent boosted enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application, such that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has been recently introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations (i.e. global, local feature-based and local instance-based) that are required by different people in different situations. Evaluation through the use of functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency and trustworthiness.
3

Custode, Leonardo Lucio, and Giovanni Iacca. "Interpretable AI for policy-making in pandemics." In GECCO '22: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3520304.3533959.

4

Guidotti, Riccardo, and Anna Monreale. "Designing Shapelets for Interpretable Data-Agnostic Classification." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462553.

5

Zhang, Wei, Brian Barr, and John Paisley. "An Interpretable Deep Classifier for Counterfactual Generation." In ICAIF '22: 3rd ACM International Conference on AI in Finance. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3533271.3561722.

6

Ignatiev, Alexey, Joao Marques-Silva, Nina Narodytska, and Peter J. Stuckey. "Reasoning-Based Learning of Interpretable ML Models." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/608.

Abstract:
Artificial Intelligence (AI) is widely used in decision making procedures in myriads of real-world applications across important practical areas such as finance, healthcare, education, and safety critical systems. Due to its ubiquitous use in safety and privacy critical domains, it is often vital to understand the reasoning behind the AI decisions, which motivates the need for explainable AI (XAI). One of the major approaches to XAI is represented by computing so-called interpretable machine learning (ML) models, such as decision trees (DT), decision lists (DL) and decision sets (DS). These models build on the use of if-then rules and are thus deemed to be easily understandable by humans. A number of approaches have been proposed in the recent past to devising all kinds of interpretable ML models, the most prominent of which involve encoding the problem into a logic formalism, which is then tackled by invoking a reasoning or discrete optimization procedure. This paper overviews the recent advances of the reasoning and constraints based approaches to learning interpretable ML models and discusses their advantages and limitations.
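As a reminder of why such rule-based models are deemed human-readable, here is a toy decision list (an ordered sequence of if-then rules) of the sort the paper discusses; the rules and thresholds are invented for illustration and are not learned by any of the surveyed reasoning-based methods.

```python
# Toy decision list: an ordered sequence of if-then rules, evaluated top to
# bottom. The rules are invented for illustration only.
def credit_decision_list(income, debt_ratio, prior_defaults):
    if prior_defaults > 0:
        return "reject"
    if debt_ratio > 0.6:
        return "reject"
    if income >= 40_000 and debt_ratio <= 0.35:
        return "accept"
    return "manual review"  # default rule when no earlier rule fires
```

Reasoning-based approaches differ from greedy heuristics in that they search for the smallest or most accurate such rule set by encoding the learning problem into SAT, MaxSAT, or integer programming, as the survey describes.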
7

Verma, Pulkit, Shashank Rao Marpally, and Siddharth Srivastava. "Discovering User-Interpretable Capabilities of Black-Box Planning Agents." In 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/kr.2022/36.

Abstract:
Several approaches have been developed for answering users' specific questions about AI behavior and for assessing their core functionality in terms of primitive executable actions. However, the problem of summarizing an AI agent's broad capabilities for a user is comparatively new. This paper presents an algorithm for discovering from scratch the suite of high-level "capabilities" that an AI system with arbitrary internal planning algorithms/policies can perform. It computes conditions describing the applicability and effects of these capabilities in user-interpretable terms. Starting from a set of user-interpretable state properties, an AI agent, and a simulator that the agent can interact with, our algorithm returns a set of high-level capabilities with their parameterized descriptions. Empirical evaluation on several game-based scenarios shows that this approach efficiently learns descriptions of various types of AI agents in deterministic, fully observable settings. User studies show that such descriptions are easier to understand and reason with than the agent's primitive actions.
8

Kim, Tae Wan, and Bryan R. Routledge. "Informational Privacy, A Right to Explanation, and Interpretable AI." In 2018 IEEE Symposium on Privacy-Aware Computing (PAC). IEEE, 2018. http://dx.doi.org/10.1109/pac.2018.00013.

9

Preece, Alun, Dan Harborne, Ramya Raghavendra, Richard Tomsett, and Dave Braines. "Provisioning Robust and Interpretable AI/ML-Based Service Bundles." In MILCOM 2018 - IEEE Military Communications Conference. IEEE, 2018. http://dx.doi.org/10.1109/milcom.2018.8599838.

10

Pitroda, Vidhi, Mostafa M. Fouda, and Zubair Md Fadlullah. "An Explainable AI Model for Interpretable Lung Disease Classification." In 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS). IEEE, 2021. http://dx.doi.org/10.1109/iotais53735.2021.9628573.


Reports on the topic "Interpretable AI"

1

Chen, Thomas, Biprateep Dey, Aishik Ghosh, Michael Kagan, Brian Nord, and Nesar Ramachandra. Interpretable Uncertainty Quantification in AI for HEP. Office of Scientific and Technical Information (OSTI), August 2022. http://dx.doi.org/10.2172/1886020.

2

Zhu, Qing, William Riley, and James Randerson. Improve wildfire predictability driven by extreme water cycle with interpretable physically-guided ML/AI. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769720.
