Journal articles on the topic 'Interpretable AI'

To see the other types of publications on this topic, follow the link: Interpretable AI.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Interpretable AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.

Full text
Abstract:
This paper presents the use of two popular explainability tools called Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP) to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The neural network is used to classify the masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained neural network model. The explanations provide further insights into the relationship between the input features and the predictions. SHAP methodology additionally provides a more holistic view of the effect of the inputs on the output predictions. The results also present the commonalities between the insights gained using LIME and SHAP. Although this paper focuses on the use of deep neural networks trained on UCI Breast Cancer Wisconsin dataset, the methodology can be applied to other neural networks and architectures trained on other applications. The deep neural network trained in this work provides a high level of accuracy. Analyzing the model using LIME and SHAP adds the much desired benefit of providing explanations for the recommendations made by the trained model.
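
As a concrete illustration of the two tools named in this abstract, the following minimal sketch (not the authors' code; the scikit-learn MLP and its settings are assumptions) shows how LIME and SHAP are typically used to explain individual predictions of a classifier trained on the UCI Breast Cancer Wisconsin dataset.

```python
# Minimal sketch (not the authors' code): explaining an MLP trained on the
# UCI Breast Cancer Wisconsin dataset with LIME and SHAP.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=0).fit(X_train, y_train)

# LIME: fit a local surrogate around one test instance and list the top features.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names),
    class_names=list(data.target_names), discretize_continuous=True)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=10)
print(lime_exp.as_list())

# SHAP: model-agnostic Shapley value estimates for the same instance.
background = shap.sample(X_train, 100)          # subsample for tractability
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:1])
print(shap_values)                              # per-class feature contributions
```
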
APA, Harvard, Vancouver, ISO, and other styles
2

Jia, Xun, Lei Ren, and Jing Cai. "Clinical implementation of AI technologies will require interpretable AI models." Medical Physics 47, no. 1 (November 19, 2019): 1–4. http://dx.doi.org/10.1002/mp.13891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Wei, Jianshan Sun, and Mengxiang Li. "Guest editorial: Interpretable AI-enabled online behavior analytics." Internet Research 32, no. 2 (March 15, 2022): 401–5. http://dx.doi.org/10.1108/intr-04-2022-683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Skirzyński, Julian, Frederic Becker, and Falk Lieder. "Automatic discovery of interpretable planning strategies." Machine Learning 110, no. 9 (April 9, 2021): 2641–83. http://dx.doi.org/10.1007/s10994-021-05963-2.

Full text
Abstract:
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people’s planning strategies and decisions across three different classes of sequential decision problems. Moreover, our fourth experiment revealed that this approach is significantly more effective at improving human decision-making than training people by giving them performance feedback. Finally, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making. The code for our algorithm and the experiments is available at https://github.com/RationalityEnhancement/InterpretableStrategyDiscovery.
APA, Harvard, Vancouver, ISO, and other styles
5

Tomsett, Richard, Alun Preece, Dave Braines, Federico Cerutti, Supriyo Chakraborty, Mani Srivastava, Gavin Pearson, and Lance Kaplan. "Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI." Patterns 1, no. 4 (July 2020): 100049. http://dx.doi.org/10.1016/j.patter.2020.100049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Herzog, Christian. "On the risk of confusing interpretability with explicability." AI and Ethics 2, no. 1 (December 9, 2021): 219–25. http://dx.doi.org/10.1007/s43681-021-00121-9.

Full text
Abstract:
This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable to facilitate responsible use.
APA, Harvard, Vancouver, ISO, and other styles
7

Schmidt Nordmo, Tor-Arne, Ove Kvalsvik, Svein Ove Kvalsund, Birte Hansen, and Michael A. Riegler. "Fish AI." Nordic Machine Intelligence 2, no. 2 (June 2, 2022): 1–3. http://dx.doi.org/10.5617/nmi.9657.

Full text
Abstract:
Sustainable Commercial Fishing is the second challenge at the Nordic AI Meet following the successful MedAI, which had a focus on medical image segmentation and transparency in machine learning (ML)-based systems. FishAI focuses on a new domain, namely, commercial fishing and how to make it more sustainable with the help of machine learning. A range of publicly available datasets is used to tackle three specific tasks. The first one is to predict fishing coordinates to optimize catching of specific fish, the second one is to create a report that can be used by experienced fishermen, and the third task is to make a sustainable fishing plan that provides a route for a week. The second and third tasks require, to some extent, explainable and interpretable models that can provide explanations. A development dataset is provided and all methods will be tested on a concealed test dataset and assessed by an expert jury. Keywords: artificial intelligence; machine learning; segmentation; transparency; medicine
APA, Harvard, Vancouver, ISO, and other styles
8

Park, Sungjoon, Akshat Singhal, Erica Silva, Jason F. Kreisberg, and Trey Ideker. "Abstract 1159: Predicting clinical drug responses using a few-shot learning-based interpretable AI." Cancer Research 82, no. 12_Supplement (June 15, 2022): 1159. http://dx.doi.org/10.1158/1538-7445.am2022-1159.

Full text
Abstract:
High-throughput screens have generated large amounts of data characterizing how thousands of cell lines respond to hundreds of anti-cancer therapies. However, predictive drug response models trained using data from cell lines often fail to translate to clinical applications. Here, we focus on two key issues to improve clinical performance: 1) Transferability: the ability of predictive models to quickly adapt to clinical contexts even with a limited number of samples from patients, and 2) Interpretability: the ability to explain how drug-response predictions are being made given an individual patient’s genotype. Notably, an interpretable AI model can also help to identify biomarkers of treatment response in individual patients. By leveraging new developments in meta-learning and interpretable AI, we have developed an interpretable drug response prediction model that is trained on large amounts of data from experiments using cell lines and then transferred to clinical applications. We assessed our model’s clinical utility using AACR Project GENIE data, which contains mutational profiles from tumors and the patient’s therapeutic responses. We have demonstrated the feasibility of applying our AI-driven predictive model to clinical settings and shown how this model can support clinical decision making for tumor boards. Citation Format: Sungjoon Park, Akshat Singhal, Erica Silva, Jason F. Kreisberg, Trey Ideker. Predicting clinical drug responses using a few-shot learning-based interpretable AI [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1159.
APA, Harvard, Vancouver, ISO, and other styles
9

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Full text
Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
APA, Harvard, Vancouver, ISO, and other styles
10

Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "An Explanation Framework for Interpretable Credit Scoring." International Journal of Artificial Intelligence & Applications 12, no. 1 (January 31, 2021): 19–38. http://dx.doi.org/10.5121/ijaia.2021.12102.

Full text
Abstract:
With the recent boosted enthusiasm in Artificial Intelligence (AI) and Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. However, despite the evergrowing achievements, the biggest obstacle in most AI systems is their lack of interpretability. This deficiency of transparency limits their application in different domains including credit scoring. Credit scoring systems help financial experts make better decisions regarding whether or not to accept a loan application so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the `right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. A recently introduced concept is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations (i.e. global, local feature-based and local instance- based) that are required by different people in different situations. Evaluation through the use of functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent as well as correct, effective, easy to understand, sufficiently detailed and trustworthy.
APA, Harvard, Vancouver, ISO, and other styles
11

GhoshRoy, Debasmita, Parvez Ahmad Alvi, and KC Santosh. "Explainable AI to Predict Male Fertility Using Extreme Gradient Boosting Algorithm with SMOTE." Electronics 12, no. 1 (December 21, 2022): 15. http://dx.doi.org/10.3390/electronics12010015.

Full text
Abstract:
Infertility is a common problem across the world. Infertility distribution due to male factors ranges from 40% to 50%. Existing artificial intelligence (AI) systems are not often human interpretable. Further, clinicians are unaware of how data analytical tools make decisions, and as a result, they have limited exposure to healthcare. Using explainable AI tools makes AI systems transparent and traceable, enhancing users’ trust and confidence in decision-making. The main contribution of this study is to introduce an explainable model for investigating male fertility prediction. Nine features related to lifestyle and environmental factors are utilized to develop a male fertility prediction model. Five AI tools, namely support vector machine, adaptive boosting, conventional extreme gradient boost (XGB), random forest, and extra tree algorithms are deployed with a balanced and imbalanced dataset. To produce our model in a trustworthy way, an explainable AI is applied. The techniques are (1) local interpretable model-agnostic explanations (LIME) and (2) Shapley additive explanations (SHAP). Additionally, ELI5 is utilized to inspect the feature’s importance. Finally, XGB outperformed and obtained an AUC of 0.98, which is optimal compared to existing AI systems.
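
The pipeline described in this abstract (SMOTE balancing, gradient-boosted trees, then post hoc explanation) can be illustrated with a short sketch; the synthetic data and hyperparameters below are placeholders, not the study's dataset or settings.

```python
# Illustrative sketch only: SMOTE-balanced XGBoost with SHAP values, mirroring
# the kind of pipeline the abstract describes. The data here are synthetic.
import shap
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Placeholder standing in for the nine lifestyle/environmental features.
X, y = make_classification(n_samples=500, n_features=9,
                           weights=[0.88, 0.12], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = XGBClassifier(n_estimators=200, max_depth=3, random_state=0)
clf.fit(X_res, y_res)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

# Tree SHAP gives per-feature contributions for every individual prediction.
shap_values = shap.TreeExplainer(clf).shap_values(X_test)
print(shap_values.shape)                 # (n_test_samples, n_features)
```
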
APA, Harvard, Vancouver, ISO, and other styles
12

Dikshit, Abhirup, and Biswajeet Pradhan. "Interpretable and explainable AI (XAI) model for spatial drought prediction." Science of The Total Environment 801 (December 2021): 149797. http://dx.doi.org/10.1016/j.scitotenv.2021.149797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Rampal, Neelesh, Tom Shand, Adam Wooler, and Christo Rautenbach. "Interpretable Deep Learning Applied to Rip Current Detection and Localization." Remote Sensing 14, no. 23 (November 29, 2022): 6048. http://dx.doi.org/10.3390/rs14236048.

Full text
Abstract:
A rip current is a strong, localized current of water which moves along and away from the shore. Recent studies have suggested that drownings due to rip currents are still a major threat to beach safety. Identification of rip currents is important for lifeguards when making decisions on where to designate patrolled areas. The public also require information while deciding where to swim when lifeguards are not on patrol. In the present study we present an artificial intelligence (AI) algorithm that both identifies whether a rip current exists in images/video, and also localizes where that rip current occurs. While there have been some significant advances in AI for rip current detection and localization, there is a lack of research ensuring that an AI algorithm can generalize well to a diverse range of coastal environments and marine conditions. The present study made use of an interpretable AI method, gradient-weighted class-activation maps (Grad-CAM), which is a novel approach for amorphous rip current detection. The training data/images were diverse and encompassed rip currents in a wide variety of environmental settings, ensuring model generalization. An open-access aerial catalogue of rip currents was used for model training. Here, the aerial imagery was also augmented by applying a wide variety of randomized image transformations (e.g., perspective, rotational transforms, and additive noise), which dramatically improves model performance through generalization. To account for diverse environmental settings, a synthetically generated training set, containing fog, shadows, and rain, was also added to the rip current images, thus increasing the training dataset approximately 10-fold. Interpretable AI has dramatically improved the accuracy of unbounded rip current detection, which can correctly classify and localize rip currents about 89% of the time when validated on independent videos from surf-cameras at oblique angles. The novelty also lies in the ability to capture some shape characteristics of the amorphous rip current structure without the need of a predefined bounding box, therefore enabling the use of remote technology like drones. A comparison with well-established coastal image processing techniques is also presented via a short discussion and easy reference table. The strengths and weaknesses of both methods are highlighted and discussed.
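
Grad-CAM itself is straightforward to reproduce; the sketch below follows the standard Keras recipe and assumes a trained CNN whose final convolutional layer is named "last_conv" (the authors' architecture and layer names are not given here).

```python
# Grad-CAM sketch following the standard Keras recipe; `model` is assumed to be
# a trained tf.keras CNN with a final convolutional layer named "last_conv".
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="last_conv", class_index=None):
    """Return a [0, 1] heatmap of the regions driving one class score."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])      # explain the top prediction
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)   # d(score) / d(feature maps)
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))   # one weight per channel
    cam = conv_out[0] @ pooled_grads[..., tf.newaxis]      # weighted sum of maps
    cam = tf.squeeze(tf.nn.relu(cam))                      # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```
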
APA, Harvard, Vancouver, ISO, and other styles
14

Solanke, Abiodun A. "Explainable digital forensics AI: Towards mitigating distrust in AI-based digital forensics analysis using interpretable models." Forensic Science International: Digital Investigation 42 (July 2022): 301403. http://dx.doi.org/10.1016/j.fsidi.2022.301403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Eder, Matthias, Emanuel Moser, Andreas Holzinger, Claire Jean-Quartier, and Fleur Jeanquartier. "Interpretable Machine Learning with Brain Image and Survival Data." BioMedInformatics 2, no. 3 (September 6, 2022): 492–510. http://dx.doi.org/10.3390/biomedinformatics2030031.

Full text
Abstract:
Recent developments in research on artificial intelligence (AI) in medicine deal with the analysis of image data such as Magnetic Resonance Imaging (MRI) scans to support the decision-making of medical personnel. For this purpose, machine learning (ML) algorithms are often used, which do not explain the internal decision-making process at all. Thus, it is often difficult to validate or interpret the results of the applied AI methods. This manuscript aims to overcome this problem by using methods of explainable AI (XAI) to interpret the decision-making of an ML algorithm in the use case of predicting the survival rate of patients with brain tumors based on MRI scans. Therefore, we explore the analysis of brain images together with survival data to predict survival in gliomas with a focus on improving the interpretability of the results. Using the Brain Tumor Segmentation dataset BraTS 2020, we used a well-validated dataset for evaluation and relied on a convolutional neural network structure to improve the explainability of important features by adding Shapley overlays. The trained network models were used to evaluate SHapley Additive exPlanations (SHAP) directly and were not optimized for accuracy. The resulting overfitting of some network structures is therefore seen as a use case of the presented interpretation method. It is shown that the network structure can be validated by experts using visualizations, thus making the decision-making of the method interpretable. Our study highlights the feasibility of combining explainers with 3D voxels and also the fact that the interpretation of prediction results significantly supports the evaluation of results. The implementation in Python is available on GitLab as “XAIforBrainImgSurv”.
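
A minimal sketch of the Shapley-overlay idea follows, assuming a trained Keras CNN and a batch of preprocessed input volumes; it is not the released XAIforBrainImgSurv code.

```python
# Sketch under assumptions: `model` is a trained Keras CNN and `volumes` is an
# array of preprocessed input volumes. Not the released project code.
import numpy as np
import shap

def shap_attribution_maps(model, volumes, n_background=20, n_explain=2):
    """Return SHAP attributions shaped like the inputs, suitable for overlays."""
    background = volumes[:n_background]            # reference distribution
    explainer = shap.GradientExplainer(model, background)
    shap_values = explainer.shap_values(
        volumes[n_background:n_background + n_explain])
    return np.array(shap_values)                   # stacked per-output attributions
```
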
APA, Harvard, Vancouver, ISO, and other styles
16

Vishwarupe, Varad, Prachi M. Joshi, Nicole Mathias, Shrey Maheshwari, Shweta Mhaisalkar, and Vishal Pawar. "Explainable AI and Interpretable Machine Learning: A Case Study in Perspective." Procedia Computer Science 204 (2022): 869–76. http://dx.doi.org/10.1016/j.procs.2022.08.105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Combs, Kara, Mary Fendley, and Trevor Bihl. "A Preliminary Look at Heuristic Analysis for Assessing Artificial Intelligence Explainability." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 8 (June 1, 2020): 61–72. http://dx.doi.org/10.37394/232018.2020.8.9.

Full text
Abstract:
Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks understanding of algorithmic explainability from a human factors’ perspective. This paper presents a repeatable human factors heuristic analysis for XAI with a demonstration on four decision tree classifier algorithms.
APA, Harvard, Vancouver, ISO, and other styles
18

Belle, Vaishak. "The quest for interpretable and responsible artificial intelligence." Biochemist 41, no. 5 (October 18, 2019): 16–19. http://dx.doi.org/10.1042/bio04105016.

Full text
Abstract:
Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in computational biology, finance, law and robotics. However, such a highly positive impact is coupled with significant challenges: how do we understand the decisions suggested by these systems in order that we can trust them? How can they be held accountable for those decisions?
APA, Harvard, Vancouver, ISO, and other styles
19

Calegari, Roberta, Giovanni Ciatto, and Andrea Omicini. "On the integration of symbolic and sub-symbolic techniques for XAI: A survey." Intelligenza Artificiale 14, no. 1 (September 17, 2020): 7–32. http://dx.doi.org/10.3233/ia-190036.

Full text
Abstract:
The more intelligent systems based on sub-symbolic techniques pervade our everyday lives, the less humans can understand them. This is why symbolic approaches are getting more and more attention in the general effort to make AI interpretable, explainable, and trustable. Understanding the current state of the art of AI techniques integrating symbolic and sub-symbolic approaches is then of paramount importance nowadays, in particular in the XAI perspective. This is why this paper provides an overview of the main symbolic/sub-symbolic integration techniques, focussing in particular on those targeting explainable AI systems.
APA, Harvard, Vancouver, ISO, and other styles
20

Vasan Srinivasan, Aditya, and Mona de Boer. "Improving trust in data and algorithms in the medium of AI." Maandblad Voor Accountancy en Bedrijfseconomie 94, no. 3/4 (April 22, 2020): 147–60. http://dx.doi.org/10.5117/mab.94.49425.

Full text
Abstract:
Artificial Intelligence (AI) has great potential to solve a wide spectrum of real-world business problems, but the lack of trust from the perspective of potential users, investors, and other stakeholders towards AI is preventing them from adoption. To build and strengthen trust in AI, technology creators should ensure that the data which is acquired, processed and being fed into the algorithm is accurate, reliable, consistent, relevant, bias-free, and complete. Similarly, the algorithm that is selected, trained, and tested should be explainable, interpretable, transparent, bias-free, reliable, and useful. Most importantly, the algorithm and its outcomes should be auditable and properly governed.
APA, Harvard, Vancouver, ISO, and other styles
21

Alm, Cecilia O., and Alex Hedges. "Visualizing NLP in Undergraduate Students' Learning about Natural Language." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 17 (May 18, 2021): 15480–88. http://dx.doi.org/10.1609/aaai.v35i17.17822.

Full text
Abstract:
We report on the use of open-source natural language processing capabilities in a web-based interface to allow undergraduate students to apply what they have learned about formal natural language structures. The learning activities encourage students to interpret data in new ways, think originally about natural language, and critique the back-end NLP models and algorithms visualized on the user front end. This work is of relevance to AI resources developed for education by focusing on inclusivity of students from many disciplinary backgrounds. Specifically, we comprehensively extended a web-based system with new resources. To test the students' reactions to NLP analyses that offer insights into both the strengths and limitations of AI systems, we incorporated a range of automated analyses focused on language-independent processing or meaning representations which still represent challenges for NLP. We conducted a survey-based evaluation with students in open-ended case-based assignments in undergraduate coursework. Responses indicated that the students reinforced their knowledge, applied critical thinking about language and NLP applications, and used the application not to solve the assignment for them, but as a tool in their own effort to address the task. We further discuss how using interpretable visualizations of system decisions is an opportunity to learn about ethical issues in NLP, and how making AI systems interpretable may broaden multidisciplinary interest in AI in early educational experiences.
APA, Harvard, Vancouver, ISO, and other styles
22

Baldini, Ioana, Clark Barrett, Antonio Chella, Carlos Cinelli, David Gamez, Leilani Gilpin, Knut Hinkelmann, et al. "Reports of the AAAI 2019 Spring Symposium Series." AI Magazine 40, no. 3 (September 30, 2019): 59–66. http://dx.doi.org/10.1609/aimag.v40i3.5181.

Full text
Abstract:
The AAAI 2019 Spring Series was held Monday through Wednesday, March 25–27, 2019 on the campus of Stanford University, adjacent to Palo Alto, California. The titles of the nine symposia were Artificial Intelligence, Autonomous Machines, and Human Awareness: User Interventions, Intuition and Mutually Constructed Context; Beyond Curve Fitting — Causation, Counterfactuals and Imagination-Based AI; Combining Machine Learning with Knowledge Engineering; Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness; Privacy- Enhancing Artificial Intelligence and Language Technologies; Story-Enabled Intelligence; Towards Artificial Intelligence for Collaborative Open Science; Towards Conscious AI Systems; and Verification of Neural Networks.
APA, Harvard, Vancouver, ISO, and other styles
23

Hah, Hyeyoung, and Deana Goldin. "Moving toward AI-assisted decision-making: Observation on clinicians’ management of multimedia patient information in synchronous and asynchronous telehealth contexts." Health Informatics Journal 28, no. 1 (January 2022): 146045822210770. http://dx.doi.org/10.1177/14604582221077049.

Full text
Abstract:
Background. Artificial intelligence (AI) intends to support clinicians’ patient diagnosis decisions by processing and identifying insights from multimedia patient information. Objective. We explored clinicians’ current decision-making patterns using multimedia patient information (MPI) provided by AI algorithms and identified areas where AI can support clinicians in diagnostic decision-making. Design. We recruited 87 advanced practice nursing (APN) students who had experience making diagnostic decisions using AI algorithms under various care contexts, including telehealth and other healthcare modalities. The participants described their diagnostic decision-making experiences using videos, images, and audio-based MPI. Results. Clinicians processed multimedia patient information differentially such that their focus, selection, and utilization of MPI influence diagnosis and satisfaction levels. Conclusions and implications. To streamline collaboration between AI and clinicians across healthcare contexts, AI should understand clinicians’ patterns of MPI processing under various care environments and provide them with interpretable analytic results. Furthermore, clinicians must be trained with the interface and contents of AI technology and analytic assistance.
APA, Harvard, Vancouver, ISO, and other styles
24

Gómez, Blas, Estefanía Coronado, José Villalón, and Antonio Garrido. "Intelli-GATS: Dynamic Selection of the Wi-Fi Multicast Transmission Policy Using Interpretable-AI." Wireless Communications and Mobile Computing 2022 (November 30, 2022): 1–18. http://dx.doi.org/10.1155/2022/7922273.

Full text
Abstract:
COVID-19 has changed the way we use networks, as multimedia content now represents an even more significant portion of the traffic due to the rise in remote education and telecommuting. In this context, in which Wi-Fi is the predominant radio access technology (RAT), multicast transmissions have become a way to reduce overhead in the network when many users access the same content. However, Wi-Fi lacks a versatile multicast transmission method for ensuring efficiency, scalability, and reliability. Although the IEEE 802.11aa amendment defines different multicast operation modes, these perform well only in particular situations and do not adapt to different channel conditions. Moreover, methods for dynamically adapting them to the situation do not exist. In view of these shortcomings, artificial intelligence (AI) and machine learning (ML) have emerged as solutions to automating network management. However, the most accurate models usually operate as black boxes, triggering mistrust among human experts. Accordingly, research efforts have moved towards using Interpretable-AI models that humans can easily track. Thus, this work presents an Interpretable-AI solution designed to dynamically select the best multicast operation mode to improve the scalability and efficiency of this kind of transmission. The evaluation shows that our approach outperforms the standard by up to 38%.
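
The abstract does not spell out which interpretable model family is used, so the sketch below is only a hypothetical illustration: a shallow decision tree mapping invented channel indicators to an IEEE 802.11aa multicast mode, printed as rules a network engineer can audit.

```python
# Hypothetical illustration (feature names, data, and labels are invented):
# an interpretable rule set for choosing an IEEE 802.11aa multicast mode.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Invented channel indicators: receivers in the group, mean RSSI, frame loss rate.
X = np.column_stack([rng.integers(1, 60, 500),
                     rng.uniform(-85, -40, 500),
                     rng.uniform(0.0, 0.3, 500)])
# Invented "best mode" labels: 0 = legacy multicast, 1 = DMS, 2 = GCR.
y = np.where(X[:, 0] < 5, 1, np.where(X[:, 2] > 0.15, 2, 0))

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(
    tree, feature_names=["num_receivers", "mean_rssi_dbm", "loss_rate"]))
# The printed thresholds can be audited and adjusted by a network engineer,
# which is the point of preferring an interpretable model over a black box.
```
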
APA, Harvard, Vancouver, ISO, and other styles
25

Weitz, Katharina, Teena Hassan, Ute Schmid, and Jens-Uwe Garbas. "Deep-learned faces of pain and emotions: Elucidating the differences of facial expressions with the help of explainable AI methods." tm - Technisches Messen 86, no. 7-8 (July 26, 2019): 404–12. http://dx.doi.org/10.1515/teme-2019-0024.

Full text
Abstract:
Deep neural networks are successfully used for object and face recognition in images and videos. In order to be able to apply such networks in practice, for example in hospitals as a pain recognition tool, the current procedures are only suitable to a limited extent. The advantage of deep neural methods is that they can learn complex non-linear relationships between raw data and target classes without limiting themselves to a set of hand-crafted features provided by humans. However, the disadvantage is that due to the complexity of these networks, it is not possible to interpret the knowledge that is stored inside the network. It is a black-box learning procedure. Explainable Artificial Intelligence (AI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this paper is to investigate the explainable AI methods Layer-wise Relevance Propagation (LRP) and Local Interpretable Model-agnostic Explanations (LIME). These approaches are applied to explain how a deep neural network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.
APA, Harvard, Vancouver, ISO, and other styles
26

Fuhrman, Jordan D., Naveena Gorre, Qiyuan Hu, Hui Li, Issam El Naqa, and Maryellen L. Giger. "A review of explainable and interpretable AI with applications in COVID‐19 imaging." Medical Physics 49, no. 1 (December 7, 2021): 1–14. http://dx.doi.org/10.1002/mp.15359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Santala, Onni E., Jukka A. Lipponen, Helena Jäntti, Tuomas T. Rissanen, Mika P. Tarvainen, Tomi P. Laitinen, Tiina M. Laitinen, et al. "Continuous mHealth Patch Monitoring for the Algorithm-Based Detection of Atrial Fibrillation: Feasibility and Diagnostic Accuracy Study." JMIR Cardio 6, no. 1 (June 21, 2022): e31230. http://dx.doi.org/10.2196/31230.

Full text
Abstract:
Background The detection of atrial fibrillation (AF) is a major clinical challenge as AF is often paroxysmal and asymptomatic. Novel mobile health (mHealth) technologies could provide a cost-effective and reliable solution for AF screening. However, many of these techniques have not been clinically validated. Objective The purpose of this study is to evaluate the feasibility and reliability of artificial intelligence (AI) arrhythmia analysis for AF detection with an mHealth patch device designed for personal well-being. Methods Patients (N=178) with an AF (n=79, 44%) or sinus rhythm (n=99, 56%) were recruited from the emergency care department. A single-lead, 24-hour, electrocardiogram-based heart rate variability (HRV) measurement was recorded with the mHealth patch device and analyzed with a novel AI arrhythmia analysis software. Simultaneously registered 3-lead electrocardiograms (Holter) served as the gold standard for the final rhythm diagnostics. Results Of the HRV data produced by the single-lead mHealth patch, 81.5% (3099/3802 hours) were interpretable, and the subject-based median for interpretable HRV data was 99% (25th percentile=77% and 75th percentile=100%). The AI arrhythmia detection algorithm detected AF correctly in all patients in the AF group and suggested the presence of AF in 5 patients in the control group, resulting in a subject-based AF detection accuracy of 97.2%, a sensitivity of 100%, and a specificity of 94.9%. The time-based AF detection accuracy, sensitivity, and specificity of the AI arrhythmia detection algorithm were 98.7%, 99.6%, and 98.0%, respectively. Conclusions The 24-hour HRV monitoring by the mHealth patch device enabled accurate automatic AF detection. Thus, the wearable mHealth patch device with AI arrhythmia analysis is a novel method for AF screening. Trial Registration ClinicalTrials.gov NCT03507335; https://clinicaltrials.gov/ct2/show/NCT03507335
APA, Harvard, Vancouver, ISO, and other styles
28

Cavallaro, Massimo, Ed Moran, Benjamin Collyer, Noel D. McCarthy, Christopher Green, and Matt J. Keeling. "Informing antimicrobial stewardship with explainable AI." PLOS Digital Health 2, no. 1 (January 5, 2023): e0000162. http://dx.doi.org/10.1371/journal.pdig.0000162.

Full text
Abstract:
The accuracy and flexibility of artificial intelligence (AI) systems often come at the cost of a decreased ability to offer an intuitive explanation of their predictions. This hinders trust and discourages adoption of AI in healthcare, exacerbated by concerns over liabilities and risks to patients’ health in case of misdiagnosis. Providing an explanation for a model’s prediction is possible due to recent advances in the field of interpretable machine learning. We considered a data set of hospital admissions linked to records of antibiotic prescriptions and susceptibilities of bacterial isolates. An appropriately trained gradient boosted decision tree algorithm, supplemented by a Shapley explanation model, predicts the likely antimicrobial drug resistance, with the odds of resistance informed by characteristics of the patient, admission data, and historical drug treatments and culture test results. Applying this AI-based system, we found that it substantially reduces the risk of mismatched treatment compared with the observed prescriptions. The Shapley values provide an intuitive association between observations/data and outcomes; the associations identified are broadly consistent with expectations based on prior knowledge from health specialists. The results, and the ability to attribute confidence and explanations, support the wider adoption of AI in healthcare.
APA, Harvard, Vancouver, ISO, and other styles
29

Hijazi, Haytham, Manar Abu Talib, Ahmad Hasasneh, Ali Bou Nassif, Nafisa Ahmed, and Qassim Nasir. "Wearable Devices, Smartphones, and Interpretable Artificial Intelligence in Combating COVID-19." Sensors 21, no. 24 (December 17, 2021): 8424. http://dx.doi.org/10.3390/s21248424.

Full text
Abstract:
Physiological measures, such as heart rate variability (HRV) and beats per minute (BPM), can be powerful health indicators of respiratory infections. HRV and BPM can be acquired through widely available wrist-worn biometric wearables and smartphones. Successive abnormal changes in these indicators could potentially be an early sign of respiratory infections such as COVID-19. Thus, wearables and smartphones should play a significant role in combating COVID-19 through the early detection supported by other contextual data and artificial intelligence (AI) techniques. In this paper, we investigate the role of the heart measurements (i.e., HRV and BPM) collected from wearables and smartphones in demonstrating early onsets of the inflammatory response to the COVID-19. The AI framework consists of two blocks: an interpretable prediction model to classify the HRV measurements status (as normal or affected by inflammation) and a recurrent neural network (RNN) to analyze users’ daily status (i.e., textual logs in a mobile application). Both classification decisions are integrated to generate the final decision as either “potentially COVID-19 infected” or “no evident signs of infection”. We used a publicly available dataset, which comprises 186 patients with more than 3200 HRV readings and numerous user textual logs. The first evaluation of the approach showed an accuracy of 83.34 ± 1.68% with 0.91, 0.88, 0.89 precision, recall, and F1-Score, respectively, in predicting the infection two days before the onset of the symptoms supported by a model interpretation using the local interpretable model-agnostic explanations (LIME).
APA, Harvard, Vancouver, ISO, and other styles
30

De Sousa Ribeiro, Manuel, and João Leite. "Aligning Artificial Neural Networks and Ontologies towards Explainable AI." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 4932–40. http://dx.doi.org/10.1609/aaai.v35i6.16626.

Full text
Abstract:
Neural networks have been the key to solve a variety of different problems. However, neural network models are still regarded as black boxes, since they do not provide any human-interpretable evidence as to why they output a certain result. We address this issue by leveraging on ontologies and building small classifiers that map a neural network model's internal state to concepts from an ontology, enabling the generation of symbolic justifications for the output of neural network models. Using an image classification problem as testing ground, we discuss how to map the internal state of a neural network to the concepts of an ontology, examine whether the results obtained by the established mappings match our understanding of the mapped concepts, and analyze the justifications obtained through this method.
APA, Harvard, Vancouver, ISO, and other styles
31

Connie, Tee, Yee Fan Tan, Michael Kah Ong Goh, Hock Woon Hon, Zulaikha Kadim, and Li Pei Wong. "Explainable health prediction from facial features with transfer learning." Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 2491–503. http://dx.doi.org/10.3233/jifs-211737.

Full text
Abstract:
In the recent years, Artificial Intelligence (AI) has been widely deployed in the healthcare industry. The new AI technology enables efficient and personalized healthcare systems for the public. In this paper, transfer learning with pre-trained VGGFace model is applied to identify sick symptoms based on the facial features of a person. As the deep learning model’s operation is unknown for making a decision, this paper investigates the use of Explainable AI (XAI) techniques for soliciting explanations for the predictions made by the model. Various XAI techniques including Integrated Gradient, Explainable region-based AI (XRAI) and Local Interpretable Model-Agnostic Explanations (LIME) are studied. XAI is crucial to increase the model’s transparency and reliability for practical deployment. Experimental results demonstrate that the attribution method can give proper explanations for the decisions made by highlighting important attributes in the images. The facial features that account for positive and negative classes predictions are highlighted appropriately for effective visualization. XAI can help to increase accountability and trustworthiness of the healthcare system as it provides insights for understanding how a conclusion is derived from the AI model.
APA, Harvard, Vancouver, ISO, and other styles
32

Yao, Melissa Min-Szu, Hao Du, Mikael Hartman, Wing P. Chan, and Mengling Feng. "End-to-End Calcification Distribution Pattern Recognition for Mammograms: An Interpretable Approach with GNN." Diagnostics 12, no. 6 (June 2, 2022): 1376. http://dx.doi.org/10.3390/diagnostics12061376.

Full text
Abstract:
Purpose: We aimed to develop a novel interpretable artificial intelligence (AI) model algorithm focusing on automatic detection and classification of various patterns of calcification distribution in mammographic images using a unique graph convolution approach. Materials and methods: Images from 292 patients, which showed calcifications according to the mammographic reports and diagnosed breast cancers, were collected. The calcification distributions were classified as diffuse, segmental, regional, grouped, or linear. Excluded were mammograms with (1) breast cancer with multiple lexicons such as mass, asymmetry, or architectural distortion without calcifications; (2) hidden calcifications that were difficult to mark; or (3) incomplete medical records. Results: A graph-convolutional-network-based model was developed. A total of 581 mammographic images from 292 cases of breast cancer were divided based on the calcification distribution pattern: diffuse (n = 67), regional (n = 115), group (n = 337), linear (n = 8), or segmental (n = 54). The classification performances were measured using metrics including precision, recall, F1 score, accuracy, and multi-class area under the receiver operating characteristic curve. The proposed model achieved a precision of 0.522 ± 0.028, sensitivity of 0.643 ± 0.017, specificity of 0.847 ± 0.009, F1 score of 0.559 ± 0.018, accuracy of 64.325 ± 1.694%, and area under the curve of 0.745 ± 0.030; thus, the method was found to be superior compared to all baseline models. The predicted linear and diffuse classifications were highly similar to the ground truth, and the predicted grouped and regional classifications were also superior compared to baseline models. The prediction results are interpretable using visualization methods to highlight the important calcification nodes in graphs. Conclusions: The proposed deep neural network framework is an AI solution that automatically detects and classifies calcification distribution patterns on mammographic images highly suspected of showing breast cancers. Further study of the AI model in an actual clinical setting and additional data collection will improve its performance.
APA, Harvard, Vancouver, ISO, and other styles
33

Nguyen Thu Hien, Nguyen Phuong Nhung, and Nguyen Tuan Linh. "Adaptive neuro-fuzzy inference system classifier with interpretability for cancer diagnostic." Journal of Military Science and Technology, CSCE6 (December 30, 2022): 56–64. http://dx.doi.org/10.54939/1859-1043.j.mst.csce6.2022.56-64.

Full text
Abstract:
Clinical outcome analysis using patient medical data facilitates clinical decision-making and increases prognostic accuracy. Recently, deep learning (DL) with learning big data features has shown expert-level accuracy in predicting clinical outcomes. Many of these sophisticated machine learning models, however, lack interpretability, creating significant trust-related healthcare issues. This necessarily requires the need for interpretable AI systems capable of explaining their decisions. In this respect, the paper proposes an interpretable classifier of the adaptive neuro-fuzzy inference method (iANFIS), which combines the fuzzy inference system with critical rule selection by attention mechanism. The rule-based processing of ANFIS helps the user to understand the behavior of the proposed model. The essential activated rule and the most important input features that predict the outcome are identified by the attention-based rule selector. We conduct two experiments with two cancer diagnostic datasets for verifying the performance of the proposed iANFIS. By using recursive rule elimination (RRE) to prune fuzzy rules, the model’s complexity is significantly reduced while preserving system efficiency that makes it more interpretable.
APA, Harvard, Vancouver, ISO, and other styles
34

Das, Arun, Jeffrey Mock, Yufei Huang, Edward Golob, and Peyman Najafirad. "Interpretable Self-Supervised Facial Micro-Expression Learning to Predict Cognitive State and Neurological Disorders." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 818–26. http://dx.doi.org/10.1609/aaai.v35i1.16164.

Full text
Abstract:
Human behavior is the confluence of output from voluntary and involuntary motor systems. The neural activities that mediate behavior, from individual cells to distributed networks, are in a state of constant flux. Artificial intelligence (AI) research over the past decade shows that behavior, in the form of facial muscle activity, can reveal information about fleeting voluntary and involuntary motor system activity related to emotion, pain, and deception. However, the AI algorithms often lack an explanation for their decisions, and learning meaningful representations requires large datasets labeled by a subject-matter expert. Motivated by the success of using facial muscle movements to classify brain states and the importance of learning from small amounts of data, we propose an explainable self-supervised representation-learning paradigm that learns meaningful temporal facial muscle movement patterns from limited samples. We validate our methodology by carrying out comprehensive empirical study to predict future speech behavior in a real-world dataset of adults who stutter (AWS). Our explainability study found facial muscle movements around the eyes (p
APA, Harvard, Vancouver, ISO, and other styles
35

De, Tanusree, Prasenjit Giri, Ahmeduvesh Mevawala, Ramyasri Nemani, and Arati Deo. "Explainable AI: A Hybrid Approach to Generate Human-Interpretable Explanation for Deep Learning Prediction." Procedia Computer Science 168 (2020): 40–48. http://dx.doi.org/10.1016/j.procs.2020.02.255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Chao, Wenhan, Xin Jiang, Zhunchen Luo, Yakun Hu, and Wenjia Ma. "Interpretable Charge Prediction for Criminal Cases with Dynamic Rationale Attention." Journal of Artificial Intelligence Research 66 (November 25, 2019): 743–64. http://dx.doi.org/10.1613/jair.1.11377.

Full text
Abstract:
Charge prediction, which aims to determine appropriate charges for criminal cases based on textual fact descriptions, is an important technology in the field of AI&Law. Previous works focus on improving prediction accuracy, ignoring the interpretability, which limits the methods’ applicability. In this work, we propose a deep neural framework to extract short but charge-decisive text snippets – rationales – from input fact description, as the interpretation of charge prediction. To solve the scarcity problem of rationale-annotated corpus, rationales are extracted in a reinforcement style with the only supervision in the form of charge labels. We further propose a dynamic rationale attention mechanism to better utilize the information in extracted rationales and predict the charges. Experimental results show that besides providing charge prediction interpretation, our approach can also capture subtle details to help charge prediction.
APA, Harvard, Vancouver, ISO, and other styles
37

Qadir, Junaid, Mohammad Qamar Islam, and Ala Al-Fuqaha. "Toward accountable human-centered AI: rationale and promising directions." Journal of Information, Communication and Ethics in Society 20, no. 2 (February 10, 2022): 329–42. http://dx.doi.org/10.1108/jices-06-2021-0059.

Full text
Abstract:
Purpose Along with the various beneficial uses of artificial intelligence (AI), there are various unsavory concomitants including the inscrutability of AI tools (and the opaqueness of their mechanisms), the fragility of AI models under adversarial settings, the vulnerability of AI models to bias throughout their pipeline, the high planetary cost of running large AI models and the emergence of exploitative surveillance capitalism-based economic logic built on AI technology. This study aims to document these harms of AI technology and study how these technologies and their developers and users can be made more accountable. Design/methodology/approach Due to the nature of the problem, a holistic, multi-pronged approach is required to understand and counter these potential harms. This paper identifies the rationale for urgently focusing on human-centered AI and provide an outlook of promising directions including technical proposals. Findings AI has the potential to benefit the entire society, but there remains an increased risk for vulnerable segments of society. This paper provides a general survey of the various approaches proposed in the literature to make AI technology more accountable. This paper reports that the development of ethical accountable AI design requires the confluence and collaboration of many fields (ethical, philosophical, legal, political and technical) and that lack of diversity is a problem plaguing the state of the art in AI. Originality/value This paper provides a timely synthesis of the various technosocial proposals in the literature spanning technical areas such as interpretable and explainable AI; algorithmic auditability; as well as policy-making challenges and efforts that can operationalize ethical AI and help in making AI accountable. This paper also identifies and shares promising future directions of research.
APA, Harvard, Vancouver, ISO, and other styles
38

Cervera-Lierta, Alba, Mario Krenn, and Alán Aspuru-Guzik. "Design of quantum optical experiments with logic artificial intelligence." Quantum 6 (October 13, 2022): 836. http://dx.doi.org/10.22331/q-2022-10-13-836.

Full text
Abstract:
Logic Artificial Intelligence (AI) is a subfield of AI where variables can take two defined arguments, True or False, and are arranged in clauses that follow the rules of formal logic. Several problems that span from physical systems to mathematical conjectures can be encoded into these clauses and solved by checking their satisfiability (SAT). In contrast to machine learning approaches where the results can be approximations or local minima, Logic AI delivers formal and mathematically exact solutions to those problems. In this work, we propose the use of logic AI for the design of optical quantum experiments. We show how to map into a SAT problem the experimental preparation of an arbitrary quantum state and propose a logic-based algorithm, called Klaus, to find an interpretable representation of the photonic setup that generates it. We compare the performance of Klaus with the state-of-the-art algorithm for this purpose based on continuous optimization. We also combine both logic and numeric strategies to find that the use of logic AI significantly improves the resolution of this problem, paving the path to developing more formal-based approaches in the context of quantum physics experiments.
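
Klaus itself is not reproduced here; the toy sketch below merely illustrates, using the python-sat package, what encoding a few design constraints as CNF clauses and checking their satisfiability looks like.

```python
# Toy illustration only (not the Klaus algorithm): encode a few design
# constraints as CNF clauses and check satisfiability with python-sat.
from pysat.solvers import Glucose3

# Boolean variables 1..3 stand for "optical component i is present".
solver = Glucose3()
solver.add_clause([1, 2])     # at least one of components 1 and 2
solver.add_clause([-1, -2])   # ...but not both
solver.add_clause([-2, 3])    # component 2 requires component 3

if solver.solve():
    print("satisfiable, e.g.:", solver.get_model())   # one concrete assignment
else:
    print("no assignment satisfies the constraints")
solver.delete()
```
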
APA, Harvard, Vancouver, ISO, and other styles
39

Kitamura, Shinji, Kensaku Takahashi, Yizhen Sang, Kazuhiko Fukushima, Kenji Tsuji, and Jun Wada. "Deep Learning Could Diagnose Diabetic Nephropathy with Renal Pathological Immunofluorescent Images." Diagnostics 10, no. 7 (July 9, 2020): 466. http://dx.doi.org/10.3390/diagnostics10070466.

Full text
Abstract:
Artificial Intelligence (AI) imaging diagnosis is developing, making enormous steps forward in medical fields. Regarding diabetic nephropathy (DN), medical doctors diagnose them with clinical course, clinical laboratory data and renal pathology, mainly evaluate with light microscopy images rather than immunofluorescent images because there are no characteristic findings in immunofluorescent images for DN diagnosis. Here, we examined the possibility of whether AI could diagnose DN from immunofluorescent images. We collected renal immunofluorescent images from 885 renal biopsy patients in our hospital, and we created a dataset that contains six types of immunofluorescent images of IgG, IgA, IgM, C3, C1q and Fibrinogen for each patient. Using the dataset, 39 programs worked without errors (Area under the curve (AUC): 0.93). Five programs diagnosed DN completely with immunofluorescent images (AUC: 1.00). By analyzing with Local interpretable model-agnostic explanations (Lime), the AI focused on the peripheral lesion of DN glomeruli. On the other hand, the nephrologist diagnostic ratio (AUC: 0.75833) was slightly inferior to AI diagnosis. These findings suggest that DN could be diagnosed only by immunofluorescent images by deep learning. AI could diagnose DN and identify classified unknown parts with the immunofluorescent images that nephrologists usually do not use for DN diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
40

Makridis, Christos, Seth Hurley, Mary Klote, and Gil Alterovitz. "Ethical Applications of Artificial Intelligence: Evidence From Health Research on Veterans." JMIR Medical Informatics 9, no. 6 (June 2, 2021): e28921. http://dx.doi.org/10.2196/28921.

Full text
Abstract:
Background Despite widespread agreement that artificial intelligence (AI) offers significant benefits for individuals and society at large, there are also serious challenges to overcome with respect to its governance. Recent policymaking has focused on establishing principles for the trustworthy use of AI. Adhering to these principles is especially important for ensuring that the development and application of AI raises economic and social welfare, including among vulnerable groups and veterans. Objective We explore the newly developed principles around trustworthy AI and how they can be readily applied at scale to vulnerable groups that are potentially less likely to benefit from technological advances. Methods Using the US Department of Veterans Affairs as a case study, we explore the principles of trustworthy AI that are of particular interest for vulnerable groups and veterans. Results We focus on three principles: (1) designing, developing, acquiring, and using AI so that the benefits of its use significantly outweigh the risks and the risks are assessed and managed; (2) ensuring that the application of AI occurs in well-defined domains and is accurate, effective, and fit for the intended purposes; and (3) ensuring that the operations and outcomes of AI applications are sufficiently interpretable and understandable by all subject matter experts, users, and others. Conclusions These principles and applications apply more generally to vulnerable groups, and adherence to them can allow the VA and other organizations to continue modernizing their technology governance, leveraging the gains of AI while simultaneously managing its risks.
APA, Harvard, Vancouver, ISO, and other styles
41

Roopaei*, Mehdi, Hunter Durian, and Joey Godiska. "Explainable AI in Internet of Control System Distributed at Edge-Cloud Architecture." International Journal of Engineering and Advanced Technology 10, no. 3 (February 28, 2021): 136–42. http://dx.doi.org/10.35940/ijeat.c2246.0210321.

Full text
Abstract:
Many current control systems are restricted to highly controlled environments. In complicated, dynamic and unstructured environments such as autonomous vehicles, control systems must be able to deal with increasingly complex state situations. In complex systems with a large number of states, optimal planners are often too slow, and developing heuristic tactics for high-level goals can be challenging. AI control is an attractive alternative to traditional control architectures due to its capability to approximate optimal solutions in high-dimensional state spaces without requiring a human-designed heuristic. Explainable AI control attempts to produce a human-readable control command that is both interpretable and manipulable. This paper proposes an architecture for explainable AI control in an edge-cloud environment in which connected autonomous agents need to be controlled. In this architecture, the designed controller is distributed across the edge and cloud platforms using explainable AI. This architecture, which could be termed the Internet of Control Systems (IoCS), can be applied as distributed tactics for the control of connected autonomous agents. The IoCS attempts to unleash AI services using resources at the edge, near the autonomous agents, and to create an intelligent edge for dynamic, adaptive and optimized AI control.
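The paper describes the edge-cloud split only at the architectural level; the following sketch illustrates one possible shape of the edge side, under the assumption (not from the paper) that the cloud exposes an HTTP endpoint returning a control command plus a human-readable explanation. The URL, payload fields and fallback gain are hypothetical.

```python
# Minimal sketch of an edge agent that asks a cloud controller for a command
# and falls back to a simple local controller when the cloud is unreachable.
import requests

CLOUD_URL = "https://cloud.example.com/iocs/control"  # hypothetical endpoint

def edge_control_step(state, setpoint, timeout_s=0.05):
    try:
        resp = requests.post(
            CLOUD_URL, json={"state": state, "setpoint": setpoint}, timeout=timeout_s
        )
        resp.raise_for_status()
        body = resp.json()
        return body["command"], body.get("explanation", "")
    except requests.RequestException:
        # Local fallback: proportional control on the tracking error,
        # reported with a plain-language explanation of the decision.
        kp = 0.5
        command = kp * (setpoint - state)
        return command, "edge fallback: proportional control on tracking error"
```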
APA, Harvard, Vancouver, ISO, and other styles
42

Martens, Harald. "Interpretable machine learning with an eye for the physics: Hyperspectral Vis/NIR “video” of drying wood analyzed by hybrid subspace modeling." NIR news 32, no. 7-8 (November 25, 2021): 24–32. http://dx.doi.org/10.1177/09603360211062706.

Full text
Abstract:
Chemometric multivariate analysis based on low-dimensional linear and bilinear data modelling is presented as a fast and interpretable alternative to fancier “AI” for practical use of Big Data streams from hyperspectral “video” cameras. The purpose of the present illustration is to find, quantify and understand the various known and unknown factors affecting the process of drying moist wood. It involves an “interpretable machine learning” approach that analyses more than 350 million absorbance spectra, requiring 418 GB of data storage, without the use of black-box operations. The 159-channel high-resolution hyperspectral wood “video” in the 500–1005 nm range was reduced to five known and four unknown variation components of physical and chemical nature, each with its spectral, spatial and temporal parameters quantified. Together, this 9-dimensional linear model explained more than 99.98% of the total input variance.
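For readers unfamiliar with bilinear subspace modelling, the sketch below shows the general pattern of reducing a matrix of absorbance spectra to a few components with spectral loadings and per-pixel scores. The data, array shapes and component count are illustrative stand-ins, not the hybrid modelling used in the paper.

```python
# Minimal sketch: PCA-style bilinear decomposition of absorbance spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pixels, n_channels = 10_000, 159            # pixels x spectral channels
X = rng.normal(size=(n_pixels, n_channels))   # stand-in for absorbance spectra

pca = PCA(n_components=9)                     # a small number of components
scores = pca.fit_transform(X)                 # spatial/temporal scores per pixel
loadings = pca.components_                    # spectral loadings per component

explained = pca.explained_variance_ratio_.sum()
print(f"{explained:.4%} of total variance explained by 9 components")
```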
APA, Harvard, Vancouver, ISO, and other styles
43

Marten, Dennis, Carsten Hilgenfeld, and Andreas Heuer. "Scalable In-Database Machine Learning for the Prediction of Port-to-Port Routes." Journal für Mobilität und Verkehr, no. 6 (November 10, 2020): 2–10. http://dx.doi.org/10.34647/jmv.nr6.id42.

Full text
Abstract:
The correct prediction of subsequent port-to-port routes plays an integral part in maritime logistics and is therefore essential for many further tasks, such as accurate prediction of the estimated time of arrival. In this paper we present a scalable AI-based approach to predicting a vessel's upcoming port destinations from historical AIS data. The presented method is mainly intended as a fill-in for cases where the AIS destination entry of a vessel is not interpretable. We describe how one can build a stable and efficient in-database AI solution based on Markov models that is suited for massively parallel prediction tasks with high accuracy. The presented research is part of the PRESEA project (“Real-time based maritime traffic forecast”).
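To make the Markov-model idea concrete, here is a minimal in-memory sketch of a first-order next-port predictor estimated from historical port-call sequences. The toy voyages are invented, and the paper's in-database implementation is not reproduced here.

```python
# Minimal sketch: first-order Markov model for next-port prediction.
from collections import Counter, defaultdict

voyages = [  # invented historical port-call sequences
    ["ROTTERDAM", "HAMBURG", "GDANSK"],
    ["ROTTERDAM", "ANTWERP", "HAMBURG"],
    ["ANTWERP", "HAMBURG", "GDANSK"],
]

# Count observed transitions between consecutive ports.
transitions = defaultdict(Counter)
for route in voyages:
    for prev_port, next_port in zip(route, route[1:]):
        transitions[prev_port][next_port] += 1

def predict_next_port(current_port):
    counts = transitions.get(current_port)
    if not counts:
        return None  # unseen port: no prediction possible
    return counts.most_common(1)[0][0]

print(predict_next_port("ROTTERDAM"))  # "HAMBURG" in this toy example
```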
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Quan, Qian Du, and Guohua Liu. "A whole-process interpretable and multi-modal deep reinforcement learning for diagnosis and analysis of Alzheimer’s disease." Journal of Neural Engineering 18, no. 6 (December 1, 2021): 066032. http://dx.doi.org/10.1088/1741-2552/ac37cc.

Full text
Abstract:
Objective. Alzheimer’s disease (AD), a common disease of the elderly with unknown etiology, adversely affects many people, and its burden is growing with the aging of the population and the trend toward earlier onset. Current artificial intelligence (AI) methods based on individual information or magnetic resonance imaging (MRI) can address diagnostic sensitivity and specificity, but still face the challenges of interpretability and clinical feasibility. In this study, we propose an interpretable multimodal deep reinforcement learning model for inferring pathological features and diagnosing AD. Approach. First, for better clinical feasibility, the compressed-sensing MRI image is reconstructed using an interpretable deep reinforcement learning model. Then, the reconstructed MRI is input into a fully convolutional neural network to generate a pixel-level disease probability risk map (DPM) of the whole brain for AD. The DPM of important brain regions and the individual information are then input into an attention-based fully deep neural network to obtain the diagnosis results and analyze the biomarkers. We used 1349 multi-center samples to construct and test the model. Main results. The model obtained areas under the curve of 99.6% ± 0.2%, 97.9% ± 0.2%, and 96.1% ± 0.3% on ADNI, AIBL and NACC, respectively. The model also provides an effective analysis of multimodal pathology, predicting the imaging biomarkers in MRI and the weight of each individual item of information. Significance. The designed deep reinforcement learning model can not only diagnose AD accurately but also analyze potential biomarkers. It builds a bridge between clinical practice and AI diagnosis and offers a viewpoint on the interpretability of AI technology.
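The final fusion step (region-level disease probabilities combined with individual information through attention) can be sketched generically as below. This is not the authors' architecture; the dimensions, region count and layer sizes are made up for illustration.

```python
# Minimal sketch: attention-weighted fusion of per-region features with
# individual (demographic/clinical) information, followed by a classifier head.
import torch
import torch.nn as nn

class AttentionFusionClassifier(nn.Module):
    def __init__(self, n_regions=16, region_dim=8, info_dim=4, n_classes=2):
        super().__init__()
        self.attn = nn.Linear(region_dim, 1)          # one attention score per region
        self.head = nn.Sequential(
            nn.Linear(region_dim + info_dim, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, region_feats, info):
        # region_feats: (batch, n_regions, region_dim); info: (batch, info_dim)
        weights = torch.softmax(self.attn(region_feats), dim=1)  # region attention
        pooled = (weights * region_feats).sum(dim=1)             # weighted pooling
        logits = self.head(torch.cat([pooled, info], dim=1))
        return logits, weights  # weights indicate which regions drove the decision

model = AttentionFusionClassifier()
logits, attn = model(torch.randn(2, 16, 8), torch.randn(2, 4))
```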
APA, Harvard, Vancouver, ISO, and other styles
45

Sucipto, Kathleen, Archit Khosla, Michael Drage, Yilan Wang, Darren Fahy, Mary Lin, Murray Resnick, et al. "QUANTITATIVE AND EXPLAINABLE ARTIFICIAL INTELLIGENCE (AI)-POWERED APPROACHES TO PREDICT ULCERATIVE COLITIS DISEASE ACTIVITY FROM HEMATOXYLIN AND EOSIN (H&E)-STAINED WHOLE SLIDE IMAGES (WSI)." Inflammatory Bowel Diseases 29, Supplement_1 (January 26, 2023): S22—S23. http://dx.doi.org/10.1093/ibd/izac247.042.

Full text
Abstract:
BACKGROUND: Microscopic inflammation has been shown to be an important indicator of disease activity in ulcerative colitis (UC). However, manual histologic scoring is semi-quantitative and subject to interobserver variation, and AI-based solutions often lack interpretability. Here we report two distinct quantitative approaches to predict disease activity scores and histological remission using AI-powered digital pathology. Both the random forest classifier (RFC) and graph neural network (GNN) further provide explainability and biological insight by identifying histological features informing model predictions. METHODS: Convolutional neural networks (CNNs) were developed using >162k annotations on 820 WSI of H&E-stained colorectal biopsies for pixel-level identification of tissue regions (e.g. crypt abscesses, erosion/ulceration) and cell types (e.g. neutrophils, plasma cells). All WSI were scored by 5 board-certified pathologists using the Nancy Histological Index (NHI) to establish consensus ground truth. A rich, quantitative set of human interpretable features that capture CNN predictions of the tissue region and cell type across each WSI was extracted and used to train a RFC to predict slide-level NHI score. To test the hypothesis that tissue region spatial relationships and cellular composition can inform AI-based predictions of disease activity, a separate GNN was trained, using nodes defined by spatially-resolved CNN model-generated outputs, to predict NHI score. The RFC and GNN also predicted histologic remission (NHI<2). Feature importance was calculated for all combinations of RFC (Fig. 1), and the GNNExplainer was applied to locate important interactions between regions in the tissue and identify features significantly contributing to GNN predictions (Fig. 2). RESULTS: The RFC and GNN both predicted histologic remission with high accuracy (weighted kappa 0.87 and 0.85, respectively). Both models also identified histologic features relevant to disease activity predictions. Some features are well established, e.g. infiltrated epithelium or neutrophil cell features distinguish cases with histologic remission. The models also identified features beyond those assessed by the NHI, e.g. area proportion of basal plasmacytosis associated with predictions of NHI 2 and 3. Other features not previously implicated in UC disease activity were also identified, e.g. intraepithelial lymphocytes differentiate cases with NHI 3. CONCLUSIONS: We report quantitative and interpretable AI-powered approaches for UC histological assessment. CNN identification of UC histology was used as input to two distinct disease activity classifiers that showed strong concordance with consensus pathologist scoring. Both approaches provide interpretable features that explain model predictions and that may be used to inform biomarker selection and clinical development efforts.
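The random-forest side of this workflow, training on slide-level human-interpretable features and then inspecting feature importances, follows a standard pattern that can be sketched as below. The feature names and synthetic data are illustrative only; they are not the study's features or labels.

```python
# Minimal sketch: random forest on interpretable slide-level features to
# predict histologic remission, with feature importances for explanation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = pd.DataFrame({
    "neutrophil_density": rng.random(500),
    "plasma_cell_density": rng.random(500),
    "erosion_area_fraction": rng.random(500),
    "crypt_abscess_count": rng.integers(0, 5, 500),
})
# Toy label: remission (1) when the inflammatory features are low.
remission = (features["neutrophil_density"] + features["erosion_area_fraction"] < 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, remission, random_state=0)
rfc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("test accuracy:", rfc.score(X_test, y_test))
print(pd.Series(rfc.feature_importances_, index=features.columns).sort_values(ascending=False))
```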
APA, Harvard, Vancouver, ISO, and other styles
46

Silva, Vivian S., André Freitas, and Siegfried Handschuh. "Exploring Knowledge Graphs in an Interpretable Composite Approach for Text Entailment." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7023–30. http://dx.doi.org/10.1609/aaai.v33i01.33017023.

Full text
Abstract:
Recognizing textual entailment is a key task for many semantic applications, such as Question Answering, Text Summarization, and Information Extraction, among others. Entailment scenarios can range from a simple syntactic variation to more complex semantic relationships between pieces of text, but most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. We propose a composite approach for recognizing text entailment which analyzes the entailment pair to decide whether it must be resolved syntactically or semantically. We also make the answer interpretable: whenever an entailment is solved semantically, we explore a knowledge base composed of structured lexical definitions to generate natural language, human-like justifications explaining the semantic relationship holding between the pieces of text. Besides outperforming well-established entailment algorithms, our composite approach takes an important step towards Explainable AI, using world knowledge to make the semantic reasoning process explicit and understandable.
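The routing idea, resolve syntactically when the texts overlap heavily, otherwise consult lexical knowledge and emit a justification, can be sketched as follows. This is a toy stand-in, not the authors' system; the knowledge-base entries and threshold are invented.

```python
# Minimal sketch: composite syntactic/semantic routing for text entailment
# with a human-readable justification.
def token_overlap(text, hypothesis):
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / max(len(h), 1)

LEXICAL_KB = {  # hypothetical structured lexical definitions
    ("dog", "animal"): "a dog is a domesticated animal",
}

def entails(text, hypothesis, threshold=0.8):
    if token_overlap(text, hypothesis) >= threshold:
        return True, "syntactic: the hypothesis tokens are covered by the text"
    for (narrow, broad), definition in LEXICAL_KB.items():
        if narrow in text.lower() and broad in hypothesis.lower():
            return True, f"semantic: {definition}, so the hypothesis follows"
    return False, "no syntactic overlap or supporting definition found"

print(entails("A dog sleeps on the couch", "An animal sleeps on the couch"))
```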
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Yue, and Sai Ho Chung. "Artificial intelligence in safety-critical systems: a systematic review." Industrial Management & Data Systems 122, no. 2 (December 7, 2021): 442–70. http://dx.doi.org/10.1108/imds-07-2021-0419.

Full text
Abstract:
Purpose: This study is a systematic literature review of the application of artificial intelligence (AI) in safety-critical systems. The authors aim to present the current application status according to different AI techniques and to propose research directions and insights to promote wider application. Design/methodology/approach: A total of 92 articles were selected for this review through a systematic literature review along with a thematic analysis. Findings: The literature is divided into three themes: interpretable methods, explaining model behavior, and reinforcement of safe learning. Among AI techniques, the most widely used are Bayesian networks (BNs) and deep neural networks. In addition, given the huge potential in this field, four future research directions are also proposed. Practical implications: This study is of vital interest to industry practitioners and regulators in the safety-critical domain, as it provides a clear picture of the current status and points out that some AI techniques have great application potential. For those that are inherently appropriate for use in safety-critical systems, regulators can conduct in-depth studies to validate and encourage their use in industry. Originality/value: This is the first review of the application of AI in safety-critical systems in the literature. It marks a first step toward advancing AI in the safety-critical domain. The paper has potential value in promoting the use of the term “safety-critical” and in reducing the fragmentation of the literature.
APA, Harvard, Vancouver, ISO, and other styles
48

Thrun, Michael C., Alfred Ultsch, and Lutz Breuer. "Explainable AI Framework for Multivariate Hydrochemical Time Series." Machine Learning and Knowledge Extraction 3, no. 1 (February 4, 2021): 170–204. http://dx.doi.org/10.3390/make3010009.

Full text
Abstract:
The understanding of water quality and its underlying processes is important for the protection of aquatic environments. With the rare opportunity of access to a domain expert, an explainable AI (XAI) framework is proposed that is applicable to multivariate time series. The XAI provides explanations that are interpretable by domain experts. In three steps, it combines a data-driven choice of a distance measure with supervised decision trees guided by projection-based clustering. The multivariate time series consists of water quality measurements, including nitrate, electrical conductivity, and twelve other environmental parameters. The relationships between water quality and the environmental parameters are investigated by identifying similar days within a cluster and dissimilar days between clusters. The framework, called DDS-XAI, does not depend on prior knowledge about the data structure, and its explanations tend to be contrastive. The relationships in the data can be visualized by a topographic map representing high-dimensional structures. Two state-of-the-art XAIs, called eUD3.5 and iterative mistake minimization (IMM), were unable to provide meaningful and relevant explanations for the three multivariate time series datasets. The DDS-XAI framework can be swiftly applied to new data. Open-source code in R for all steps of the XAI framework is provided, and the steps are structured in an application-oriented way.
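The general pattern of clustering days and then explaining the clusters with supervised decision trees can be sketched as below. This is not the DDS-XAI code (which is released in R); the synthetic data, cluster count and tree depth are illustrative assumptions.

```python
# Minimal sketch: cluster daily multivariate water-quality vectors, then fit a
# shallow decision tree to the cluster labels to obtain rule-like explanations.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
days = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 14)),   # e.g. low-nitrate days
    rng.normal(loc=3.0, scale=1.0, size=(100, 14)),   # e.g. high-conductivity days
])
feature_names = [f"param_{i}" for i in range(14)]      # nitrate, conductivity, ...

clusters = AgglomerativeClustering(n_clusters=2).fit_predict(days)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(days, clusters)
print(export_text(tree, feature_names=feature_names))  # contrastive rules per cluster
```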
APA, Harvard, Vancouver, ISO, and other styles
49

Neto, Pedro C., Sara P. Oliveira, Diana Montezuma, João Fraga, Ana Monteiro, Liliana Ribeiro, Sofia Gonçalves, Isabel M. Pinto, and Jaime S. Cardoso. "iMIL4PATH: A Semi-Supervised Interpretable Approach for Colorectal Whole-Slide Images." Cancers 14, no. 10 (May 18, 2022): 2489. http://dx.doi.org/10.3390/cancers14102489.

Full text
Abstract:
Colorectal cancer (CRC) diagnosis is based on samples obtained from biopsies, assessed in pathology laboratories. Due to population growth and ageing, as well as better screening programs, the CRC incidence rate has been increasing, leading to a higher workload for pathologists. In this sense, the application of AI for automatic CRC diagnosis, particularly on whole-slide images (WSI), is of utmost relevance, in order to assist professionals in case triage and case review. In this work, we propose an interpretable semi-supervised approach to detect lesions in colorectal biopsies with high sensitivity, based on multiple-instance learning and feature aggregation methods. The model was developed on an extended version of the recent, publicly available CRC dataset (the CRC+ dataset with 4433 WSI), using 3424 slides for training and 1009 slides for evaluation. The proposed method attained 90.19% classification ACC, 98.8% sensitivity, 85.7% specificity, and a quadratic weighted kappa of 0.888 at slide-based evaluation. Its generalisation capabilities are also studied on two publicly available external datasets.
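Multiple-instance learning with feature aggregation treats each slide as a bag of tile-level instances, pools the tile features into one slide-level vector, and classifies that vector. The sketch below shows this pattern with synthetic stand-ins; it is not the iMIL4PATH model, and the feature extractor, pooling choice and data are assumptions.

```python
# Minimal sketch: bag-level classification via mean/max pooling of tile features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def make_bag(has_lesion, n_tiles=50, dim=16):
    feats = rng.normal(size=(n_tiles, dim))      # stand-in tile feature vectors
    if has_lesion:
        feats[: n_tiles // 10] += 2.0            # a few "lesion" tiles shift the features
    return feats

def aggregate(bag):
    # Concatenate mean and max pooling over tiles into one slide-level vector.
    return np.concatenate([bag.mean(axis=0), bag.max(axis=0)])

labels = rng.integers(0, 2, size=200)            # toy slide-level labels
X = np.stack([aggregate(make_bag(y)) for y in labels])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```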
APA, Harvard, Vancouver, ISO, and other styles
50

Haupt, Sue Ellen, William Chapman, Samantha V. Adams, Charlie Kirkwood, J. Scott Hosking, Niall H. Robinson, Sebastian Lerch, and Aneesh C. Subramanian. "Towards implementing artificial intelligence post-processing in weather and climate: proposed actions from the Oxford 2019 workshop." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2194 (February 15, 2021): 20200091. http://dx.doi.org/10.1098/rsta.2020.0091.

Full text
Abstract:
The most mature aspect of applying artificial intelligence (AI)/machine learning (ML) to problems in the atmospheric sciences is likely post-processing of model output. This article provides some history and current state of the science of post-processing with AI for weather and climate models. Deriving from the discussion at the 2019 Oxford workshop on Machine Learning for Weather and Climate, this paper also presents thoughts on medium-term goals to advance such use of AI, which include assuring that algorithms are trustworthy and interpretable, adherence to FAIR data practices to promote usability, and development of techniques that leverage our physical knowledge of the atmosphere. The coauthors propose several actionable items and have initiated one of those: a repository for datasets from various real weather and climate problems that can be addressed using AI. Five such datasets are presented and permanently archived, together with Jupyter notebooks to process them and assess the results in comparison with a baseline technique. The coauthors invite the readers to test their own algorithms in comparison with the baseline and to archive their results. This article is part of the theme issue ‘Machine learning for weather and climate modelling’.
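Statistical post-processing of model output, as discussed in this article, typically means learning a correction from raw forecasts and auxiliary predictors to observations. The sketch below illustrates that idea on synthetic data; the predictors, bias structure and model choice are illustrative assumptions, not the workshop's baseline technique.

```python
# Minimal sketch: learn a post-processing correction from raw forecasts
# (plus an auxiliary predictor) to observed values.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 2000
raw_forecast = rng.normal(15.0, 8.0, n)                   # e.g. 2 m temperature forecast
elevation = rng.uniform(0.0, 2000.0, n)                   # auxiliary predictor
observed = raw_forecast - 0.003 * elevation + rng.normal(0.0, 1.0, n)  # biased "truth"

X = np.column_stack([raw_forecast, elevation])
model = GradientBoostingRegressor().fit(X[:1500], observed[:1500])

rmse_raw = np.sqrt(np.mean((raw_forecast[1500:] - observed[1500:]) ** 2))
rmse_pp = np.sqrt(np.mean((model.predict(X[1500:]) - observed[1500:]) ** 2))
print(f"raw RMSE {rmse_raw:.2f} -> post-processed RMSE {rmse_pp:.2f}")
```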
APA, Harvard, Vancouver, ISO, and other styles
