Journal articles on the topic 'Artificial Intelligence, Explainable AI'

To see the other types of publications on this topic, follow the link: Artificial Intelligence, Explainable AI.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Artificial Intelligence, Explainable AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building added knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
2

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract:
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in different areas, all in one place. This study clarifies how model interpretability and Explainable AI work together. The paper investigates Explainable Artificial Intelligence approaches and their applications in multiple domains. Specifically, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to solve the challenges faced by various businesses. The article illustrates the significance of explainable artificial intelligence across a vast number of disciplines.
3

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence." Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Abstract:
Recently, Artificial Intelligence (AI) has emerged as a field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and the ubiquity of demanding applications in a variety of critical domains such as medicine, defense, justice and autonomous vehicles, there is an increasing need to accompany the results with sound explanations for domain experts. All of these elements have contributed to the rise of Explainable Artificial Intelligence (XAI).
4

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
5

Alufaisan, Yasmeen, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6618–26. http://dx.doi.org/10.1609/aaai.v35i8.16819.

Abstract:
Explainable AI provides users with insights into the why of model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings on whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but we find no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
6

Dikmen, Murat, and Catherine Burns. "Abstraction Hierarchy Based Explainable Artificial Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 319–23. http://dx.doi.org/10.1177/1071181320641073.

Abstract:
This work explores the application of Cognitive Work Analysis (CWA) in the context of Explainable Artificial Intelligence (XAI). We built an AI system using a loan evaluation data set and applied an XAI technique to obtain data-driven explanations for predictions. Using an Abstraction Hierarchy (AH), we generated domain knowledge-based explanations to accompany data-driven explanations. An online experiment was conducted to test the usefulness of AH-based explanations. Participants read financial profiles of loan applicants, the AI system’s loan approval/rejection decisions, and explanations that justify the decisions. Presence or absence of AH-based explanations was manipulated, and participants’ perceptions of the explanation quality were measured. The results showed that providing AH-based explanations helped participants learn about the loan evaluation process and improved the perceived quality of explanations. We conclude that a CWA approach can increase understandability when explaining the decisions made by AI systems.
7

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychologic requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychologic theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
8

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and its centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science, Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
9

Esmaeili, Morteza, Riyas Vettukattil, Hasan Banitalebi, Nina R. Krogh, and Jonn Terje Geitung. "Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization." Journal of Personalized Medicine 11, no. 11 (November 16, 2021): 1213. http://dx.doi.org/10.3390/jpm11111213.

Abstract:
Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), have created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or integrate into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the known AI algorithms, examined in this study, classify some tumor brains based on other non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human–machine interactions and assist in the selection of optimal training methods.
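A minimal, hedged sketch of the kind of post hoc visualization the abstract refers to is shown below: a gradient-based saliency map computed with TensorFlow. The trained CNN (`cnn`) and the preprocessed MRI slice (`mri_slice`) are illustrative placeholders, not the models or data used in the study.

```python
# Minimal sketch of a gradient-based saliency map, one common way to visualize
# which image regions drive a CNN's tumor/no-tumor prediction. The model and
# input image are assumptions, not artifacts from the study.
import numpy as np
import tensorflow as tf

def saliency_map(model: tf.keras.Model, image: np.ndarray, class_index: int) -> np.ndarray:
    """Return |d(class score)/d(pixel)| for a single image of shape (H, W, C)."""
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        predictions = model(x, training=False)
        class_score = predictions[:, class_index]
    grads = tape.gradient(class_score, x)               # same shape as the input batch
    saliency = tf.reduce_max(tf.abs(grads), axis=-1)    # collapse the channel axis
    return saliency.numpy()[0]

# Usage (hypothetical): overlay saliency_map(cnn, mri_slice, class_index=1)
# on the MRI slice to check whether the model attends to the lesion region.
```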
10

Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson. "Explainable Artificial Intelligence for Developing Smart Cities Solutions." Smart Cities 3, no. 4 (November 13, 2020): 1353–82. http://dx.doi.org/10.3390/smartcities3040065.

Abstract:
Traditional Artificial Intelligence (AI) technologies used in developing smart cities solutions, Machine Learning (ML) and recently Deep Learning (DL), rely more on utilising best representative training datasets and feature engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome of solutions less explainable, i.e., it is often not possible to explain the results of the model. There is a growing concern among policymakers in cities with this lack of explainability of AI solutions, and this is considered a major hindrance in the wider acceptability and trust in such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart cities flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding issues is an important aspect of any flood monitoring solution. Typical solutions for this problem involve the use of cameras to capture images showing the affected areas in real-time with different objects such as leaves, plastic bottles etc., and building a DL-based classifier to detect such objects and classify blockages based on the presence and coverage of these objects in the images. In this work, we uniquely propose an Explainable AI solution using DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level, and semantic rules designed in close consultation with experts carry out the classification. By using the expert knowledge in the flooding context, our hybrid classifier provides flexibility in categorising the image using objects and their coverage relationships. The experimental results demonstrated with a real-world use case showed that this hybrid approach to image classification achieves on average an 11% improvement (F-Measure) in image classification performance compared to a DL-only classifier. It also has the distinct advantage of integrating experts’ knowledge in defining the decision-making rules to represent the complex circumstances and using such knowledge to explain the results.
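The hybrid design described in this abstract (a DL detector reporting object coverage, followed by expert-authored rules) can be illustrated with a minimal sketch. The rules below are plain Python stand-ins for the paper's Semantic Web rules, and all object labels and thresholds are illustrative assumptions.

```python
# Minimal sketch of the hybrid idea: a DL detector reports object coverage,
# then human-authored rules map coverage to a blockage class. Object names
# and thresholds are illustrative assumptions, not the paper's rule base.
from typing import Dict

def classify_blockage(coverage: Dict[str, float]) -> str:
    """coverage maps detected object labels to the fraction of gully area covered (0-1)."""
    debris = coverage.get("leaves", 0.0) + coverage.get("plastic_bottles", 0.0)
    water = coverage.get("standing_water", 0.0)
    if debris > 0.5 or (debris > 0.3 and water > 0.4):
        return "blocked"
    if debris > 0.2:
        return "partially_blocked"
    return "clear"

# Example: outputs from a hypothetical detector for one camera frame.
print(classify_blockage({"leaves": 0.35, "standing_water": 0.5}))  # -> "blocked"
```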
11

Freeman, Laura, Abdul Rahman, and Feras A. Batarseh. "Enabling Artificial Intelligence Adoption through Assurance." Social Sciences 10, no. 9 (August 25, 2021): 322. http://dx.doi.org/10.3390/socsci10090322.

Abstract:
The wide scale adoption of Artificial Intelligence (AI) will require that AI engineers and developers can provide assurances to the user base that an algorithm will perform as intended and without failure. Assurance is the safety valve for reliable, dependable, explainable, and fair intelligent systems. AI assurance provides the necessary tools to enable AI adoption into applications, software, hardware, and complex systems. AI assurance involves quantifying capabilities and associating risks across deployments including: data quality to include inherent biases, algorithm performance, statistical errors, and algorithm trustworthiness and security. Data, algorithmic, and context/domain-specific factors may change over time and impact the ability of AI systems in delivering accurate outcomes. In this paper, we discuss the importance and different angles of AI assurance, and present a general framework that addresses its challenges.
12

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
13

Sreedharan, Sarath, Anagha Kulkarni, and Subbarao Kambhampati. "Explainable Human--AI Interaction: A Planning Perspective." Synthesis Lectures on Artificial Intelligence and Machine Learning 16, no. 1 (January 24, 2022): 1–184. http://dx.doi.org/10.2200/s01152ed1v01y202111aim050.

14

Sakai, Akira, Masaaki Komatsu, Reina Komatsu, Ryu Matsuoka, Suguru Yasutomi, Ai Dozen, Kanto Shozu, et al. "Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening." Biomedicines 10, no. 3 (February 25, 2022): 551. http://dx.doi.org/10.3390/biomedicines10030551.

Abstract:
Diagnostic support tools based on artificial intelligence (AI) have exhibited high performance in various medical fields. However, their clinical application remains challenging because of the lack of explanatory power in AI decisions (black box problem), making it difficult to build trust with medical professionals. Nevertheless, visualizing the internal representation of deep neural networks will increase explanatory power and improve the confidence of medical professionals in AI decisions. We propose a novel deep learning-based explainable representation “graph chart diagram” to support fetal cardiac ultrasound screening, which has low detection rates of congenital heart diseases due to the difficulty in mastering the technique. Screening performance improves using this representation from 0.966 to 0.975 for experts, 0.829 to 0.890 for fellows, and 0.616 to 0.748 for residents in the arithmetic mean of area under the curve of a receiver operating characteristic curve. This is the first demonstration wherein examiners used deep learning-based explainable representation to improve the performance of fetal cardiac ultrasound screening, highlighting the potential of explainable AI to augment examiner capabilities.
15

Corchado, Juan M., Sascha Ossowski, Sara Rodríguez-González, and Fernando De la Prieta. "Advances in Explainable Artificial Intelligence and Edge Computing Applications." Electronics 11, no. 19 (September 28, 2022): 3111. http://dx.doi.org/10.3390/electronics11193111.

16

Veprytska, O. Y., and V. S. Kharchenko. "Analysis of Requirements and Quality Model-Oriented Assessment of the Explainable AI as a Service." Èlektronnoe modelirovanie 44, no. 5 (July 10, 2022): 36–50. http://dx.doi.org/10.15407/emodel.44.05.036.

Abstract:
Existing artificial intelligence (AI) services provided by cloud providers (Artificial Intelligence as a Service (AIaaS)) and their explainability have been studied. The characteristics and provision of objective evaluation of explainable AI as a service (eXplainable AI as a Service (XAIaaS)) are defined. AIaaS solutions provided by the cloud providers Amazon Web Services, Google Cloud Platform and Microsoft Azure were analyzed. Non-functional requirements for the evaluation of such XAIaaS systems have been formulated. A model has been developed, and an example of the quality assessment of an AI system for image-based weapon detection, together with its metric assessment, is provided. Directions for further research include: parameterization of explainability and its sub-characteristics for services, development of algorithms for determining metrics for evaluating the quality of AI and XAIaaS systems, and development of means for ensuring explainability.
17

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper include Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with explanatory methods such as Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions, while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates on the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, the unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
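The coupling of a tree-boosting model with Shapley additive explanations that the review describes can be sketched in a few lines. The synthetic data, feature count, and hyperparameters below are assumptions for illustration, not the review's experimental setup.

```python
# Minimal sketch of turning a tree-based model into an "XAI" model by coupling
# it with SHAP. The synthetic hydroclimatic-style data are assumptions.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # e.g. precipitation, temperature, ...
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = xgboost.XGBRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)

explainer = shap.TreeExplainer(model)                # model-specific, fast for tree ensembles
shap_values = explainer.shap_values(X)               # per-sample, per-feature attributions
print(shap_values.shape)                             # (500, 4)

# Global view: mean absolute SHAP value per feature approximates feature importance.
print(np.abs(shap_values).mean(axis=0))
```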
18

Park, Heewon, Koji Maruhashi, Rui Yamaguchi, Seiya Imoto, and Satoru Miyano. "Global gene network exploration based on explainable artificial intelligence approach." PLOS ONE 15, no. 11 (November 6, 2020): e0241508. http://dx.doi.org/10.1371/journal.pone.0241508.

Abstract:
In recent years, personalized gene regulatory networks have received significant attention, and interpretation of the multilayer networks has been a critical issue for a comprehensive understanding of gene regulatory systems. Although several statistical and machine learning approaches have been developed and applied to reveal sample-specific regulatory pathways, integrative understanding of the massive multilayer networks remains a challenge. To resolve this problem, we propose a novel artificial intelligence (AI) strategy for comprehensive gene regulatory network analysis. In our strategy, personalized gene networks corresponding to specific clinical characteristics are constructed, and each constructed network is considered as a second-order tensor. Then, an explainable AI method based on deep learning is applied to decompose the multilayer networks, so that we can reveal all-encompassing gene regulatory systems characterized by clinical features of patients. To evaluate the proposed methodology, we apply our method to the multilayer gene networks under varying conditions of an epithelial–mesenchymal transition (EMT) process. From the comprehensive analysis of multilayer networks, we identified novel markers, and the biological mechanisms of the identified genes and their reciprocal mechanisms are verified through the literature. Although no biological knowledge about the identified genes was incorporated in our analysis, our data-driven AI approach provides biologically reliable results. Furthermore, the results provide crucial evidence to reveal biological mechanisms related to various diseases, e.g., keratinocyte proliferation. The use of an explainable AI method based on tensor decomposition enables us to reveal global and novel mechanisms of the gene regulatory system from massive multiple networks, which cannot be demonstrated by existing methods. We expect that the proposed method provides new insight into network biology and will be a useful tool for integrative gene network analysis related to the complex architecture of diseases.
19

Islam, Mohammed Saidul, Iqram Hussain, Md Mezbaur Rahman, Se Jin Park, and Md Azam Hossain. "Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal." Sensors 22, no. 24 (December 15, 2022): 9859. http://dx.doi.org/10.3390/s22249859.

Abstract:
State-of-the-art healthcare technologies are incorporating advanced Artificial Intelligence (AI) models, allowing for rapid and easy disease diagnosis. However, most AI models are considered “black boxes,” because there is no explanation for the decisions made by these models. Users may find it challenging to comprehend and interpret the results. Explainable AI (XAI) can explain the machine learning (ML) outputs and contribution of features in disease prediction models. Electroencephalography (EEG) is a potential predictive tool for understanding cortical impairment caused by an ischemic stroke and can be utilized for acute stroke prediction, neurologic prognosis, and post-stroke treatment. This study aims to utilize ML models to classify the ischemic stroke group and the healthy control group for acute stroke prediction in active states. Moreover, XAI tools (Eli5 and LIME) were utilized to explain the behavior of the model and determine the significant features that contribute to stroke prediction models. In this work, we studied 48 patients admitted to a hospital with acute ischemic stroke and 75 healthy adults who had no history of other identified neurological illnesses. EEG was obtained within three months following the onset of ischemic stroke symptoms using frontal, central, temporal, and occipital cortical electrodes (Fz, C1, T7, Oz). EEG data were collected in an active state (walking, working, and reading tasks). In the results of the ML approach, the Adaptive Gradient Boosting models showed around 80% accuracy for the classification of the control group and the stroke group. Eli5 and LIME were utilized to explain the behavior of the stroke prediction model and interpret the model locally around the prediction. The Eli5 and LIME interpretable models emphasized the spectral delta and theta features as local contributors to stroke prediction. From the findings of this explainable AI research, it is expected that the stroke-prediction XAI model will help with post-stroke treatment and recovery, as well as help healthcare professionals make their diagnostic decisions more explainable.
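A minimal sketch of the kind of local explanation described above follows, using LIME on a tabular classifier. The synthetic EEG-style features, their names, and the scikit-learn GradientBoostingClassifier (a stand-in for the study's Adaptive Gradient Boosting setup) are assumptions.

```python
# Minimal sketch of a local LIME explanation for a tabular EEG-feature classifier.
# Features, labels, and the classifier choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(1)
feature_names = ["delta_power", "theta_power", "alpha_power", "beta_power"]
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # 1 = stroke-like pattern (toy rule)

clf = GradientBoostingClassifier().fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["control", "stroke"], mode="classification"
)
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())                                 # feature contributions for this one prediction
```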
20

Combs, Kara, Mary Fendley, and Trevor Bihl. "A Preliminary Look at Heuristic Analysis for Assessing Artificial Intelligence Explainability." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 8 (June 1, 2020): 61–72. http://dx.doi.org/10.37394/232018.2020.8.9.

Abstract:
Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks understanding of algorithmic explainability from a human factors’ perspective. This paper presents a repeatable human factors heuristic analysis for XAI with a demonstration on four decision tree classifier algorithms.
21

Khan, Muhammad Salar, Mehdi Nayebpour, Meng-Hao Li, Hadi El-Amine, Naoru Koizumi, and James L. Olds. "Explainable AI: A Neurally-Inspired Decision Stack Framework." Biomimetics 7, no. 3 (September 9, 2022): 127. http://dx.doi.org/10.3390/biomimetics7030127.

Abstract:
European law now requires AI to be explainable in the context of adverse decisions affecting the European Union (EU) citizens. At the same time, we expect increasing instances of AI failure as it operates on imperfect data. This paper puts forward a neurally inspired theoretical framework called “decision stacks” that can provide a way forward in research to develop Explainable Artificial Intelligence (X-AI). By leveraging findings from the finest memory systems in biological brains, the decision stack framework operationalizes the definition of explainability. It then proposes a test that can potentially reveal how a given AI decision was made.
22

Lorente, Maria Paz Sesmero, Elena Magán Lopez, Laura Alvarez Florez, Agapito Ledezma Espino, José Antonio Iglesias Martínez, and Araceli Sanchis de Miguel. "Explaining Deep Learning-Based Driver Models." Applied Sciences 11, no. 8 (April 7, 2021): 3321. http://dx.doi.org/10.3390/app11083321.

Abstract:
Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with “black box” AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) appears to be able to address and solve this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing according to the development of AI techniques. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of an explainable intelligence technique in the understanding of some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reason for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of drivers’ emotions and distractions. The results obtained are promising and show the capacity of explainable artificial intelligence techniques in the different tasks of the proposed environments.
23

Khrais, Laith T. "Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce." Future Internet 12, no. 12 (December 8, 2020): 226. http://dx.doi.org/10.3390/fi12120226.

Abstract:
The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalization and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems in e-commerce, its ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted the use of word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of the idea of explainability as has been utilized by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggested that, to deploy explainable XAI systems, ML models should be improved, making them interpretable and comprehensible.
24

Wongburi, Praewa, and Jae K. Park. "Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network." Sustainability 14, no. 10 (May 21, 2022): 6276. http://dx.doi.org/10.3390/su14106276.

Abstract:
Sludge Volume Index (SVI) is one of the most important operational parameters in an activated sludge process. It is difficult to predict SVI because of the nonlinearity of the data and variable operating conditions. Using complex time-series data from Wastewater Treatment Plants (WWTPs), a Recurrent Neural Network (RNN) combined with Explainable Artificial Intelligence was applied to predict SVI and interpret the prediction result. The RNN architecture has been proven to efficiently handle time-series and non-uniform data. Moreover, due to the complexity of the model, the relatively new Explainable Artificial Intelligence concept was used to interpret the result. Data were collected from the Nine Springs Wastewater Treatment Plant, Madison, Wisconsin, and were analyzed and cleaned using Python programs and data analytics approaches. An RNN model predicted SVI accurately after training on historical big data collected at the Nine Springs WWTP. The Explainable Artificial Intelligence (AI) analysis was able to determine which input parameters contributed most to higher SVI. The prediction of SVI will help WWTPs establish corrective measures for maintaining a stable SVI. The SVI prediction model and Explainable Artificial Intelligence method will help the wastewater treatment sector improve operational performance, system management, and process reliability.
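A minimal sketch of an RNN regressor for a time-series target such as SVI follows. The synthetic sliding-window data, window length, and layer sizes are illustrative assumptions rather than the configuration used in the study.

```python
# Minimal sketch of an LSTM regressor for a time-series target (e.g. SVI).
# All data and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
timesteps, n_features = 14, 5                       # e.g. 14 days of 5 operational parameters
X = rng.normal(size=(300, timesteps, n_features))
y = X[:, :, 0].mean(axis=1) + rng.normal(scale=0.05, size=300)  # toy SVI-like target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[:3], verbose=0).ravel())      # next-step SVI estimates for 3 windows
```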
25

Sarp, Salih, Murat Kuzlu, Emmanuel Wilson, Umit Cali, and Ozgur Guler. "The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification." Electronics 10, no. 12 (June 11, 2021): 1406. http://dx.doi.org/10.3390/electronics10121406.

Abstract:
Artificial Intelligence (AI) has been among the fastest-emerging research and industrial application fields, especially in the healthcare domain, but over the past decades it has operated as a black-box model with limited understanding of its inner workings. AI algorithms are, in large part, built on weights calculated as a result of large matrix multiplications. It is typically hard to interpret and debug the computationally intensive processes. Explainable Artificial Intelligence (XAI) aims to solve black-box and hard-to-debug approaches through the use of various techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds through the use of transfer learning and fully connected layers. Classified chronic wound images serve as input to the XAI model for an explanation. Interpretable results can offer new perspectives to clinicians during the diagnostic phase. The proposed method successfully provides chronic wound classification and its associated explanation to extract additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians. This hybrid approach is shown to aid with the interpretation and understanding of AI decision-making processes.
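The transfer-learning architecture the abstract describes (a pretrained backbone followed by fully connected layers) can be sketched as follows. The backbone choice (MobileNetV2), input size, and two-class head are assumptions; the chronic wound data are not reproduced here.

```python
# Minimal sketch of a transfer-learning classifier: a pretrained CNN backbone
# with fully connected layers on top. Backbone, image size, and class count
# are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False                               # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),   # fully connected layers
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. two hypothetical wound classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, ...) would follow with a labeled wound dataset.
```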
26

Kerr, Alison Duncan, and Kevin Scharp. "The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence." Minds and Machines 32, no. 3 (September 2022): 585–611. http://dx.doi.org/10.1007/s11023-022-09609-7.

Abstract:
Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.
27

Linardatos, Pantelis, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. "Explainable AI: A Review of Machine Learning Interpretability Methods." Entropy 23, no. 1 (December 25, 2020): 18. http://dx.doi.org/10.3390/e23010018.

Abstract:
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
28

Dodge, Jonathan, Roli Khanna, Jed Irvine, Kin-ho Lam, Theresa Mai, Zhengxian Lin, Nicholas Kiddle, et al. "After-Action Review for AI (AAR/AI)." ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (December 31, 2021): 1–35. http://dx.doi.org/10.1145/3453173.

Abstract:
Explainable AI is growing in importance as AI pervades modern society, but few have studied how explainable AI can directly support people trying to assess an AI agent. Without a rigorous process, people may approach assessment in ad hoc ways—leading to the possibility of wide variations in assessment of the same agent due only to variations in their processes. AAR, or After-Action Review, is a method some military organizations use to assess human agents, and it has been validated in many domains. Drawing upon this strategy, we derived an After-Action Review for AI (AAR/AI), to organize ways people assess reinforcement learning agents in a sequential decision-making environment. We then investigated what AAR/AI brought to human assessors in two qualitative studies. The first investigated AAR/AI to gather formative information, and the second built upon the results, and also varied the type of explanation (model-free vs. model-based) used in the AAR/AI process. Among the results were the following: (1) participants reported that AAR/AI helped to organize their thoughts and think logically about the agent, (2) AAR/AI encouraged participants to reason about the agent from a wide range of perspectives, and (3) participants were able to leverage AAR/AI with the model-based explanations to falsify the agent’s predictions.
29

Rajabi, Enayat, and Somayeh Kafaie. "Knowledge Graphs and Explainable AI in Healthcare." Information 13, no. 10 (September 28, 2022): 459. http://dx.doi.org/10.3390/info13100459.

Abstract:
Building trust and transparency in healthcare can be achieved using eXplainable Artificial Intelligence (XAI), as it facilitates the decision-making process for healthcare professionals. Knowledge graphs can be used in XAI for explainability by structuring information, extracting features and relations, and performing reasoning. This paper highlights the role of knowledge graphs in XAI models in healthcare, considering a state-of-the-art review. Based on our review, knowledge graphs have been used for explainability to detect healthcare misinformation, adverse drug reactions, drug-drug interactions and to reduce the knowledge gap between healthcare experts and AI-based models. We also discuss how to leverage knowledge graphs in pre-model, in-model, and post-model XAI models in healthcare to make them more explainable.
30

Shadbolt, Nigel. "“From So Simple a Beginning”: Species of Artificial Intelligence." Daedalus 151, no. 2 (2022): 28–42. http://dx.doi.org/10.1162/daed_a_01898.

Abstract:
Artificial intelligence has a decades-long history that exhibits alternating enthusiasm and disillusionment for the field's scientific insights, technical accomplishments, and socioeconomic impact. Recent achievements have seen renewed claims for the transformative and disruptive effects of AI. Reviewing the history and current state of the art reveals a broad repertoire of methods and techniques developed by AI researchers. In particular, modern machine learning methods have enabled a series of AI systems to achieve superhuman performance. The exponential increases in computing power, open-source software, available data, and embedded services have been crucial to this success. At the same time, there is growing unease around whether the behavior of these systems can be rendered transparent, explainable, unbiased, and accountable. One consequence of recent AI accomplishments is a renaissance of interest around the ethics of such systems. More generally, our AI systems remain singular task-achieving architectures, often termed narrow AI. I will argue that artificial general intelligence (able to range across widely differing tasks and contexts) is unlikely to be developed, or emerge, any time soon.
32

Tsai, Yun-Cheng, Fu-Min Szu, Jun-Hao Chen, and Samuel Yen-Chi Chen. "Financial Vision-Based Reinforcement Learning Trading Strategy." Analytics 1, no. 1 (August 9, 2022): 35–53. http://dx.doi.org/10.3390/analytics1010004.

Abstract:
Recent advances in artificial intelligence (AI) for quantitative trading have led to generally superhuman performance in notable trading results. However, if we use AI without proper supervision, it can lead to wrong choices and huge losses. Therefore, we need to ask why and how AI makes its decisions so that people can trust it. By understanding the decision process, people can correct errors, so the need for explainability highlights the challenge of making intelligent trading technology explainable. This research focuses on financial vision, an explainable approach, and the link to its programmatic implementation. We hope our paper can serve as a reference on superhuman performance and the reasons for decisions in trading systems.
33

Feng, Jinchao, Joshua L. Lansford, Markos A. Katsoulakis, and Dionisios G. Vlachos. "Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences." Science Advances 6, no. 42 (October 2020): eabc3204. http://dx.doi.org/10.1126/sciadv.abc3204.

Abstract:
Data science has primarily focused on big data, but for many physics, chemistry, and engineering applications, data are often small, correlated and, thus, low dimensional, and sourced from both computations and experiments with various levels of noise. Typical statistics and machine learning methods do not work for these cases. Expert knowledge is essential, but a systematic framework for incorporating it into physics-based models under uncertainty is lacking. Here, we develop a mathematical and computational framework for probabilistic artificial intelligence (AI)–based predictive modeling combining data, expert knowledge, multiscale models, and information theory through uncertainty quantification and probabilistic graphical models (PGMs). We apply PGMs to chemistry specifically and develop predictive guarantees for PGMs generally. Our proposed framework, combining AI and uncertainty quantification, provides explainable results leading to correctable and, eventually, trustworthy models. The proposed framework is demonstrated on a microkinetic model of the oxygen reduction reaction.
34

Cavallaro, Massimo, Ed Moran, Benjamin Collyer, Noel D. McCarthy, Christopher Green, and Matt J. Keeling. "Informing antimicrobial stewardship with explainable AI." PLOS Digital Health 2, no. 1 (January 5, 2023): e0000162. http://dx.doi.org/10.1371/journal.pdig.0000162.

Abstract:
The accuracy and flexibility of artificial intelligence (AI) systems often come at the cost of a decreased ability to offer an intuitive explanation of their predictions. This hinders trust and discourages the adoption of AI in healthcare, a problem exacerbated by concerns over liabilities and risks to patients’ health in case of misdiagnosis. Providing an explanation for a model’s prediction is possible due to recent advances in the field of interpretable machine learning. We considered a data set of hospital admissions linked to records of antibiotic prescriptions and susceptibilities of bacterial isolates. An appropriately trained gradient boosted decision tree algorithm, supplemented by a Shapley explanation model, predicts the likely antimicrobial drug resistance, with the odds of resistance informed by characteristics of the patient, admission data, and historical drug treatments and culture test results. Applying this AI-based system, we found that it substantially reduces the risk of mismatched treatment compared with the observed prescriptions. The Shapley values provide an intuitive association between observations/data and outcomes; the associations identified are broadly consistent with expectations based on prior knowledge from health specialists. The results, and the ability to attribute confidence and explanations, support the wider adoption of AI in healthcare.
35

Schwendicke, F., W. Samek, and J. Krois. "Artificial Intelligence in Dentistry: Chances and Challenges." Journal of Dental Research 99, no. 7 (April 21, 2020): 769–74. http://dx.doi.org/10.1177/0022034520915714.

Abstract:
The term “artificial intelligence” (AI) refers to the idea of machines being capable of performing human tasks. A subdomain of AI is machine learning (ML), which “learns” intrinsic statistical patterns in data to eventually cast predictions on unseen data. Deep learning is an ML technique using multi-layer mathematical operations for learning and inferring on complex data like imagery. This succinct narrative review describes the application, limitations and possible future of AI-based dental diagnostics, treatment planning, and conduct, for example, image analysis, prediction making, record keeping, as well as dental research and discovery. AI-based applications will streamline care, relieving the dental workforce from laborious routine tasks, increasing health at lower costs for a broader population, and eventually facilitate personalized, predictive, preventive, and participatory dentistry. However, AI solutions have, by and large, not entered routine dental practice, mainly due to 1) limited data availability, accessibility, structure, and comprehensiveness, 2) a lack of methodological rigor and standards in their development, and 3) practical questions around the value and usefulness of these solutions, but also ethics and responsibility. Any AI application in dentistry should demonstrate tangible value by, for example, improving access to and quality of care, increasing efficiency and safety of services, empowering and enabling patients, supporting medical research, or increasing sustainability. Individual privacy, rights, and autonomy need to be put front and center; a shift from centralized to distributed/federated learning may address this while improving scalability and robustness. Lastly, the trustworthiness and generalizability of dental AI solutions need to be guaranteed; the implementation of continuous human oversight and standards grounded in evidence-based dentistry should be expected. Methods to visualize, interpret, and explain the logic behind AI solutions (“explainable AI”) will contribute to this. Dental education will need to accompany the introduction of clinical AI solutions by fostering digital literacy in the future dental workforce.
36

Gramegna, Alex, and Paolo Giudici. "Why to Buy Insurance? An Explainable Artificial Intelligence Approach." Risks 8, no. 4 (December 14, 2020): 137. http://dx.doi.org/10.3390/risks8040137.

Abstract:
We propose an Explainable AI model that can be employed in order to explain why a customer buys or abandons a non-life insurance coverage. The method consists of applying similarity clustering to the Shapley values obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technologically-based insurance service (Insurtech), making it possible to understand, in real time, the factors that most contribute to customers’ decisions, thereby gaining proactive insights into their needs. We prove the validity of our model with an empirical analysis that was conducted on data regarding purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results from the analysis reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which can predict their buying or churn behaviour well.
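The pipeline described above (Shapley values from a boosted-tree classifier, then similarity clustering) can be sketched briefly. The synthetic customer features and the choice of k-means with three clusters are illustrative assumptions.

```python
# Minimal sketch: fit a gradient-boosting classifier, compute per-customer
# Shapley values, then cluster customers in Shapley-value space. Data and
# the clustering choice are illustrative assumptions.
import numpy as np
import shap
import xgboost
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))                        # e.g. customer / policy features
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.2, size=400) > 0).astype(int)  # 1 = buys policy

model = xgboost.XGBClassifier(n_estimators=150, max_depth=3)
model.fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)   # (400, 6) attribution matrix
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shap_values)

# Customers in the same cluster are driven by a similar mix of factors,
# which is the grouping the abstract describes.
print(np.bincount(clusters))
```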
APA, Harvard, Vancouver, ISO, and other styles
37

Vilone, Giulia, and Luca Longo. "Classification of Explainable Artificial Intelligence Methods through Their Output Formats." Machine Learning and Knowledge Extraction 3, no. 3 (August 4, 2021): 615–61. http://dx.doi.org/10.3390/make3030032.

Full text
Abstract:
Machine and deep learning have proven their utility in generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output format. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning”, and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliographies of these articles. Adding the explanation format as a dimension makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems, and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the explanation formats and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in numerous fields and by new regulations.
APA, Harvard, Vancouver, ISO, and other styles
38

Fior, Jacopo, Luca Cagliero, and Paolo Garza. "Leveraging Explainable AI to Support Cryptocurrency Investors." Future Internet 14, no. 9 (August 24, 2022): 251. http://dx.doi.org/10.3390/fi14090251.

Full text
Abstract:
In the last decade, cryptocurrency trading has attracted the attention of private and professional traders and investors. Algorithmic trading systems based on Artificial Intelligence (AI) models are becoming more and more established for forecasting financial markets. However, they suffer from a lack of transparency, which hinders domain experts from directly monitoring the fundamentals behind market movements. This is particularly critical for cryptocurrency investors, because the study of the main factors influencing cryptocurrency prices, including the characteristics of the blockchain infrastructure, is crucial for driving experts’ decisions. This paper proposes a new visual analytics tool to support domain experts in the explanation of AI-based cryptocurrency trading systems. To describe the rationale behind AI models, it exploits an established method, SHapley Additive exPlanations (SHAP), which allows experts to identify the most discriminating features and provides them with an interactive and easy-to-use graphical interface. The simulations carried out on 21 cryptocurrencies over an 8-year period demonstrate the usability of the proposed tool.
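As a rough illustration of the kind of SHAP-based summary such a dashboard could surface, the sketch below ranks hypothetical market and blockchain features of a toy trading classifier by mean absolute Shapley value. The feature names, the model, and the data are invented for illustration and do not reproduce the paper's system.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["price_return_1d", "volume_change", "hash_rate", "active_addresses"]
X = pd.DataFrame(rng.normal(size=(500, 4)), columns=features)
# Toy buy/sell label loosely driven by two of the features.
y = (X["price_return_1d"] + 0.5 * X["active_addresses"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv  # per-class list in some shap versions

# Global ranking: which features most strongly drive the trading signal.
ranking = pd.Series(np.abs(sv).mean(axis=0), index=features).sort_values(ascending=False)
print(ranking)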
APA, Harvard, Vancouver, ISO, and other styles
39

de Lange, Petter Eilif, Borger Melsom, Christian Bakke Vennerød, and Sjur Westgaard. "Explainable AI for Credit Assessment in Banks." Journal of Risk and Financial Management 15, no. 12 (November 28, 2022): 556. http://dx.doi.org/10.3390/jrfm15120556.

Full text
Abstract:
Banks’ credit scoring models are required by financial authorities to be explainable. This paper proposes an explainable artificial intelligence (XAI) model for predicting credit default on a unique dataset of unsecured consumer loans provided by a Norwegian bank. We combined a LightGBM model with SHAP, which enables the interpretation of the explanatory variables affecting the predictions. The LightGBM model clearly outperforms the bank’s actual credit scoring model (logistic regression). We found that the most important explanatory variables for predicting default in the LightGBM model are the volatility of the utilized credit balance, the remaining credit as a percentage of total credit, and the duration of the customer relationship. Our main contribution is the implementation of XAI methods in banking, exploring how these methods can be applied to improve the interpretability and reliability of state-of-the-art AI models. We also suggest a method for analyzing the potential economic value of an improved credit scoring model.
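A minimal sketch, under stated assumptions, of the modelling pattern the abstract describes: a LightGBM default classifier explained with SHAP. The feature names echo the variables highlighted in the abstract, but the data below are synthetic and the hyperparameters are placeholders rather than the paper's configuration.

import numpy as np
import pandas as pd
import lightgbm as lgb
import shap

rng = np.random.default_rng(1)
n = 2000
X = pd.DataFrame({
    "balance_volatility": rng.gamma(2.0, 1.0, n),
    "remaining_credit_share": rng.uniform(0, 1, n),
    "relationship_years": rng.integers(0, 20, n),
})
# Synthetic default labels loosely tied to the features above.
logit = 0.8 * X["balance_volatility"] - 2.0 * X["remaining_credit_share"] - 0.05 * X["relationship_years"]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(logit - 1)))).astype(int)

model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X, y)

# SHAP values show how each variable pushes an applicant towards or away from default.
sv = shap.TreeExplainer(model).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv  # per-class list in some shap versions
importance = pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values(ascending=False)
print(importance)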
APA, Harvard, Vancouver, ISO, and other styles
40

Lawless, William F., Ranjeev Mittu, Don Sofge, and Laura Hiatt. "Artificial intelligence, Autonomy, and Human-Machine Teams — Interdependence, Context, and Explainable AI." AI Magazine 40, no. 3 (July 9, 2019): 5–13. http://dx.doi.org/10.1609/aimag.v40i3.2866.

Full text
Abstract:
Because information in military situations, as well as for self-driving cars, must be processed faster than humans can manage, the computational determination of context, also known as situational assessment, is increasingly important. In this article, we introduce the topic of context and discuss what is known about the heretofore intractable research problem of the effects of interdependence, which is present in the best human teams. We close by proposing that interdependence must be mastered mathematically to operate human-machine teams efficiently, to advance theory, and to make the machine actions directed by AI explainable to team members and society. The special topic articles in this issue and a subsequent issue of AI Magazine review ongoing mature research and operational programs that address context for human-machine teams.
APA, Harvard, Vancouver, ISO, and other styles
41

Chung, Kimin. "The effects of Explainable Artificial Intelligence Education Program Based on AI Literacy." Journal of The Korean Association of Artificial Intelligence Education 3, no. 1 (May 31, 2022): 1–12. http://dx.doi.org/10.52618/aied.2022.3.1.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Barredo Arrieta, Alejandro, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, et al. "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI." Information Fusion 58 (June 2020): 82–115. http://dx.doi.org/10.1016/j.inffus.2019.12.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Mueller, Shane T. "Cognitive Anthropomorphism of AI: How Humans and Computers Classify Images." Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (May 11, 2020): 12–19. http://dx.doi.org/10.1177/1064804620920870.

Full text
Abstract:
Modern artificial intelligence (AI) image classifiers have made impressive advances in recent years, but their performance often appears strange or violates users’ expectations. This suggests that humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence. This mismatch presents an obstacle to appropriate human-AI interaction. To delineate this mismatch, I examine known properties of human classification in comparison with image classifier systems. Based on this examination, I offer three strategies for system design that can address the mismatch between human and AI classification: explainable AI, novel methods for training users, and new algorithms that match human cognition.
APA, Harvard, Vancouver, ISO, and other styles
44

Kshirsagar, Meghana, Krishn Kumar Gupt, Gauri Vaidya, Conor Ryan, Joseph P. Sullivan, and Vivek Kshirsagar. "Insights Into Incorporating Trustworthiness and Ethics in AI Systems With Explainable AI." International Journal of Natural Computing Research 11, no. 1 (January 1, 2022): 1–23. http://dx.doi.org/10.4018/ijncr.310006.

Full text
Abstract:
Over the seven decades since the advent of artificial intelligence (AI) technology, researchers have demonstrated and deployed systems incorporating AI in various domains. The absence of model explainability in critical systems such as medical AI and credit risk assessment, among others, has led to the neglect of key ethical and professional principles, which can cause considerable harm. With explainability methods, developers can check their models beyond mere performance and identify errors, which saves time and reduces development costs. The article argues that steering traditional AI systems toward responsible AI engineering, by incorporating explainable AI methods, can address and mitigate the concerns raised in the deployment of AI systems. Finally, the article concludes with the societal benefits of such future AI systems and the revenue opportunities made possible by the deployment of trustworthy and ethical AI systems.
APA, Harvard, Vancouver, ISO, and other styles
45

Hassan, Ali, Riza Sulaiman, Mansoor Abdullateef Abdulgabber, and Hasan Kahtan. "TOWARDS USER-CENTRIC EXPLANATIONS FOR EXPLAINABLE MODELS: A REVIEW." Journal of Information System and Technology Management 6, no. 22 (September 1, 2021): 36–50. http://dx.doi.org/10.35631/jistm.622004.

Full text
Abstract:
Recent advances in artificial intelligence, particularly in the field of machine learning (ML), have shown that these models can be incredibly successful, producing encouraging results and leading to diverse applications. Despite the promise of artificial intelligence, without transparency of machine learning models, it is difficult for stakeholders to trust the results of such models, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study provides a review of the literature on human-centric Machine Learning and new approaches to user-centric explanations for deep learning models. We highlight the challenges and opportunities facing this area of research. The goal is for this review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of implementing machine learning models is gaining the trust of end-users.
APA, Harvard, Vancouver, ISO, and other styles
46

HOW. "Future-Ready Strategic Oversight of Multiple Artificial Superintelligence-Enabled Adaptive Learning Systems via Human-Centric Explainable AI-Empowered Predictive Optimizations of Educational Outcomes." Big Data and Cognitive Computing 3, no. 3 (July 31, 2019): 46. http://dx.doi.org/10.3390/bdcc3030046.

Full text
Abstract:
Artificial intelligence-enabled adaptive learning systems (AI-ALS) have been increasingly utilized in education. Schools are usually afforded the freedom to deploy the AI-ALS that they prefer. However, even before artificial intelligence autonomously develops into artificial superintelligence in the future, it would be remiss to leave the students entirely to the AI-ALS without any independent oversight of potential issues. For example, if the students score well in formative assessments within the AI-ALS but subsequently perform badly in paper-based post-tests, or if the relentless algorithm of a particular AI-ALS is suspected of causing undue stress for the students, these issues should be addressed by educational stakeholders. Policy makers and educational stakeholders should collaborate to analyze the data from multiple AI-ALS deployed in different schools to achieve strategic oversight. The current paper provides exemplars to illustrate how this future-ready strategic oversight could be implemented using artificial intelligence-based Bayesian network software to analyze the data from five dissimilar AI-ALS, each deployed in a different school. Besides using descriptive analytics to reveal potential issues experienced by students within each AI-ALS, this human-centric, AI-empowered approach also enables explainable predictive analytics of the students’ learning outcomes in paper-based summative assessments after training is completed in each AI-ALS.
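To make the analysis pattern more tangible, here is a hypothetical sketch of a small Bayesian network over discretized adaptive-learning indicators and a paper-based post-test outcome, fitted and queried with the open-source pgmpy library. The variables, their states, the network structure, and the use of pgmpy (rather than the software employed in the paper) are all assumptions made purely for illustration.

import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Synthetic, discretized student records (one row per student).
df = pd.DataFrame({
    "formative_score": ["high", "high", "low", "low", "high", "low"] * 50,
    "engagement":      ["high", "low", "high", "low", "high", "low"] * 50,
    "posttest":        ["pass", "pass", "pass", "fail", "pass", "fail"] * 50,
})

# Simple structure: both indicators influence the post-test outcome.
model = BayesianNetwork([("formative_score", "posttest"), ("engagement", "posttest")])
model.fit(df, estimator=MaximumLikelihoodEstimator)

# Explainable prediction: P(posttest) for a student profile of interest.
infer = VariableElimination(model)
print(infer.query(["posttest"], evidence={"formative_score": "high", "engagement": "low"}))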
APA, Harvard, Vancouver, ISO, and other styles
47

Hacker, Philipp, Ralf Krestel, Stefan Grundmann, and Felix Naumann. "Explainable AI under contract and tort law: legal incentives and technical challenges." Artificial Intelligence and Law 28, no. 4 (January 19, 2020): 415–39. http://dx.doi.org/10.1007/s10506-020-09260-6.

Full text
Abstract:
This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies, in medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability and demonstrate the effect in a technical case study in the context of spam classification.
APA, Harvard, Vancouver, ISO, and other styles
48

Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.

Full text
Abstract:
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning, revealing how its decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types and highlight the algorithms used to increase interpretability in medical imaging. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines for developing better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides directions to guide developers and researchers in prospective investigations of clinical topics, particularly applications involving medical imaging.
APA, Harvard, Vancouver, ISO, and other styles
49

Lötsch, Jörn, Dario Kringel, and Alfred Ultsch. "Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients." BioMedInformatics 2, no. 1 (December 22, 2021): 1–17. http://dx.doi.org/10.3390/biomedinformatics2010001.

Full text
Abstract:
The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When the diagnosis or selection of a therapy is no longer made solely by the physician, but to a significant extent by a machine using algorithms, decisions become nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. These are a class of very general algorithms (artificial neural networks, classifiers, etc.), which are tuned based on examples to optimize the classification of new, unseen cases. It is pointless to ask for an explanation for a decision. A detailed understanding of the mathematical details of an AI algorithm may be possible for experts in statistics or computer science. However, when it comes to the fate of human beings, this “developer’s explanation” is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that XAIs must be able to explain in detail the decisions made by the AI to the experts in the field.
APA, Harvard, Vancouver, ISO, and other styles
50

Snidaro, Lauro, Jesús García Herrero, James Llinas, and Erik Blasch. "Recent Trends in Context Exploitation for Information Fusion and AI." AI Magazine 40, no. 3 (September 30, 2019): 14–27. http://dx.doi.org/10.1609/aimag.v40i3.2864.

Full text
Abstract:
AI is related to information fusion (IF). Many methods in AI that use perception and reasoning align with the functionalities of high-level IF (HLIF) operations that estimate situational and impact states. To achieve HLIF sensor, user, and mission management operations, AI elements of planning, control, and knowledge representation are needed. Both AI reasoning and IF inferencing and estimation exploit context as a basis for achieving deeper levels of understanding of complex world conditions. Open challenges for AI researchers include achieving concept generalization, response adaptation, and situation assessment. This article presents a brief survey of recent and current research on the exploitation of context in IF and discusses the interplay and similarities between IF, context exploitation, and AI. In addition, it highlights the role that contextual information can play in the next generation of adaptive intelligent systems based on explainable AI. The article describes terminology, addresses notional processing concepts, and lists references for readers to follow up on and explore the ideas offered herein.
APA, Harvard, Vancouver, ISO, and other styles
