Journal articles on the topic 'Explainable artificial intelligence'

Consult the top 50 journal articles for your research on the topic 'Explainable artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ridley, Michael. "Explainable Artificial Intelligence." Ethics of Artificial Intelligence, no. 299 (September 19, 2019): 28–46. http://dx.doi.org/10.29242/rli.299.3.

2

Gunning, David, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. "XAI—Explainable artificial intelligence." Science Robotics 4, no. 37 (December 18, 2019): eaay7120. http://dx.doi.org/10.1126/scirobotics.aay7120.

3

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract:
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help readers understand the necessity for Explainable AI, as well as the various methodologies used across different areas, all in one place. The study clarifies how model interpretability and Explainable AI work together, and investigates Explainable artificial intelligence approaches and their applications in multiple domains. In particular, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to solve challenges faced by various businesses, and it illustrates the significance of explainable artificial intelligence across a large number of disciplines.
4

Abdelmonem, Ahmed, and Nehal N. Mostafa. "Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection." Fusion: Practice and Applications 3, no. 1 (2021): 54–69. http://dx.doi.org/10.54216/fpa.030104.

Abstract:
Explainable artificial intelligence has received great research attention in the past few years amid the widespread adoption of black-box techniques in sensitive fields such as medical care and self-driving cars. Artificial intelligence needs explainable methods to discover model biases; explainability leads to fairness and transparency in the model. Making artificial intelligence models explainable and interpretable is challenging when implementing black-box models. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popular method for dealing with such data and acquiring more trustworthy, helpful, and precise insights. Compared with more traditional data fusion methods, machine learning's capacity to learn automatically from experience without explicit programming significantly improves fusion's computational and predictive power. This paper comprehensively studies the main explainable artificial intelligence methods for anomaly detection. We propose criteria for measuring the transparency of data fusion analytics techniques, define the evaluation metrics used in explainable artificial intelligence, present several applications of explainable artificial intelligence, and provide a case study of anomaly detection with machine learning fusion. Finally, we discuss the key challenges and future directions in explainable artificial intelligence.
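The kind of pipeline this survey discusses can be pictured with a small, hedged sketch (not the authors' code): an unsupervised anomaly detector whose scores are attributed to input features by a model-agnostic SHAP explainer. The synthetic "fused" features, the choice of IsolationForest, and the use of KernelExplainer are illustrative assumptions.

```python
# Illustrative sketch only: explaining anomaly scores with model-agnostic SHAP.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))   # stand-in for fused sensor/feature data
X[:5] += 6                      # inject a few obvious anomalies

detector = IsolationForest(random_state=0).fit(X)

# KernelExplainer only needs a scoring function and background data,
# so it can attribute the detector's anomaly score to input features.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(detector.decision_function, background)
contributions = explainer.shap_values(X[:5], nsamples=200)

print(contributions.shape)      # (5 suspected anomalies, 4 feature contributions)
```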
5

Sharma, Deepak Kumar, Jahanavi Mishra, Aeshit Singh, Raghav Govil, Gautam Srivastava, and Jerry Chun-Wei Lin. "Explainable Artificial Intelligence for Cybersecurity." Computers and Electrical Engineering 103 (October 2022): 108356. http://dx.doi.org/10.1016/j.compeleceng.2022.108356.

6

Karpov, O. E., D. A. Andrikov, V. A. Maksimenko, and A. E. Hramov. "EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MEDICINE." Vrach i informacionnye tehnologii, no. 2 (2022): 4–11. http://dx.doi.org/10.25881/18110193_2022_2_4.

7

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building added knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
8

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence." Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Abstract:
Recently, Artificial Intelligence (AI) has emerged as a field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. Given the increasing complexity of AI applications, their relationship with data analytics, and their ubiquity in critical domains such as medicine, defense, justice, and autonomous vehicles, there is an increasing need to provide domain experts with sound explanations of the results. All of these elements have contributed to Explainable Artificial Intelligence (XAI).
9

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
10

Allen, Ben. "Discovering Themes in Deep Brain Stimulation Research Using Explainable Artificial Intelligence." Biomedicines 11, no. 3 (March 3, 2023): 771. http://dx.doi.org/10.3390/biomedicines11030771.

Abstract:
Deep brain stimulation is a treatment that controls symptoms by changing brain activity. The complexity of how to best treat brain dysfunction with deep brain stimulation has spawned research into artificial intelligence approaches. Machine learning is a subset of artificial intelligence that uses computers to learn patterns in data and has many healthcare applications, such as an aid in diagnosis, personalized medicine, and clinical decision support. Yet, how machine learning models make decisions is often opaque. The spirit of explainable artificial intelligence is to use machine learning models that produce interpretable solutions. Here, we use topic modeling to synthesize recent literature on explainable artificial intelligence approaches to extracting domain knowledge from machine learning models relevant to deep brain stimulation. The results show that patient classification (i.e., diagnostic models, precision medicine) is the most common problem in deep brain stimulation studies that employ explainable artificial intelligence. Other topics concern attempts to optimize stimulation strategies and the importance of explainable methods. Overall, this review supports the potential for artificial intelligence to revolutionize deep brain stimulation by personalizing stimulation protocols and adapting stimulation in real time.
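As a hedged illustration of the topic-modeling step described above (not the study's code), the sketch below fits scikit-learn's LDA to a toy stand-in corpus; the documents, vectorizer settings, and number of topics are assumptions.

```python
# Illustrative sketch: LDA topic modeling over a stand-in corpus of abstracts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "deep brain stimulation patient classification with machine learning",
    "explainable models support clinical decision making and precision medicine",
    "optimizing stimulation parameters in real time with interpretable models",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}:", ", ".join(top_terms))
```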
11

Miller, Tim, Rosina Weber, and Daniele Magazzeni. "Report on the 2019 IJCAI Explainable Artificial Intelligence Workshop." AI Magazine 41, no. 1 (April 13, 2020): 103–5. http://dx.doi.org/10.1609/aimag.v41i1.5302.

Abstract:
This article reports on the Explainable Artificial Intelligence Workshop, held within the International Joint Conferences on Artificial Intelligence 2019 Workshop Program in Macau, August 11, 2019. With over 160 registered attendees, the workshop was the largest workshop at the conference. It featured an invited talk and 23 oral presentations, and closed with an audience discussion about where explainable artificial intelligence research stands.
12

Dikmen, Murat, and Catherine Burns. "Abstraction Hierarchy Based Explainable Artificial Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 319–23. http://dx.doi.org/10.1177/1071181320641073.

Abstract:
This work explores the application of Cognitive Work Analysis (CWA) in the context of Explainable Artificial Intelligence (XAI). We built an AI system using a loan evaluation data set and applied an XAI technique to obtain data-driven explanations for predictions. Using an Abstraction Hierarchy (AH), we generated domain knowledge-based explanations to accompany data-driven explanations. An online experiment was conducted to test the usefulness of AH-based explanations. Participants read financial profiles of loan applicants, the AI system’s loan approval/rejection decisions, and explanations that justify the decisions. Presence or absence of AH-based explanations was manipulated, and participants’ perceptions of the explanation quality was measured. The results showed that providing AH-based explanations helped participants learn about the loan evaluation process and improved the perceived quality of explanations. We conclude that a CWA approach can increase understandability when explaining the decisions made by AI systems.
13

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychologic requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychologic theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
14

Jiménez-Luna, José, Francesca Grisoni, and Gisbert Schneider. "Drug discovery with explainable artificial intelligence." Nature Machine Intelligence 2, no. 10 (October 2020): 573–84. http://dx.doi.org/10.1038/s42256-020-00236-4.

15

Miller, Tim. "'But why?' Understanding explainable artificial intelligence." XRDS: Crossroads, The ACM Magazine for Students 25, no. 3 (April 10, 2019): 20–25. http://dx.doi.org/10.1145/3313107.

16

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science and Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
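The "simplification" methods the review highlights (knowledge distillation and rule extraction) can be sketched, under illustrative assumptions, as surrogate-model distillation: a large model is approximated by a small decision tree whose rules can be read off directly. The synthetic data and model choices below are not from the paper.

```python
# Illustrative sketch: rule extraction by distilling a black box into a small tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Distillation: a shallow tree is trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
print(export_text(surrogate, feature_names=feature_names))           # readable rules
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
```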
17

Alufaisan, Yasmeen, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6618–26. http://dx.doi.org/10.1609/aaai.v35i8.16819.

Abstract:
Explainable AI provides insights to users into the why for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings whether explainable AI can improve actual human decision-making and the ability to identify the problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and AI prediction with explanation. We find providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed the strongest predictor for human decision accuracy was AI accuracy and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
18

Wongburi, Praewa, and Jae K. Park. "Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network." Sustainability 14, no. 10 (May 21, 2022): 6276. http://dx.doi.org/10.3390/su14106276.

Abstract:
Sludge Volume Index (SVI) is one of the most important operational parameters in an activated sludge process. It is difficult to predict SVI because of the nonlinearity of the data and variable operating conditions. With complex time-series data from Wastewater Treatment Plants (WWTPs), a Recurrent Neural Network (RNN) combined with Explainable Artificial Intelligence was applied to predict SVI and interpret the prediction results. The RNN architecture has been proven to handle time-series and non-uniform data efficiently, and, because of the complexity of the model, the relatively new Explainable Artificial Intelligence concept was used to interpret the results. Data were collected from the Nine Springs Wastewater Treatment Plant in Madison, Wisconsin, and were analyzed and cleaned using Python and data analytics approaches. An RNN model predicted SVI accurately after training on historical big data collected at the Nine Springs WWTP, and the Explainable Artificial Intelligence analysis was able to determine which input parameters most affected high SVI. The prediction of SVI will help WWTPs establish corrective measures for maintaining a stable SVI. The SVI prediction model and the Explainable Artificial Intelligence method will help the wastewater treatment sector improve operational performance, system management, and process reliability.
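A minimal, hedged sketch of the modelling idea (not the study's code): a recurrent network trained on sliding windows of plant time-series data to predict SVI. The synthetic data, window length, and layer sizes are assumptions, and the interpretation step the authors apply afterwards is omitted for brevity.

```python
# Illustrative sketch: recurrent-network regression on time-series windows.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
sensors = rng.normal(size=(1000, 5))                           # stand-in plant readings
svi = 0.5 * sensors[:, 0] + rng.normal(scale=0.1, size=1000)   # stand-in SVI target

window = 24
X = np.stack([sensors[i:i + window] for i in range(len(sensors) - window)])
y = svi[window:]

model = keras.Sequential([
    keras.layers.Input(shape=(window, sensors.shape[1])),
    keras.layers.SimpleRNN(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print(model.predict(X[:3], verbose=0).ravel())                 # predicted SVI for 3 windows
```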
19

Webb-Robertson, Bobbie-Jo M. "Explainable Artificial Intelligence in Endocrinological Medical Research." Journal of Clinical Endocrinology & Metabolism 106, no. 7 (May 21, 2021): e2809-e2810. http://dx.doi.org/10.1210/clinem/dgab237.

20

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Abstract:
Intrusion detection systems (IDS) are widely utilized in the cyber security field to prevent and mitigate threats and to keep threats and vulnerabilities out of computer networks. A range of machine learning methods is available for developing effective intrusion detection systems, and machine learning ensemble methods have a well-proven track record. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICIDS-2017 dataset were selected. The proposed IDS uses machine learning algorithms such as decision trees, random forests, and SVM. After training these models, an ensemble voting classifier was added and achieved an accuracy of 96.25%. Furthermore, the proposed model incorporates the XAI algorithm LIME for better explainability and understanding of the black-box approach, supporting reliable intrusion detection. Our experimental results confirmed that XAI with LIME is more explanation-friendly and more responsive.
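A hedged sketch of the pipeline the abstract outlines, with synthetic data standing in for CICIDS-2017 features (this is not the paper's code): a soft-voting ensemble of a decision tree, a random forest, and an SVM, with LIME explaining a single prediction.

```python
# Illustrative sketch: voting ensemble for intrusion detection + a LIME explanation.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
feature_names = [f"flow_feature_{i}" for i in range(X.shape[1])]  # hypothetical names

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # soft voting exposes predict_proba, which LIME needs
).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["benign", "attack"], mode="classification"
)
explanation = explainer.explain_instance(X[0], ensemble.predict_proba, num_features=5)
print(explanation.as_list())  # features pushing this flow toward benign or attack
```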
21

Babaei, Golnoosh, Paolo Giudici, and Emanuela Raffinetti. "Explainable artificial intelligence for crypto asset allocation." Finance Research Letters 47 (June 2022): 102941. http://dx.doi.org/10.1016/j.frl.2022.102941.

22

Miller, Tim, Robert Hoffman, Ofra Amir, and Andreas Holzinger. "Special issue on Explainable Artificial Intelligence (XAI)." Artificial Intelligence 307 (June 2022): 103705. http://dx.doi.org/10.1016/j.artint.2022.103705.

23

Javaid, Kumail, Ayesha Siddiqa, Syed Abbas Zilqurnain Naqvi, Allah Ditta, Muhammad Ahsan, M. A. Khan, Tariq Mahmood, and Muhammad Adnan Khan. "Explainable Artificial Intelligence Solution for Online Retail." Computers, Materials & Continua 71, no. 3 (2022): 4425–42. http://dx.doi.org/10.32604/cmc.2022.022984.

24

Alonso-Moral, Jose Maria, Corrado Mencar, and Hisao Ishibuchi. "Explainable and Trustworthy Artificial Intelligence [Guest Editorial]." IEEE Computational Intelligence Magazine 17, no. 1 (February 2022): 14–15. http://dx.doi.org/10.1109/mci.2021.3129953.

25

Han, Juhee, and Younghoon Lee. "Explainable Artificial Intelligence-Based Competitive Factor Identification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–11. http://dx.doi.org/10.1145/3451529.

Abstract:
Competitor analysis is an essential component of corporate strategy, providing both offensive and defensive strategic contexts to identify opportunities and threats. The rapid development of social media has recently led to several methodologies and frameworks facilitating competitor analysis through online reviews. Existing studies only focused on detecting comparative sentences in review comments or utilized low-performance models. However, this study proposes a novel approach to identifying the competitive factors using a recent explainable artificial intelligence approach at the comprehensive product feature level. We establish a model to classify the review comments for each corresponding product and evaluate the relevance of each keyword in such comments during the classification process. We then extract and prioritize the keywords and determine their competitiveness based on relevance. Our experiment results show that the proposed method can effectively extract the competitive factors both qualitatively and quantitatively.
26

Sandu, Marian Gabriel, and Stefan Trausan-Matu. "Explainable Artificial Intelligence in Natural Language Processing." International Journal of User-System Interaction 14, no. 2 (2021): 68–84. http://dx.doi.org/10.37789/ijusi.2021.14.2.2.

27

Jung, Chanyil, and Hoojin Lee. "Explainable Artificial Intelligence based Process Mining Analysis Automation." Journal of the Institute of Electronics and Information Engineers 56, no. 11 (November 30, 2019): 45–51. http://dx.doi.org/10.5573/ieie.2019.56.11.45.

28

Althoff, Daniel, Helizani Couto Bazame, and Jessica Garcia Nascimento. "Untangling hybrid hydrological models with explainable artificial intelligence." H2Open Journal 4, no. 1 (January 1, 2021): 13–28. http://dx.doi.org/10.2166/h2oj.2021.066.

Abstract:
Hydrological models are valuable tools for developing streamflow predictions in unmonitored catchments to increase our understanding of hydrological processes. A recent effort has been made in the development of hybrid (conceptual/machine learning) models that can preserve some of the hydrological processes represented by conceptual models and can improve streamflow predictions. However, these studies have not explored how the data-driven component of hybrid models resolved runoff routing. In this study, explainable artificial intelligence (XAI) techniques are used to turn a ‘black-box’ model into a ‘glass box’ model. The hybrid models reduced the root-mean-square error of the simulated streamflow values by approximately 27, 50, and 24% for stations 17120000, 27380000, and 33680000, respectively, relative to the traditional method. XAI techniques helped unveil the importance of accounting for soil moisture in hydrological models. Differing from purely data-driven hydrological models, the inclusion of the production storage in the proposed hybrid model, which is responsible for estimating the water balance, reduced the short- and long-term dependencies of input variables for streamflow prediction. In addition, soil moisture controlled water percolation, which was the main predictor of streamflow. This finding is because soil moisture controls the underlying mechanisms of groundwater flow into river streams.
29

Cambria, Erik, Akshi Kumar, Mahmoud Al-Ayyoub, and Newton Howard. "Guest Editorial: Explainable artificial intelligence for sentiment analysis." Knowledge-Based Systems 238 (February 2022): 107920. http://dx.doi.org/10.1016/j.knosys.2021.107920.

30

Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.

Abstract:
Sarcasm detection in dialogues has been gaining popularity among natural language processing (NLP) researchers with the increased use of conversational threads on social media. Capturing the knowledge of the domain of discourse, context propagation during the course of dialogue, and situational context and tone of the speaker are some important features to train the machine learning models for detecting sarcasm in real time. As situational comedies vibrantly represent human mannerism and behaviour in everyday real-life situations, this research demonstrates the use of an ensemble supervised learning algorithm to detect sarcasm in the benchmark dialogue dataset, MUStARD. The punch-line utterance and its associated context are taken as features to train the eXtreme Gradient Boosting (XGBoost) method. The primary goal is to predict sarcasm in each utterance of the speaker using the chronological nature of a scene. Further, it is vital to prevent model bias and help decision makers understand how to use the models in the right way. Therefore, as a twin goal of this research, we make the learning model used for conversational sarcasm detection interpretable. This is done using two post hoc interpretability approaches, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP), to generate explanations for the output of a trained classifier. The classification results clearly depict the importance of capturing the intersentence context to detect sarcasm in conversational threads. The interpretability methods show the words (features) that influence the decision of the model the most and help the user understand how the model is making the decision for detecting sarcasm in dialogues.
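As a hedged illustration of the interpretability step described above (not the paper's code), the sketch below trains an XGBoost classifier on TF-IDF features of toy utterances and inspects it with SHAP; the actual study works on the MUStARD dialogue dataset with context features and also applies LIME.

```python
# Illustrative sketch: XGBoost sarcasm classifier explained with SHAP.
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

utterances = [
    "oh great, another meeting, exactly what I needed",
    "thanks for the help, that fixed it",
    "sure, because that plan worked so well last time",
    "the results look good, nice work",
]
labels = [1, 0, 1, 0]  # 1 = sarcastic, 0 = not sarcastic (toy labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(utterances).toarray()

model = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
model.fit(X, labels)

# SHAP attributes each prediction to individual tokens of the utterance.
shap_values = shap.TreeExplainer(model).shap_values(X)
tokens = vectorizer.get_feature_names_out()
print(sorted(zip(tokens, shap_values[0]), key=lambda t: abs(t[1]), reverse=True)[:5])
```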
31

Sahakyan, Maria, Zeyar Aung, and Talal Rahwan. "Explainable Artificial Intelligence for Tabular Data: A Survey." IEEE Access 9 (2021): 135392–422. http://dx.doi.org/10.1109/access.2021.3116481.

32

Raihan, Md Johir, and Abdullah-Al Nahid. "Malaria cell image classification by explainable artificial intelligence." Health and Technology 12, no. 1 (November 11, 2021): 47–58. http://dx.doi.org/10.1007/s12553-021-00620-z.

33

Chary, Michael, Ed W. Boyer, and Michele M. Burns. "Diagnosis of Acute Poisoning using explainable artificial intelligence." Computers in Biology and Medicine 134 (July 2021): 104469. http://dx.doi.org/10.1016/j.compbiomed.2021.104469.

34

Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson. "Explainable Artificial Intelligence for Developing Smart Cities Solutions." Smart Cities 3, no. 4 (November 13, 2020): 1353–82. http://dx.doi.org/10.3390/smartcities3040065.

Abstract:
Traditional Artificial Intelligence (AI) technologies used in developing smart cities solutions, Machine Learning (ML) and recently Deep Learning (DL), rely more on utilising best representative training datasets and features engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome of solutions less explainable, i.e., it is often not possible to explain the results of the model. There is a growing concern among policymakers in cities with this lack of explainability of AI solutions, and this is considered a major hindrance in the wider acceptability and trust in such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart cities flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding issues is an important aspect of any flood monitoring solution. Typical solutions for this problem involve the use of cameras to capture images showing the affected areas in real-time with different objects such as leaves, plastic bottles etc., and building a DL-based classifier to detect such objects and classify blockages based on the presence and coverage of these objects in the images. In this work, we uniquely propose an Explainable AI solution using DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level and semantic rules designed with close consultation with experts carry out the classification. By using the expert knowledge in the flooding context, our hybrid classifier provides the flexibility on categorising the image using objects and their coverage relationships. The experimental results demonstrated with a real-world use case showed that this hybrid approach of image classification has on average 11% improvement (F-Measure) in image classification performance compared to DL-only classifier. It also has the distinct advantage of integrating experts’ knowledge on defining the decision-making rules to represent the complex circumstances and using such knowledge to explain the results.
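The division of labour in the hybrid classifier can be pictured with a small, hedged sketch: a (hypothetical) detector reports how much of the gully each object type covers, and a rule layer assigns the blockage class. The thresholds, object names, and class labels below are invented for illustration; the paper encodes its actual rules with Semantic Web technologies designed in consultation with domain experts.

```python
# Illustrative sketch only: rule layer over hypothetical detector outputs.
def classify_blockage(coverage):
    """coverage: dict mapping detected object types to the fraction of the gully covered."""
    debris = coverage.get("leaves", 0.0) + coverage.get("plastic_bottles", 0.0)
    if debris > 0.6:
        return "blocked"
    if debris > 0.2:
        return "partially_blocked"
    return "clear"

# Example detector output for one camera frame (invented values).
print(classify_blockage({"leaves": 0.45, "plastic_bottles": 0.25}))  # -> "blocked"
```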
35

de Bruijn, Hans, Marijn Janssen, and Martijn Warnier. "Transparantie en Explainable Artificial Intelligence: beperkingen en strategieën" [Transparency and Explainable Artificial Intelligence: Limitations and Strategies]. Bestuurskunde 29, no. 4 (December 2020): 21–29. http://dx.doi.org/10.5553/bk/092733872020029004003.

36

Gordon, Lauren, Teodor Grantcharov, and Frank Rudzicz. "Explainable Artificial Intelligence for Safe Intraoperative Decision Support." JAMA Surgery 154, no. 11 (November 1, 2019): 1064. http://dx.doi.org/10.1001/jamasurg.2019.2821.

37

Páez, Andrés. "The Pragmatic Turn in Explainable Artificial Intelligence (XAI)." Minds and Machines 29, no. 3 (May 29, 2019): 441–59. http://dx.doi.org/10.1007/s11023-019-09502-w.

38

Shkileva, Ksenia, and Nikolai Zolotykh. "Explainable Artificial Intelligence Techniques in Medical Signal Processing." Procedia Computer Science 212 (2022): 474–84. http://dx.doi.org/10.1016/j.procs.2022.11.031.

39

Schmid, Stefka. "Trustworthy and Explainable: A European Vision of (Weaponised) Artificial Intelligence." Die Friedens-Warte 95, no. 3-4 (2022): 290. http://dx.doi.org/10.35998/fw-2022-0013.

40

Papadakis, Emmanuel, Ben Adams, Song Gao, Bruno Martins, George Baryannis, and Alina Ristea. "Explainable artificial intelligence in the spatial domain (X‐GeoAI)." Transactions in GIS 26, no. 6 (September 2022): 2413–14. http://dx.doi.org/10.1111/tgis.12996.

41

Corchado, Juan M., Sascha Ossowski, Sara Rodríguez-González, and Fernando De la Prieta. "Advances in Explainable Artificial Intelligence and Edge Computing Applications." Electronics 11, no. 19 (September 28, 2022): 3111. http://dx.doi.org/10.3390/electronics11193111.

42

He, Bohao, Yanghe Zhao, Wei Mao, and Robert J. Griffin-Nolan. "Explainable artificial intelligence reveals environmental constraints in seagrass distribution." Ecological Indicators 144 (November 2022): 109523. http://dx.doi.org/10.1016/j.ecolind.2022.109523.

43

Zhang, Zijiao, Chong Wu, Shiyou Qu, and Xiaofang Chen. "An explainable artificial intelligence approach for financial distress prediction." Information Processing & Management 59, no. 4 (July 2022): 102988. http://dx.doi.org/10.1016/j.ipm.2022.102988.

44

Raunak, M. S., and Rick Kuhn. "Explainable artificial intelligence and machine learning [Guest Editors’ introduction]." Computer 54, no. 10 (October 2021): 25–27. http://dx.doi.org/10.1109/mc.2021.3099041.

45

Park, Seobin, Saif Haider Kayani, Kwangjun Euh, Eunhyeok Seo, Hayeol Kim, Sangeun Park, Bishnu Nand Yadav, Seong Jin Park, Hyokyung Sung, and Im Doo Jung. "High strength aluminum alloys design via explainable artificial intelligence." Journal of Alloys and Compounds 903 (May 2022): 163828. http://dx.doi.org/10.1016/j.jallcom.2022.163828.

46

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
47

Pérez-Landa, Gabriel Ichcanziho, Octavio Loyola-González, and Miguel Angel Medina-Pérez. "An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets." Applied Sciences 11, no. 22 (November 16, 2021): 10801. http://dx.doi.org/10.3390/app112210801.

Abstract:
Xenophobia is a social and political behavior that has been present in our societies since the beginning of humanity: feelings of hatred, fear, or resentment toward people from communities different from ours. With the rise of social networks like Twitter, hate speech spreads swiftly because of the pseudo-anonymity that these platforms provide. Sometimes violent behavior on social networks that begins as threats or insults against third parties crosses the boundaries of the Internet and becomes an act of real physical violence. Hence, this proposal aims to correctly classify xenophobic posts on social networks, specifically on Twitter. In addition, we collected a xenophobic tweets database from which we also extracted new features by using a Natural Language Processing (NLP) approach. Then, we provide an Explainable Artificial Intelligence (XAI) model, allowing us to understand better why a post is considered xenophobic. Consequently, we provide a set of contrast patterns describing xenophobic tweets, which could help decision-makers prevent acts of violence caused by xenophobic posts on Twitter. Finally, our interpretable results, based on our new feature representation approach jointly with a contrast pattern-based classifier, obtain classification results similar to those of other feature representations combined with prominent machine learning classifiers, which are not easy for an expert in the application area to understand.
48

Gramegna, Alex, and Paolo Giudici. "Why to Buy Insurance? An Explainable Artificial Intelligence Approach." Risks 8, no. 4 (December 14, 2020): 137. http://dx.doi.org/10.3390/risks8040137.

Abstract:
We propose an Explainable AI model that can be employed in order to explain why a customer buys or abandons a non-life insurance coverage. The method consists in applying similarity clustering to the Shapley values that were obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technologically-based insurance service (Insurtech), allowing to understand, in real time, the factors that most contribute to customers’ decisions, thereby gaining proactive insights on their needs. We prove the validity of our model with an empirical analysis that was conducted on data regarding purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results from the analysis reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which can predict their buying or churn behaviour well.
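A hedged sketch of the method as described, with synthetic data in place of the insurance micro-policy records: SHAP values are computed from an XGBoost classifier and customers are then clustered in that explanation space, so groups share the reasons behind the prediction rather than raw feature values.

```python
# Illustrative sketch: similarity clustering of SHAP values from an XGBoost model.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Stand-in for customer features and buy/abandon labels.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss").fit(X, y)

# One explanation vector per customer: shape (n_customers, n_features).
shap_values = shap.TreeExplainer(model).shap_values(X)

# Cluster customers by *why* the model predicts they will buy or churn.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(shap_values)
print(np.bincount(clusters))  # number of customers in each explanation cluster
```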
49

Linkov, Igor, Stephanie Galaitsi, Benjamin D. Trump, Jeffrey M. Keisler, and Alexander Kott. "Cybertrust: From Explainable to Actionable and Interpretable Artificial Intelligence." Computer 53, no. 9 (September 2020): 91–96. http://dx.doi.org/10.1109/mc.2020.2993623.

50

Ridley, Michael. "Explainable Artificial Intelligence (XAI)." Information Technology and Libraries 41, no. 2 (June 15, 2022). http://dx.doi.org/10.6017/ital.v41i2.14683.

Abstract:
The field of explainable artificial intelligence (XAI) advances techniques, processes, and strategies that provide explanations for the predictions, recommendations, and decisions of opaque and complex machine learning systems. Increasingly academic libraries are providing library users with systems, services, and collections created and delivered by machine learning. Academic libraries should adopt XAI as a tool set to verify and validate these resources, and advocate for public policy regarding XAI that serves libraries, the academy, and the public interest.