Follow this link to see other types of publications on the topic: Explainable artificial intelligence.

Journal articles on the topic "Explainable artificial intelligence"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Explainable artificial intelligence".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Ridley, Michael. "Explainable Artificial Intelligence". Ethics of Artificial Intelligence, no. 299 (September 19, 2019): 28–46. http://dx.doi.org/10.29242/rli.299.3.

2

Gunning, David, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. "XAI—Explainable artificial intelligence". Science Robotics 4, no. 37 (December 18, 2019): eaay7120. http://dx.doi.org/10.1126/scirobotics.aay7120.

3

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help readers understand the need for Explainable AI, as well as the various methodologies used in different areas, all in one place. The study clarifies how model interpretability and Explainable AI work together. The paper investigates Explainable artificial intelligence approaches and their applications in multiple domains. In particular, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to address the challenges faced by various businesses. The article illustrates the significance of explainable artificial intelligence across a vast number of disciplines.
4

Abdelmonem, Ahmed, and Nehal N. Mostafa. "Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection". Fusion: Practice and Applications 3, no. 1 (2021): 54–69. http://dx.doi.org/10.54216/fpa.030104.

Abstract
Explainable artificial intelligence has received great research attention in the past few years, driven by the widespread use of black-box techniques in sensitive fields such as medical care and self-driving cars. Artificial intelligence needs explainable methods to discover model biases. Explainable artificial intelligence will lead to fairness and transparency in the model. Making artificial intelligence models explainable and interpretable is challenging when implementing black-box models. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popular method for dealing with such data and acquiring more trustworthy, helpful, and precise insights. Compared to other, more traditional data fusion methods, machine learning's capacity to automatically learn from experience without explicit programming significantly improves fusion's computational and predictive power. This paper comprehensively studies the most prominent explainable artificial intelligence methods for anomaly detection. We propose the criteria a transparency model requires in order to assess data fusion analytics techniques, and we define the different evaluation metrics used in explainable artificial intelligence. We present applications of explainable artificial intelligence, and we provide a case study of anomaly detection with machine learning fusion. Finally, we discuss the key challenges and future directions in explainable artificial intelligence.
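To give a feel for the kind of pipeline this abstract surveys, the following is a minimal sketch that pairs an anomaly detector over fused features with a model-agnostic SHAP explanation. The synthetic data and the library choices (scikit-learn, shap) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: anomaly detection over fused features, explained with SHAP.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # stand-in for fused sensor/log features
X[:10] += 4.0                        # inject a few obvious anomalies

detector = IsolationForest(random_state=0).fit(X)
scores = detector.decision_function(X)           # lower score = more anomalous

# Model-agnostic explanation of the anomaly score for the injected outliers
explainer = shap.Explainer(detector.decision_function, X)
attributions = explainer(X[:10])
print(attributions.values.shape)                 # (10, 6): per-feature contributions
```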
5

Sharma, Deepak Kumar, Jahanavi Mishra, Aeshit Singh, Raghav Govil, Gautam Srivastava, and Jerry Chun-Wei Lin. "Explainable Artificial Intelligence for Cybersecurity". Computers and Electrical Engineering 103 (October 2022): 108356. http://dx.doi.org/10.1016/j.compeleceng.2022.108356.

6

Karpov, O. E., D. A. Andrikov, V. A. Maksimenko, and A. E. Hramov. "EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MEDICINE". Vrach i informacionnye tehnologii, no. 2 (2022): 4–11. http://dx.doi.org/10.25881/18110193_2022_2_4.

7

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence". Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Abstract
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building added knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
8

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence". Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Abstract
Recently, Artificial Intelligence (AI) has emerged as a field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and the ubiquity of demanding applications in a variety of critical domains such as medicine, defense, justice, and autonomous vehicles, there is an increasing need to accompany results with sound explanations for domain experts. All of these elements have contributed to Explainable Artificial Intelligence (XAI).
9

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence". Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
10

Allen, Ben. "Discovering Themes in Deep Brain Stimulation Research Using Explainable Artificial Intelligence". Biomedicines 11, no. 3 (March 3, 2023): 771. http://dx.doi.org/10.3390/biomedicines11030771.

Abstract
Deep brain stimulation is a treatment that controls symptoms by changing brain activity. The complexity of how to best treat brain dysfunction with deep brain stimulation has spawned research into artificial intelligence approaches. Machine learning is a subset of artificial intelligence that uses computers to learn patterns in data and has many healthcare applications, such as an aid in diagnosis, personalized medicine, and clinical decision support. Yet, how machine learning models make decisions is often opaque. The spirit of explainable artificial intelligence is to use machine learning models that produce interpretable solutions. Here, we use topic modeling to synthesize recent literature on explainable artificial intelligence approaches to extracting domain knowledge from machine learning models relevant to deep brain stimulation. The results show that patient classification (i.e., diagnostic models, precision medicine) is the most common problem in deep brain stimulation studies that employ explainable artificial intelligence. Other topics concern attempts to optimize stimulation strategies and the importance of explainable methods. Overall, this review supports the potential for artificial intelligence to revolutionize deep brain stimulation by personalizing stimulation protocols and adapting stimulation in real time.
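The topic-modelling step described above can be sketched with a standard LDA pipeline. The three placeholder abstracts and the scikit-learn implementation below are assumptions used only to show the mechanics, not the study's corpus or code.

```python
# Illustrative LDA topic-modelling sketch (placeholder corpus, not the study's data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "explainable machine learning classifies patients for deep brain stimulation",
    "interpretable models optimize stimulation parameters in real time",
    "shap values reveal which features drive diagnostic predictions",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {top_terms}")            # themes emerging from the corpus
```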
11

Miller, Tim, Rosina Weber, and Daniele Magazenni. "Report on the 2019 IJCAI Explainable Artificial Intelligence Workshop". AI Magazine 41, no. 1 (April 13, 2020): 103–5. http://dx.doi.org/10.1609/aimag.v41i1.5302.

Abstract
This article reports on the Explainable Artificial Intelligence Workshop, held within the International Joint Conferences on Artificial Intelligence 2019 Workshop Program in Macau, August 11, 2019. With over 160 registered attendees, the workshop was the largest workshop at the conference. It featured an invited talk and 23 oral presentations, and closed with an audience discussion about where explainable artificial intelligence research stands.
12

Dikmen, Murat, and Catherine Burns. "Abstraction Hierarchy Based Explainable Artificial Intelligence". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 319–23. http://dx.doi.org/10.1177/1071181320641073.

Abstract
This work explores the application of Cognitive Work Analysis (CWA) in the context of Explainable Artificial Intelligence (XAI). We built an AI system using a loan evaluation data set and applied an XAI technique to obtain data-driven explanations for predictions. Using an Abstraction Hierarchy (AH), we generated domain knowledge-based explanations to accompany data-driven explanations. An online experiment was conducted to test the usefulness of AH-based explanations. Participants read financial profiles of loan applicants, the AI system’s loan approval/rejection decisions, and explanations that justify the decisions. Presence or absence of AH-based explanations was manipulated, and participants’ perceptions of the explanation quality were measured. The results showed that providing AH-based explanations helped participants learn about the loan evaluation process and improved the perceived quality of explanations. We conclude that a CWA approach can increase understandability when explaining the decisions made by AI systems.
13

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program". AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
14

Jiménez-Luna, José, Francesca Grisoni, and Gisbert Schneider. "Drug discovery with explainable artificial intelligence". Nature Machine Intelligence 2, no. 10 (October 2020): 573–84. http://dx.doi.org/10.1038/s42256-020-00236-4.

15

Miller, Tim. ""But why?" Understanding explainable artificial intelligence". XRDS: Crossroads, The ACM Magazine for Students 25, no. 3 (April 10, 2019): 20–25. http://dx.doi.org/10.1145/3313107.

16

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance". Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science and Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
17

Alufaisan, Yasmeen, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6618–26. http://dx.doi.org/10.1609/aaai.v35i8.16819.

Abstract
Explainable AI provides insights to users into the why for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings whether explainable AI can improve actual human decision-making and the ability to identify the problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and AI prediction with explanation. We find providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed the strongest predictor for human decision accuracy was AI accuracy and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
18

Wongburi, Praewa, and Jae K. Park. "Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network". Sustainability 14, no. 10 (May 21, 2022): 6276. http://dx.doi.org/10.3390/su14106276.

Abstract
Sludge Volume Index (SVI) is one of the most important operational parameters in an activated sludge process. It is difficult to predict SVI because of the nonlinearity of data and variable operating conditions. With complex time-series data from Wastewater Treatment Plants (WWTPs), a Recurrent Neural Network (RNN) combined with Explainable Artificial Intelligence was applied to predict SVI and interpret the prediction result. RNN architectures have been proven to efficiently handle time-series and non-uniform data. Moreover, due to the complexity of the model, the relatively new Explainable Artificial Intelligence concept was used to interpret the result. Data were collected from the Nine Springs Wastewater Treatment Plant, Madison, Wisconsin, and the data were analyzed and cleaned using a Python program and data analytics approaches. An RNN model predicted SVI accurately after training with historical big data collected at the Nine Springs WWTP. The Explainable Artificial Intelligence (AI) analysis was able to determine which input parameters most affected high SVI. The prediction of SVI will help WWTPs establish corrective measures to maintain a stable SVI. The SVI prediction model and Explainable Artificial Intelligence method will help the wastewater treatment sector to improve operational performance, system management, and process reliability.
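A minimal sketch of the kind of recurrent model the abstract refers to is given below. It is an assumed PyTorch LSTM regressor on synthetic plant data, not the authors' architecture; an explainer such as SHAP would then be applied on top of the trained model.

```python
# Assumed sketch: LSTM regressor for SVI from time-series plant parameters (synthetic data).
import torch
import torch.nn as nn

class SVIPredictor(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, time steps, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict SVI from the last time step

model = SVIPredictor(n_features=8)
window = torch.randn(4, 30, 8)             # 4 samples, 30 days, 8 operating parameters
print(model(window).shape)                 # torch.Size([4, 1])
```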
19

Webb-Robertson, Bobbie-Jo M. "Explainable Artificial Intelligence in Endocrinological Medical Research". Journal of Clinical Endocrinology & Metabolism 106, no. 7 (May 21, 2021): e2809–e2810. http://dx.doi.org/10.1210/clinem/dgab237.

20

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System". Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Abstract
Intrusion detection systems are widely utilized in the cyber security field to prevent and mitigate threats. Intrusion detection systems (IDS) help to keep threats and vulnerabilities out of computer networks. To develop effective intrusion detection systems, a range of machine learning methods are available. Machine learning ensemble methods have a well-proven track record when it comes to learning. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICIDS-2017 dataset were chosen. This paper proposes an intrusion detection system (IDS) using machine learning algorithms such as decision trees, random forests, and SVM. After training these models, an ensemble voting classifier was added and achieved an accuracy of 96.25%. Furthermore, the proposed model also incorporates the XAI algorithm LIME for better explainability and understanding of the black-box approach to reliable intrusion detection. Our experimental results confirmed that XAI LIME is more explanation-friendly and more responsive.
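A hedged sketch of the ensemble-plus-LIME workflow this abstract outlines is shown below. The synthetic data stands in for the CICIDS-2017 features, and the scikit-learn and lime calls are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of a voting-ensemble IDS explained with LIME (synthetic stand-in for CICIDS-2017).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
feature_names = [f"f{i}" for i in range(10)]

ids = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft",
).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["benign", "attack"])
explanation = explainer.explain_instance(X[0], ids.predict_proba, num_features=5)
print(explanation.as_list())   # features pushing this flow toward benign or attack
```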
21

Babaei, Golnoosh, Paolo Giudici, and Emanuela Raffinetti. "Explainable artificial intelligence for crypto asset allocation". Finance Research Letters 47 (June 2022): 102941. http://dx.doi.org/10.1016/j.frl.2022.102941.

22

Miller, Tim, Robert Hoffman, Ofra Amir, and Andreas Holzinger. "Special issue on Explainable Artificial Intelligence (XAI)". Artificial Intelligence 307 (June 2022): 103705. http://dx.doi.org/10.1016/j.artint.2022.103705.

23

Javaid, Kumail, Ayesha Siddiqa, Syed Abbas Zilqurnain Naqvi, Allah Ditta, Muhammad Ahsan, M. A. Khan, Tariq Mahmood, and Muhammad Adnan Khan. "Explainable Artificial Intelligence Solution for Online Retail". Computers, Materials & Continua 71, no. 3 (2022): 4425–42. http://dx.doi.org/10.32604/cmc.2022.022984.

24

Alonso-Moral, Jose Maria, Corrado Mencar, and Hisao Ishibuchi. "Explainable and Trustworthy Artificial Intelligence [Guest Editorial]". IEEE Computational Intelligence Magazine 17, no. 1 (February 2022): 14–15. http://dx.doi.org/10.1109/mci.2021.3129953.

25

Han, Juhee, and Younghoon Lee. "Explainable Artificial Intelligence-Based Competitive Factor Identification". ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–11. http://dx.doi.org/10.1145/3451529.

Abstract
Competitor analysis is an essential component of corporate strategy, providing both offensive and defensive strategic contexts to identify opportunities and threats. The rapid development of social media has recently led to several methodologies and frameworks facilitating competitor analysis through online reviews. Existing studies only focused on detecting comparative sentences in review comments or utilized low-performance models. However, this study proposes a novel approach to identifying the competitive factors using a recent explainable artificial intelligence approach at the comprehensive product feature level. We establish a model to classify the review comments for each corresponding product and evaluate the relevance of each keyword in such comments during the classification process. We then extract and prioritize the keywords and determine their competitiveness based on relevance. Our experiment results show that the proposed method can effectively extract the competitive factors both qualitatively and quantitatively.
26

Sandu, Marian Gabriel, and Stefan Trausan-Matu. "Explainable Artificial Intelligence in Natural Language Processing". International Journal of User-System Interaction 14, no. 2 (2021): 68–84. http://dx.doi.org/10.37789/ijusi.2021.14.2.2.

27

Jung, Chanyil, and Hoojin Lee. "Explainable Artificial Intelligence based Process Mining Analysis Automation". Journal of the Institute of Electronics and Information Engineers 56, no. 11 (November 30, 2019): 45–51. http://dx.doi.org/10.5573/ieie.2019.56.11.45.

28

Althoff, Daniel, Helizani Couto Bazame, and Jessica Garcia Nascimento. "Untangling hybrid hydrological models with explainable artificial intelligence". H2Open Journal 4, no. 1 (January 1, 2021): 13–28. http://dx.doi.org/10.2166/h2oj.2021.066.

Abstract
Hydrological models are valuable tools for developing streamflow predictions in unmonitored catchments to increase our understanding of hydrological processes. A recent effort has been made in the development of hybrid (conceptual/machine learning) models that can preserve some of the hydrological processes represented by conceptual models and can improve streamflow predictions. However, these studies have not explored how the data-driven component of hybrid models resolved runoff routing. In this study, explainable artificial intelligence (XAI) techniques are used to turn a ‘black-box’ model into a ‘glass box’ model. The hybrid models reduced the root-mean-square error of the simulated streamflow values by approximately 27, 50, and 24% for stations 17120000, 27380000, and 33680000, respectively, relative to the traditional method. XAI techniques helped unveil the importance of accounting for soil moisture in hydrological models. Differing from purely data-driven hydrological models, the inclusion of the production storage in the proposed hybrid model, which is responsible for estimating the water balance, reduced the short- and long-term dependencies of input variables for streamflow prediction. In addition, soil moisture controlled water percolation, which was the main predictor of streamflow. This finding is because soil moisture controls the underlying mechanisms of groundwater flow into river streams.
29

Cambria, Erik, Akshi Kumar, Mahmoud Al-Ayyoub, and Newton Howard. "Guest Editorial: Explainable artificial intelligence for sentiment analysis". Knowledge-Based Systems 238 (February 2022): 107920. http://dx.doi.org/10.1016/j.knosys.2021.107920.

30

Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues". Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.

Abstract
Sarcasm detection in dialogues has been gaining popularity among natural language processing (NLP) researchers with the increased use of conversational threads on social media. Capturing the knowledge of the domain of discourse, context propagation during the course of dialogue, and situational context and tone of the speaker are some important features to train the machine learning models for detecting sarcasm in real time. As situational comedies vibrantly represent human mannerism and behaviour in everyday real-life situations, this research demonstrates the use of an ensemble supervised learning algorithm to detect sarcasm in the benchmark dialogue dataset, MUStARD. The punch-line utterance and its associated context are taken as features to train the eXtreme Gradient Boosting (XGBoost) method. The primary goal is to predict sarcasm in each utterance of the speaker using the chronological nature of a scene. Further, it is vital to prevent model bias and help decision makers understand how to use the models in the right way. Therefore, as a twin goal of this research, we make the learning model used for conversational sarcasm detection interpretable. This is done using two post hoc interpretability approaches, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP), to generate explanations for the output of a trained classifier. The classification results clearly depict the importance of capturing the intersentence context to detect sarcasm in conversational threads. The interpretability methods show the words (features) that influence the decision of the model the most and help the user understand how the model is making the decision for detecting sarcasm in dialogues.
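As a rough illustration of the interpretability step described above, the sketch below trains a gradient-boosted classifier on stand-in utterance/context features and inspects it with SHAP's tree explainer. The feature representation and data are assumptions, not the MUStARD pipeline used in the paper.

```python
# Illustrative sketch: XGBoost sarcasm classifier explained with SHAP (synthetic features).
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                 # stand-in for utterance + context embeddings
y = rng.integers(0, 2, size=200)               # 1 = sarcastic utterance

clf = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss").fit(X, y)

explainer = shap.TreeExplainer(clf)            # exact, fast SHAP values for tree ensembles
shap_values = explainer.shap_values(X)
global_importance = np.abs(shap_values).mean(axis=0)
print(global_importance.argsort()[::-1][:5])   # indices of the most influential features
```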
31

Sahakyan, Maria, Zeyar Aung, and Talal Rahwan. "Explainable Artificial Intelligence for Tabular Data: A Survey". IEEE Access 9 (2021): 135392–422. http://dx.doi.org/10.1109/access.2021.3116481.

32

Raihan, Md Johir, and Abdullah-Al Nahid. "Malaria cell image classification by explainable artificial intelligence". Health and Technology 12, no. 1 (November 11, 2021): 47–58. http://dx.doi.org/10.1007/s12553-021-00620-z.

33

Chary, Michael, Ed W. Boyer, and Michele M. Burns. "Diagnosis of Acute Poisoning using explainable artificial intelligence". Computers in Biology and Medicine 134 (July 2021): 104469. http://dx.doi.org/10.1016/j.compbiomed.2021.104469.

34

Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson. "Explainable Artificial Intelligence for Developing Smart Cities Solutions". Smart Cities 3, no. 4 (November 13, 2020): 1353–82. http://dx.doi.org/10.3390/smartcities3040065.

Abstract
Traditional Artificial Intelligence (AI) technologies used in developing smart cities solutions, Machine Learning (ML) and recently Deep Learning (DL), rely more on utilising best representative training datasets and features engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome of solutions less explainable, i.e., it is often not possible to explain the results of the model. There is a growing concern among policymakers in cities with this lack of explainability of AI solutions, and this is considered a major hindrance in the wider acceptability and trust in such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart cities flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding issues is an important aspect of any flood monitoring solution. Typical solutions for this problem involve the use of cameras to capture images showing the affected areas in real-time with different objects such as leaves, plastic bottles etc., and building a DL-based classifier to detect such objects and classify blockages based on the presence and coverage of these objects in the images. In this work, we uniquely propose an Explainable AI solution using DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level and semantic rules designed with close consultation with experts carry out the classification. By using the expert knowledge in the flooding context, our hybrid classifier provides the flexibility on categorising the image using objects and their coverage relationships. The experimental results demonstrated with a real-world use case showed that this hybrid approach of image classification has on average 11% improvement (F-Measure) in image classification performance compared to DL-only classifier. It also has the distinct advantage of integrating experts’ knowledge on defining the decision-making rules to represent the complex circumstances and using such knowledge to explain the results.
35

de Bruijn, Hans, Marijn Janssen, and Martijn Warnier. "Transparantie en Explainable Artificial Intelligence: beperkingen en strategieën". Bestuurskunde 29, no. 4 (December 2020): 21–29. http://dx.doi.org/10.5553/bk/092733872020029004003.

36

Gordon, Lauren, Teodor Grantcharov, and Frank Rudzicz. "Explainable Artificial Intelligence for Safe Intraoperative Decision Support". JAMA Surgery 154, no. 11 (November 1, 2019): 1064. http://dx.doi.org/10.1001/jamasurg.2019.2821.

37

Páez, Andrés. "The Pragmatic Turn in Explainable Artificial Intelligence (XAI)". Minds and Machines 29, no. 3 (May 29, 2019): 441–59. http://dx.doi.org/10.1007/s11023-019-09502-w.

38

Shkileva, Ksenia, and Nikolai Zolotykh. "Explainable Artificial Intelligence Techniques in Medical Signal Processing". Procedia Computer Science 212 (2022): 474–84. http://dx.doi.org/10.1016/j.procs.2022.11.031.

39

Schmid, Stefka. "Trustworthy and Explainable: A European Vision of (Weaponised) Artificial Intelligence". Die Friedens-Warte 95, no. 3–4 (2022): 290. http://dx.doi.org/10.35998/fw-2022-0013.

40

Papadakis, Emmanuel, Ben Adams, Song Gao, Bruno Martins, George Baryannis, and Alina Ristea. "Explainable artificial intelligence in the spatial domain (X-GeoAI)". Transactions in GIS 26, no. 6 (September 2022): 2413–14. http://dx.doi.org/10.1111/tgis.12996.

41

Corchado, Juan M., Sascha Ossowski, Sara Rodríguez-González, and Fernando De la Prieta. "Advances in Explainable Artificial Intelligence and Edge Computing Applications". Electronics 11, no. 19 (September 28, 2022): 3111. http://dx.doi.org/10.3390/electronics11193111.

42

He, Bohao, Yanghe Zhao, Wei Mao, and Robert J. Griffin-Nolanb. "Explainable artificial intelligence reveals environmental constraints in seagrass distribution". Ecological Indicators 144 (November 2022): 109523. http://dx.doi.org/10.1016/j.ecolind.2022.109523.

43

Zhang, Zijiao, Chong Wu, Shiyou Qu, and Xiaofang Chen. "An explainable artificial intelligence approach for financial distress prediction". Information Processing & Management 59, no. 4 (July 2022): 102988. http://dx.doi.org/10.1016/j.ipm.2022.102988.

44

Raunak, M. S., and Rick Kuhn. "Explainable artificial intelligence and machine learning [Guest Editors’ introduction]". Computer 54, no. 10 (October 2021): 25–27. http://dx.doi.org/10.1109/mc.2021.3099041.

45

Park, Seobin, Saif Haider Kayani, Kwangjun Euh, Eunhyeok Seo, Hayeol Kim, Sangeun Park, Bishnu Nand Yadav, Seong Jin Park, Hyokyung Sung, and Im Doo Jung. "High strength aluminum alloys design via explainable artificial intelligence". Journal of Alloys and Compounds 903 (May 2022): 163828. http://dx.doi.org/10.1016/j.jallcom.2022.163828.

46

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery". Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
47

Pérez-Landa, Gabriel Ichcanziho, Octavio Loyola-González, and Miguel Angel Medina-Pérez. "An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets". Applied Sciences 11, no. 22 (November 16, 2021): 10801. http://dx.doi.org/10.3390/app112210801.

Abstract
Xenophobia is a social and political behavior that has been present in our societies since the beginning of humanity. Feelings of hatred, fear, or resentment arise toward people from communities different from ours. With the rise of social networks like Twitter, hate speech has spread swiftly because of the pseudo-feeling of anonymity that these platforms provide. Sometimes this violent behavior on social networks, which begins as threats or insults to third parties, breaks the Internet barriers to become an act of real physical violence. Hence, this proposal aims to correctly classify xenophobic posts on social networks, specifically on Twitter. In addition, we collected a xenophobic tweets database from which we also extracted new features by using a Natural Language Processing (NLP) approach. Then, we provide an Explainable Artificial Intelligence (XAI) model, allowing us to better understand why a post is considered xenophobic. Consequently, we provide a set of contrast patterns describing xenophobic tweets, which could help decision-makers prevent acts of violence caused by xenophobic posts on Twitter. Finally, our interpretable results based on our new feature representation approach jointly with a contrast pattern-based classifier obtain classification results similar to those of other feature representations jointly with prominent machine learning classifiers, which are not easy to understand by an expert in the application area.
48

Gramegna, Alex, and Paolo Giudici. "Why to Buy Insurance? An Explainable Artificial Intelligence Approach". Risks 8, no. 4 (December 14, 2020): 137. http://dx.doi.org/10.3390/risks8040137.

Abstract
We propose an Explainable AI model that can be employed in order to explain why a customer buys or abandons a non-life insurance coverage. The method consists in applying similarity clustering to the Shapley values that were obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technologically-based insurance service (Insurtech), allowing to understand, in real time, the factors that most contribute to customers’ decisions, thereby gaining proactive insights on their needs. We prove the validity of our model with an empirical analysis that was conducted on data regarding purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results from the analysis reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which can predict their buying or churn behaviour well.
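The "cluster the Shapley values" idea can be sketched as follows. The toy target, the XGBoost model, and the KMeans grouping below are assumptions used only to show the mechanics, not the authors' Insurtech data or code.

```python
# Hedged sketch: segment customers by the similarity of their SHAP explanations (toy data).
import numpy as np
import shap
import xgboost as xgb
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                  # synthetic customer features
y = (X[:, 0] + X[:, 3] > 0).astype(int)        # toy "buys the policy" target

clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss").fit(X, y)
shap_values = shap.TreeExplainer(clf).shap_values(X)   # one attribution vector per customer

# Group customers by how the model explains them, not by their raw features
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shap_values)
print(np.bincount(segments))                   # size of each explanation-based segment
```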
49

Linkov, Igor, Stephanie Galaitsi, Benjamin D. Trump, Jeffrey M. Keisler, and Alexander Kott. "Cybertrust: From Explainable to Actionable and Interpretable Artificial Intelligence". Computer 53, no. 9 (September 2020): 91–96. http://dx.doi.org/10.1109/mc.2020.2993623.

50

Ridley, Michael. "Explainable Artificial Intelligence (XAI)". Information Technology and Libraries 41, no. 2 (June 15, 2022). http://dx.doi.org/10.6017/ital.v41i2.14683.

Abstract
The field of explainable artificial intelligence (XAI) advances techniques, processes, and strategies that provide explanations for the predictions, recommendations, and decisions of opaque and complex machine learning systems. Increasingly academic libraries are providing library users with systems, services, and collections created and delivered by machine learning. Academic libraries should adopt XAI as a tool set to verify and validate these resources, and advocate for public policy regarding XAI that serves libraries, the academy, and the public interest.
