A selection of scholarly literature on the topic "Explainable Artificial Intelligence (XAI)"

Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Explainable Artificial Intelligence (XAI)."

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Explainable Artificial Intelligence (XAI)":

1

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA's explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems' explanations improve user understanding, user trust, and user task performance.
2

Sewada, Ranu, Ashwani Jangid, Piyush Kumar, and Neha Mishra. "Explainable Artificial Intelligence (XAI)." Journal of Nonlinear Analysis and Optimization 13, no. 01 (2023): 41–47. http://dx.doi.org/10.36893/jnao.2022.v13i02.041-047.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical facet in the realm of machine learning and artificial intelligence, responding to the increasing complexity of models, particularly deep neural networks, and the subsequent need for transparent decision-making processes. This research paper delves into the essence of XAI, unraveling its significance across diverse domains such as healthcare, finance, and criminal justice. As a countermeasure to the opacity of intricate models, the paper explores various XAI methods and techniques, including LIME and SHAP, weighing their interpretability against computational efficiency and accuracy. Through an examination of real-world applications, the research elucidates how XAI not only enhances decision-making processes but also influences user trust and acceptance in AI systems. However, the paper also scrutinizes the delicate balance between interpretability and performance, shedding light on instances where the pursuit of accuracy may compromise explainability. Additionally, it navigates through the current challenges and limitations in XAI, the regulatory landscape surrounding AI explainability, and offers insights into future trends and directions, fostering a comprehensive understanding of XAI's present state and future potential.
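Since the abstract weighs LIME and SHAP against each other, a minimal sketch of how the two libraries are typically invoked on a tabular classifier may help orient readers. Everything below (synthetic data, model choice, parameter values) is an illustrative assumption, not a detail taken from the paper:

```python
# A minimal sketch of LIME and SHAP on a tabular classifier.
# Assumptions: the lime and shap packages are installed; synthetic data
# stands in for whatever dataset the paper used.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
feature_names = [f"f{i}" for i in range(8)]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME: fit a local linear surrogate around a single instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["0", "1"],
    mode="classification")
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top local feature contributions

# SHAP: additive, game-theoretic attributions for tree ensembles.
shap_explainer = shap.TreeExplainer(model)
sv = shap_explainer.shap_values(X)  # output format varies by shap version
vals = sv[1] if isinstance(sv, list) else sv[..., 1]
print(np.abs(vals).mean(axis=0))  # global importance per feature
```

The trade-off the paper discusses is visible even here: LIME explains one prediction at a time via sampling, while TreeExplainer computes exact attributions but only for tree models.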
3

Gunning, David, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. "XAI—Explainable artificial intelligence." Science Robotics 4, no. 37 (December 18, 2019): eaay7120. http://dx.doi.org/10.1126/scirobotics.aay7120.

4

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, given the industry's vast stores of sensitive data on policyholders and its centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science, Business Source Complete, and EconLit. The resulting 103 articles (published between 2000 and 2021), representing the current state of the art of XAI in the insurance literature, are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting, and actuarial pricing practices. Simplification methods, namely knowledge distillation and rule extraction, are identified as the primary XAI techniques used within the insurance value chain. This is important, as distilling large models into a smaller, more manageable model with distinct association rules aids in building XAI models that are readily understandable. XAI is an important evolution of AI to ensure trust, transparency, and moral values are embedded within the system's ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators, and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI's current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of the XAI literature in insurance.
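Since the abstract singles out knowledge distillation and rule extraction as the dominant simplification techniques, a compact sketch of the idea follows: a black-box ensemble is distilled into a shallow decision tree whose splits read as rules. The insurance-flavored feature names and all parameters are illustrative assumptions, not details from the study:

```python
# A minimal sketch of surrogate distillation with rule extraction.
# Assumption: synthetic data stands in for real underwriting records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["age", "vehicle_value", "claim_history",
                 "region_risk", "policy_tenure", "credit_score"]

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Distillation: train a small tree on the black box's *predictions*,
# not the ground-truth labels, so the tree mimics the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Rule extraction: the surrogate's splits print as if-then rules.
print(export_text(surrogate, feature_names=feature_names))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
```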
5

Chaudhary, G. "Explainable Artificial Intelligence (xAI): Reflections on Judicial System." Kutafin Law Review 10, no. 4 (January 13, 2024): 872–89. http://dx.doi.org/10.17803/2713-0533.2023.4.26.872-889.

Abstract:
Machine learning algorithms are increasingly being utilized in scenarios such as criminal, administrative, and civil proceedings. However, there is growing concern regarding the lack of transparency and accountability due to the "black box" nature of these algorithms, which makes it challenging for judges to comprehend how decisions or predictions are reached. This paper aims to explore the significance of Explainable AI (xAI) in enhancing transparency and accountability in judicial contexts. Additionally, it examines the role that the judicial system can play in developing xAI. The methodology involves a review of existing xAI research and a discussion of how feedback from the judicial system can improve its effectiveness in legal settings. The argument presented is that xAI is crucial in judicial contexts, as it empowers judges to make informed decisions based on algorithmic outcomes; a lack of transparency in decision-making processes can impede judges' ability to do so effectively. Therefore, implementing xAI can increase transparency and accountability within this decision-making process. The judicial system has an opportunity to aid the development of xAI by emulating judicial reasoning, customizing approaches to specific jurisdictions and audiences, and providing valuable feedback for improving the technology's efficacy. Hence, the primary objective is to emphasize the significance of xAI in enhancing transparency and accountability in judicial settings, as well as the potential contribution of the judicial system to its advancement. Judges could consider asking about the rationale behind outcomes, and it is advisable for xAI systems to provide a clear account of the steps taken by algorithms to reach their conclusions or predictions. Additionally, it is proposed that public stakeholders have a role in shaping xAI to guarantee ethical and socially responsible technology.
6

Praveenraj, D. David Winster, Melvin Victor, C. Vennila, Ahmed Hussein Alawadi, Pardaeva Diyora, N. Vasudevan, and T. Avudaiappan. "Exploring Explainable Artificial Intelligence for Transparent Decision Making." E3S Web of Conferences 399 (2023): 04030. http://dx.doi.org/10.1051/e3sconf/202339904030.

Abstract:
Artificial intelligence (AI) has become a potent tool in many fields, allowing complicated tasks to be completed with astounding effectiveness. However, as AI systems get more complex, worries about their interpretability and transparency have become increasingly prominent. It is now more important than ever to use Explainable Artificial Intelligence (XAI) methodologies in decision-making processes, where the capacity to comprehend and trust AI-based judgments is crucial. This abstract explores the idea of XAI and why it is important for promoting transparent decision-making. The development of XAI has proved crucial for promoting clear decision-making in AI systems: XAI approaches close the cognitive gap between complicated algorithms and human comprehension by empowering users to comprehend and analyze the inner workings of AI models. XAI equips stakeholders to evaluate and trust AI systems, assuring fairness, accountability, and ethical standards in fields like healthcare and finance where AI-based choices have substantial ramifications. Despite ongoing hurdles, the development of XAI is essential for attaining AI's full potential while retaining transparency and human-centric decision-making.
7

Javed, Abdul Rehman, Waqas Ahmed, Sharnil Pandya, Praveen Kumar Reddy Maddikunta, Mamoun Alazab, and Thippa Reddy Gadekallu. "A Survey of Explainable Artificial Intelligence for Smart Cities." Electronics 12, no. 4 (February 18, 2023): 1020. http://dx.doi.org/10.3390/electronics12041020.

Abstract:
The emergence of Explainable Artificial Intelligence (XAI) has enhanced the lives of humans and advanced the concept of smart cities through informed actions, enhanced user interpretations and explanations, and firm decision-making processes. XAI systems can unbox the potential of black-box AI models and describe them explicitly. The study comprehensively surveys the current and future developments in XAI technologies for smart cities. It also highlights the societal, industrial, and technological trends that initiate the drive towards XAI for smart cities, and presents the key enabling XAI technologies for smart cities in detail. The paper also discusses the concept of XAI for smart cities, various XAI technology use cases, challenges, applications, possible alternative solutions, and current and future research enhancements. Research projects and activities, including standardization efforts toward developing XAI for smart cities, are outlined in detail. The lessons learned from state-of-the-art research are summarized, and various technical challenges are discussed to shed new light on future research possibilities. The presented study on XAI for smart cities is a first-of-its-kind, rigorous, and detailed study to assist future researchers in implementing XAI-driven systems, architectures, and applications for smart cities.
8

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (January 19, 2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI difficult to apply in clinical settings. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitations of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both the model's decisions and explanations of them. In this review, we conducted a survey of recent trends in medical diagnosis and surgical applications using XAI. We searched articles published between 2019 and 2021 in PubMed, IEEE Xplore, the Association for Computing Machinery digital library, and Google Scholar. We included the articles that met the selection criteria and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis and illustrate how XAI can be applied in medical applications. Finally, we summarize the XAI methods utilized in medical applications, the challenges that researchers have met, and discuss future research directions. The survey results indicate that medical XAI is a promising research direction, and this study aims to serve as a reference for medical experts and AI scientists when designing medical XAI applications.
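The review's experimental showcase is breast cancer diagnosis. The paper's own setup is not reproduced here, but a minimal stand-in conveys the typical shape of such a demo; the dataset, model, and explainer choices below are assumptions:

```python
# A minimal stand-in for a breast-cancer XAI showcase (not the paper's code).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_te)  # output format varies by shap version
vals = sv[1] if isinstance(sv, list) else sv[..., 1]

# Rank features by mean absolute attribution for the positive class.
ranking = np.argsort(np.abs(vals).mean(axis=0))[::-1]
for i in ranking[:5]:
    print(data.feature_names[i])
```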
9

Lozano-Murcia, Catalina, Francisco P. Romero, Jesus Serrano-Guerrero, Arturo Peralta, and Jose A. Olivas. "Potential Applications of Explainable Artificial Intelligence to Actuarial Problems." Mathematics 12, no. 5 (February 21, 2024): 635. http://dx.doi.org/10.3390/math12050635.

Abstract:
Explainable artificial intelligence (XAI) is a group of techniques and evaluations that allows users to understand artificial intelligence knowledge and increases the reliability of results produced using artificial intelligence. XAI can assist actuaries in achieving better estimations and decisions. This study reviews the current literature to summarize XAI in common actuarial problems. We propose a research process based on understanding the type of AI used in actuarial practice in the financial industry and insurance pricing, and then research XAI implementation. The study systematically reviews the literature on the need for implementation options and the current use of explainable artificial intelligence (XAI) techniques for actuarial problems. It begins with a contextual introduction outlining the use of artificial intelligence techniques and their potential limitations, followed by the definition of the search equations used in the research process, the analysis of the results, and the identification of the main potential fields for exploitation in actuarial problems, as well as pointers for potential future work in this area.
10

Shukla, Bibhudhendu, Ip-Shing Fan, and Ian Jennions. "Opportunities for Explainable Artificial Intelligence in Aerospace Predictive Maintenance." PHM Society European Conference 5, no. 1 (July 22, 2020): 11. http://dx.doi.org/10.36001/phme.2020.v5i1.1231.

Abstract:
This paper examines the value and necessity of XAI (Explainable Artificial Intelligence) when using DNNs (Deep Neural Networks) in PM (Predictive Maintenance), in the context of Aerospace IVHM (Integrated Vehicle Health Management). An XAI system is necessary so that the result of an AI solution is clearly explained and understood by a human expert; this would allow the IVHM system to use XAI-based PM to improve the effectiveness of its predictive models. An IVHM system would then be able to utilize this information to assess the health of the subsystems and their effect on the aircraft. Even when the underlying mathematical principles of DNNs are understood, they offer little understandable insight and struggle to expose their explanatory structure (i.e., they are black boxes). This calls for a process, or system, that enables decisions to be explainable, transparent, and understandable. It is argued that research in XAI would generally help to accelerate the implementation of AI/ML (Machine Learning) in the aerospace domain, and specifically help to facilitate compliance, transparency, and trust. This paper covers the following areas:
- challenges and benefits of AI-based PM in aerospace;
- why XAI is required for DNNs in aerospace PM;
- the evolution of XAI models and industry adoption;
- a framework for XAI using XPA (Explainability Parameters);
- future research in adopting XAI and DNNs to improve IVHM.

Dissertations on the topic "Explainable Artificial Intelligence (XAI)":

1

Vincenzi, Leonardo. "eXplainable Artificial Intelligence User Experience: contesto e stato dell’arte." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23338/.

Abstract:
The great expansion of Artificial Intelligence, together with its very broad application across many domains in recent years, has led to an ever greater demand for the explainability of Machine Learning systems. In response to this need, the field of eXplainable Artificial Intelligence has taken important steps toward methods and systems that make intelligent systems increasingly transparent; and in the near future, to guarantee ever greater fairness and safety in decisions made by AI, increasingly strict regulation of its explainability is expected. To take a further step forward, the recent research field of XAI UX sets as its main objective placing users at the center of the design processes of AI systems, combining the explainability techniques offered by eXplainable AI with the study of UX solutions. The new focus on the user and the need to create multidisciplinary teams, which bring greater communication barriers between people, still demand extensive study by both XAI experts and the HCI community, and these are currently the main difficulties to resolve. This thesis provides an up-to-date view of the state of the art of XAI UX, introducing its motivations, its context, and the various research areas it encompasses.
2

Elguendouze, Sofiane. "Explainable Artificial Intelligence approaches for Image Captioning." Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1003.

Abstract:
The rapid advancement of image captioning models, driven by the integration of deep learning techniques that combine image and text modalities, has resulted in increasingly complex systems. However, these models often operate as black boxes, lacking the ability to provide transparent explanations for their decisions. This thesis addresses the explainability of image captioning systems based on Encoder-Attention-Decoder architectures through four aspects. First, it explores the concept of the latent space, marking a departure from traditional approaches relying on the original representation space. Second, it introduces the notion of decisiveness, leading to the formulation of a new definition for the concept of component influence/decisiveness in the context of explainable image captioning, as well as a perturbation-based approach to capturing decisiveness. The third aspect aims to elucidate the factors influencing explanation quality, in particular the scope of explanation methods. Accordingly, latent-based variants of well-established explanation methods such as LRP and LIME have been developed, along with the introduction of a latent-centered evaluation approach called Latent Ablation. The fourth aspect of this work involves investigating what we call saliency and the representation of certain visual concepts, such as object quantity, at different levels of the captioning architecture.
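The thesis defines component decisiveness and captures it with perturbations. The sketch below only illustrates the general perturbation recipe on a captioning model; the `caption_log_prob` function and the patch size are hypothetical placeholders, not the thesis's actual method:

```python
# A generic perturbation-importance sketch for image captioning.
# Hypothetical: caption_log_prob(image, caption) -> float, the model's
# log-probability of `caption` given `image`. Any real captioner that
# exposes token log-probabilities could back this function.
import numpy as np

def occlusion_decisiveness(image, caption, caption_log_prob, patch=16):
    """Score each patch by how much occluding it lowers the caption score."""
    h, w, _ = image.shape
    base = caption_log_prob(image, caption)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = image.mean()  # grey-out
            heat[i // patch, j // patch] = base - caption_log_prob(
                perturbed, caption)
    return heat  # high values = patches decisive for this caption
```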
3

Panigutti, Cecilia. "eXplainable AI for trustworthy healthcare applications." Doctoral thesis, Scuola Normale Superiore, 2022. https://hdl.handle.net/11384/125202.

Abstract:
Acknowledging that AI will inevitably become a central element of clinical practice, this thesis investigates the role of eXplainable AI (XAI) techniques in developing trustworthy AI applications in healthcare. The first part of the thesis focuses on the societal, ethical, and legal aspects of the use of AI in healthcare. It first compares the different approaches to AI ethics worldwide and then focuses on the practical implications of the European ethical and legal guidelines for AI applications in healthcare. The second part explores how XAI techniques can help meet three key requirements identified in the initial analysis: transparency, auditability, and human oversight. The technical transparency requirement is tackled by enabling explanatory techniques to deal with common healthcare data characteristics and tailoring them to the medical field. In this regard, the thesis presents two novel XAI techniques that incrementally reach this goal by first focusing on multi-label predictive algorithms and then tackling sequential data and incorporating domain-specific knowledge in the explanation process. The thesis then analyzes how the developed XAI technique can be leveraged to audit a fictional commercial black-box clinical decision support system (DSS). Finally, it studies the ability of AI explanations to effectively enable human oversight by examining the impact of explanations on the decision-making process of healthcare professionals.
4

Gjeka, Mario. "Uno strumento per le spiegazioni di sistemi di Explainable Artificial Intelligence." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract:
The goal of this thesis is to show the importance of explanations in an intelligent system. The need for explainable and transparent artificial intelligence is growing considerably, a need highlighted by companies' efforts to develop intelligent software systems that are transparent and explainable.
5

Bracchi, Luca. "I-eXplainer: applicazione web per spiegazioni interattive." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20424/.

Abstract:
The aim of this thesis is to show that a static explanation of the output of an explainable algorithm can be turned into an interactive explanation with a greater degree of effectiveness. The resulting explanation is meant to be human-readable and explorable. To demonstrate this, I take the explanation produced by one of the explainable algorithms in the AIX360 toolkit and generate a static version, which I call the base explanation. This is then expanded and enriched through a data structure built to hold formatted, interlinked pieces of information. The user requests these pieces of information, gaining the ability to build an interactive explanation that unfolds according to their wishes.
6

Hammarström, Tobias. "Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424779.

Abstract:
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes CAV sensitivity to introduce and perform a sensitivity-magnitude analysis. Both methods proved useful in discerning between the model's two decision-making strategies, based on either the animal or the noise. However, greater insight into the intricacies of said strategies is desired. Additionally, the methods provided a deeper understanding of the model's learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise regarding the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods to more complex areas with specialized models and tasks, e.g. oral cancer.
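TCAV's central object is the concept activation vector (CAV): the normal to a hyperplane separating a concept's activations from random activations. A minimal sketch of computing a CAV and CAV-sensitivity scores follows; the activation arrays and gradients are placeholders for whatever network layer is being probed, and this is not the thesis's code:

```python
# A minimal CAV computation in the spirit of TCAV (not the authors' code).
# Placeholders: acts_concept / acts_random are layer activations for concept
# and random images; grads are d(class logit)/d(activations) per input.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(acts_concept, acts_random):
    """Fit a linear separator; its unit normal is the CAV."""
    X = np.vstack([acts_concept, acts_random])
    y = np.array([1] * len(acts_concept) + [0] * len(acts_random))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def cav_sensitivity(grad, cav):
    """Directional derivative of the class score along the concept direction;
    its magnitude is what the thesis's sensitivity-magnitude analysis uses."""
    return float(np.dot(grad, cav))

def tcav_score(grads, cav):
    """TCAV score: fraction of inputs whose sensitivity is positive."""
    return float(np.mean([cav_sensitivity(g, cav) > 0 for g in grads]))
```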
7

Matz, Filip, and Yuxiang Luo. "Explaining Automated Decisions in Practice : Insights from the Swedish Credit Scoring Industry." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300897.

Abstract:
The field of explainable artificial intelligence (XAI) has gained momentum in recent years following the increased use of AI systems across industries, leading to bias, discrimination, and data security concerns. Several conceptual frameworks for how to reach AI systems that are fair, transparent, and understandable have been proposed, as well as a number of technical solutions improving some of these aspects in a research context. However, there is still a lack of studies examining the implementation of these concepts and techniques in practice. This research aims to bridge the gap between prominent theory within the area and practical implementation, exploring the implementation and evaluation of XAI models in the Swedish credit scoring industry, and proposes a three-step framework for the implementation of local explanations in practice. The research methods consisted of a case study with the model development at UC AB as its subject, and an experiment evaluating consumers' levels of trust and system understanding, as well as the usefulness, persuasive power, and usability of the explanations, for three explanation prototypes that were developed. The proposed framework was validated by the case study and highlighted a number of key challenges and trade-offs present when implementing XAI in practice. Moreover, the evaluation of the XAI prototypes showed that the majority of consumers prefer rule-based explanations, but that preferences for explanations still depend on the individual consumer. Recommended future research endeavors include studying a long-term XAI project, in which the models can be evaluated by the open market, and combining different XAI methods to reach more personalized explanations for consumers.
8

Ankaräng, Marcus, and Jakob Kristiansson. "Comparison of Logistic Regression and an Explained Random Forest in the Domain of Creditworthiness Assessment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301907.

Abstract:
As the use of AI in society is developing, the requirement of explainable algorithms has increased. A challenge with many modern machine learning algorithms is that they, due to their often complex structures, lack the ability to produce human-interpretable explanations. Research within explainable AI has resulted in methods that can be applied on top of non- interpretable models to motivate their decision bases. The aim of this thesis is to compare an unexplained machine learning model used in combination with an explanatory method, and a model that is explainable through its inherent structure. Random forest was the unexplained model in question and the explanatory method was SHAP. The explainable model was logistic regression, which is explanatory through its feature weights. The comparison was conducted within the area of creditworthiness and was based on predictive performance and explainability. Furthermore, the thesis intends to use these models to investigate what characterizes loan applicants who are likely to default. The comparison showed that no model performed significantly better than the other in terms of predictive performance. Characteristics of bad loan applicants differed between the two algorithms. Three important aspects were the applicant’s age, where they lived and whether they had a residential phone. Regarding explainability, several advantages with SHAP were observed. With SHAP, explanations on both a local and a global level can be produced. Also, SHAP offers a way to take advantage of the high performance in many modern machine learning algorithms, and at the same time fulfil today’s increased requirement of transparency.
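To make the thesis's comparison concrete, here is a small sketch contrasting the two routes to an explanation: logistic-regression coefficients read directly, versus SHAP values computed on top of a random forest. The dataset and parameters are illustrative, not the thesis setup:

```python
# Interpretable-by-design vs. post-hoc explanation (illustrative sketch).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X = StandardScaler().fit_transform(X)  # make LR weights comparable

# Route 1: logistic regression explains itself through its weights.
lr = LogisticRegression().fit(X, y)
print("LR global weights:", np.round(lr.coef_[0], 3))

# Route 2: a random forest needs a post-hoc explainer such as SHAP.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
sv = shap.TreeExplainer(rf).shap_values(X)
vals = sv[1] if isinstance(sv, list) else sv[..., 1]
print("RF mean |SHAP| per feature:", np.round(np.abs(vals).mean(axis=0), 3))
# SHAP also yields local explanations: vals[i] explains instance i.
```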
9

Leoni, Cristian. "Interpretation of Dimensionality Reduction with Supervised Proxies of User-defined Labels." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105622.

Abstract:
Research on machine learning (ML) explainability has received a lot of attention in recent times. The interest, however, has mostly focused on supervised models, while other ML fields have not had the same level of attention. Despite its usefulness in a variety of different fields, unsupervised learning explainability is still an open issue. In this paper, we present a Visual Analytics framework based on eXplainable AI (XAI) methods to support the interpretation of dimensionality reduction (DR) methods. The framework provides the user with an interactive and iterative process to investigate and explain user-perceived patterns for a variety of DR methods, by using XAI methods to explain a supervised model trained on the selected data. To evaluate the effectiveness of the proposed solution, we focus on two main aspects: the quality of the visualization and the quality of the explanation. This challenge is tackled using both quantitative and qualitative methods, and due to the lack of pre-existing test data, a new benchmark has been created. The quality of the visualization is established using a well-known survey-based methodology, while the quality of the explanation is evaluated using both case studies and a controlled experiment, where the generated explanation accuracy is evaluated on the proposed benchmark. The results show a strong capacity of our framework to generate accurate explanations, with an accuracy of 89% over the controlled experiment. The explanations generated for the two case studies yielded very similar results when compared with pre-existing, well-known literature on ground truths. Finally, the user experiment generated high overall quality scores for all assessed aspects of the visualization.
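The core idea of the thesis, explaining a user-perceived pattern in a DR plot by training a supervised proxy on the user's labels and explaining that proxy, can be sketched minimally as below. The choice of t-SNE, a random forest, and permutation importance is an assumption for illustration, not the thesis's exact pipeline:

```python
# Sketch: interpret a DR scatterplot via a supervised proxy model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.manifold import TSNE

X, _ = load_iris(return_X_y=True)
emb = TSNE(n_components=2, random_state=0).fit_transform(X)

# Stand-in for user-defined labels: the user brushes a region of the plot.
user_labels = (emb[:, 0] > emb[:, 0].mean()).astype(int)

# Supervised proxy: predict the user's grouping from the *original*
# features, then ask which features drive it.
proxy = RandomForestClassifier(random_state=0).fit(X, user_labels)
imp = permutation_importance(proxy, X, user_labels, random_state=0)
print("feature importances for the perceived pattern:",
      np.round(imp.importances_mean, 3))
```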
10

Nilsson, Linus. "Explainable Artificial Intelligence for Reinforcement Learning Agents." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294162.

Abstract:
Following the success that machine learning has enjoyed over the last decade, reinforcement learning has become a prime research area for automation and solving complex tasks. Ranging from playing video games at a professional level to robots collaborating in picking goods in warehouses, the applications of reinforcement learning are numerous. The systems are, however, very complex, and why reinforcement learning agents solve the tasks given to them in certain ways remains largely unknown to the human observer. This limits the actual use of the agents to non-critical tasks and keeps hidden the information that could be learnt from them. To this end, explainable artificial intelligence (XAI) has received more attention in the last couple of years, in an attempt to explain machine learning systems to human operators. In this thesis we propose to use model-agnostic XAI techniques combined with clustering techniques on simple Atari games, and we propose an automated evaluation of how well the explanations explain the behavior of the agents. This is done in an effort to uncover to what extent model-agnostic XAI can be used to gain insight into the behavior of reinforcement learning agents. The tested methods were RISE, t-SNE, and Deletion. The methods were evaluated on several different agents trained to play the Atari Breakout game, and the results show that they can be used to explain the behavior of the agents at a local level (one individual frame of a game sequence) and at a global level (behavior over the entire game sequence), as well as to uncover different strategies used by agents whose training time differs.
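Of the three methods tested, Deletion is the easiest to show compactly: pixels are removed in order of saliency and the drop in the agent's output is tracked. The sketch below assumes a hypothetical `action_prob(frame)` for the agent and a precomputed saliency map (e.g., from RISE); it is an illustration, not the thesis's implementation:

```python
# Sketch of the Deletion metric for a saliency explanation of an RL agent.
# Hypothetical: action_prob(frame) -> float, the agent's probability for
# the action under study; saliency is an (H, W) map, e.g. from RISE.
import numpy as np

def deletion_score(frame, saliency, action_prob, steps=50):
    """Delete the most salient pixels first; a fast drop (low mean score)
    means the saliency map really pointed at what drives the agent."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]   # most salient first
    scores = [action_prob(frame)]
    perturbed = frame.copy()
    per_step = max(1, len(order) // steps)
    for k in range(steps):
        idx = order[k * per_step:(k + 1) * per_step]
        ys, xs = np.unravel_index(idx, (h, w))
        perturbed[ys, xs] = 0                    # blank deleted pixels
        scores.append(action_prob(perturbed))
    return np.trapz(scores) / (len(scores) - 1)  # area under deletion curve
```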

Books on the topic "Explainable Artificial Intelligence (XAI)":

1

Kose, Utku, Nilgun Sengoz, Xi Chen, and Jose Antonio Marmolejo Saucedo. Explainable Artificial Intelligence (XAI) in Healthcare. New York: CRC Press, 2024. http://dx.doi.org/10.1201/9781003426073.

2

Chen, Tin-Chih Toly. Explainable Artificial Intelligence (XAI) in Manufacturing. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27961-4.

3

Khamparia, Aditya, Deepak Gupta, Ashish Khanna, and Valentina E. Balas, eds. Biomedical Data Analysis and Processing Using Explainable (XAI) and Responsive Artificial Intelligence (RAI). Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1476-8.

4

Longo, Luca, ed. Explainable Artificial Intelligence. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44064-9.

5

Longo, Luca, ed. Explainable Artificial Intelligence. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44070-0.

6

Longo, Luca, ed. Explainable Artificial Intelligence. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44067-0.

7

Krötzsch, Markus, and Daria Stepanova, eds. Reasoning Web. Explainable Artificial Intelligence. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31423-1.

8

Tulli, Silvia, and David W. Aha. Explainable Agency in Artificial Intelligence. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003355281.

9

Lahby, Mohamed, Utku Kose, and Akash Kumar Bhoi. Explainable Artificial Intelligence for Smart Cities. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003172772.

10

Ahmed, Mohiuddin, Sheikh Rabiul Islam, Adnan Anwar, Nour Moustafa, and Al-Sakib Khan Pathan, eds. Explainable Artificial Intelligence for Cyber Security. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96630-0.


Book chapters on the topic "Explainable Artificial Intelligence (XAI)":

1

Mohaghegh, Shahab D. "Explainable Artificial Intelligence (XAI)." In Artificial Intelligence for Science and Engineering Applications, 89–122. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003369356-8.

2

Holzinger, Andreas, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek. "xxAI - Beyond Explainable Artificial Intelligence." In xxAI - Beyond Explainable AI, 3–10. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_1.

Abstract:
The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult to comprehend by human experts. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations which highlight what input patterns are most influential in activating hidden units, or are most responsible for a model's decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI: moving beyond explaining classifiers via such methods to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-and-answering systems, structured explanations). In addition, we also intend to move beyond simply providing model explanations to directly improving the transparency, efficiency, and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for which areas to focus on next within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. This volume is an outcome of the ICML 2020 workshop on "XXAI: Extending Explainable AI Beyond Deep Models and Classifiers."
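The chapter recalls early XAI work on visualizing which input patterns drive a CNN's prediction; the canonical minimal form of such a visualization is a gradient saliency map, sketched below in PyTorch. The pretrained model and random input are stand-in assumptions; nothing here is specific to the volume's methods:

```python
# A minimal gradient-saliency sketch (an early XAI visualization style).
# Assumption: torchvision's pretrained ResNet-18 stands in for any CNN;
# a random tensor stands in for a real, preprocessed image.
import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
cls = int(logits.argmax())          # explain the predicted class
logits[0, cls].backward()

# Saliency: per-pixel magnitude of the class-score gradient.
saliency = image.grad.abs().max(dim=1).values.squeeze()  # (224, 224)
print(saliency.shape)
```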
3

Chen, Tin-Chih Toly. "Explainable Artificial Intelligence (XAI) with Applications." In Explainable Ambient Intelligence (XAmI), 23–38. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54935-9_2.

4

Sardar, Tanvir Habib, Sunanda Das, and Bishwajeet Kumar Pandey. "Explainable AI (XAI)." In Medical Data Analysis and Processing using Explainable Artificial Intelligence, 1–18. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003257721-1.

5

Chen, Tin-Chih Toly. "Explainable Artificial Intelligence (XAI) in Manufacturing." In Explainable Artificial Intelligence (XAI) in Manufacturing, 1–11. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27961-4_1.

6

Aditya Shastry, K. "Artificial Intelligence for Healthcare Applications." In Explainable Artificial Intelligence (XAI) in Healthcare, 1–29. New York: CRC Press, 2024. http://dx.doi.org/10.1201/9781003426073-1.

7

Lampathaki, Fenareti, Enrica Bosani, Evmorfia Biliri, Erifili Ichtiaroglou, Andreas Louca, Dimitris Syrrafos, Mattia Calabresi, Michele Sesana, Veronica Antonello, and Andrea Capaccioli. "XAI for Product Demand Planning: Models, Experiences, and Lessons Learnt." In Artificial Intelligence in Manufacturing, 437–58. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-46452-2_25.

Abstract:
Today, Explainable AI is gaining more and more traction because of the added value it brings in allowing all involved stakeholders to understand why and how a decision has been made by an AI system. In this context, the problem of product demand forecasting as faced by Whirlpool has been elaborated and tackled through an Explainable AI approach. The Explainable AI solution has been designed and delivered in the H2020 XMANAI project and is presented in detail in this chapter. The core XMANAI platform has been used by data scientists to experiment with the data and configure Explainable AI pipelines, while a dedicated manufacturing application is addressed to business users who need to view and gain insights into product demand forecasts. The overall Explainable AI approach has been evaluated by the end users at Whirlpool. This chapter presents experiences and lessons learnt from this evaluation.
8

Helen Victoria, A., Ravi Shekhar Tiwari, and Ayaan Khadir Ghulam. "Libraries for Explainable Artificial Intelligence (EXAI)." In Explainable AI (XAI) for Sustainable Development, 211–32. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003457176-13.

9

Kırboğa, K. K., and E. U. Küçüksille. "XAI in Biomedical Applications." In Explainable Artificial Intelligence for Biomedical Applications, 79–99. New York: River Publishers, 2023. http://dx.doi.org/10.1201/9781032629353-5.

10

Uysal, Ilhan, and Utku Kose. "XAI for Drug Discovery." In Explainable Artificial Intelligence for Biomedical Applications, 265–88. New York: River Publishers, 2023. http://dx.doi.org/10.1201/9781032629353-13.


Conference papers on the topic "Explainable Artificial Intelligence (XAI)":

1

Gunning, David. "DARPA's explainable artificial intelligence (XAI) program." In IUI '19: 24th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3301275.3308446.

2

Ignatiev, Alexey. "Towards Trustable Explainable AI." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/726.

Abstract:
Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges facing AI today. Although the majority of approaches to XAI are of a heuristic nature, recent work proposed the use of abductive reasoning for computing provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and verification of ML models. This paper overviews advances in the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
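The paper computes provably correct explanations via formal abductive reasoning over logical encodings of the model (typically with SAT/SMT solvers); the brute-force sketch below is not that method, and only conveys the underlying notion of a subset-minimal sufficient explanation on a tiny discrete domain:

```python
# Illustration only: a subset-minimal sufficient explanation, found by
# exhaustive search. Viable only for tiny discrete feature domains; the
# paper's abductive approach scales via logical encodings and solvers.
from itertools import combinations, product

def is_sufficient(model, instance, subset, domains):
    """With features in `subset` fixed to the instance's values, does
    every completion of the remaining features keep the prediction?"""
    target = model(instance)
    free = [f for f in range(len(instance)) if f not in subset]
    for values in product(*(domains[f] for f in free)):
        candidate = list(instance)
        for f, v in zip(free, values):
            candidate[f] = v
        if model(candidate) != target:
            return False
    return True

def minimal_explanation(model, instance, domains):
    """Smallest feature subset whose values alone entail the prediction."""
    n = len(instance)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if is_sufficient(model, instance, set(subset), domains):
                return subset
    return tuple(range(n))

# Toy model: predicts 1 iff feature 0 AND feature 2 are set.
model = lambda x: int(x[0] == 1 and x[2] == 1)
domains = [(0, 1)] * 3
print(minimal_explanation(model, [1, 0, 1], domains))  # -> (0, 2)
```

Unlike heuristic attributions, such an explanation carries a guarantee: no assignment to the omitted features can flip the prediction.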
3

Krstić, Zvjezdana, and Mirjana Maksimović. "Significance of Explainable Artificial Intelligence (XAI) in Marketing." In 29th International Scientific Conference Strategic Management and Decision Support Systems in Strategic Management. University of Novi Sad, Faculty of Economics in Subotica, 2024. http://dx.doi.org/10.46541/978-86-7233-428-9_401.

Abstract:
Explainable artificial intelligence (XAI) is increasingly important in modern marketing, as it advances predictive analytics of consumer behavior and the analysis of the purchase decision-making process. This paper examines the importance of XAI in marketing, emphasizing its role in improving the effectiveness and efficiency of marketing strategies. By examining the evolution of AI in marketing and the challenges posed by opaque algorithms, this study highlights the transformative potential of XAI in bridging the gap between marketers and consumers. In addition, ethical issues related to the application of XAI are discussed, emphasizing the imperative of conscientious application in order to maintain privacy and consumer trust. Furthermore, possible directions for the use of XAI are explored, with the aim of driving marketing practices in a data-dominated era. This paper highlights the key role of XAI in shaping future trends in marketing research and its implications for businesses operating in a dynamic market environment.
4

Gerlings, Julie, Arisa Shollo, and Ioanna Constantiou. "Reviewing the Need for Explainable Artificial Intelligence (xAI)." In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2021. http://dx.doi.org/10.24251/hicss.2021.156.

5

Sudar, K. Muthamil, P. Nagaraj, S. Nithisaa, R. Aishwarya, M. Aakash, and S. Ishwarya Lakshmi. "Alzheimer's Disease Analysis using Explainable Artificial Intelligence (XAI)." In 2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS). IEEE, 2022. http://dx.doi.org/10.1109/icscds53736.2022.9760858.

6

Meske, Christian, Babak Abedin, Iris Junglas, and Fethi Rabhi. "Introduction to the Minitrack on Explainable Artificial Intelligence (XAI)." In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2021. http://dx.doi.org/10.24251/hicss.2021.153.

7

Abedin, Babak, Christian Meske, Fethi Rabhi, and Mathias Klier. "Introduction to the Minitrack on Explainable Artificial Intelligence (XAI)." In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2023. http://dx.doi.org/10.24251/hicss.2023.131.

8

Abedin, Babak, Mathias Klier, Christian Meske, and Fethi Rabhi. "Introduction to the Minitrack on Explainable Artificial Intelligence (XAI)." In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2022. http://dx.doi.org/10.24251/hicss.2022.182.

9

P. Peixoto, Maria J., and Akramul Azim. "Explainable Artificial Intelligence (XAI) Approach for Reinforcement Learning Systems." In SAC '24: 39th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3605098.3635992.

10

Sethi, Aryan, Sahiti Dharmavaram, and S. K. Somasundaram. "Explainable Artificial Intelligence (XAI) Approach to Heart Disease Prediction." In 2024 3rd International Conference on Artificial Intelligence For Internet of Things (AIIoT). IEEE, 2024. http://dx.doi.org/10.1109/aiiot58432.2024.10574635.


Institutional reports on the topic "Explainable Artificial Intelligence (XAI)":

1

Core, Mark G., H. C. Lane, Michael van Lent, Dave Gomboc, Steve Solomon, and Milton Rosenberg. Building Explainable Artificial Intelligence Systems. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada459166.

2

Phillips, P. Jonathon, Carina A. Hahn, Peter C. Fontana, Amy N. Yates, Kristen Greene, David A. Broniatowski, and Mark A. Przybocki. Four Principles of Explainable Artificial Intelligence. National Institute of Standards and Technology, September 2021. http://dx.doi.org/10.6028/nist.ir.8312.

3

Walker, Cody, Vivek Agarwal, Linyu Lin, Anna Hall, Rachael Hill, Ronald Boring PhD, Torrey Mortenson, and Nancy Lybeck. Explainable Artificial Intelligence Technology for Predictive Maintenance. Office of Scientific and Technical Information (OSTI), August 2023. http://dx.doi.org/10.2172/1998555.

