Academic literature on the topic "Explainable artificial intelligence"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Explainable artificial intelligence".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Explainable artificial intelligence"

1. Ridley, Michael. "Explainable Artificial Intelligence". Ethics of Artificial Intelligence, no. 299 (September 19, 2019): 28–46. http://dx.doi.org/10.29242/rli.299.3.

2. Gunning, David, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. "XAI—Explainable artificial intelligence". Science Robotics 4, no. 37 (December 18, 2019): eaay7120. http://dx.doi.org/10.1126/scirobotics.aay7120.

3. Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in various areas, all in one place. This study clarifies how model interpretability and Explainable AI work together. This paper aims to investigate Explainable artificial intelligence approaches and their applications in multiple domains. Specifically, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to solve the challenges corresponding to various businesses. This article illustrates the significance of explainable artificial intelligence in a vast number of disciplines.
4. Abdelmonem, Ahmed, and Nehal N. Mostafa. "Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection". Fusion: Practice and Applications 3, no. 1 (2021): 54–69. http://dx.doi.org/10.54216/fpa.030104.

Abstract
Explainable artificial intelligence received great research attention in the past few years during the widespread adoption of black-box techniques in sensitive fields such as medical care, self-driving cars, etc. Artificial intelligence needs explainable methods to discover model biases. Explainable artificial intelligence will lead to obtaining fairness and transparency in the model. Making artificial intelligence models explainable and interpretable is challenging when implementing black-box models. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popular method for dealing with such data and acquiring more trustworthy, helpful, and precise insights. Compared to other, more traditional data fusion methods, machine learning's capacity to automatically learn from experience without explicit programming significantly improves fusion's computational and predictive power. This paper comprehensively studies the main explainable artificial intelligence methods for anomaly detection. We propose the criteria a transparency model must satisfy for assessing data fusion analytics techniques, and we define the evaluation metrics used in explainable artificial intelligence. We present some applications of explainable artificial intelligence, along with a case study of anomaly detection with machine learning fusion. Finally, we discuss the key challenges and future directions in explainable artificial intelligence.
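
The fusion-plus-explainability pipeline described in the abstract above can be illustrated with a minimal, hypothetical sketch: an off-the-shelf anomaly detector is probed with permutation-based feature attributions, a model-agnostic explanation technique in the spirit of the methods the paper surveys. The data, feature names, and scoring choices below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: explaining an anomaly detector via permutation-based
# feature attributions. Dataset and parameters are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # "fused" feature matrix (toy data)
X[:10] += 6                            # inject a few obvious anomalies

model = IsolationForest(random_state=0).fit(X)
baseline = model.score_samples(X)      # higher score = more normal

# Permutation attribution: shuffle one feature at a time and measure how much
# the anomaly scores change; features that matter most change scores most.
importance = {}
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[f"feature_{j}"] = np.mean(np.abs(model.score_samples(Xp) - baseline))

for name, value in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.4f}")
```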
5. Sharma, Deepak Kumar, Jahanavi Mishra, Aeshit Singh, Raghav Govil, Gautam Srivastava, and Jerry Chun-Wei Lin. "Explainable Artificial Intelligence for Cybersecurity". Computers and Electrical Engineering 103 (October 2022): 108356. http://dx.doi.org/10.1016/j.compeleceng.2022.108356.

6. Karpov, O. E., D. A. Andrikov, V. A. Maksimenko, and A. E. Hramov. "EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MEDICINE". Vrach i informacionnye tehnologii, no. 2 (2022): 4–11. http://dx.doi.org/10.25881/18110193_2022_2_4.

7. Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence". Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Abstract
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building added knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
8. Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence". Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Abstract
Recently, Artificial Intelligence (AI) has emerged as a field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and their ubiquity in a variety of critical domains such as medicine, defense, justice, and autonomous vehicles, there is an increasing need to provide domain experts with sound explanations of the results. All of these elements have contributed to the rise of Explainable Artificial Intelligence (XAI).
9. Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence". Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
10. Allen, Ben. "Discovering Themes in Deep Brain Stimulation Research Using Explainable Artificial Intelligence". Biomedicines 11, no. 3 (March 3, 2023): 771. http://dx.doi.org/10.3390/biomedicines11030771.

Abstract
Deep brain stimulation is a treatment that controls symptoms by changing brain activity. The complexity of how to best treat brain dysfunction with deep brain stimulation has spawned research into artificial intelligence approaches. Machine learning is a subset of artificial intelligence that uses computers to learn patterns in data and has many healthcare applications, such as an aid in diagnosis, personalized medicine, and clinical decision support. Yet, how machine learning models make decisions is often opaque. The spirit of explainable artificial intelligence is to use machine learning models that produce interpretable solutions. Here, we use topic modeling to synthesize recent literature on explainable artificial intelligence approaches to extracting domain knowledge from machine learning models relevant to deep brain stimulation. The results show that patient classification (i.e., diagnostic models, precision medicine) is the most common problem in deep brain stimulation studies that employ explainable artificial intelligence. Other topics concern attempts to optimize stimulation strategies and the importance of explainable methods. Overall, this review supports the potential for artificial intelligence to revolutionize deep brain stimulation by personalizing stimulation protocols and adapting stimulation in real time.
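
The topic-modeling workflow that Allen's abstract describes can be sketched with standard tools. The following is a minimal, hypothetical example using latent Dirichlet allocation on a toy corpus; the documents and parameter values are placeholders, not the study's data or code.

```python
# Hypothetical sketch: fit LDA to a small corpus of abstracts and print the
# top words per topic, mirroring the literature-synthesis idea above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "deep brain stimulation treatment parkinson symptoms",
    "machine learning diagnosis classification patients",
    "explainable artificial intelligence interpretable models",
    "stimulation parameters optimization closed loop",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```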

Theses on the topic "Explainable artificial intelligence"

1. Nilsson, Linus. "Explainable Artificial Intelligence for Reinforcement Learning Agents". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294162.

Abstract
Following the success that machine learning has enjoyed over the last decade, reinforcement learning has become a prime research area for automation and solving complex tasks. Ranging from playing video games at a professional level to robots collaborating in picking goods in warehouses, the applications of reinforcement learning are numerous. The systems are, however, very complex, and the reasons why reinforcement learning agents solve the tasks given to them in certain ways are still largely unknown to the human observer. This limits the actual use of the agents to non-critical tasks and keeps the information that could be learnt from them hidden. To this end, explainable artificial intelligence (XAI) has received more attention in the last couple of years, in an attempt to explain machine learning systems to human operators. In this thesis we propose to use model-agnostic XAI techniques combined with clustering techniques on simple Atari games, and we also propose an automated evaluation of how well the explanations explain the behavior of the agents. This in an effort to uncover to what extent model-agnostic XAI can be used to gain insight into the behavior of reinforcement learning agents. The tested methods were RISE, t-SNE and Deletion. The methods were evaluated on several agents trained to play the Atari Breakout game, and the results show that they can be used to explain the behavior of the agents on a local level (one individual frame of a game sequence) and a global level (behavior over the entire game sequence), as well as to uncover the different strategies used by agents whose training time differs.
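
Of the methods named in the abstract, RISE is the most self-contained to sketch: the agent is treated as a black box, random binary masks are applied to the input frame, and a saliency map is built by weighting each mask with the score the masked input receives. The toy "agent" and mask parameters below are assumptions; only the masking logic follows the RISE idea (the published method also uses smooth upsampling and mask-probability normalization).

```python
# Hypothetical sketch of RISE-style saliency for a black-box agent.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((84, 84))                 # toy Atari-like observation

def agent_score(obs: np.ndarray) -> float:
    # Placeholder for e.g. the Q-value of the chosen action.
    return float(obs[40:44, 40:44].sum())    # pretend the agent looks here

n_masks, cell = 500, 7
saliency = np.zeros_like(frame)
for _ in range(n_masks):
    coarse = (rng.random((cell, cell)) < 0.5).astype(float)
    mask = np.kron(coarse, np.ones((12, 12)))[:84, :84]  # upsample coarse mask
    saliency += agent_score(frame * mask) * mask         # weight mask by score

saliency /= n_masks
print("most salient pixel:", np.unravel_index(saliency.argmax(), saliency.shape))
```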
2. Vincenzi, Leonardo. "eXplainable Artificial Intelligence User Experience: contesto e stato dell’arte". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23338/.

Abstract
The great development of the world of Artificial Intelligence, combined with its very broad application in many fields over recent years, has led to an ever-growing demand for explainability of Machine Learning systems. In response to this need, the field of eXplainable Artificial Intelligence has taken important steps toward creating systems and methods that make intelligent systems increasingly transparent, and, to guarantee ever greater fairness and safety in decisions made by AI, increasingly strict regulation of its explainability is expected in the near future. To take a further leap in quality, the recent field of study XAI UX sets as its main objective placing users at the center of the design process of AI systems, by combining the explainability techniques offered by eXplainable AI with the study of UX solutions. The new focus on the user and the need to create multidisciplinary teams, which bring greater communication barriers between people, still demand extensive study both by XAI experts and by the HCI community, and these are currently the main difficulties to be resolved. This work provides an up-to-date view of the state of the art of XAI UX, introducing the motivations, the context, and the various research areas it encompasses.
3. PICCHIOTTI, NICOLA. "Explainable Artificial Intelligence: an application to complex genetic diseases". Doctoral thesis, Università degli studi di Pavia, 2021. http://hdl.handle.net/11571/1447637.

4. ABUKMEIL, MOHANAD. "UNSUPERVISED GENERATIVE MODELS FOR DATA ANALYSIS AND EXPLAINABLE ARTIFICIAL INTELLIGENCE". Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/889159.

Abstract
For more than a century, the methods of learning representation and the exploration of the intrinsic structures of data have developed remarkably and currently include supervised, semi-supervised, and unsupervised methods. However, recent years have witnessed the flourishing of big data, where typical dataset dimensions are high, and the data can come in messy, missing, incomplete, unlabeled, or corrupted forms. Consequently, discovering and learning the hidden structure buried inside such data becomes highly challenging. From this perspective, latent data analysis and dimensionality reduction play a substantial role in decomposing the exploratory factors and learning the hidden structures of data, which encompasses the significant features that characterize the categories and trends among data samples in an ordered manner, that is, by extracting patterns, differentiating trends, and testing hypotheses to identify anomalies, learning compact knowledge, and performing many different machine learning (ML) tasks such as classification, detection, and prediction. Unsupervised generative learning (UGL) methods are a class of ML characterized by their possibility of analyzing and decomposing latent data, reducing dimensionality, visualizing the manifold of data, and learning representations with limited levels of predefined labels and prior assumptions. Furthermore, explainable artificial intelligence (XAI) is an emerging field of ML that deals with explaining the decisions and behaviors of learned models. XAI is also associated with UGL models to explain the hidden structure of data, and to explain the learned representations of ML models. However, the current UGL models lack large-scale generalizability and explainability in the testing stage, which restricts their potential in ML and XAI applications. To overcome the aforementioned limitations, this thesis proposes innovative methods that integrate UGL and XAI to enable data factorization and dimensionality reduction to improve the generalizability of the learned ML models. Moreover, the proposed methods enable visual explainability in modern applications such as anomaly detection and autonomous driving systems. The main research contributions are listed as follows:
• A novel overview of UGL models including blind source separation (BSS), manifold learning (MfL), and neural networks (NNs). Also, the overview considers open issues and challenges among each UGL method.
• An innovative method to identify the dimensions of the compact feature space via a generalized rank in the application of image dimensionality reduction.
• An innovative method to hierarchically reduce and visualize the manifold of data to improve the generalizability in limited data learning scenarios, and computational complexity reduction applications.
• An original method to visually explain autoencoders by reconstructing an attention map in the application of anomaly detection and explainable autonomous driving systems.
The novel methods introduced in this thesis are benchmarked on publicly available datasets, and they outperformed the state-of-the-art methods considering different evaluation metrics. Furthermore, superior results were obtained with respect to the state-of-the-art to confirm the feasibility of the proposed methodologies concerning the computational complexity, availability of learning data, model explainability, and high data reconstruction accuracy.
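
A minimal sketch of the kind of visual explanation the thesis describes for autoencoder-based anomaly detection: reconstruct the input, then display where reconstruction fails. Here a PCA model stands in for the autoencoder (it is likewise an encode/decode pair), and the per-pixel error map plays the role of the attention map; all data and dimensions are illustrative assumptions.

```python
# Hypothetical sketch: per-pixel reconstruction error as a visual explanation
# of an anomaly flagged by an encode/decode model trained on normal data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 64))        # flattened 8x8 "normal" images
pca = PCA(n_components=10).fit(normal)     # stand-in encoder/decoder

test = rng.normal(size=64)
test[:8] += 5.0                            # corrupt the first row: the anomaly

recon = pca.inverse_transform(pca.transform(test.reshape(1, -1)))[0]
error_map = ((test - recon) ** 2).reshape(8, 8)   # "attention map" analogue

print("anomaly score:", error_map.sum())
print("most anomalous row:", error_map.sum(axis=1).argmax())  # expect row 0
```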
5. Rouget, Thierry. "Learning explainable concepts in the presence of a qualitative model". Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9762.

Abstract
This thesis addresses the problem of learning concept descriptions that are interpretable, or explainable. Explainability is understood as the ability to justify the learned concept in terms of the existing background knowledge. The starting point for the work was an existing system that would induce only fully explainable rules. The system performed well when the model used during induction was complete and correct. In practice, however, models are likely to be imperfect, i.e. incomplete and incorrect. We report here a new approach that achieves explainability with imperfect models. The basis of the system is the standard inductive search driven by an accuracy-oriented heuristic, biased towards rule explainability. The bias is abandoned when there is heuristic evidence that a significant loss of accuracy results from constraining the search to explainable rules only. The users can express their relative preference for accuracy vs. explainability. Experiments with the system indicate that, even with a partially incomplete and/or incorrect model, insisting on explainability results in only a small loss of accuracy. We also show how the new approach described can repair a faulty model using evidence derived from data during induction.
6. Gjeka, Mario. "Uno strumento per le spiegazioni di sistemi di Explainable Artificial Intelligence". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract
The goal of this thesis is to show the importance of explanations in an intelligent system. The need for explainable and transparent artificial intelligence is growing considerably, a need highlighted by companies' efforts to develop transparent and explainable intelligent computing systems.
7. Amarasinghe, Kasun. "Explainable Neural Networks based Anomaly Detection for Cyber-Physical Systems". VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/6091.

Abstract
Cyber-Physical Systems (CPSs) are the core of modern critical infrastructure (e.g. power-grids) and securing them is of paramount importance. Anomaly detection in data is crucial for CPS security. While Artificial Neural Networks (ANNs) are strong candidates for the task, they are seldom deployed in safety-critical domains due to the perception that ANNs are black-boxes. Therefore, to leverage ANNs in CPSs, cracking open the black box through explanation is essential. The main objective of this dissertation is developing explainable ANN-based Anomaly Detection Systems for Cyber-Physical Systems (CP-ADS). The main objective was broken down into three sub-objectives: 1) Identifying key-requirements that an explainable CP-ADS should satisfy, 2) Developing supervised ANN-based explainable CP-ADSs, 3) Developing unsupervised ANN-based explainable CP-ADSs. In achieving those objectives, this dissertation provides the following contributions: 1) a set of key-requirements that an explainable CP-ADS should satisfy, 2) a methodology for deriving summaries of the knowledge of a trained supervised CP-ADS, 3) a methodology for validating derived summaries, 4) an unsupervised neural network methodology for learning cyber-physical (CP) behavior, 5) a methodology for visually and linguistically explaining the learned CP behavior. All the methods were implemented on real-world and benchmark datasets. The set of key-requirements presented in the first contribution was used to evaluate the performance of the presented methods. The successes and limitations of the presented methods were identified. Furthermore, steps that can be taken to overcome the limitations were proposed. Therefore, this dissertation takes several necessary steps toward developing explainable ANN-based CP-ADS and serves as a framework that can be expanded to develop trustworthy ANN-based CP-ADSs.
8. PANIGUTTI, Cecilia. "eXplainable AI for trustworthy healthcare applications". Doctoral thesis, Scuola Normale Superiore, 2022. https://hdl.handle.net/11384/125202.

Abstract
Acknowledging that AI will inevitably become a central element of clinical practice, this thesis investigates the role of eXplainable AI (XAI) techniques in developing trustworthy AI applications in healthcare. The first part of this thesis focuses on the societal, ethical, and legal aspects of the use of AI in healthcare. It first compares the different approaches to AI ethics worldwide and then focuses on the practical implications of the European ethical and legal guidelines for AI applications in healthcare. The second part of the thesis explores how XAI techniques can help meet three key requirements identified in the initial analysis: transparency, auditability, and human oversight. The technical transparency requirement is tackled by enabling explanatory techniques to deal with common healthcare data characteristics and tailor them to the medical field. In this regard, this thesis presents two novel XAI techniques that incrementally reach this goal by first focusing on multi-label predictive algorithms and then tackling sequential data and incorporating domain-specific knowledge in the explanation process. This thesis then analyzes the ability to leverage the developed XAI technique to audit a fictional commercial black-box clinical decision support system (DSS). Finally, the thesis studies AI explanation’s ability to effectively enable human oversight by studying the impact of explanations on the decision-making process of healthcare professionals.
9. Keneni, Blen M. Keneni. "Evolving Rule Based Explainable Artificial Intelligence for Decision Support System of Unmanned Aerial Vehicles". University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1525094091882295.

10. Hammarström, Tobias. "Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks: An exploration into explainable AI and potential applications within cancer detection". Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424779.

Abstract
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes the CAV-sensitivity to introduce and perform a sensitivity magnitude analysis. Both methods proved useful in discerning between the model’s two decision-making strategies based on either the animal or the noise. However, greater insight into the intricacies of said strategies is desired. Additionally, the methods provided a deeper understanding of the model’s learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise regarding the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods on more complex areas with specialized models and tasks, e.g. oral cancer.
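
The TCAV procedure evaluated in this thesis can be outlined in a short, hypothetical sketch: learn a Concept Activation Vector (CAV) that separates concept examples from random examples in a layer's activation space, then measure how often nudging activations along the CAV increases the class score. The activations and the linear scoring head below are toy stand-ins, and the directional derivative is approximated by finite differences rather than backpropagation.

```python
# Hypothetical sketch of TCAV: CAV from a linear probe, then a TCAV score as
# the fraction of inputs whose class score rises along the CAV direction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(100, 32))   # activations, concept images
random_acts = rng.normal(loc=0.0, size=(100, 32))    # activations, random images

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

w_class = rng.normal(size=32)                        # toy linear scoring head

def class_score(act: np.ndarray) -> float:
    return float(w_class @ act)

# Finite-difference directional derivative along the CAV for each input.
inputs = rng.normal(size=(50, 32))
eps = 1e-3
sens = [(class_score(a + eps * cav) - class_score(a)) / eps for a in inputs]
print("TCAV score:", np.mean(np.array(sens) > 0))
```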

Books on the topic "Explainable artificial intelligence"

1. Gaur, Loveleen, and Biswa Mohan Sahoo. Explainable Artificial Intelligence for Intelligent Transportation Systems. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09644-0.

2. Krötzsch, Markus, and Daria Stepanova, eds. Reasoning Web. Explainable Artificial Intelligence. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31423-1.

3. Lahby, Mohamed, Utku Kose, and Akash Kumar Bhoi. Explainable Artificial Intelligence for Smart Cities. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003172772.

4. Ahmed, Mohiuddin, Sheikh Rabiul Islam, Adnan Anwar, Nour Moustafa, and Al-Sakib Khan Pathan, eds. Explainable Artificial Intelligence for Cyber Security. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96630-0.

5. Kamath, Uday, and John Liu. Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5.

6. Marcos, Mar, Jose M. Juarez, Richard Lenz, Grzegorz J. Nalepa, Slawomir Nowaczyk, Mor Peleg, Jerzy Stefanowski, and Gregor Stiglic, eds. Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37446-4.

7. Rutkowski, Tom. Explainable Artificial Intelligence Based on Neuro-Fuzzy Modeling with Applications in Finance. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75521-8.

8. Khamparia, Aditya, Deepak Gupta, Ashish Khanna, and Valentina E. Balas, eds. Biomedical Data Analysis and Processing Using Explainable (XAI) and Responsive Artificial Intelligence (RAI). Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1476-8.

9. Kose, Utku, Akash Kumar Bhoi, and Mohamed Lahby. Explainable Artificial Intelligence for Smart Cities. Taylor & Francis Group, 2021.

10. Kose, Utku, Akash Kumar Bhoi, and Mohamed Lahby. Explainable Artificial Intelligence for Smart Cities. Taylor & Francis Group, 2021.


Book chapters on the topic "Explainable artificial intelligence"

1. Samek, Wojciech, and Klaus-Robert Müller. "Towards Explainable Artificial Intelligence". In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 5–22. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_1.

2. Holzinger, Andreas, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek. "xxAI - Beyond Explainable Artificial Intelligence". In xxAI - Beyond Explainable AI, 3–10. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_1.

Abstract
The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult to comprehend by human experts. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations which highlight what input patterns are most influential in activating hidden units, or are most responsible for a model’s decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI in moving beyond explaining classifiers via such methods, to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-and-answering systems, structured explanations). In addition, we also intend to move beyond simply providing model explanations to directly improving the transparency, efficiency and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for what next areas to focus on within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. This volume is an outcome of the ICML 2020 workshop on “XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.”
3. Alonso Moral, Jose Maria, Ciro Castiello, Luis Magdalena, and Corrado Mencar. "Toward Explainable Artificial Intelligence Through Fuzzy Systems". In Explainable Fuzzy Systems, 1–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71098-9_1.

4. Guidotti, Riccardo, Anna Monreale, Dino Pedreschi, and Fosca Giannotti. "Principles of Explainable Artificial Intelligence". In Explainable AI Within the Digital Transformation and Cyber Physical Systems, 9–31. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76409-8_2.

5. Lindsay, Leeanne, Sonya Coleman, Dermot Kerr, Brian Taylor, and Anne Moorhead. "Explainable Artificial Intelligence for Falls Prediction". In Communications in Computer and Information Science, 76–84. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6634-9_8.

6. Narteni, Sara, Melissa Ferretti, Vanessa Orani, Ivan Vaccari, Enrico Cambiaso, and Maurizio Mongelli. "From Explainable to Reliable Artificial Intelligence". In Lecture Notes in Computer Science, 255–73. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84060-0_17.

7. Lin, Kuo-Yi, Yuguang Liu, Li Li, and Runliang Dou. "A Review of Explainable Artificial Intelligence". In Advances in Production Management Systems. Artificial Intelligence for Sustainable and Resilient Production Systems, 574–84. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85910-7_61.

8. Sayed-Mouchaweh, Moamar. "Prologue: Introduction to Explainable Artificial Intelligence". In Explainable AI Within the Digital Transformation and Cyber Physical Systems, 1–8. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76409-8_1.

9. Islam, Sheikh Rabiul, and William Eberle. "Domain Knowledge-Aided Explainable Artificial Intelligence". In Studies in Computational Intelligence, 73–92. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96630-0_4.

10. Averkin, Alexey, and Sergey Yarushev. "Fuzzy Approach to Explainable Artificial Intelligence". In Lecture Notes in Networks and Systems, 180–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25252-5_27.


Conference proceedings on the topic "Explainable artificial intelligence"

1. Sheh, Raymond. "Explainable Artificial Intelligence Requirements for Safe, Intelligent Robots". In 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR). IEEE, 2021. http://dx.doi.org/10.1109/isr50024.2021.9419498.

2. Alonso, Jose M. "Explainable Artificial Intelligence for Kids". In Proceedings of the 2019 Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology (EUSFLAT 2019). Paris, France: Atlantis Press, 2019. http://dx.doi.org/10.2991/eusflat-19.2019.21.

3. Górski, Łukasz, and Shashishekar Ramakrishna. "Explainable artificial intelligence, lawyer's perspective". In ICAIL '21: Eighteenth International Conference for Artificial Intelligence and Law. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462757.3466145.

4. Dosilovic, Filip Karlo, Mario Brcic, and Nikica Hlupic. "Explainable artificial intelligence: A survey". In 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, 2018. http://dx.doi.org/10.23919/mipro.2018.8400040.

5. Zheng, Yongqing, Han Yu, Kun Zhang, Yuliang Shi, Cyril Leung, and Chunyan Miao. "Intelligent Decision Support for Improving Power Management". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/965.

Abstract
With the development and adoption of the electricity information tracking system in China, real-time electricity consumption big data have become available to enable artificial intelligence (AI) to help power companies and the urban management departments to make demand side management decisions. We demonstrate the Power Intelligent Decision Support (PIDS) platform, which can generate Orderly Power Utilization (OPU) decision recommendations and perform Demand Response (DR) implementation management based on a short-term load forecasting model. It can also provide different users with query and application functions to facilitate explainable decision support.
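
The pipeline the PIDS abstract outlines, a short-term load forecast feeding an explainable demand-side decision rule, can be sketched as follows. The autoregressive model, synthetic load series, and capacity threshold are all illustrative assumptions, not the platform's actual components.

```python
# Hypothetical sketch: a simple autoregressive load forecast driving a
# transparent demand-response recommendation rule.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)  # two weeks of hourly data
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

lags = 24                   # predict the next hour from the previous day
X = np.stack([load[i:i + lags] for i in range(load.size - lags)])
y = load[lags:]
model = LinearRegression().fit(X, y)

next_hour = model.predict(load[-lags:].reshape(1, -1))[0]
CAPACITY = 115.0            # assumed grid limit (illustrative)
print(f"forecast: {next_hour:.1f} MW")
if next_hour > CAPACITY:
    print("recommendation: trigger demand response (forecast exceeds capacity)")
else:
    print("recommendation: normal operation")
```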
6. Shevskaya, Natalya V., Ekaterina S. Akhrymuk, and Nikita V. Popov. "Causal Relationships in Explainable Artificial Intelligence". In 2022 XXV International Conference on Soft Computing and Measurements (SCM). IEEE, 2022. http://dx.doi.org/10.1109/scm55405.2022.9794848.

7. Krajna, Agneza, Mihael Kovac, Mario Brcic, and Ana Sarcevic. "Explainable Artificial Intelligence: An Updated Perspective". In 2022 45th Jubilee International Convention on Information, Communication and Electronic Technology (MIPRO). IEEE, 2022. http://dx.doi.org/10.23919/mipro55190.2022.9803681.

8. Gunning, David. "DARPA's explainable artificial intelligence (XAI) program". In IUI '19: 24th International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3301275.3308446.

9. Tanwar, Nakul, Jaishree Meena, and Yasha Hasija. "Explicate Toxicity By eXplainable Artificial Intelligence". In 2022 International Conference on Industry 4.0 Technology (I4Tech). IEEE, 2022. http://dx.doi.org/10.1109/i4tech55392.2022.9952865.

10. Coroama, Loredana, and Adrian Groza. "Explainable Artificial Intelligence for Person Identification". In 2021 IEEE 17th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, 2021. http://dx.doi.org/10.1109/iccp53602.2021.9733525.


Reports on the topic "Explainable artificial intelligence"

1. Core, Mark G., H. C. Lane, Michael van Lent, Dave Gomboc, Steve Solomon, and Milton Rosenberg. Building Explainable Artificial Intelligence Systems. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada459166.

2. Phillips, P. Jonathon, Carina A. Hahn, Peter C. Fontana, Amy N. Yates, Kristen Greene, David A. Broniatowski, and Mark A. Przybocki. Four Principles of Explainable Artificial Intelligence. National Institute of Standards and Technology, September 2021. http://dx.doi.org/10.6028/nist.ir.8312.
