Academic literature on the topic 'Artificial Intelligence, Explainable AI'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial Intelligence, Explainable AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Artificial Intelligence, Explainable AI"

1

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Full text
Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building additional knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account the user's profile, and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
APA, Harvard, Vancouver, ISO, and other styles
2

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (April 30, 2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Full text
Abstract:
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in various areas, all in one place. This study clarifies how model interpretability and Explainable AI work together. This paper aims to investigate Explainable Artificial Intelligence approaches and their applications in multiple domains. In particular, it focuses on various model interpretability methods with respect to Explainable AI techniques. It emphasizes Explainable Artificial Intelligence (XAI) approaches that have been developed and can be used to solve the challenges facing various businesses. This article makes the case for the significance of explainable artificial intelligence across a vast number of disciplines.
APA, Harvard, Vancouver, ISO, and other styles
3

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence." Digital Technologies Research and Applications 1, no. 1 (January 26, 2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Full text
Abstract:
Recently, Artificial Intelligence (AI) has emerged as a field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and the ubiquity of demanding applications in a variety of critical domains such as medicine, defense, justice and autonomous vehicles, there is an increasing need to accompany results with sound explanations for domain experts. All of these elements have contributed to the emergence of Explainable Artificial Intelligence (XAI).
APA, Harvard, Vancouver, ISO, and other styles
4

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Full text
Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
APA, Harvard, Vancouver, ISO, and other styles
5

Alufaisan, Yasmeen, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6618–26. http://dx.doi.org/10.1609/aaai.v35i8.16819.

Full text
Abstract:
Explainable AI provides users with insights into the 'why' of model predictions, offering the potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings on whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but we find no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the 'why' information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
APA, Harvard, Vancouver, ISO, and other styles
6

Dikmen, Murat, and Catherine Burns. "Abstraction Hierarchy Based Explainable Artificial Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 319–23. http://dx.doi.org/10.1177/1071181320641073.

Full text
Abstract:
This work explores the application of Cognitive Work Analysis (CWA) in the context of Explainable Artificial Intelligence (XAI). We built an AI system using a loan evaluation data set and applied an XAI technique to obtain data-driven explanations for predictions. Using an Abstraction Hierarchy (AH), we generated domain knowledge-based explanations to accompany data-driven explanations. An online experiment was conducted to test the usefulness of AH-based explanations. Participants read financial profiles of loan applicants, the AI system’s loan approval/rejection decisions, and explanations that justify the decisions. The presence or absence of AH-based explanations was manipulated, and participants’ perceptions of the explanation quality were measured. The results showed that providing AH-based explanations helped participants learn about the loan evaluation process and improved the perceived quality of explanations. We conclude that a CWA approach can increase understandability when explaining the decisions made by AI systems.
APA, Harvard, Vancouver, ISO, and other styles
7

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Full text
Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first year of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
APA, Harvard, Vancouver, ISO, and other styles
8

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science, Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021), representing the current state of the art of XAI in the insurance literature, are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, such as knowledge distillation and rule extraction, are identified as the primary XAI techniques used within the insurance value chain. This is important, as combining large models into a smaller, more manageable model with distinct association rules aids in building XAI models that are readily understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
APA, Harvard, Vancouver, ISO, and other styles
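The simplification methods highlighted in the entry above (knowledge distillation, rule extraction) can be illustrated with a global surrogate: a small, readable model is fitted to mimic a complex one. The sketch below is a generic illustration under assumed data and models, not the pipeline studied in the paper; the synthetic dataset, the random-forest "black box", and the depth-3 surrogate tree are all placeholders.

```python
# Minimal sketch of a global-surrogate simplification method: a complex black-box
# model is approximated by a shallow decision tree whose rules can be read directly.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Distil: fit a shallow tree on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable rules approximating the black box
```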
9

Esmaeili, Morteza, Riyas Vettukattil, Hasan Banitalebi, Nina R. Krogh, and Jonn Terje Geitung. "Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization." Journal of Personalized Medicine 11, no. 11 (November 16, 2021): 1213. http://dx.doi.org/10.3390/jpm11111213.

Full text
Abstract:
Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or to integrate explanation into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classify some tumor-containing brains based on other, non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human–machine interactions and assist in the selection of optimal training methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Thakker, Dhavalkumar, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, and Sydney Simpson. "Explainable Artificial Intelligence for Developing Smart Cities Solutions." Smart Cities 3, no. 4 (November 13, 2020): 1353–82. http://dx.doi.org/10.3390/smartcities3040065.

Full text
Abstract:
Traditional Artificial Intelligence (AI) technologies used in developing smart cities solutions, Machine Learning (ML) and recently Deep Learning (DL), rely more on utilising representative training datasets and feature engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome of solutions less explainable, i.e., it is often not possible to explain the results of the model. There is growing concern among policymakers in cities about this lack of explainability of AI solutions, and this is considered a major hindrance to the wider acceptability of, and trust in, such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart cities flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding issues is an important aspect of any flood monitoring solution. Typical solutions for this problem involve the use of cameras to capture images showing the affected areas in real time with different objects such as leaves, plastic bottles, etc., and building a DL-based classifier to detect such objects and classify blockages based on the presence and coverage of these objects in the images. In this work, we uniquely propose an Explainable AI solution using DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level, and semantic rules designed in close consultation with experts carry out the classification. By using expert knowledge in the flooding context, our hybrid classifier provides flexibility in categorising the image using objects and their coverage relationships. The experimental results, demonstrated with a real-world use case, showed that this hybrid approach to image classification has on average an 11% improvement (F-measure) in image classification performance compared to a DL-only classifier. It also has the distinct advantage of integrating expert knowledge in defining the decision-making rules that represent complex circumstances, and using such knowledge to explain the results.
APA, Harvard, Vancouver, ISO, and other styles
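The hybrid classifier described in the entry above couples a DL object detector with expert-written semantic rules. A minimal sketch of the rule layer is given below; the object names, coverage thresholds, and blockage classes are invented for illustration and are not the project's actual rules, and the detector output is mocked rather than produced by a real model.

```python
# Minimal sketch of a rule layer sitting on top of a (mocked) DL object detector:
# the detector reports which objects are present in a gully image and how much of
# the drain area they cover, and hand-written rules turn that into a blockage class.
from typing import Dict

def classify_blockage(coverage: Dict[str, float]) -> str:
    """coverage maps detected object -> fraction of the drain area covered (0..1)."""
    total = sum(coverage.values())
    solid = coverage.get("plastic_bottle", 0.0) + coverage.get("debris", 0.0)
    if total < 0.1:
        return "clear"
    if solid > 0.3 or total > 0.6:
        return "blocked"             # solid objects or heavy coverage: likely blockage
    if coverage.get("leaves", 0.0) > 0.2:
        return "partially_blocked"   # leaves alone: lower severity, keep monitoring
    return "low_risk"

# In the real system these numbers would come from the DL object detector.
detections = {"leaves": 0.35, "plastic_bottle": 0.05}
print(classify_blockage(detections))   # -> "partially_blocked"
```

Because the decision is taken by readable rules over named objects, the classification can be explained in the experts' own vocabulary rather than in terms of network internals.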

Dissertations / Theses on the topic "Artificial Intelligence, Explainable AI"

1

Vincenzi, Leonardo. "eXplainable Artificial Intelligence User Experience: contesto e stato dell’arte." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23338/.

Full text
Abstract:
The great expansion of the field of Artificial Intelligence, combined with its very wide application across many domains in recent years, has led to an ever greater demand for explainability of Machine Learning systems. In response to this need, the field of eXplainable Artificial Intelligence has taken important steps towards creating systems and methods that make intelligent systems increasingly transparent, and, to guarantee ever greater fairness and safety in the decisions taken by AI, increasingly strict regulation of its explainability is expected in the near future. To take a further step forward, the recent field of study XAI UX sets as its main objective placing users at the centre of the design process of AI systems, by combining the explainability techniques offered by eXplainable AI with the study of UX solutions. The new focus on the user and the need to create multidisciplinary teams, and therefore greater communication barriers between people, still require extensive study both by XAI experts and by the HCI community, and these are currently the main difficulties to be resolved. This thesis provides an up-to-date view of the state of the art of XAI UX, introducing the motivations, the context, and the various research areas it encompasses.
APA, Harvard, Vancouver, ISO, and other styles
2

Gjeka, Mario. "Uno strumento per le spiegazioni di sistemi di Explainable Artificial Intelligence." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The objective of this thesis is to show the importance of explanations in an intelligent system. The need for explainable and transparent artificial intelligence is growing considerably, a need highlighted by companies' efforts to develop intelligent software systems that are transparent and explainable.
APA, Harvard, Vancouver, ISO, and other styles
3

Hammarström, Tobias. "Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424779.

Full text
Abstract:
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Methods (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes the CAV-sensitivity to introduce and perform a sensitivity magnitude analysis. Both methods proved useful in discerning between the model’s two decision-making strategies based on either the animal or the noise. However, greater insight into the intricacies of said strategies is desired. Additionally, the methods provided a deeper understanding of the model’s learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise regarding the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods on more complex areas with specialized models and tasks, e.g. oral cancer.
APA, Harvard, Vancouver, ISO, and other styles
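The TCAV/CAV-sensitivity analysis used in the thesis above can be sketched as follows: a Concept Activation Vector is the normal of a linear boundary separating a concept's activations from random activations, and sensitivity is the directional derivative of the class score along that vector. In this minimal sketch the activations and gradients are random stand-ins for those of a real network, so the numbers are meaningless; only the mechanics are shown.

```python
# Minimal sketch of the CAV step of TCAV with mocked activations and gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(100, 64))   # layer activations, concept images
random_acts = rng.normal(loc=0.0, size=(100, 64))    # layer activations, random images

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)                            # Concept Activation Vector

# Conceptual sensitivity of a prediction: directional derivative of the class
# logit along the CAV. Here the gradients are mocked; in practice they come
# from backpropagation at the same layer.
grads = rng.normal(size=(50, 64))                     # d(logit)/d(activation) per example
sensitivities = grads @ cav
tcav_score = (sensitivities > 0).mean()               # fraction pushed toward the class
print(f"TCAV score: {tcav_score:.2f}, "
      f"mean sensitivity magnitude: {np.abs(sensitivities).mean():.3f}")
```

The thesis's sensitivity-magnitude analysis corresponds to looking not only at the sign (the TCAV score) but also at the size of these directional derivatives.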
4

Costa, Bueno Vicente. "Fuzzy Horn clauses in artificial intelligence: a study of free models, and applications in art painting style categorization." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/673374.

Full text
Abstract:
This PhD thesis contributes to the systematic study of Horn clauses of predicate fuzzy logics and their use in knowledge representation for the design of an art painting style classification algorithm. We first focus the study on relevant notions in logic programming, such as free models and Herbrand structures in mathematical fuzzy logic. We show the existence of free models in fuzzy universal Horn classes, and we prove that every equality-free consistent universal Horn fuzzy theory has a Herbrand model. Two notions of minimality of free models are introduced, and we show that these notions are equivalent in the case of fully named structures. Then, we use Horn clauses combined with qualitative modeling as a fuzzy knowledge representation framework for art painting style categorization. Finally, we design a style painting classifier based on evaluated Horn clauses, qualitative color descriptors, and explanations. This algorithm, called l-SHE, provides reasons for the obtained results and obtains percentages of accuracy in the experimentation that are competitive.
Universitat Autònoma de Barcelona. Programa de Doctorat en Ciència Cognitiva i Llenguatge
APA, Harvard, Vancouver, ISO, and other styles
5

Giuliani, Luca. "Extending the Moving Targets Method for Injecting Constraints in Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23885/.

Full text
Abstract:
Informed Machine Learning is an umbrella term that comprises a set of methodologies in which domain knowledge is injected into a data-driven system in order to improve its level of accuracy, satisfy some external constraint, and in general serve the purposes of explainability and reliability. This topic has been widely explored in the literature by means of many different techniques. Moving Targets is one such technique, particularly focused on constraint satisfaction: it is based on decomposition and bi-level optimization and proceeds by iteratively refining the target labels through a master step which is in charge of enforcing the constraints, while the training phase is delegated to a learner. In this work, we extend the algorithm in order to deal with semi-supervised learning and soft constraints. In particular, we focus our empirical evaluation on both regression and classification tasks involving monotonicity shape constraints. We demonstrate that our method is robust with respect to its hyperparameters, as well as being able to generalize very well while reducing the number of violations of the enforced constraints. Additionally, the method can even outperform, both in terms of accuracy and constraint satisfaction, other state-of-the-art techniques such as Lattice Models and Semantic-based Regularization with a Lagrangian Dual approach for automatic hyperparameter tuning.
APA, Harvard, Vancouver, ISO, and other styles
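The master/learner decomposition described in the entry above can be illustrated with a toy alternating loop. In the sketch below the constraint is monotonicity of a 1-D regression, the "master" step is approximated by an isotonic projection of the current predictions, and the "learner" is a shallow decision tree; this is a simplified stand-in under those assumptions, not the actual Moving Targets formulation or its extension in the thesis.

```python
# Toy alternating loop in the spirit of a master/learner decomposition:
# the learner fits the current targets, the master projects its predictions
# onto the constraint-satisfying (monotone) set, and the projection becomes
# the next set of targets.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = 0.5 * x + rng.normal(scale=1.5, size=200)        # noisy, roughly increasing target

targets = y.copy()
learner = DecisionTreeRegressor(max_depth=4, random_state=0)
for _ in range(5):                                    # alternate learner / master steps
    learner.fit(x.reshape(-1, 1), targets)
    preds = learner.predict(x.reshape(-1, 1))
    # Master step: replace targets with the monotone projection of the predictions.
    targets = IsotonicRegression(increasing=True).fit_transform(x, preds)

final = learner.predict(x.reshape(-1, 1))
violations = int(np.sum(np.diff(final) < -1e-9))      # x is sorted, so diffs suffice
print(f"monotonicity violations after refinement: {violations}")
```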
6

Nilsson, Linus. "Explainable Artificial Intelligence for Reinforcement Learning Agents." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294162.

Full text
Abstract:
Following the success that machine learning has enjoyed over the last decade, reinforcement learning has become a prime research area for automation and solving complex tasks. Ranging from playing video games at a professional level to robots collaborating in picking goods in warehouses, the applications of reinforcement learning are numerous. The systems are, however, very complex, and why reinforcement learning agents solve the tasks given to them in certain ways is still largely unknown to the human observer. This limits the actual use of the agents to non-critical tasks and keeps the information that could be learnt from them hidden. To this end, explainable artificial intelligence (XAI) has received more attention in the last couple of years, in an attempt to explain machine learning systems to their human operators. In this thesis we propose to use model-agnostic XAI techniques combined with clustering techniques on simple Atari games, and we propose an automated evaluation of how well the explanations explain the behavior of the agents. This is done in an effort to uncover to what extent model-agnostic XAI can be used to gain insight into the behavior of reinforcement learning agents. The tested methods were RISE, t-SNE and Deletion. The methods were evaluated on several different agents trained to play the Atari Breakout game, and the results show that they can be used to explain the behavior of the agents at a local level (one individual frame of a game sequence) and at a global level (behavior over the entire game sequence), as well as to uncover the different strategies used by agents whose amount of training differs.
APA, Harvard, Vancouver, ISO, and other styles
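Of the methods tested in the thesis above, RISE is the most self-contained to sketch: pixel importance is estimated by averaging many random binary masks weighted by the model's output on the masked input. The sketch below uses a mocked scoring function in place of a trained Atari agent; with a real agent the score would be, for example, the policy probability or Q-value of the chosen action on the masked frame.

```python
# Minimal sketch of RISE-style saliency with a mocked scoring function.
import numpy as np

def rise_saliency(image, score, n_masks=500, grid=8, p_keep=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        # Upsample the coarse mask to image resolution (nearest-neighbour for brevity).
        mask = np.kron(coarse, np.ones((h // grid, w // grid)))
        saliency += score(image * mask) * mask
    return saliency / n_masks

# Mock inputs: a 64x64 "frame" and a score that rewards a bright patch near the centre.
frame = np.zeros((64, 64)); frame[28:36, 28:36] = 1.0
score = lambda img: img[28:36, 28:36].mean()
sal = rise_saliency(frame, score)
print("most salient pixel:", np.unravel_index(sal.argmax(), sal.shape))
```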
7

Karlsson, Marcus. "Developing services based on Artificial Intelligence." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-73090.

Full text
Abstract:
This thesis explores the development process of services based on artificial intelligence (AI) technology within an industrial setting. There has been a renewed interest in the technology, and leading technology companies as well as many start-ups have integrated it into their market offerings. The technology's general application potential for enhancing products and services, along with the possibility of task automation for improved operational excellence, makes it a valuable asset for companies. However, the implementation rate of AI services is still low for many industrial actors. The research in the area has been technically dominated, with little contribution from other disciplines. Therefore, the purpose of this thesis is to identify development challenges of AI services and, drawing on service development and value theory, to propose a process framework promoting implementation. The work has two main contributions: firstly, to compare differences in theoretical and practical development challenges, and secondly, to combine AI with service development and value theory. The empirical research is done through a single case study based on a systematic combining research approach. It moves iteratively between theory and empirical findings to direct and support the thesis throughout the work process. The data was collected through semi-structured interviews with a purposive sample. It consisted of two groups of interview participants, one AI expert group and one case-internal group. This was supported by participant observation of the case environment. The data analysis was done through flexible pattern matching. The results were divided into two sections, practical challenges and development aspects of AI service development. These were combined with the selected theories, and a process framework was generated. The study showed that business and organisational aspects of AI service development are currently understudied. Several such challenges were identified with limited theoretical research as support. For a wider industrial adoption of AI technology, more research is needed to understand its integration into the organisation. Further, sustainability and ethical aspects were found not to be a primary concern, mentioned in only one of the interviews, despite the plethora of theory and identified risks found in the literature. Lastly, the interdisciplinary research approach was found to be beneficial to the AI field for integrating the technology into an industrial setting. The developed framework could draw from existing service development models to help manage the identified challenges.
APA, Harvard, Vancouver, ISO, and other styles
8

Rouget, Thierry. "Learning explainable concepts in the presence of a qualitative model." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9762.

Full text
Abstract:
This thesis addresses the problem of learning concept descriptions that are interpretable, or explainable. Explainability is understood as the ability to justify the learned concept in terms of the existing background knowledge. The starting point for the work was an existing system that would induce only fully explainable rules. The system performed well when the model used during induction was complete and correct. In practice, however, models are likely to be imperfect, i.e. incomplete and incorrect. We report here a new approach that achieves explainability with imperfect models. The basis of the system is the standard inductive search driven by an accuracy-oriented heuristic, biased towards rule explainability. The bias is abandoned when there is heuristic evidence that a significant loss of accuracy results from constraining the search to explainable rules only. The users can express their relative preference for accuracy vs. explainability. Experiments with the system indicate that, even with a partially incomplete and/or incorrect model, insisting on explainability results in only a small loss of accuracy. We also show how the new approach described can repair a faulty model using evidence derived from data during induction.
APA, Harvard, Vancouver, ISO, and other styles
9

Amarasinghe, Kasun. "Explainable Neural Networks based Anomaly Detection for Cyber-Physical Systems." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/6091.

Full text
Abstract:
Cyber-Physical Systems (CPSs) are the core of modern critical infrastructure (e.g. power-grids) and securing them is of paramount importance. Anomaly detection in data is crucial for CPS security. While Artificial Neural Networks (ANNs) are strong candidates for the task, they are seldom deployed in safety-critical domains due to the perception that ANNs are black-boxes. Therefore, to leverage ANNs in CPSs, cracking open the black box through explanation is essential. The main objective of this dissertation is developing explainable ANN-based Anomaly Detection Systems for Cyber-Physical Systems (CP-ADS). The main objective was broken down into three sub-objectives: 1) Identifying key-requirements that an explainable CP-ADS should satisfy, 2) Developing supervised ANN-based explainable CP-ADSs, 3) Developing unsupervised ANN-based explainable CP-ADSs. In achieving those objectives, this dissertation provides the following contributions: 1) a set of key-requirements that an explainable CP-ADS should satisfy, 2) a methodology for deriving summaries of the knowledge of a trained supervised CP-ADS, 3) a methodology for validating derived summaries, 4) an unsupervised neural network methodology for learning cyber-physical (CP) behavior, 5) a methodology for visually and linguistically explaining the learned CP behavior. All the methods were implemented on real-world and benchmark datasets. The set of key-requirements presented in the first contribution was used to evaluate the performance of the presented methods. The successes and limitations of the presented methods were identified. Furthermore, steps that can be taken to overcome the limitations were proposed. Therefore, this dissertation takes several necessary steps toward developing explainable ANN-based CP-ADS and serves as a framework that can be expanded to develop trustworthy ANN-based CP-ADSs.
APA, Harvard, Vancouver, ISO, and other styles
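One common unsupervised route to the kind of ANN-based anomaly detection pursued in the dissertation above is an autoencoder thresholded on reconstruction error. The sketch below is a generic illustration of that idea on invented data, not the dissertation's architecture or its explanation methodology.

```python
# Minimal sketch of reconstruction-error anomaly detection for sensor data:
# a small autoencoder learns "normal" behaviour and flags inputs whose
# reconstruction error exceeds a threshold estimated on normal data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1000, 10))             # "normal" CPS sensor readings
anomalous = rng.normal(4, 1, size=(20, 10))            # shifted readings to detect

autoencoder = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
autoencoder.fit(normal, normal)                         # learn to reconstruct the input

def recon_error(X):
    return np.mean((autoencoder.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(recon_error(normal), 99)      # tolerate ~1% false alarms
flags = recon_error(anomalous) > threshold
print(f"flagged {flags.sum()} of {len(flags)} anomalous samples")
```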
10

Kim, Jee Won. "How speciesism affects artificial intelligence (AI) adoption intent." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/228673/1/Jee%20Won_Kim_Thesis.pdf.

Full text
Abstract:
As there have been concerns about the excessive advancement of artificial intelligence (AI) surpassing humans, exploring reactions to AI as challenging human superiority is meaningful. By examining how hierarchical and discriminative views on animals (speciesism) affect views on non-living AI, this thesis makes significant and novel contributions to the AI adoption literature and AI product marketing.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Artificial Intelligence, Explainable AI"

1

Krötzsch, Markus, and Daria Stepanova, eds. Reasoning Web. Explainable Artificial Intelligence. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31423-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lahby, Mohamed, Utku Kose, and Akash Kumar Bhoi. Explainable Artificial Intelligence for Smart Cities. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003172772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ahmed, Mohiuddin, Sheikh Rabiul Islam, Adnan Anwar, Nour Moustafa, and Al-Sakib Khan Pathan, eds. Explainable Artificial Intelligence for Cyber Security. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96630-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gaur, Loveleen, and Biswa Mohan Sahoo. Explainable Artificial Intelligence for Intelligent Transportation Systems. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09644-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kamath, Uday, and John Liu. Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Artificial intelligence. 3rd ed. Reading, Mass: Addison-Wesley, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Winston, Patrick Henry. Artificial intelligence. 3rd ed. Reading, MA: Addison-Wesley, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Surviving AI. [Place of publication not identified]: Three Cs, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Artificial intelligence. 3rd ed. Reading, Mass: Addison-Wesley Pub. Co., 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

AI jiten: Encyclopedia of artificial intelligence. 2nd ed. Tōkyō: Kyōritsu Shuppan, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Artificial Intelligence, Explainable AI"

1

Holzinger, Andreas, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek. "xxAI - Beyond Explainable Artificial Intelligence." In xxAI - Beyond Explainable AI, 3–10. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_1.

Full text
Abstract:
The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult to comprehend by human experts. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations which highlight what input patterns are most influential in activating hidden units, or are most responsible for a model’s decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI in moving beyond explaining classifiers via such methods, to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-and-answering systems, structured explanations). In addition, we also intend to move beyond simply providing model explanations to directly improving the transparency, efficiency and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for what next areas to focus on within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. This volume is an outcome of the ICML 2020 workshop on “XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.”
APA, Harvard, Vancouver, ISO, and other styles
2

Samek, Wojciech, and Klaus-Robert Müller. "Towards Explainable Artificial Intelligence." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 5–22. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fan, Xiuyi, and Siyuan Liu. "Explainable AI for Classification Using Probabilistic Logic Inference." In Artificial Intelligence, 16–26. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93049-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guidotti, Riccardo, Anna Monreale, Dino Pedreschi, and Fosca Giannotti. "Principles of Explainable Artificial Intelligence." In Explainable AI Within the Digital Transformation and Cyber Physical Systems, 9–31. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76409-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gaur, Loveleen, and Biswa Mohan Sahoo. "Explainable AI in ITS: Ethical Concerns." In Explainable Artificial Intelligence for Intelligent Transportation Systems, 79–90. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09644-0_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chennam, Krishna Keerthi, Swapna Mudrakola, V. Uma Maheswari, Rajanikanth Aluvalu, and K. Gangadhara Rao. "Black Box Models for eXplainable Artificial Intelligence." In Explainable AI: Foundations, Methodologies and Applications, 1–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12807-3_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cruz, Francisco, Richard Dazeley, and Peter Vamplew. "Memory-Based Explainable Reinforcement Learning." In AI 2019: Advances in Artificial Intelligence, 66–77. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35288-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gaur, Loveleen, and Biswa Mohan Sahoo. "Introduction to Explainable AI and Intelligent Transportation." In Explainable Artificial Intelligence for Intelligent Transportation Systems, 1–25. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09644-0_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sayed-Mouchaweh, Moamar. "Prologue: Introduction to Explainable Artificial Intelligence." In Explainable AI Within the Digital Transformation and Cyber Physical Systems, 1–8. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76409-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mamalakis, Antonios, Imme Ebert-Uphoff, and Elizabeth A. Barnes. "Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science." In xxAI - Beyond Explainable AI, 315–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_16.

Full text
Abstract:
In recent years, artificial intelligence and specifically artificial neural networks (NNs) have shown great success in solving complex, nonlinear problems in earth sciences. Despite their success, the strategies upon which NNs make decisions are hard to decipher, which prevents scientists from interpreting and building trust in the NN predictions; a highly desired and necessary condition for the further use and exploitation of NNs’ potential. Thus, a variety of methods have been recently introduced with the aim of attributing the NN predictions to specific features in the input space and explaining their strategy. The so-called eXplainable Artificial Intelligence (XAI) is already seeing great application in a plethora of fields, offering promising results and insights about the decision strategies of NNs. Here, we provide an overview of the most recent work from our group, applying XAI to meteorology and climate science. Specifically, we present results from satellite applications that include weather phenomena identification and image to image translation, applications to climate prediction at subseasonal to decadal timescales, and detection of forced climatic changes and anthropogenic footprint. We also summarize a recently introduced synthetic benchmark dataset that can be used to improve our understanding of different XAI methods and introduce objectivity into the assessment of their fidelity. With this overview, we aim to illustrate how gaining accurate insights about the NN decision strategy can help climate scientists and meteorologists improve practices in fine-tuning model architectures, calibrating trust in climate and weather prediction and attribution, and learning new science.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Artificial Intelligence, Explainable AI"

1

Ignatiev, Alexey. "Towards Trustable Explainable AI." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/726.

Full text
Abstract:
Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges being faced by the area of AI these days. Although the majority of approaches to XAI are of heuristic nature, recent work proposed the use of abductive reasoning for computing provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
APA, Harvard, Vancouver, ISO, and other styles
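The abductive explanations referred to in the entry above are subsets of an instance's feature values that provably entail the prediction. The sketch below shows the idea by brute force over tiny discrete domains with a toy hand-written classifier; real implementations delegate the entailment check to SAT/SMT/MILP reasoners rather than enumerating completions, and the feature names and model here are purely illustrative.

```python
# Minimal sketch of an abductive explanation: a set of fixed feature values is an
# explanation if every completion of the remaining features yields the same prediction.
from itertools import product

domains = {"fever": [0, 1], "cough": [0, 1], "age_group": [0, 1, 2]}
model = lambda f: int(f["fever"] == 1 and f["cough"] == 1)   # toy classifier

def is_explanation(fixed, instance):
    free = [k for k in domains if k not in fixed]
    target = model(instance)
    for values in product(*(domains[k] for k in free)):
        candidate = dict(zip(free, values), **{k: instance[k] for k in fixed})
        if model(candidate) != target:
            return False          # some completion flips the prediction
    return True

def minimal_explanation(instance):
    fixed = set(domains)          # start with every feature value fixed
    for k in list(domains):       # greedily try to drop each one
        if is_explanation(fixed - {k}, instance):
            fixed.remove(k)
    return fixed

print(minimal_explanation({"fever": 1, "cough": 1, "age_group": 2}))  # {'fever', 'cough'}
```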
2

Byrne, Ruth M. J. "Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/876.

Full text
Abstract:
Counterfactuals about what could have happened are increasingly used in an array of Artificial Intelligence (AI) applications, and especially in explainable AI (XAI). Counterfactuals can aid the provision of interpretable models to make the decisions of inscrutable systems intelligible to developers and users. However, not all counterfactuals are equally helpful in assisting human comprehension. Discoveries about the nature of the counterfactuals that humans create are a helpful guide to maximize the effectiveness of counterfactual use in AI.
APA, Harvard, Vancouver, ISO, and other styles
3

Kou, Ziyi, Lanyu Shang, Yang Zhang, Zhenrui Yue, Huimin Zeng, and Dong Wang. "Crowd, Expert & AI: A Human-AI Interactive Approach Towards Natural Language Explanation Based COVID-19 Misinformation Detection." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/706.

Full text
Abstract:
In this paper, we study an explainable COVID-19 misinformation detection problem where the goal is to accurately identify COVID-19 misleading posts on social media and explain the posts with natural language explanations (NLEs). Our problem is motivated by the limitations of current explainable misinformation detection approaches that cannot provide NLEs for COVID-19 posts due to the lack of sufficient professional COVID-19 knowledge for supervision. To address such a limitation, we develop CEA-COVID, a crowd-expert-AI framework that jointly exploits the common logical reasoning ability of online crowd workers and the professional knowledge of COVID-19 experts to effectively generate NLEs for detecting and explaining COVID-19 misinformation. We evaluate CEA-COVID using two public COVID-19 misinformation datasets on social media. Results demonstrate that CEA-COVID outperforms existing explainable misinformation detection models in terms of both explainability and detection accuracy.
APA, Harvard, Vancouver, ISO, and other styles
4

Belle, Vaishak. "Logic meets Probability: Towards Explainable AI Systems for Uncertain Worlds." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/733.

Full text
Abstract:
Logical AI is concerned with formal languages to represent and reason with qualitative specifications; statistical AI is concerned with learning quantitative specifications from data. To combine the strengths of these two camps, there has been exciting recent progress on unifying logic and probability. We review the many guises for this union, while emphasizing the need for a formal language to represent a system's knowledge. Formal languages allow their internal properties to be robustly scrutinized, can be augmented by adding new knowledge, and are amenable to abstractions, all of which are vital to the design of intelligent systems that are explainable and interpretable.
APA, Harvard, Vancouver, ISO, and other styles
5

Clinciu, Miruna-Adriana, and Helen Hastie. "A Survey of Explainable AI Terminology." In Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-8403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sampat, Shailaja. "Technical, Hard and Explainable Question Answering (THE-QA)." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/916.

Full text
Abstract:
The ability of an agent to rationally answer questions about a given task is the key measure of its intelligence. While we have obtained phenomenal performance over various language and vision tasks separately, 'Technical, Hard and Explainable Question Answering' (THE-QA) is a new challenging corpus which addresses them jointly. THE-QA is a question answering task involving diagram understanding and reading comprehension. We plan to establish benchmarks over this new corpus using deep learning models guided by knowledge representation methods. The proposed approach will envisage detailed semantic parsing of technical figures and text, which is robust against diverse formats. It will be aided by a knowledge acquisition and reasoning module that categorizes different knowledge types, identifies sources to acquire that knowledge, and performs reasoning to answer the questions correctly. THE-QA data will present a strong challenge to the community for future research and will bridge the gap between state-of-the-art Artificial Intelligence (AI) and 'Human-level' AI.
APA, Harvard, Vancouver, ISO, and other styles
7

Kapcia, Marcin, Hassan Eshkiki, Jamie Duell, Xiuyi Fan, Shangming Zhou, and Benjamin Mora. "ExMed: An AI Tool for Experimenting Explainable AI Techniques on Medical Data Analytics." In 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021. http://dx.doi.org/10.1109/ictai52525.2021.00134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
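The ExMed entry above is listed without an abstract, so no specifics of the tool are available here. As a generic illustration of the kind of explainability technique such a tool might expose for medical data analytics, the sketch below computes permutation feature importance with scikit-learn; the feature names and data are synthetic placeholders with no clinical meaning.

```python
# Hedged sketch: generic permutation feature importance on synthetic tabular data.
# This is not ExMed's method, only one common model-agnostic explanation technique.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
features = ["age", "bmi", "blood_pressure", "glucose", "cholesterol"]  # placeholder names

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:15s} importance drop when shuffled: {score:.3f}")
```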
8

Yu, Han, Yang Liu, Xiguang Wei, Chuyu Zheng, Tianjian Chen, Qiang Yang, and Xiong Peng. "Fair and Explainable Dynamic Engagement of Crowd Workers." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/961.

Full text
Abstract:
Years of rural-urban migration have resulted in a significant population in China seeking ad-hoc work in large urban centres. At the same time, many businesses face large fluctuations in demand for manpower and require more efficient ways to satisfy such demands. This paper outlines AlgoCrowd, an artificial intelligence (AI)-empowered algorithmic crowdsourcing platform. Equipped with an efficient, explainable task-worker matching optimization approach designed to treat workers fairly while maximizing collective utility, the platform provides explainable task recommendations to workers' increasingly popular personal work-management mobile apps, with the aim of addressing the above societal challenge.
APA, Harvard, Vancouver, ISO, and other styles
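The abstract above mentions a fairness-aware, explainable task-worker matching optimizer but gives no formulation. One generic way to set up such a matching, sketched below, is to maximize total worker-task utility with the Hungarian algorithm after penalizing workers who have already received many assignments, keeping a human-readable reason per match; this is an assumption-laden illustration, not AlgoCrowd's actual method, and the utility matrix is invented.

```python
# Hedged sketch: fairness-adjusted assignment via the Hungarian algorithm.
# Not AlgoCrowd's formulation; numbers are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

utility = np.array([            # utility[w, t]: fit of worker w for task t
    [0.9, 0.4, 0.3],
    [0.8, 0.7, 0.2],
    [0.5, 0.6, 0.9],
])
prior_assignments = np.array([3, 0, 1])   # tasks each worker has already received
fairness_weight = 0.1

# Penalize over-assigned workers so new tasks spread more evenly.
adjusted = utility - fairness_weight * prior_assignments[:, None]

# linear_sum_assignment minimizes cost, so negate to maximize adjusted utility.
rows, cols = linear_sum_assignment(-adjusted)

for w, t in zip(rows, cols):
    print(f"worker {w} -> task {t}: utility={utility[w, t]:.2f}, "
          f"fairness penalty={fairness_weight * prior_assignments[w]:.2f}")
```

Printing the utility and penalty for each match is one simple way to make the recommendation explainable to the affected worker.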
9

Reiter, Ehud. "Natural Language Generation Challenges for Explainable AI." In Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-8402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zheng, Yongqing, Han Yu, Kun Zhang, Yuliang Shi, Cyril Leung, and Chunyan Miao. "Intelligent Decision Support for Improving Power Management." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/965.

Full text
Abstract:
With the development and adoption of the electricity information tracking system in China, real-time electricity consumption big data have become available, enabling artificial intelligence (AI) to help power companies and urban management departments make demand-side management decisions. We demonstrate the Power Intelligent Decision Support (PIDS) platform, which can generate Orderly Power Utilization (OPU) decision recommendations and perform Demand Response (DR) implementation management based on a short-term load forecasting model. It also provides different users with query and application functions to facilitate explainable decision support.
APA, Harvard, Vancouver, ISO, and other styles
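The PIDS demonstration above rests on a short-term load forecasting model whose details are not given in the abstract. As a generic baseline for that kind of task, the sketch below fits an autoregressive linear model over a sliding window of hourly load readings; the synthetic series stands in for real meter data and the window size is an assumption.

```python
# Hedged sketch: a generic sliding-window baseline for short-term load forecasting.
# Not the PIDS model; the hourly series below is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                               # two weeks of hourly data
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

window = 24                                              # predict from the last 24 hours
X = np.array([load[i:i + window] for i in range(len(load) - window)])
y = load[window:]

model = LinearRegression().fit(X[:-24], y[:-24])         # hold out the final day
pred = model.predict(X[-24:])
mae = np.abs(pred - y[-24:]).mean()
print(f"held-out next-day MAE: {mae:.2f} (arbitrary load units)")
```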

Reports on the topic "Artificial Intelligence, Explainable AI"

1

Core, Mark G., H. C. Lane, Michael van Lent, Dave Gomboc, Steve Solomon, and Milton Rosenberg. Building Explainable Artificial Intelligence Systems. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada459166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Phillips, P. Jonathon, Carina A. Hahn, Peter C. Fontana, Amy N. Yates, Kristen Greene, David A. Broniatowski, and Mark A. Przybocki. Four Principles of Explainable Artificial Intelligence. National Institute of Standards and Technology, September 2021. http://dx.doi.org/10.6028/nist.ir.8312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Diaz-Herrera, Jorge L. Artificial Intelligence (AI) and Ada: Integrating AI with Mainstream Software Engineering. Fort Belvoir, VA: Defense Technical Information Center, September 1994. http://dx.doi.org/10.21236/ada286093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Arnold, Zachary, and Ngor Luong. China’s Artificial Intelligence Industry Alliance. Center for Security and Emerging Technology, May 2021. http://dx.doi.org/10.51593/20200094.

Full text
Abstract:
As part of its strategy to achieve global leadership in AI, the Chinese government brings together local governments, academic institutions, and companies to establish collaboration platforms. This data brief examines the role of China’s Artificial Intelligence Industry Alliance in advancing its AI strategy, and the key players in the Chinese AI industry.
APA, Harvard, Vancouver, ISO, and other styles
5

Gillespie, Nicole, Caitlin Curtis, Rossana Bianchi, Ali Akbari, and Rita Fentener van Vlissingen. Achieving Trustworthy AI: A Model for Trustworthy Artificial Intelligence. Australia: The University of Queensland and KPMG, November 2020. http://dx.doi.org/10.14264/ca0819d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Waugh, Gordon W., and Deirdre J. Knapp. Development of an Army Civilian Artificial Intelligence (AI) Specialty. Fort Belvoir, VA: Defense Technical Information Center, November 1997. http://dx.doi.org/10.21236/ada343149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wiley, Cathy J. User's Guide to the (ncar ai) Artificial Intelligence Technical Library. Fort Belvoir, VA: Defense Technical Information Center, June 1991. http://dx.doi.org/10.21236/ada237270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Murdick, Dewey, and Patrick Thomas. Patents and Artificial Intelligence: A Primer. Center for Security and Emerging Technology, September 2020. http://dx.doi.org/10.51593/20200038.

Full text
Abstract:
Patent data can provide insights into the most active countries, fields and organizations in artificial intelligence research. This data brief analyzes worldwide trends in AI patenting to offer metrics on inventive activity.
APA, Harvard, Vancouver, ISO, and other styles
9

Hannas, William, Huey-Meei Chang, Daniel Chou, and Brian Fleeger. China's Advanced AI Research: Monitoring China's Paths to "General" Artificial Intelligence. Center for Security and Emerging Technology, July 2022. http://dx.doi.org/10.51593/20210064.

Full text
Abstract:
China is following a national strategy to lead the world in artificial intelligence by 2030, including by pursuing "general AI" that can act autonomously in novel circumstances. Open-source research identifies 30 Chinese institutions engaged in one or more aspects of this project, including machine learning, brain-inspired AI, and brain-computer interfaces. This report previews a CSET pilot program that will track China's progress and provide timely alerts.
APA, Harvard, Vancouver, ISO, and other styles
10

Baker, James E. Ethics and Artificial Intelligence: A Policymaker's Introduction. Center for Security and Emerging Technology, April 2021. http://dx.doi.org/10.51593/20190022.

Full text
Abstract:
The law plays a vital role in how artificial intelligence can be developed and used in ethical ways. But the law is not enough when it contains gaps due to lack of a federal nexus, interest, or the political will to legislate. And law may be too much if it imposes regulatory rigidity and burdens when flexibility and innovation are required. Sound ethical codes and principles concerning AI can help fill legal gaps. In this paper, CSET Distinguished Fellow James E. Baker offers a primer on the limits and promise of three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.
APA, Harvard, Vancouver, ISO, and other styles