Academic literature on the topic 'Explainable AI Planning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explainable AI Planning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Explainable AI Planning"

1

Sreedharan, Sarath, Anagha Kulkarni, and Subbarao Kambhampati. "Explainable Human–AI Interaction: A Planning Perspective." Synthesis Lectures on Artificial Intelligence and Machine Learning 16, no. 1 (January 24, 2022): 1–184. http://dx.doi.org/10.2200/s01152ed1v01y202111aim050.

2

Nguyen, Van, Tran Cao Son, and William Yeoh. "Explainable Problem in clingo-dl Programs." Proceedings of the International Symposium on Combinatorial Search 12, no. 1 (July 21, 2021): 231–32. http://dx.doi.org/10.1609/socs.v12i1.18593.

Abstract:
Research in explainable planning is becoming increasingly important as human-AI collaborations become more pervasive. An explanation is needed when the planning system’s solution does not match the human’s expectation. In this paper, we introduce the explainability problem in clingo-dl programs (XASP-D) because clingo-dl can effectively work with numerical scheduling, a problem similar to the explainable planning.
3

Maathuis, Clara. "On Explainable AI Solutions for Targeting in Cyber Military Operations." International Conference on Cyber Warfare and Security 17, no. 1 (March 2, 2022): 166–75. http://dx.doi.org/10.34190/iccws.17.1.38.

Abstract:
Nowadays, it is hard to recall a domain, system, or problem that does not use, embed, or could be tackled through AI. From early stages of its development, its techniques and technologies were successfully implemented by military forces for different purposes in distinct military operations. Since cyberspace represents the last officially recognized operational battlefield, it also offers a direct virtual setting for implementing AI solutions for military operations conducted inside or through it. However, planning and conducting AI-based cyber military operations are actions still in the beginning of development. Thus, both practitioner and academic dedication is required since the impact of their use could have significant consequences which requires that the output of such intelligent solutions is explainable to the engineers developing them and also to their users e.g., military decision makers. Hence, this article starts by discussing the meaning of explainable AI in the context of targeting in military cyber operations, continues by analyzing the challenges of embedding AI solutions (e.g., intelligent cyber weapons) in different targeting phases, and is structuring them in corresponding taxonomies packaged in a design framework. It does that by crossing the targeting process focusing on target development, capability analysis, and target engagement. Moreover, this research argues that especially in such operations carried out in silence and at incredible speed, it is of major importance that the military forces involved are aware of the following. First, the decisions taken by the intelligent systems embedded. Second, are not only aware, but also able to interpret the results obtained from the AI solutions in a proper, effective, and efficient way. From there, this research draws possible technological and human-oriented methods that facilitate the successful implementation of XAI solutions for targeting in military cyber operations.
4

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
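To make concrete how a gradient-boosted tree model becomes an XAI model when coupled with SHAP, as this abstract describes, here is a minimal Python sketch on synthetic data. The data, model settings, and feature count are illustrative assumptions, not taken from the review.

```python
import numpy as np
import shap
import xgboost as xgb

# Synthetic stand-in data: 4 hypothetical predictors, one numeric target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# Gradient-boosted trees as the predictive model.
model = xgb.XGBRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)

# SHAP decomposes each prediction into per-feature contributions,
# turning the fitted black-box model into an explainable one.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature serves as a global importance ranking.
print(np.abs(shap_values).mean(axis=0))
```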
5

Cox, Louis. "Information Structures for Causally Explainable Decisions." Entropy 23, no. 5 (May 13, 2021): 601. http://dx.doi.org/10.3390/e23050601.

Abstract:
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.
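The value-of-information (VoI) concept this abstract builds on can be illustrated with a tiny worked example. The numbers below are invented, not from the paper; the sketch only shows the standard expected-utility calculation for perfect information.

```python
import numpy as np

# Hypothetical two-state, two-action decision problem. Rows = actions, columns = states.
p = np.array([0.3, 0.7])            # P(rain), P(dry)
U = np.array([[0.0, -1.0],          # carry umbrella: fine in rain, mild cost if dry
              [-10.0, 0.0]])        # leave umbrella: bad in rain, fine if dry

# Best expected utility acting on the prior alone.
ev_without_info = (U @ p).max()               # -0.7 (carry the umbrella)

# With perfect information, pick the best action per state, then average over states.
ev_with_info = (U.max(axis=0) * p).sum()      # 0.0

# Value of (perfect) information: how much observing the state is worth.
voi = ev_with_info - ev_without_info          # 0.7
print(ev_without_info, ev_with_info, voi)
```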
6

Snidaro, Lauro, Jesús García Herrero, James Llinas, and Erik Blasch. "Recent Trends in Context Exploitation for Information Fusion and AI." AI Magazine 40, no. 3 (September 30, 2019): 14–27. http://dx.doi.org/10.1609/aimag.v40i3.2864.

Abstract:
AI is related to information fusion (IF). Many methods in AI that use perception and reasoning align to the functionalities of high-level IF (HLIF) operations that estimate situational and impact states. To achieve HLIF sensor, user, and mission management operations, AI elements of planning, control, and knowledge representation are needed. Both AI reasoning and IF inferencing and estimation exploit context as a basis for achieving deeper levels of understanding of complex world conditions. Open challenges for AI researchers include achieving concept generalization, response adaptation, and situation assessment. This article presents a brief survey of recent and current research on the exploitation of context in IF and discusses the interplay and similarities between IF, context exploitation, and AI. In addition, it highlights the role that contextual information can provide in the next generation of adaptive intelligent systems based on explainable AI. The article describes terminology, addresses notional processing concepts, and lists references for readers to follow up and explore ideas offered herein.
7

Madni, Hamza Ahmad, Muhammad Umer, Abid Ishaq, Nihal Abuzinadah, Oumaima Saidani, Shtwai Alsubai, Monia Hamdi, and Imran Ashraf. "Water-Quality Prediction Based on H2O AutoML and Explainable AI Techniques." Water 15, no. 3 (January 25, 2023): 475. http://dx.doi.org/10.3390/w15030475.

Abstract:
Rapid expansion of the world’s population has negatively impacted the environment, notably water quality. As a result, water-quality prediction has arisen as a hot issue during the last decade. Existing techniques fall short in terms of good accuracy. Furthermore, presently, the dataset available for analysis contains missing values; these missing values have a significant effect on the performance of the classifiers. An automated system for water-quality prediction that deals with the missing values efficiently and achieves good accuracy for water-quality prediction is proposed in this study. To handle the accuracy problem, this study makes use of the stacked ensemble H2O AutoML model; to handle the missing values, this study makes use of the KNN imputer. Moreover, the performance of the proposed system is compared to that of seven machine learning algorithms. Experiments are performed in two scenarios: removing missing values and using the KNN imputer. The contribution of each feature regarding prediction is explained using SHAP (SHapley Additive exPlanations). Results reveal that the proposed stacked model outperforms other models with 97% accuracy, 96% precision, 99% recall, and 98% F1-score for water-quality prediction.
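The KNN-imputation step this abstract highlights can be sketched in a few lines. The example below uses scikit-learn's KNNImputer on synthetic data with a random forest standing in for the H2O AutoML stacked ensemble; the data, label rule, and missing-value rate are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a water-quality table with missing entries.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)       # invented binary quality label
X[rng.random(X.shape) < 0.1] = np.nan               # inject ~10% missing values

# KNN imputation: each missing cell is filled from its 5 nearest rows.
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_imputed, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```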
8

Temenos, Anastasios, Ioannis N. Tzortzis, Maria Kaselimi, Ioannis Rallis, Anastasios Doulamis, and Nikolaos Doulamis. "Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI) and Remote Sensing." Remote Sensing 14, no. 13 (June 26, 2022): 3074. http://dx.doi.org/10.3390/rs14133074.

Abstract:
The COVID-19 pandemic has affected many aspects of human life around the world, due to its tremendous outcomes on public health and socio-economic activities. Policy makers have tried to develop efficient responses based on technologies and advanced pandemic control methodologies, to limit the wide spreading of the virus in urban areas. However, techniques such as social isolation and lockdown are short-term solutions that minimize the spread of the pandemic in cities and do not invert long-term issues that derive from climate change, air pollution and urban planning challenges that enhance the spreading ability. Thus, it seems crucial to understand what kind of factors assist or prevent the wide spreading of the virus. Although AI frameworks have a very efficient predictive ability as data-driven procedures, they often struggle to identify strong correlations among multidimensional data and provide robust explanations. In this paper, we propose the fusion of a heterogeneous, spatio-temporal dataset that combine data from eight European cities spanning from 1 January 2020 to 31 December 2021 and describe atmospheric, socio-economic, health, mobility and environmental factors all related to potential links with COVID-19. Remote sensing data are the key solution to monitor the availability on public green spaces between cities in the study period. So, we evaluate the benefits of NIR and RED bands of satellite images to calculate the NDVI and locate the percentage in vegetation cover on each city for each week of our 2-year study. This novel dataset is evaluated by a tree-based machine learning algorithm that utilizes ensemble learning and is trained to make robust predictions on daily cases and deaths. Comparisons with other machine learning techniques justify its robustness on the regression metrics RMSE and MAE. Furthermore, the explainable frameworks SHAP and LIME are utilized to locate potential positive or negative influence of the factors on global and local level, with respect to our model’s predictive ability. A variation of SHAP, namely treeSHAP, is utilized for our tree-based algorithm to make fast and accurate explanations.
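The NDVI calculation mentioned in this abstract follows the standard formula NDVI = (NIR − RED) / (NIR + RED). The sketch below applies it to hypothetical reflectance patches; the 3×3 arrays and the 0.3 vegetation cutoff are illustrative, not values from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical 3x3 reflectance patches standing in for the satellite bands.
nir = np.array([[0.6, 0.5, 0.4], [0.7, 0.3, 0.2], [0.8, 0.6, 0.5]])
red = np.array([[0.1, 0.2, 0.2], [0.1, 0.2, 0.1], [0.2, 0.1, 0.1]])

v = ndvi(nir, red)
vegetation_share = (v > 0.3).mean()   # share of pixels above a common vegetation cutoff
print(np.round(v, 2), vegetation_share)
```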
9

Wongburi, Praewa, and Jae K. Park. "Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network." Sustainability 14, no. 10 (May 21, 2022): 6276. http://dx.doi.org/10.3390/su14106276.

Abstract:
Sludge Volume Index (SVI) is one of the most important operational parameters in an activated sludge process. It is difficult to predict SVI because of the nonlinearity of data and variability operation conditions. With complex time-series data from Wastewater Treatment Plants (WWTPs), the Recurrent Neural Network (RNN) with an Explainable Artificial Intelligence was applied to predict SVI and interpret the prediction result. RNN architecture has been proven to efficiently handle time-series and non-uniformity data. Moreover, due to the complexity of the model, the newly Explainable Artificial Intelligence concept was used to interpret the result. Data were collected from the Nine Springs Wastewater Treatment Plant, Madison, Wisconsin, and the data were analyzed and cleaned using Python program and data analytics approaches. An RNN model predicted SVI accurately after training with historical big data collected at the Nine Spring WWTP. The Explainable Artificial Intelligence (AI) analysis was able to determine which input parameters affected higher SVI most. The prediction of SVI will benefit WWTPs to establish corrective measures to maintaining stable SVI. The SVI prediction model and Explainable Artificial Intelligence method will help the wastewater treatment sector to improve operational performance, system management, and process reliability.
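As a rough illustration of the recurrent-network regression setup this abstract describes, here is a minimal PyTorch LSTM sketch on random tensors. The layer sizes, window length, and feature count are assumptions for illustration only, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Minimal LSTM regressor for a time-series target such as SVI.
class SVIRegressor(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # regress from the last time step

x = torch.randn(64, 30, 8)                 # 64 windows of 30 days, 8 plant variables
y = torch.randn(64, 1)                     # synthetic SVI targets

model = SVIRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                         # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())
```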
10

Renda, Alessandro, Pietro Ducange, Francesco Marcelloni, Dario Sabella, Miltiadis C. Filippou, Giovanni Nardini, Giovanni Stea, et al. "Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking." Information 13, no. 8 (August 20, 2022): 395. http://dx.doi.org/10.3390/info13080395.

Abstract:
This article presents the concept of federated learning (FL) of eXplainable Artificial Intelligence (XAI) models as an enabling technology in advanced 5G towards 6G systems and discusses its applicability to the automated vehicle networking use case. Although the FL of neural networks has been widely investigated exploiting variants of stochastic gradient descent as the optimization method, it has not yet been adequately studied in the context of inherently explainable models. On the one side, XAI permits improving user experience of the offered communication services by helping end users trust (by design) that in-network AI functionality issues appropriate action recommendations. On the other side, FL ensures security and privacy of both vehicular and user data across the whole system. These desiderata are often ignored in existing AI-based solutions for wireless network planning, design and operation. In this perspective, the article provides a detailed description of relevant 6G use cases, with a focus on vehicle-to-everything (V2X) environments: we describe a framework to evaluate the proposed approach involving online training based on real data from live networks. FL of XAI models is expected to bring benefits as a methodology for achieving seamless availability of decentralized, lightweight and communication efficient intelligence. Impacts of the proposed approach (including standardization perspectives) consist in a better trustworthiness of operations, e.g., via explainability of quality of experience (QoE) predictions, along with security and privacy-preserving management of data from sensors, terminals, users and applications.
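The federated-learning aggregation step underlying this abstract can be sketched with plain federated averaging (FedAvg): each client trains locally and the server averages parameters weighted by local sample counts. The toy clients, model shapes, and sample sizes below are invented for illustration and are not the article's framework.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Federated averaging: weight each client's parameter tensors by its data size."""
    total = sum(client_sizes)
    n_tensors = len(client_params[0])
    return [
        sum(params[i] * (size / total) for params, size in zip(client_params, client_sizes))
        for i in range(n_tensors)
    ]

# Three hypothetical vehicles, each holding a tiny local model (weights, bias).
clients = [
    [np.array([0.9, 1.1]), np.array([0.1])],
    [np.array([1.0, 0.8]), np.array([0.0])],
    [np.array([1.2, 1.0]), np.array([0.2])],
]
sizes = [100, 400, 500]                 # local sample counts

global_params = fedavg(clients, sizes)  # the aggregated "global" model
print(global_params)
```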

Dissertations / Theses on the topic "Explainable AI Planning"

1

"Foundations of Human-Aware Planning -- A Tale of Three Models." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.51791.

Abstract:
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" – i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning which typically use the human task model alone and thus enables a new suite of capabilities of a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
Doctoral dissertation, Computer Science, 2018.

Books on the topic "Explainable AI Planning"

1

Sreedharan, Sarath, and Anagha Kulkarni. Explainable Human-AI Interaction: A Planning Perspective. Springer International Publishing AG, 2022.

2

Sreedharan, Sarath, Anagha Kulkarni, and Subbarao Kambhampati. Explainable Human-AI Interaction: A Planning Perspective. Morgan & Claypool, 2022.

3

Sreedharan, Sarath, Anagha Kulkarni, and Subbarao Kambhampati. Explainable Human-AI Interaction: A Planning Perspective. Morgan & Claypool Publishers, 2022.


Book chapters on the topic "Explainable AI Planning"

1

Murray, Andrew, Benjamin Krarup, and Michael Cashmore. "Towards Temporally Uncertain Explainable AI Planning." In Lecture Notes in Computer Science, 45–59. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94876-4_3.

2

Hoffmann, Jörg, and Daniele Magazzeni. "Explainable AI Planning (XAIP): Overview and the Case of Contrastive Explanation (Extended Abstract)." In Reasoning Web. Explainable Artificial Intelligence, 277–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31423-1_9.


Conference papers on the topic "Explainable AI Planning"

1

Chakraborti, Tathagata, Kshitij P. Fadnis, Kartik Talamadupula, Mishal Dholakia, Biplav Srivastava, Jeffrey O. Kephart, and Rachel K. E. Bellamy. "Visualizations for an Explainable Planning Agent." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/849.

Abstract:
In this demonstration, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision-making. Imposing transparency and explainability requirements on such agents is crucial for establishing human trust and common ground with an end-to-end automated planning system. Visualizing the agent's internal decision making processes is a crucial step towards achieving this. This may include externalizing the "brain" of the agent: starting from its sensory inputs, to progressively higher order decisions made by it in order to drive its planning components. We demonstrate these functionalities in the context of a smart assistant in the Cognitive Environments Laboratory at IBM's T.J. Watson Research Center.
2

Chakraborti, Tathagata, Sarath Sreedharan, and Subbarao Kambhampati. "The Emerging Landscape of Explainable Automated Planning & Decision Making." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/669.

Abstract:
In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP) that has emerged as a focus area in the last couple of years and contrast that with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms. We hope that the survey will provide guidance to new researchers in automated planning towards the role of explanations in the effective design of human-in-the-loop systems, as well as provide the established researcher with some perspective on the evolution of the exciting world of explainable planning.
