Journal articles on the topic 'Explainable AI Planning'

Consult the top 27 journal articles for your research on the topic 'Explainable AI Planning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Sreedharan, Sarath, Anagha Kulkarni, and Subbarao Kambhampati. "Explainable Human–AI Interaction: A Planning Perspective." Synthesis Lectures on Artificial Intelligence and Machine Learning 16, no. 1 (January 24, 2022): 1–184. http://dx.doi.org/10.2200/s01152ed1v01y202111aim050.

2

Nguyen, Van, Tran Cao Son, and William Yeoh. "Explainable Problem in clingo-dl Programs." Proceedings of the International Symposium on Combinatorial Search 12, no. 1 (July 21, 2021): 231–32. http://dx.doi.org/10.1609/socs.v12i1.18593.

Abstract:
Research in explainable planning is becoming increasingly important as human-AI collaborations become more pervasive. An explanation is needed when the planning system’s solution does not match the human’s expectation. In this paper, we introduce the explainability problem in clingo-dl programs (XASP-D), because clingo-dl can effectively work with numerical scheduling, a problem similar to explainable planning.
3

Maathuis, Clara. "On Explainable AI Solutions for Targeting in Cyber Military Operations." International Conference on Cyber Warfare and Security 17, no. 1 (March 2, 2022): 166–75. http://dx.doi.org/10.34190/iccws.17.1.38.

Abstract:
Nowadays, it is hard to recall a domain, system, or problem that does not use, embed, or could be tackled through AI. From the early stages of its development, its techniques and technologies were successfully implemented by military forces for different purposes in distinct military operations. Since cyberspace represents the last officially recognized operational battlefield, it also offers a direct virtual setting for implementing AI solutions for military operations conducted inside or through it. However, planning and conducting AI-based cyber military operations are still in the early stages of development. Thus, both practitioner and academic dedication is required, since the impact of their use could have significant consequences, which requires that the output of such intelligent solutions is explainable to the engineers developing them and also to their users, e.g., military decision makers. Hence, this article starts by discussing the meaning of explainable AI in the context of targeting in military cyber operations, continues by analyzing the challenges of embedding AI solutions (e.g., intelligent cyber weapons) in different targeting phases, and structures them in corresponding taxonomies packaged in a design framework. It does so by crossing the targeting process with a focus on target development, capability analysis, and target engagement. Moreover, this research argues that, especially in such operations carried out in silence and at incredible speed, it is of major importance that the military forces involved are aware of the decisions taken by the intelligent systems embedded, and are not only aware of them but also able to interpret the results obtained from the AI solutions in a proper, effective, and efficient way. From there, this research draws possible technological and human-oriented methods that facilitate the successful implementation of XAI solutions for targeting in military cyber operations.
4

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
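As a rough illustration of the coupling described above, the sketch below trains a gradient-boosted tree ensemble and attaches SHAP explanations to it. The synthetic data, feature count, and model settings are assumptions for illustration only and are not taken from the review.

```python
# Hedged sketch: turning a tree-based model into an XAI model by coupling it with SHAP.
# Data and hyperparameters are illustrative placeholders, not from the reviewed studies.
import numpy as np
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # four synthetic hydroclimatic predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)         # Shapley additive explanations for tree ensembles
shap_values = explainer.shap_values(X)        # per-sample, per-feature attributions
print(shap_values.shape)                      # (500, 4)
```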
5

Cox, Louis. "Information Structures for Causally Explainable Decisions." Entropy 23, no. 5 (May 13, 2021): 601. http://dx.doi.org/10.3390/e23050601.

Abstract:
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.
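To make the "standard ingredients" of decision-making under uncertainty concrete, here is a minimal worked sketch of expected utility and value of information for a toy two-action, two-state decision; all probabilities and utilities are invented for illustration and do not come from the paper.

```python
# Minimal sketch: expected utility and value of perfect information for a toy decision.
# All probabilities and utilities below are invented illustrative numbers.
p_state = {"good": 0.7, "bad": 0.3}                     # belief over the outcome-relevant state
utility = {                                             # utility of each (action, state) pair
    ("act", "good"): 100, ("act", "bad"): -50,
    ("wait", "good"): 20,  ("wait", "bad"): 10,
}
actions = ["act", "wait"]

def expected_utility(action, belief):
    return sum(belief[s] * utility[(action, s)] for s in belief)

best_action = max(actions, key=lambda a: expected_utility(a, p_state))
eu_prior = expected_utility(best_action, p_state)       # "act": 0.7*100 + 0.3*(-50) = 55

# With perfect information we learn the state first, then pick the best action per state.
eu_perfect = sum(p_state[s] * max(utility[(a, s)] for a in actions) for s in p_state)  # 73
voi = eu_perfect - eu_prior                             # value of perfect information = 18
print(best_action, eu_prior, eu_perfect, voi)
```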
6

Snidaro, Lauro, Jesús García Herrero, James Llinas, and Erik Blasch. "Recent Trends in Context Exploitation for Information Fusion and AI." AI Magazine 40, no. 3 (September 30, 2019): 14–27. http://dx.doi.org/10.1609/aimag.v40i3.2864.

Abstract:
AI is related to information fusion (IF). Many methods in AI that use perception and reasoning align to the functionalities of high-level IF (HLIF) operations that estimate situational and impact states. To achieve HLIF sensor, user, and mission management operations, AI elements of planning, control, and knowledge representation are needed. Both AI reasoning and IF inferencing and estimation exploit context as a basis for achieving deeper levels of understanding of complex world conditions. Open challenges for AI researchers include achieving concept generalization, response adaptation, and situation assessment. This article presents a brief survey of recent and current research on the exploitation of context in IF and discusses the interplay and similarities between IF, context exploitation, and AI. In addition, it highlights the role that contextual information can provide in the next generation of adaptive intelligent systems based on explainable AI. The article describes terminology, addresses notional processing concepts, and lists references for readers to follow up and explore ideas offered herein.
7

Madni, Hamza Ahmad, Muhammad Umer, Abid Ishaq, Nihal Abuzinadah, Oumaima Saidani, Shtwai Alsubai, Monia Hamdi, and Imran Ashraf. "Water-Quality Prediction Based on H2O AutoML and Explainable AI Techniques." Water 15, no. 3 (January 25, 2023): 475. http://dx.doi.org/10.3390/w15030475.

Abstract:
Rapid expansion of the world’s population has negatively impacted the environment, notably water quality. As a result, water-quality prediction has arisen as a hot issue during the last decade. Existing techniques fall short in terms of good accuracy. Furthermore, presently, the dataset available for analysis contains missing values; these missing values have a significant effect on the performance of the classifiers. An automated system for water-quality prediction that deals with the missing values efficiently and achieves good accuracy for water-quality prediction is proposed in this study. To handle the accuracy problem, this study makes use of the stacked ensemble H2O AutoML model; to handle the missing values, this study makes use of the KNN imputer. Moreover, the performance of the proposed system is compared to that of seven machine learning algorithms. Experiments are performed in two scenarios: removing missing values and using the KNN imputer. The contribution of each feature regarding prediction is explained using SHAP (SHapley Additive exPlanations). Results reveal that the proposed stacked model outperforms other models with 97% accuracy, 96% precision, 99% recall, and 98% F1-score for water-quality prediction.
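The missing-value step described above can be sketched with scikit-learn's KNN imputer; the feature names and neighbour count below are assumptions for illustration, and the downstream stacked H2O AutoML model and SHAP analysis are not reproduced here.

```python
# Hedged sketch of the preprocessing step: filling missing water-quality readings with a KNN imputer.
# Feature names, values, and the number of neighbours are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

data = pd.DataFrame({
    "ph":        [7.1, np.nan, 6.8, 7.4],
    "turbidity": [3.2, 4.1, np.nan, 3.9],
    "solids":    [210.0, 198.0, 225.0, np.nan],
})

imputer = KNNImputer(n_neighbors=2)            # fill each gap from the 2 most similar samples
imputed = pd.DataFrame(imputer.fit_transform(data), columns=data.columns)
print(imputed)                                 # complete table, ready for the downstream classifier
```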
8

Temenos, Anastasios, Ioannis N. Tzortzis, Maria Kaselimi, Ioannis Rallis, Anastasios Doulamis, and Nikolaos Doulamis. "Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI) and Remote Sensing." Remote Sensing 14, no. 13 (June 26, 2022): 3074. http://dx.doi.org/10.3390/rs14133074.

Abstract:
The COVID-19 pandemic has affected many aspects of human life around the world, due to its tremendous outcomes on public health and socio-economic activities. Policy makers have tried to develop efficient responses based on technologies and advanced pandemic control methodologies, to limit the wide spreading of the virus in urban areas. However, techniques such as social isolation and lockdown are short-term solutions that minimize the spread of the pandemic in cities and do not invert long-term issues that derive from climate change, air pollution and urban planning challenges that enhance the spreading ability. Thus, it seems crucial to understand what kind of factors assist or prevent the wide spreading of the virus. Although AI frameworks have a very efficient predictive ability as data-driven procedures, they often struggle to identify strong correlations among multidimensional data and provide robust explanations. In this paper, we propose the fusion of a heterogeneous, spatio-temporal dataset that combines data from eight European cities spanning from 1 January 2020 to 31 December 2021 and describes atmospheric, socio-economic, health, mobility and environmental factors, all related to potential links with COVID-19. Remote sensing data are the key solution to monitoring the availability of public green spaces across cities in the study period. So, we evaluate the benefits of the NIR and RED bands of satellite images to calculate the NDVI and locate the percentage of vegetation cover in each city for each week of our 2-year study. This novel dataset is evaluated by a tree-based machine learning algorithm that utilizes ensemble learning and is trained to make robust predictions on daily cases and deaths. Comparisons with other machine learning techniques justify its robustness on the regression metrics RMSE and MAE. Furthermore, the explainable frameworks SHAP and LIME are utilized to locate potential positive or negative influence of the factors on a global and local level, with respect to our model’s predictive ability. A variation of SHAP, namely treeSHAP, is utilized for our tree-based algorithm to make fast and accurate explanations.
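The vegetation indicator used above, NDVI, is computed from the RED and NIR bands as (NIR − RED) / (NIR + RED); the short numpy sketch below uses made-up reflectance values and an assumed vegetation threshold purely for illustration.

```python
# Minimal sketch: NDVI = (NIR - RED) / (NIR + RED) from two satellite bands.
# The reflectance arrays and the 0.4 vegetation threshold are made-up illustrative values.
import numpy as np

nir = np.array([[0.60, 0.55], [0.20, 0.35]])   # near-infrared reflectance per pixel
red = np.array([[0.10, 0.12], [0.15, 0.25]])   # red-band reflectance per pixel

ndvi = (nir - red) / (nir + red)
vegetation_cover = float((ndvi > 0.4).mean())  # share of pixels counted as vegetated
print(ndvi.round(2), vegetation_cover)
```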
9

Wongburi, Praewa, and Jae K. Park. "Prediction of Sludge Volume Index in a Wastewater Treatment Plant Using Recurrent Neural Network." Sustainability 14, no. 10 (May 21, 2022): 6276. http://dx.doi.org/10.3390/su14106276.

Abstract:
Sludge Volume Index (SVI) is one of the most important operational parameters in an activated sludge process. It is difficult to predict SVI because of the nonlinearity of the data and variable operating conditions. With complex time-series data from Wastewater Treatment Plants (WWTPs), a Recurrent Neural Network (RNN) with Explainable Artificial Intelligence was applied to predict SVI and interpret the prediction result. The RNN architecture has been proven to efficiently handle time-series and non-uniform data. Moreover, due to the complexity of the model, the relatively new Explainable Artificial Intelligence concept was used to interpret the result. Data were collected from the Nine Springs Wastewater Treatment Plant, Madison, Wisconsin, and were analyzed and cleaned using a Python program and data analytics approaches. An RNN model predicted SVI accurately after training with historical big data collected at the Nine Springs WWTP. The Explainable Artificial Intelligence (AI) analysis was able to determine which input parameters contributed most to higher SVI. The prediction of SVI will help WWTPs establish corrective measures to maintain a stable SVI. The SVI prediction model and Explainable Artificial Intelligence method will help the wastewater treatment sector improve operational performance, system management, and process reliability.
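A minimal recurrent model for a time-series target such as SVI could look like the Keras sketch below; the window length, layer size, feature count, and random data are assumptions for illustration, not the configuration or data used in the study.

```python
# Hedged sketch: a small recurrent network predicting the next SVI value from a window of
# past plant measurements. Shapes, hyperparameters, and data are illustrative assumptions.
import numpy as np
import tensorflow as tf

window, n_features = 30, 8                            # 30 past time steps, 8 process variables
X = np.random.rand(256, window, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")          # next-step SVI (placeholder values)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),                         # recurrent layer over the time window
    tf.keras.layers.Dense(1),                         # regression output: predicted SVI
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).shape)          # (1, 1)
```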
10

Renda, Alessandro, Pietro Ducange, Francesco Marcelloni, Dario Sabella, Miltiadis C. Filippou, Giovanni Nardini, Giovanni Stea, et al. "Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking." Information 13, no. 8 (August 20, 2022): 395. http://dx.doi.org/10.3390/info13080395.

Abstract:
This article presents the concept of federated learning (FL) of eXplainable Artificial Intelligence (XAI) models as an enabling technology in advanced 5G towards 6G systems and discusses its applicability to the automated vehicle networking use case. Although the FL of neural networks has been widely investigated exploiting variants of stochastic gradient descent as the optimization method, it has not yet been adequately studied in the context of inherently explainable models. On the one side, XAI permits improving user experience of the offered communication services by helping end users trust (by design) that in-network AI functionality issues appropriate action recommendations. On the other side, FL ensures security and privacy of both vehicular and user data across the whole system. These desiderata are often ignored in existing AI-based solutions for wireless network planning, design and operation. In this perspective, the article provides a detailed description of relevant 6G use cases, with a focus on vehicle-to-everything (V2X) environments: we describe a framework to evaluate the proposed approach involving online training based on real data from live networks. FL of XAI models is expected to bring benefits as a methodology for achieving seamless availability of decentralized, lightweight and communication efficient intelligence. Impacts of the proposed approach (including standardization perspectives) consist in a better trustworthiness of operations, e.g., via explainability of quality of experience (QoE) predictions, along with security and privacy-preserving management of data from sensors, terminals, users and applications.
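Federated learning in this setting keeps vehicular and user data on each node and shares only model parameters; the bare-bones FedAvg-style aggregation round below, over locally trained linear models, is an illustrative sketch under these assumptions and is not the paper's actual protocol.

```python
# Bare-bones sketch of a FedAvg-style round: each client fits a model on its local data,
# and only the fitted coefficients (never the raw data) are averaged by the server.
# Clients, data, and the choice of a linear model are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):                                    # four simulated edge nodes / vehicles
    X_local = rng.normal(size=(100, 3))
    y_local = X_local @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X_local, y_local))

local_params, local_sizes = [], []
for X_local, y_local in clients:                      # local training; data never leaves the client
    m = LinearRegression().fit(X_local, y_local)
    local_params.append(np.append(m.coef_, m.intercept_))
    local_sizes.append(len(y_local))

weights = np.array(local_sizes) / sum(local_sizes)    # weight clients by dataset size
global_params = np.average(local_params, axis=0, weights=weights)
print(global_params.round(2))                         # aggregated "global" model parameters
```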
11

Oh, Hoonseong, and Sangmin Lee. "Evaluation and Interpretation of Tourist Satisfaction for Local Korean Festivals Using Explainable AI." Sustainability 13, no. 19 (September 30, 2021): 10901. http://dx.doi.org/10.3390/su131910901.

Abstract:
In this paper, we propose using explainable artificial intelligence (XAI) techniques to predict and interpret the effects of local festival components on tourist satisfaction. We use data-driven analytics, including prediction, interpretation, and utilization phases, to help festivals establish a tourism strategy. Ultimately, this study aims to identify the most significant variables in local tourism strategy and to predict tourist satisfaction. To do so, we conducted an experimental study to compare the prediction accuracy of representative predictive algorithms. We then built a surrogate model based on a game theory-based framework, known as SHapley Additive exPlanations (SHAP), to understand the prediction results and to obtain insight into how tourist satisfaction with local festivals can be improved. Tourist data were collected from local festivals in South Korea over a period of 12 years. We conclude that the proposed predictive and interpretable strategy can identify the strengths and weaknesses of each local festival, allowing festival planners and administrators to enhance their tourist satisfaction rates by addressing the identified weaknesses.
12

Sreedharan, Sarath, Tathagata Chakraborti, Christian Muise, Yasaman Khazaeni, and Subbarao Kambhampati. "D3WA+ – A Case Study of XAIP in a Model Acquisition Task for Dialogue Planning." Proceedings of the International Conference on Automated Planning and Scheduling 30 (June 1, 2020): 488–97. http://dx.doi.org/10.1609/icaps.v30i1.6744.

Abstract:
Recently, the D3WA system was proposed as a paradigm shift in how complex goal-oriented dialogue agents can be specified by taking a declarative view of design. However, it turns out actual users of the system have a hard time evolving their mental model and grasping the imperative consequences of declarative design. In this paper, we adopt ideas from existing works in the field of Explainable AI Planning (XAIP) to provide guidance to the dialogue designer during the model acquisition process. We will highlight in the course of this discussion how the setting presents unique challenges to the XAIP setting, including having to deal with the user persona of a domain modeler rather than the end-user of the system, and consequently having to deal with the unsolvability of models in addition to explaining generated plans. Quickview: http://ibm.biz/d3wa-xaip
13

Karthik, Valmeekam, Sarath Sreedharan, Sailik Sengupta, and Subbarao Kambhampati. "RADAR-X: An Interactive Interface Pairing Contrastive Explanations with Revised Plan Suggestions." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 16051–53. http://dx.doi.org/10.1609/aaai.v35i18.18009.

Abstract:
Automated Planning techniques can be leveraged to build effective decision support systems that assist the human-in-the-loop. Such systems must provide intuitive explanations when the suggestions made by these systems seem inexplicable to the human. In this regard, we consider scenarios where the user questions the system's suggestion by providing alternatives (referred to as foils). In response, we empower existing decision support technologies to engage in an interactive explanatory dialogue with the user and provide contrastive explanations based on user-specified foils to reach a consensus on proposed decisions. To provide contrastive explanations, we adapt existing techniques in Explainable AI Planning (XAIP). Furthermore, we use this dialog to elicit the user's latent preferences and propose three modes of interaction that use these preferences to provide revised plan suggestions. Finally, we showcase a decision support system that provides all these capabilities.
14

Sachit, Mourtadha Sarhan, Helmi Zulhaidi Mohd Shafri, Ahmad Fikri Abdullah, Azmin Shakrine Mohd Rafie, and Mohamed Barakat A. Gibril. "Global Spatial Suitability Mapping of Wind and Solar Systems Using an Explainable AI-Based Approach." ISPRS International Journal of Geo-Information 11, no. 8 (July 26, 2022): 422. http://dx.doi.org/10.3390/ijgi11080422.

Abstract:
An assessment of site suitability for wind and solar plants is a strategic step toward ensuring a low-cost, high-performing, and sustainable project. However, these issues are often handled on a local scale using traditional decision-making approaches that involve biased and non-generalizable weightings. This study presents a global wind and solar mapping approach based on eXplainable Artificial Intelligence (XAI). To the best of the author’s knowledge, the current study is the first attempt to create global maps for siting onshore wind and solar power systems and formulate novel weights for decision criteria. A total of 13 conditioning factors (independent variables) defined through a comprehensive literature review and multicollinearity analysis were assessed. Real-world renewable energy experiences (more than 55,000 on-site wind and solar plants worldwide) are exploited to train three machine learning (ML) algorithms, namely Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP). Then, the output of ML models was explained using SHapley Additive exPlanations (SHAP). RF outperformed SVM and MLP in both wind and solar modeling with an overall accuracy of 90% and 89%, kappa coefficient of 0.79 and 0.78, and area under the curve of 0.96 and 0.95, respectively. The high and very high suitability categories accounted for 23.2% (~26.84 million km2) of the site suitability map for wind power plants. In addition, they covered more encouraging areas (24.0% and 19.4%, respectively, equivalent to ~50.31 million km2) on the global map for hosting solar energy farms. SHAP interpretations were consistent with the Gini index indicating the dominance of the weights of technical and economic factors over the spatial assessment under consideration. This study provides support to decision-makers toward sustainable power planning worldwide.
15

Chew, Alvin Wei Ze, and Limao Zhang. "Data-driven multiscale modelling and analysis of COVID-19 spatiotemporal evolution using explainable AI." Sustainable Cities and Society 80 (May 2022): 103772. http://dx.doi.org/10.1016/j.scs.2022.103772.

16

Schwendicke, F., W. Samek, and J. Krois. "Artificial Intelligence in Dentistry: Chances and Challenges." Journal of Dental Research 99, no. 7 (April 21, 2020): 769–74. http://dx.doi.org/10.1177/0022034520915714.

Abstract:
The term “artificial intelligence” (AI) refers to the idea of machines being capable of performing human tasks. A subdomain of AI is machine learning (ML), which “learns” intrinsic statistical patterns in data to eventually cast predictions on unseen data. Deep learning is an ML technique using multi-layer mathematical operations for learning and inferring on complex data like imagery. This succinct narrative review describes the application, limitations and possible future of AI-based dental diagnostics, treatment planning, and conduct, for example, image analysis, prediction making, record keeping, as well as dental research and discovery. AI-based applications will streamline care, relieving the dental workforce from laborious routine tasks, increasing health at lower costs for a broader population, and eventually facilitate personalized, predictive, preventive, and participatory dentistry. However, AI solutions have not, by and large, entered routine dental practice, mainly due to 1) limited data availability, accessibility, structure, and comprehensiveness, 2) lacking methodological rigor and standards in their development, and 3) practical questions around the value and usefulness of these solutions, as well as ethics and responsibility. Any AI application in dentistry should demonstrate tangible value by, for example, improving access to and quality of care, increasing efficiency and safety of services, empowering and enabling patients, supporting medical research, or increasing sustainability. Individual privacy, rights, and autonomy need to be put front and center; a shift from centralized to distributed/federated learning may address this while improving scalability and robustness. Lastly, trustworthiness in, and generalizability of, dental AI solutions need to be guaranteed; the implementation of continuous human oversight and standards grounded in evidence-based dentistry should be expected. Methods to visualize, interpret, and explain the logic behind AI solutions will contribute to this (“explainable AI”). Dental education will need to accompany the introduction of clinical AI solutions by fostering digital literacy in the future dental workforce.
17

Blanes-Selva, Vicent, Ascensión Doñate-Martínez, Gordon Linklater, Jorge Garcés-Ferrer, and Juan M. García-Gómez. "Responsive and Minimalist App Based on Explainable AI to Assess Palliative Care Needs during Bedside Consultations on Older Patients." Sustainability 13, no. 17 (September 2, 2021): 9844. http://dx.doi.org/10.3390/su13179844.

Abstract:
Palliative care is an alternative to standard care for gravely ill patients that has demonstrated many clinical benefits in cost-effective interventions. It is expected to grow in demand soon, so it is necessary to detect those patients who may benefit from these programs using a personalised objective criterion at the correct time. Our goal was to develop a responsive and minimalist web application embedding a 1-year mortality explainable predictive model to assess palliative care at bedside consultation. A 1-year mortality predictive model has been trained. We ranked the input variables and evaluated models with an increasing number of variables. We selected the model with the seven most relevant variables. Finally, we created a responsive, minimalist and explainable app to support bedside decision making for older palliative care. The selected variables are age, medication, Charlson, Barthel, urea, RDW-SD and metastatic tumour. The predictive model achieved an AUC ROC of 0.83 [CI: 0.82, 0.84]. A Shapley value graph was used for explainability. The app allows identifying patients in need of palliative care using the bad prognosis criterion, which can be a useful, easy and quick tool to support healthcare professionals in obtaining a fast recommendation in order to allocate health resources efficiently.
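The headline metric quoted above (AUC ROC) can be computed as in the short sketch below; the seven variable names are those listed in the abstract, but the synthetic data and the classifier choice are placeholder assumptions rather than the authors' pipeline.

```python
# Hedged sketch: scoring a 1-year mortality classifier with AUC ROC, using the seven
# variables named in the abstract. Data and classifier choice are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

features = ["age", "medication", "charlson", "barthel", "urea", "rdw_sd", "metastatic_tumour"]
rng = np.random.default_rng(7)
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=1000) > 1.0).astype(int)  # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC ROC: {auc:.2f}")                          # the paper reports 0.83 on real clinical data
```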
18

Tiensuu, Henna, Satu Tamminen, Esa Puukko, and Juha Röning. "Evidence-Based and Explainable Smart Decision Support for Quality Improvement in Stainless Steel Manufacturing." Applied Sciences 11, no. 22 (November 18, 2021): 10897. http://dx.doi.org/10.3390/app112210897.

Abstract:
This article demonstrates the use of data mining methods for evidence-based smart decision support in quality control. The data were collected in a measurement campaign which provided a new and potential quality measurement approach for manufacturing process planning and control. In this study, the machine learning prediction models and Explainable AI methods (XAI) serve as a base for the decision support system for smart manufacturing. The discovered information about the root causes behind the predicted failure can be used to improve the quality, and it also enables the definition of suitable security boundaries for better settings of the production parameters. The user’s need defines the given type of information. The developed method is applied to the monitoring of the surface roughness of the stainless steel strip, but the framework is not application dependent. The modeling analysis reveals that the parameters of the annealing and pickling line (RAP) have the best potential for real-time roughness improvement.
19

Maathuis, Clara. "An Outlook of Digital Twins in Offensive Military Cyber Operations." European Conference on the Impact of Artificial Intelligence and Robotics 4, no. 1 (November 17, 2022): 45–53. http://dx.doi.org/10.34190/eciair.4.1.765.

Abstract:
The outlook of military cyber operations is changing due to the prospects of data generation and accessibility, continuous technological advancements and their (public) availability, technological and human (inter)connections increase, plus the dynamism, needs, diverse nature, perspectives, and skills of experts involved in their planning, execution, and assessment phases respecting (inter)national aims, demands, and trends. Such operations are daily conducted and recently empowered by AI to reach or protect their targets and deal with the unintended effects produced through their engagement on them and/or collateral entities. However, these operations are governed and surrounded by different uncertainty levels e.g., intended effects prediction, consideration of effective alternatives, and understanding new dimensions of possible (strategic) future(s). Hence, the legality and ethicality of such operations should be assured; particularly, in Offensive Military Cyber Operations (OMCO), the agents involved in their design/deployment should consider, develop, and propose proper (intelligent) measures/methods. Such mechanisms can be built embedding intelligent techniques based on hardware, software, and communication data plus expert-knowledge through novel systems like digital twins. While digital twins find themselves in their infancy in military, cyber, and AI academic research and discourses, they started to show their modelling and simulation potential and effective real-time decision support in different industry applications. Nevertheless, this research aims to (i) understand what digital twins mean in OMCO context while embedding explainable AI and responsible AI perspectives, and (ii) capture challenges and benefits of their development. Accordingly, a multidisciplinary stance is considered through extensive review in the domains involved packaged in a design framework meant to assist the agents involved in their development and deployment.
20

Ploug, Thomas, Anna Sundby, Thomas B. Moeslund, and Søren Holm. "Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey." Journal of Medical Internet Research 23, no. 12 (December 13, 2021): e26611. http://dx.doi.org/10.2196/26611.

Abstract:
Background: Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making. Transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability. A public policy should consider the wider public’s interests in such features of AI. Objective: This study elicited the public’s preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI. Methods: We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents’ views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios. Results: Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents consider the physician having the final responsibility for treatment decisions the most important attribute, with 46.8% of the total weight of attributes, followed by explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents’ trust in health and technology, and respondents’ fears and hopes regarding AI, do not play a significant role in the majority of cases. Conclusions: The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to such AI system use and ensure that patients are provided with information.
21

Krarup, Benjamin, Senka Krivic, Daniele Magazzeni, Derek Long, Michael Cashmore, and David E. Smith. "Contrastive Explanations of Plans through Model Restrictions." Journal of Artificial Intelligence Research 72 (October 27, 2021): 533–612. http://dx.doi.org/10.1613/jair.1.12813.

Abstract:
In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user’s expectation. We frame Explainable AI Planning as an iterative plan exploration process, in which the user asks a succession of contrastive questions that lead to the generation and solution of hypothetical planning problems that are restrictions of the original problem. The object of the exploration is for the user to understand the constraints that govern the original plan and, ultimately, to arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are usually contrastive, i.e. “why A rather than B?”. We use the data from this study to construct a taxonomy of user questions that often arise during plan exploration. Our approach to iterative plan exploration is a process of successive model restriction. Each contrastive user question imposes a set of constraints on the planning problem, leading to the construction of a new hypothetical planning problem as a restriction of the original. Solving this restricted problem results in a plan that can be compared with the original plan, admitting a contrastive explanation. We formally define model-based compilations in PDDL2.1 for each type of constraint derived from a contrastive user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework supporting iterative model restriction. We demonstrate its benefits in a second user study.
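The iterative model-restriction loop formalised in the paper can be caricatured in a few lines: each contrastive question becomes a constraint on the planning model, the restricted problem is re-solved, and the two plans are compared. The `plan_for` function and string constraints below are hypothetical stand-ins, not the authors' PDDL2.1 compilations.

```python
# Hedged sketch of iterative plan exploration via model restriction. `plan_for` is a
# hypothetical stand-in for a call to a PDDL planner; the string constraints stand in for
# the PDDL2.1 compilations of contrastive user questions described in the paper.

def plan_for(constraints):
    """Placeholder planner: returns a (plan, cost) honouring the accumulated restrictions."""
    if "must use action 'fly'" in constraints:        # foil imposed by the user's question
        return ["load", "fly", "unload"], 12.0        # illustrative plan and cost
    return ["load", "drive", "unload"], 7.0

constraints = []
original_plan, original_cost = plan_for(constraints)

# The contrastive question "why drive rather than fly?" becomes a model restriction.
constraints.append("must use action 'fly'")
foil_plan, foil_cost = plan_for(constraints)

# The contrastive explanation comes from comparing the two solutions, e.g. their costs.
print(original_plan, original_cost, foil_plan, foil_cost)
```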
22

Munkhdalai, Lkhagvadorj, Tsendsuren Munkhdalai, Pham Van Van Huy, Jang-Eui Hong, Keun Ho Ryu, and Nipon Theera-Umpon. "Neural Network-Augmented Locally Adaptive Linear Regression Model for Tabular Data." Sustainability 14, no. 22 (November 17, 2022): 15273. http://dx.doi.org/10.3390/su142215273.

Abstract:
Creating an interpretable model with high predictive performance is crucial in the eXplainable AI (XAI) field. We introduce an interpretable neural network-based regression model for tabular data in this study. Our proposed model uses ordinary least squares (OLS) regression as a base-learner, and we re-update the parameters of our base-learner by using neural networks, which serve as a meta-learner in our proposed model. The meta-learner updates the regression coefficients using the confidence interval formula. We extensively compared our proposed model to other benchmark approaches on public datasets for the regression task. The results showed that our proposed neural network-based interpretable model outperformed the benchmark models. We also applied our proposed model to synthetic data to measure model interpretability, and we showed that our proposed model can explain the correlation between input and output variables by approximating the local linear function for each point. In addition, we trained our model on economic data to discover the correlation between the central bank policy rate and inflation over time. As a result, we find that the effect of central bank policy rates on inflation tends to strengthen during a recession and weaken during an expansion. We also performed an analysis on CO2 emission data, and our model discovered some interesting relationships between input and target variables, such as a parabolic relationship between CO2 emissions and gross national product (GNP). Finally, these experiments showed that our proposed neural network-based interpretable model could be applicable to many real-world applications where the data type is tabular and explainable models are required.
23

Benhamou, Eric, Jean-Jacques Ohana, David Saltiel, and Beatrice Guez. "Explainable AI (XAI) Models Applied to Planning in Financial Markets." SSRN Electronic Journal, 2021. http://dx.doi.org/10.2139/ssrn.3862437.

24

Kamal, Md Sarwar, Nilanjan Dey, Linkon Chowdhury, Syed Irtija Hasan, and KC Santosh. "Explainable AI for Glaucoma Prediction Analysis to Understand Risk Factors in Treatment Planning." IEEE Transactions on Instrumentation and Measurement, 2022, 1. http://dx.doi.org/10.1109/tim.2022.3171613.

25

Lu, Ming. "Explainable AI for Industrial Processes in Construction." Scientia, 2021. http://dx.doi.org/10.33548/scientia838.

26

Black, Elizabeth, Martim Brandão, Oana Cocarascu, Bart De Keijzer, Yali Du, Derek Long, Michael Luck, et al. "Reasoning and interaction for social artificial intelligence." AI Communications, September 12, 2022, 1–17. http://dx.doi.org/10.3233/aic-220133.

Abstract:
Current work on multi-agent systems at King’s College London is extensive, though largely based in two research groups within the Department of Informatics: the Distributed Artificial Intelligence (DAI) thematic group and the Reasoning & Planning (RAP) thematic group. DAI combines AI expertise with political and economic theories and data, to explore social and technological contexts of interacting intelligent entities. It develops computational models for analysing social, political and economic phenomena to improve the effectiveness and fairness of policies and regulations, and combines intelligent agent systems, software engineering, norms, trust and reputation, agent-based simulation, communication and provenance of data, knowledge engineering, crowd computing and semantic technologies, and algorithmic game theory and computational social choice, to address problems arising in autonomous systems, financial markets, privacy and security, urban living and health. RAP conducts research in symbolic models for reasoning involving argumentation, knowledge representation, planning, and other related areas, including development of logical models of argumentation-based reasoning and decision-making, and their usage for explainable AI and integration of machine and human reasoning, as well as combining planning and argumentation methodologies for strategic argumentation.
27

Mondal, Tarutal Ghosh, and Genda Chen. "Artificial intelligence in civil infrastructure health monitoring—Historical perspectives, current trends, and future visions." Frontiers in Built Environment 8 (September 23, 2022). http://dx.doi.org/10.3389/fbuil.2022.1007886.

Abstract:
Over the past 2 decades, the use of artificial intelligence (AI) has exponentially increased toward complete automation of structural inspection and assessment tasks. This trend will continue to rise in image processing as unmanned aerial systems (UAS) and the internet of things (IoT) markets are expected to expand at a compound annual growth rate of 57.5% and 26%, respectively, from 2021 to 2028. This paper aims to catalog the milestone development work, summarize the current research trends, and envision a few future research directions in the innovative application of AI in civil infrastructure health monitoring. A blow-by-blow account of the major technology progression in this research field is provided in a chronological order. Detailed applications, key contributions, and performance measures of each milestone publication are presented. Representative technologies are detailed to demonstrate current research trends. A road map for future research is outlined to address contemporary issues such as explainable and physics-informed AI. This paper will provide readers with a lucid memoir of the historical progress, a good sense of the current trends, and a clear vision for future research.