Journal articles on the topic 'Explainable recommendation'




Consult the top 50 journal articles for your research on the topic 'Explainable recommendation.'


You can download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization." Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.

Full text
Abstract:
A recommendation system is a technology that can mine users' preferences for items. Explainable recommendation produces recommendations for target users and, at the same time, gives the reasons behind them, revealing why the items are recommended. Explainability can improve the transparency of recommendations and the probability that users choose the recommended items. The merits of explainability are obvious, but it is not enough to focus solely on explainability in the field of explainable recommendation. Therefore, it is essential to construct an explainable recommendation framework that improves the explainability of recommended items while maintaining accuracy and diversity. An explainable recommendation framework based on a knowledge graph and multi-objective optimization is proposed that can optimize the precision, diversity and explainability of recommendations at the same time. The knowledge graph connects users and items through different relationships to obtain an explainable candidate list for the target user, and the path between the target user and a recommended item serves as the basis of the explanation. The explainable candidate list is then optimized by a multi-objective optimization algorithm to obtain the final recommendation list. The experimental results show that the presented explainable recommendation framework provides high-quality recommendations with high accuracy, diversity and explainability.
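The abstract outlines a two-stage pipeline: path-based candidate generation over a knowledge graph, followed by multi-objective selection. As a rough, hypothetical sketch of that idea (not the authors' implementation; the graph, objectives and weights below are invented for illustration):

```python
from collections import defaultdict, deque

# Toy knowledge graph as (head, relation, tail) triples; all data is invented.
triples = [
    ("u1", "rated", "i1"), ("u2", "rated", "i1"), ("u2", "rated", "i2"),
    ("i1", "directed_by", "director_a"), ("i2", "directed_by", "director_a"),
    ("i2", "genre", "sci-fi"), ("i3", "genre", "sci-fi"), ("u3", "rated", "i3"),
]

graph = defaultdict(list)
for h, r, t in triples:
    graph[h].append((r, t))
    graph[t].append((f"~{r}", h))  # add an inverse edge so paths can run both ways

def candidate_paths(user, max_hops=3):
    """BFS from the user; every path ending at an unseen item becomes an explanation candidate."""
    seen_items = {t for r, t in graph[user] if r == "rated"}
    paths = defaultdict(list)
    queue = deque([(user, [user], 0)])
    while queue:
        node, path, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for rel, nxt in graph[node]:
            if nxt in path:
                continue
            new_path = path + [f"-{rel}->", nxt]
            if nxt.startswith("i") and nxt not in seen_items:
                paths[nxt].append(new_path)
            queue.append((nxt, new_path, hops + 1))
    return paths

def score(item_paths, n_candidates):
    """Toy multi-objective scalarization: relevance, explainability, and a diversity placeholder."""
    relevance = len(item_paths)                                          # more connecting paths ~ stronger signal
    explainability = len(item_paths) / min(len(p) for p in item_paths)  # short paths explain better
    diversity = 1.0 / (1 + n_candidates)                                 # crude stand-in for a list-level objective
    return 0.5 * relevance + 0.3 * explainability + 0.2 * diversity

cands = candidate_paths("u1")
ranked = sorted(cands, key=lambda i: score(cands[i], len(cands)), reverse=True)
for item in ranked:
    print(item, "| explanation path:", " ".join(cands[item][0]))
```

The weighted sum stands in for the paper's multi-objective optimizer; a genuine implementation would keep a Pareto front rather than collapsing the objectives into one score.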
2

Leal, Fátima, Bruno Veloso, Benedita Malheiro, Juan C. Burguillo, Adriana E. Chis, and Horacio González-Vélez. "Stream-based explainable recommendations via blockchain profiling." Integrated Computer-Aided Engineering 29, no. 1 (December 28, 2021): 105–21. http://dx.doi.org/10.3233/ica-210668.

Full text
Abstract:
Explainable recommendations enable users to understand why certain items are suggested and, ultimately, nurture system transparency, trustworthiness, and confidence. Large crowdsourcing recommendation systems ought to crucially promote authenticity and transparency of recommendations. To address this challenge, this paper proposes the use of stream-based explainable recommendations via blockchain profiling. Our contribution relies on chained historical data to improve the quality and transparency of online collaborative recommendation filters – Memory-based and Model-based – using, as use cases, data streamed from two large tourism crowdsourcing platforms, namely Expedia and TripAdvisor. Building historical trust-based models of raters, our method is implemented as an external module and integrated with the collaborative filter through a post-recommendation component. The inter-user trust profiling history, traceability and authenticity are ensured by blockchain, since these profiles are stored as a smart contract in a private Ethereum network. Our empirical evaluation with HotelExpedia and TripAdvisor has consistently shown the positive impact of blockchain-based profiling on the quality (measured as recall) and transparency (determined via explanations) of recommendations.
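Purely as an illustrative sketch of the trust-profiling idea (the blockchain storage and the paper's actual filters are not reproduced; the update rule, data and weighting are assumptions), one could maintain streamed per-rater trust scores and use them to re-weight predictions in a post-recommendation step:

```python
import numpy as np

# Streamed (user, item, rating) events; values are made up for illustration.
stream = [("u1", "hotel_a", 5), ("u2", "hotel_a", 4), ("u3", "hotel_a", 1),
          ("u1", "hotel_b", 4), ("u2", "hotel_b", 4), ("u3", "hotel_b", 5)]

ratings = {}   # (user, item) -> rating
trust = {}     # user -> trust score in [0, 1]

def item_mean(item):
    vals = [r for (u, i), r in ratings.items() if i == item]
    return float(np.mean(vals)) if vals else None

for user, item, rating in stream:
    consensus = item_mean(item)
    ratings[(user, item)] = rating
    t = trust.get(user, 0.5)          # start every rater at neutral trust
    if consensus is not None:
        # Raters who deviate strongly from the running consensus lose trust (assumed rule).
        error = abs(rating - consensus) / 4.0
        t = 0.9 * t + 0.1 * (1.0 - error)
    trust[user] = t

def predict(item):
    """Trust-weighted average rating: the 'post-recommendation' adjustment."""
    pairs = [(trust[u], r) for (u, i), r in ratings.items() if i == item]
    w = np.array([p[0] for p in pairs])
    r = np.array([p[1] for p in pairs])
    return float(np.dot(w, r) / w.sum())

print({u: round(t, 2) for u, t in trust.items()})
print("trust-weighted score for hotel_a:", round(predict("hotel_a"), 2))
```

In the paper the trust profiles are persisted on a private Ethereum blockchain for traceability; here a plain dictionary stands in for that storage layer.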
3

Kido, Shunsuke, Ryuji Sakamoto, and Masayoshi Aritsugi. "Making Use of More Reviews Skillfully in Explainable Recommendation Generation." Journal of Data Intelligence 2, no. 4 (November 2021): 434–47. http://dx.doi.org/10.26421/jdi2.4-3.

Full text
Abstract:
There are many reviews on the Internet, and existing explainable recommendation techniques use them. However, how to use reviews has so far not been adequately addressed. This paper proposes a new method of exploiting reviews in explainable recommendation generation. Our method makes use not only of reviews written by users but also of those they refer to. We adopt two state-of-the-art explainable recommendation approaches and show how to apply our method to them. Moreover, the method considers the possibility of using reviews that do not provide detailed utilization data. Our proposal can therefore be applied to different explainable recommendation approaches, as shown with the two adopted approaches, even when reviews do not provide detailed utilization data. An evaluation using Amazon reviews shows an improvement for both explainable recommendation approaches. Our proposal is the first attempt to make use of reviews that are written or referred to by users when generating explainable recommendations, and, in particular, it does not assume that reviews provide detailed utilization data.
4

Sana, Saba, and Mohammad Shoaib. "Trustworthy Explainable Recommendation Framework for Relevancy." Computers, Materials & Continua 73, no. 3 (2022): 5887–909. http://dx.doi.org/10.32604/cmc.2022.028046.

Full text
5

Zheng, Xiaolin, Menghan Wang, Chaochao Chen, Yan Wang, and Zhehao Cheng. "EXPLORE: EXPLainable item-tag CO-REcommendation." Information Sciences 474 (February 2019): 170–86. http://dx.doi.org/10.1016/j.ins.2018.09.054.

Full text
6

Ai, Qingyao, Vahid Azizi, Xu Chen, and Yongfeng Zhang. "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation." Algorithms 11, no. 9 (September 13, 2018): 137. http://dx.doi.org/10.3390/a11090137.

Full text
Abstract:
Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms—especially the collaborative filtering (CF)-based approaches with shallow or deep models—usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedback. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users’ historical behaviors, and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) shed light on this problem, making it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.
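To make the idea concrete, here is a minimal, hypothetical sketch (random vectors stand in for trained knowledge-base embeddings; the entities, relations and matching criterion are assumptions, not the paper's algorithm) of TransE-style scoring plus a soft-matching step that picks an attribute to explain the top item:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy entities and relations; embeddings are random stand-ins for trained KBE vectors.
entities = ["user_1", "item_1", "item_2", "brand_x", "category_y"]
relations = ["purchases", "belongs_to_brand", "belongs_to_category"]
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def transe_score(head, relation, tail):
    """TransE-style plausibility: smaller ||h + r - t|| means a more plausible triple."""
    return -np.linalg.norm(E[head] + R[relation] - E[tail])

# Rank items for the user with the 'purchases' relation.
items = ["item_1", "item_2"]
ranked = sorted(items, key=lambda i: transe_score("user_1", "purchases", i), reverse=True)
top = ranked[0]

# Soft matching for an explanation: which attribute triple of the recommended item
# scores highest under the embedding? (Illustrative criterion only.)
attributes = {"belongs_to_brand": ["brand_x"], "belongs_to_category": ["category_y"]}
best = max(
    ((rel, attr, transe_score(top, rel, attr)) for rel, attrs in attributes.items() for attr in attrs),
    key=lambda x: x[2],
)
print(f"Recommend {top}; explanation: it {best[0].replace('_', ' ')} {best[1]}")
```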
7

Wang, Tongxuan, Xiaolong Zheng, Saike He, Zhu Zhang, and Desheng Dash Wu. "Learning user-item paths for explainable recommendation." IFAC-PapersOnLine 53, no. 5 (2020): 436–40. http://dx.doi.org/10.1016/j.ifacol.2021.04.119.

Full text
8

Zhang, Yongfeng, and Xu Chen. "Explainable Recommendation: A Survey and New Perspectives." Foundations and Trends® in Information Retrieval 14, no. 1 (2020): 1–101. http://dx.doi.org/10.1561/1500000066.

Full text
9

Gao, Jingyue, Xiting Wang, Yasha Wang, and Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.

Full text
Abstract:
Recommender systems have been playing an increasingly important role in our daily life due to the explosive growth of information. Accuracy and explainability are two core aspects when we evaluate a recommendation model and have become one of the fundamental trade-offs in machine learning. In this paper, we propose to alleviate the trade-off between accuracy and explainability by developing an explainable deep model that combines the advantages of deep learning-based models and existing explainable methods. The basic idea is to build an initial network based on an explainable deep hierarchy (e.g., Microsoft Concept Graph) and improve the model accuracy by optimizing key variables in the hierarchy (e.g., node importance and relevance). To ensure accurate rating prediction, we propose an attentive multi-view learning framework. The framework enables us to handle sparse and noisy data by co-regularizing among different feature levels and combining predictions attentively. To mine readable explanations from the hierarchy, we formulate personalized explanation generation as a constrained tree node selection problem and propose a dynamic programming algorithm to solve it. Experimental results show that our model outperforms state-of-the-art methods in terms of both accuracy and explainability.
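As an illustrative toy version of the attentive multi-view idea (random vectors replace learned representations; the view names and linear rating heads are assumptions, and the paper's dynamic-programming explanation step is omitted), attention weights can combine per-view predictions like this:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
dim = 6

# Three "views" of a user-item pair (e.g. different levels of a concept hierarchy);
# the vectors and scorer weights below are random stand-ins, not learned values.
views = {lvl: rng.normal(size=dim) for lvl in ["concept_level", "subconcept_level", "word_level"]}
attn_vector = rng.normal(size=dim)                       # dot-product attention scorer (toy)
rating_weights = {lvl: rng.normal(size=dim) for lvl in views}

# Per-view rating predictions, e.g. a linear head on each view representation.
per_view_pred = {lvl: float(rating_weights[lvl] @ v) for lvl, v in views.items()}

# Attention combines the views: views deemed more relevant get larger weight.
scores = np.array([attn_vector @ v for v in views.values()])
alphas = softmax(scores)
final_pred = float(np.dot(alphas, np.array(list(per_view_pred.values()))))

for (lvl, pred), a in zip(per_view_pred.items(), alphas):
    print(f"{lvl}: prediction={pred:+.2f}, attention={a:.2f}")
print("combined prediction:", round(final_pred, 2))
```

The attention weights themselves double as a crude explanation of which feature level drove the prediction, which is the spirit of the approach described above.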
10

Wang, Xiang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, and Tat-Seng Chua. "Explainable Reasoning over Knowledge Graphs for Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5329–36. http://dx.doi.org/10.1609/aaai.v33i01.33015329.

Full text
Abstract:
Incorporating knowledge graphs into recommender systems has attracted increasing attention in recent years. By exploring the interlinks within a knowledge graph, the connectivity between users and items can be discovered as paths, which provide rich and complementary information to user-item interactions. Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user’s interest. However, existing efforts have not fully explored this connectivity to infer user preferences, especially in terms of modeling the sequential dependencies within and the holistic semantics of a path. In this paper, we contribute a new model named Knowledge-aware Path Recurrent Network (KPRN) to exploit the knowledge graph for recommendation. KPRN can generate path representations by composing the semantics of both entities and relations. By leveraging the sequential dependencies within a path, we allow effective reasoning on paths to infer the underlying rationale of a user-item interaction. Furthermore, we design a new weighted pooling operation to discriminate the strengths of different paths in connecting a user with an item, endowing our model with a certain level of explainability. We conduct extensive experiments on two datasets about movies and music, demonstrating significant improvements over the state-of-the-art solutions Collaborative Knowledge Base Embedding and Neural Factorization Machine.
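A toy numpy rendering of that recipe (random matrices stand in for learned KPRN parameters; the recurrent composition step and the log-sum-exp pooling below are simplifications, not the published architecture) might look like:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8
W = rng.normal(scale=0.3, size=(dim, dim))   # shared composition weights (toy stand-in)
v = rng.normal(size=dim)                     # scoring vector (toy stand-in)

_emb = {}
def embed(token):
    """Cache one random vector per entity/relation token (stand-in for learned embeddings)."""
    if token not in _emb:
        _emb[token] = rng.normal(size=dim)
    return _emb[token]

def path_score(path_tokens):
    """Compose entity and relation embeddings sequentially, then score the final hidden state."""
    h = np.zeros(dim)
    for tok in path_tokens:
        h = np.tanh(W @ h + embed(tok))      # simple recurrent composition step
    return float(v @ h)

def weighted_pooling(scores, gamma=2.0):
    """Pool the scores of all paths linking one user-item pair; gamma sharpens strong paths."""
    s = np.asarray(scores)
    return float(gamma * np.log(np.exp(s / gamma).sum()))

paths = [
    ["user_1", "rated", "movie_a", "directed_by", "director_x", "directs", "movie_b"],
    ["user_1", "rated", "movie_c", "has_genre", "thriller", "genre_of", "movie_b"],
]
scores = [path_score(p) for p in paths]
print("per-path scores:", [round(s, 3) for s in scores])
print("pooled user_1 -> movie_b score:", round(weighted_pooling(scores), 3))
print("best explanation path:", " -> ".join(paths[int(np.argmax(scores))]))
```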
11

Zhao, Guoshuai, Hao Fu, Ruihua Song, Tetsuya Sakai, Zhongxia Chen, Xing Xie, and Xueming Qian. "Personalized Reason Generation for Explainable Song Recommendation." ACM Transactions on Intelligent Systems and Technology 10, no. 4 (August 29, 2019): 1–21. http://dx.doi.org/10.1145/3337967.

Full text
12

Hou, Yunfeng, Ning Yang, Yi Wu, and Philip S. Yu. "Explainable recommendation with fusion of aspect information." World Wide Web 22, no. 1 (April 13, 2018): 221–40. http://dx.doi.org/10.1007/s11280-018-0558-1.

Full text
13

Huang, Xiao, Pengjie Ren, Zhaochun Ren, Fei Sun, Xiangnan He, Dawei Yin, and Maarten de Rijke. "Report on the international workshop on natural language processing for recommendations (NLP4REC 2020) workshop held at WSDM 2020." ACM SIGIR Forum 54, no. 1 (June 2020): 1–5. http://dx.doi.org/10.1145/3451964.3451970.

Full text
Abstract:
This paper summarizes the outcomes of the International Workshop on Natural Language Processing for Recommendations (NLP4REC 2020), held in Houston, USA, on February 7, 2020, during WSDM 2020. The purpose of this workshop was to explore the potential research topics and industrial applications in leveraging natural language processing techniques to tackle the challenges in constructing more intelligent recommender systems. Specific topics included, but were not limited to knowledge-aware recommendation, explainable recommendation, conversational recommendation, and sequential recommendation.
14

Chen, Xu, Yongfeng Zhang, and Zheng Qin. "Dynamic Explainable Recommendation Based on Neural Attentive Models." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 53–60. http://dx.doi.org/10.1609/aaai.v33i01.330153.

Full text
Abstract:
Providing explanations in a recommender system is getting more and more attention in both industry and research communities. Most existing explainable recommender models regard user preferences as invariant and generate static explanations. However, in real scenarios, a user’s preference is always dynamic, and she may be interested in different product features at different states. A mismatch between the explanation and the user's preference may degrade customers’ satisfaction, confidence and trust in the recommender system. With the desire to fill this gap, in this paper we build a novel Dynamic Explainable Recommender (called DER) for more accurate user modeling and explanations. Specifically, we design a time-aware gated recurrent unit (GRU) to model dynamic user preferences, and profile an item by its review information based on a sentence-level convolutional neural network (CNN). By attentively learning the important review information according to the user's current state, we are not only able to improve the recommendation performance, but can also provide explanations tailored to the user's current preferences. We conduct extensive experiments to demonstrate the superiority of our model for improving recommendation performance. To evaluate the explainability of our model, we first present examples to provide an intuitive analysis of the highlighted review information, and then conduct crowd-sourcing based evaluations to quantitatively verify our model’s superiority.
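As a highly simplified, hypothetical sketch of the attention step (the GRU and CNN are replaced by random stand-in vectors, and the review sentences are invented), user-state-conditioned attention can both score an item and surface the sentence used as the explanation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(3)
dim = 8

# Pretend output of a time-aware GRU over the user's past interactions (toy vector).
user_state = rng.normal(size=dim)

# Item profile: one vector per review sentence (stand-ins for CNN sentence features).
review_sentences = [
    "battery lasts two full days",
    "camera struggles in low light",
    "screen is bright and sharp",
]
sentence_vecs = [rng.normal(size=dim) for _ in review_sentences]

# Attention conditioned on the user's current state highlights the relevant sentences.
weights = softmax(np.array([user_state @ s for s in sentence_vecs]))
item_repr = sum(w * s for w, s in zip(weights, sentence_vecs))
predicted_rating = 3.0 + float(user_state @ item_repr) / dim   # toy rating head centred on 3

order = np.argsort(weights)[::-1]
print("predicted rating:", round(predicted_rating, 2))
print("explanation (most attended sentence):", review_sentences[order[0]])
```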
15

Bai, Peng, Yang Xia, and Yongsheng Xia. "Fusing Knowledge and Aspect Sentiment for Explainable Recommendation." IEEE Access 8 (2020): 137150–60. http://dx.doi.org/10.1109/access.2020.3012347.

Full text
16

Seong, Su-Jin, Soo-Bum Kwon, Ji-Uk Yoon, Jin-Yong Oh, and Jeong-Won Cha. "Explainable Deep Neural Network for Anesthetic Treatment Recommendation." KIISE Transactions on Computing Practices 26, no. 12 (December 31, 2020): 550–55. http://dx.doi.org/10.5626/ktcp.2020.26.12.550.

Full text
17

Wang, Ying, Xin He, Hongji Wang, Yudong Sun, and Xin Wang. "Fast Explainable Recommendation Model by Combining Fine-Grained Sentiment in Review Data." Computational Intelligence and Neuroscience 2022 (October 18, 2022): 1–15. http://dx.doi.org/10.1155/2022/4940401.

Full text
Abstract:
With the rapid development of e-commerce, recommendation systems have become one of the main tools that assist users in decision-making, enhance the user experience, and create economic value. Since it is difficult to explain the implicit features generated by matrix factorization, explainable recommendation systems have attracted more and more attention recently. In this paper, we propose an explainable fast recommendation model that combines fine-grained sentiment in review data (FSER: Fast, Fine-grained Sentiment for Explainable Recommendation). We innovatively construct a user-rating matrix, a user-aspect sentiment matrix, and an item aspect-descriptive word frequency matrix from the review-based data, and the three matrices are reconstructed by a matrix factorization method. The reconstructed results of the user-aspect sentiment matrix and the item aspect-descriptive word frequency matrix provide explanations for the final recommendation results. Experiments on the Yelp and Public Comment datasets demonstrate that, compared with several classical models, the proposed FSER model is in the optimal recommendation accuracy range and has lower sparseness and higher training efficiency than tensor models or neural network models; furthermore, it can generate explanatory texts and diagrams of high interpretation quality.
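A compact, assumption-laden sketch of the general idea (joint factorization with shared user factors; the matrices are random toys and the update rule is plain gradient descent, not the paper's reconstruction method):

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, n_aspects, k = 4, 5, 3, 2
aspects = ["food", "service", "price"]

# Toy observed matrices (0 = unobserved rating) and per-aspect sentiment in [-1, 1].
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)
S = np.clip(rng.normal(size=(n_users, n_aspects)), -1, 1)

U = rng.normal(scale=0.1, size=(n_users, k))    # shared user factors
V = rng.normal(scale=0.1, size=(n_items, k))    # item factors
A = rng.normal(scale=0.1, size=(n_aspects, k))  # aspect factors

lr, reg = 0.01, 0.01
for _ in range(800):                            # joint gradient descent on both losses
    err_r = (R > 0) * (R - U @ V.T)             # only observed ratings contribute
    err_s = S - U @ A.T
    U += lr * (err_r @ V + err_s @ A - reg * U)
    V += lr * (err_r.T @ U - reg * V)
    A += lr * (err_s.T @ U - reg * A)

user = 0
item = int(np.argmax((U @ V.T)[user] * (R[user] == 0)))   # best unrated item for user 0
aspect_scores = U[user] @ A.T
print(f"recommend item {item} (predicted rating {(U @ V.T)[user, item]:.2f})")
print("explanation: user cares most about", aspects[int(np.argmax(aspect_scores))])
```

The shared user factors are what let the reconstructed aspect matrix act as an explanation for the reconstructed rating matrix, which is the mechanism the abstract describes at a high level.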
18

Liang, Qianqiao, Xiaolin Zheng, Yan Wang, and Mengying Zhu. "O3ERS: An explainable recommendation system with online learning, online recommendation, and online explanation." Information Sciences 562 (July 2021): 94–115. http://dx.doi.org/10.1016/j.ins.2020.12.070.

Full text
19

Sopchoke, Sirawit, Ken-ichi Fukui, and Masayuki Numao. "Explainable and unexpectable recommendations using relational learning on multiple domains." Intelligent Data Analysis 24, no. 6 (December 18, 2020): 1289–309. http://dx.doi.org/10.3233/ida-194729.

Full text
Abstract:
In this research, we combine relational learning with multiple domains to develop a formal framework for a recommendation system. The design of our framework aims at: (i) constructing general rules for recommendations, (ii) providing suggested items with clear and understandable explanations, and (iii) delivering a broad range of recommendations, including novel and unexpected items. We use relational learning to find all possible relations, including novel relations, and to form the general rules for recommendations. Each rule is represented in relational logic, a formal language, and is associated with a probability. The rules are used to suggest items, in any domain, to the user whose preferences or other properties satisfy the conditions of the rule. The information described by the rule serves as an explanation for the suggested item: it states clearly why the items are chosen for the users. The explanation is in an if-then logical format, which is unambiguous, less redundant and more concise compared to the natural language used in other explainable recommendation systems. The explanation itself can help persuade the user to try out the suggested items, and the associated probability can help the user make a decision more easily and quickly, with more confidence. Incorporating information or knowledge from multiple domains allows us to broaden our search space and provides us with more opportunities to discover items that were previously unseen by or surprising to a user, resulting in a wide range of recommendations. The experimental results show that our proposed algorithm is very promising. Although the quality of recommendations provided by our framework is moderate, our framework does produce interesting recommendations not found in a primitive single-domain system, with simple and understandable explanations.
20

Doh, Ronky Francis, Conghua Zhou, John Kingsley Arthur, Isaac Tawiah, and Benjamin Doh. "A Systematic Review of Deep Knowledge Graph-Based Recommender Systems, with Focus on Explainable Embeddings." Data 7, no. 7 (July 12, 2022): 94. http://dx.doi.org/10.3390/data7070094.

Full text
Abstract:
Recommender systems (RS) have been developed to make personalized suggestions and enrich users’ preferences in various online applications to address the information explosion problems. However, traditional recommender-based systems act as black boxes, not presenting the user with insights into the system logic or reasons for recommendations. Recently, generating explainable recommendations with deep knowledge graphs (DKG) has attracted significant attention. DKG is a subset of explainable artificial intelligence (XAI) that utilizes the strengths of deep learning (DL) algorithms to learn, provide high-quality predictions, and complement the weaknesses of knowledge graphs (KGs) in the explainability of recommendations. DKG-based models can provide more meaningful, insightful, and trustworthy justifications for recommended items and alleviate the information explosion problems. Although several studies have been carried out on RS, only a few papers have been published on DKG-based methodologies, and a review in this new research direction is still insufficiently explored. To fill this literature gap, this paper uses a systematic literature review framework to survey the recently published papers from 2018 to 2022 in the landscape of DKG and XAI. We analyze how the methods produced in these papers extract essential information from graph-based representations to improve recommendations’ accuracy, explainability, and reliability. From the perspective of the leveraged knowledge-graph related information and how the knowledge-graph or path embeddings are learned and integrated with the DL methods, we carefully select and classify these published works into four main categories: the Two-stage explainable learning methods, the Joint-stage explainable learning methods, the Path-embedding explainable learning methods, and the Propagation explainable learning methods. We further summarize these works according to the characteristics of the approaches and the recommendation scenarios to facilitate the ease of checking the literature. We finally conclude by discussing some open challenges left for future research in this vibrant field.
21

Guo, Siyuan, Ying Wang, Hao Yuan, Zeyu Huang, Jianwei Chen, and Xin Wang. "TAERT: Triple-Attentional Explainable Recommendation with Temporal Convolutional Network." Information Sciences 567 (August 2021): 185–200. http://dx.doi.org/10.1016/j.ins.2021.03.034.

Full text
22

Damak, Khalil, Sami Khenissi, and Olfa Nasraoui. "A framework for unbiased explainable pairwise ranking for recommendation." Software Impacts 11 (February 2022): 100208. http://dx.doi.org/10.1016/j.simpa.2021.100208.

Full text
23

Yang, Zuoxi, Shoubin Dong, and Jinlong Hu. "GFE: General Knowledge Enhanced Framework for Explainable Sequential Recommendation." Knowledge-Based Systems 230 (October 2021): 107375. http://dx.doi.org/10.1016/j.knosys.2021.107375.

Full text
24

Syed, Muzamil Hussain, Tran Quoc Bao Huy, and Sun-Tae Chung. "Context-Aware Explainable Recommendation Based on Domain Knowledge Graph." Big Data and Cognitive Computing 6, no. 1 (January 20, 2022): 11. http://dx.doi.org/10.3390/bdcc6010011.

Full text
Abstract:
With the rapid growth of internet data, knowledge graphs (KGs) are considered an efficient form of knowledge representation that captures the semantics of web objects. In recent years, reasoning over KGs for various artificial intelligence tasks has received a great deal of research interest. Providing recommendations based on users’ natural language queries is an equally difficult undertaking. In this paper, we propose a novel, context-aware recommender system, based on a domain KG, to respond to user-defined natural queries. The proposed recommender system consists of three stages. First, we generate incomplete triples from user queries, which are then segmented using logical conjunction (∧) and disjunction (∨) operations. Then, we generate candidates by utilizing a KGE-based framework (Query2Box) for reasoning over the segmented logical triples with the ∧, ∨, and ∃ operators; finally, the generated candidates are re-ranked using a neural collaborative filtering (NCF) model that exploits contextual (auxiliary) information from GraphSAGE embeddings. Our approach proves to be simple, yet efficient, at providing explainable recommendations for users’ queries, while leveraging user-item contextual information. Furthermore, our framework is capable of handling complex logical queries by transforming them into a disjunctive normal form (DNF) of simple queries. In this work, we focus on the restaurant domain as an application domain and use the Yelp dataset to evaluate the system. Experiments demonstrate that the proposed recommender system generalizes well on candidate generation from logical queries and effectively re-ranks those candidates, compared to the matrix factorization model.
25

Jiang, Tianming, and Jiangfeng Zeng. "Time-Aware Explainable Recommendation via Updating Enabled Online Prediction." Entropy 24, no. 11 (November 11, 2022): 1639. http://dx.doi.org/10.3390/e24111639.

Full text
Abstract:
There has been growing attention on explainable recommendation, which is able to provide high-quality results as well as intuitive explanations. However, most existing studies use offline prediction strategies where recommender systems are trained once and used forever, which ignores the dynamic and evolving nature of user–item interactions. There are two main issues with these methods. First, their random dataset split setting results in data leakage, whereby knowledge that should not be known at training time is utilized. Second, the dynamic characteristics of user preferences are overlooked, resulting in a model aging issue where the model’s performance degrades over time. In this paper, we propose an updating-enabled online prediction framework for time-aware explainable recommendation. Specifically, we propose an online prediction scheme to eliminate the data leakage issue and two novel updating strategies to relieve the model aging issue. Moreover, we conduct extensive experiments on four real-world datasets to evaluate the effectiveness of our proposed methods. Compared with the state of the art, our time-aware approach achieves higher accuracy and more convincing explanations for the entire lifetime of recommendation systems, i.e., both the initial period and long-term usage.
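To illustrate the online-prediction-with-updates protocol in the simplest possible terms (a toy linear model on synthetic events; the window size, learning rate and features are arbitrary assumptions and bear no relation to the paper's models):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy interaction stream: (timestamp, user_feature, item_feature, rating), sorted by time.
events = sorted((t, rng.normal(), rng.normal(), rng.integers(1, 6)) for t in rng.uniform(0, 100, 200))

w = np.zeros(3)                      # tiny linear model: [bias, user_feature, item_feature]
lr, window = 0.05, 10.0              # update the model every `window` time units

next_update, buffer, errors = window, [], []
for t, uf, itf, r in events:
    x = np.array([1.0, uf, itf])
    pred = float(w @ x)              # online prediction: only past data has shaped `w`, so no leakage
    errors.append((pred - r) ** 2)
    buffer.append((x, r))
    if t >= next_update:             # periodic updating strategy to counter model aging
        for x_b, r_b in buffer:
            w += lr * (r_b - w @ x_b) * x_b
        buffer, next_update = [], next_update + window

print("model weights after streaming:", np.round(w, 3))
print("online RMSE:", round(float(np.sqrt(np.mean(errors))), 3))
```

The key point mirrored from the abstract is the chronological evaluation: every prediction is made before the event's own data is allowed to update the model.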
26

Zuo, Xianglin, Tianhao Jia, Xin He, Bo Yang, and Ying Wang. "Exploiting Dual-Attention Networks for Explainable Recommendation in Heterogeneous Information Networks." Entropy 24, no. 12 (November 24, 2022): 1718. http://dx.doi.org/10.3390/e24121718.

Full text
Abstract:
The aim of explainable recommendation is not only to provide recommended items to users, but also to make users aware of why these items are recommended. Traditional recommendation methods infer user preferences for items using user–item rating information. However, the expressive power of latent representations of users and items is relatively limited due to the sparseness of the user–item rating matrix. Heterogeneous information networks (HIN) provide contextual information for improving recommendation performance and interpreting the interactions between users and items. However, due to the heterogeneity and complexity of context information in HIN, it is still a challenge to integrate this contextual information into explainable recommendation systems effectively. In this paper, we propose a novel framework—the dual-attention networks for explainable recommendation (DANER) in HINs. We first used multiple meta-paths to capture high-order semantic relations between users and items in HIN for generating similarity matrices, and then utilized matrix decomposition on similarity matrices to obtain low-dimensional sparse representations of users and items. Secondly, we introduced two-level attention networks, namely a local attention network and a global attention network, to integrate the representations of users and items from different meta-paths for obtaining high-quality representations. Finally, we use a standard multi-layer perceptron to model the interactions between users and items, which predict users’ ratings of items. Furthermore, the dual-attention mechanism also contributes to identifying critical meta-paths to generate relevant explanations for users. Comprehensive experiments on two real-world datasets demonstrate the effectiveness of DANER on recommendation performance as compared with the state-of-the-art methods. A case study illustrates the interpretability of DANER.
27

Tao, Shaohua, Runhe Qiu, Yuan Ping, and Hui Ma. "Multi-modal Knowledge-aware Reinforcement Learning Network for Explainable Recommendation." Knowledge-Based Systems 227 (September 2021): 107217. http://dx.doi.org/10.1016/j.knosys.2021.107217.

Full text
28

Vo, Tham. "An integrated network embedding with reinforcement learning for explainable recommendation." Soft Computing 26, no. 8 (March 6, 2022): 3757–75. http://dx.doi.org/10.1007/s00500-022-06843-0.

Full text
29

Lin, Yujie, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, and Maarten de Rijke. "Explainable Outfit Recommendation with Joint Outfit Matching and Comment Generation." IEEE Transactions on Knowledge and Data Engineering 32, no. 8 (August 1, 2020): 1502–16. http://dx.doi.org/10.1109/tkde.2019.2906190.

Full text
30

Liu, Peng, Lemei Zhang, and Jon Atle Gulla. "Dynamic attention-based explainable recommendation with textual and visual fusion." Information Processing & Management 57, no. 6 (November 2020): 102099. http://dx.doi.org/10.1016/j.ipm.2019.102099.

Full text
31

Newn, Joshua, Ryan M. Kelly, Simon D'Alfonso, and Reeva Lederman. "Examining and Promoting Explainable Recommendations for Personal Sensing Technology Acceptance." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 3 (September 6, 2022): 1–27. http://dx.doi.org/10.1145/3550297.

Full text
Abstract:
Personal sensing is a promising approach for enabling the delivery of timely and personalised recommendations to improve mental health and well-being. However, existing research has revealed numerous barriers to personal sensing acceptance. This paper explores the influence of explanations on the acceptability of recommendations based on personal sensing. We conducted a qualitative study using five plausible personal sensing scenarios to elicit prospective users' attitudes towards personal sensing, followed by a reflective interview. Our analysis formed six nuanced design considerations for personal sensing recommendation acceptance: user personalisation, appropriate phrasing, adaptive capability, users' confidence, peer endorsement, and sense of agency. Simultaneously, we found that the availability of an explanation at each personal sensing layer positively influenced the willingness of the participants to accept personal sensing technology. Together, this paper contributes a better understanding of how we can design personal sensing technology to be more acceptable.
32

Wang, Chao, Hengshu Zhu, Peng Wang, Chen Zhu, Xi Zhang, Enhong Chen, and Hui Xiong. "Personalized and Explainable Employee Training Course Recommendations: A Bayesian Variational Approach." ACM Transactions on Information Systems 40, no. 4 (October 31, 2022): 1–32. http://dx.doi.org/10.1145/3490476.

Full text
Abstract:
As a major component of strategic talent management, learning and development (L&D) aims at improving the individual and organization performances through planning tailored training for employees to increase and improve their skills and knowledge. While many companies have developed the learning management systems (LMSs) for facilitating the online training of employees, a long-standing important issue is how to achieve personalized training recommendations with the consideration of their needs for future career development. To this end, in this article, we present a focused study on the explainable personalized online course recommender system for enhancing employee training and development. Specifically, we first propose a novel end-to-end hierarchical framework, namely Demand-aware Collaborative Bayesian Variational Network (DCBVN), to jointly model both the employees’ current competencies and their career development preferences in an explainable way. In DCBVN, we first extract the latent interpretable representations of the employees’ competencies from their skill profiles with autoencoding variational inference based topic modeling. Then, we develop an effective demand recognition mechanism for learning the personal demands of career development for employees. In particular, all the above processes are integrated into a unified Bayesian inference view for obtaining both accurate and explainable recommendations. Furthermore, for handling the employees with sparse or missing skill profiles, we develop an improved version of DCBVN, called the Demand-aware Collaborative Competency Attentive Network (DCCAN) framework , by considering the connectivity among employees. In DCCAN, we first build two employee competency graphs from learning and working aspects. Then, we design a graph-attentive network and a multi-head integration mechanism to infer one’s competency information from her neighborhood employees. Finally, we can generate explainable recommendation results based on the competency representations. Extensive experimental results on real-world data clearly demonstrate the effectiveness and the interpretability of both of our frameworks, as well as their robustness on sparse and cold-start scenarios.
33

Yang, Zuoxi, and Shoubin Dong. "HAGERec: Hierarchical Attention Graph Convolutional Network Incorporating Knowledge Graph for Explainable Recommendation." Knowledge-Based Systems 204 (September 2020): 106194. http://dx.doi.org/10.1016/j.knosys.2020.106194.

Full text
34

Yang, Chao, Weixin Zhou, Zhiyu Wang, Bin Jiang, Dongsheng Li, and Huawei Shen. "Accurate and Explainable Recommendation via Hierarchical Attention Network Oriented Towards Crowd Intelligence." Knowledge-Based Systems 213 (February 2021): 106687. http://dx.doi.org/10.1016/j.knosys.2020.106687.

Full text
35

Wang, Xin, Ying Wang, and Yunzhi Ling. "Attention-Guide Walk Model in Heterogeneous Information Network for Multi-Style Recommendation Explanation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6275–82. http://dx.doi.org/10.1609/aaai.v34i04.6095.

Full text
Abstract:
Explainable recommendation aims not only to provide the recommended items to users, but also to make users aware of why these items are recommended. Many interactive factors between users and items can be used to interpret a recommendation in a heterogeneous information network. However, these interactive factors are usually massive, implicit and noisy. Existing recommendation explanation approaches consider only a single explanation style, such as aspect-level or review-level. To address these issues, we propose a framework (MSRE) for generating multi-style recommendation explanations with an attention-guide walk model over affiliation relations and interaction relations in the heterogeneous information network. Inspired by the attention mechanism, we determine the important contexts for recommendation explanation and learn a joint representation of multi-style user-item interactions to enhance recommendation performance. Extensive experiments on three real-world datasets verify the effectiveness of our framework for both recommendation performance and recommendation explanation.
36

Ammar, Nariman, and Arash Shaban-Nejad. "Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development." JMIR Medical Informatics 8, no. 11 (November 4, 2020): e18752. http://dx.doi.org/10.2196/18752.

Full text
Abstract:
Background: The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the same is not true of the data. Accordingly, it is a complex problem to build a training set and develop machine-learning models from these studies. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models. This raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach for promoting credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, thinking about how machine learning could benefit from knowledge graphs that combine “common sense” knowledge as well as semantic reasoning and causality models is a potential solution to this problem. Objective: In this study, we aimed to leverage explainable artificial intelligence, and propose a proof-of-concept prototype for a knowledge-driven evidence-based recommendation system to improve mental health surveillance. Methods: We used concepts from an ontology that we have developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology. Results: To showcase the framework functionalities, we here present a prototype design and demonstrate the main features through four use case scenarios motivated by an initiative currently implemented at a children’s hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm of the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both usability and usefulness of the implementation. Conclusions: This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners’ ability to provide explanations for the decisions they make.
37

Ji, Ke, and Hong Shen. "Jointly modeling content, social network and ratings for explainable and cold-start recommendation." Neurocomputing 218 (December 2016): 1–12. http://dx.doi.org/10.1016/j.neucom.2016.03.070.

Full text
38

Yang, Xin, Xuemeng Song, Fuli Feng, Haokun Wen, Ling-Yu Duan, and Liqiang Nie. "Attribute-wise Explainable Fashion Compatibility Modeling." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1 (April 16, 2021): 1–21. http://dx.doi.org/10.1145/3425636.

Full text
Abstract:
With the boom of the fashion market and people’s daily needs for beauty, clothing matching has gained increased research attention. In a sense, tackling this problem lies in modeling the human notions of the compatibility between fashion items, i.e., Fashion Compatibility Modeling (FCM), which plays an important role in a wide bunch of commercial applications, including clothing recommendation and dressing assistant. Recent advances in multimedia processing have shown remarkable effectiveness in accurate compatibility evaluation. However, these studies work like a black box and cannot provide appropriate explanations, which are indeed of importance for gaining users’ trust and improving their experience. In fact, fashion experts usually explain the compatibility evaluation through the matching patterns between fashion attributes (e.g., a silk tank top cannot go with a knit dress). Inspired by this, we devise an attribute-wise explainable FCM solution, named ExFCM , which can simultaneously generate the item-level compatibility evaluation for input fashion items and the attribute-level explanations for the evaluation result. In particular, ExFCM consists of two key components: attribute-wise representation learning and attribute interaction modeling. The former works on learning the region-aware attribute representation for each item with the threshold global average pooling. Besides, the latter is responsible for compiling the attribute-level matching signals into the overall compatibility evaluation adaptively with the attentive interaction mechanism. Note that ExFCM is trained without any attribute-level compatibility annotations, which facilitates its practical applications. Extensive experiments on two real-world datasets validate that ExFCM can generate more accurate compatibility evaluations than the existing methods, together with reasonable explanations.
39

Liu, Huafeng, Liping Jing, Jingxuan Wen, Pengyu Xu, Jian Yu, and Michael K. Ng. "Bayesian Additive Matrix Approximation for Social Recommendation." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–34. http://dx.doi.org/10.1145/3451391.

Full text
Abstract:
Social relations between users have been proven to be a good type of auxiliary information to improve the recommendation performance. However, it is a challenging issue to sufficiently exploit the social relations and correctly determine the user preference from both social and rating information. In this article, we propose a unified Bayesian Additive Matrix Approximation model (BAMA), which takes advantage of rating preference and social network to provide high-quality recommendation. The basic idea of BAMA is to extract social influence from social networks, integrate them to Bayesian additive co-clustering for effectively determining the user clusters and item clusters, and provide an accurate rating prediction. In addition, an efficient algorithm with collapsed Gibbs Sampling is designed to inference the proposed model. A series of experiments were conducted on six real-world social datasets. The results demonstrate the superiority of the proposed BAMA by comparing with the state-of-the-art methods from three views, all users, cold-start users, and users with few social relations. With the aid of social information, furthermore, BAMA has ability to provide the explainable recommendation.
40

Chen, Chao, Dongsheng Li, Junchi Yan, Hanchi Huang, and Xiaokang Yang. "Scalable and Explainable 1-Bit Matrix Completion via Graph Signal Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7011–19. http://dx.doi.org/10.1609/aaai.v35i8.16863.

Full text
Abstract:
One-bit matrix completion is an important class of positive-unlabeled (PU) learning problems where the observations consist of only positive examples, e.g., in top-N recommender systems. For the first time, we show that 1-bit matrix completion can be formulated as the problem of recovering clean graph signals from noise-corrupted signals in hypergraphs. This makes it possible to enjoy recent advances in graph signal learning. Then, we propose the spectral graph matrix completion (SGMC) method, which can recover the underlying matrix in distributed systems by filtering the noisy data in the graph frequency domain. Meanwhile, it can provide micro- and macro-level explanations by following vertex-frequency analysis. To tackle the computational and memory issue of performing graph signal operations on large graphs, we construct a scalable Nystrom algorithm which can efficiently compute orthonormal eigenvectors. Furthermore, we also develop polynomial and sparse frequency filters to remedy the accuracy loss caused by the approximations. We demonstrate the effectiveness of our algorithms on top-N recommendation tasks, and the results on three large-scale real-world datasets show that SGMC can outperform state-of-the-art top-N recommendation algorithms in accuracy while only requiring a small fraction of training time compared to the baselines.
41

Weber, Ingmar, and Venkata Rama Kiran Garimella. "Using Co-Following for Personalized Out-of-Context Twitter Friend Recommendation." Proceedings of the International AAAI Conference on Web and Social Media 8, no. 1 (May 16, 2014): 654–55. http://dx.doi.org/10.1609/icwsm.v8i1.14497.

Full text
Abstract:
We present two demos that give personalized "out-of-context" recommendations of Twitter users to follow. By out-of-context we mean that a user wants to receive recommendations on, say, musicians to follow even though the user's tweets' contents and social links have no connection to the "context" of music. In this setting, where a user has never expressed interest in the context of music, many existing methods fail. Our approach exploits co-following information and hidden correlations where, say, a user's political preference might actually provide clues about their likely music preference. We implement this framework in two very distinct settings: one for recommending musicians and one for recommending political parties in Tunisia. Our framework is simple and similar to Amazon's "users who bought X also bought Y" and can be used not only for explainable out-of-context recommendations but also for social studies on, say, which music is "closest" to users of a particular political affiliation. It also helps to introduce and to "link" a user to an unknown domain, say, politics in Tunisia.
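In the spirit of the "users who follow X also follow Y" analogy, here is a minimal, made-up co-following sketch (the handles, scoring rule and overlap weighting are illustrative assumptions, not the authors' system):

```python
from collections import Counter

# Toy co-following data: who each user follows (handles are invented for illustration).
follows = {
    "alice": {"@philosopher", "@indie_band", "@jazz_trio"},
    "bob": {"@philosopher", "@jazz_trio"},
    "carol": {"@philosopher", "@rock_band"},
    "dave": {"@indie_band", "@jazz_trio", "@rock_band"},
}

def out_of_context_recs(target, candidates, k=2):
    """'Users who follow what you follow also follow ...' with overlap counts as the explanation."""
    scores = Counter()
    for other, accounts in follows.items():
        if other == target:
            continue
        overlap = follows[target] & accounts          # shared followings, possibly out of context
        for acc in accounts - follows[target]:
            if acc in candidates:
                scores[acc] += len(overlap)           # weight candidates by co-following strength
    return scores.most_common(k)

# Recommend music accounts to a user whose own profile says nothing about music taste.
music_candidates = {"@indie_band", "@jazz_trio", "@rock_band"}
for account, score in out_of_context_recs("carol", music_candidates):
    print(f"recommend {account}: {score} co-following link(s) via shared accounts such as "
          f"{sorted(follows['carol'])[0]}")
```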
42

Xi, Jianing, Dan Wang, Xuebing Yang, Wensheng Zhang, and Qinghua Huang. "Cancer omic data based explainable AI drug recommendation inference: A traceability perspective for explainability." Biomedical Signal Processing and Control 79 (January 2023): 104144. http://dx.doi.org/10.1016/j.bspc.2022.104144.

Full text
43

Abu-Rasheed, Hasan, Christian Weber, Johannes Zenkert, Mareike Dornhöfer, and Madjid Fathi. "Transferrable Framework Based on Knowledge Graphs for Generating Explainable Results in Domain-Specific, Intelligent Information Retrieval." Informatics 9, no. 1 (January 19, 2022): 6. http://dx.doi.org/10.3390/informatics9010006.

Full text
Abstract:
In modern industrial systems, collected textual data accumulates over time, offering an important source of information for enhancing present and future industrial practices. Although many AI-based solutions have been developed in the literature for a domain-specific information retrieval (IR) from this data, the explainability of these systems was rarely investigated in such domain-specific environments. In addition to considering the domain requirements within an explainable intelligent IR, transferring the explainable IR algorithm to other domains remains an open-ended challenge. This is due to the high costs, which are associated with intensive customization and required knowledge modelling, when developing new explainable solutions for each industrial domain. In this article, we present a transferable framework for generating domain-specific explanations for intelligent IR systems. The aim of our work is to provide a comprehensive approach for constructing explainable IR and recommendation algorithms, which are capable of adopting to domain requirements and are usable in multiple domains at the same time. Our method utilizes knowledge graphs (KG) for modeling the domain knowledge. The KG provides a solid foundation for developing intelligent IR solutions. Utilizing the same KG, we develop graph-based components for generating textual and visual explanations of the retrieved information, taking into account the domain requirements and supporting the transferability to other domain-specific environments, through the structured approach. The use of the KG resulted in minimum-to-zero adjustments when creating explanations for multiple intelligent IR algorithms in multiple domains. We test our method within two different use cases, a semiconductor manufacturing centered use case and a job-to-applicant matching one. Our quantitative results show a high capability of our approach to generate high-level explanations for the end users. In addition, the developed explanation components were highly adaptable to both industrial domains without sacrificing the overall accuracy of the intelligent IR algorithm. Furthermore, a qualitative user-study was conducted. We recorded a high level of acceptance from the users, who reported an enhanced overall experience with the explainable IR system.
44

Caro-Martínez, Marta, Guillermo Jiménez-Díaz, and Juan A. Recio-García. "Conceptual Modeling of Explainable Recommender Systems: An Ontological Formalization to Guide Their Design and Development." Journal of Artificial Intelligence Research 71 (July 24, 2021): 557–89. http://dx.doi.org/10.1613/jair.1.12789.

Full text
Abstract:
With the increasing importance of e-commerce and the immense variety of products, users need help to decide which ones are the most interesting to them. This is one of the main goals of recommender systems. However, users’ trust may be compromised if they do not understand how or why the recommendation was achieved. Here, explanations are essential to improve user confidence in recommender systems and to make the recommendation useful. Providing explanation capabilities into recommender systems is not an easy task as their success depends on several aspects such as the explanation’s goal, the user’s expectation, the knowledge available, or the presentation method. Therefore, this work proposes a conceptual model to alleviate this problem by defining the requirements of explanations for recommender systems. Our goal is to provide a model that guides the development of effective explanations for recommender systems as they are correctly designed and suited to the user’s needs. Although earlier explanation taxonomies sustain this work, our model includes new concepts not considered in previous works. Moreover, we make a novel contribution regarding the formalization of this model as an ontology that can be integrated into the development of proper explanations for recommender systems.
45

Kakadiya, Ashutosh, Sriraam Natarajan, and Balaraman Ravindran. "Relational Boosted Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12123–30. http://dx.doi.org/10.1609/aaai.v35i13.17439.

Full text
Abstract:
Contextual bandits algorithms have become essential in real-world user interaction problems in recent years. However, these algorithms represent context as attribute value representation, which makes them infeasible for real world domains like social networks, which are inherently relational. We propose Relational Boosted Bandits (RB2), a contextual bandits algorithm for relational domains based on (relational) boosted trees. RB2 enables us to learn interpretable and explainable models due to the more descriptive nature of the relational representation. We empirically demonstrate the effectiveness and interpretability of RB2 on tasks such as link prediction, relational classification, and recommendation.
46

Zakharova, I. G., M. S. Vorobeva, and Yu V. Boganyuk. "Support of individual educational trajectories based on the concept of explainable artificial intelligence." Education and science journal 24, no. 1 (January 18, 2022): 163–90. http://dx.doi.org/10.17853/1994-5639-2022-1-163-190.

Full text
Abstract:
Introduction. Professional education in the context of individual educational trajectories (IET) meets the needs of both students themselves and the labour market due to the relevance of the content, flexibility of the educational process and learning technologies. However, in the context of digitalisation, IET support, including their planning and subsequent management of learning, entails the emergence of new requirements for information, analytical and methodological support of information systems designed to manage the educational process of the university. The problem of this study is determined by the contradiction between the intensive growth (natural for digitalisation) in the volume and variety of types of collected data, which can and should be used to support IET. In addition, there is also a lack of adequate analytical tools in educational information management systems.Aim. The present research aimed to study and test the digitalisation methodology for IET support, based on the application of the concept of explainable artificial intelligence for analysing student digital footprint data, the content of documents regulating the educational process, as well as labour market demands.Research methodology and methods. As a theoretical basis for the study, the authors relied on the principles of explainable artificial intelligence and their application to the interpretation of data from the educational process and the prediction of educational outcomes. The methods of intellectual analysis of texts in natural language were employed for preliminary processing of source documents. To predict educational outcomes, the authors used clustering, classification and regression models created through applying machine learning methods.Results. The authors developed and studied predictive models with the subsequent formation of recommendations for the tasks of choosing an educational programme by applicants, choosing an elective discipline, forming a team for a group project and employment in accordance with professional competencies. The developed computer program automatically generates objective and explainable recommendations based on expert knowledge and predicting results. The algorithm for constructing recommendations is divided into stages and provides for variability in decision making.Scientific novelty. The authors proposed a methodology for digital support of IET, corresponding to the principles of explainable artificial intelligence, i.e. machine learning models predict educational outcomes, and a special algorithm automatically generates personalised recommendations based on the results of the analysis of data on the educational process. The developed approach confirmed its effectiveness in testing on the example of bachelor’s and master’s degree programmes in the field of computer science, information technology and information security.Practical significance. A preliminary analysis of significant volumes of initial data made it possible to obtain objective information about the data quality, including the content and structure of documents presented in various university information systems. Based on the the oretical results of the research, the authors developed a recommendation system. It included special services for students, teaching staff, tutors, and administrators, providing visual and user-oriented predictive results and recommendations. 
Testing of services at the Institute of Mathematics and Computer Science of University of Tyumen confirmed the feasibility of developing the functionality of the university information systems in the direction of collecting and analysing data from a student’s digital footprint and the relevance of this analysis results both by subjects of the educational process and by the labour market.
47

Yang, Nakyeong, Jeongje Jo, Myeongjun Jeon, Wooju Kim, and Juyoung Kang. "Semantic and explainable research-related recommendation system based on semi-supervised methodology using BERT and LDA models." Expert Systems with Applications 190 (March 2022): 116209. http://dx.doi.org/10.1016/j.eswa.2021.116209.

Full text
48

Blanes-Selva, Vicent, Ascensión Doñate-Martínez, Gordon Linklater, Jorge Garcés-Ferrer, and Juan M. García-Gómez. "Responsive and Minimalist App Based on Explainable AI to Assess Palliative Care Needs during Bedside Consultations on Older Patients." Sustainability 13, no. 17 (September 2, 2021): 9844. http://dx.doi.org/10.3390/su13179844.

Full text
Abstract:
Palliative care is an alternative to standard care for gravely ill patients that has demonstrated many clinical benefits in cost-effective interventions. It is expected to grow in demand soon, so it is necessary to detect those patients who may benefit from these programs using a personalised objective criterion at the correct time. Our goal was to develop a responsive and minimalist web application embedding a 1-year mortality explainable predictive model to assess palliative care at bedside consultation. A 1-year mortality predictive model has been trained. We ranked the input variables and evaluated models with an increasing number of variables. We selected the model with the seven most relevant variables. Finally, we created a responsive, minimalist and explainable app to support bedside decision making for older palliative care. The selected variables are age, medication, Charlson, Barthel, urea, RDW-SD and metastatic tumour. The predictive model achieved an AUC ROC of 0.83 [CI: 0.82, 0.84]. A Shapley value graph was used for explainability. The app allows identifying patients in need of palliative care using the bad prognosis criterion, which can be a useful, easy and quick tool to support healthcare professionals in obtaining a fast recommendation in order to allocate health resources efficiently.
49

Khrais, Laith T. "Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce." Future Internet 12, no. 12 (December 8, 2020): 226. http://dx.doi.org/10.3390/fi12120226.

Full text
Abstract:
The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalization and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems in e-commerce, its ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted the use of word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of the idea of explainability as has been utilized by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggested that, to deploy explainable XAI systems, ML models should be improved, making them interpretable and comprehensible.
50

Shimizu, Ryotaro, Megumi Matsutani, and Masayuki Goto. "An explainable recommendation framework based on an improved knowledge graph attention network with massive volumes of side information." Knowledge-Based Systems 239 (March 2022): 107970. http://dx.doi.org/10.1016/j.knosys.2021.107970.

Full text
