Academic literature on the topic 'Explainable recommendation systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explainable recommendation systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Explainable recommendation systems"

1

Pasrija, Vatesh, and Supriya Pasrija. "Demystifying Recommendations: Transparency and Explainability in Recommendation Systems." International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (2024): 1376–83. http://dx.doi.org/10.22214/ijraset.2024.58541.

Abstract:
Recommendation algorithms are widely used; however, many consumers want more clarity on why specific goods are recommended to them. The absence of explainability jeopardizes user trust, satisfaction, and potentially privacy. Improving transparency is difficult and involves the need for flexible interfaces, privacy protection, scalability, and customization. Explainable recommendations provide substantial advantages such as enhancing relevance assessment, bolstering user interactions, facilitating system monitoring, and fostering accountability. Typical methods include giving summaries of the fundamental approach, emphasizing significant data points, and utilizing hybrid recommendation models. Case studies demonstrate that openness has assisted platforms such as YouTube and Spotify in achieving more than a 10% increase in key metrics like click-through rate. Additional research should broaden the methods for providing explanations, increase real-world implementation in other industries, guarantee human-centered supervision of suggestions, and promote consumer trust by following ethical standards. Accurate recommendations remain essential; the future involves developing technologies that inform users while honoring their autonomy.
2

Lai, Kai-Huang, Zhe-Rui Yang, Pei-Yuan Lai, Chang-Dong Wang, Mohsen Guizani, and Min Chen. "Knowledge-Aware Explainable Reciprocal Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8636–44. http://dx.doi.org/10.1609/aaai.v38i8.28708.

Abstract:
Reciprocal recommender systems (RRS) have been widely used in online platforms such as online dating and recruitment. They can simultaneously fulfill the needs of both parties involved in the recommendation process. Due to the inherent nature of the task, interaction data is relatively sparse compared to other recommendation tasks. Existing works mainly address this issue through content-based recommendation methods. However, these methods often implicitly model textual information from a unified perspective, making it challenging to capture the distinct intentions held by each party, which further leads to limited performance and the lack of interpretability. In this paper, we propose a Knowledge-Aware Explainable Reciprocal Recommender System (KAERR), which models metapaths between two parties independently, considering their respective perspectives and requirements. Various metapaths are fused using an attention-based mechanism, where the attention weights unveil dual-perspective preferences and provide recommendation explanations for both parties. Extensive experiments on two real-world datasets from diverse scenarios demonstrate that the proposed model outperforms state-of-the-art baselines, while also delivering compelling reasons for recommendations to both parties.
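To make the attention-based metapath fusion concrete, here is a minimal PyTorch sketch. It is not the authors' KAERR code; the module name, dimensions, and scoring network are assumptions. It shows only the core idea: softmax attention over one party's metapath embeddings, with the attention weights doubling as per-metapath explanation scores.

```python
import torch
import torch.nn as nn

class MetapathAttentionFusion(nn.Module):
    """Fuse several metapath embeddings with learned attention weights.

    The weights can be read back as per-metapath importance scores,
    which is what makes the fused representation explainable.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, dim),
            nn.Tanh(),
            nn.Linear(dim, 1),
        )

    def forward(self, path_embs: torch.Tensor):
        # path_embs: [num_metapaths, dim] for one party
        weights = torch.softmax(self.score(path_embs).squeeze(-1), dim=0)
        fused = (weights.unsqueeze(-1) * path_embs).sum(dim=0)
        return fused, weights  # weights double as explanation scores

# Toy usage: three metapaths, 16-dim embeddings.
fusion = MetapathAttentionFusion(dim=16)
fused, w = fusion(torch.randn(3, 16))
print(w)  # e.g. the highest-weight metapath explains the match
```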
3

Leal, Fátima, Bruno Veloso, Benedita Malheiro, Juan C. Burguillo, Adriana E. Chis, and Horacio González-Vélez. "Stream-based explainable recommendations via blockchain profiling." Integrated Computer-Aided Engineering 29, no. 1 (2021): 105–21. http://dx.doi.org/10.3233/ica-210668.

Abstract:
Explainable recommendations enable users to understand why certain items are suggested and, ultimately, nurture system transparency, trustworthiness, and confidence. Large crowdsourcing recommendation systems ought to crucially promote authenticity and transparency of recommendations. To address this challenge, this paper proposes the use of stream-based explainable recommendations via blockchain profiling. Our contribution relies on chained historical data to improve the quality and transparency of online collaborative recommendation filters – memory-based and model-based – using, as use cases, data streamed from two large tourism crowdsourcing platforms, namely Expedia and TripAdvisor. Building historical trust-based models of raters, our method is implemented as an external module and integrated with the collaborative filter through a post-recommendation component. The inter-user trust profiling history, traceability, and authenticity are ensured by blockchain, since these profiles are stored as a smart contract in a private Ethereum network. Our empirical evaluation with HotelExpedia and TripAdvisor has consistently shown the positive impact of blockchain-based profiling on the quality (measured as recall) and transparency (determined via explanations) of recommendations.
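The chained-profile idea can be sketched without any blockchain stack. The fragment below is only an illustration of why chaining makes rater trust histories tamper-evident; the paper itself stores profiles as a smart contract on a private Ethereum network, which this stand-alone Python hash chain does not attempt to reproduce, and the field names are invented.

```python
import hashlib
import json
import time

def chain_trust_update(prev_hash: str, rater_id: str, trust: float) -> dict:
    """Append one trust-profile update to a tamper-evident chain.

    Each record commits to the hash of its predecessor, so rewriting
    an old trust score would invalidate every later record.
    """
    record = {
        "rater": rater_id,
        "trust": trust,
        "ts": time.time(),
        "prev": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

genesis = "0" * 64
r1 = chain_trust_update(genesis, "rater_42", 0.83)
r2 = chain_trust_update(r1["hash"], "rater_42", 0.79)
# Altering r1 after the fact would no longer match r2["prev"].
```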
4

Yang, Mengyuan, Mengying Zhu, Yan Wang, et al. "Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 9250–59. http://dx.doi.org/10.1609/aaai.v38i8.28777.

Abstract:
Large language model-based explainable recommendation (LLM-based ER) systems can provide remarkably human-like explanations and have received wide attention from researchers. However, the original LLM-based ER systems face three low-quality problems in their generated explanations, i.e., lack of personalization, inconsistency, and questionable explanation data. To address these problems, we propose a novel LLM-based ER model, denoted LLM2ER, to serve as a backbone, and devise two innovative explainable-quality reward models for fine-tuning this backbone in a reinforcement learning paradigm, ultimately yielding a fine-tuned model, denoted LLM2ER-EQR, which can provide high-quality explanations. LLM2ER-EQR can generate personalized, informative, and consistent high-quality explanations learned from questionable-quality explanation datasets. Extensive experiments conducted on three real-world datasets demonstrate that our model can generate fluent, diverse, informative, and highly personalized explanations.
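Reward-model-guided fine-tuning of an explanation generator typically reduces to a policy-gradient update. The sketch below shows a generic REINFORCE-style step under that assumption; the scalar reward stands in for the output of the paper's explainable-quality reward models, which are not reproduced here, and all names are illustrative.

```python
import torch

def reinforce_step(logprobs: torch.Tensor, reward: float, baseline: float) -> torch.Tensor:
    """Policy-gradient loss for one sampled explanation.

    logprobs: per-token log-probabilities of the sampled explanation
    under the current generator. The reward would come from an
    explanation-quality reward model; here it is just a float.
    """
    advantage = reward - baseline
    # Maximizing expected reward == minimizing -(advantage * log-likelihood).
    return -advantage * logprobs.sum()

# Toy usage with a trainable 3-token "policy".
logits = torch.randn(3, requires_grad=True)
logprobs = torch.log_softmax(logits, dim=-1)
loss = reinforce_step(logprobs, reward=0.9, baseline=0.5)
loss.backward()  # gradients would update the generator in a real loop
```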
5

Ai, Qingyao, Vahid Azizi, Xu Chen, and Yongfeng Zhang. "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation." Algorithms 11, no. 9 (2018): 137. http://dx.doi.org/10.3390/a11090137.

Abstract:
Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms, especially the collaborative filtering (CF)-based approaches with shallow or deep models, usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various kinds of implicit or explicit feedback. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When explicit knowledge about users and items is considered for recommendation, the system can provide highly customized recommendations based on users' historical behaviors, and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) shed light on this problem, making it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation and, based on the embedded knowledge base, a soft matching algorithm to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.
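The soft-matching idea can be illustrated with a TransE-style translation: among candidate knowledge-base relations, pick the one whose translation best connects the user and item embeddings, and use it to instantiate an explanation template. This is a loose sketch under that assumption, not the paper's scoring function; the relation names and dimensions below are made up.

```python
import numpy as np

def soft_match(user_vec, item_vec, relations):
    """Pick the relation r that best 'explains' a user-item pair
    under a TransE-style translation: user + r ≈ item.

    relations: dict mapping relation name -> embedding vector.
    """
    scores = {
        name: -np.linalg.norm(user_vec + r - item_vec)  # higher is better
        for name, r in relations.items()
    }
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
d = 8
rels = {
    "also_bought": rng.normal(size=d),
    "mentions_word": rng.normal(size=d),
    "same_brand": rng.normal(size=d),
}
best = soft_match(rng.normal(size=d), rng.normal(size=d), rels)
print(f"explanation template chosen via relation: {best}")
```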
6

Cho, Gyungah, Pyoung-seop Shim, and Jaekwang Kim. "Explainable B2B Recommender System for Potential Customer Prediction Using KGAT." Electronics 12, no. 17 (2023): 3536. http://dx.doi.org/10.3390/electronics12173536.

Abstract:
The adoption of recommender systems in business-to-business (B2B) settings can make the management of companies more efficient. Although the importance of recommendation is increasing with the expansion of B2B e-commerce, few studies on B2B recommendation have been conducted. Due to several differences between B2B and business-to-consumer (B2C) commerce, the B2B recommender system should be defined differently. This paper presents a new perspective on the explainable B2B recommender system using the knowledge graph attention network for recommendation (KGAT). Unlike traditional recommendation systems that suggest products to consumers, this study focuses on recommending potential buyers to sellers. Additionally, the use of KGAT's attention mechanism enables the provision of explanations for each company's recommendations. The Korea Electronic Taxation System Association provides the Market Transaction Dataset in South Korea, and this research shows how the dataset is utilized in the knowledge graph (KG). The main tasks can be summarized in three points: (i) suggesting the application of an explainable recommender system in B2B for recommending potential customers, (ii) extracting the performance-enhancing features of a knowledge graph, and (iii) enhancing keyword extraction for trading items to improve recommendation performance. We anticipate that B2B recommendation for potential-customer prediction can provide good insight into the development of the industry.
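Once a KGAT-style layer has produced attention weights over knowledge-graph triples, turning them into per-company explanations is a small final step. The helper below is a hypothetical illustration of that step only; the entity and relation names are invented, and nothing from KGAT itself is implemented.

```python
def explain_recommendation(triples, weights, top_k=1):
    """Turn the highest-attention KG triples into explanation strings.

    triples: list of (head, relation, tail) tuples touching the
    recommended company; weights: matching attention scores, e.g.
    taken from a KGAT-style attention layer.
    """
    ranked = sorted(zip(weights, triples), reverse=True)[:top_k]
    return [f"Recommended because {h} -[{r}]-> {t} (weight {w:.2f})"
            for w, (h, r, t) in ranked]

msgs = explain_recommendation(
    [("BuyerCo", "trades_item", "steel pipe"),
     ("BuyerCo", "same_region", "SellerCo")],
    [0.71, 0.29])
print(msgs[0])  # "Recommended because BuyerCo -[trades_item]-> ..."
```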
7

Wang, Tongxuan, Xiaolong Zheng, Saike He, Zhu Zhang, and Desheng Dash Wu. "Learning user-item paths for explainable recommendation." IFAC-PapersOnLine 53, no. 5 (2020): 436–40. http://dx.doi.org/10.1016/j.ifacol.2021.04.119.

8

Guesmi, Mouadh, Mohamed Amine Chatti, Shoeb Joarder, et al. "Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System." Information 14, no. 7 (2023): 401. http://dx.doi.org/10.3390/info14070401.

Abstract:
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
9

Huang, Xiao, Pengjie Ren, Zhaochun Ren, et al. "Report on the international workshop on natural language processing for recommendations (NLP4REC 2020) workshop held at WSDM 2020." ACM SIGIR Forum 54, no. 1 (2020): 1–5. http://dx.doi.org/10.1145/3451964.3451970.

Abstract:
This paper summarizes the outcomes of the International Workshop on Natural Language Processing for Recommendations (NLP4REC 2020), held in Houston, USA, on February 7, 2020, during WSDM 2020. The purpose of this workshop was to explore the potential research topics and industrial applications in leveraging natural language processing techniques to tackle the challenges in constructing more intelligent recommender systems. Specific topics included, but were not limited to, knowledge-aware recommendation, explainable recommendation, conversational recommendation, and sequential recommendation.
10

Li, Lei, Yongfeng Zhang, and Li Chen. "Personalized Prompt Learning for Explainable Recommendation." ACM Transactions on Information Systems 41, no. 4 (2023): 1–26. http://dx.doi.org/10.1145/3580488.

Abstract:
Providing user-understandable explanations to justify recommendations could help users better understand the recommended items, increase the system's ease of use, and gain users' trust. A typical approach to realizing this is natural language generation. However, previous works mostly adopt recurrent neural networks to this end, leaving the potentially more effective pre-trained Transformer models under-explored. In fact, user and item IDs, as important identifiers in recommender systems, are inherently in a different semantic space from the words on which pre-trained models were trained. Thus, how to effectively fuse IDs into such models becomes a critical issue. Inspired by recent advancements in prompt learning, we come up with two solutions: finding alternative words to represent IDs (called discrete prompt learning) and directly inputting ID vectors to a pre-trained model (termed continuous prompt learning). In the latter case, ID vectors are randomly initialized while the model has already been trained on large corpora, so they are actually at different learning stages. To bridge the gap, we further propose two training strategies: sequential tuning and recommendation as regularization. Extensive experiments show that our continuous prompt learning approach, equipped with the training strategies, consistently outperforms strong baselines on three datasets for explainable recommendation.
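The continuous prompt learning variant can be sketched as follows: user and item ID embeddings are trained from scratch and prepended to the token embeddings fed into a pretrained language model. This is a minimal sketch, assuming a decoder that accepts precomputed input embeddings (as Hugging Face models do via inputs_embeds); the class and parameter names are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class ContinuousPromptER(nn.Module):
    """Prepend trainable user/item ID vectors to a language model's
    input embeddings, in the spirit of continuous prompt learning.

    `lm` stands for any pretrained decoder that accepts an
    `inputs_embeds` argument; it is a placeholder here.
    """

    def __init__(self, lm, n_users: int, n_items: int, dim: int):
        super().__init__()
        self.lm = lm
        self.user_emb = nn.Embedding(n_users, dim)  # randomly initialized
        self.item_emb = nn.Embedding(n_items, dim)  # randomly initialized

    def forward(self, user_ids, item_ids, token_embs):
        # token_embs: [batch, seq, dim] embeddings of the explanation text
        prompt = torch.stack(
            [self.user_emb(user_ids), self.item_emb(item_ids)], dim=1)
        # Concatenate the two-ID soft prompt in front of the text tokens.
        return self.lm(inputs_embeds=torch.cat([prompt, token_embs], dim=1))
```

Because the ID embeddings start untrained while the language model is already pretrained, a schedule like the paper's sequential tuning (train the prompt first, then unfreeze the model) is what bridges the two learning stages.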