
Journal articles on the topic "Explainable recommendation systems"

See the top 50 journal articles for research on the topic "Explainable recommendation systems".


Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Pasrija, Vatesh, and Supriya Pasrija. "Demystifying Recommendations: Transparency and Explainability in Recommendation Systems." International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 1376–83. http://dx.doi.org/10.22214/ijraset.2024.58541.

Abstract: Recommendation algorithms are widely used, however many consumers want more clarity on why specific goods are recommended to them. The absence of explainability jeopardizes user trust, satisfaction, and potentially privacy. Improving transparency is difficult and involves the need for flexible interfaces, privacy protection, scalability, and customisation. Explainable recommendations provide substantial advantages such as enhancing relevance assessment, bolstering user interactions, facilitating system monitoring, and fostering accountability. Typical methods include giving summaries of the fundamental approach, emphasizing significant data points, and utilizing hybrid recommendation models. Case studies demonstrate that openness has assisted platforms such as YouTube and Spotify in achieving more than a 10% increase in key metrics like click-through rate. Additional research should broaden the methods for providing explanations, increase real-world implementation in other industries, guarantee human-centered supervision of suggestions, and promote consumer trust by following ethical standards. Accurate recommendations are essential. The future involves developing technologies that provide users with information while honoring their autonomy.
2

Lai, Kai-Huang, Zhe-Rui Yang, Pei-Yuan Lai, Chang-Dong Wang, Mohsen Guizani, and Min Chen. "Knowledge-Aware Explainable Reciprocal Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8636–44. http://dx.doi.org/10.1609/aaai.v38i8.28708.

Abstract:
Reciprocal recommender systems (RRS) have been widely used in online platforms such as online dating and recruitment. They can simultaneously fulfill the needs of both parties involved in the recommendation process. Due to the inherent nature of the task, interaction data is relatively sparse compared to other recommendation tasks. Existing works mainly address this issue through content-based recommendation methods. However, these methods often implicitly model textual information from a unified perspective, making it challenging to capture the distinct intentions held by each party, which further leads to limited performance and the lack of interpretability. In this paper, we propose a Knowledge-Aware Explainable Reciprocal Recommender System (KAERR), which models metapaths between two parties independently, considering their respective perspectives and requirements. Various metapaths are fused using an attention-based mechanism, where the attention weights unveil dual-perspective preferences and provide recommendation explanations for both parties. Extensive experiments on two real-world datasets from diverse scenarios demonstrate that the proposed model outperforms state-of-the-art baselines, while also delivering compelling reasons for recommendations to both parties.
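As a rough illustration of the attention-based metapath fusion this abstract describes, the sketch below weights a few metapath embeddings with softmax attention and reads the weights back as per-relation explanations; the metapath names, dimensionality, and vectors are hypothetical placeholders, not KAERR's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8  # embedding size (illustrative)

# One embedding per metapath connecting the two parties, e.g. in a job-matching scenario
metapaths = ["candidate-skill-job", "candidate-city-job", "candidate-industry-job"]
path_emb = rng.normal(size=(len(metapaths), d))   # hypothetical metapath representations

# Attention scores come from a learned query vector; here it is random for illustration
query = rng.normal(size=d)
weights = softmax(path_emb @ query)

# Fused representation used for the reciprocal matching score
fused = weights @ path_emb

# The attention weights double as an explanation of which relation mattered most
for name, w in sorted(zip(metapaths, weights), key=lambda t: -t[1]):
    print(f"{name}: weight={w:.2f}")
```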
3

Leal, Fátima, Bruno Veloso, Benedita Malheiro, Juan C. Burguillo, Adriana E. Chis, and Horacio González-Vélez. "Stream-based explainable recommendations via blockchain profiling." Integrated Computer-Aided Engineering 29, no. 1 (December 28, 2021): 105–21. http://dx.doi.org/10.3233/ica-210668.

Abstract:
Explainable recommendations enable users to understand why certain items are suggested and, ultimately, nurture system transparency, trustworthiness, and confidence. Large crowdsourcing recommendation systems ought to crucially promote authenticity and transparency of recommendations. To address such challenge, this paper proposes the use of stream-based explainable recommendations via blockchain profiling. Our contribution relies on chained historical data to improve the quality and transparency of online collaborative recommendation filters – Memory-based and Model-based – using, as use cases, data streamed from two large tourism crowdsourcing platforms, namely Expedia and TripAdvisor. Building historical trust-based models of raters, our method is implemented as an external module and integrated with the collaborative filter through a post-recommendation component. The inter-user trust profiling history, traceability and authenticity are ensured by blockchain, since these profiles are stored as a smart contract in a private Ethereum network. Our empirical evaluation with HotelExpedia and Tripadvisor has consistently shown the positive impact of blockchain-based profiling on the quality (measured as recall) and transparency (determined via explanations) of recommendations.
4

Yang, Mengyuan, Mengying Zhu, Yan Wang, Linxun Chen, Yilei Zhao, Xiuyuan Wang, Bing Han, Xiaolin Zheng, and Jianwei Yin. "Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9250–59. http://dx.doi.org/10.1609/aaai.v38i8.28777.

Abstract:
Large language model-based explainable recommendation (LLM-based ER) systems can provide remarkable human-like explanations and have widely received attention from researchers. However, the original LLM-based ER systems face three low-quality problems in their generated explanations, i.e., lack of personalization, inconsistency, and questionable explanation data. To address these problems, we propose a novel LLM-based ER model denoted as LLM2ER to serve as a backbone and devise two innovative explainable quality reward models for fine-tuning such a backbone in a reinforcement learning paradigm, ultimately yielding a fine-tuned model denoted as LLM2ER-EQR, which can provide high-quality explanations. LLM2ER-EQR can generate personalized, informative, and consistent high-quality explanations learned from questionable-quality explanation datasets. Extensive experiments conducted on three real-world datasets demonstrate that our model can generate fluent, diverse, informative, and highly personalized explanations.
5

Ai, Qingyao, Vahid Azizi, Xu Chen, and Yongfeng Zhang. "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation." Algorithms 11, no. 9 (September 13, 2018): 137. http://dx.doi.org/10.3390/a11090137.

Abstract:
Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms—especially the collaborative filtering (CF)- based approaches with shallow or deep models—usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users’ historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.
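The "soft matching" idea over knowledge-base embeddings can be sketched as translation-style scoring, where a user vector plus a relation vector should land near the embedding of a relevant item and the best-scoring relation doubles as the explanation; the entities, relations, and dimensions below are invented for illustration and do not reproduce the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Hypothetical embeddings for a user, candidate items, and explainable relations
user = rng.normal(size=d)
items = {"camera": rng.normal(size=d), "tripod": rng.normal(size=d), "novel": rng.normal(size=d)}
relations = {"also_bought": rng.normal(size=d), "mentions_word:portable": rng.normal(size=d)}

def score(head, rel, tail):
    # Translation-style plausibility: the closer head + rel is to tail, the higher the score
    return -np.linalg.norm(head + rel - tail)

# Recommend the item best reached from the user through any relation
best = max(((i, r, score(user, rv, iv)) for i, iv in items.items() for r, rv in relations.items()),
           key=lambda t: t[2])
item, rel, s = best
print(f"recommend {item!r} because of relation {rel!r} (score {s:.2f})")
```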
6

Cho, Gyungah, Pyoung-seop Shim, and Jaekwang Kim. "Explainable B2B Recommender System for Potential Customer Prediction Using KGAT." Electronics 12, no. 17 (August 22, 2023): 3536. http://dx.doi.org/10.3390/electronics12173536.

Abstract:
The adoption of recommender systems in business-to-business (B2B) can make the management of companies more efficient. Although the importance of recommendation is increasing with the expansion of B2B e-commerce, not enough studies on B2B recommendations have been conducted. Due to several differences between B2B and business-to-consumer (B2C), the B2B recommender system should be defined differently. This paper presents a new perspective on the explainable B2B recommender system using the knowledge graph attention network for recommendation (KGAT). Unlike traditional recommendation systems that suggest products to consumers, this study focuses on recommending potential buyers to sellers. Additionally, the utilization of the KGAT attention mechanisms enables the provision of explanations for each company’s recommendations. The Korea Electronic Taxation System Association provides the Market Transaction Dataset in South Korea, and this research shows how the dataset is utilized in the knowledge graph (KG). The main tasks can be summarized in three points: (i) suggesting the application of an explainable recommender system in B2B for recommending potential customers, (ii) extracting the performance-enhancing features of a knowledge graph, and (iii) enhancing keyword extraction for trading items to improve recommendation performance. We can anticipate providing good insight into the development of the industry via the utilization of the B2B recommendation of potential customer prediction.
7

Wang, Tongxuan, Xiaolong Zheng, Saike He, Zhu Zhang, and Desheng Dash Wu. "Learning user-item paths for explainable recommendation." IFAC-PapersOnLine 53, no. 5 (2020): 436–40. http://dx.doi.org/10.1016/j.ifacol.2021.04.119.

8

Guesmi, Mouadh, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, Clara Siepmann, Hoda Ghanbarzadeh, and Rawaa Alatrash. "Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System." Information 14, no. 7 (July 14, 2023): 401. http://dx.doi.org/10.3390/info14070401.

Abstract:
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
9

Huang, Xiao, Pengjie Ren, Zhaochun Ren, Fei Sun, Xiangnan He, Dawei Yin, and Maarten de Rijke. "Report on the international workshop on natural language processing for recommendations (NLP4REC 2020) workshop held at WSDM 2020." ACM SIGIR Forum 54, no. 1 (June 2020): 1–5. http://dx.doi.org/10.1145/3451964.3451970.

Abstract:
This paper summarizes the outcomes of the International Workshop on Natural Language Processing for Recommendations (NLP4REC 2020), held in Houston, USA, on February 7, 2020, during WSDM 2020. The purpose of this workshop was to explore the potential research topics and industrial applications in leveraging natural language processing techniques to tackle the challenges in constructing more intelligent recommender systems. Specific topics included, but were not limited to knowledge-aware recommendation, explainable recommendation, conversational recommendation, and sequential recommendation.
10

Li, Lei, Yongfeng Zhang, and Li Chen. "Personalized Prompt Learning for Explainable Recommendation." ACM Transactions on Information Systems 41, no. 4 (March 23, 2023): 1–26. http://dx.doi.org/10.1145/3580488.

Abstract:
Providing user-understandable explanations to justify recommendations could help users better understand the recommended items, increase the system’s ease of use, and gain users’ trust. A typical approach to realize it is natural language generation. However, previous works mostly adopt recurrent neural networks to meet the ends, leaving the potentially more effective pre-trained Transformer models under-explored. In fact, user and item IDs, as important identifiers in recommender systems, are inherently in different semantic space as words that pre-trained models were already trained on. Thus, how to effectively fuse IDs into such models becomes a critical issue. Inspired by recent advancement in prompt learning, we come up with two solutions: find alternative words to represent IDs (called discrete prompt learning) and directly input ID vectors to a pre-trained model (termed continuous prompt learning). In the latter case, ID vectors are randomly initialized but the model is trained in advance on large corpora, so they are actually in different learning stages. To bridge the gap, we further propose two training strategies: sequential tuning and recommendation as regularization. Extensive experiments show that our continuous prompt learning approach equipped with the training strategies consistently outperforms strong baselines on three datasets of explainable recommendation.
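A minimal sketch of the continuous prompt idea described here: randomly initialized user and item ID vectors are prepended as soft prompt tokens to the word embeddings that a pretrained Transformer would consume. The vocabulary, table sizes, and the absence of an actual language model are simplifying assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
d_model = 32              # hidden size of the (hypothetical) pretrained model
vocab = {"great": 0, "battery": 1, "life": 2}
word_emb = rng.normal(size=(len(vocab), d_model))    # stands in for pretrained token embeddings

# Continuous prompts: user/item ID vectors live in their own randomly initialized tables
n_users, n_items = 100, 500
user_emb = rng.normal(size=(n_users, d_model)) * 0.02
item_emb = rng.normal(size=(n_items, d_model)) * 0.02

def build_input(user_id, item_id, token_ids):
    # Prepend the two ID vectors as soft prompt tokens before the word embeddings
    prompt = np.stack([user_emb[user_id], item_emb[item_id]])
    tokens = word_emb[token_ids]
    return np.concatenate([prompt, tokens], axis=0)   # shape: (2 + seq_len, d_model)

x = build_input(user_id=7, item_id=42, token_ids=[vocab["great"], vocab["battery"], vocab["life"]])
print(x.shape)  # the sequence a pretrained decoder would consume to generate an explanation
```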
11

Zhang, Yongfeng, and Xu Chen. "Explainable Recommendation: A Survey and New Perspectives." Foundations and Trends® in Information Retrieval 14, no. 1 (2020): 1–101. http://dx.doi.org/10.1561/1500000066.

12

Zhu, Xingyu, Xiaona Xia, Yuheng Wu, and Wenxu Zhao. "Enhancing Explainable Recommendations: Integrating Reason Generation and Rating Prediction through Multi-Task Learning." Applied Sciences 14, no. 18 (September 14, 2024): 8303. http://dx.doi.org/10.3390/app14188303.

Abstract:
In recent years, recommender systems—which provide personalized recommendations by analyzing users’ historical behavior to infer their preferences—have become essential tools across various domains, including e-commerce, streaming media, and social platforms. Recommender systems play a crucial role in enhancing user experience by mining vast amounts of data to identify what is most relevant to users. Among these, deep learning-based recommender systems have demonstrated exceptional recommendation performance. However, these “black-box” systems lack reasonable explanations for their recommendation results, which reduces their impact and credibility. To address this situation, an effective strategy is to provide a personalized textual explanation along with the recommendation. This approach has received increasing attention from researchers because it can enhance users’ trust in recommender systems through intuitive explanations. In this context, our paper introduces a novel explainable recommendation model named GCLTE. This model integrates Graph Contrastive Learning with transformers within an Encoder–Decoder framework to perform rating prediction and reason generation simultaneously. In addition, we cleverly combine the neural network layer with the transformer using a straightforward information enhancement operation. Finally, our extensive experiments on three real-world datasets demonstrate the effectiveness of GCLTE in both recommendation and explanation. The experimental results show that our model outperforms the top existing models.
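The multi-task setup this abstract outlines, jointly optimizing rating prediction and reason generation, reduces to a weighted sum of two losses. The toy batch, loss names, and weighting below are illustrative assumptions rather than GCLTE's exact objective.

```python
import numpy as np

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def nll(token_probs):
    # Negative log-likelihood of the reference explanation tokens under the decoder
    return float(-np.mean(np.log(np.clip(token_probs, 1e-9, 1.0))))

# Toy outputs for one mini-batch (placeholders for the model's real predictions)
pred_ratings = np.array([4.2, 3.1, 4.8])
true_ratings = np.array([4.0, 3.5, 5.0])
explanation_token_probs = np.array([0.42, 0.31, 0.55, 0.27])  # probability of each reference token

alpha = 0.5  # task-balancing weight, a tunable hyperparameter
loss = mse(pred_ratings, true_ratings) + alpha * nll(explanation_token_probs)
print(f"joint loss = {loss:.3f}")
```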
13

Meenakshi, B. "Enhancing Loan Prediction Accuracy: A Comparative Analysis of Machine Learning Algorithms with XAI Integration." International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 16, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33859.

Abstract:
The contemporary financial landscape necessitates loan recommendation systems that offer both accuracy and transparency. Conventional assessment methodologies often suffer from limitations in efficiency and transparency, leading to potential risks for both lenders and borrowers. This research proposes the development of a novel loan recommendation system that leverages the power of machine learning (ML) and Explainable Artificial Intelligence (XAI). The paper delves into the processes of data collection, preprocessing, model training, evaluation, and subsequent integration into a web application using the Flask framework. The employed datasets encompass a variety of loan types, with the study aiming to identify the most effective ML algorithms from a selection that includes XGBoost, CatBoost, Random Forest, Gradient Boosting, and Logistic Regression. To enhance the system's transparency, Explainable AI methods, such as LIME, are incorporated. The culmination of this research is a web application that facilitates personalized predictions regarding loan eligibility, accompanied by clear explanations. Index terms - Loan Recommendation System, Machine Learning, Explainable AI, XGBoost, CatBoost, Random Forest, Gradient Boost, Logistic Regression,LIME.
14

Doh, Ronky Francis, Conghua Zhou, John Kingsley Arthur, Isaac Tawiah, and Benjamin Doh. "A Systematic Review of Deep Knowledge Graph-Based Recommender Systems, with Focus on Explainable Embeddings." Data 7, no. 7 (July 12, 2022): 94. http://dx.doi.org/10.3390/data7070094.

Abstract:
Recommender systems (RS) have been developed to make personalized suggestions and enrich users’ preferences in various online applications to address the information explosion problems. However, traditional recommender-based systems act as black boxes, not presenting the user with insights into the system logic or reasons for recommendations. Recently, generating explainable recommendations with deep knowledge graphs (DKG) has attracted significant attention. DKG is a subset of explainable artificial intelligence (XAI) that utilizes the strengths of deep learning (DL) algorithms to learn, provide high-quality predictions, and complement the weaknesses of knowledge graphs (KGs) in the explainability of recommendations. DKG-based models can provide more meaningful, insightful, and trustworthy justifications for recommended items and alleviate the information explosion problems. Although several studies have been carried out on RS, only a few papers have been published on DKG-based methodologies, and a review in this new research direction is still insufficiently explored. To fill this literature gap, this paper uses a systematic literature review framework to survey the recently published papers from 2018 to 2022 in the landscape of DKG and XAI. We analyze how the methods produced in these papers extract essential information from graph-based representations to improve recommendations’ accuracy, explainability, and reliability. From the perspective of the leveraged knowledge-graph related information and how the knowledge-graph or path embeddings are learned and integrated with the DL methods, we carefully select and classify these published works into four main categories: the Two-stage explainable learning methods, the Joint-stage explainable learning methods, the Path-embedding explainable learning methods, and the Propagation explainable learning methods. We further summarize these works according to the characteristics of the approaches and the recommendation scenarios to facilitate the ease of checking the literature. We finally conclude by discussing some open challenges left for future research in this vibrant field.
15

Wang, Linlin, Zefeng Cai, Gerard De Melo, Zhu Cao, and Liang He. "Disentangled CVAEs with Contrastive Learning for Explainable Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13691–99. http://dx.doi.org/10.1609/aaai.v37i11.26604.

Abstract:
Modern recommender systems are increasingly expected to provide informative explanations that enable users to understand the reason for particular recommendations. However, previous methods struggle to interpret the input IDs of user--item pairs in real-world datasets, failing to extract adequate characteristics for controllable generation. To address this issue, we propose disentangled conditional variational autoencoders (CVAEs) for explainable recommendation, which leverage disentangled latent preference factors and guide the explanation generation with the refined condition of CVAEs via a self-regularization contrastive learning loss. Extensive experiments demonstrate that our method generates high-quality explanations and achieves new state-of-the-art results in diverse domains.
16

Gao, Jingyue, Xiting Wang, Yasha Wang, and Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.

Abstract:
Recommender systems have been playing an increasingly important role in our daily life due to the explosive growth of information. Accuracy and explainability are two core aspects when we evaluate a recommendation model and have become one of the fundamental trade-offs in machine learning. In this paper, we propose to alleviate the trade-off between accuracy and explainability by developing an explainable deep model that combines the advantages of deep learning-based models and existing explainable methods. The basic idea is to build an initial network based on an explainable deep hierarchy (e.g., Microsoft Concept Graph) and improve the model accuracy by optimizing key variables in the hierarchy (e.g., node importance and relevance). To ensure accurate rating prediction, we propose an attentive multi-view learning framework. The framework enables us to handle sparse and noisy data by co-regularizing among different feature levels and combining predictions attentively. To mine readable explanations from the hierarchy, we formulate personalized explanation generation as a constrained tree node selection problem and propose a dynamic programming algorithm to solve it. Experimental results show that our model outperforms state-of-the-art methods in terms of both accuracy and explainability.
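A minimal sketch of combining predictions from several feature views with attention, in the spirit of the attentive multi-view framework described above; the views, logits, and ratings are invented for illustration and do not reflect the authors' architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Ratings predicted independently from different feature levels of a concept hierarchy
views = {"category-level": 4.4, "concept-level": 3.9, "review-level": 4.1}
view_scores = np.array([0.8, 0.2, 0.5])          # hypothetical learned attention logits

weights = softmax(view_scores)
prediction = float(weights @ np.array(list(views.values())))
print({name: round(float(w), 2) for name, w in zip(views, weights)}, "->", round(prediction, 2))
```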
17

Wang, Xiang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, and Tat-Seng Chua. "Explainable Reasoning over Knowledge Graphs for Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5329–36. http://dx.doi.org/10.1609/aaai.v33i01.33015329.

Abstract:
Incorporating knowledge graph into recommender systems has attracted increasing attention in recent years. By exploring the interlinks within a knowledge graph, the connectivity between users and items can be discovered as paths, which provide rich and complementary information to user-item interactions. Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user’s interest. However, existing efforts have not fully explored this connectivity to infer user preferences, especially in terms of modeling the sequential dependencies within and holistic semantics of a path. In this paper, we contribute a new model named Knowledge-aware Path Recurrent Network (KPRN) to exploit knowledge graph for recommendation. KPRN can generate path representations by composing the semantics of both entities and relations. By leveraging the sequential dependencies within a path, we allow effective reasoning on paths to infer the underlying rationale of a user-item interaction. Furthermore, we design a new weighted pooling operation to discriminate the strengths of different paths in connecting a user with an item, endowing our model with a certain level of explainability. We conduct extensive experiments on two datasets about movie and music, demonstrating significant improvements over state-of-the-art solutions Collaborative Knowledge Base Embedding and Neural Factorization Machine.
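The weighted pooling over per-path scores mentioned here can be read as a smoothed maximum, for example a temperature-scaled log-sum-exp that lets strong paths dominate while still crediting weaker ones; treat the operator and the toy paths below as assumptions about the general idea rather than the paper's exact formulation.

```python
import numpy as np

def weighted_pool(path_scores, gamma=0.1):
    # Log-sum-exp pooling: small gamma approaches the max, larger gamma a softer average
    s = np.asarray(path_scores) / gamma
    return gamma * (np.log(np.sum(np.exp(s - s.max()))) + s.max())

# Hypothetical scores for three paths connecting a user to a movie through the knowledge graph
paths = {
    "user -> liked -> movieA -> directed_by -> director -> directed -> movieB": 2.1,
    "user -> liked -> movieC -> genre -> sci-fi -> genre_of -> movieB": 1.4,
    "user -> friend -> user2 -> liked -> movieB": 0.6,
}
print("pooled user-item score:", round(weighted_pool(list(paths.values())), 3))
```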
18

Yang, Zuoxi, Shoubin Dong, and Jinlong Hu. "GFE: General Knowledge Enhanced Framework for Explainable Sequential Recommendation." Knowledge-Based Systems 230 (October 2021): 107375. http://dx.doi.org/10.1016/j.knosys.2021.107375.

19

Syed, Muzamil Hussain, Tran Quoc Bao Huy, and Sun-Tae Chung. "Context-Aware Explainable Recommendation Based on Domain Knowledge Graph." Big Data and Cognitive Computing 6, no. 1 (January 20, 2022): 11. http://dx.doi.org/10.3390/bdcc6010011.

Abstract:
With the rapid growth of internet data, knowledge graphs (KGs) are considered as efficient form of knowledge representation that captures the semantics of web objects. In recent years, reasoning over KG for various artificial intelligence tasks have received a great deal of research interest. Providing recommendations based on users’ natural language queries is an equally difficult undertaking. In this paper, we propose a novel, context-aware recommender system, based on domain KG, to respond to user-defined natural queries. The proposed recommender system consists of three stages. First, we generate incomplete triples from user queries, which are then segmented using logical conjunction (∧) and disjunction (∨) operations. Then, we generate candidates by utilizing a KGE-based framework (Query2Box) for reasoning over segmented logical triples, with ∧, ∨, and ∃ operators; finally, the generated candidates are re-ranked using neural collaborative filtering (NCF) model by exploiting contextual (auxiliary) information from GraphSAGE embedding. Our approach demonstrates to be simple, yet efficient, at providing explainable recommendations on user’s queries, while leveraging user-item contextual information. Furthermore, our framework has shown to be capable of handling logical complex queries by transforming them into a disjunctive normal form (DNF) of simple queries. In this work, we focus on the restaurant domain as an application domain and use the Yelp dataset to evaluate the system. Experiments demonstrate that the proposed recommender system generalizes well on candidate generation from logical queries and effectively re-ranks those candidates, compared to the matrix factorization model.
20

Jiang, Tianming, and Jiangfeng Zeng. "Time-Aware Explainable Recommendation via Updating Enabled Online Prediction." Entropy 24, no. 11 (November 11, 2022): 1639. http://dx.doi.org/10.3390/e24111639.

Abstract:
There has been growing attention on explainable recommendation that is able to provide high-quality results as well as intuitive explanations. However, most existing studies use offline prediction strategies where recommender systems are trained once while used forever, which ignores the dynamic and evolving nature of user–item interactions. There are two main issues with these methods. First, their random dataset split setting will result in data leakage that knowledge should not be known at the time of training is utilized. Second, the dynamic characteristics of user preferences are overlooked, resulting in a model aging issue where the model’s performance degrades along with time. In this paper, we propose an updating enabled online prediction framework for the time-aware explainable recommendation. Specifically, we propose an online prediction scheme to eliminate the data leakage issue and two novel updating strategies to relieve the model aging issue. Moreover, we conduct extensive experiments on four real-world datasets to evaluate the effectiveness of our proposed methods. Compared with the state-of-the-art, our time-aware approach achieves higher accuracy results and more convincing explanations for the entire lifetime of recommendation systems, i.e., both the initial period and the long-term usage.
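The data-leakage point raised in this abstract, that a random split lets a model train on interactions recorded after the ones it is tested on, is easy to make concrete with a temporal split helper; the record fields and cutoff below are hypothetical.

```python
from datetime import datetime

# Toy interaction log; in a real dataset each record would also carry review text, rating, etc.
interactions = [
    {"user": "u1", "item": "i3", "ts": datetime(2021, 1, 5)},
    {"user": "u2", "item": "i1", "ts": datetime(2021, 3, 9)},
    {"user": "u1", "item": "i7", "ts": datetime(2021, 6, 2)},
    {"user": "u3", "item": "i3", "ts": datetime(2021, 9, 17)},
]

def temporal_split(records, cutoff):
    """Everything before the cutoff is training data; everything after is held out.

    A random split would mix future interactions into training and leak knowledge
    the model could not have had at prediction time.
    """
    train = [r for r in records if r["ts"] < cutoff]
    test = [r for r in records if r["ts"] >= cutoff]
    return train, test

train, test = temporal_split(interactions, cutoff=datetime(2021, 6, 1))
print(len(train), "train /", len(test), "test")
```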
21

Takii, Kensuke, Brendan Flanagan, Huiyong Li, Yuanyuan Yang, Kento Koike, and Hiroaki Ogata. "Explainable eBook recommendation for extensive reading in K-12 EFL learning." Research and Practice in Technology Enhanced Learning 20 (September 10, 2024): 027. http://dx.doi.org/10.58459/rptel.2025.20027.

Abstract:
An automatic recommendation system for learning materials in e-learning addresses the challenge of selecting appropriate materials amid information overload and varying self-directed learning (SDL) skills. Such systems can enhance learning by providing personalized recommendations. In Extensive Reading (ER) for English as a Foreign Language (EFL), recommending materials is crucial due to the paradox that learners with low SDL skills struggle to select suitable ER resources, despite ER’s potential to improve SDL. Additionally, determining the difficulty level of ER materials and assessing learners’ progress remains challenging. The system must also explain its recommendations to foster motivation and trust. This study proposes a mechanism to estimate the difficulty of ER materials, adapted to learner preferences, using information retrieval techniques, and an explainable recommendation system for English materials. An experiment was conducted with 240 Japanese junior high school students in an ER program to assess the accuracy of difficulty estimation and identify learner characteristics receptive to the recommendations. While the recommendations did not significantly impact learners’ English skills or motivation, they were positively received. A strong relationship was found between the use and acceptance of recommendations and learners’ motivation. The study suggests that although the system did not increase overall motivation, it has potential to further enhance the motivation of naturally motivated learners.
22

Liang, Qianqiao, Xiaolin Zheng, Yan Wang, and Mengying Zhu. "O3ERS: An explainable recommendation system with online learning, online recommendation, and online explanation." Information Sciences 562 (July 2021): 94–115. http://dx.doi.org/10.1016/j.ins.2020.12.070.

23

Aggarwal, Ankur. "Evolution of recommendation systems in the age of Generative AI." International Journal of Science and Research Archive 14, no. 1 (January 30, 2025): 485–92. https://doi.org/10.30574/ijsra.2025.14.1.0061.

Abstract:
This article examines the transformative evolution of recommendation systems in the era of Generative AI, exploring how these advanced technologies have revolutionized user experience and business outcomes across digital platforms. The article investigates the transition from traditional rule-based approaches to sophisticated model-based systems, highlighting the impact of deep learning technologies, explainable AI mechanisms, and multimodal integration. Through comprehensive analysis of recent developments, the article demonstrates how Generative AI has enhanced personalization capabilities, improved recommendation accuracy, and enabled more contextually relevant suggestions while addressing crucial aspects of user privacy and system transparency. The article encompasses various domains, including e-commerce, content streaming, and digital marketplaces, offering insights into both technical advancements and practical implementations of modern recommendation systems.
24

Tao, Shaohua, Runhe Qiu, Yuan Ping, and Hui Ma. "Multi-modal Knowledge-aware Reinforcement Learning Network for Explainable Recommendation." Knowledge-Based Systems 227 (September 2021): 107217. http://dx.doi.org/10.1016/j.knosys.2021.107217.

25

Guo, Siyuan, Ying Wang, Hao Yuan, Zeyu Huang, Jianwei Chen, and Xin Wang. "TAERT: Triple-Attentional Explainable Recommendation with Temporal Convolutional Network." Information Sciences 567 (August 2021): 185–200. http://dx.doi.org/10.1016/j.ins.2021.03.034.

26

Samir, Mina, Nada Sherief, and Walid Abdelmoez. "Improving Bug Assignment and Developer Allocation in Software Engineering through Interpretable Machine Learning Models." Computers 12, no. 7 (June 23, 2023): 128. http://dx.doi.org/10.3390/computers12070128.

Abstract:
Software engineering is a comprehensive process that requires developers and team members to collaborate across multiple tasks. In software testing, bug triaging is a tedious and time-consuming process. Assigning bugs to the appropriate developers can save time and maintain their motivation. However, without knowledge about a bug’s class, triaging is difficult. Motivated by this challenge, this paper focuses on the problem of assigning a suitable developer to a new bug by analyzing the history of developers’ profiles and analyzing the history of bugs for all developers using machine learning-based recommender systems. Explainable AI (XAI) is AI that humans can understand. It contrasts with “black box” AI, which even its designers cannot explain. By providing appropriate explanations for results, users can better comprehend the underlying insight behind the outcomes, boosting the recommender system’s effectiveness, transparency, and confidence. The trained model is utilized in the recommendation stage to calculate relevance scores for developers based on expertise and past bug handling performance, ultimately presenting the developers with the highest scores as recommendations for new bugs. This approach aims to strike a balance between computational efficiency and accurate predictions, enabling efficient bug assignment while considering developer expertise and historical performance. In this paper, we propose two explainable models for recommendation. The first is an explainable recommender model for personalized developers generated from bug history to know what the preferred type of bug is for each developer. The second model is an explainable recommender model based on bugs to identify the most suitable developer for each bug from bug history.
27

Sopchoke, Sirawit, Ken-ichi Fukui, and Masayuki Numao. "Explainable and unexpectable recommendations using relational learning on multiple domains." Intelligent Data Analysis 24, no. 6 (December 18, 2020): 1289–309. http://dx.doi.org/10.3233/ida-194729.

Abstract:
In this research, we combine relational learning with multi-domain to develop a formal framework for a recommendation system. The design of our framework aims at: (i) constructing general rules for recommendations, (ii) providing suggested items with clear and understandable explanations, (iii) delivering a broad range of recommendations including novel and unexpected items. We use relational learning to find all possible relations, including novel relations, and to form the general rules for recommendations. Each rule is represented in relational logic, a formal language, associating with probability. The rules are used to suggest the items, in any domain, to the user whose preferences or other properties satisfy the conditions of the rule. The information described by the rule serves as an explanation for the suggested item. It states clearly why the items are chosen for the users. The explanation is in if-then logical format which is unambiguous, less redundant and more concise compared to a natural language used in other explanation recommendation systems. The explanation itself can help persuade the user to try out the suggested items, and the associated probability can drive the user to make a decision easier and faster with more confidence. Incorporating information or knowledge from multiple domains allows us to broaden our search space and provides us with more opportunities to discover items which are previously unseen or surprised to a user resulting in a wide range of recommendations. The experiment results show that our proposed algorithm is very promising. Although the quality of recommendations provided by our framework is moderate, our framework does produce interesting recommendations not found in the primitive single-domain based system and with simple and understandable explanations.
28

Singla, Priyanka. "An Intelligent Job Recommendation System based on Semantic Embeddings and Machine Learning." Journal of Information Systems Engineering and Management 10, no. 5s (January 24, 2025): 520–42. https://doi.org/10.52783/jisem.v10i5s.681.

Abstract:
To address the shortcomings in existing approaches of job recommendation systems, this paper proposes a novel machine-learning-based job recommendation system that performs bi-directional matching for dynamic and accurate recommendations. The proposed approach generates ideal job recommendations for a targeted Curriculum Vitae (CV) and vice versa. Unlike previous approaches, the proposed approach incorporates natural language processing (NLP) techniques to extract linguistic features such as Bag of Words (BoW), n-grams, TF-IDF, and Parts-of-Speech (PoS) tag and build a rich feature set. These features are further analyzed using semantic embeddings, enabling robust job matching. Experiments were performed to validate the performance of the proposed approach. The designed system is validated on various real-world datasets, overcoming the dataset size limitations of prior works. Due to combination of semantic embeddings, machine learning, and various similarity measures, this approach demonstrates the potential to deliver reliable, explainable, and ideal job recommendations, addressing the challenges of static and false outputs in existing systems.
29

Malikireddy, Sai Kiran Reddy. "Revolutionizing Product Recommendations with Generative AI: Context-Aware Personalization at Scale." International Journal of Scientific Research in Engineering and Management 08, no. 12 (December 30, 2024): 1–8. https://doi.org/10.55041/ijsrem40434.

Abstract:
Generative Artificial Intelligence (GenAI) is poised to transform the product recommendation landscape by bridging the gap between user intent and personalized discovery. Traditional recommendation systems rely heavily on collaborative filtering, content-based algorithms, or hybrid models, often constrained by sparse data and limited contextual understanding. GenAI introduces a paradigm shift by leveraging advanced transformer-based architectures and multimodal embeddings to deliver highly contextual, dynamic, and explainable recommendations at scale. This paper explores the use of GenAI for product recommendation systems, focusing on its ability to generate rich, context- aware interactions that mimic human-like personalization. By fine-tuning pre-trained language models on domain- specific product catalogs and user behavior data, we demonstrate how GenAI can synthesize user preferences into coherent narratives, predict latent needs, and suggest products that align with evolving trends. Additionally, we propose a novel “Recommendation Dialogue Model” that integrates natural language prompts with visual and textual content to provide seamless, conversational shopping experiences. Our experiments, conducted on benchmark datasets and real-world e-commerce platforms, show that GenAI-based systems outperform traditional models in precision, recall, and customer satisfaction metrics. Furthermore, we address challenges such as mitigating bias, ensuring diversity in recommendations, and preserving privacy through federated learning approaches. By reimagining product discovery as a generative process, this work highlights the transformative potential of GenAI to create hyper-personalized, interactive, and engaging recommendation systems that redefine how users find and connect with products. The implications extend to e-commerce, media streaming, and beyond, offering a blueprint for the next generation of intelligent systems. Keywords: Generative Artificial Intelligence, Product Recommendations, Transformer Architectures, Multimodal Embeddings, Recommendation Dialogue Model, Natural Language Processing, Contextual Understanding, Federated Learning, Privacy Preservation
30

Zuo, Xianglin, Tianhao Jia, Xin He, Bo Yang, and Ying Wang. "Exploiting Dual-Attention Networks for Explainable Recommendation in Heterogeneous Information Networks." Entropy 24, no. 12 (November 24, 2022): 1718. http://dx.doi.org/10.3390/e24121718.

Abstract:
The aim of explainable recommendation is not only to provide recommended items to users, but also to make users aware of why these items are recommended. Traditional recommendation methods infer user preferences for items using user–item rating information. However, the expressive power of latent representations of users and items is relatively limited due to the sparseness of the user–item rating matrix. Heterogeneous information networks (HIN) provide contextual information for improving recommendation performance and interpreting the interactions between users and items. However, due to the heterogeneity and complexity of context information in HIN, it is still a challenge to integrate this contextual information into explainable recommendation systems effectively. In this paper, we propose a novel framework—the dual-attention networks for explainable recommendation (DANER) in HINs. We first used multiple meta-paths to capture high-order semantic relations between users and items in HIN for generating similarity matrices, and then utilized matrix decomposition on similarity matrices to obtain low-dimensional sparse representations of users and items. Secondly, we introduced two-level attention networks, namely a local attention network and a global attention network, to integrate the representations of users and items from different meta-paths for obtaining high-quality representations. Finally, we use a standard multi-layer perceptron to model the interactions between users and items, which predict users’ ratings of items. Furthermore, the dual-attention mechanism also contributes to identifying critical meta-paths to generate relevant explanations for users. Comprehensive experiments on two real-world datasets demonstrate the effectiveness of DANER on recommendation performance as compared with the state-of-the-art methods. A case study illustrates the interpretability of DANER.
31

Kim, Se Young, Dae Ho Kim, Min Ji Kim, Hyo Jin Ko, and Ok Ran Jeong. "XAI-Based Clinical Decision Support Systems: A Systematic Review." Applied Sciences 14, no. 15 (July 30, 2024): 6638. http://dx.doi.org/10.3390/app14156638.

Abstract:
With increasing electronic medical data and the development of artificial intelligence, clinical decision support systems (CDSSs) assist clinicians in diagnosis and prescription. Traditional knowledge-based CDSSs follow an accumulated medical knowledgebase and a predefined rule system, which clarifies the decision-making process; however, maintenance cost issues exist in the medical data quality control and standardization processes. Non-knowledge-based CDSSs utilize vast amounts of data and algorithms to effectively make decisions; however, the deep learning black-box problem causes unreliable results. EXplainable Artificial Intelligence (XAI)-based CDSSs provide valid rationales and explainable results. These systems ensure trustworthiness and transparency by showing the recommendation and prediction result process using explainable techniques. However, existing systems have limitations, such as the scope of data utilization and the lack of explanatory power of AI models. This study proposes a new XAI-based CDSS framework to address these issues; introduces resources, datasets, and models that can be utilized; and provides a foundation model to support decision-making in various disease domains. Finally, we propose future directions for CDSS technology and highlight societal issues that need to be addressed to emphasize the potential of CDSSs in the future.
32

Lin, Yujie, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, and Maarten de Rijke. "Explainable Outfit Recommendation with Joint Outfit Matching and Comment Generation." IEEE Transactions on Knowledge and Data Engineering 32, no. 8 (August 1, 2020): 1502–16. http://dx.doi.org/10.1109/tkde.2019.2906190.

33

Nyachama, Kerry. "Effectiveness of Recommender Systems in Knowledge Discovery." European Journal of Information and Knowledge Management 3, no. 1 (March 28, 2024): 50–62. http://dx.doi.org/10.47941/ejikm.1753.

Abstract:
Purpose: The general purpose of the study was to investigate the effectiveness of recommender systems in knowledge discovery. Methodology: The study adopted a desktop research methodology. Desk research refers to secondary data or that which can be collected without fieldwork. Desk research is basically involved in collecting data from existing resources hence it is often considered a low cost technique as compared to field research, as the main cost is involved in executive’s time, telephone charges and directories. Thus, the study relied on already published studies, reports and statistics. This secondary data was easily accessed through the online journals and library. Findings: The findings reveal that there exists a contextual and methodological gap relating to recommender systems in knowledge discovery. The study on the effectiveness of recommender systems in knowledge discovery found that such systems played a pivotal role in facilitating users' exploration of vast information repositories, enabling them to uncover relevant resources and expand their knowledge. It found that recommender systems employing advanced algorithms and personalized techniques demonstrated higher effectiveness in generating relevant recommendations tailored to users' preferences and needs. Additionally, the study highlighted the positive correlation between user engagement metrics and knowledge discovery outcomes, emphasizing the importance of fostering active user participation in the recommendation process. Contextual information was also identified as a crucial factor influencing recommendation effectiveness. Overall, the study underscored the significance of continuous refinement and optimization of recommender system algorithms to enhance knowledge discovery outcomes for users. Unique Contribution to Theory, Practice and Policy: The Social Learning theory, Information Foraging theory and Cognitive Load theory may be used to anchor future studies on recommender systems in knowledge discovery. The study provided recommendations to enhance the efficacy of such systems. It suggested adopting hybrid recommender systems that combine collaborative and content-based filtering techniques to offer more accurate and diverse recommendations. Additionally, the study emphasized the importance of integrating contextual information into recommendation algorithms to dynamically adjust recommendations based on situational context. Furthermore, it recommended the use of explainable AI techniques to improve transparency and user understanding of recommendation processes. Maximizing user engagement through active participation and feedback was also highlighted as crucial, along with prioritizing recommendation diversity to foster exploration and serendipitous discovery of new knowledge resources.
34

Lin, Ching-Sheng, Chung-Nan Tsai, Shao-Tang Su, Jung-Sing Jwo, Cheng-Hsiung Lee, and Xin Wang. "Predictive Prompts with Joint Training of Large Language Models for Explainable Recommendation." Mathematics 11, no. 20 (October 10, 2023): 4230. http://dx.doi.org/10.3390/math11204230.

Abstract:
Large language models have recently gained popularity in various applications due to their ability to generate natural text for complex tasks. Recommendation systems, one of the frequently studied research topics, can be further improved using the capabilities of large language models to track and understand user behaviors and preferences. In this research, we aim to build reliable and transparent recommendation system by generating human-readable explanations to help users obtain better insights into the recommended items and gain more trust. We propose a learning scheme to jointly train the rating prediction task and explanation generation task. The rating prediction task learns the predictive representation from the input of user and item vectors. Subsequently, inspired by the recent success of prompt engineering, these predictive representations are served as predictive prompts, which are soft embeddings, to elicit and steer any knowledge behind language models for the explanation generation task. Empirical studies show that the proposed approach achieves competitive results compared with other existing baselines on the public English TripAdvisor dataset of explainable recommendations.
35

Yang, Zuoxi, and Shoubin Dong. "HAGERec: Hierarchical Attention Graph Convolutional Network Incorporating Knowledge Graph for Explainable Recommendation." Knowledge-Based Systems 204 (September 2020): 106194. http://dx.doi.org/10.1016/j.knosys.2020.106194.

36

Yang, Chao, Weixin Zhou, Zhiyu Wang, Bin Jiang, Dongsheng Li, and Huawei Shen. "Accurate and Explainable Recommendation via Hierarchical Attention Network Oriented Towards Crowd Intelligence." Knowledge-Based Systems 213 (February 2021): 106687. http://dx.doi.org/10.1016/j.knosys.2020.106687.

37

Liu, Peng, Lemei Zhang, and Jon Atle Gulla. "Dynamic attention-based explainable recommendation with textual and visual fusion." Information Processing & Management 57, no. 6 (November 2020): 102099. http://dx.doi.org/10.1016/j.ipm.2019.102099.

38

Jing, Yanzhen, Guanghui Zhou, Chao Zhang, Fengtian Chang, Hairui Yan, and Zhongdong Xiao. "XMKR: Explainable manufacturing knowledge recommendation for collaborative design with graph embedding learning." Advanced Engineering Informatics 59 (January 2024): 102339. http://dx.doi.org/10.1016/j.aei.2023.102339.

39

Caro-Martínez, Marta, Guillermo Jiménez-Díaz, and Juan A. Recio-García. "Conceptual Modeling of Explainable Recommender Systems: An Ontological Formalization to Guide Their Design and Development." Journal of Artificial Intelligence Research 71 (July 24, 2021): 557–89. http://dx.doi.org/10.1613/jair.1.12789.

Abstract:
With the increasing importance of e-commerce and the immense variety of products, users need help to decide which ones are the most interesting to them. This is one of the main goals of recommender systems. However, users’ trust may be compromised if they do not understand how or why the recommendation was achieved. Here, explanations are essential to improve user confidence in recommender systems and to make the recommendation useful. Providing explanation capabilities into recommender systems is not an easy task as their success depends on several aspects such as the explanation’s goal, the user’s expectation, the knowledge available, or the presentation method. Therefore, this work proposes a conceptual model to alleviate this problem by defining the requirements of explanations for recommender systems. Our goal is to provide a model that guides the development of effective explanations for recommender systems as they are correctly designed and suited to the user’s needs. Although earlier explanation taxonomies sustain this work, our model includes new concepts not considered in previous works. Moreover, we make a novel contribution regarding the formalization of this model as an ontology that can be integrated into the development of proper explanations for recommender systems.
40

Zhang, Yongfeng, Xu Chen, Da Xu, and Tobias Schnabel. "Introduction to the Special Issue on Causal Inference for Recommender Systems." ACM Transactions on Recommender Systems 2, no. 2 (June 30, 2024): 1–4. http://dx.doi.org/10.1145/3661465.

Abstract:
A significant proportion of machine learning methodologies for recommendation systems are grounded in the fundamental principle of matching, utilizing perceptual and similarity-based learning approaches. These methods include both the extraction of features from data through representation learning and the derivation of similarity matching functions via neural function learning. While these models are important for recommendation systems, their foundational design philosophy primarily captures correlational signals within the data. Transitioning from correlation-based learning to causal learning in recommendation systems represents a critical area to explore, as causal models enable extrapolation beyond observational data in both representation learning and ranking tasks. Specifically, causal learning offers potential enhancements to the recommender system community across multiple dimensions, including, but not limited to, explainable, unbiased, fairness-aware, robust, and cognitive reasoning models for recommendation. This special issue is dedicated to exploring the research and practical applications of causal inference within the realms of recommendation and broader ranking scenarios. It has attracted interest from an array of researchers and practitioners on disseminating the latest developments in causal modeling for recommender systems. Moreover, it has attracted the interest of professionals from various fields such as Information Retrieval, Machine Learning, Artificial Intelligence, Natural Language Processing, Data Science, and others.
41

Wang, Chao, Hengshu Zhu, Peng Wang, Chen Zhu, Xi Zhang, Enhong Chen, and Hui Xiong. "Personalized and Explainable Employee Training Course Recommendations: A Bayesian Variational Approach". ACM Transactions on Information Systems 40, no. 4 (October 31, 2022): 1–32. http://dx.doi.org/10.1145/3490476.

Abstract:
As a major component of strategic talent management, learning and development (L&D) aims at improving individual and organizational performance by planning tailored training for employees to increase and improve their skills and knowledge. While many companies have developed learning management systems (LMSs) to facilitate the online training of employees, a long-standing issue is how to achieve personalized training recommendations that take employees’ needs for future career development into account. To this end, in this article, we present a focused study on an explainable personalized online course recommender system for enhancing employee training and development. Specifically, we first propose a novel end-to-end hierarchical framework, namely the Demand-aware Collaborative Bayesian Variational Network (DCBVN), to jointly model both the employees’ current competencies and their career development preferences in an explainable way. In DCBVN, we first extract latent interpretable representations of the employees’ competencies from their skill profiles with autoencoding variational inference based topic modeling. Then, we develop an effective demand recognition mechanism for learning employees’ personal demands for career development. In particular, all the above processes are integrated into a unified Bayesian inference view for obtaining both accurate and explainable recommendations. Furthermore, to handle employees with sparse or missing skill profiles, we develop an improved version of DCBVN, called the Demand-aware Collaborative Competency Attentive Network (DCCAN) framework, which exploits the connectivity among employees. In DCCAN, we first build two employee competency graphs from learning and working aspects. Then, we design a graph-attentive network and a multi-head integration mechanism to infer one’s competency information from her neighborhood employees. Finally, we can generate explainable recommendation results based on the competency representations. Extensive experimental results on real-world data clearly demonstrate the effectiveness and interpretability of both of our frameworks, as well as their robustness in sparse and cold-start scenarios.
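As a purely illustrative aside, the sketch below shows one simplified ingredient of the attentive-aggregation idea described above: inferring a sparse employee's competency vector from neighbouring employees via similarity-based attention. The toy vectors and the softmax attention are assumptions, not the DCCAN architecture.

```python
# Minimal sketch, not the paper's architecture: infer a competency vector for
# an employee with a sparse profile as an attention-weighted average of the
# competency vectors of neighbouring employees in a competency graph.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def infer_competency(query, neighbours):
    """Attention-weighted aggregation of neighbour competency vectors."""
    neighbours = np.asarray(neighbours)
    attn = softmax(neighbours @ query)   # similarity to the (partial) profile
    return attn @ neighbours, attn

partial_profile = np.array([0.9, 0.1, 0.0, 0.0])       # sparse skill evidence
colleagues = [[0.8, 0.2, 0.1, 0.0],
              [0.1, 0.0, 0.9, 0.7],
              [0.7, 0.3, 0.0, 0.1]]

competency, attention = infer_competency(partial_profile, colleagues)
print("attention over colleagues:", np.round(attention, 2))
print("inferred competency vector:", np.round(competency, 2))
```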
42

Camastra, Francesco, Angelo Ciaramella, Giuseppe Salvi, Salvatore Sposato, and Antonino Staiano. "On the interpretability of fuzzy knowledge base systems". PeerJ Computer Science 10 (December 3, 2024): e2558. https://doi.org/10.7717/peerj-cs.2558.

Abstract:
In recent years, fuzzy rule-based systems have been attracting great interest in interpretable and eXplainable Artificial Intelligence as ante-hoc methods. These systems represent knowledge that humans can easily understand, but since they are not interpretable per se, they must remain simple and understandable, and the rule base must have a compactness property. This article presents an algorithm for minimizing the fuzzy rule base, leveraging rough set theory and a greedy strategy. Reducing fuzzy rules simplifies the rule base, facilitating the construction of interpretable inference systems such as decision support and recommendation systems. Validation and comparison of the proposed methodology using both real and benchmark data yield encouraging results.
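For intuition, the following is a minimal greedy rule-base reduction sketch in the spirit of the compactness goal described above; it is a generic set-cover style pruning, not the rough-set-based algorithm of the paper, and the rule and example representations are assumptions.

```python
# Minimal sketch: greedily keep a small subset of rules that still covers every
# training example. Generic greedy set-cover pruning, not the rough-set-based
# method of the paper above.
def reduce_rule_base(rules, examples, fires):
    """rules: rule ids; examples: example ids; fires(rule, example) -> bool."""
    uncovered = set(examples)
    kept = []
    while uncovered:
        # Pick the rule that covers the most still-uncovered examples.
        best = max(rules, key=lambda r: sum(fires(r, e) for e in uncovered))
        gained = {e for e in uncovered if fires(best, e)}
        if not gained:           # remaining examples cannot be covered
            break
        kept.append(best)
        uncovered -= gained
    return kept


# Toy usage: r2 is redundant because r1 already covers its example.
coverage = {("r1", 0), ("r1", 1), ("r2", 1), ("r3", 2), ("r3", 3)}
print(reduce_rule_base(["r1", "r2", "r3"], [0, 1, 2, 3],
                       lambda r, e: (r, e) in coverage))  # ['r1', 'r3']
```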
43

Chen, Chao, Dongsheng Li, Junchi Yan, Hanchi Huang, and Xiaokang Yang. "Scalable and Explainable 1-Bit Matrix Completion via Graph Signal Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7011–19. http://dx.doi.org/10.1609/aaai.v35i8.16863.

Abstract:
One-bit matrix completion is an important class of positive-unlabeled (PU) learning problems where the observations consist of only positive examples, e.g., in top-N recommender systems. For the first time, we show that 1-bit matrix completion can be formulated as the problem of recovering clean graph signals from noise-corrupted signals in hypergraphs. This makes it possible to enjoy recent advances in graph signal learning. Then, we propose the spectral graph matrix completion (SGMC) method, which can recover the underlying matrix in distributed systems by filtering the noisy data in the graph frequency domain. Meanwhile, it can provide micro- and macro-level explanations by following vertex-frequency analysis. To tackle the computational and memory issues of performing graph signal operations on large graphs, we construct a scalable Nyström algorithm which can efficiently compute orthonormal eigenvectors. Furthermore, we also develop polynomial and sparse frequency filters to remedy the accuracy loss caused by the approximations. We demonstrate the effectiveness of our algorithms on top-N recommendation tasks, and the results on three large-scale real-world datasets show that SGMC can outperform state-of-the-art top-N recommendation algorithms in accuracy while requiring only a small fraction of the training time of the baselines.
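As a rough illustration of the underlying idea (not the SGMC algorithm or its Nyström approximation), the sketch below low-pass filters a toy 1-bit interaction matrix through the leading eigenvectors of an item-item co-occurrence graph and ranks unseen items by the smoothed scores.

```python
# Minimal numpy sketch of spectral low-pass filtering for top-N recommendation.
# Generic illustration only; it does not reproduce SGMC, its distributed setup,
# or the Nyström/frequency-filter machinery from the paper above.
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((50, 20)) > 0.8).astype(float)   # toy 1-bit user-item matrix

# Item-item co-occurrence acts as the (dense) adjacency of an item graph.
G = R.T @ R

# Eigendecomposition; keep the k leading eigenvectors as the "low frequencies".
eigvals, eigvecs = np.linalg.eigh(G)
k = 8
U_k = eigvecs[:, -k:]                            # eigh sorts eigenvalues ascending

# Low-pass filter: reconstruct each user's interaction row from the top-k basis.
scores = R @ U_k @ U_k.T

# Recommend the highest-scoring items the user has not interacted with yet.
user = 0
unseen = np.where(R[user] == 0)[0]
top_n = unseen[np.argsort(-scores[user, unseen])][:5]
print(top_n)
```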
44

Younus, Yasir Mahmood. "An Explainable Content-Based Course Recommender Using Job Skills". AlKadhum Journal of Science 1, no. 2 (December 14, 2023): 32–43. http://dx.doi.org/10.61710/akjs.v1i2.62.

Abstract:
The large number of courses offered by universities and online providers makes it difficult for students to choose the courses that suit their interests and career goals, causing them to miss many opportunities to be employed in the jobs they want. To keep pace with the rapid development of technology, and instead of relying on the job title as was previously done, employers have begun to identify the skills required for a job; candidates’ competencies are then examined and evaluated against those requirements. Thus, it has become necessary for students to take courses that suit their future professional interests, ensuring that they are employed in the job they desire and supporting their long-term career success. Fortunately, the emergence of skills-based employment has provided an opportunity for universities and colleges to create a clearer path through the courses offered, allowing students to take courses that match their future career interests. In this study, we used the K-Means clustering algorithm, the TF-IDF approach, and a content-based filtering algorithm to provide relevant courses to students based on the desired job, with an explanation of why these courses are recommended. Our results illustrate that our method offers many advantages compared with other recommender systems: it converts a simple course recommendation into a tool for discovering skills. Since many recommendation systems work as black boxes, we designed our system to recommend relevant courses while explaining why they are recommended, which adds transparency to our system and confirms its reliability to students.
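The content-based core of such a pipeline can be sketched as follows; the toy course data, the cosine-similarity ranking, and the overlap-based explanation are illustrative assumptions, and the clustering step used in the paper is omitted.

```python
# Minimal sketch of a content-based course recommender with term-overlap
# explanations: represent the job's required skills and the course
# descriptions as TF-IDF vectors, rank courses by cosine similarity, and
# explain each recommendation by the skill terms the job and course share.
# Toy data and explanation format are assumptions, not the paper's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

courses = {
    "Databases":        "sql relational databases query optimisation",
    "Web Development":  "html css javascript frontend frameworks",
    "Machine Learning": "python statistics regression classification models",
}
job_skills = "python machine learning models sql"

vectorizer = TfidfVectorizer()
course_vecs = vectorizer.fit_transform(courses.values())
job_vec = vectorizer.transform([job_skills])

similarities = cosine_similarity(job_vec, course_vecs)[0]
for (name, text), score in sorted(zip(courses.items(), similarities),
                                  key=lambda x: -x[1]):
    shared = set(job_skills.split()) & set(text.split())
    print(f"{name}: score={score:.2f}, recommended because the job requires "
          f"{', '.join(sorted(shared)) or 'related skills'}")
```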
45

Dai, Yiling, Kyosuke Takami, Brendan Flanagan, and Hiroaki Ogata. "Beyond recommendation acceptance: explanation’s learning effects in a math recommender system". Research and Practice in Technology Enhanced Learning 19 (September 12, 2023): 020. http://dx.doi.org/10.58459/rptel.2024.19020.

Abstract:
Recommender systems can provide personalized advice on learning for individual students. Providing explanations of those recommendations is expected to increase the transparency and persuasiveness of the system and thus improve students’ adoption of the recommendations. Little research has explored the explanations’ practical effects on learning performance beyond the acceptance of recommended learning activities. Recommendation explanations can improve learning performance if they are designed to contribute to relevant learning skills. This study conducted a comparative experiment (N = 276) in high school classrooms, aiming to investigate whether the use of an explainable math recommender system improves students’ learning performance. We found that the presence of explanations had positive effects on students’ learning improvement and perceptions of the system, but not on the number of quizzes solved during the learning task. These results imply that recommendation explanations may affect students’ meta-cognitive skills and their perceptions, which further contribute to their learning improvement. When separating students by their prior math abilities, we found a significant correlation between the number of viewed recommendations and the final learning improvement for students with lower math abilities. This indicates that students with lower math abilities may benefit from reading about their learning progress as indicated in the explanations. For students with higher math abilities, learning improvement was more related to the behavior of selecting and solving recommended quizzes, which indicates the need for a more sophisticated and interactive recommender system.
46

Abu-Rasheed, Hasan, Christian Weber, Johannes Zenkert, Mareike Dornhöfer, and Madjid Fathi. "Transferrable Framework Based on Knowledge Graphs for Generating Explainable Results in Domain-Specific, Intelligent Information Retrieval". Informatics 9, no. 1 (January 19, 2022): 6. http://dx.doi.org/10.3390/informatics9010006.

Abstract:
In modern industrial systems, collected textual data accumulates over time, offering an important source of information for enhancing present and future industrial practices. Although many AI-based solutions have been developed in the literature for domain-specific information retrieval (IR) from this data, the explainability of these systems has rarely been investigated in such domain-specific environments. In addition to considering domain requirements within explainable intelligent IR, transferring an explainable IR algorithm to other domains remains an open-ended challenge. This is due to the high costs associated with the intensive customization and knowledge modelling required when developing new explainable solutions for each industrial domain. In this article, we present a transferable framework for generating domain-specific explanations for intelligent IR systems. The aim of our work is to provide a comprehensive approach for constructing explainable IR and recommendation algorithms that are capable of adapting to domain requirements and are usable in multiple domains at the same time. Our method utilizes knowledge graphs (KG) for modeling the domain knowledge. The KG provides a solid foundation for developing intelligent IR solutions. Utilizing the same KG, we develop graph-based components for generating textual and visual explanations of the retrieved information, taking into account the domain requirements and supporting transferability to other domain-specific environments through the structured approach. The use of the KG resulted in minimum-to-zero adjustments when creating explanations for multiple intelligent IR algorithms in multiple domains. We test our method within two different use cases, a semiconductor manufacturing centered use case and a job-to-applicant matching one. Our quantitative results show a high capability of our approach to generate high-level explanations for the end users. In addition, the developed explanation components were highly adaptable to both industrial domains without sacrificing the overall accuracy of the intelligent IR algorithm. Furthermore, a qualitative user study was conducted. We recorded a high level of acceptance from the users, who reported an enhanced overall experience with the explainable IR system.
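A minimal sketch of the general mechanism, namely verbalising a knowledge-graph path that connects the query entity with the retrieved item, is shown below; the toy graph, relation names, and explanation template are assumptions and do not reproduce the framework's domain modelling or visual explanations.

```python
# Minimal networkx sketch: explain a retrieval/recommendation result by finding
# a path in a knowledge graph between the query entity and the retrieved item
# and verbalising its edges. Toy graph and relations are assumptions.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("applicant_42", "Python", relation="has_skill")
kg.add_edge("Python", "data_engineer_role", relation="required_by")
kg.add_edge("applicant_42", "SQL", relation="has_skill")

def explain(graph, source, target):
    path = nx.shortest_path(graph, source, target)
    steps = [
        f"{u} --{graph.edges[u, v]['relation']}--> {v}"
        for u, v in zip(path, path[1:])
    ]
    return ", then ".join(steps)

print(explain(kg, "applicant_42", "data_engineer_role"))
# applicant_42 --has_skill--> Python, then Python --required_by--> data_engineer_role
```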
47

Alhejaili, Abdullah, and Shaheen Fatima. "Expressive Latent Feature Modelling for Explainable Matrix Factorisation based Recommender Systems". ACM Transactions on Interactive Intelligent Systems, May 2, 2022. http://dx.doi.org/10.1145/3530299.

Abstract:
Traditional matrix factorisation (MF) based recommender system methods, despite their success in making recommendations, lack explainability, as the latent features they produce are meaningless and cannot explain the recommendation. This paper introduces an MF-based explainable recommender system framework that utilises the user-item rating data and the available item information to model meaningful user and item latent features. These features are exploited to enhance the rating prediction accuracy and the recommendation explainability. Our proposed feature-based explainable recommender system framework utilises these meaningful user and item latent features to explain the recommendation without relying on private or external data. The recommendations are explained to the user using text messages and bar charts. Our proposed model has been evaluated in terms of rating prediction accuracy and the reasonableness of the explanation using six real-world benchmark datasets for movies, books, video games and fashion recommendation. The results show that the proposed model can produce accurate explainable recommendations.
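One simplified way to obtain meaningful latent features, not the model proposed in the paper, is to anchor the item factors to known attributes and fit the user factors by ridge regression, as in the sketch below; the genre data and the contribution-based explanation are illustrative assumptions.

```python
# Minimal numpy sketch: item factors are fixed to interpretable attribute
# indicators (genres), a user's factor vector is fitted by ridge regression on
# observed ratings, and the explanation lists the attributes that contribute
# most to a predicted score. Illustrative simplification, not the paper's model.
import numpy as np

genres = ["action", "comedy", "drama"]
# Item attribute matrix (items x genres) plays the role of interpretable item factors.
V = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 1, 1]], dtype=float)
ratings = {0: 5.0, 1: 4.0, 2: 1.0}      # one user's observed ratings (item -> rating)

items = np.array(list(ratings))
y = np.array([ratings[i] for i in items])
X = V[items]
# Ridge-regularised least squares gives an interpretable per-genre preference vector.
u = np.linalg.solve(X.T @ X + 0.1 * np.eye(len(genres)), X.T @ y)

target = 3                               # unseen item
contributions = u * V[target]
prediction = contributions.sum()
top = max(range(len(genres)), key=lambda g: contributions[g])
print(f"predicted rating {prediction:.2f}; recommended mainly because you seem "
      f"to like {genres[top]} (contribution {contributions[top]:.2f})")
```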
48

Markchom, Thanet, Huizhi Liang, and James Ferryman. "Explainable Meta-Path Based Recommender Systems". ACM Transactions on Recommender Systems, September 28, 2023. http://dx.doi.org/10.1145/3625828.

Abstract:
Meta-paths have been popularly used to provide explainability in recommendations. Although long/complicated meta-paths could represent complex user-item connectivity, they are not easy to interpret. This work tackles this problem by introducing a meta-path translation task. The objective is to translate a meta-path to its comparable explainable meta-paths that perform similarly in terms of recommendation but have higher explainability compared to the given one. We propose a definition of meta-path explainability to determine comparable explainable meta-paths and a meta-path grammar that allows comparable explainable meta-paths to be formed in a similar way as sentences in human languages. Based on this grammar, we propose a meta-path translation model, a sequence-to-sequence (Seq2Seq) model to translate a long and complicated meta-path to its comparable explainable meta-paths. Two novel datasets for meta-path translation were generated based on two real-world recommendation datasets. The experiments were conducted on these generated datasets. The results show that our model outperformed state-of-the-art Seq2Seq baselines regarding meta-path translation and maintained a better trade-off between accuracy and diversity/readability in predicting comparable explainable meta-paths. These results indicate that our model can effectively generate a group of explainable meta-paths as alternative explanations for those recommendations based on any given long/complicated meta-path.
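For illustration, the sketch below instantiates a short, human-readable meta-path (User–Item–Category–Item) on a toy heterogeneous graph and verbalises the resulting path as an explanation; it does not reproduce the Seq2Seq translation model, and the node types, graph, and template are assumptions.

```python
# Minimal sketch of meta-path-based explanation: walk a toy heterogeneous graph
# following a node-type sequence and report the concrete path as the reason for
# a recommendation. Not the meta-path translation model from the paper above.
from collections import defaultdict

node_type = {"alice": "User", "inception": "Item", "tenet": "Item", "sci-fi": "Category"}
edges = defaultdict(set)
for a, b in [("alice", "inception"), ("inception", "sci-fi"), ("sci-fi", "tenet")]:
    edges[a].add(b)
    edges[b].add(a)

def instances(meta_path, start):
    """Yield concrete paths whose node types follow `meta_path`."""
    def walk(path):
        if len(path) == len(meta_path):
            yield path
            return
        for nxt in edges[path[-1]]:
            if node_type[nxt] == meta_path[len(path)] and nxt not in path:
                yield from walk(path + [nxt])
    if node_type[start] == meta_path[0]:
        yield from walk([start])

for p in instances(["User", "Item", "Category", "Item"], "alice"):
    print(f"Recommend '{p[-1]}' to {p[0]}: they interacted with '{p[1]}', "
          f"which shares the category '{p[2]}'.")
```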
49

"Ontology Reasoning Towards Sentimental Product Recommendations Explanations". International Journal of Recent Technology and Engineering 8, n.º 3 (30 de setembro de 2019): 4706–9. http://dx.doi.org/10.35940/ijrte.c6852.098319.

Abstract:
In the last two decades, various organizations such as service industries, research communities, academia, and public institutions have been working intensively on sentiment analysis to extract and analyze public opinion. The reviews posted on social websites, commercial websites, etc. enable customers to share their points of view. Explainable recommendation algorithms help users by providing explainable recommendations, which improves user satisfaction. Recently, many researchers have proposed explainable recommendations. In this survey, firstly, various opinion-mining approaches are explored. Secondly, we review sentiment-based and ontology-based recommendation systems. Finally, prospects for research in opinion mining are discussed.
50

Yu, Dianer, Qian Li, Xiangmeng Wang, Qing Li, and Guandong Xu. "Counterfactual Explainable Conversational Recommendation". IEEE Transactions on Knowledge and Data Engineering, 2023, 1–13. http://dx.doi.org/10.1109/tkde.2023.3322403.
