Selected scholarly literature on the topic "Explainable recommendation systems"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference papers, and other scholarly sources relevant to the topic "Explainable recommendation systems".

Next to every source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Journal articles on the topic "Explainable recommendation systems"

1

Pasrija, Vatesh, and Supriya Pasrija. "Demystifying Recommendations: Transparency and Explainability in Recommendation Systems". International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 1376–83. http://dx.doi.org/10.22214/ijraset.2024.58541.

Abstract:
Abstract: Recommendation algorithms are widely used, however many consumers want more clarity on why specific goods are recommended to them. The absence of explainability jeopardizes user trust, satisfaction, and potentially privacy. Improving transparency is difficult and involves the need for flexible interfaces, privacy protection, scalability, and customisation. Explainable recommendations provide substantial advantages such as enhancing relevance assessment, bolstering user interactions, facilitating system monitoring, and fostering accountability. Typical methods include giving summaries of the fundamental approach, emphasizing significant data points, and utilizing hybrid recommendation models. Case studies demonstrate that openness has assisted platforms such as YouTube and Spotify in achieving more than a 10% increase in key metrics like click-through rate. Additional research should broaden the methods for providing explanations, increase real-world implementation in other industries, guarantee human-centered supervision of suggestions, and promote consumer trust by following ethical standards. Accurate recommendations are essential. The future involves developing technologies that provide users with information while honoring their autonomy.
2

Lai, Kai-Huang, Zhe-Rui Yang, Pei-Yuan Lai, Chang-Dong Wang, Mohsen Guizani, and Min Chen. "Knowledge-Aware Explainable Reciprocal Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8636–44. http://dx.doi.org/10.1609/aaai.v38i8.28708.

Abstract:
Reciprocal recommender systems (RRS) have been widely used in online platforms such as online dating and recruitment. They can simultaneously fulfill the needs of both parties involved in the recommendation process. Due to the inherent nature of the task, interaction data is relatively sparse compared to other recommendation tasks. Existing works mainly address this issue through content-based recommendation methods. However, these methods often implicitly model textual information from a unified perspective, making it challenging to capture the distinct intentions held by each party, which further leads to limited performance and the lack of interpretability. In this paper, we propose a Knowledge-Aware Explainable Reciprocal Recommender System (KAERR), which models metapaths between two parties independently, considering their respective perspectives and requirements. Various metapaths are fused using an attention-based mechanism, where the attention weights unveil dual-perspective preferences and provide recommendation explanations for both parties. Extensive experiments on two real-world datasets from diverse scenarios demonstrate that the proposed model outperforms state-of-the-art baselines, while also delivering compelling reasons for recommendations to both parties.
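To make the attention-based metapath fusion summarized above more concrete, here is a minimal Python/NumPy sketch in which a few metapath embeddings are scored against a query vector, fused by their softmax weights, and the weights themselves are read off as the explanation. The metapath names, dimensions, and bilinear scoring form are illustrative assumptions, not details of the KAERR model.

import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy embeddings for three metapaths seen from one party's side of the match
# (names and sizes are invented for illustration).
metapath_names = ["shared_skill", "past_application", "similar_profile"]
metapath_embs = rng.normal(size=(3, 8))   # one vector per metapath
query = rng.normal(size=8)                # context vector for this party
W = 0.1 * rng.normal(size=(8, 8))         # bilinear scoring weights

scores = metapath_embs @ W @ query        # relevance of each metapath
alpha = softmax(scores)                   # attention weights
fused = alpha @ metapath_embs             # fused representation used for ranking

# The weights double as an explanation of which connection drove the match.
for name, weight in sorted(zip(metapath_names, alpha), key=lambda t: -t[1]):
    print(f"{name}: {weight:.2f}")

In a reciprocal setting, the same computation would be run once from each party's side, so each side receives its own ranking of metapaths as an explanation.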
3

Leal, Fátima, Bruno Veloso, Benedita Malheiro, Juan C. Burguillo, Adriana E. Chis, and Horacio González-Vélez. "Stream-based explainable recommendations via blockchain profiling". Integrated Computer-Aided Engineering 29, no. 1 (December 28, 2021): 105–21. http://dx.doi.org/10.3233/ica-210668.

Abstract:
Explainable recommendations enable users to understand why certain items are suggested and, ultimately, nurture system transparency, trustworthiness, and confidence. Large crowdsourcing recommendation systems ought to crucially promote authenticity and transparency of recommendations. To address such challenge, this paper proposes the use of stream-based explainable recommendations via blockchain profiling. Our contribution relies on chained historical data to improve the quality and transparency of online collaborative recommendation filters – Memory-based and Model-based – using, as use cases, data streamed from two large tourism crowdsourcing platforms, namely Expedia and TripAdvisor. Building historical trust-based models of raters, our method is implemented as an external module and integrated with the collaborative filter through a post-recommendation component. The inter-user trust profiling history, traceability and authenticity are ensured by blockchain, since these profiles are stored as a smart contract in a private Ethereum network. Our empirical evaluation with HotelExpedia and Tripadvisor has consistently shown the positive impact of blockchain-based profiling on the quality (measured as recall) and transparency (determined via explanations) of recommendations.
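The post-recommendation step described above can be illustrated with a toy trust-weighted aggregation: rater trust profiles (kept on a private Ethereum network in the paper, mocked here as an in-memory dict) reweight the ratings behind each candidate item, and the weighting itself is surfaced as the explanation. The names, numbers, and weighting formula below are assumptions for illustration only.

import numpy as np

# Hypothetical inter-user trust profiles; in the paper these live in a smart contract.
trust = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

# Ratings of two candidate hotels by those raters (item -> {rater: rating}).
ratings = {
    "hotel_A": {"alice": 4.0, "bob": 5.0},
    "hotel_B": {"carol": 4.5, "bob": 3.0},
}

def trust_weighted_score(item):
    """Weight each rating by the rater's trust and return the score plus an explanation."""
    rs = ratings[item]
    weights = np.array([trust[u] for u in rs])
    values = np.array([rs[u] for u in rs])
    score = float(weights @ values / weights.sum())
    explanation = ", ".join(f"{u} (trust {trust[u]:.1f}) rated {rs[u]}" for u in rs)
    return score, explanation

for item in ratings:
    score, why = trust_weighted_score(item)
    print(f"{item}: {score:.2f} because {why}")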
4

Yang, Mengyuan, Mengying Zhu, Yan Wang, Linxun Chen, Yilei Zhao, Xiuyuan Wang, Bing Han, Xiaolin Zheng, and Jianwei Yin. "Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9250–59. http://dx.doi.org/10.1609/aaai.v38i8.28777.

Abstract:
Large language model-based explainable recommendation (LLM-based ER) systems can provide remarkable human-like explanations and have widely received attention from researchers. However, the original LLM-based ER systems face three low-quality problems in their generated explanations, i.e., lack of personalization, inconsistency, and questionable explanation data. To address these problems, we propose a novel LLM-based ER model denoted as LLM2ER to serve as a backbone and devise two innovative explainable quality reward models for fine-tuning such a backbone in a reinforcement learning paradigm, ultimately yielding a fine-tuned model denoted as LLM2ER-EQR, which can provide high-quality explanations. LLM2ER-EQR can generate personalized, informative, and consistent high-quality explanations learned from questionable-quality explanation datasets. Extensive experiments conducted on three real-world datasets demonstrate that our model can generate fluent, diverse, informative, and highly personalized explanations.
5

Ai, Qingyao, Vahid Azizi, Xu Chen, and Yongfeng Zhang. "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation". Algorithms 11, no. 9 (September 13, 2018): 137. http://dx.doi.org/10.3390/a11090137.

Abstract:
Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms—especially the collaborative filtering (CF)- based approaches with shallow or deep models—usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users’ historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.
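As a rough illustration of the soft-matching idea above, the Python sketch below uses randomly initialized stand-ins for learned knowledge-base embeddings and picks, via a TransE-style translation, the relation that most plausibly links the user to the recommended item; that relation would anchor the textual explanation. The entity and relation names, the distance measure, and the selection rule are assumptions, not the paper's actual algorithm.

import numpy as np

rng = np.random.default_rng(1)
dim = 16

# Stand-ins for jointly learned embeddings of heterogeneous entities and relations.
entities = {name: rng.normal(size=dim) for name in
            ["user_42", "item_rec", "item_prev", "brand_X", "cat_shoes"]}
relations = {name: rng.normal(size=dim) for name in
             ["purchase", "also_bought", "belongs_to", "produced_by"]}

def translate(head, relation):
    """TransE-style composition: head + relation should land near the tail entity."""
    return entities[head] + relations[relation]

def relation_distance(rel):
    """How close does user_42, translated by `rel`, land to the recommended item?"""
    return np.linalg.norm(translate("user_42", rel) - entities["item_rec"])

# Soft matching: the closest relational translation is used to phrase the explanation.
best_rel = min(relations, key=relation_distance)
print(f"Recommend item_rec to user_42 via relation '{best_rel}' "
      f"(distance {relation_distance(best_rel):.2f})")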
6

Cho, Gyungah, Pyoung-seop Shim, and Jaekwang Kim. "Explainable B2B Recommender System for Potential Customer Prediction Using KGAT". Electronics 12, no. 17 (August 22, 2023): 3536. http://dx.doi.org/10.3390/electronics12173536.

Abstract:
The adoption of recommender systems in business-to-business (B2B) can make the management of companies more efficient. Although the importance of recommendation is increasing with the expansion of B2B e-commerce, not enough studies on B2B recommendations have been conducted. Due to several differences between B2B and business-to-consumer (B2C), the B2B recommender system should be defined differently. This paper presents a new perspective on the explainable B2B recommender system using the knowledge graph attention network for recommendation (KGAT). Unlike traditional recommendation systems that suggest products to consumers, this study focuses on recommending potential buyers to sellers. Additionally, the utilization of the KGAT attention mechanisms enables the provision of explanations for each company’s recommendations. The Korea Electronic Taxation System Association provides the Market Transaction Dataset in South Korea, and this research shows how the dataset is utilized in the knowledge graph (KG). The main tasks can be summarized in three points: (i) suggesting the application of an explainable recommender system in B2B for recommending potential customers, (ii) extracting the performance-enhancing features of a knowledge graph, and (iii) enhancing keyword extraction for trading items to improve recommendation performance. We can anticipate providing good insight into the development of the industry via the utilization of the B2B recommendation of potential customer prediction.
7

Wang, Tongxuan, Xiaolong Zheng, Saike He, Zhu Zhang, and Desheng Dash Wu. "Learning user-item paths for explainable recommendation". IFAC-PapersOnLine 53, no. 5 (2020): 436–40. http://dx.doi.org/10.1016/j.ifacol.2021.04.119.
8

Guesmi, Mouadh, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, Clara Siepmann, Hoda Ghanbarzadeh, and Rawaa Alatrash. "Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System". Information 14, no. 7 (July 14, 2023): 401. http://dx.doi.org/10.3390/info14070401.

Abstract:
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
9

Huang, Xiao, Pengjie Ren, Zhaochun Ren, Fei Sun, Xiangnan He, Dawei Yin, and Maarten de Rijke. "Report on the international workshop on natural language processing for recommendations (NLP4REC 2020) workshop held at WSDM 2020". ACM SIGIR Forum 54, no. 1 (June 2020): 1–5. http://dx.doi.org/10.1145/3451964.3451970.

Abstract:
This paper summarizes the outcomes of the International Workshop on Natural Language Processing for Recommendations (NLP4REC 2020), held in Houston, USA, on February 7, 2020, during WSDM 2020. The purpose of this workshop was to explore the potential research topics and industrial applications in leveraging natural language processing techniques to tackle the challenges in constructing more intelligent recommender systems. Specific topics included, but were not limited to knowledge-aware recommendation, explainable recommendation, conversational recommendation, and sequential recommendation.
10

Li, Lei, Yongfeng Zhang, and Li Chen. "Personalized Prompt Learning for Explainable Recommendation". ACM Transactions on Information Systems 41, no. 4 (March 23, 2023): 1–26. http://dx.doi.org/10.1145/3580488.

Abstract:
Providing user-understandable explanations to justify recommendations could help users better understand the recommended items, increase the system’s ease of use, and gain users’ trust. A typical approach to realize it is natural language generation. However, previous works mostly adopt recurrent neural networks to meet the ends, leaving the potentially more effective pre-trained Transformer models under-explored. In fact, user and item IDs, as important identifiers in recommender systems, are inherently in different semantic space as words that pre-trained models were already trained on. Thus, how to effectively fuse IDs into such models becomes a critical issue. Inspired by recent advancement in prompt learning, we come up with two solutions: find alternative words to represent IDs (called discrete prompt learning) and directly input ID vectors to a pre-trained model (termed continuous prompt learning). In the latter case, ID vectors are randomly initialized but the model is trained in advance on large corpora, so they are actually in different learning stages. To bridge the gap, we further propose two training strategies: sequential tuning and recommendation as regularization. Extensive experiments show that our continuous prompt learning approach equipped with the training strategies consistently outperforms strong baselines on three datasets of explainable recommendation.
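The continuous-prompt idea described above, i.e. feeding user and item ID vectors into a pre-trained language model alongside word embeddings, can be sketched roughly as follows. GPT-2 from the Hugging Face transformers library is used only as a convenient stand-in backbone; the two-token prompt layout, the toy explanation text, and the embedding sizes are assumptions, and the paper's training strategies (sequential tuning, recommendation as regularization) are not reproduced.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
d = lm.config.n_embd

# Randomly initialized user/item ID embeddings (these would be trained).
user_emb = torch.nn.Embedding(1000, d)
item_emb = torch.nn.Embedding(5000, d)

user_id, item_id = torch.tensor([42]), torch.tensor([314])
explanation = "great battery life and a bright screen"
ids = tok(explanation, return_tensors="pt").input_ids

# Continuous prompt: [user vector, item vector] prepended to the word embeddings.
prompt = torch.stack([user_emb(user_id), item_emb(item_id)], dim=1)  # (1, 2, d)
words = lm.transformer.wte(ids)                                      # (1, T, d)
inputs_embeds = torch.cat([prompt, words], dim=1)

# Ignore the two prompt positions in the language-modelling loss.
labels = torch.cat([torch.full((1, 2), -100), ids], dim=1)
out = lm(inputs_embeds=inputs_embeds, labels=labels)
print("toy LM loss:", out.loss.item())

During training, gradients from this loss would update the ID embeddings (and, depending on the tuning strategy, the backbone as well).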

Theses on the topic "Explainable recommendation systems"

1

Attolou, Hervé-Madelein. "Explications pour des recommandations manquantes basées sur les graphes" [Explanations for missing recommendations based on graphs]. Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1337.

Abstract:
In the era of big data, Recommendation Systems play a pivotal role in helping users navigate and discover relevant content from vast amounts of data. While modern Recommendation Systems have evolved to provide accurate and relevant recommendations, they often fall short in explaining their decisions to users. This lack of transparency raises important questions about trust and user engagement, especially in cases where certain expected items are not recommended. To address this, recent research has focused on developing explainable Recommendation Systems, which provide users with insights into why certain items are recommended or omitted. This thesis explores the specific area of Why-Not Explanations, which focus on explaining why certain items are missing from the recommendation list. The need for Why-Not Explanations is particularly crucial in complex recommendation scenarios, where the absence of certain recommendations can lead to user dissatisfaction or mistrust. For instance, a user on an e-commerce platform might wonder why a specific product was not recommended despite fulfilling certain criteria. By providing explanations for missing recommendations, we aim to improve transparency, user satisfaction, engagement, and the overall trustworthiness of the system.
The main contribution of this thesis is the development of EMiGRe (Explainable Missing Graph REcommender), a novel framework that provides actionable Why-Not Explanations for graph-based Recommendation Systems. Unlike traditional explainability methods, which focus on justifying why certain items were recommended, EMiGRe focuses on the absence of specific items from recommendation lists. The framework operates by analyzing the user's interactions within a Heterogeneous Information Graph (HIN) model of the dataset, identifying key actions or relations that, when modified, would have led to the recommendation of the missing item. EMiGRe provides two modes of explanation:
• Remove mode identifies existing actions or interactions that prevent the system from recommending the desired item and suggests removing them.
• Add mode suggests additional actions or items that, if interacted with, would trigger the recommendation of the missing item.
To generate explanations in both Add and Remove modes, we explore the solution space using a set of heuristics tailored to specific objectives: the Incremental heuristic prioritizes faster computation by gradually growing the set of selected items, potentially overlooking minimal explanations; the Powerset heuristic aims to find smaller explanations by searching the solution space thoroughly; and the Exhaustive Comparison heuristic assesses the precise contribution of each neighbor to the Why-Not Item (WNI) relative to all other items, increasing the success rate.
To validate the effectiveness of the EMiGRe framework, extensive experimental evaluations were conducted on both synthetic and real-world datasets, including data from Amazon, simulating a real-world e-commerce scenario, and the Food.com dataset, representing a recommendation problem on a recipe platform. The experimental results show that EMiGRe provides good-quality Why-Not Explanations with minimal computational overhead. Specifically, the system demonstrates an improvement in explanation success rates over traditional brute-force methods, while maintaining acceptable explanation size and processing time. Moreover, this thesis introduces a novel evaluation methodology for Why-Not Explanations, defining metrics such as success rate, explanation size, and processing time to measure the quality and efficiency of explanations.
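To give a feel for the Remove mode described above, here is a toy brute-force search over a tiny co-occurrence recommender: it looks for the smallest subset of the user's own interactions whose removal would turn the missing ("why-not") item into the top recommendation. The interaction data, the co-occurrence scoring rule, and the subset search are illustrative assumptions; EMiGRe's heterogeneous-graph model and its Incremental, Powerset, and Exhaustive Comparison heuristics are not reproduced here.

from itertools import combinations

# Toy user -> item interaction sets; purely illustrative.
interactions = {
    "u1": {"A", "B", "C"},
    "u2": {"A", "B", "D"},
    "u3": {"C", "E"},
    "u4": {"B", "D"},
}

def score(user, item, removed=frozenset()):
    """Co-occurrence score: overlap between the user's kept items and other users of `item`."""
    kept = interactions[user] - removed
    return sum(len(kept & items)
               for other, items in interactions.items()
               if other != user and item in items)

def recommend(user, removed=frozenset()):
    """Top item the user has not interacted with, under the co-occurrence score."""
    candidates = sorted({i for items in interactions.values() for i in items}
                        - interactions[user])
    return max(candidates, key=lambda i: score(user, i, removed))

def why_not_remove(user, wanted, max_size=2):
    """Remove-mode explanation: smallest subset of the user's own actions whose
    removal makes `wanted` the top recommendation (None if not found)."""
    own = sorted(interactions[user])
    for k in range(1, max_size + 1):
        for subset in combinations(own, k):
            if recommend(user, frozenset(subset)) == wanted:
                return subset
    return None

print("current top recommendation for u1:", recommend("u1"))   # -> D
print("why-not explanation for E:", why_not_remove("u1", "E")) # -> remove ('A', 'B')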
2

Hsu, Pei-Ying (許珮瑩). "A Novel Explainable Mutual Fund Recommendation System Based on Deep Learning Techniques with Knowledge Graph Embeddings". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/wur49w.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Information Management, academic year 107 (2018–2019).
Since deep learning based models have gained success in various fields during recent years, many recommendation systems also start to take advantage of the deep learning techniques. However, while the deep learning based recommendation systems have achieved high recommendation performance, the lack of interpretability may reduce users' trust and satisfaction, while limiting the model to wide adoption in the real world. As a result, to strike a balance between high accuracy and interpretability, or even obtain both of them at the same time, has become a popular issue among the researches of recommendation systems. In this thesis, we would like to predict and recommend the funds that would be purchased by the customers in the next month, while providing explanations simultaneously. To achieve the goal, we leverage the structure of knowledge graph, and take advantage of deep learning techniques to embed customers and funds features to a unified latent space. We fully utilize the structure knowledge which cannot be learned by the traditional deep learning models, and get the personalized recommendations and explanations. Moreover, we extend the explanations to more complex ones by changing the training procedure of the model, and proposed a measure to rate for the customized explanations while considering strength and uniqueness of the explanations at the same time. Finally, we regard that the knowledge graph based structure could be extended to other applications, and proposed some possible special recommendations accordingly. By evaluating on the dataset of mutual fund transaction records, we verify the effectiveness of our model to provide precise recommendations, and also evaluate the assumptions that our model could utilize the structure knowledge well. Last but not least, we conduct some case study of explanations to demonstrate the effectiveness of our model to provide usual explanations, complex explanations, and other special recommendations.
3

Palesi, Luciano Alessandro Ipsaro. "Human Centered Big Data Analytics for Smart Applications". Doctoral thesis, 2022. http://hdl.handle.net/2158/1282883.

Abstract:
This thesis is concerned with Smart Applications. Smart applications are all those applications that incorporate data-driven, actionable insights in the user experience, and they allow in different contexts users to complete actions or make decisions efficiently. The differences between smart applications and traditional applications are mainly that the former are dynamic and evolve on the basis of intuition, user feedback or new data. Moreover, smart applications are data-driven and linked to the context of use. There are several aspects to be considered in the development of intelligent applications, such as machine learning algorithms for producing insights, privacy, data security and ethics. The purpose of this thesis is to study and develop human centered algorithms and systems in different contexts (retail, industry, environment and smart city) with particular attention to big data analysis and prediction techniques. The second purpose of this thesis is to study and develop techniques for the interpretation of results in order to make artificial intelligence algorithms "explainable". Finally, the third and last purpose is to develop solutions in GDPR compliant environments and then secure systems that respect user privacy.

Book chapters on the topic "Explainable recommendation systems"

1

Zong, Xiaoning, Yong Liu, Yonghui Xu, Yixin Zhang, Zhiqi Shen, Yonghua Yang, and Lizhen Cui. "SAER: Sentiment-Opinion Alignment Explainable Recommendation". In Database Systems for Advanced Applications, 315–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00126-0_24.
2

Ma, Boxuan, Tianyuan Yang, and Baofeng Ren. "A Survey on Explainable Course Recommendation Systems". In Distributed, Ambient and Pervasive Interactions, 273–87. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60012-8_17.
3

Wang, Huiying, Yue Kou, Derong Shen, and Tiezheng Nie. "An Explainable Recommendation Method Based on Multi-timeslice Graph Embedding". In Web Information Systems and Applications, 84–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60029-7_8.
4

Yin, Ziyu, Yue Kou, Guangqi Wang, Derong Shen, and Tiezheng Nie. "Explainable Recommendation via Neural Rating Regression and Fine-Grained Sentiment Perception". In Web Information Systems and Applications, 580–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87571-8_50.
5

Zhang, Luhao, Ruiyu Fang, Tianchi Yang, Maodi Hu, Tao Li, Chuan Shi, and Dong Wang. "A Joint Framework for Explainable Recommendation with Knowledge Reasoning and Graph Representation". In Database Systems for Advanced Applications, 351–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00129-1_30.
6

Valdiviezo-Diaz, Priscila. "Pattern Detection in e-Commerce Using Clustering Techniques to Explainable Products Recommendation". In Lecture Notes in Networks and Systems, 700–713. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-66329-1_45.
7

Elazab, Fatma, Alia El Bolock, Cornelia Herbert, and Slim Abdennadher. "XReC: Towards a Generic Module-Based Framework for Explainable Recommendation Based on Character". In Highlights in Practical Applications of Agents, Multi-Agent Systems, and Social Good. The PAAMS Collection, 17–27. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85710-3_2.
8

Girija, J., and Sree Pragna Machupalli. "Innovative Precision Medicine: An Explainable AI-Driven Biomarker-Guided Recommendation System with Multilayer FeedForward Neural Network Model". In Lecture Notes in Networks and Systems, 438–47. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-66410-6_35.
9

Elazab, Fatma, Alia El Bolock, Cornelia Herbert, and Slim Abdennadher. "Multi-modal Explainable Music Recommendation Based on the Relations Between Character and Music Listening Behavior". In Highlights in Practical Applications of Agents, Multi-Agent Systems, and Cognitive Mimetics. The PAAMS Collection, 92–103. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37593-4_8.

Conference papers on the topic "Explainable recommendation systems"

1

Turgut, Özlem, İbrahim Kök, and Suat Özdemir. "AgroXAI: Explainable AI-Driven Crop Recommendation System for Agriculture 4.0". In 2024 IEEE International Conference on Big Data (BigData), 7208–17. IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825771.
2

Kumar, Surendra, and Mohit Kumar. "Enhancing Agricultural Decision-Making through an Explainable AI-Based Crop Recommendation System". In 2024 International Conference on Signal Processing and Advance Research in Computing (SPARC), 1–6. IEEE, 2024. https://doi.org/10.1109/sparc61891.2024.10829064.
3

Praseptiawan, Mugi, M. Fikri Damar Muchtarom, Nabila Muthia Putri, Ahmad Naim Che Pee, Mohd Hafiz Zakaria, and Meida Cahyo Untoro. "Mooc Course Recommendation System Model with Explainable AI (XAI) Using Content Based Filtering Method". In 2024 11th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), 144–47. IEEE, 2024. https://doi.org/10.1109/eecsi63442.2024.10776491.
4

Praseptiawan, Mugi, Nabila Muthia Putri, M. Fikri Damar Muchtarom, Mohd Hafiz Zakaria, and Ahmad Naim Che Pee. "Application of Collaborative Filtering and Explainable AI Methods in Recommendation System Modeling to Predict MOOC Course Preferences". In 2024 2nd International Symposium on Information Technology and Digital Innovation (ISITDI), 228–33. IEEE, 2024. https://doi.org/10.1109/isitdi62380.2024.10797073.
5

Vijayaraghavan, Sairamvinay, and Prasant Mohapatra. "Stability of Explainable Recommendation". In RecSys '23: Seventeenth ACM Conference on Recommender Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3604915.3608853.
6

Tsukuda, Kosetsu, and Masataka Goto. "Explainable Recommendation for Repeat Consumption". In RecSys '20: Fourteenth ACM Conference on Recommender Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3383313.3412230.
7

Shulman, Eyal, and Lior Wolf. "Meta Decision Trees for Explainable Recommendation Systems". In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375627.3375876.
8

Pesovski, Ivica, Ana Madevska Bogdanova, and Vladimir Trajkovik. "Systematic Review of the published Explainable Educational Recommendation Systems". In 2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET). IEEE, 2022. http://dx.doi.org/10.1109/ithet56107.2022.10032029.
9

Hou, Min, Le Wu, Enhong Chen, Zhi Li, Vincent W. Zheng, and Qi Liu. "Explainable Fashion Recommendation: A Semantic Attribute Region Guided Approach". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/650.

Abstract:
In fashion recommender systems, each product usually consists of multiple semantic attributes (e.g., sleeves, collar, etc). When making cloth decisions, people usually show preferences for different semantic attributes (e.g., the clothes with v-neck collar). Nevertheless, most previous fashion recommendation models comprehend the clothing images with a global content representation and lack detailed understanding of users' semantic preferences, which usually leads to inferior recommendation performance. To bridge this gap, we propose a novel Semantic Attribute Explainable Recommender System (SAERS). Specifically, we first introduce a fine-grained interpretable semantic space. We then develop a Semantic Extraction Network (SEN) and Fine-grained Preferences Attention (FPA) module to project users and items into this space, respectively. With SAERS, we are capable of not only providing cloth recommendations for users, but also explaining the reason why we recommend the cloth through intuitive visual attribute semantic highlights in a personalized manner. Extensive experiments conducted on real-world datasets clearly demonstrate the effectiveness of our approach compared with the state-of-the-art methods.
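As a loose illustration of attribute-level preference attention like that described above, the snippet below scores a user vector against per-attribute region features of one garment and reports the highest-weighted attribute as the candidate visual explanation. The feature values, attribute names, and dot-product scoring are invented for the example; SAERS's Semantic Extraction Network and Fine-grained Preferences Attention module are not reproduced.

import numpy as np

rng = np.random.default_rng(7)
d = 32

# Stand-ins for region features extracted per semantic attribute of one garment.
attribute_features = {
    "collar": rng.normal(size=d),
    "sleeves": rng.normal(size=d),
    "pattern": rng.normal(size=d),
    "length": rng.normal(size=d),
}
user_vec = rng.normal(size=d)  # stand-in for the learned user preference vector

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

names = list(attribute_features)
scores = np.array([attribute_features[a] @ user_vec for a in names])
alpha = softmax(scores)             # per-attribute preference weights
item_score = float(alpha @ scores)  # attention-pooled matching score

top = names[int(np.argmax(alpha))]
print(f"item score {item_score:.2f}; highlight the '{top}' region as the explanation")
for a, w in zip(names, alpha):
    print(f"  {a}: weight {w:.2f}")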
10

Zarzour, Hafed, Mohammad Alsmirat, and Yaser Jararweh. "Using Deep Learning for Positive Reviews Prediction in Explainable Recommendation Systems". In 2022 13th International Conference on Information and Communication Systems (ICICS). IEEE, 2022. http://dx.doi.org/10.1109/icics55353.2022.9811151.
