Contents
Selected scholarly literature on the topic "Explainable recommendation systems"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Explainable recommendation systems."
Next to every work in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.
Journal articles on the topic "Explainable recommendation systems"
Pasrija, Vatesh, and Supriya Pasrija. "Demystifying Recommendations: Transparency and Explainability in Recommendation Systems." International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (29.02.2024): 1376–83. http://dx.doi.org/10.22214/ijraset.2024.58541.
Lai, Kai-Huang, Zhe-Rui Yang, Pei-Yuan Lai, Chang-Dong Wang, Mohsen Guizani, and Min Chen. "Knowledge-Aware Explainable Reciprocal Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (24.03.2024): 8636–44. http://dx.doi.org/10.1609/aaai.v38i8.28708.
Leal, Fátima, Bruno Veloso, Benedita Malheiro, Juan C. Burguillo, Adriana E. Chis, and Horacio González-Vélez. "Stream-based explainable recommendations via blockchain profiling." Integrated Computer-Aided Engineering 29, no. 1 (28.12.2021): 105–21. http://dx.doi.org/10.3233/ica-210668.
Yang, Mengyuan, Mengying Zhu, Yan Wang, Linxun Chen, Yilei Zhao, Xiuyuan Wang, Bing Han, Xiaolin Zheng, and Jianwei Yin. "Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (24.03.2024): 9250–59. http://dx.doi.org/10.1609/aaai.v38i8.28777.
Ai, Qingyao, Vahid Azizi, Xu Chen, and Yongfeng Zhang. "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation." Algorithms 11, no. 9 (13.09.2018): 137. http://dx.doi.org/10.3390/a11090137.
Cho, Gyungah, Pyoung-seop Shim, and Jaekwang Kim. "Explainable B2B Recommender System for Potential Customer Prediction Using KGAT." Electronics 12, no. 17 (22.08.2023): 3536. http://dx.doi.org/10.3390/electronics12173536.
Wang, Tongxuan, Xiaolong Zheng, Saike He, Zhu Zhang, and Desheng Dash Wu. "Learning user-item paths for explainable recommendation." IFAC-PapersOnLine 53, no. 5 (2020): 436–40. http://dx.doi.org/10.1016/j.ifacol.2021.04.119.
Guesmi, Mouadh, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, Clara Siepmann, Hoda Ghanbarzadeh, and Rawaa Alatrash. "Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System." Information 14, no. 7 (14.07.2023): 401. http://dx.doi.org/10.3390/info14070401.
Huang, Xiao, Pengjie Ren, Zhaochun Ren, Fei Sun, Xiangnan He, Dawei Yin, and Maarten de Rijke. "Report on the international workshop on natural language processing for recommendations (NLP4REC 2020) workshop held at WSDM 2020." ACM SIGIR Forum 54, no. 1 (June 2020): 1–5. http://dx.doi.org/10.1145/3451964.3451970.
Li, Lei, Yongfeng Zhang, and Li Chen. "Personalized Prompt Learning for Explainable Recommendation." ACM Transactions on Information Systems 41, no. 4 (23.03.2023): 1–26. http://dx.doi.org/10.1145/3580488.
Dissertations on the topic "Explainable recommendation systems"
Attolou, Hervé-Madelein. "Explications pour des recommandations manquantes basées sur les graphes [Explanations for missing recommendations based on graphs]." Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1337.
In the era of big data, Recommendation Systems play a pivotal role in helping users navigate and discover relevant content from vast amounts of data. While modern Recommendation Systems have evolved to provide accurate and relevant recommendations, they often fall short in explaining their decisions to users. This lack of transparency raises important questions about trust and user engagement, especially in cases where certain expected items are not recommended. To address this, recent research has focused on developing explainable Recommendation Systems, which provide users with insights into why certain items are recommended or omitted.

This thesis explores the specific area of Why-Not Explanations, which focuses on explaining why certain items are missing from the recommendation list. The need for Why-Not Explanations is particularly crucial in complex recommendation scenarios, where the absence of certain recommendations can lead to user dissatisfaction or mistrust. For instance, a user on an e-commerce platform might wonder why a specific product was not recommended despite fulfilling certain criteria. By providing explanations for missing recommendations, we aim to improve transparency, user satisfaction, engagement, and the overall trustworthiness of the system.

The main contribution of this thesis is EMiGRe (Explainable Missing Graph REcommender), a novel framework that provides actionable Why-Not Explanations for graph-based Recommendation Systems. Unlike traditional explainability methods, which focus on justifying why certain items were recommended, EMiGRe focuses on the absence of specific items from recommendation lists. The framework analyzes the user's interactions within a Heterogeneous Information Graph (HIN) model of the dataset, identifying key actions or relations that, when modified, would have led to the recommendation of the missing item. EMiGRe provides two modes of explanation:
• Remove Mode identifies existing actions or interactions that prevent the system from recommending the desired item and suggests removing them.
• Add Mode suggests additional actions or items that, if interacted with, would trigger the recommendation of the missing item.

To generate explanations in both modes, the framework explores the solution space using a set of heuristics tailored to specific objectives: Incremental, Powerset, and Exhaustive Comparison. The Incremental heuristic prioritizes faster computation by gradually growing the set of selected items, potentially overlooking minimal explanations. In contrast, the Powerset heuristic aims to find smaller explanations by thoroughly searching the solution space. The Exhaustive Comparison heuristic assesses the precise contribution of each neighbor to the Why-Not Item (WNI) relative to all other items, increasing the success rate.

To validate the effectiveness of EMiGRe, extensive experimental evaluations were conducted on both synthetic and real-world datasets, including an Amazon dataset that simulates a real-world e-commerce scenario and the Food dataset, which represents a recommendation problem on a recipe platform. The experimental results show that EMiGRe provides good-quality Why-Not Explanations: the system improves explanation success rates compared to traditional brute-force methods while maintaining acceptable explanation size and processing time. Moreover, the thesis introduces a novel evaluation methodology for Why-Not Explanations, defining metrics such as success rate, explanation size, and processing time to measure the quality and efficiency of explanations.
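The Remove-mode idea described in the abstract above can be illustrated with a small counterfactual search sketch. The snippet below is not EMiGRe and does not reproduce its heuristics or HIN model; it is a minimal, size-bounded brute-force illustration assuming a hypothetical black-box recommend function that maps a user's interaction set to a ranked item list.

```python
from itertools import combinations
from typing import Callable, List, Optional, Set

# Hypothetical black-box recommender: maps a set of user interactions
# to a ranked list of item ids (best first). It stands in for any
# graph-based recommender and is not the EMiGRe scorer.
Recommender = Callable[[Set[str]], List[str]]


def remove_mode_explanation(
    recommend: Recommender,
    interactions: Set[str],
    why_not_item: str,
    top_k: int = 10,
    max_size: int = 2,
) -> Optional[Set[str]]:
    """Look for a small set of existing interactions whose removal
    would bring `why_not_item` into the top-k recommendations.

    This is a plain size-bounded enumeration; the thesis's Incremental,
    Powerset, and Exhaustive Comparison heuristics organize this search
    far more efficiently.
    """
    for size in range(1, max_size + 1):
        for candidate in combinations(sorted(interactions), size):
            remaining = interactions - set(candidate)
            ranked = recommend(remaining)
            if why_not_item in ranked[:top_k]:
                return set(candidate)  # actionable Why-Not Explanation
    return None  # no explanation found within the search budget
```

Add mode would mirror this loop over candidate interactions to insert rather than remove.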
Hsu, Pei-Ying, and 許珮瑩. "A Novel Explainable Mutual Fund Recommendation System Based on Deep Learning Techniques with Knowledge Graph Embeddings." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/wur49w.
National Chiao Tung University (國立交通大學)
Institute of Information Management (資訊管理研究所)
Academic year 107 (2018–2019)
Since deep learning based models have gained success in various fields in recent years, many recommendation systems have also started to take advantage of deep learning techniques. However, while deep learning based recommendation systems have achieved high recommendation performance, their lack of interpretability may reduce users' trust and satisfaction and limit wide adoption in the real world. As a result, striking a balance between high accuracy and interpretability, or even obtaining both at the same time, has become a popular issue in recommendation systems research. In this thesis, we predict and recommend the funds that customers will purchase in the next month, while simultaneously providing explanations. To achieve this goal, we leverage a knowledge graph structure and use deep learning techniques to embed customer and fund features into a unified latent space. We fully utilize structural knowledge that cannot be learned by traditional deep learning models and obtain personalized recommendations and explanations. Moreover, we extend the explanations to more complex ones by changing the training procedure of the model, and propose a measure to rate the customized explanations that considers both their strength and their uniqueness. Finally, we argue that the knowledge graph based structure can be extended to other applications and propose some possible special recommendations accordingly. Evaluating on a dataset of mutual fund transaction records, we verify the effectiveness of our model in providing precise recommendations and also test the assumption that our model utilizes the structural knowledge well. Last but not least, we conduct case studies of explanations to demonstrate the effectiveness of our model in providing usual explanations, complex explanations, and other special recommendations.
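To make the "unified latent space plus knowledge-graph structure" idea in the abstract above concrete, here is a toy sketch. It is not the thesis's deep model; all entity names, relations, and embeddings are made up for illustration. Customers, funds, and fund attributes share one embedding space, funds are ranked by dot-product affinity, and the best-aligned linked attribute is surfaced as a template-style explanation.

```python
import numpy as np

DIM = 4
rng = np.random.default_rng(seed=0)

# Toy, randomly initialized embeddings in one shared latent space.
# In a knowledge-graph setting these would be learned jointly from
# customer-fund transactions and entity relations.
emb = {name: rng.normal(size=DIM)
       for name in ["customer_A", "fund_X", "fund_Y",
                    "attr_high_risk", "attr_tech_sector"]}

# Hypothetical relations linking funds to attribute entities.
fund_attrs = {
    "fund_X": ["attr_high_risk", "attr_tech_sector"],
    "fund_Y": ["attr_high_risk"],
}


def affinity(a: str, b: str) -> float:
    """Dot-product similarity between two entities in the shared space."""
    return float(emb[a] @ emb[b])


def recommend_with_explanation(customer: str) -> str:
    """Rank funds by affinity and explain the top fund via its
    best-aligned linked attribute (a simple template explanation)."""
    top_fund = max(fund_attrs, key=lambda f: affinity(customer, f))
    reason = max(fund_attrs[top_fund], key=lambda a: affinity(customer, a))
    return f"Recommend {top_fund} to {customer} because of its link to {reason}"


print(recommend_with_explanation("customer_A"))
```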
Palesi, Luciano Alessandro Ipsaro. "Human Centered Big Data Analytics for Smart Applications." Doctoral thesis, 2022. http://hdl.handle.net/2158/1282883.
Book chapters on the topic "Explainable recommendation systems"
Zong, Xiaoning, Yong Liu, Yonghui Xu, Yixin Zhang, Zhiqi Shen, Yonghua Yang, and Lizhen Cui. "SAER: Sentiment-Opinion Alignment Explainable Recommendation." In Database Systems for Advanced Applications, 315–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00126-0_24.
Ma, Boxuan, Tianyuan Yang, and Baofeng Ren. "A Survey on Explainable Course Recommendation Systems." In Distributed, Ambient and Pervasive Interactions, 273–87. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60012-8_17.
Wang, Huiying, Yue Kou, Derong Shen, and Tiezheng Nie. "An Explainable Recommendation Method Based on Multi-timeslice Graph Embedding." In Web Information Systems and Applications, 84–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60029-7_8.
Yin, Ziyu, Yue Kou, Guangqi Wang, Derong Shen, and Tiezheng Nie. "Explainable Recommendation via Neural Rating Regression and Fine-Grained Sentiment Perception." In Web Information Systems and Applications, 580–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87571-8_50.
Zhang, Luhao, Ruiyu Fang, Tianchi Yang, Maodi Hu, Tao Li, Chuan Shi, and Dong Wang. "A Joint Framework for Explainable Recommendation with Knowledge Reasoning and Graph Representation." In Database Systems for Advanced Applications, 351–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00129-1_30.
Valdiviezo-Diaz, Priscila. "Pattern Detection in e-Commerce Using Clustering Techniques to Explainable Products Recommendation." In Lecture Notes in Networks and Systems, 700–713. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-66329-1_45.
Elazab, Fatma, Alia El Bolock, Cornelia Herbert, and Slim Abdennadher. "XReC: Towards a Generic Module-Based Framework for Explainable Recommendation Based on Character." In Highlights in Practical Applications of Agents, Multi-Agent Systems, and Social Good. The PAAMS Collection, 17–27. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85710-3_2.
Girija, J., and Sree Pragna Machupalli. "Innovative Precision Medicine: An Explainable AI-Driven Biomarker-Guided Recommendation System with Multilayer FeedForward Neural Network Model." In Lecture Notes in Networks and Systems, 438–47. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-66410-6_35.
Elazab, Fatma, Alia El Bolock, Cornelia Herbert, and Slim Abdennadher. "Multi-modal Explainable Music Recommendation Based on the Relations Between Character and Music Listening Behavior." In Highlights in Practical Applications of Agents, Multi-Agent Systems, and Cognitive Mimetics. The PAAMS Collection, 92–103. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37593-4_8.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Explainable recommendation systems"
Turgut, Özlem, İbrahim Kök, and Suat Özdemir. "AgroXAI: Explainable AI-Driven Crop Recommendation System for Agriculture 4.0." In 2024 IEEE International Conference on Big Data (BigData), 7208–17. IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825771.
Kumar, Surendra, and Mohit Kumar. "Enhancing Agricultural Decision-Making through an Explainable AI-Based Crop Recommendation System." In 2024 International Conference on Signal Processing and Advance Research in Computing (SPARC), 1–6. IEEE, 2024. https://doi.org/10.1109/sparc61891.2024.10829064.
Praseptiawan, Mugi, M. Fikri Damar Muchtarom, Nabila Muthia Putri, Ahmad Naim Che Pee, Mohd Hafiz Zakaria, and Meida Cahyo Untoro. "Mooc Course Recommendation System Model with Explainable AI (XAI) Using Content Based Filtering Method." In 2024 11th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), 144–47. IEEE, 2024. https://doi.org/10.1109/eecsi63442.2024.10776491.
Praseptiawan, Mugi, Nabila Muthia Putri, M. Fikri Damar Muchtarom, Mohd Hafiz Zakaria, and Ahmad Naim Che Pee. "Application of Collaborative Filtering and Explainable AI Methods in Recommendation System Modeling to Predict MOOC Course Preferences." In 2024 2nd International Symposium on Information Technology and Digital Innovation (ISITDI), 228–33. IEEE, 2024. https://doi.org/10.1109/isitdi62380.2024.10797073.
Vijayaraghavan, Sairamvinay, and Prasant Mohapatra. "Stability of Explainable Recommendation." In RecSys '23: Seventeenth ACM Conference on Recommender Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3604915.3608853.
Tsukuda, Kosetsu, and Masataka Goto. "Explainable Recommendation for Repeat Consumption." In RecSys '20: Fourteenth ACM Conference on Recommender Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3383313.3412230.
Shulman, Eyal, and Lior Wolf. "Meta Decision Trees for Explainable Recommendation Systems." In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375627.3375876.
Pesovski, Ivica, Ana Madevska Bogdanova, and Vladimir Trajkovik. "Systematic Review of the published Explainable Educational Recommendation Systems." In 2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET). IEEE, 2022. http://dx.doi.org/10.1109/ithet56107.2022.10032029.
Hou, Min, Le Wu, Enhong Chen, Zhi Li, Vincent W. Zheng, and Qi Liu. "Explainable Fashion Recommendation: A Semantic Attribute Region Guided Approach." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/650.
Zarzour, Hafed, Mohammad Alsmirat, and Yaser Jararweh. "Using Deep Learning for Positive Reviews Prediction in Explainable Recommendation Systems." In 2022 13th International Conference on Information and Communication Systems (ICICS). IEEE, 2022. http://dx.doi.org/10.1109/icics55353.2022.9811151.