Ready-made bibliography on the topic "Offline Contextual Bandit"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Offline Contextual Bandit".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, if the relevant parameters are available in the metadata.
Journal articles on the topic "Offline Contextual Bandit"
Huang, Wen, and Xintao Wu. "Robustly Improving Bandit Algorithms with Confounded and Selection Biased Offline Data: A Causal Approach." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20438–46. http://dx.doi.org/10.1609/aaai.v38i18.30027.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.
Degroote, Hans, Patrick De Causmaecker, Bernd Bischl, and Lars Kotthoff. "A Regression-Based Methodology for Online Algorithm Selection." Proceedings of the International Symposium on Combinatorial Search 9, no. 1 (September 1, 2021): 37–45. http://dx.doi.org/10.1609/socs.v9i1.18458.
Li, Zhao, Junshuai Song, Zehong Hu, Zhen Wang, and Jun Gao. "Constrained Dual-Level Bandit for Personalized Impression Regulation in Online Ranking Systems." ACM Transactions on Knowledge Discovery from Data 16, no. 2 (July 21, 2021): 1–23. http://dx.doi.org/10.1145/3461340.
Vera, Alberto, Siddhartha Banerjee, and Itai Gurvich. "Online Allocation and Pricing: Constant Regret via Bellman Inequalities." Operations Research 69, no. 3 (May 2021): 821–40. http://dx.doi.org/10.1287/opre.2020.2061.
Ayle, Morgane, Jimmy Tekli, Julia El-Zini, Boulos El-Asmar, and Mariette Awad. "BAR — A Reinforcement Learning Agent for Bounding-Box Automated Refinement." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2561–68. http://dx.doi.org/10.1609/aaai.v34i03.5639.
Simchi-Levi, David, and Yunzong Xu. "Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits Under Realizability." Mathematics of Operations Research, December 9, 2021. http://dx.doi.org/10.1287/moor.2021.1193.
Soemers, Dennis, Tim Brys, Kurt Driessens, Mark Winands, and Ann Nowé. "Adapting to Concept Drift in Credit Card Transaction Data Streams Using Contextual Bandits and Decision Trees." Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 27, 2018). http://dx.doi.org/10.1609/aaai.v32i1.11411.
Cao, Junyu, and Wei Sun. "Tiered Assortment: Optimization and Online Learning." Management Science, October 4, 2023. http://dx.doi.org/10.1287/mnsc.2023.4940.
Zeng, Yingyan, Xiaoyu Chen, and Ran Jin. "Ensemble Active Learning by Contextual Bandits for AI Incubation in Manufacturing." ACM Transactions on Intelligent Systems and Technology, October 25, 2023. http://dx.doi.org/10.1145/3627821.
Doctoral dissertations on the topic "Offline Contextual Bandit"
Sakhi, Otmane. "Offline Contextual Bandit : Theory and Large Scale Applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG011.
This thesis presents contributions to the problem of learning from logged interactions using the offline contextual bandit framework. We are interested in two related topics: (1) offline policy learning with performance certificates, and (2) fast and efficient policy learning applied to large-scale, real-world recommendation. For (1), we first leverage results from the distributionally robust optimisation framework to construct asymptotic, variance-sensitive bounds on policies' performance. These bounds lead to new, more practical learning objectives thanks to their composite nature and straightforward calibration. We then analyse the problem from the PAC-Bayesian perspective and provide tighter, non-asymptotic bounds on the performance of policies. Our results motivate new strategies that offer performance certificates before the policies are deployed online. The newly derived strategies rely on composite learning objectives that do not require additional tuning. For (2), we first propose a hierarchical Bayesian model that combines different signals to efficiently estimate the quality of recommendations. We provide the computational tools needed to scale inference to real-world problems, and demonstrate empirically the benefits of the approach in multiple scenarios. We then address the question of accelerating common policy optimisation approaches, focusing in particular on recommendation problems with catalogues of millions of items. We derive optimisation routines, based on new gradient approximations, computed in logarithmic time with respect to the catalogue size. Our approach improves on common, linear-time gradient computations, yielding fast optimisation with no loss in the quality of the learned policies.
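For readers new to the framework discussed in this abstract, the sketch below illustrates the basic ingredients of offline contextual bandit learning: logged contexts, actions, propensities, and rewards, together with an inverse propensity scoring (IPS) estimate of a candidate policy's value. It is a minimal, illustrative sketch on synthetic data; the function names (ips_value, softmax_policy), the uniform logging policy, and the random-search learning loop are our own assumptions and are not taken from the thesis or any other work listed here.

```python
# Minimal sketch of offline contextual-bandit evaluation and learning via
# inverse propensity scoring (IPS). All names and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5000, 5, 3          # logged rounds, context dimension, actions

# Synthetic logged data: contexts, actions from a uniform logging policy,
# the propensity of each logged action, and the observed rewards.
X = rng.normal(size=(n, d))
logging_propensity = np.full(n, 1.0 / k)
A = rng.integers(0, k, size=n)
true_theta = rng.normal(size=(d, k))
R = (X @ true_theta)[np.arange(n), A] + rng.normal(scale=0.1, size=n)

def ips_value(policy_probs, actions, rewards, propensities):
    """IPS estimate of a target policy's value from logged bandit feedback."""
    w = policy_probs[np.arange(len(actions)), actions] / propensities
    return np.mean(w * rewards)

def softmax_policy(theta, contexts, temp=1.0):
    """Stochastic target policy: softmax over linear scores."""
    scores = contexts @ theta / temp
    scores -= scores.max(axis=1, keepdims=True)
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)

# Crude policy learning: random search over linear policies, keeping the one
# with the best IPS estimate (a stand-in for gradient-based optimisation).
best_theta, best_value = None, -np.inf
for _ in range(200):
    theta = rng.normal(size=(d, k))
    value = ips_value(softmax_policy(theta, X), A, R, logging_propensity)
    if value > best_value:
        best_theta, best_value = theta, value

print(f"IPS value of learned policy: {best_value:.3f}")
print(f"IPS value of logging policy: "
      f"{ips_value(np.full((n, k), 1.0 / k), A, R, logging_propensity):.3f}")
```

The plain IPS objective used above is the kind of estimator that variance-sensitive and PAC-Bayesian bounds, such as those studied in the thesis, refine into learning objectives with performance certificates.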
Conference abstracts on the topic "Offline Contextual Bandit"
Li, Lihong, Wei Chu, John Langford, and Xuanhui Wang. "Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms." In the fourth ACM international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1935826.1935878.
Bouneffouf, Djallel, Srinivasan Parthasarathy, Horst Samulowitz, and Martin Wistuba. "Optimal Exploitation of Clustering and History Information in Multi-armed Bandit." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/279.
Degroote, Hans. "Online Algorithm Selection." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/746.
Januszewski, Piotr, Dominik Grzegorzek, and Paweł Czarnul. "Dataset Characteristics and Their Impact on Offline Policy Learning of Contextual Multi-Armed Bandits." In 16th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012311000003636.
Ameko, Mawulolo K., Miranda L. Beltzer, Lihua Cai, Mehdi Boukhechba, Bethany A. Teachman, and Laura E. Barnes. "Offline Contextual Multi-armed Bandits for Mobile Health Interventions: A Case Study on Emotion Regulation." In RecSys '20: Fourteenth ACM Conference on Recommender Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3383313.3412244.