Journal articles on the topic "Bandit Contextuel"
Format the source in APA, MLA, Chicago, Harvard, and other styles
Browse the top 50 journal articles for research on the topic "Bandit Contextuel".
Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse journal articles across a wide range of disciplines and compile your bibliography correctly.
Gisselbrecht, Thibault, Sylvain Lamprier, and Patrick Gallinari. "Collecte ciblée à partir de flux de données en ligne dans les médias sociaux. Une approche de bandit contextuel." Document numérique 19, no. 2-3 (December 30, 2016): 11–30. http://dx.doi.org/10.3166/dn.19.2-3.11-30.
Dimakopoulou, Maria, Zhengyuan Zhou, Susan Athey, and Guido Imbens. "Balanced Linear Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3445–53. http://dx.doi.org/10.1609/aaai.v33i01.33013445.
Tong, Ruoyi. "A survey of the application and technical improvement of the multi-armed bandit." Applied and Computational Engineering 77, no. 1 (July 16, 2024): 25–31. http://dx.doi.org/10.54254/2755-2721/77/20240631.
Yang, Luting, Jianyi Yang, and Shaolei Ren. "Contextual Bandits with Delayed Feedback and Semi-supervised Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15943–44. http://dx.doi.org/10.1609/aaai.v35i18.17968.
Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials." Life 12, no. 8 (August 21, 2022): 1277. http://dx.doi.org/10.3390/life12081277.
Li, Jialian, Chao Du, and Jun Zhu. "A Bayesian Approach for Subset Selection in Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8384–91. http://dx.doi.org/10.1609/aaai.v35i9.17019.
Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.
Atsidakou, Alexia, Constantine Caramanis, Evangelia Gergatsouli, Orestis Papadigenopoulos, and Christos Tzamos. "Contextual Pandora’s Box." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10944–52. http://dx.doi.org/10.1609/aaai.v38i10.28969.
Zhang, Qianqian. "Real-world Applications of Bandit Algorithms: Insights and Innovations." Transactions on Computer Science and Intelligent Systems Research 5 (August 12, 2024): 753–58. http://dx.doi.org/10.62051/ge4sk783.
Wang, Zhiyong, Xutong Liu, Shuai Li, and John C. S. Lui. "Efficient Explorative Key-Term Selection Strategies for Conversational Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10288–95. http://dx.doi.org/10.1609/aaai.v37i8.26225.
Bansal, Nipun, Manju Bala, and Kapil Sharma. "FuzzyBandit: An Autonomous Personalized Model Based on Contextual Multi Arm Bandits Using Explainable AI." Defence Science Journal 74, no. 4 (April 26, 2024): 496–504. http://dx.doi.org/10.14429/dsj.74.19330.
Tang, Qiao, Hong Xie, Yunni Xia, Jia Lee, and Qingsheng Zhu. "Robust Contextual Bandits via Bootstrapping." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12182–89. http://dx.doi.org/10.1609/aaai.v35i13.17446.
Wu, Jiazhen. "In-depth Exploration and Implementation of Multi-Armed Bandit Models Across Diverse Fields." Highlights in Science, Engineering and Technology 94 (April 26, 2024): 201–5. http://dx.doi.org/10.54097/d3ez0n61.
Wang, Kun. "Conservative Contextual Combinatorial Cascading Bandit." IEEE Access 9 (2021): 151434–43. http://dx.doi.org/10.1109/access.2021.3124416.
Elwood, Adam, Marco Leonardi, Ashraf Mohamed, and Alessandro Rozza. "Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models." Entropy 25, no. 2 (January 18, 2023): 188. http://dx.doi.org/10.3390/e25020188.
Baheri, Ali. "Multilevel Constrained Bandits: A Hierarchical Upper Confidence Bound Approach with Safety Guarantees." Mathematics 13, no. 1 (January 3, 2025): 149. https://doi.org/10.3390/math13010149.
Strong, Emily, Bernard Kleynhans, and Serdar Kadıoğlu. "MABWISER: Parallelizable Contextual Multi-armed Bandits." International Journal on Artificial Intelligence Tools 30, no. 04 (June 2021): 2150021. http://dx.doi.org/10.1142/s0218213021500214.
Lee, Kyungbok, Myunghee Cho Paik, Min-hwan Oh, and Gi-Soo Kim. "Mixed-Effects Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13409–17. http://dx.doi.org/10.1609/aaai.v38i12.29243.
Oh, Min-hwan, and Garud Iyengar. "Multinomial Logit Contextual Bandits: Provable Optimality and Practicality." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9205–13. http://dx.doi.org/10.1609/aaai.v35i10.17111.
Zhao, Yisen. "Enhancing conversational recommendation systems through the integration of KNN with ConLinUCB contextual bandits." Applied and Computational Engineering 68, no. 1 (June 6, 2024): 8–16. http://dx.doi.org/10.54254/2755-2721/68/20241388.
Chen, Qiufan. "A survey on contextual multi-armed bandits." Applied and Computational Engineering 53, no. 1 (March 28, 2024): 287–95. http://dx.doi.org/10.54254/2755-2721/53/20241593.
Mohaghegh Neyshabouri, Mohammadreza, Kaan Gokcesu, Hakan Gokcesu, Huseyin Ozkan, and Suleyman Serdar Kozat. "Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures." IEEE Transactions on Neural Networks and Learning Systems 30, no. 3 (March 2019): 923–37. http://dx.doi.org/10.1109/tnnls.2018.2854796.
Gu, Haoran, Yunni Xia, Hong Xie, Xiaoyu Shi, and Mingsheng Shang. "Robust and efficient algorithms for conversational contextual bandit." Information Sciences 657 (February 2024): 119993. http://dx.doi.org/10.1016/j.ins.2023.119993.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.
Li, Zhaoyu, and Qian Ai. "Managing Considerable Distributed Resources for Demand Response: A Resource Selection Strategy Based on Contextual Bandit." Electronics 12, no. 13 (June 23, 2023): 2783. http://dx.doi.org/10.3390/electronics12132783.
Huang, Wen, and Xintao Wu. "Robustly Improving Bandit Algorithms with Confounded and Selection Biased Offline Data: A Causal Approach." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20438–46. http://dx.doi.org/10.1609/aaai.v38i18.30027.
Spieker, Helge, and Arnaud Gotlieb. "Adaptive metamorphic testing with contextual bandits." Journal of Systems and Software 165 (July 2020): 110574. http://dx.doi.org/10.1016/j.jss.2020.110574.
Jagerman, Rolf, Ilya Markov, and Maarten De Rijke. "Safe Exploration for Optimizing Contextual Bandits." ACM Transactions on Information Systems 38, no. 3 (June 26, 2020): 1–23. http://dx.doi.org/10.1145/3385670.
Kakadiya, Ashutosh, Sriraam Natarajan, and Balaraman Ravindran. "Relational Boosted Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12123–30. http://dx.doi.org/10.1609/aaai.v35i13.17439.
Seifi, Farshad, and Seyed Taghi Akhavan Niaki. "Optimizing contextual bandit hyperparameters: A dynamic transfer learning-based framework." International Journal of Industrial Engineering Computations 15, no. 4 (2024): 951–64. http://dx.doi.org/10.5267/j.ijiec.2024.6.003.
Zhao, Yafei, and Long Yang. "Constrained contextual bandit algorithm for limited-budget recommendation system." Engineering Applications of Artificial Intelligence 128 (February 2024): 107558. http://dx.doi.org/10.1016/j.engappai.2023.107558.
Yang, Jianyi, and Shaolei Ren. "Robust Bandit Learning with Imperfect Context." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10594–602. http://dx.doi.org/10.1609/aaai.v35i12.17267.
Liu, Zizhuo. "Investigation of progress and application related to Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 155–59. http://dx.doi.org/10.54254/2755-2721/37/20230496.
Semenov, Alexander, Maciej Rysz, Gaurav Pandey, and Guanglin Xu. "Diversity in news recommendations using contextual bandits." Expert Systems with Applications 195 (June 2022): 116478. http://dx.doi.org/10.1016/j.eswa.2021.116478.
Sui, Guoxin, and Yong Yu. "Bayesian Contextual Bandits for Hyper Parameter Optimization." IEEE Access 8 (2020): 42971–79. http://dx.doi.org/10.1109/access.2020.2977129.
Tekin, Cem, and Mihaela van der Schaar. "Distributed Online Learning via Cooperative Contextual Bandits." IEEE Transactions on Signal Processing 63, no. 14 (July 2015): 3700–3714. http://dx.doi.org/10.1109/tsp.2015.2430837.
Qin, Yuzhen, Yingcong Li, Fabio Pasqualetti, Maryam Fazel, and Samet Oymak. "Stochastic Contextual Bandits with Long Horizon Rewards." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9525–33. http://dx.doi.org/10.1609/aaai.v37i8.26140.
Xu, Xiao, Fang Dong, Yanghua Li, Shaojian He, and Xin Li. "Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6518–25. http://dx.doi.org/10.1609/aaai.v34i04.6125.
Tekin, Cem, and Eralp Turgay. "Multi-objective Contextual Multi-armed Bandit With a Dominant Objective." IEEE Transactions on Signal Processing 66, no. 14 (July 15, 2018): 3799–3813. http://dx.doi.org/10.1109/tsp.2018.2841822.
Yoon, Gyugeun, and Joseph Y. J. Chow. "Contextual Bandit-Based Sequential Transit Route Design under Demand Uncertainty." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 5 (May 2020): 613–25. http://dx.doi.org/10.1177/0361198120917388.
Li, Litao. "Exploring Multi-Armed Bandit algorithms: Performance analysis in dynamic environments." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 252–59. http://dx.doi.org/10.54254/2755-2721/34/20230338.
Zhu, Tan, Guannan Liang, Chunjiang Zhu, Haining Li, and Jinbo Bi. "An Efficient Algorithm for Deep Stochastic Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11193–201. http://dx.doi.org/10.1609/aaai.v35i12.17335.
Martín H., José Antonio, and Ana M. Vargas. "Linear Bayes policy for learning in contextual-bandits." Expert Systems with Applications 40, no. 18 (December 2013): 7400–7406. http://dx.doi.org/10.1016/j.eswa.2013.07.041.
Raghavan, Manish, Aleksandrs Slivkins, Jennifer Wortman Vaughan, and Zhiwei Steven Wu. "Greedy Algorithm Almost Dominates in Smoothed Contextual Bandits." SIAM Journal on Computing 52, no. 2 (April 12, 2023): 487–524. http://dx.doi.org/10.1137/19m1247115.
Ayala-Romero, Jose A., Andres Garcia-Saavedra, and Xavier Costa-Perez. "Risk-Aware Continuous Control with Neural Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 20930–38. http://dx.doi.org/10.1609/aaai.v38i19.30083.
Pilani, Akshay, Kritagya Mathur, Himanshu Agrawal, Deeksha Chandola, Vinay Anand Tikkiwal, and Arun Kumar. "Contextual Bandit Approach-based Recommendation System for Personalized Web-based Services." Applied Artificial Intelligence 35, no. 7 (April 6, 2021): 489–504. http://dx.doi.org/10.1080/08839514.2021.1883855.
Li, Xinbin, Jiajia Liu, Lei Yan, Song Han, and Xinping Guan. "Relay Selection in Underwater Acoustic Cooperative Networks: A Contextual Bandit Approach." IEEE Communications Letters 21, no. 2 (February 2017): 382–85. http://dx.doi.org/10.1109/lcomm.2016.2625300.
Gisselbrecht, Thibault, Sylvain Lamprier, and Patrick Gallinari. "Dynamic Data Capture from Social Media Streams: A Contextual Bandit Approach." Proceedings of the International AAAI Conference on Web and Social Media 10, no. 1 (August 4, 2021): 131–40. http://dx.doi.org/10.1609/icwsm.v10i1.14734.