Journal articles on the topic "Bandit algorithm"
Format your sources in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for your research on the topic "Bandit algorithm".
Next to each source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read online the abstract of the work, if the relevant details are provided in its metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Ciucanu, Radu, Pascal Lafourcade, Gael Marcadet, and Marta Soare. "SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits." Journal of Artificial Intelligence Research 73 (February 23, 2022): 737–65. http://dx.doi.org/10.1613/jair.1.13163.
Zhou, Huozhi, Lingda Wang, Lav Varshney, and Ee-Peng Lim. "A Near-Optimal Change-Detection Based Algorithm for Piecewise-Stationary Combinatorial Semi-Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6933–40. http://dx.doi.org/10.1609/aaai.v34i04.6176.
Azizi, Javad, Branislav Kveton, Mohammad Ghavamzadeh, and Sumeet Katariya. "Meta-Learning for Simple Regret Minimization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6709–17. http://dx.doi.org/10.1609/aaai.v37i6.25823.
Kuroki, Yuko, Liyuan Xu, Atsushi Miyauchi, Junya Honda, and Masashi Sugiyama. "Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback." Neural Computation 32, no. 9 (September 2020): 1733–73. http://dx.doi.org/10.1162/neco_a_01299.
Li, Youxuan. "Improvement of the recommendation system based on the multi-armed bandit algorithm." Applied and Computational Engineering 36, no. 1 (January 22, 2024): 237–41. http://dx.doi.org/10.54254/2755-2721/36/20230453.
Liu, Zizhuo. "Investigation of progress and application related to Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 155–59. http://dx.doi.org/10.54254/2755-2721/37/20230496.
Agarwal, Mridul, Vaneet Aggarwal, Abhishek Kumar Umrawal, and Chris Quinn. "DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6557–65. http://dx.doi.org/10.1609/aaai.v35i8.16812.
Xue, Bo, Ji Cheng, Fei Liu, Yimu Wang, and Qingfu Zhang. "Multiobjective Lipschitz Bandits under Lexicographic Ordering." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16238–46. http://dx.doi.org/10.1609/aaai.v38i15.29558.
Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.
Nobari, Sadegh. "DBA: Dynamic Multi-Armed Bandit Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9869–70. http://dx.doi.org/10.1609/aaai.v33i01.33019869.
Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.
Niño-Mora, José. "A Fast-Pivoting Algorithm for Whittle’s Restless Bandit Index." Mathematics 8, no. 12 (December 15, 2020): 2226. http://dx.doi.org/10.3390/math8122226.
Lamberton, Damien, and Gilles Pagès. "A penalized bandit algorithm." Electronic Journal of Probability 13 (2008): 341–73. http://dx.doi.org/10.1214/ejp.v13-489.
Cheung, Wang Chi, David Simchi-Levi, and Ruihao Zhu. "Hedging the Drift: Learning to Optimize Under Nonstationarity." Management Science 68, no. 3 (March 2022): 1696–713. http://dx.doi.org/10.1287/mnsc.2021.4024.
Chen, Panyangjie. "Investigation of selection and application of Multi-Armed Bandit algorithms in recommendation system." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 185–90. http://dx.doi.org/10.54254/2755-2721/34/20230323.
Fourati, Fares, Christopher John Quinn, Mohamed-Slim Alouini, and Vaneet Aggarwal. "Combinatorial Stochastic-Greedy Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12052–60. http://dx.doi.org/10.1609/aaai.v38i11.29093.
Oswal, Urvashi, Aniruddha Bhargava, and Robert Nowak. "Linear Bandits with Feature Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5331–38. http://dx.doi.org/10.1609/aaai.v34i04.5980.
Tang, Qiao, Hong Xie, Yunni Xia, Jia Lee, and Qingsheng Zhu. "Robust Contextual Bandits via Bootstrapping." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12182–89. http://dx.doi.org/10.1609/aaai.v35i13.17446.
Li, Wenjie, Qifan Song, Jean Honorio, and Guang Lin. "Federated X-armed Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13628–36. http://dx.doi.org/10.1609/aaai.v38i12.29267.
Wang, Liangxu. "Investigation of frontier Multi-Armed Bandit algorithms and applications." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 179–84. http://dx.doi.org/10.54254/2755-2721/34/20230322.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Han, Qi, Li Zhu, and Fei Guo. "Forced Exploration in Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12270–77. http://dx.doi.org/10.1609/aaai.v38i11.29117.
Chen, Xijin, Kim May Lee, Sofia S. Villar, and David S. Robertson. "Some performance considerations when using multi-armed bandit algorithms in the presence of missing data." PLOS ONE 17, no. 9 (September 12, 2022): e0274272. http://dx.doi.org/10.1371/journal.pone.0274272.
Ene, Alina, Huy L. Nguyen, and Adrian Vladu. "Projection-Free Bandit Optimization with Privacy Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7322–30. http://dx.doi.org/10.1609/aaai.v35i8.16899.
Chen, Tianfeng. "Empirical performances comparison for ETC algorithm." Applied and Computational Engineering 13, no. 1 (October 23, 2023): 29–36. http://dx.doi.org/10.54254/2755-2721/13/20230705.
Zhu, Zhaowei, Jingxuan Zhu, Ji Liu, and Yang Liu. "Federated Bandit." Proceedings of the ACM on Measurement and Analysis of Computing Systems 5, no. 1 (February 18, 2021): 1–29. http://dx.doi.org/10.1145/3447380.
Rangi, Anshuka, Long Tran-Thanh, Haifeng Xu, and Massimo Franceschetti. "Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 8054–61. http://dx.doi.org/10.1609/aaai.v36i7.20777.
Amani, Sanae, and Christos Thrampoulidis. "Decentralized Multi-Agent Linear Bandits with Safety Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6627–35. http://dx.doi.org/10.1609/aaai.v35i8.16820.
Huang, Wen, Lu Zhang, and Xintao Wu. "Achieving Counterfactual Fairness for Causal Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6952–59. http://dx.doi.org/10.1609/aaai.v36i6.20653.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.
Zhao, Shanshan, Wenhai Cui, Bei Jiang, Linglong Kong, and Xiaodong Yan. "Responsible Bandit Learning via Privacy-Protected Mean-Volatility Utility." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21815–22. http://dx.doi.org/10.1609/aaai.v38i19.30182.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 193–99. http://dx.doi.org/10.1609/socs.v3i1.18221.
Li, Litao. "Exploring Multi-Armed Bandit algorithms: Performance analysis in dynamic environments." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 252–59. http://dx.doi.org/10.54254/2755-2721/34/20230338.
Oh, Min-hwan, and Garud Iyengar. "Multinomial Logit Contextual Bandits: Provable Optimality and Practicality." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9205–13. http://dx.doi.org/10.1609/aaai.v35i10.17111.
Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials." Life 12, no. 8 (August 21, 2022): 1277. http://dx.doi.org/10.3390/life12081277.
Shiyan, Dmitry. "One-armed bandit problem and the mirror descent algorithm." Mathematical Game Theory and Applications 15, no. 3 (February 2, 2024): 88–106. http://dx.doi.org/10.17076/mgta_2023_3_75.
Yu, Junpu. "Thompson ε-Greedy Algorithm: An Improvement to the Regret of Thompson Sampling and ε-Greedy on Multi-Armed Bandit Problems." Applied and Computational Engineering 8, no. 1 (August 1, 2023): 525–34. http://dx.doi.org/10.54254/2755-2721/8/20230264.
Garcelon, Evrard, Mohammad Ghavamzadeh, Alessandro Lazaric, and Matteo Pirotta. "Improved Algorithms for Conservative Exploration in Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3962–69. http://dx.doi.org/10.1609/aaai.v34i04.5812.
Kasy, Maximilian, and Anja Sautmann. "Adaptive Treatment Assignment in Experiments for Policy Choice." Econometrica 89, no. 1 (2021): 113–32. http://dx.doi.org/10.3982/ecta17527.
Ontanon, Santiago. "The Combinatorial Multi-Armed Bandit Problem and Its Application to Real-Time Strategy Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 1 (June 30, 2021): 58–64. http://dx.doi.org/10.1609/aiide.v9i1.12681.
Patil, Vishakha, Ganesh Ghalme, Vineet Nair, and Y. Narahari. "Achieving Fairness in the Stochastic Multi-Armed Bandit Problem." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5379–86. http://dx.doi.org/10.1609/aaai.v34i04.5986.
Wang, Zhenlin, and Jonathan Scarlett. "Max-Min Grouped Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8603–11. http://dx.doi.org/10.1609/aaai.v36i8.20838.
Sakakibara, Masaya, Akira Notsu, Seiki Ubukata, and Katsuhiro Honda. "Designation of Candidate Solutions in Differential Evolution Based on Bandit Algorithm and its Evaluation." Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 4 (July 20, 2019): 758–66. http://dx.doi.org/10.20965/jaciii.2019.p0758.
Kim, Gi-Soo, Jane P. Kim, and Hyun-Joon Yang. "Robust Tests in Online Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10016–24. http://dx.doi.org/10.1609/aaai.v36i9.21240.
Mansour, Yishay, Aleksandrs Slivkins, and Vasilis Syrgkanis. "Bayesian Incentive-Compatible Bandit Exploration." Operations Research 68, no. 4 (July 2020): 1132–61. http://dx.doi.org/10.1287/opre.2019.1949.
Ding, Wenkui, Tao Qin, Xu-Dong Zhang, and Tie-Yan Liu. "Multi-Armed Bandit with Budget Constraint and Variable Costs." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 232–38. http://dx.doi.org/10.1609/aaai.v27i1.8637.
Liu, Yizhi. "An investigation of progress related to stochastic stationary bandit algorithms." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 197–201. http://dx.doi.org/10.54254/2755-2721/34/20230326.
Kaibel, Chris, and Torsten Biemann. "Rethinking the Gold Standard With Multi-armed Bandits: Machine Learning Allocation Algorithms for Experiments." Organizational Research Methods 24, no. 1 (June 11, 2019): 78–103. http://dx.doi.org/10.1177/1094428119854153.
Lupu, Andrei, Audrey Durand, and Doina Precup. "Leveraging Observations in Bandits: Between Risks and Benefits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6112–19. http://dx.doi.org/10.1609/aaai.v33i01.33016112.