Journal articles on the topic "Algorithme de bandit"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for your research on the topic "Algorithme de bandit".
Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read online the abstract of the work, if the corresponding parameters are present in the metadata.
Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.
Ciucanu, Radu, Pascal Lafourcade, Gael Marcadet, and Marta Soare. "SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits." Journal of Artificial Intelligence Research 73 (February 23, 2022): 737–65. http://dx.doi.org/10.1613/jair.1.13163.
Azizi, Javad, Branislav Kveton, Mohammad Ghavamzadeh, and Sumeet Katariya. "Meta-Learning for Simple Regret Minimization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6709–17. http://dx.doi.org/10.1609/aaai.v37i6.25823.
Zhou, Huozhi, Lingda Wang, Lav Varshney, and Ee-Peng Lim. "A Near-Optimal Change-Detection Based Algorithm for Piecewise-Stationary Combinatorial Semi-Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6933–40. http://dx.doi.org/10.1609/aaai.v34i04.6176.
Li, Youxuan. "Improvement of the recommendation system based on the multi-armed bandit algorithm." Applied and Computational Engineering 36, no. 1 (January 22, 2024): 237–41. http://dx.doi.org/10.54254/2755-2721/36/20230453.
Kuroki, Yuko, Liyuan Xu, Atsushi Miyauchi, Junya Honda, and Masashi Sugiyama. "Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback." Neural Computation 32, no. 9 (September 2020): 1733–73. http://dx.doi.org/10.1162/neco_a_01299.
Niño-Mora, José. "A Fast-Pivoting Algorithm for Whittle’s Restless Bandit Index." Mathematics 8, no. 12 (December 15, 2020): 2226. http://dx.doi.org/10.3390/math8122226.
Oswal, Urvashi, Aniruddha Bhargava, and Robert Nowak. "Linear Bandits with Feature Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5331–38. http://dx.doi.org/10.1609/aaai.v34i04.5980.
Agarwal, Mridul, Vaneet Aggarwal, Abhishek Kumar Umrawal, and Chris Quinn. "DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6557–65. http://dx.doi.org/10.1609/aaai.v35i8.16812.
Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.
Wan, Zongqi, Zhijie Zhang, Tongyang Li, Jialin Zhang, and Xiaoming Sun. "Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10087–94. http://dx.doi.org/10.1609/aaai.v37i8.26202.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Xue, Bo, Ji Cheng, Fei Liu, Yimu Wang, and Qingfu Zhang. "Multiobjective Lipschitz Bandits under Lexicographic Ordering." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16238–46. http://dx.doi.org/10.1609/aaai.v38i15.29558.
Liu, Zizhuo. "Investigation of progress and application related to Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 155–59. http://dx.doi.org/10.54254/2755-2721/37/20230496.
Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.
Dimakopoulou, Maria, Zhengyuan Zhou, Susan Athey, and Guido Imbens. "Balanced Linear Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3445–53. http://dx.doi.org/10.1609/aaai.v33i01.33013445.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 193–99. http://dx.doi.org/10.1609/socs.v3i1.18221.
Wang, Liangxu. "Investigation of frontier Multi-Armed Bandit algorithms and applications." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 179–84. http://dx.doi.org/10.54254/2755-2721/34/20230322.
Amani, Sanae, and Christos Thrampoulidis. "Decentralized Multi-Agent Linear Bandits with Safety Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6627–35. http://dx.doi.org/10.1609/aaai.v35i8.16820.
Wang, Zhiyong, Xutong Liu, Shuai Li, and John C. S. Lui. "Efficient Explorative Key-Term Selection Strategies for Conversational Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10288–95. http://dx.doi.org/10.1609/aaai.v37i8.26225.
Wang, Zhenlin, and Jonathan Scarlett. "Max-Min Grouped Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8603–11. http://dx.doi.org/10.1609/aaai.v36i8.20838.
Li, Litao. "Exploring Multi-Armed Bandit algorithms: Performance analysis in dynamic environments." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 252–59. http://dx.doi.org/10.54254/2755-2721/34/20230338.
Yang, Luting, Jianyi Yang, and Shaolei Ren. "Contextual Bandits with Delayed Feedback and Semi-supervised Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15943–44. http://dx.doi.org/10.1609/aaai.v35i18.17968.
Yu, Baosheng, Meng Fang, and Dacheng Tao. "Per-Round Knapsack-Constrained Linear Submodular Bandits." Neural Computation 28, no. 12 (December 2016): 2757–89. http://dx.doi.org/10.1162/neco_a_00887.
Roy Chaudhuri, Arghya, and Shivaram Kalyanakrishnan. "Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10085–92. http://dx.doi.org/10.1609/aaai.v34i06.6566.
Kaibel, Chris, and Torsten Biemann. "Rethinking the Gold Standard With Multi-armed Bandits: Machine Learning Allocation Algorithms for Experiments." Organizational Research Methods 24, no. 1 (June 11, 2019): 78–103. http://dx.doi.org/10.1177/1094428119854153.
Xi, Guangyu, Chao Tao, and Yuan Zhou. "Near-Optimal MNL Bandits Under Risk Criteria." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10397–404. http://dx.doi.org/10.1609/aaai.v35i12.17245.
Niño-Mora, José. "Restless bandits, partial conservation laws and indexability." Advances in Applied Probability 33, no. 1 (March 2001): 76–98. http://dx.doi.org/10.1017/s0001867800010648.
Cheung, Wang Chi, David Simchi-Levi, and Ruihao Zhu. "Hedging the Drift: Learning to Optimize Under Nonstationarity." Management Science 68, no. 3 (March 2022): 1696–713. http://dx.doi.org/10.1287/mnsc.2021.4024.
Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials." Life 12, no. 8 (August 21, 2022): 1277. http://dx.doi.org/10.3390/life12081277.
Chen, Panyangjie. "Investigation of selection and application of Multi-Armed Bandit algorithms in recommendation system." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 185–90. http://dx.doi.org/10.54254/2755-2721/34/20230323.
Chen, Xijin, Kim May Lee, Sofia S. Villar, and David S. Robertson. "Some performance considerations when using multi-armed bandit algorithms in the presence of missing data." PLOS ONE 17, no. 9 (September 12, 2022): e0274272. http://dx.doi.org/10.1371/journal.pone.0274272.
Huang, Wen, Lu Zhang, and Xintao Wu. "Achieving Counterfactual Fairness for Causal Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6952–59. http://dx.doi.org/10.1609/aaai.v36i6.20653.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Yan, Xue, Yali Du, Binxin Ru, Jun Wang, Haifeng Zhang, and Xu Chen. "Learning to Identify Top Elo Ratings: A Dueling Bandits Approach." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8797–805. http://dx.doi.org/10.1609/aaai.v36i8.20860.
Ontañón, Santiago. "Combinatorial Multi-armed Bandits for Real-Time Strategy Games." Journal of Artificial Intelligence Research 58 (March 29, 2017): 665–702. http://dx.doi.org/10.1613/jair.5398.
Tang, Qiao, Hong Xie, Yunni Xia, Jia Lee, and Qingsheng Zhu. "Robust Contextual Bandits via Bootstrapping." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12182–89. http://dx.doi.org/10.1609/aaai.v35i13.17446.
Tornede, Alexander, Viktor Bengs, and Eyke Hüllermeier. "Machine Learning for Online Algorithm Selection under Censored Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10370–80. http://dx.doi.org/10.1609/aaai.v36i9.21279.
Fourati, Fares, Christopher John Quinn, Mohamed-Slim Alouini, and Vaneet Aggarwal. "Combinatorial Stochastic-Greedy Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12052–60. http://dx.doi.org/10.1609/aaai.v38i11.29093.
Barman, Siddharth, Arindam Khan, Arnab Maiti, and Ayush Sawarni. "Fairness and Welfare Quantification for Regret in Multi-Armed Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6762–69. http://dx.doi.org/10.1609/aaai.v37i6.25829.
Li, Wenjie, Qifan Song, Jean Honorio, and Guang Lin. "Federated X-armed Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13628–36. http://dx.doi.org/10.1609/aaai.v38i12.29267.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 570–76. http://dx.doi.org/10.1609/aaai.v26i1.8126.
Ene, Alina, Huy L. Nguyen, and Adrian Vladu. "Projection-Free Bandit Optimization with Privacy Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7322–30. http://dx.doi.org/10.1609/aaai.v35i8.16899.
Wu, Chenyue. "Comparative analysis of the KL-UCB and UCB algorithms: Delving into complexity and performance." Applied and Computational Engineering 53, no. 1 (March 28, 2024): 39–47. http://dx.doi.org/10.54254/2755-2721/53/20241221.
Herlihy, Christine, and John P. Dickerson. "Networked Restless Bandits with Positive Externalities." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11997–2004. http://dx.doi.org/10.1609/aaai.v37i10.26415.
Nobari, Sadegh. "DBA: Dynamic Multi-Armed Bandit Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9869–70. http://dx.doi.org/10.1609/aaai.v33i01.33019869.
Mintz, Yonatan, Anil Aswani, Philip Kaminsky, Elena Flowers, and Yoshimi Fukuoka. "Nonstationary Bandits with Habituation and Recovery Dynamics." Operations Research 68, no. 5 (September 2020): 1493–516. http://dx.doi.org/10.1287/opre.2019.1918.
Han, Qi, Li Zhu, and Fei Guo. "Forced Exploration in Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12270–77. http://dx.doi.org/10.1609/aaai.v38i11.29117.
Lupu, Andrei, Audrey Durand, and Doina Precup. "Leveraging Observations in Bandits: Between Risks and Benefits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6112–19. http://dx.doi.org/10.1609/aaai.v33i01.33016112.
Ontanon, Santiago. "The Combinatorial Multi-Armed Bandit Problem and Its Application to Real-Time Strategy Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 1 (June 30, 2021): 58–64. http://dx.doi.org/10.1609/aiide.v9i1.12681.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.