Journal articles on the topic 'Algorithme de bandit'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Algorithme de bandit.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Ciucanu, Radu, Pascal Lafourcade, Gael Marcadet, and Marta Soare. "SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits." Journal of Artificial Intelligence Research 73 (February 23, 2022): 737–65. http://dx.doi.org/10.1613/jair.1.13163.
Azizi, Javad, Branislav Kveton, Mohammad Ghavamzadeh, and Sumeet Katariya. "Meta-Learning for Simple Regret Minimization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6709–17. http://dx.doi.org/10.1609/aaai.v37i6.25823.
Zhou, Huozhi, Lingda Wang, Lav Varshney, and Ee-Peng Lim. "A Near-Optimal Change-Detection Based Algorithm for Piecewise-Stationary Combinatorial Semi-Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 4 (April 3, 2020): 6933–40. http://dx.doi.org/10.1609/aaai.v34i04.6176.
Li, Youxuan. "Improvement of the recommendation system based on the multi-armed bandit algorithm." Applied and Computational Engineering 36, no. 1 (January 22, 2024): 237–41. http://dx.doi.org/10.54254/2755-2721/36/20230453.
Kuroki, Yuko, Liyuan Xu, Atsushi Miyauchi, Junya Honda, and Masashi Sugiyama. "Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback." Neural Computation 32, no. 9 (September 2020): 1733–73. http://dx.doi.org/10.1162/neco_a_01299.
Niño-Mora, José. "A Fast-Pivoting Algorithm for Whittle’s Restless Bandit Index." Mathematics 8, no. 12 (December 15, 2020): 2226. http://dx.doi.org/10.3390/math8122226.
Oswal, Urvashi, Aniruddha Bhargava, and Robert Nowak. "Linear Bandits with Feature Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 4 (April 3, 2020): 5331–38. http://dx.doi.org/10.1609/aaai.v34i04.5980.
Agarwal, Mridul, Vaneet Aggarwal, Abhishek Kumar Umrawal, and Chris Quinn. "DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6557–65. http://dx.doi.org/10.1609/aaai.v35i8.16812.
Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.
Wan, Zongqi, Zhijie Zhang, Tongyang Li, Jialin Zhang, and Xiaoming Sun. "Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10087–94. http://dx.doi.org/10.1609/aaai.v37i8.26202.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Xue, Bo, Ji Cheng, Fei Liu, Yimu Wang, and Qingfu Zhang. "Multiobjective Lipschitz Bandits under Lexicographic Ordering." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16238–46. http://dx.doi.org/10.1609/aaai.v38i15.29558.
Liu, Zizhuo. "Investigation of progress and application related to Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 155–59. http://dx.doi.org/10.54254/2755-2721/37/20230496.
Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.
Dimakopoulou, Maria, Zhengyuan Zhou, Susan Athey, and Guido Imbens. "Balanced Linear Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3445–53. http://dx.doi.org/10.1609/aaai.v33i01.33013445.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 193–99. http://dx.doi.org/10.1609/socs.v3i1.18221.
Wang, Liangxu. "Investigation of frontier Multi-Armed Bandit algorithms and applications." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 179–84. http://dx.doi.org/10.54254/2755-2721/34/20230322.
Amani, Sanae, and Christos Thrampoulidis. "Decentralized Multi-Agent Linear Bandits with Safety Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6627–35. http://dx.doi.org/10.1609/aaai.v35i8.16820.
Wang, Zhiyong, Xutong Liu, Shuai Li, and John C. S. Lui. "Efficient Explorative Key-Term Selection Strategies for Conversational Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10288–95. http://dx.doi.org/10.1609/aaai.v37i8.26225.
Wang, Zhenlin, and Jonathan Scarlett. "Max-Min Grouped Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8603–11. http://dx.doi.org/10.1609/aaai.v36i8.20838.
Li, Litao. "Exploring Multi-Armed Bandit algorithms: Performance analysis in dynamic environments." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 252–59. http://dx.doi.org/10.54254/2755-2721/34/20230338.
Yang, Luting, Jianyi Yang, and Shaolei Ren. "Contextual Bandits with Delayed Feedback and Semi-supervised Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15943–44. http://dx.doi.org/10.1609/aaai.v35i18.17968.
Yu, Baosheng, Meng Fang, and Dacheng Tao. "Per-Round Knapsack-Constrained Linear Submodular Bandits." Neural Computation 28, no. 12 (December 2016): 2757–89. http://dx.doi.org/10.1162/neco_a_00887.
Roy Chaudhuri, Arghya, and Shivaram Kalyanakrishnan. "Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 6 (April 3, 2020): 10085–92. http://dx.doi.org/10.1609/aaai.v34i06.6566.
Kaibel, Chris, and Torsten Biemann. "Rethinking the Gold Standard With Multi-armed Bandits: Machine Learning Allocation Algorithms for Experiments." Organizational Research Methods 24, no. 1 (June 11, 2019): 78–103. http://dx.doi.org/10.1177/1094428119854153.
Xi, Guangyu, Chao Tao, and Yuan Zhou. "Near-Optimal MNL Bandits Under Risk Criteria." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10397–404. http://dx.doi.org/10.1609/aaai.v35i12.17245.
Niño-Mora, José. "Restless bandits, partial conservation laws and indexability." Advances in Applied Probability 33, no. 1 (March 2001): 76–98. http://dx.doi.org/10.1017/s0001867800010648.
Cheung, Wang Chi, David Simchi-Levi, and Ruihao Zhu. "Hedging the Drift: Learning to Optimize Under Nonstationarity." Management Science 68, no. 3 (March 2022): 1696–713. http://dx.doi.org/10.1287/mnsc.2021.4024.
Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials." Life 12, no. 8 (August 21, 2022): 1277. http://dx.doi.org/10.3390/life12081277.
Chen, Panyangjie. "Investigation of selection and application of Multi-Armed Bandit algorithms in recommendation system." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 185–90. http://dx.doi.org/10.54254/2755-2721/34/20230323.
Chen, Xijin, Kim May Lee, Sofia S. Villar, and David S. Robertson. "Some performance considerations when using multi-armed bandit algorithms in the presence of missing data." PLOS ONE 17, no. 9 (September 12, 2022): e0274272. http://dx.doi.org/10.1371/journal.pone.0274272.
Huang, Wen, Lu Zhang, and Xintao Wu. "Achieving Counterfactual Fairness for Causal Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6952–59. http://dx.doi.org/10.1609/aaai.v36i6.20653.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Yan, Xue, Yali Du, Binxin Ru, Jun Wang, Haifeng Zhang, and Xu Chen. "Learning to Identify Top Elo Ratings: A Dueling Bandits Approach." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8797–805. http://dx.doi.org/10.1609/aaai.v36i8.20860.
Ontañón, Santiago. "Combinatorial Multi-armed Bandits for Real-Time Strategy Games." Journal of Artificial Intelligence Research 58 (March 29, 2017): 665–702. http://dx.doi.org/10.1613/jair.5398.
Tang, Qiao, Hong Xie, Yunni Xia, Jia Lee, and Qingsheng Zhu. "Robust Contextual Bandits via Bootstrapping." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12182–89. http://dx.doi.org/10.1609/aaai.v35i13.17446.
Tornede, Alexander, Viktor Bengs, and Eyke Hüllermeier. "Machine Learning for Online Algorithm Selection under Censored Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10370–80. http://dx.doi.org/10.1609/aaai.v36i9.21279.
Fourati, Fares, Christopher John Quinn, Mohamed-Slim Alouini, and Vaneet Aggarwal. "Combinatorial Stochastic-Greedy Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12052–60. http://dx.doi.org/10.1609/aaai.v38i11.29093.
Barman, Siddharth, Arindam Khan, Arnab Maiti, and Ayush Sawarni. "Fairness and Welfare Quantification for Regret in Multi-Armed Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6762–69. http://dx.doi.org/10.1609/aaai.v37i6.25829.
Li, Wenjie, Qifan Song, Jean Honorio, and Guang Lin. "Federated X-armed Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13628–36. http://dx.doi.org/10.1609/aaai.v38i12.29267.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 570–76. http://dx.doi.org/10.1609/aaai.v26i1.8126.
Ene, Alina, Huy L. Nguyen, and Adrian Vladu. "Projection-Free Bandit Optimization with Privacy Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7322–30. http://dx.doi.org/10.1609/aaai.v35i8.16899.
Wu, Chenyue. "Comparative analysis of the KL-UCB and UCB algorithms: Delving into complexity and performance." Applied and Computational Engineering 53, no. 1 (March 28, 2024): 39–47. http://dx.doi.org/10.54254/2755-2721/53/20241221.
Herlihy, Christine, and John P. Dickerson. "Networked Restless Bandits with Positive Externalities." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11997–12004. http://dx.doi.org/10.1609/aaai.v37i10.26415.
Nobari, Sadegh. "DBA: Dynamic Multi-Armed Bandit Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9869–70. http://dx.doi.org/10.1609/aaai.v33i01.33019869.
Mintz, Yonatan, Anil Aswani, Philip Kaminsky, Elena Flowers, and Yoshimi Fukuoka. "Nonstationary Bandits with Habituation and Recovery Dynamics." Operations Research 68, no. 5 (September 2020): 1493–516. http://dx.doi.org/10.1287/opre.2019.1918.
Han, Qi, Li Zhu, and Fei Guo. "Forced Exploration in Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12270–77. http://dx.doi.org/10.1609/aaai.v38i11.29117.
Lupu, Andrei, Audrey Durand, and Doina Precup. "Leveraging Observations in Bandits: Between Risks and Benefits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6112–19. http://dx.doi.org/10.1609/aaai.v33i01.33016112.
Ontañón, Santiago. "The Combinatorial Multi-Armed Bandit Problem and Its Application to Real-Time Strategy Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 1 (June 30, 2021): 58–64. http://dx.doi.org/10.1609/aiide.v9i1.12681.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.