Journal articles on the topic "Algorithme de bandit"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 50 scholarly journal articles on the topic "Algorithme de bandit".
An "Add to bibliography" button appears next to each work in the list. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these are available in the work's metadata.
Browse journal articles from a wide range of disciplines and compile your bibliography correctly.
Ciucanu, Radu, Pascal Lafourcade, Gael Marcadet, and Marta Soare. "SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits". Journal of Artificial Intelligence Research 73 (February 23, 2022): 737–65. http://dx.doi.org/10.1613/jair.1.13163.
Azizi, Javad, Branislav Kveton, Mohammad Ghavamzadeh, and Sumeet Katariya. "Meta-Learning for Simple Regret Minimization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6709–17. http://dx.doi.org/10.1609/aaai.v37i6.25823.
Zhou, Huozhi, Lingda Wang, Lav Varshney, and Ee-Peng Lim. "A Near-Optimal Change-Detection Based Algorithm for Piecewise-Stationary Combinatorial Semi-Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6933–40. http://dx.doi.org/10.1609/aaai.v34i04.6176.
Li, Youxuan. "Improvement of the recommendation system based on the multi-armed bandit algorithm". Applied and Computational Engineering 36, no. 1 (January 22, 2024): 237–41. http://dx.doi.org/10.54254/2755-2721/36/20230453.
Kuroki, Yuko, Liyuan Xu, Atsushi Miyauchi, Junya Honda, and Masashi Sugiyama. "Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback". Neural Computation 32, no. 9 (September 2020): 1733–73. http://dx.doi.org/10.1162/neco_a_01299.
Niño-Mora, José. "A Fast-Pivoting Algorithm for Whittle’s Restless Bandit Index". Mathematics 8, no. 12 (December 15, 2020): 2226. http://dx.doi.org/10.3390/math8122226.
Oswal, Urvashi, Aniruddha Bhargava, and Robert Nowak. "Linear Bandits with Feature Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5331–38. http://dx.doi.org/10.1609/aaai.v34i04.5980.
Agarwal, Mridul, Vaneet Aggarwal, Abhishek Kumar Umrawal, and Chris Quinn. "DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6557–65. http://dx.doi.org/10.1609/aaai.v35i8.16812.
Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms". Applied and Computational Engineering 37, no. 1 (January 22, 2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.
Wan, Zongqi, Zhijie Zhang, Tongyang Li, Jialin Zhang, and Xiaoming Sun. "Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10087–94. http://dx.doi.org/10.1609/aaai.v37i8.26202.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Xue, Bo, Ji Cheng, Fei Liu, Yimu Wang, and Qingfu Zhang. "Multiobjective Lipschitz Bandits under Lexicographic Ordering". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16238–46. http://dx.doi.org/10.1609/aaai.v38i15.29558.
Liu, Zizhuo. "Investigation of progress and application related to Multi-Armed Bandit algorithms". Applied and Computational Engineering 37, no. 1 (January 22, 2024): 155–59. http://dx.doi.org/10.54254/2755-2721/37/20230496.
Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.
Dimakopoulou, Maria, Zhengyuan Zhou, Susan Athey, and Guido Imbens. "Balanced Linear Contextual Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3445–53. http://dx.doi.org/10.1609/aaai.v33i01.33013445.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret". Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 193–99. http://dx.doi.org/10.1609/socs.v3i1.18221.
Wang, Liangxu. "Investigation of frontier Multi-Armed Bandit algorithms and applications". Applied and Computational Engineering 34, no. 1 (January 22, 2024): 179–84. http://dx.doi.org/10.54254/2755-2721/34/20230322.
Amani, Sanae, and Christos Thrampoulidis. "Decentralized Multi-Agent Linear Bandits with Safety Constraints". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6627–35. http://dx.doi.org/10.1609/aaai.v35i8.16820.
Wang, Zhiyong, Xutong Liu, Shuai Li, and John C. S. Lui. "Efficient Explorative Key-Term Selection Strategies for Conversational Contextual Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10288–95. http://dx.doi.org/10.1609/aaai.v37i8.26225.
Wang, Zhenlin, and Jonathan Scarlett. "Max-Min Grouped Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8603–11. http://dx.doi.org/10.1609/aaai.v36i8.20838.
Li, Litao. "Exploring Multi-Armed Bandit algorithms: Performance analysis in dynamic environments". Applied and Computational Engineering 34, no. 1 (January 22, 2024): 252–59. http://dx.doi.org/10.54254/2755-2721/34/20230338.
Yang, Luting, Jianyi Yang, and Shaolei Ren. "Contextual Bandits with Delayed Feedback and Semi-supervised Learning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15943–44. http://dx.doi.org/10.1609/aaai.v35i18.17968.
Yu, Baosheng, Meng Fang, and Dacheng Tao. "Per-Round Knapsack-Constrained Linear Submodular Bandits". Neural Computation 28, no. 12 (December 2016): 2757–89. http://dx.doi.org/10.1162/neco_a_00887.
Roy Chaudhuri, Arghya, and Shivaram Kalyanakrishnan. "Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10085–92. http://dx.doi.org/10.1609/aaai.v34i06.6566.
Kaibel, Chris, and Torsten Biemann. "Rethinking the Gold Standard With Multi-armed Bandits: Machine Learning Allocation Algorithms for Experiments". Organizational Research Methods 24, no. 1 (June 11, 2019): 78–103. http://dx.doi.org/10.1177/1094428119854153.
Xi, Guangyu, Chao Tao, and Yuan Zhou. "Near-Optimal MNL Bandits Under Risk Criteria". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10397–404. http://dx.doi.org/10.1609/aaai.v35i12.17245.
Niño-Mora, José. "Restless bandits, partial conservation laws and indexability". Advances in Applied Probability 33, no. 1 (March 2001): 76–98. http://dx.doi.org/10.1017/s0001867800010648.
Cheung, Wang Chi, David Simchi-Levi, and Ruihao Zhu. "Hedging the Drift: Learning to Optimize Under Nonstationarity". Management Science 68, no. 3 (March 2022): 1696–713. http://dx.doi.org/10.1287/mnsc.2021.4024.
Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials". Life 12, no. 8 (August 21, 2022): 1277. http://dx.doi.org/10.3390/life12081277.
Chen, Panyangjie. "Investigation of selection and application of Multi-Armed Bandit algorithms in recommendation system". Applied and Computational Engineering 34, no. 1 (January 22, 2024): 185–90. http://dx.doi.org/10.54254/2755-2721/34/20230323.
Chen, Xijin, Kim May Lee, Sofia S. Villar, and David S. Robertson. "Some performance considerations when using multi-armed bandit algorithms in the presence of missing data". PLOS ONE 17, no. 9 (September 12, 2022): e0274272. http://dx.doi.org/10.1371/journal.pone.0274272.
Huang, Wen, Lu Zhang, and Xintao Wu. "Achieving Counterfactual Fairness for Causal Bandit". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6952–59. http://dx.doi.org/10.1609/aaai.v36i6.20653.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Yan, Xue, Yali Du, Binxin Ru, Jun Wang, Haifeng Zhang, and Xu Chen. "Learning to Identify Top Elo Ratings: A Dueling Bandits Approach". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8797–805. http://dx.doi.org/10.1609/aaai.v36i8.20860.
Ontañón, Santiago. "Combinatorial Multi-armed Bandits for Real-Time Strategy Games". Journal of Artificial Intelligence Research 58 (March 29, 2017): 665–702. http://dx.doi.org/10.1613/jair.5398.
Tang, Qiao, Hong Xie, Yunni Xia, Jia Lee, and Qingsheng Zhu. "Robust Contextual Bandits via Bootstrapping". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12182–89. http://dx.doi.org/10.1609/aaai.v35i13.17446.
Tornede, Alexander, Viktor Bengs, and Eyke Hüllermeier. "Machine Learning for Online Algorithm Selection under Censored Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10370–80. http://dx.doi.org/10.1609/aaai.v36i9.21279.
Fourati, Fares, Christopher John Quinn, Mohamed-Slim Alouini, and Vaneet Aggarwal. "Combinatorial Stochastic-Greedy Bandit". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12052–60. http://dx.doi.org/10.1609/aaai.v38i11.29093.
Barman, Siddharth, Arindam Khan, Arnab Maiti, and Ayush Sawarni. "Fairness and Welfare Quantification for Regret in Multi-Armed Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6762–69. http://dx.doi.org/10.1609/aaai.v37i6.25829.
Li, Wenjie, Qifan Song, Jean Honorio, and Guang Lin. "Federated X-armed Bandit". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13628–36. http://dx.doi.org/10.1609/aaai.v38i12.29267.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 570–76. http://dx.doi.org/10.1609/aaai.v26i1.8126.
Ene, Alina, Huy L. Nguyen, and Adrian Vladu. "Projection-Free Bandit Optimization with Privacy Guarantees". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7322–30. http://dx.doi.org/10.1609/aaai.v35i8.16899.
Wu, Chenyue. "Comparative analysis of the KL-UCB and UCB algorithms: Delving into complexity and performance". Applied and Computational Engineering 53, no. 1 (March 28, 2024): 39–47. http://dx.doi.org/10.54254/2755-2721/53/20241221.
Herlihy, Christine, and John P. Dickerson. "Networked Restless Bandits with Positive Externalities". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11997–2004. http://dx.doi.org/10.1609/aaai.v37i10.26415.
Nobari, Sadegh. "DBA: Dynamic Multi-Armed Bandit Algorithm". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9869–70. http://dx.doi.org/10.1609/aaai.v33i01.33019869.
Mintz, Yonatan, Anil Aswani, Philip Kaminsky, Elena Flowers, and Yoshimi Fukuoka. "Nonstationary Bandits with Habituation and Recovery Dynamics". Operations Research 68, no. 5 (September 2020): 1493–516. http://dx.doi.org/10.1287/opre.2019.1918.
Han, Qi, Li Zhu, and Fei Guo. "Forced Exploration in Bandit Problems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12270–77. http://dx.doi.org/10.1609/aaai.v38i11.29117.
Lupu, Andrei, Audrey Durand, and Doina Precup. "Leveraging Observations in Bandits: Between Risks and Benefits". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6112–19. http://dx.doi.org/10.1609/aaai.v33i01.33016112.
Ontanon, Santiago. "The Combinatorial Multi-Armed Bandit Problem and Its Application to Real-Time Strategy Games". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 1 (June 30, 2021): 58–64. http://dx.doi.org/10.1609/aiide.v9i1.12681.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.