Journal articles on the topic "Bandit algorithm"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles for your research on the topic "Bandit algorithm."
Next to every work in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these details are available in the work's metadata.
Browse journal articles from a wide range of disciplines and build an accurate bibliography.
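As a rough illustration of the style-aware formatting described above, the sketch below renders one entry from this list (Ciucanu et al.) in Chicago and APA author-date form. This is not the site's actual implementation; the data model and helper names are assumptions, and many edge cases (single authors, issue numbers, access dates) are deliberately ignored.

```python
# Minimal sketch of style-aware citation formatting (hypothetical helpers,
# not the generator actually used by this site).

def format_authors(authors, style):
    """Join (last, first) author pairs according to the citation style."""
    if style == "APA":
        # APA: every author inverted with initials, ampersand before the last.
        inv = [f"{last}, {' '.join(n[0] + '.' for n in first.split())}"
               for last, first in authors]
        return ", ".join(inv[:-1]) + ", & " + inv[-1]
    # Chicago: first author inverted, the rest in natural order, serial comma.
    head = f"{authors[0][0]}, {authors[0][1]}"
    rest = [f"{first} {last}" for last, first in authors[1:]]
    return ", ".join([head] + rest[:-1]) + ", and " + rest[-1]

def format_citation(entry, style="Chicago"):
    """Render a journal-article entry as a single reference string."""
    a = format_authors(entry["authors"], style)
    if style == "APA":
        return (f'{a} ({entry["year"]}). {entry["title"]}. '
                f'{entry["journal"]}, {entry["volume"]}, {entry["pages"]}. '
                f'{entry["doi"]}')
    return (f'{a}. "{entry["title"]}." {entry["journal"]} '
            f'{entry["volume"]} ({entry["year"]}): {entry["pages"]}. '
            f'{entry["doi"]}')

# Data taken from the first work in the list below.
entry = {
    "authors": [("Ciucanu", "Radu"), ("Lafourcade", "Pascal"),
                ("Marcadet", "Gael"), ("Soare", "Marta")],
    "title": ("SAMBA: A Generic Framework for Secure Federated "
              "Multi-Armed Bandits"),
    "journal": "Journal of Artificial Intelligence Research",
    "volume": 73, "year": 2022, "pages": "737–65",
    "doi": "http://dx.doi.org/10.1613/jair.1.13163",
}

print(format_citation(entry, "Chicago"))
print(format_citation(entry, "APA"))
```

The style switch only changes author ordering and punctuation; the underlying metadata (title, journal, volume, pages, DOI) is shared, which is what lets one record be emitted in any requested style.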
Ciucanu, Radu, Pascal Lafourcade, Gael Marcadet, and Marta Soare. "SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits." Journal of Artificial Intelligence Research 73 (February 23, 2022): 737–65. http://dx.doi.org/10.1613/jair.1.13163.
Zhou, Huozhi, Lingda Wang, Lav Varshney, and Ee-Peng Lim. "A Near-Optimal Change-Detection Based Algorithm for Piecewise-Stationary Combinatorial Semi-Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6933–40. http://dx.doi.org/10.1609/aaai.v34i04.6176.
Azizi, Javad, Branislav Kveton, Mohammad Ghavamzadeh, and Sumeet Katariya. "Meta-Learning for Simple Regret Minimization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6709–17. http://dx.doi.org/10.1609/aaai.v37i6.25823.
Kuroki, Yuko, Liyuan Xu, Atsushi Miyauchi, Junya Honda, and Masashi Sugiyama. "Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback." Neural Computation 32, no. 9 (September 2020): 1733–73. http://dx.doi.org/10.1162/neco_a_01299.
Li, Youxuan. "Improvement of the recommendation system based on the multi-armed bandit algorithm." Applied and Computational Engineering 36, no. 1 (January 22, 2024): 237–41. http://dx.doi.org/10.54254/2755-2721/36/20230453.
Liu, Zizhuo. "Investigation of progress and application related to Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 155–59. http://dx.doi.org/10.54254/2755-2721/37/20230496.
Agarwal, Mridul, Vaneet Aggarwal, Abhishek Kumar Umrawal, and Chris Quinn. "DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6557–65. http://dx.doi.org/10.1609/aaai.v35i8.16812.
Xue, Bo, Ji Cheng, Fei Liu, Yimu Wang, and Qingfu Zhang. "Multiobjective Lipschitz Bandits under Lexicographic Ordering." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16238–46. http://dx.doi.org/10.1609/aaai.v38i15.29558.
Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.
Nobari, Sadegh. "DBA: Dynamic Multi-Armed Bandit Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9869–70. http://dx.doi.org/10.1609/aaai.v33i01.33019869.
Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (January 22, 2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.
Niño-Mora, José. "A Fast-Pivoting Algorithm for Whittle's Restless Bandit Index." Mathematics 8, no. 12 (December 15, 2020): 2226. http://dx.doi.org/10.3390/math8122226.
Lamberton, Damien, and Gilles Pagès. "A penalized bandit algorithm." Electronic Journal of Probability 13 (2008): 341–73. http://dx.doi.org/10.1214/ejp.v13-489.
Cheung, Wang Chi, David Simchi-Levi, and Ruihao Zhu. "Hedging the Drift: Learning to Optimize Under Nonstationarity." Management Science 68, no. 3 (March 2022): 1696–713. http://dx.doi.org/10.1287/mnsc.2021.4024.
Chen, Panyangjie. "Investigation of selection and application of Multi-Armed Bandit algorithms in recommendation system." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 185–90. http://dx.doi.org/10.54254/2755-2721/34/20230323.
Fourati, Fares, Christopher John Quinn, Mohamed-Slim Alouini, and Vaneet Aggarwal. "Combinatorial Stochastic-Greedy Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12052–60. http://dx.doi.org/10.1609/aaai.v38i11.29093.
Oswal, Urvashi, Aniruddha Bhargava, and Robert Nowak. "Linear Bandits with Feature Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5331–38. http://dx.doi.org/10.1609/aaai.v34i04.5980.
Tang, Qiao, Hong Xie, Yunni Xia, Jia Lee, and Qingsheng Zhu. "Robust Contextual Bandits via Bootstrapping." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12182–89. http://dx.doi.org/10.1609/aaai.v35i13.17446.
Li, Wenjie, Qifan Song, Jean Honorio, and Guang Lin. "Federated X-armed Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13628–36. http://dx.doi.org/10.1609/aaai.v38i12.29267.
Wang, Liangxu. "Investigation of frontier Multi-Armed Bandit algorithms and applications." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 179–84. http://dx.doi.org/10.54254/2755-2721/34/20230322.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Han, Qi, Li Zhu, and Fei Guo. "Forced Exploration in Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12270–77. http://dx.doi.org/10.1609/aaai.v38i11.29117.
Chen, Xijin, Kim May Lee, Sofia S. Villar, and David S. Robertson. "Some performance considerations when using multi-armed bandit algorithms in the presence of missing data." PLOS ONE 17, no. 9 (September 12, 2022): e0274272. http://dx.doi.org/10.1371/journal.pone.0274272.
Ene, Alina, Huy L. Nguyen, and Adrian Vladu. "Projection-Free Bandit Optimization with Privacy Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7322–30. http://dx.doi.org/10.1609/aaai.v35i8.16899.
Chen, Tianfeng. "Empirical performances comparison for ETC algorithm." Applied and Computational Engineering 13, no. 1 (October 23, 2023): 29–36. http://dx.doi.org/10.54254/2755-2721/13/20230705.
Zhu, Zhaowei, Jingxuan Zhu, Ji Liu, and Yang Liu. "Federated Bandit." Proceedings of the ACM on Measurement and Analysis of Computing Systems 5, no. 1 (February 18, 2021): 1–29. http://dx.doi.org/10.1145/3447380.
Rangi, Anshuka, Long Tran-Thanh, Haifeng Xu, and Massimo Franceschetti. "Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 8054–61. http://dx.doi.org/10.1609/aaai.v36i7.20777.
Amani, Sanae, and Christos Thrampoulidis. "Decentralized Multi-Agent Linear Bandits with Safety Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6627–35. http://dx.doi.org/10.1609/aaai.v35i8.16820.
Huang, Wen, Lu Zhang, and Xintao Wu. "Achieving Counterfactual Fairness for Causal Bandit." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6952–59. http://dx.doi.org/10.1609/aaai.v36i6.20653.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.
Zhao, Shanshan, Wenhai Cui, Bei Jiang, Linglong Kong, and Xiaodong Yan. "Responsible Bandit Learning via Privacy-Protected Mean-Volatility Utility." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21815–22. http://dx.doi.org/10.1609/aaai.v38i19.30182.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 193–99. http://dx.doi.org/10.1609/socs.v3i1.18221.
Li, Litao. "Exploring Multi-Armed Bandit algorithms: Performance analysis in dynamic environments." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 252–59. http://dx.doi.org/10.54254/2755-2721/34/20230338.
Oh, Min-hwan, and Garud Iyengar. "Multinomial Logit Contextual Bandits: Provable Optimality and Practicality." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9205–13. http://dx.doi.org/10.1609/aaai.v35i10.17111.
Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials." Life 12, no. 8 (August 21, 2022): 1277. http://dx.doi.org/10.3390/life12081277.
Shiyan, Dmitry. "One-armed bandit problem and the mirror descent algorithm." Mathematical Game Theory and Applications 15, no. 3 (February 2, 2024): 88–106. http://dx.doi.org/10.17076/mgta_2023_3_75.
Yu, Junpu. "Thompson ε-Greedy Algorithm: An Improvement to the Regret of Thompson Sampling and ε-Greedy on Multi-Armed Bandit Problems." Applied and Computational Engineering 8, no. 1 (August 1, 2023): 525–34. http://dx.doi.org/10.54254/2755-2721/8/20230264.
Garcelon, Evrard, Mohammad Ghavamzadeh, Alessandro Lazaric, and Matteo Pirotta. "Improved Algorithms for Conservative Exploration in Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3962–69. http://dx.doi.org/10.1609/aaai.v34i04.5812.
Kasy, Maximilian, and Anja Sautmann. "Adaptive Treatment Assignment in Experiments for Policy Choice." Econometrica 89, no. 1 (2021): 113–32. http://dx.doi.org/10.3982/ecta17527.
Ontanon, Santiago. "The Combinatorial Multi-Armed Bandit Problem and Its Application to Real-Time Strategy Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 1 (June 30, 2021): 58–64. http://dx.doi.org/10.1609/aiide.v9i1.12681.
Patil, Vishakha, Ganesh Ghalme, Vineet Nair, and Y. Narahari. "Achieving Fairness in the Stochastic Multi-Armed Bandit Problem." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5379–86. http://dx.doi.org/10.1609/aaai.v34i04.5986.
Wang, Zhenlin, and Jonathan Scarlett. "Max-Min Grouped Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8603–11. http://dx.doi.org/10.1609/aaai.v36i8.20838.
Sakakibara, Masaya, Akira Notsu, Seiki Ubukata, and Katsuhiro Honda. "Designation of Candidate Solutions in Differential Evolution Based on Bandit Algorithm and its Evaluation." Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 4 (July 20, 2019): 758–66. http://dx.doi.org/10.20965/jaciii.2019.p0758.
Kim, Gi-Soo, Jane P. Kim, and Hyun-Joon Yang. "Robust Tests in Online Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10016–24. http://dx.doi.org/10.1609/aaai.v36i9.21240.
Mansour, Yishay, Aleksandrs Slivkins, and Vasilis Syrgkanis. "Bayesian Incentive-Compatible Bandit Exploration." Operations Research 68, no. 4 (July 2020): 1132–61. http://dx.doi.org/10.1287/opre.2019.1949.
Ding, Wenkui, Tao Qin, Xu-Dong Zhang, and Tie-Yan Liu. "Multi-Armed Bandit with Budget Constraint and Variable Costs." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 232–38. http://dx.doi.org/10.1609/aaai.v27i1.8637.
Liu, Yizhi. "An investigation of progress related to stochastic stationary bandit algorithms." Applied and Computational Engineering 34, no. 1 (January 22, 2024): 197–201. http://dx.doi.org/10.54254/2755-2721/34/20230326.
Kaibel, Chris, and Torsten Biemann. "Rethinking the Gold Standard With Multi-armed Bandits: Machine Learning Allocation Algorithms for Experiments." Organizational Research Methods 24, no. 1 (June 11, 2019): 78–103. http://dx.doi.org/10.1177/1094428119854153.
Lupu, Andrei, Audrey Durand, and Doina Precup. "Leveraging Observations in Bandits: Between Risks and Benefits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6112–19. http://dx.doi.org/10.1609/aaai.v33i01.33016112.