Journal articles on the topic "Algorithme de bandit"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles.
Consult the top 50 journal articles for research on the topic "Algorithme de bandit".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference to the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.
Browse journal articles from many fields of specialization and compile your bibliography correctly.
Ciucanu, Radu, Pascal Lafourcade, Gael Marcadet, and Marta Soare. "SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits". Journal of Artificial Intelligence Research 73 (February 23, 2022): 737–65. http://dx.doi.org/10.1613/jair.1.13163.
Azizi, Javad, Branislav Kveton, Mohammad Ghavamzadeh, and Sumeet Katariya. "Meta-Learning for Simple Regret Minimization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6709–17. http://dx.doi.org/10.1609/aaai.v37i6.25823.
Zhou, Huozhi, Lingda Wang, Lav Varshney, and Ee-Peng Lim. "A Near-Optimal Change-Detection Based Algorithm for Piecewise-Stationary Combinatorial Semi-Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6933–40. http://dx.doi.org/10.1609/aaai.v34i04.6176.
Li, Youxuan. "Improvement of the recommendation system based on the multi-armed bandit algorithm". Applied and Computational Engineering 36, no. 1 (January 22, 2024): 237–41. http://dx.doi.org/10.54254/2755-2721/36/20230453.
Kuroki, Yuko, Liyuan Xu, Atsushi Miyauchi, Junya Honda, and Masashi Sugiyama. "Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback". Neural Computation 32, no. 9 (September 2020): 1733–73. http://dx.doi.org/10.1162/neco_a_01299.
Niño-Mora, José. "A Fast-Pivoting Algorithm for Whittle's Restless Bandit Index". Mathematics 8, no. 12 (December 15, 2020): 2226. http://dx.doi.org/10.3390/math8122226.
Oswal, Urvashi, Aniruddha Bhargava, and Robert Nowak. "Linear Bandits with Feature Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5331–38. http://dx.doi.org/10.1609/aaai.v34i04.5980.
Agarwal, Mridul, Vaneet Aggarwal, Abhishek Kumar Umrawal, and Chris Quinn. "DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6557–65. http://dx.doi.org/10.1609/aaai.v35i8.16812.
Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms". Applied and Computational Engineering 37, no. 1 (January 22, 2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.
Wan, Zongqi, Zhijie Zhang, Tongyang Li, Jialin Zhang, and Xiaoming Sun. "Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10087–94. http://dx.doi.org/10.1609/aaai.v37i8.26202.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Xue, Bo, Ji Cheng, Fei Liu, Yimu Wang, and Qingfu Zhang. "Multiobjective Lipschitz Bandits under Lexicographic Ordering". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16238–46. http://dx.doi.org/10.1609/aaai.v38i15.29558.
Liu, Zizhuo. "Investigation of progress and application related to Multi-Armed Bandit algorithms". Applied and Computational Engineering 37, no. 1 (January 22, 2024): 155–59. http://dx.doi.org/10.54254/2755-2721/37/20230496.
Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.
Dimakopoulou, Maria, Zhengyuan Zhou, Susan Athey, and Guido Imbens. "Balanced Linear Contextual Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3445–53. http://dx.doi.org/10.1609/aaai.v33i01.33013445.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret". Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 193–99. http://dx.doi.org/10.1609/socs.v3i1.18221.
Wang, Liangxu. "Investigation of frontier Multi-Armed Bandit algorithms and applications". Applied and Computational Engineering 34, no. 1 (January 22, 2024): 179–84. http://dx.doi.org/10.54254/2755-2721/34/20230322.
Amani, Sanae, and Christos Thrampoulidis. "Decentralized Multi-Agent Linear Bandits with Safety Constraints". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6627–35. http://dx.doi.org/10.1609/aaai.v35i8.16820.
Wang, Zhiyong, Xutong Liu, Shuai Li, and John C. S. Lui. "Efficient Explorative Key-Term Selection Strategies for Conversational Contextual Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10288–95. http://dx.doi.org/10.1609/aaai.v37i8.26225.
Wang, Zhenlin, and Jonathan Scarlett. "Max-Min Grouped Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8603–11. http://dx.doi.org/10.1609/aaai.v36i8.20838.
Li, Litao. "Exploring Multi-Armed Bandit algorithms: Performance analysis in dynamic environments". Applied and Computational Engineering 34, no. 1 (January 22, 2024): 252–59. http://dx.doi.org/10.54254/2755-2721/34/20230338.
Yang, Luting, Jianyi Yang, and Shaolei Ren. "Contextual Bandits with Delayed Feedback and Semi-supervised Learning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15943–44. http://dx.doi.org/10.1609/aaai.v35i18.17968.
Yu, Baosheng, Meng Fang, and Dacheng Tao. "Per-Round Knapsack-Constrained Linear Submodular Bandits". Neural Computation 28, no. 12 (December 2016): 2757–89. http://dx.doi.org/10.1162/neco_a_00887.
Roy Chaudhuri, Arghya, and Shivaram Kalyanakrishnan. "Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10085–92. http://dx.doi.org/10.1609/aaai.v34i06.6566.
Kaibel, Chris, and Torsten Biemann. "Rethinking the Gold Standard With Multi-armed Bandits: Machine Learning Allocation Algorithms for Experiments". Organizational Research Methods 24, no. 1 (June 11, 2019): 78–103. http://dx.doi.org/10.1177/1094428119854153.
Xi, Guangyu, Chao Tao, and Yuan Zhou. "Near-Optimal MNL Bandits Under Risk Criteria". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10397–404. http://dx.doi.org/10.1609/aaai.v35i12.17245.
Niño-Mora, José. "Restless bandits, partial conservation laws and indexability". Advances in Applied Probability 33, no. 1 (March 2001): 76–98. http://dx.doi.org/10.1017/s0001867800010648.
Cheung, Wang Chi, David Simchi-Levi, and Ruihao Zhu. "Hedging the Drift: Learning to Optimize Under Nonstationarity". Management Science 68, no. 3 (March 2022): 1696–713. http://dx.doi.org/10.1287/mnsc.2021.4024.
Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials". Life 12, no. 8 (August 21, 2022): 1277. http://dx.doi.org/10.3390/life12081277.
Chen, Panyangjie. "Investigation of selection and application of Multi-Armed Bandit algorithms in recommendation system". Applied and Computational Engineering 34, no. 1 (January 22, 2024): 185–90. http://dx.doi.org/10.54254/2755-2721/34/20230323.
Chen, Xijin, Kim May Lee, Sofia S. Villar, and David S. Robertson. "Some performance considerations when using multi-armed bandit algorithms in the presence of missing data". PLOS ONE 17, no. 9 (September 12, 2022): e0274272. http://dx.doi.org/10.1371/journal.pone.0274272.
Huang, Wen, Lu Zhang, and Xintao Wu. "Achieving Counterfactual Fairness for Causal Bandit". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6952–59. http://dx.doi.org/10.1609/aaai.v36i6.20653.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Yan, Xue, Yali Du, Binxin Ru, Jun Wang, Haifeng Zhang, and Xu Chen. "Learning to Identify Top Elo Ratings: A Dueling Bandits Approach". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8797–805. http://dx.doi.org/10.1609/aaai.v36i8.20860.
Ontañón, Santiago. "Combinatorial Multi-armed Bandits for Real-Time Strategy Games". Journal of Artificial Intelligence Research 58 (March 29, 2017): 665–702. http://dx.doi.org/10.1613/jair.5398.
Tang, Qiao, Hong Xie, Yunni Xia, Jia Lee, and Qingsheng Zhu. "Robust Contextual Bandits via Bootstrapping". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12182–89. http://dx.doi.org/10.1609/aaai.v35i13.17446.
Tornede, Alexander, Viktor Bengs, and Eyke Hüllermeier. "Machine Learning for Online Algorithm Selection under Censored Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10370–80. http://dx.doi.org/10.1609/aaai.v36i9.21279.
Fourati, Fares, Christopher John Quinn, Mohamed-Slim Alouini, and Vaneet Aggarwal. "Combinatorial Stochastic-Greedy Bandit". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12052–60. http://dx.doi.org/10.1609/aaai.v38i11.29093.
Barman, Siddharth, Arindam Khan, Arnab Maiti, and Ayush Sawarni. "Fairness and Welfare Quantification for Regret in Multi-Armed Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6762–69. http://dx.doi.org/10.1609/aaai.v37i6.25829.
Li, Wenjie, Qifan Song, Jean Honorio, and Guang Lin. "Federated X-armed Bandit". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13628–36. http://dx.doi.org/10.1609/aaai.v38i12.29267.
Tolpin, David, and Solomon Shimony. "MCTS Based on Simple Regret". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 570–76. http://dx.doi.org/10.1609/aaai.v26i1.8126.
Ene, Alina, Huy L. Nguyen, and Adrian Vladu. "Projection-Free Bandit Optimization with Privacy Guarantees". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7322–30. http://dx.doi.org/10.1609/aaai.v35i8.16899.
Wu, Chenyue. "Comparative analysis of the KL-UCB and UCB algorithms: Delving into complexity and performance". Applied and Computational Engineering 53, no. 1 (March 28, 2024): 39–47. http://dx.doi.org/10.54254/2755-2721/53/20241221.
Herlihy, Christine, and John P. Dickerson. "Networked Restless Bandits with Positive Externalities". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11997–2004. http://dx.doi.org/10.1609/aaai.v37i10.26415.
Nobari, Sadegh. "DBA: Dynamic Multi-Armed Bandit Algorithm". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9869–70. http://dx.doi.org/10.1609/aaai.v33i01.33019869.
Mintz, Yonatan, Anil Aswani, Philip Kaminsky, Elena Flowers, and Yoshimi Fukuoka. "Nonstationary Bandits with Habituation and Recovery Dynamics". Operations Research 68, no. 5 (September 2020): 1493–516. http://dx.doi.org/10.1287/opre.2019.1918.
Han, Qi, Li Zhu, and Fei Guo. "Forced Exploration in Bandit Problems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12270–77. http://dx.doi.org/10.1609/aaai.v38i11.29117.
Lupu, Andrei, Audrey Durand, and Doina Precup. "Leveraging Observations in Bandits: Between Risks and Benefits". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6112–19. http://dx.doi.org/10.1609/aaai.v33i01.33016112.
Ontanon, Santiago. "The Combinatorial Multi-Armed Bandit Problem and Its Application to Real-Time Strategy Games". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 1 (June 30, 2021): 58–64. http://dx.doi.org/10.1609/aiide.v9i1.12681.
Narita, Yusuke, Shota Yasui, and Kohei Yata. "Efficient Counterfactual Learning from Bandit Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4634–41. http://dx.doi.org/10.1609/aaai.v33i01.33014634.