Journal articles on the topic "Stochastic Multi-armed Bandit"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 50 best journal articles on the topic "Stochastic Multi-armed Bandit".
Next to every item in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever the relevant details are available in the metadata.
Browse journal articles on a wide variety of disciplines and compile an accurate bibliography.
Xiong, Guojun, and Jian Li. "Decentralized Stochastic Multi-Player Multi-Armed Walking Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10528–36. http://dx.doi.org/10.1609/aaai.v37i9.26251.
Ciucanu, Radu, Pascal Lafourcade, Gael Marcadet, and Marta Soare. "SAMBA: A Generic Framework for Secure Federated Multi-Armed Bandits". Journal of Artificial Intelligence Research 73 (February 23, 2022): 737–65. http://dx.doi.org/10.1613/jair.1.13163.
Wan, Zongqi, Zhijie Zhang, Tongyang Li, Jialin Zhang, and Xiaoming Sun. "Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10087–94. http://dx.doi.org/10.1609/aaai.v37i8.26202.
Lesage-Landry, Antoine, and Joshua A. Taylor. "The Multi-Armed Bandit With Stochastic Plays". IEEE Transactions on Automatic Control 63, no. 7 (July 2018): 2280–86. http://dx.doi.org/10.1109/tac.2017.2765501.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Dzhoha, A. S. "Sequential resource allocation in a stochastic environment: an overview and numerical experiments". Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics, no. 3 (2021): 13–25. http://dx.doi.org/10.17721/1812-5409.2021/3.1.
Juditsky, A., A. V. Nazin, A. B. Tsybakov, and N. Vayatis. "Gap-free Bounds for Stochastic Multi-Armed Bandit". IFAC Proceedings Volumes 41, no. 2 (2008): 11560–63. http://dx.doi.org/10.3182/20080706-5-kr-1001.01959.
Allesiardo, Robin, Raphaël Féraud, and Odalric-Ambrym Maillard. "The non-stationary stochastic multi-armed bandit problem". International Journal of Data Science and Analytics 3, no. 4 (March 30, 2017): 267–83. http://dx.doi.org/10.1007/s41060-017-0050-5.
Huo, Xiaoguang, and Feng Fu. "Risk-aware multi-armed bandit problem with application to portfolio selection". Royal Society Open Science 4, no. 11 (November 2017): 171377. http://dx.doi.org/10.1098/rsos.171377.
Xu, Lily, Elizabeth Bondi, Fei Fang, Andrew Perrault, Kai Wang, and Milind Tambe. "Dual-Mandate Patrols: Multi-Armed Bandits for Green Security". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 17 (May 18, 2021): 14974–82. http://dx.doi.org/10.1609/aaai.v35i17.17757.
Patil, Vishakha, Ganesh Ghalme, Vineet Nair, and Y. Narahari. "Achieving Fairness in the Stochastic Multi-Armed Bandit Problem". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5379–86. http://dx.doi.org/10.1609/aaai.v34i04.5986.
Jain, Shweta, Satyanath Bhat, Ganesh Ghalme, Divya Padmanabhan, and Y. Narahari. "Mechanisms with learning for stochastic multi-armed bandit problems". Indian Journal of Pure and Applied Mathematics 47, no. 2 (June 2016): 229–72. http://dx.doi.org/10.1007/s13226-016-0186-3.
Cowan, Wesley, and Michael N. Katehakis. "MULTI-ARMED BANDITS UNDER GENERAL DEPRECIATION AND COMMITMENT". Probability in the Engineering and Informational Sciences 29, no. 1 (October 10, 2014): 51–76. http://dx.doi.org/10.1017/s0269964814000217.
Dunn, R. T., and K. D. Glazebrook. "The performance of index-based policies for bandit problems with stochastic machine availability". Advances in Applied Probability 33, no. 2 (June 2001): 365–90. http://dx.doi.org/10.1017/s0001867800010843.
Bubeck, Sébastien. "Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems". Foundations and Trends® in Machine Learning 5, no. 1 (2012): 1–122. http://dx.doi.org/10.1561/2200000024.
Zayas-Cabán, Gabriel, Stefanus Jasin, and Guihua Wang. "An asymptotically optimal heuristic for general nonstationary finite-horizon restless multi-armed, multi-action bandits". Advances in Applied Probability 51, no. 03 (September 2019): 745–72. http://dx.doi.org/10.1017/apr.2019.29.
Roy Chaudhuri, Arghya, and Shivaram Kalyanakrishnan. "Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10085–92. http://dx.doi.org/10.1609/aaai.v34i06.6566.
O'Flaherty, Brendan. "Some results on two-armed bandits when both projects vary". Journal of Applied Probability 26, no. 3 (September 1989): 655–58. http://dx.doi.org/10.2307/3214424.
O'Flaherty, Brendan. "Some results on two-armed bandits when both projects vary". Journal of Applied Probability 26, no. 03 (September 1989): 655–58. http://dx.doi.org/10.1017/s0021900200038262.
Wang, Siwei, Haoyun Wang, and Longbo Huang. "Adaptive Algorithms for Multi-armed Bandit with Composite and Anonymous Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10210–17. http://dx.doi.org/10.1609/aaai.v35i11.17224.
Zuo, Jinhang, Xiaoxi Zhang, and Carlee Joe-Wong. "Observe Before Play: Multi-Armed Bandit with Pre-Observations". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 7023–30. http://dx.doi.org/10.1609/aaai.v34i04.6187.
Namba, Hiroyuki. "Non-stationary Stochastic Multi-armed Bandit Problems with External Information on Stationarity". Transactions of the Japanese Society for Artificial Intelligence 36, no. 3 (May 1, 2021): D-K84_1–11. http://dx.doi.org/10.1527/tjsai.36-3_d-k84.
Auer, Peter, and Ronald Ortner. "UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem". Periodica Mathematica Hungarica 61, no. 1-2 (September 2010): 55–65. http://dx.doi.org/10.1007/s10998-010-3055-6.
Niño-Mora, José. "Multi-Gear Bandits, Partial Conservation Laws, and Indexability". Mathematics 10, no. 14 (July 18, 2022): 2497. http://dx.doi.org/10.3390/math10142497.
Ou, Mingdong, Nan Li, Cheng Yang, Shenghuo Zhu, and Rong Jin. "Semi-Parametric Sampling for Stochastic Bandits with Many Arms". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7933–40. http://dx.doi.org/10.1609/aaai.v33i01.33017933.
Feldman, Zohar, and Carmel Domshlak. "On MABs and Separation of Concerns in Monte-Carlo Planning for MDPs". Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 10, 2014): 120–27. http://dx.doi.org/10.1609/icaps.v24i1.13631.
Ottens, Brammert, Christos Dimitrakakis, and Boi Faltings. "DUCT: An Upper Confidence Bound Approach to Distributed Constraint Optimization Problems". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 528–34. http://dx.doi.org/10.1609/aaai.v26i1.8129.
Guan, Ziwei, Kaiyi Ji, Donald J. Bucci Jr., Timothy Y. Hu, Joseph Palombo, Michael Liston, and Yingbin Liang. "Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4036–43. http://dx.doi.org/10.1609/aaai.v34i04.5821.
Cowan, Wesley, and Michael N. Katehakis. "EXPLORATION–EXPLOITATION POLICIES WITH ALMOST SURE, ARBITRARILY SLOW GROWING ASYMPTOTIC REGRET". Probability in the Engineering and Informational Sciences 34, no. 3 (January 26, 2019): 406–28. http://dx.doi.org/10.1017/s0269964818000529.
Niño-Mora, José. "Markovian Restless Bandits and Index Policies: A Review". Mathematics 11, no. 7 (March 28, 2023): 1639. http://dx.doi.org/10.3390/math11071639.
Papagiannis, Tasos, Georgios Alexandridis, and Andreas Stafylopatis. "Pruning Stochastic Game Trees Using Neural Networks for Reduced Action Space Approximation". Mathematics 10, no. 9 (May 1, 2022): 1509. http://dx.doi.org/10.3390/math10091509.
György, A., and L. Kocsis. "Efficient Multi-Start Strategies for Local Search Algorithms". Journal of Artificial Intelligence Research 41 (July 29, 2011): 407–44. http://dx.doi.org/10.1613/jair.3313.
Trovo, Francesco, Stefano Paladino, Marcello Restelli, and Nicola Gatti. "Sliding-Window Thompson Sampling for Non-Stationary Settings". Journal of Artificial Intelligence Research 68 (May 26, 2020): 311–64. http://dx.doi.org/10.1613/jair.1.11407.
Killian, Jackson A., Arpita Biswas, Lily Xu, Shresth Verma, Vineet Nair, Aparna Taneja, Aparna Hegde, et al. "Robust Planning over Restless Groups: Engagement Interventions for a Large-Scale Maternal Telehealth Program". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14295–303. http://dx.doi.org/10.1609/aaai.v37i12.26672.
WATANABE, Ryo, Junpei KOMIYAMA, Atsuyoshi NAKAMURA, and Mineichi KUDO. "KL-UCB-Based Policy for Budgeted Multi-Armed Bandits with Stochastic Action Costs". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E100.A, no. 11 (2017): 2470–86. http://dx.doi.org/10.1587/transfun.e100.a.2470.
Youssef, Marie-Josepha, Venugopal V. Veeravalli, Joumana Farah, Charbel Abdel Nour, and Catherine Douillard. "Resource Allocation in NOMA-Based Self-Organizing Networks Using Stochastic Multi-Armed Bandits". IEEE Transactions on Communications 69, no. 9 (September 2021): 6003–17. http://dx.doi.org/10.1109/tcomm.2021.3092767.
Sledge, Isaac, and José Príncipe. "An Analysis of the Value of Information When Exploring Stochastic, Discrete Multi-Armed Bandits". Entropy 20, no. 3 (February 28, 2018): 155. http://dx.doi.org/10.3390/e20030155.
Pokhrel, Shiva Raj, and Michel Mandjes. "Internet of Drones: Improving Multipath TCP over WiFi with Federated Multi-Armed Bandits for Limitless Connectivity". Drones 7, no. 1 (December 31, 2022): 30. http://dx.doi.org/10.3390/drones7010030.
Painter, Michael, Bruno Lacerda, and Nick Hawes. "Convex Hull Monte-Carlo Tree-Search". Proceedings of the International Conference on Automated Planning and Scheduling 30 (June 1, 2020): 217–25. http://dx.doi.org/10.1609/icaps.v30i1.6664.
Gasnikov, A. V., E. A. Krymova, A. A. Lagunovskaya, I. N. Usmanova, and F. A. Fedorenko. "Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case". Automation and Remote Control 78, no. 2 (February 2017): 224–34. http://dx.doi.org/10.1134/s0005117917020035.
Ciucanu, Radu, Pascal Lafourcade, Marius Lombard-Platet, and Marta Soare. "Secure protocols for cumulative reward maximization in stochastic multi-armed bandits". Journal of Computer Security, February 2, 2022, 1–27. http://dx.doi.org/10.3233/jcs-210051.
Amakasu, Takashi, Nicolas Chauvet, Guillaume Bachelier, Serge Huant, Ryoichi Horisaki, and Makoto Naruse. "Conflict-free collective stochastic decision making by orbital angular momentum of photons through quantum interference". Scientific Reports 11, no. 1 (October 26, 2021). http://dx.doi.org/10.1038/s41598-021-00493-2.
Immorlica, Nicole, Karthik Abinav Sankararaman, Robert Schapire, and Aleksandrs Slivkins. "Adversarial Bandits with Knapsacks". Journal of the ACM, August 18, 2022. http://dx.doi.org/10.1145/3557045.
Zhou, Datong, and Claire Tomlin. "Budget-Constrained Multi-Armed Bandits With Multiple Plays". Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 29, 2018). http://dx.doi.org/10.1609/aaai.v32i1.11629.
Fernandez-Tapia, Joaquin, and Charles Monzani. "Stochastic Multi-Armed Bandit Algorithm for Optimal Budget Allocation in Programmatic Advertising". SSRN Electronic Journal, 2015. http://dx.doi.org/10.2139/ssrn.2600473.
Gopalan, Aditya, Prashanth L. A., Michael Fu, and Steve Marcus. "Weighted Bandits or: How Bandits Learn Distorted Values That Are Not Expected". Proceedings of the AAAI Conference on Artificial Intelligence 31, no. 1 (February 13, 2017). http://dx.doi.org/10.1609/aaai.v31i1.10922.
Mandel, Travis, Yun-En Liu, Emma Brunskill, and Zoran Popović. "The Queue Method: Handling Delay, Heuristics, Prior Data, and Evaluation in Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (February 21, 2015). http://dx.doi.org/10.1609/aaai.v29i1.9604.
Liu, Fang, Swapna Buccapatnam, and Ness Shroff. "Information Directed Sampling for Stochastic Bandits With Graph Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 29, 2018). http://dx.doi.org/10.1609/aaai.v32i1.11751.
Hashima, Sherief, Mostafa M. Fouda, Sadman Sakib, Zubair Md Fadlullah, Kohei Hatano, Ehab Mahmoud Mohamed, and Xuemin Shen. "Energy-Aware Hybrid RF-VLC Multi-Band Selection in D2D Communication: A Stochastic Multi-Armed Bandit Approach". IEEE Internet of Things Journal, 2022, 1. http://dx.doi.org/10.1109/jiot.2022.3162135.
Li, Bo, and Chi Ho Yeung. "Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits". Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 6 (June 1, 2023). http://dx.doi.org/10.1063/5.0120076.