Journal articles on the topic "Multiarmed Bandits"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Multiarmed Bandits".
Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.
Righter, Rhonda, and J. George Shanthikumar. "Independently Expiring Multiarmed Bandits". Probability in the Engineering and Informational Sciences 12, no. 4 (October 1998): 453–68. http://dx.doi.org/10.1017/s0269964800005325.
Gao, Xiujuan, Hao Liang, and Tong Wang. "A Common Value Experimentation with Multiarmed Bandits". Mathematical Problems in Engineering 2018 (July 30, 2018): 1–8. http://dx.doi.org/10.1155/2018/4791590.
Kalathil, Dileep, Naumaan Nayyar, and Rahul Jain. "Decentralized Learning for Multiplayer Multiarmed Bandits". IEEE Transactions on Information Theory 60, no. 4 (April 2014): 2331–45. http://dx.doi.org/10.1109/tit.2014.2302471.
Cesa-Bianchi, Nicolò. "MULTIARMED BANDITS IN THE WORST CASE". IFAC Proceedings Volumes 35, no. 1 (2002): 91–96. http://dx.doi.org/10.3182/20020721-6-es-1901.01001.
Bray, Robert L., Decio Coviello, Andrea Ichino, and Nicola Persico. "Multitasking, Multiarmed Bandits, and the Italian Judiciary". Manufacturing & Service Operations Management 18, no. 4 (October 2016): 545–58. http://dx.doi.org/10.1287/msom.2016.0586.
Denardo, Eric V., Haechurl Park, and Uriel G. Rothblum. "Risk-Sensitive and Risk-Neutral Multiarmed Bandits". Mathematics of Operations Research 32, no. 2 (May 2007): 374–94. http://dx.doi.org/10.1287/moor.1060.0240.
Weber, Richard. "On the Gittins Index for Multiarmed Bandits". Annals of Applied Probability 2, no. 4 (November 1992): 1024–33. http://dx.doi.org/10.1214/aoap/1177005588.
Drugan, Madalina M. "Covariance Matrix Adaptation for Multiobjective Multiarmed Bandits". IEEE Transactions on Neural Networks and Learning Systems 30, no. 8 (August 2019): 2493–502. http://dx.doi.org/10.1109/tnnls.2018.2885123.
Burnetas, Apostolos N., and Michael N. Katehakis. "ASYMPTOTIC BAYES ANALYSIS FOR THE FINITE-HORIZON ONE-ARMED-BANDIT PROBLEM". Probability in the Engineering and Informational Sciences 17, no. 1 (January 2003): 53–82. http://dx.doi.org/10.1017/s0269964803171045.
Nayyar, Naumaan, Dileep Kalathil, and Rahul Jain. "On Regret-Optimal Learning in Decentralized Multiplayer Multiarmed Bandits". IEEE Transactions on Control of Network Systems 5, no. 1 (March 2018): 597–606. http://dx.doi.org/10.1109/tcns.2016.2635380.
Reverdy, Paul B., Vaibhav Srivastava, and Naomi Ehrich Leonard. "Modeling Human Decision Making in Generalized Gaussian Multiarmed Bandits". Proceedings of the IEEE 102, no. 4 (April 2014): 544–71. http://dx.doi.org/10.1109/jproc.2014.2307024.
Krishnamurthy, Vikram, and Bo Wahlberg. "Partially Observed Markov Decision Process Multiarmed Bandits—Structural Results". Mathematics of Operations Research 34, no. 2 (May 2009): 287–302. http://dx.doi.org/10.1287/moor.1080.0371.
Camerlenghi, Federico, Bianca Dumitrascu, Federico Ferrari, Barbara E. Engelhardt, and Stefano Favaro. "Nonparametric Bayesian multiarmed bandits for single-cell experiment design". Annals of Applied Statistics 14, no. 4 (December 2020): 2003–19. http://dx.doi.org/10.1214/20-aoas1370.
Mintz, Yonatan, Anil Aswani, Philip Kaminsky, Elena Flowers, and Yoshimi Fukuoka. "Nonstationary Bandits with Habituation and Recovery Dynamics". Operations Research 68, no. 5 (September 2020): 1493–516. http://dx.doi.org/10.1287/opre.2019.1918.
Glazebrook, K. D., D. Ruiz-Hernandez, and C. Kirkbride. "Some indexable families of restless bandit problems". Advances in Applied Probability 38, no. 3 (September 2006): 643–72. http://dx.doi.org/10.1239/aap/1158684996.
Meshram, Rahul, D. Manjunath, and Aditya Gopalan. "On the Whittle Index for Restless Multiarmed Hidden Markov Bandits". IEEE Transactions on Automatic Control 63, no. 9 (September 2018): 3046–53. http://dx.doi.org/10.1109/tac.2018.2799521.
Caro, Felipe, and Onesun Steve Yoo. "INDEXABILITY OF BANDIT PROBLEMS WITH RESPONSE DELAYS". Probability in the Engineering and Informational Sciences 24, no. 3 (April 23, 2010): 349–74. http://dx.doi.org/10.1017/s0269964810000021.
Glazebrook, K. D., and R. Minty. "A Generalized Gittins Index for a Class of Multiarmed Bandits with General Resource Requirements". Mathematics of Operations Research 34, no. 1 (February 2009): 26–44. http://dx.doi.org/10.1287/moor.1080.0342.
Farias, Vivek F., and Ritesh Madan. "The Irrevocable Multiarmed Bandit Problem". Operations Research 59, no. 2 (April 2011): 383–99. http://dx.doi.org/10.1287/opre.1100.0891.
Auer, Peter, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. "The Nonstochastic Multiarmed Bandit Problem". SIAM Journal on Computing 32, no. 1 (January 2002): 48–77. http://dx.doi.org/10.1137/s0097539701398375.
Peköz, Erol A. "Some memoryless bandit policies". Journal of Applied Probability 40, no. 1 (March 2003): 250–56. http://dx.doi.org/10.1239/jap/1044476838.
Dayanik, Savas, Warren Powell, and Kazutoshi Yamazaki. "Index policies for discounted bandit problems with availability constraints". Advances in Applied Probability 40, no. 2 (June 2008): 377–400. http://dx.doi.org/10.1239/aap/1214950209.
Tsitsiklis, J. "A lemma on the multiarmed bandit problem". IEEE Transactions on Automatic Control 31, no. 6 (June 1986): 576–77. http://dx.doi.org/10.1109/tac.1986.1104332.
Reverdy, Paul, Vaibhav Srivastava, and Naomi Ehrich Leonard. "Corrections to “Satisficing in Multiarmed Bandit Problems”". IEEE Transactions on Automatic Control 66, no. 1 (January 2021): 476–78. http://dx.doi.org/10.1109/tac.2020.2981433.
Frostig, Esther, and Gideon Weiss. "Four proofs of Gittins’ multiarmed bandit theorem". Annals of Operations Research 241, no. 1-2 (January 7, 2014): 127–65. http://dx.doi.org/10.1007/s10479-013-1523-0.
Ishikida, Takashi, and Yat-wah Wan. "Scheduling Jobs That Are Subject to Deterministic Due Dates and Have Deteriorating Expected Rewards". Probability in the Engineering and Informational Sciences 11, no. 1 (January 1997): 65–78. http://dx.doi.org/10.1017/s026996480000468x.
Jiang, Weijin, Pingping Chen, Wanqing Zhang, Yongxia Sun, Chen Junpeng, and Qing Wen. "User Recruitment Algorithm for Maximizing Quality under Limited Budget in Mobile Crowdsensing". Discrete Dynamics in Nature and Society 2022 (January 20, 2022): 1–13. http://dx.doi.org/10.1155/2022/4804231.
Zeng, Fanzi, and Xinwang Shen. "Channel Selection Based on Trust and Multiarmed Bandit in Multiuser, Multichannel Cognitive Radio Networks". Scientific World Journal 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/916156.
Mersereau, A. J., P. Rusmevichientong, and J. N. Tsitsiklis. "A Structured Multiarmed Bandit Problem and the Greedy Policy". IEEE Transactions on Automatic Control 54, no. 12 (December 2009): 2787–802. http://dx.doi.org/10.1109/tac.2009.2031725.
Varaiya, P., J. Walrand, and C. Buyukkoc. "Extensions of the multiarmed bandit problem: The discounted case". IEEE Transactions on Automatic Control 30, no. 5 (May 1985): 426–39. http://dx.doi.org/10.1109/tac.1985.1103989.
Martin, David M., and Fred A. Johnson. "A Multiarmed Bandit Approach to Adaptive Water Quality Management". Integrated Environmental Assessment and Management 16, no. 6 (August 14, 2020): 841–52. http://dx.doi.org/10.1002/ieam.4302.
Kang, Xiaohan, Hong Ri, Mohd Nor Akmal Khalid, and Hiroyuki Iida. "Addictive Games: Case Study on Multi-Armed Bandit Game". Information 12, no. 12 (December 15, 2021): 521. http://dx.doi.org/10.3390/info12120521.
Meng, Hao, Wasswa Shafik, S. Mojtaba Matinkhah, and Zubair Ahmad. "A 5G Beam Selection Machine Learning Algorithm for Unmanned Aerial Vehicle Applications". Wireless Communications and Mobile Computing 2020 (August 1, 2020): 1–16. http://dx.doi.org/10.1155/2020/1428968.
Chang, Hyeong Soo, and Sanghee Choe. "Combining Multiple Strategies for Multiarmed Bandit Problems and Asymptotic Optimality". Journal of Control Science and Engineering 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/264953.
Yoshida, Y. "Optimal stopping problems for multiarmed bandit processes with arms' independence". Computers & Mathematics with Applications 26, no. 12 (December 1993): 47–60. http://dx.doi.org/10.1016/0898-1221(93)90058-4.
Gokcesu, Kaan, and Suleyman Serdar Kozat. "An Online Minimax Optimal Algorithm for Adversarial Multiarmed Bandit Problem". IEEE Transactions on Neural Networks and Learning Systems 29, no. 11 (November 2018): 5565–80. http://dx.doi.org/10.1109/tnnls.2018.2806006.
Misra, Kanishka, Eric M. Schwartz, and Jacob Abernethy. "Dynamic Online Pricing with Incomplete Information Using Multiarmed Bandit Experiments". Marketing Science 38, no. 2 (March 2019): 226–52. http://dx.doi.org/10.1287/mksc.2018.1129.
Toelch, Ulf, Matthew J. Bruce, Marius T. H. Meeus, and Simon M. Reader. "Humans copy rapidly increasing choices in a multiarmed bandit problem". Evolution and Human Behavior 31, no. 5 (September 2010): 326–33. http://dx.doi.org/10.1016/j.evolhumbehav.2010.03.002.
Muqattash, Isa, and Jiaqiao Hu. "An ϵ-Greedy Multiarmed Bandit Approach to Markov Decision Processes". Stats 6, no. 1 (January 1, 2023): 99–112. http://dx.doi.org/10.3390/stats6010006.
Mansour, Yishay, Aleksandrs Slivkins, and Vasilis Syrgkanis. "Bayesian Incentive-Compatible Bandit Exploration". Operations Research 68, no. 4 (July 2020): 1132–61. http://dx.doi.org/10.1287/opre.2019.1949.
Uriarte, Alberto, and Santiago Ontañón. "Improving Monte Carlo Tree Search Policies in StarCraft via Probabilistic Models Learned from Replay Data". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 12, no. 1 (June 25, 2021): 100–106. http://dx.doi.org/10.1609/aiide.v12i1.12852.
Qu, Yuben, Chao Dong, Dawei Niu, Hai Wang, and Chang Tian. "A Two-Dimensional Multiarmed Bandit Approach to Secondary Users with Network Coding in Cognitive Radio Networks". Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/672837.
Bao, Wenqing, Xiaoqiang Cai, and Xianyi Wu. "A General Theory of MultiArmed Bandit Processes with Constrained Arm Switches". SIAM Journal on Control and Optimization 59, no. 6 (January 2021): 4666–88. http://dx.doi.org/10.1137/19m1282386.
Drabik, Ewa. "On nearly selfoptimizing strategies for multiarmed bandit problems with controlled arms". Applicationes Mathematicae 23, no. 4 (1996): 449–73. http://dx.doi.org/10.4064/am-23-4-449-473.
Liu, Haoyang, Keqin Liu, and Qing Zhao. "Learning in a Changing World: Restless Multiarmed Bandit With Unknown Dynamics". IEEE Transactions on Information Theory 59, no. 3 (March 2013): 1902–16. http://dx.doi.org/10.1109/tit.2012.2230215.
Agrawal, Himanshu, and Krishna Asawa. "Decentralized Learning for Opportunistic Spectrum Access: Multiuser Restless Multiarmed Bandit Formulation". IEEE Systems Journal 14, no. 2 (June 2020): 2485–96. http://dx.doi.org/10.1109/jsyst.2019.2943361.
Nakayama, Kazuaki, Ryuzo Nakamura, Masato Hisakado, and Shintaro Mori. "Optimal learning dynamics of multiagent system in restless multiarmed bandit game". Physica A: Statistical Mechanics and its Applications 549 (July 2020): 124314. http://dx.doi.org/10.1016/j.physa.2020.124314.