Academic literature on the topic "Multiarmed Bandits"
Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Multiarmed Bandits".
Journal articles on the topic "Multiarmed Bandits"
Righter, Rhonda, and J. George Shanthikumar. "Independently Expiring Multiarmed Bandits." Probability in the Engineering and Informational Sciences 12, no. 4 (October 1998): 453–68. http://dx.doi.org/10.1017/s0269964800005325.
Gao, Xiujuan, Hao Liang, and Tong Wang. "A Common Value Experimentation with Multiarmed Bandits." Mathematical Problems in Engineering 2018 (July 30, 2018): 1–8. http://dx.doi.org/10.1155/2018/4791590.
Kalathil, Dileep, Naumaan Nayyar, and Rahul Jain. "Decentralized Learning for Multiplayer Multiarmed Bandits." IEEE Transactions on Information Theory 60, no. 4 (April 2014): 2331–45. http://dx.doi.org/10.1109/tit.2014.2302471.
Cesa-Bianchi, Nicolò. "Multiarmed Bandits in the Worst Case." IFAC Proceedings Volumes 35, no. 1 (2002): 91–96. http://dx.doi.org/10.3182/20020721-6-es-1901.01001.
Bray, Robert L., Decio Coviello, Andrea Ichino, and Nicola Persico. "Multitasking, Multiarmed Bandits, and the Italian Judiciary." Manufacturing & Service Operations Management 18, no. 4 (October 2016): 545–58. http://dx.doi.org/10.1287/msom.2016.0586.
Denardo, Eric V., Haechurl Park, and Uriel G. Rothblum. "Risk-Sensitive and Risk-Neutral Multiarmed Bandits." Mathematics of Operations Research 32, no. 2 (May 2007): 374–94. http://dx.doi.org/10.1287/moor.1060.0240.
Weber, Richard. "On the Gittins Index for Multiarmed Bandits." Annals of Applied Probability 2, no. 4 (November 1992): 1024–33. http://dx.doi.org/10.1214/aoap/1177005588.
Drugan, Madalina M. "Covariance Matrix Adaptation for Multiobjective Multiarmed Bandits." IEEE Transactions on Neural Networks and Learning Systems 30, no. 8 (August 2019): 2493–502. http://dx.doi.org/10.1109/tnnls.2018.2885123.
Burnetas, Apostolos N., and Michael N. Katehakis. "Asymptotic Bayes Analysis for the Finite-Horizon One-Armed-Bandit Problem." Probability in the Engineering and Informational Sciences 17, no. 1 (January 2003): 53–82. http://dx.doi.org/10.1017/s0269964803171045.
Nayyar, Naumaan, Dileep Kalathil, and Rahul Jain. "On Regret-Optimal Learning in Decentralized Multiplayer Multiarmed Bandits." IEEE Transactions on Control of Network Systems 5, no. 1 (March 2018): 597–606. http://dx.doi.org/10.1109/tcns.2016.2635380.
Theses on the topic "Multiarmed Bandits"
Lin, Haixia. "Multiple Machine Maintenance: Applying a Separable Value Function Approximation to a Variation of the Multiarmed Bandit." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87269.
Saha, Aadirupa. "Battle of Bandits: Online Learning from Subsetwise Preferences and Other Structured Feedback." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/5184.
Mann, Timothy. "Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration." Thesis, 2012. http://hdl.handle.net/1969.1/148402.
Texto completoCapítulos de libros sobre el tema "Multiarmed Bandits"
Lee, Chia-Jung, Yalei Yang, Sheng-Hui Meng y Tien-Wen Sung. "Adversarial Multiarmed Bandit Problems in Gradually Evolving Worlds". En Advances in Smart Vehicular Technology, Transportation, Communication and Applications, 305–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70730-3_36.
Texto completo"Multiarmed Bandits". En Mathematical Analysis of Machine Learning Algorithms, 326–44. Cambridge University Press, 2023. http://dx.doi.org/10.1017/9781009093057.017.
Texto completoAgrawal, Shipra. "Recent Advances in Multiarmed Bandits for Sequential Decision Making". En Operations Research & Management Science in the Age of Analytics, 167–88. INFORMS, 2019. http://dx.doi.org/10.1287/educ.2019.0204.
Texto completoActas de conferencias sobre el tema "Multiarmed Bandits"
Niño-Mora, José. "An Index Policy for Multiarmed Multimode Restless Bandits". En 3rd International ICST Conference on Performance Evaluation Methodologies and Tools. ICST, 2008. http://dx.doi.org/10.4108/icst.valuetools2008.4410.
Texto completoLandgren, Peter, Vaibhav Srivastava y Naomi Ehrich Leonard. "On distributed cooperative decision-making in multiarmed bandits". En 2016 European Control Conference (ECC). IEEE, 2016. http://dx.doi.org/10.1109/ecc.2016.7810293.
Texto completoNino-Mora, José. "Computing an Index Policy for Multiarmed Bandits with Deadlines". En 3rd International ICST Conference on Performance Evaluation Methodologies and Tools. ICST, 2008. http://dx.doi.org/10.4108/icst.valuetools2008.4406.
Texto completoSrivastava, Vaibhav, Paul Reverdy y Naomi E. Leonard. "Surveillance in an abruptly changing world via multiarmed bandits". En 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039462.
Texto completoLandgren, Peter, Vaibhav Srivastava y Naomi Ehrich Leonard. "Distributed cooperative decision-making in multiarmed bandits: Frequentist and Bayesian algorithms". En 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798264.
Texto completoLandgren, Peter, Vaibhav Srivastava y Naomi Ehrich Leonard. "Social Imitation in Cooperative Multiarmed Bandits: Partition-Based Algorithms with Strictly Local Information". En 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8619744.
Texto completoAnantharam, V. y P. Varaiya. "Asymptotically efficient rules in multiarmed Bandit problems". En 1986 25th IEEE Conference on Decision and Control. IEEE, 1986. http://dx.doi.org/10.1109/cdc.1986.267217.
Texto completoGummadi, Ramakrishna, Ramesh Johari y Jia Yuan Yu. "Mean field equilibria of multiarmed bandit games". En the 13th ACM Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2229012.2229060.
Texto completoMersereau, Adam J., Paat Rusmevichientong y John N. Tsitsiklis. "A structured multiarmed bandit problem and the greedy policy". En 2008 47th IEEE Conference on Decision and Control. IEEE, 2008. http://dx.doi.org/10.1109/cdc.2008.4738680.
Texto completoWei, Lai y Vaibhav Srivatsva. "On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems". En 2018 Annual American Control Conference (ACC). IEEE, 2018. http://dx.doi.org/10.23919/acc.2018.8431265.