Academic literature on the topic 'Multiarmed Bandits'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multiarmed Bandits.' Each source can be cited in APA, MLA, Harvard, Chicago, Vancouver, and other styles, and the full text and abstract of each publication are available wherever the metadata provides them.
Journal articles on the topic "Multiarmed Bandits"
Righter, Rhonda, and J. George Shanthikumar. "Independently Expiring Multiarmed Bandits." Probability in the Engineering and Informational Sciences 12, no. 4 (October 1998): 453–68. http://dx.doi.org/10.1017/s0269964800005325.
Gao, Xiujuan, Hao Liang, and Tong Wang. "A Common Value Experimentation with Multiarmed Bandits." Mathematical Problems in Engineering 2018 (July 30, 2018): 1–8. http://dx.doi.org/10.1155/2018/4791590.
Kalathil, Dileep, Naumaan Nayyar, and Rahul Jain. "Decentralized Learning for Multiplayer Multiarmed Bandits." IEEE Transactions on Information Theory 60, no. 4 (April 2014): 2331–45. http://dx.doi.org/10.1109/tit.2014.2302471.
Cesa-Bianchi, Nicolò. "MULTIARMED BANDITS IN THE WORST CASE." IFAC Proceedings Volumes 35, no. 1 (2002): 91–96. http://dx.doi.org/10.3182/20020721-6-es-1901.01001.
Bray, Robert L., Decio Coviello, Andrea Ichino, and Nicola Persico. "Multitasking, Multiarmed Bandits, and the Italian Judiciary." Manufacturing & Service Operations Management 18, no. 4 (October 2016): 545–58. http://dx.doi.org/10.1287/msom.2016.0586.
Denardo, Eric V., Haechurl Park, and Uriel G. Rothblum. "Risk-Sensitive and Risk-Neutral Multiarmed Bandits." Mathematics of Operations Research 32, no. 2 (May 2007): 374–94. http://dx.doi.org/10.1287/moor.1060.0240.
Weber, Richard. "On the Gittins Index for Multiarmed Bandits." Annals of Applied Probability 2, no. 4 (November 1992): 1024–33. http://dx.doi.org/10.1214/aoap/1177005588.
Drugan, Madalina M. "Covariance Matrix Adaptation for Multiobjective Multiarmed Bandits." IEEE Transactions on Neural Networks and Learning Systems 30, no. 8 (August 2019): 2493–502. http://dx.doi.org/10.1109/tnnls.2018.2885123.
Burnetas, Apostolos N., and Michael N. Katehakis. "ASYMPTOTIC BAYES ANALYSIS FOR THE FINITE-HORIZON ONE-ARMED-BANDIT PROBLEM." Probability in the Engineering and Informational Sciences 17, no. 1 (January 2003): 53–82. http://dx.doi.org/10.1017/s0269964803171045.
Nayyar, Naumaan, Dileep Kalathil, and Rahul Jain. "On Regret-Optimal Learning in Decentralized Multiplayer Multiarmed Bandits." IEEE Transactions on Control of Network Systems 5, no. 1 (March 2018): 597–606. http://dx.doi.org/10.1109/tcns.2016.2635380.
Dissertations / Theses on the topic "Multiarmed Bandits"
Lin, Haixia. "Multiple machine maintenance: applying a separable value function approximation to a variation of the multiarmed bandit." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87269.
Saha, Aadirupa. "Battle of Bandits: Online Learning from Subsetwise Preferences and Other Structured Feedback." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/5184.
Mann, Timothy. "Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration." Thesis, 2012. http://hdl.handle.net/1969.1/148402.
Book chapters on the topic "Multiarmed Bandits"
Lee, Chia-Jung, Yalei Yang, Sheng-Hui Meng, and Tien-Wen Sung. "Adversarial Multiarmed Bandit Problems in Gradually Evolving Worlds." In Advances in Smart Vehicular Technology, Transportation, Communication and Applications, 305–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70730-3_36.
"Multiarmed Bandits." In Mathematical Analysis of Machine Learning Algorithms, 326–44. Cambridge University Press, 2023. http://dx.doi.org/10.1017/9781009093057.017.
Agrawal, Shipra. "Recent Advances in Multiarmed Bandits for Sequential Decision Making." In Operations Research & Management Science in the Age of Analytics, 167–88. INFORMS, 2019. http://dx.doi.org/10.1287/educ.2019.0204.
Conference papers on the topic "Multiarmed Bandits"
Niño-Mora, José. "An Index Policy for Multiarmed Multimode Restless Bandits." In 3rd International ICST Conference on Performance Evaluation Methodologies and Tools. ICST, 2008. http://dx.doi.org/10.4108/icst.valuetools2008.4410.
Landgren, Peter, Vaibhav Srivastava, and Naomi Ehrich Leonard. "On distributed cooperative decision-making in multiarmed bandits." In 2016 European Control Conference (ECC). IEEE, 2016. http://dx.doi.org/10.1109/ecc.2016.7810293.
Niño-Mora, José. "Computing an Index Policy for Multiarmed Bandits with Deadlines." In 3rd International ICST Conference on Performance Evaluation Methodologies and Tools. ICST, 2008. http://dx.doi.org/10.4108/icst.valuetools2008.4406.
Srivastava, Vaibhav, Paul Reverdy, and Naomi E. Leonard. "Surveillance in an abruptly changing world via multiarmed bandits." In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039462.
Landgren, Peter, Vaibhav Srivastava, and Naomi Ehrich Leonard. "Distributed cooperative decision-making in multiarmed bandits: Frequentist and Bayesian algorithms." In 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798264.
Landgren, Peter, Vaibhav Srivastava, and Naomi Ehrich Leonard. "Social Imitation in Cooperative Multiarmed Bandits: Partition-Based Algorithms with Strictly Local Information." In 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8619744.
Anantharam, V., and P. Varaiya. "Asymptotically efficient rules in multiarmed Bandit problems." In 1986 25th IEEE Conference on Decision and Control. IEEE, 1986. http://dx.doi.org/10.1109/cdc.1986.267217.
Gummadi, Ramakrishna, Ramesh Johari, and Jia Yuan Yu. "Mean field equilibria of multiarmed bandit games." In the 13th ACM Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2229012.2229060.
Mersereau, Adam J., Paat Rusmevichientong, and John N. Tsitsiklis. "A structured multiarmed bandit problem and the greedy policy." In 2008 47th IEEE Conference on Decision and Control. IEEE, 2008. http://dx.doi.org/10.1109/cdc.2008.4738680.
Wei, Lai, and Vaibhav Srivastava. "On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems." In 2018 Annual American Control Conference (ACC). IEEE, 2018. http://dx.doi.org/10.23919/acc.2018.8431265.