A selection of scholarly literature on the topic "Multiarmed Bandits"
Format citations in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Multiarmed Bandits".
Journal articles on the topic "Multiarmed Bandits"
Righter, Rhonda, and J. George Shanthikumar. "Independently Expiring Multiarmed Bandits." Probability in the Engineering and Informational Sciences 12, no. 4 (October 1998): 453–68. http://dx.doi.org/10.1017/s0269964800005325.
Gao, Xiujuan, Hao Liang, and Tong Wang. "A Common Value Experimentation with Multiarmed Bandits." Mathematical Problems in Engineering 2018 (July 30, 2018): 1–8. http://dx.doi.org/10.1155/2018/4791590.
Kalathil, Dileep, Naumaan Nayyar, and Rahul Jain. "Decentralized Learning for Multiplayer Multiarmed Bandits." IEEE Transactions on Information Theory 60, no. 4 (April 2014): 2331–45. http://dx.doi.org/10.1109/tit.2014.2302471.
Cesa-Bianchi, Nicolò. "Multiarmed Bandits in the Worst Case." IFAC Proceedings Volumes 35, no. 1 (2002): 91–96. http://dx.doi.org/10.3182/20020721-6-es-1901.01001.
Bray, Robert L., Decio Coviello, Andrea Ichino, and Nicola Persico. "Multitasking, Multiarmed Bandits, and the Italian Judiciary." Manufacturing & Service Operations Management 18, no. 4 (October 2016): 545–58. http://dx.doi.org/10.1287/msom.2016.0586.
Denardo, Eric V., Haechurl Park, and Uriel G. Rothblum. "Risk-Sensitive and Risk-Neutral Multiarmed Bandits." Mathematics of Operations Research 32, no. 2 (May 2007): 374–94. http://dx.doi.org/10.1287/moor.1060.0240.
Weber, Richard. "On the Gittins Index for Multiarmed Bandits." Annals of Applied Probability 2, no. 4 (November 1992): 1024–33. http://dx.doi.org/10.1214/aoap/1177005588.
Drugan, Madalina M. "Covariance Matrix Adaptation for Multiobjective Multiarmed Bandits." IEEE Transactions on Neural Networks and Learning Systems 30, no. 8 (August 2019): 2493–502. http://dx.doi.org/10.1109/tnnls.2018.2885123.
Burnetas, Apostolos N., and Michael N. Katehakis. "Asymptotic Bayes Analysis for the Finite-Horizon One-Armed-Bandit Problem." Probability in the Engineering and Informational Sciences 17, no. 1 (January 2003): 53–82. http://dx.doi.org/10.1017/s0269964803171045.
Nayyar, Naumaan, Dileep Kalathil, and Rahul Jain. "On Regret-Optimal Learning in Decentralized Multiplayer Multiarmed Bandits." IEEE Transactions on Control of Network Systems 5, no. 1 (March 2018): 597–606. http://dx.doi.org/10.1109/tcns.2016.2635380.
Dissertations on the topic "Multiarmed Bandits"
Lin, Haixia. "Multiple machine maintenance : applying a separable value function approximation to a variation of the multiarmed bandit." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87269.
Saha, Aadirupa. "Battle of Bandits: Online Learning from Subsetwise Preferences and Other Structured Feedback." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/5184.
Mann, Timothy. "Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration." Thesis, 2012. http://hdl.handle.net/1969.1/148402.
Повний текст джерелаЧастини книг з теми "Multiarmed Bandits"
Lee, Chia-Jung, Yalei Yang, Sheng-Hui Meng, and Tien-Wen Sung. "Adversarial Multiarmed Bandit Problems in Gradually Evolving Worlds." In Advances in Smart Vehicular Technology, Transportation, Communication and Applications, 305–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70730-3_36.
"Multiarmed Bandits." In Mathematical Analysis of Machine Learning Algorithms, 326–44. Cambridge University Press, 2023. http://dx.doi.org/10.1017/9781009093057.017.
Agrawal, Shipra. "Recent Advances in Multiarmed Bandits for Sequential Decision Making." In Operations Research & Management Science in the Age of Analytics, 167–88. INFORMS, 2019. http://dx.doi.org/10.1287/educ.2019.0204.
Повний текст джерелаТези доповідей конференцій з теми "Multiarmed Bandits"
Niño-Mora, José. "An Index Policy for Multiarmed Multimode Restless Bandits." In 3rd International ICST Conference on Performance Evaluation Methodologies and Tools. ICST, 2008. http://dx.doi.org/10.4108/icst.valuetools2008.4410.
Landgren, Peter, Vaibhav Srivastava, and Naomi Ehrich Leonard. "On distributed cooperative decision-making in multiarmed bandits." In 2016 European Control Conference (ECC). IEEE, 2016. http://dx.doi.org/10.1109/ecc.2016.7810293.
Niño-Mora, José. "Computing an Index Policy for Multiarmed Bandits with Deadlines." In 3rd International ICST Conference on Performance Evaluation Methodologies and Tools. ICST, 2008. http://dx.doi.org/10.4108/icst.valuetools2008.4406.
Srivastava, Vaibhav, Paul Reverdy, and Naomi E. Leonard. "Surveillance in an abruptly changing world via multiarmed bandits." In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039462.
Landgren, Peter, Vaibhav Srivastava, and Naomi Ehrich Leonard. "Distributed cooperative decision-making in multiarmed bandits: Frequentist and Bayesian algorithms." In 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798264.
Landgren, Peter, Vaibhav Srivastava, and Naomi Ehrich Leonard. "Social Imitation in Cooperative Multiarmed Bandits: Partition-Based Algorithms with Strictly Local Information." In 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8619744.
Anantharam, V., and P. Varaiya. "Asymptotically efficient rules in multiarmed Bandit problems." In 1986 25th IEEE Conference on Decision and Control. IEEE, 1986. http://dx.doi.org/10.1109/cdc.1986.267217.
Gummadi, Ramakrishna, Ramesh Johari, and Jia Yuan Yu. "Mean field equilibria of multiarmed bandit games." In the 13th ACM Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2229012.2229060.
Mersereau, Adam J., Paat Rusmevichientong, and John N. Tsitsiklis. "A structured multiarmed bandit problem and the greedy policy." In 2008 47th IEEE Conference on Decision and Control. IEEE, 2008. http://dx.doi.org/10.1109/cdc.2008.4738680.
Wei, Lai, and Vaibhav Srivastava. "On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems." In 2018 Annual American Control Conference (ACC). IEEE, 2018. http://dx.doi.org/10.23919/acc.2018.8431265.