Journal articles on the topic 'Multi-armed bandit formulation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 18 journal articles for your research on the topic 'Multi-armed bandit formulation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Dzhoha, A. S. "Sequential resource allocation in a stochastic environment: an overview and numerical experiments." Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics, no. 3 (2021): 13–25. http://dx.doi.org/10.17721/1812-5409.2021/3.1.
Zayas-Cabán, Gabriel, Stefanus Jasin, and Guihua Wang. "An asymptotically optimal heuristic for general nonstationary finite-horizon restless multi-armed, multi-action bandits." Advances in Applied Probability 51, no. 3 (September 2019): 745–72. http://dx.doi.org/10.1017/apr.2019.29.
Roy Chaudhuri, Arghya, and Shivaram Kalyanakrishnan. "Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 6 (April 3, 2020): 10085–92. http://dx.doi.org/10.1609/aaai.v34i06.6566.
Ai, Jing, and Alhussein A. Abouzeid. "Opportunistic spectrum access based on a constrained multi-armed bandit formulation." Journal of Communications and Networks 11, no. 2 (April 2009): 134–47. http://dx.doi.org/10.1109/jcn.2009.6391388.
Bagheri, Saeed, and Anna Scaglione. "The Restless Multi-Armed Bandit Formulation of the Cognitive Compressive Sensing Problem." IEEE Transactions on Signal Processing 63, no. 5 (March 2015): 1183–98. http://dx.doi.org/10.1109/tsp.2015.2389620.
Li, Xinbin, Jiajia Liu, Lei Yan, Song Han, and Xinping Guan. "Relay Selection for Underwater Acoustic Sensor Networks: A Multi-User Multi-Armed Bandit Formulation." IEEE Access 6 (2018): 7839–53. http://dx.doi.org/10.1109/access.2018.2801350.
Ho, Chien-Ju, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. "Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems." Journal of Artificial Intelligence Research 55 (February 3, 2016): 317–59. http://dx.doi.org/10.1613/jair.4940.
Cavenaghi, Emanuele, Gabriele Sottocornola, Fabio Stella, and Markus Zanker. "Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm." Entropy 23, no. 3 (March 23, 2021): 380. http://dx.doi.org/10.3390/e23030380.
Li, Xinbin, Xianglin Xu, Lei Yan, Haihong Zhao, and Tongwei Zhang. "Energy-Efficient Data Collection Using Autonomous Underwater Glider: A Reinforcement Learning Formulation." Sensors 20, no. 13 (July 4, 2020): 3758. http://dx.doi.org/10.3390/s20133758.
Mohamed, Ehab Mahmoud, Mohammad Alnakhli, Sherief Hashima, and Mohamed Abdel-Nasser. "Distribution of Multi MmWave UAV Mounted RIS Using Budget Constraint Multi-Player MAB." Electronics 12, no. 1 (December 20, 2022): 12. http://dx.doi.org/10.3390/electronics12010012.
Huanca-Anquise, Candy A., Ana Lúcia Cetertich Bazzan, and Anderson R. Tavares. "Multi-Objective, Multi-Armed Bandits: Algorithms for Repeated Games and Application to Route Choice." Revista de Informática Teórica e Aplicada 30, no. 1 (January 30, 2023): 11–23. http://dx.doi.org/10.22456/2175-2745.122929.
Rodriguez Diaz, Paula, Jackson A. Killian, Lily Xu, Arun Sai Suggala, Aparna Taneja, and Milind Tambe. "Flexible Budgets in Restless Bandits: A Primal-Dual Algorithm for Efficient Budget Allocation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12103–11. http://dx.doi.org/10.1609/aaai.v37i10.26427.
Yang, Yibo, Antoine Blanchard, Themistoklis Sapsis, and Paris Perdikaris. "Output-weighted sampling for multi-armed bandits with extreme payoffs." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 478, no. 2260 (April 2022). http://dx.doi.org/10.1098/rspa.2021.0781.
Taywade, Kshitija, Brent Harrison, Adib Bagh, and Judy Goldsmith. "Modelling Cournot Games as Multi-agent Multi-armed Bandits." International FLAIRS Conference Proceedings 35 (May 4, 2022). http://dx.doi.org/10.32473/flairs.v35i.130697.
Mandel, Travis, Yun-En Liu, Emma Brunskill, and Zoran Popović. "The Queue Method: Handling Delay, Heuristics, Prior Data, and Evaluation in Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (February 21, 2015). http://dx.doi.org/10.1609/aaai.v29i1.9604.
Jagadeesan, Meena, Alexander Wei, Yixin Wang, Michael I. Jordan, and Jacob Steinhardt. "Learning Equilibria in Matching Markets with Bandit Feedback." Journal of the ACM, February 16, 2023. http://dx.doi.org/10.1145/3583681.
Gullo, F., D. Mandaglio, and A. Tagarelli. "A combinatorial multi-armed bandit approach to correlation clustering." Data Mining and Knowledge Discovery, June 29, 2023. http://dx.doi.org/10.1007/s10618-023-00937-5.