A selection of scholarly literature on the topic "Combinatorial multi-armed bandits"
Cite sources in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, dissertations, theses, and other scholarly sources on the topic "Combinatorial multi-armed bandits".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Combinatorial multi-armed bandits"
Ontañón, Santiago. "Combinatorial Multi-armed Bandits for Real-Time Strategy Games." Journal of Artificial Intelligence Research 58 (March 29, 2017): 665–702. http://dx.doi.org/10.1613/jair.5398.
Xu, Lily, Elizabeth Bondi, Fei Fang, Andrew Perrault, Kai Wang, and Milind Tambe. "Dual-Mandate Patrols: Multi-Armed Bandits for Green Security." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 17 (May 18, 2021): 14974–82. http://dx.doi.org/10.1609/aaai.v35i17.17757.
Zhou, Huozhi, Lingda Wang, Lav Varshney, and Ee-Peng Lim. "A Near-Optimal Change-Detection Based Algorithm for Piecewise-Stationary Combinatorial Semi-Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6933–40. http://dx.doi.org/10.1609/aaai.v34i04.6176.
Moraes, Rubens, Julian Mariño, Levi Lelis, and Mario Nascimento. "Action Abstractions for Combinatorial Multi-Armed Bandit Tree Search." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 14, no. 1 (September 25, 2018): 74–80. http://dx.doi.org/10.1609/aiide.v14i1.13018.
Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.
Agarwal, Mridul, Vaneet Aggarwal, Abhishek Kumar Umrawal, and Chris Quinn. "DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6557–65. http://dx.doi.org/10.1609/aaai.v35i8.16812.
Gai, Yi, Bhaskar Krishnamachari, and Rahul Jain. "Combinatorial Network Optimization With Unknown Variables: Multi-Armed Bandits With Linear Rewards and Individual Observations." IEEE/ACM Transactions on Networking 20, no. 5 (October 2012): 1466–78. http://dx.doi.org/10.1109/tnet.2011.2181864.
Ontanon, Santiago. "The Combinatorial Multi-Armed Bandit Problem and Its Application to Real-Time Strategy Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 1 (June 30, 2021): 58–64. http://dx.doi.org/10.1609/aiide.v9i1.12681.
Zhang, Chen, and Steven C. H. Hoi. "Partially Observable Multi-Sensor Sequential Change Detection: A Combinatorial Multi-Armed Bandit Approach." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5733–40. http://dx.doi.org/10.1609/aaai.v33i01.33015733.
Nasim, Imtiaz, Ahmed S. Ibrahim, and Seungmo Kim. "Learning-Based Beamforming for Multi-User Vehicular Communications: A Combinatorial Multi-Armed Bandit Approach." IEEE Access 8 (2020): 219891–902. http://dx.doi.org/10.1109/access.2020.3043301.
Повний текст джерелаДисертації з теми "Combinatorial multi-armed bandits"
Talebi Mazraeh Shahi, Mohammad Sadegh. "Minimizing Regret in Combinatorial Bandits and Reinforcement Learning." Doctoral thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219970.
Повний текст джерелаQC 20171215
Talebi Mazraeh Shahi, Mohammad Sadegh. "Online Combinatorial Optimization under Bandit Feedback." Licentiate thesis, KTH, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-181321.
Повний текст джерелаQC 20160201
Ferreira, Alexandre Silvestre. "A cross-domain multi-armed bandit hyper-heuristic." Repositório Institucional da UFPR, 2016. http://hdl.handle.net/1884/41803.
Повний текст джерелаCo-orientador : Prof. Dr. Richard Aderbal Gonçalves
Dissertação (mestrado) - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defesa: Curitiba, 26/02/2016
Inclui referências : f. 64-70
Resumo (translated from Portuguese): Many real-world optimization problems are complex, with many variables and constraints. For this reason, meta-heuristics have become the main way to solve problems with these characteristics. One of the main drawbacks of meta-heuristics is that they are usually developed using domain-specific characteristics, which ties them to that domain and makes them hard to apply to other problems. In search of more adaptable algorithms, the concept of hyper-heuristics emerged. Hyper-heuristics are search methods that aim to solve optimization problems by selecting or generating heuristics. Selection hyper-heuristics choose a good heuristic to apply from a pool of heuristics. The selection mechanism is the main component of a selection hyper-heuristic and has a fundamental impact on its performance. Although there are several works on selection hyper-heuristics, there is still no consensus on how a good selection strategy should be defined. In search of a selection strategy, this work studies algorithms inspired by the Multi-Armed Bandit (MAB) problem. These algorithms have been applied in the Adaptive Operator Selection context with promising results; however, there are still few approaches in the hyper-heuristic context. In this dissertation we propose a hyper-heuristic that uses MAB algorithms as its selection strategy. The proposed approach is built on the HyFlex framework, which was proposed to facilitate the implementation and comparison of new hyper-heuristics. The parameters were tuned through an empirical study, and the best configuration found was compared with the top ten finishers of the CHeSC 2011 competition. The results were good and comparable with those of the best approaches in the literature; the proposed algorithm reached fourth place.
Despite the good results, the experiments show that the proposed approach is strongly influenced by its parameters. Future work will investigate ways to mitigate this influence.
Abstract: Many real-world optimization problems are very complex, with many variables and constraints, and cannot be solved by exact methods in a reasonable computational time. As an alternative, meta-heuristics emerged as an efficient way to solve this type of problem, even though they cannot guarantee optimal values. The main issue with meta-heuristics is that they are built using domain-specific knowledge, so they require a great effort to be used in a new domain. To address this problem, the concept of hyper-heuristics was proposed. Hyper-heuristics are search methods that aim to solve optimization problems by selecting or generating heuristics. Selection hyper-heuristics choose, from a pool of heuristics, a good one to be applied at the current stage of the optimization process. The selection mechanism is the main part of a selection hyper-heuristic and has a great impact on its performance. Although there are several works focused on selection hyper-heuristics, there is no unanimity about the best way to define a selection strategy. In this dissertation, a deterministic selection strategy based on the concepts of the Multi-Armed Bandit (MAB) problem is proposed for cross-domain optimization. Multi-armed bandit approaches define a selection function with two components: the first is based on the performance of an operator, and the second on the number of times the operator has been used. These approaches have shown promising performance in the Adaptive Operator Selection context; however, there are few works in the literature that target the hyper-heuristic context, as proposed here. The proposed approach is integrated into the HyFlex framework, which was developed to facilitate the implementation and comparison of hyper-heuristics. An empirical parameter configuration was performed, and the best setup was compared to the top ten CHeSC 2011 algorithms using the same methodology adopted during the competition.
The results obtained were good and comparable to those attained in the literature. Moreover, it was concluded that the behavior of MAB selection is heavily affected by its parameters. As this is not desirable for hyper-heuristics, future research will investigate ways to better deal with parameter setting.
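The abstract above describes a two-component MAB selection function: an exploitation term based on an operator's observed performance and an exploration term based on how often the operator has been used. A minimal UCB1-style sketch of such a selector follows; the function name, the reward/count representation, and the `scale` constant are illustrative assumptions, not details taken from the dissertation.

```python
import math

def mab_select(rewards, counts, scale=1.0):
    """Pick an operator index UCB1-style: empirical mean reward plus an
    exploration bonus that shrinks with how often the operator was used."""
    total = sum(counts)
    best, best_score = 0, float("-inf")
    for i, (r, n) in enumerate(zip(rewards, counts)):
        if n == 0:
            return i  # try every operator at least once
        score = r / n + scale * math.sqrt(2.0 * math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

In a hyper-heuristic loop, `rewards[i]` would accumulate the improvement produced by low-level heuristic `i` and `counts[i]` its number of applications; `scale` trades exploration off against exploitation, which is consistent with the abstract's observation that the method is sensitive to its parameters.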
Prakash, Gujar Sujit. "Novel Mechanisms For Allocation Of Heterogeneous Items In Strategic Settings." Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/1654.
Book chapters on the topic "Combinatorial multi-armed bandits"
Mandaglio, Domenico, and Andrea Tagarelli. "A Combinatorial Multi-Armed Bandit Based Method for Dynamic Consensus Community Detection in Temporal Networks." In Discovery Science, 412–27. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33778-0_31.
Conference papers on the topic "Combinatorial multi-armed bandits"
Zuo, Jinhang, and Carlee Joe-Wong. "Combinatorial Multi-armed Bandits for Resource Allocation." In 2021 55th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2021. http://dx.doi.org/10.1109/ciss50987.2021.9400228.
Tang, Shaojie, Yaqin Zhou, Kai Han, Zhao Zhang, Jing Yuan, and Weili Wu. "Networked Stochastic Multi-armed Bandits with Combinatorial Strategies." In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2017. http://dx.doi.org/10.1109/icdcs.2017.303.
Egger, Maximilian, Rawad Bitar, Antonia Wachter-Zeh, and Deniz Gunduz. "Efficient Distributed Machine Learning via Combinatorial Multi-Armed Bandits." In 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022. http://dx.doi.org/10.1109/isit50566.2022.9834499.
Xu, Huanle, Yang Liu, Wing Cheong Lau, and Rui Li. "Combinatorial Multi-Armed Bandits with Concave Rewards and Fairness Constraints." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/354.
Kang, Sunjung, and Changhee Joo. "Combinatorial multi-armed bandits in cognitive radio networks: A brief overview." In 2017 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2017. http://dx.doi.org/10.1109/ictc.2017.8190862.
Huang, Hanxun, Xingjun Ma, Sarah M. Erfani, and James Bailey. "Neural Architecture Search via Combinatorial Multi-Armed Bandit." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533655.
Song, Yiwen, and Haiming Jin. "Minimizing Entropy for Crowdsourcing with Combinatorial Multi-Armed Bandit." In IEEE INFOCOM 2021 - IEEE Conference on Computer Communications. IEEE, 2021. http://dx.doi.org/10.1109/infocom42981.2021.9488800.
Mandaglio, Domenico, and Andrea Tagarelli. "Dynamic consensus community detection and combinatorial multi-armed bandit." In ASONAM '19: International Conference on Advances in Social Networks Analysis and Mining. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3341161.3342910.
Gai, Yi, Bhaskar Krishnamachari, and Mingyan Liu. "On the Combinatorial Multi-Armed Bandit Problem with Markovian Rewards." In 2011 IEEE Global Communications Conference (GLOBECOM 2011). IEEE, 2011. http://dx.doi.org/10.1109/glocom.2011.6134244.
Gao, Guoju, He Huang, Mingjun Xiao, Jie Wu, Yu-E. Sun, and Sheng Zhang. "Auction-Based Combinatorial Multi-Armed Bandit Mechanisms with Strategic Arms." In IEEE INFOCOM 2021 - IEEE Conference on Computer Communications. IEEE, 2021. http://dx.doi.org/10.1109/infocom42981.2021.9488765.