Selected scientific literature on the topic "Adversarial bandits"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Adversarial bandits".
Journal articles on the topic "Adversarial bandits"
Lu, Shiyin, Guanghui Wang, and Lijun Zhang. "Stochastic Graphical Bandits with Adversarial Corruptions". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8749–57. http://dx.doi.org/10.1609/aaai.v35i10.17060.
Pacchiano, Aldo, Heinrich Jiang, and Michael I. Jordan. "Robustness Guarantees for Mode Estimation with an Application to Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9277–84. http://dx.doi.org/10.1609/aaai.v35i10.17119.
Wang, Zhiwei, Huazheng Wang, and Hongning Wang. "Stealthy Adversarial Attacks on Stochastic Multi-Armed Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15770–77. http://dx.doi.org/10.1609/aaai.v38i14.29506.
Esfandiari, Hossein, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. "Regret Bounds for Batched Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7340–48. http://dx.doi.org/10.1609/aaai.v35i8.16901.
Chen, Cheng, Canzhe Zhao, and Shuai Li. "Simultaneously Learning Stochastic and Adversarial Bandits under the Position-Based Model". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6202–10. http://dx.doi.org/10.1609/aaai.v36i6.20569.
Wang, Lingda, Bingcong Li, Huozhi Zhou, Georgios B. Giannakis, Lav R. Varshney, and Zhizhen Zhao. "Adversarial Linear Contextual Bandits with Graph-Structured Side Observations". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10156–64. http://dx.doi.org/10.1609/aaai.v35i11.17218.
Wachel, Pawel, and Cristian Rojas. "An Adversarial Approach to Adaptive Model Predictive Control". Journal of Advances in Applied & Computational Mathematics 9 (September 19, 2022): 135–46. http://dx.doi.org/10.15377/2409-5761.2022.09.10.
Xu, Xiao, and Qing Zhao. "Memory-Constrained No-Regret Learning in Adversarial Multi-Armed Bandits". IEEE Transactions on Signal Processing 69 (2021): 2371–82. http://dx.doi.org/10.1109/tsp.2021.3070201.
Shi, Chengshuai, and Cong Shen. "On No-Sensing Adversarial Multi-Player Multi-Armed Bandits With Collision Communications". IEEE Journal on Selected Areas in Information Theory 2, no. 2 (June 2021): 515–33. http://dx.doi.org/10.1109/jsait.2021.3076027.
Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits". Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.
Theses / dissertations on the topic "Adversarial bandits"
Maillard, Odalric-Ambrym. "APPRENTISSAGE SÉQUENTIEL : Bandits, Statistique et Renforcement". PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00845410.
Aubert, Julien. "Théorie de l'estimation pour les processus d'apprentissage". Electronic Thesis or Diss., Université Côte d'Azur, 2025. http://www.theses.fr/2025COAZ5001.
Texto completo da fonteThis thesis considers the problem of estimating the learning process of an individual during a task based on observed choices or actions of that individual. This question lies at the intersection of cognition, statistics, and reinforcement learning, and involves developing models that accurately capture the dynamics of learning, estimating model parameters, and selecting the best-fitting model. A key difficulty is that learning, by nature, leads to non-independent and non-stationary data, as the individual selects its actions depending on the outcome of its previous choices.Existing statistical theories and methods are well-established for independent and stationary data, but their application to a learning framework introduces significant challenges. This thesis seeks to bridge the gap between empirical methods and theoretical guarantees in computational modeling. I first explore the properties of maximum likelihood estimation on a model of learning based on a bandit problem. I then present general theoretical results on penalized log-likelihood model selection for non-stationary and dependent data, for which I develop a new concentration inequality for the suprema of renormalized processes. I also introduce a hold-out procedure and theoretical guarantees for it in a learning framework. These theoretical results are supported with applications on synthetic data and on real cognitive experiments in psychology and ethology
Books on the topic "Adversarial bandits"
Parsons, Dave. Bandits!: Pictorial history of American adversarial aircraft. Osceola, WI: Motorbooks International, 1993.
Nelson, Derek, and Dave Parsons. Bandits!: Pictorial History of American Adversarial Aircraft. Motorbooks Intl, 1993.
Book chapters on the topic "Adversarial bandits"
Li, Yandi, and Jianxiong Guo. "A Modified EXP3 in Adversarial Bandits with Multi-user Delayed Feedback". In Lecture Notes in Computer Science, 263–78. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-49193-1_20.
Zheng, Rong, and Cunqing Hua. "Adversarial Multi-armed Bandit". In Wireless Networks, 41–57. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-50502-2_4.
St-Pierre, David L., and Olivier Teytaud. "Sharing Information in Adversarial Bandit". In Applications of Evolutionary Computation, 386–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45523-4_32.
Uchiya, Taishi, Atsuyoshi Nakamura, and Mineichi Kudo. "Algorithms for Adversarial Bandit Problems with Multiple Plays". In Lecture Notes in Computer Science, 375–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16108-7_30.
Lee, Chia-Jung, Yalei Yang, Sheng-Hui Meng, and Tien-Wen Sung. "Adversarial Multiarmed Bandit Problems in Gradually Evolving Worlds". In Advances in Smart Vehicular Technology, Transportation, Communication and Applications, 305–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70730-3_36.
"Exp3 for Adversarial Linear Bandits". In Bandit Algorithms, 278–85. Cambridge University Press, 2020. http://dx.doi.org/10.1017/9781108571401.034.
"The Relation between Adversarial and Stochastic Linear Bandits". In Bandit Algorithms, 306–12. Cambridge University Press, 2020. http://dx.doi.org/10.1017/9781108571401.036.
Srisawad, Phurinut, Juergen Branke, and Long Tran-Thanh. "Identifying the Best Arm in the Presence of Global Environment Shifts". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240735.
Wissow, Stephen, and Masataro Asai. "Scale-Adaptive Balancing of Exploration and Exploitation in Classical Planning". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240994.
Conference papers on the topic "Adversarial bandits"
Huang, Yin, Qingsong Liu, and Jie Xu. "Adversarial Combinatorial Bandits with Switching Cost and Arm Selection Constraints". In IEEE INFOCOM 2024 - IEEE Conference on Computer Communications, 371–80. IEEE, 2024. http://dx.doi.org/10.1109/infocom52122.2024.10621364.
Li, Jinpeng, Yunni Xia, Xiaoning Sun, Peng Chen, Xiaobo Li, and Jiafeng Feng. "Delay-Aware Service Caching in Edge Cloud: An Adversarial Semi-Bandits Learning-Based Approach". In 2024 IEEE 17th International Conference on Cloud Computing (CLOUD), 411–18. IEEE, 2024. http://dx.doi.org/10.1109/cloud62652.2024.00053.
La-aiddee, Panithan, Paramin Sangwongngam, Lunchakorn Wuttisittikulkij, and Pisit Vanichchanunt. "A Generative Adversarial Network-Based Approach for Reflective-Metasurface Unit-Cell Synthesis in mmWave Bands". In 2024 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/itc-cscc62988.2024.10628337.
Immorlica, Nicole, Karthik Abinav Sankararaman, Robert Schapire, and Aleksandrs Slivkins. "Adversarial Bandits with Knapsacks". In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2019. http://dx.doi.org/10.1109/focs.2019.00022.
Lykouris, Thodoris, Vahab Mirrokni, and Renato Paes Leme. "Stochastic bandits robust to adversarial corruptions". In STOC '18: Symposium on Theory of Computing. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3188745.3188918.
Wan, Zongqi, Xiaoming Sun, and Jialin Zhang. "Bounded Memory Adversarial Bandits with Composite Anonymous Delayed Feedback". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/486.
Bande, Meghana, and Venugopal V. Veeravalli. "Adversarial Multi-user Bandits for Uncoordinated Spectrum Access". In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8682263.
Han, Shuguang, Michael Bendersky, Przemek Gajda, Sergey Novikov, Marc Najork, Bernhard Brodowsky, and Alexandrin Popescul. "Adversarial Bandits Policy for Crawling Commercial Web Content". In WWW '20: The Web Conference 2020. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3366423.3380125.
Howard, William W., Anthony F. Martone, and R. Michael Buehrer. "Adversarial Multi-Player Bandits for Cognitive Radar Networks". In 2022 IEEE Radar Conference (RadarConf22). IEEE, 2022. http://dx.doi.org/10.1109/radarconf2248738.2022.9764226.
Rangi, Anshuka, Massimo Franceschetti, and Long Tran-Thanh. "Unifying the Stochastic and the Adversarial Bandits with Knapsack". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/459.