Selected scientific literature on the topic "Bandit Contextuel"

Cite a source in APA, MLA, Chicago, Harvard and many other styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings and other scholarly sources on the topic "Bandit Contextuel".

Next to every source in the list of references you will find an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read its abstract (summary) online, when it is available in the metadata.

Journal articles on the topic "Bandit Contextuel"

1

Gisselbrecht, Thibault, Sylvain Lamprier, and Patrick Gallinari. "Collecte ciblée à partir de flux de données en ligne dans les médias sociaux. Une approche de bandit contextuel". Document numérique 19, no. 2-3 (2016): 11–30. http://dx.doi.org/10.3166/dn.19.2-3.11-30.

Full text
APA, Harvard, Vancouver, ISO and other styles
2

Dimakopoulou, Maria, Zhengyuan Zhou, Susan Athey, and Guido Imbens. "Balanced Linear Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3445–53. http://dx.doi.org/10.1609/aaai.v33i01.33013445.

Full text
Abstract (summary):
Contextual bandit algorithms are sensitive to the estimation method of the outcome model as well as the exploration method used, particularly in the presence of rich heterogeneity or complex outcome models, which can lead to difficult estimation problems along the path of learning. We develop algorithms for contextual bandits with linear payoffs that integrate balancing methods from the causal inference literature in their estimation to make it less prone to problems of estimation bias. We provide the first regret bound analyses for linear contextual bandits with balancing and show that our al
APA, Harvard, Vancouver, ISO and other styles
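For readers new to the linear-payoff setting described in the abstract above, the following is a minimal sketch of a plain ridge-regression contextual bandit in the LinUCB style. It only illustrates the standard estimation and exploration step that balancing methods would modify; it is not the balanced estimator the paper proposes, and the class name and parameters are illustrative assumptions.

```python
import numpy as np

# Minimal LinUCB-style sketch (illustrative only): one ridge-regression
# estimate per arm, plus an upper-confidence exploration bonus.
class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha                              # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]   # per-arm X^T X + I
        self.b = [np.zeros(dim) for _ in range(n_arms)] # per-arm X^T r

    def select(self, context):
        # context: 1-D numpy array of features for the current round
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                           # ridge estimate of the arm's payoff
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(context @ theta + bonus)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Usage sketch: bandit = LinUCB(n_arms=5, dim=10)
#               arm = bandit.select(x); bandit.update(arm, x, r)
```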
3

Tong, Ruoyi. "A survey of the application and technical improvement of the multi-armed bandit." Applied and Computational Engineering 77, no. 1 (2024): 25–31. http://dx.doi.org/10.54254/2755-2721/77/20240631.

Full text
Abstract (summary):
In recent years, the multi-armed bandit (MAB) model has been widely used and has shown excellent performance. This article provides an overview of the applications and technical improvements of the multi-armed bandit machine problem. First, an overview of the multi-armed bandit problem is presented, including the explanation of a general modeling approach and several existing common algorithms, such as ε-greedy, ETC, UCB, and Thompson sampling. Then, the real-life applications of the multi-armed bandit model are explored, covering the fields of recommender systems, healthcare, and finance. Then
APA, Harvard, Vancouver, ISO and other styles
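As a quick reminder of the classic algorithms the survey lists (ε-greedy, ETC, UCB, Thompson sampling), here is a minimal ε-greedy sketch for a k-armed bandit. The function names and the default ε are illustrative assumptions, not taken from the paper.

```python
import random

# Minimal epsilon-greedy sketch for a k-armed bandit (illustrative only):
# explore a random arm with probability eps, otherwise exploit the arm
# with the best empirical mean reward so far.
def epsilon_greedy_select(means, eps=0.1):
    if random.random() < eps:
        return random.randrange(len(means))                    # explore
    return max(range(len(means)), key=lambda a: means[a])      # exploit

def epsilon_greedy_update(counts, means, arm, reward):
    # Incrementally update the empirical mean of the pulled arm.
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
```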
4

Yang, Luting, Jianyi Yang, and Shaolei Ren. "Contextual Bandits with Delayed Feedback and Semi-supervised Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (2021): 15943–44. http://dx.doi.org/10.1609/aaai.v35i18.17968.

Full text
Abstract (summary):
Contextual multi-armed bandit (MAB) is a classic online learning problem, where a learner/agent selects actions (i.e., arms) given contextual information and discovers optimal actions based on reward feedback. Applications of contextual bandit have been increasingly expanding, including advertisement, personalization, resource allocation in wireless networks, among others. Nonetheless, the reward feedback is delayed in many applications (e.g., a user may only provide service ratings after a period of time), creating challenges for contextual bandits. In this paper, we address delayed feedback
APA, Harvard, Vancouver, ISO and other styles
5

Sharaf, Amr, and Hal Daumé III. "Meta-Learning Effective Exploration Strategies for Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 9541–48. http://dx.doi.org/10.1609/aaai.v35i11.17149.

Full text
Abstract (summary):
In contextual bandits, an algorithm must choose actions given observed contexts, learning from a reward signal that is observed only for the action chosen. This leads to an exploration/exploitation trade-off: the algorithm must balance taking actions it already believes are good with taking new actions to potentially discover better choices. We develop a meta-learning algorithm, Mêlée, that learns an exploration policy based on simulated, synthetic contextual bandit tasks. Mêlée uses imitation learning against these simulations to train an exploration policy that can be applied to true con
APA, Harvard, Vancouver, ISO and other styles
6

Du, Yihan, Siwei Wang, and Longbo Huang. "A One-Size-Fits-All Solution to Conservative Bandit Problems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 7254–61. http://dx.doi.org/10.1609/aaai.v35i8.16891.

Full text
Abstract (summary):
In this paper, we study a family of conservative bandit problems (CBPs) with sample-path reward constraints, i.e., the learner's reward performance must be at least as well as a given baseline at any time. We propose a One-Size-Fits-All solution to CBPs and present its applications to three encompassed problems, i.e. conservative multi-armed bandits (CMAB), conservative linear bandits (CLB) and conservative contextual combinatorial bandits (CCCB). Different from previous works which consider high probability constraints on the expected reward, we focus on a sample-path constraint on the actual
APA, Harvard, Vancouver, ISO and other styles
7

Varatharajah, Yogatheesan, and Brent Berry. "A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials." Life 12, no. 8 (2022): 1277. http://dx.doi.org/10.3390/life12081277.

Full text
Abstract (summary):
Clinical trials are conducted to evaluate the efficacy of new treatments. Clinical trials involving multiple treatments utilize the randomization of treatment assignments to enable the evaluation of treatment efficacies in an unbiased manner. Such evaluation is performed in post hoc studies that usually use supervised-learning methods that rely on large amounts of data collected in a randomized fashion. That approach often proves to be suboptimal in that some participants may suffer and even die as a result of having not received the most appropriate treatments during the trial. Reinforcement-
APA, Harvard, Vancouver, ISO and other styles
8

Li, Jialian, Chao Du, and Jun Zhu. "A Bayesian Approach for Subset Selection in Contextual Bandits." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 8384–91. http://dx.doi.org/10.1609/aaai.v35i9.17019.

Full text
Abstract (summary):
Subset selection in Contextual Bandits (CB) is an important task in various applications such as advertisement recommendation. In CB, arms are attached with contexts and thus correlated in the context space. Proper exploration for subset selection in CB should carefully consider the contexts. Previous works mainly concentrate on the best one arm identification in linear bandit problems, where the expected rewards are linearly dependent on the contexts. However, these methods highly rely on linearity, and cannot be easily extended to more general cases. We propose a novel Bayesian approach for
APA, Harvard, Vancouver, ISO and other styles
9

Qu, Jiaming. "Survey of dynamic pricing based on Multi-Armed Bandit algorithms." Applied and Computational Engineering 37, no. 1 (2024): 160–65. http://dx.doi.org/10.54254/2755-2721/37/20230497.

Full text
Abstract (summary):
Dynamic pricing seeks to determine the most optimal selling price for a product or service, taking into account factors like limited supply and uncertain demand. This study aims to provide a comprehensive exploration of dynamic pricing using the multi-armed bandit problem framework in various contexts. The investigation highlights the prevalence of Thompson sampling in dynamic pricing scenarios with a Bayesian backdrop, where the seller possesses prior knowledge of demand functions. On the other hand, in non-Bayesian situations, the Upper Confidence Bound (UCB) algorithm family gains traction
APA, Harvard, Vancouver, ISO and other styles
10

Atsidakou, Alexia, Constantine Caramanis, Evangelia Gergatsouli, Orestis Papadigenopoulos, and Christos Tzamos. "Contextual Pandora’s Box." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 10944–52. http://dx.doi.org/10.1609/aaai.v38i10.28969.

Full text
Abstract (summary):
Pandora’s Box is a fundamental stochastic optimization problem, where the decision-maker must find a good alternative, while minimizing the search cost of exploring the value of each alternative. In the original formulation, it is assumed that accurate distributions are given for the values of all the alternatives, while recent work studies the online variant of Pandora’s Box where the distributions are originally unknown. In this work, we study Pandora’s Box in the online setting, while incorporating context. At each round, we are presented with a number of alternatives each having a context,
APA, Harvard, Vancouver, ISO and other styles

Theses on the topic "Bandit Contextuel"

1

Sakhi, Otmane. "Offline Contextual Bandit : Theory and Large Scale Applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG011.

Full text
Abstract (summary):
This thesis addresses the problem of learning from interactions using the offline contextual bandit framework. In particular, we focus on two related topics: (1) offline policy learning with performance certificates, and (2) fast and efficient policy learning for the large-scale recommendation problem. For (1), we first leverage results from the distributionally robust optimization framework to build asymptotic, variance-sensitive bounds that allow the evaluation of the performance
APA, Harvard, Vancouver, ISO and other styles
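Offline ("off-policy") learning and evaluation of the kind studied in this thesis typically starts from an inverse propensity scoring (IPS) estimate of a target policy's value computed from logged bandit feedback. The sketch below shows that generic estimator only, under the assumption that the logging propensities were recorded; it is not the variance-sensitive certificates developed in the thesis, and the function names are illustrative.

```python
import numpy as np

# Generic IPS sketch for offline contextual bandits (illustrative only):
# estimate the value of a target policy from logged tuples
# (context, action, reward, logging_propensity).
def ips_value(logged, target_policy):
    ratios = []
    for context, action, reward, propensity in logged:
        pi = target_policy(context, action)      # target prob. of the logged action
        ratios.append((pi / propensity) * reward)
    return float(np.mean(ratios))
```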
2

Huix, Tom. "Variational Inference : theory and large scale applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX071.

Full text
Abstract (summary):
This thesis develops Variational Inference methods for high-dimensional Bayesian learning. The Bayesian approach in machine learning makes it possible to handle the epistemic uncertainty of models and thus to better quantify their uncertainty, which is necessary in many machine learning applications. However, Bayesian inference is often not feasible because the posterior distribution of the model parameters is in general intractable. Variational Inference (VI) is an approach that circumvents this problem by approximating the
APA, Harvard, Vancouver, ISO and other styles
3

Bouneffouf, Djallel. "DRARS, A Dynamic Risk-Aware Recommender System." Phd thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-01026136.

Full text
Abstract (summary):
The immense quantity of information generated and managed every day by information systems and their users inevitably leads to the problem of information overload. In this context, traditional recommender systems provide relevant information to users. Nevertheless, with the recent spread of mobile devices (smartphones and tablets), we observe a progressive migration of users towards pervasive environments. The problem with traditional recommendation approaches is that they do not use
APA, Harvard, Vancouver, ISO and other styles
4

Chia, John. "Non-linear contextual bandits." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42191.

Full text
Abstract (summary):
The multi-armed bandit framework can be motivated by any problem where there is an abundance of choice and the utility of trying something new must be balanced with that of going with the status quo. This is a trade-off that is present in the everyday problem of where and what to eat: should I try a new restaurant or go to that Chinese place on the corner? In this work, a multi-armed bandit algorithm is presented which uses a non-parametric non-linear data model (a Gaussian process) to solve problems of this sort. The advantages of this method over existing work is highlighted through exper
APA, Harvard, Vancouver, ISO and other styles
5

Galichet, Nicolas. "Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112242/document.

Full text
Abstract (summary):
This thesis falls within the field of sequential decision making in unknown environments, and more specifically within the multi-armed bandit (MAB) framework defined by Robbins and Lai in the 1950s. Since the 2000s, this framework has been the subject of extensive theoretical and algorithmic research centred on the exploration/exploitation trade-off: exploitation consists in repeating as often as possible the choices that have proved best so far, while exploration consists in trying choices that have rarely been tried, in order to
APA, Harvard, Vancouver, ISO and other styles
6

Nicol, Olivier. "Data-driven evaluation of contextual bandit algorithms and applications to dynamic recommendation." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10211/document.

Full text
Abstract (summary):
This thesis work was carried out in the context of dynamic recommendation. Recommendation is the act of providing personalised content to the user of an application in order to improve its use, e.g. recommending a product on a merchant website or an article on a blog. Recommendation is considered dynamic when the content to recommend or the users' tastes evolve quickly, e.g. news recommendation. Many of the applications we are interested in generate huge amounts of data thanks to
APA, Harvard, Vancouver, ISO and other styles
7

May, Benedict C. "Bayesian sampling in contextual-bandit problems with extensions to unknown normal-form games." Thesis, University of Bristol, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627937.

Full text
Abstract (summary):
In sequential decision problems in unknown environments, decision makers often face dilemmas over whether to explore to discover more about the environment, or to exploit current knowledge. In this thesis, we address this exploration/exploitation dilemma in a general setting encompassing both standard and contextualised bandit problems, and also multi-agent (game-theoretic) problems. We consider an approach of Thompson (1933) which makes use of samples from the posterior distributions for the instantaneous value of each action. Our initial focus is on problems with a single decision maker acti
APA, Harvard, Vancouver, ISO and other styles
8

Ju, Weiyu. "Mobile Deep Neural Network Inference in Edge Computing with Resource Restrictions." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25038.

Full text
Abstract (summary):
Recent advances in deep neural networks (DNNs) have substantially improved the accuracy of intelligent applications. However, the pursuit of a higher accuracy has led to an increase in the complexity of DNNs, which inevitably increases the inference latency. For many time-sensitive mobile inferences, such a delay is intolerable and could be fatal in many real-world applications. To solve this problem, one effective scheme known as DNN partition is proposed, which significantly improves the inference latency by partitioning the DNN to a mobile device and an edge server to jointly process the in
APA, Harvard, Vancouver, ISO and other styles
9

Brégère, Margaux. "Stochastic bandit algorithms for demand side management Simulating Tariff Impact in Electrical Energy Consumption Profiles with Conditional Variational Autoencoders Online Hierarchical Forecasting for Power Consumption Data Target Tracking for Contextual Bandits : Application to Demand Side Management." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM022.

Full text
Abstract (summary):
Since electricity is difficult to store at large scale, the balance between production and consumption must be rigorously maintained. Managing demand in advance becomes more complex with the integration of intermittent renewable energies into the production mix. At the same time, the deployment of smart meters makes it possible to envisage dynamic control of electricity consumption. More concretely, sending signals - such as changes in the electricity price - would encourage users to modulate their consumption so that it adjusts as well as possible to
APA, Harvard, Vancouver, ISO and other styles
10

Wan, Hao. "Tutoring Students with Adaptive Strategies." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/36.

Full text
Abstract (summary):
Adaptive learning is a crucial part in intelligent tutoring systems. It provides students with appropriate tutoring interventions, based on students’ characteristics, status, and other related features, in order to optimize their learning outcomes. It is required to determine students’ knowledge level or learning progress, based on which it then uses proper techniques to choose the optimal interventions. In this dissertation work, I focus on these aspects related to the process in adaptive learning: student modeling, k-armed bandits, and contextual bandits. Student modeling. The main o
APA, Harvard, Vancouver, ISO and other styles

Books on the topic "Bandit Contextuel"

1

Pijnenburg, Huub, Jo Hermanns, Tom van Yperen, Giel Hutschemaekers, and Adri van Montfoort. Zorgen dat het werkt: Werkzame factoren in de zorg voor jeugd. 2nd ed. Uitgeverij SWP, 2011. http://dx.doi.org/10.36254/978-90-8850-131-9.

Full text
Abstract (summary):
Evidence-based practice in youth care? Fine! But what do we do with questions such as: - In whose hands do interventions work; what characterises effective professionals? - What is the influence of the working alliance between professionals and clients? - Why do interventions work, and under which conditions? - How can we draw on supporting factors in the living environment of young people and caregivers? - What does all of this mean for the way we should organise care and train professionals? Five contributions make this book valuable for youth-care professionals and students. Five authors who are at home in
APA, Harvard, Vancouver, ISO and other styles

Book chapters on the topic "Bandit Contextuel"

1

Nguyen, Le Minh Duc, Fuhua Lin, and Maiga Chang. "Generating Learning Sequences Using Contextual Bandit Algorithms." In Generative Intelligence and Intelligent Tutoring Systems. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63028-6_26.

Full text
APA, Harvard, Vancouver, ISO and other styles
2

Tavakol, Maryam, Sebastian Mair, and Katharina Morik. "HyperUCB: Hyperparameter Optimization Using Contextual Bandits." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_4.

Full text
APA, Harvard, Vancouver, ISO and other styles
3

Ma, Yuzhe, Kwang-Sung Jun, Lihong Li, and Xiaojin Zhu. "Data Poisoning Attacks in Contextual Bandits." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01554-1_11.

Full text
APA, Harvard, Vancouver, ISO and other styles
4

Labille, Kevin, Wen Huang, and Xintao Wu. "Transferable Contextual Bandits with Prior Observations." In Advances in Knowledge Discovery and Data Mining. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75765-6_32.

Full text
APA, Harvard, Vancouver, ISO and other styles
5

Shirey, Heather. "19. Art in the Streets." In Play in a Covid Frame. Open Book Publishers, 2023. http://dx.doi.org/10.11647/obp.0326.19.

Full text
Abstract (summary):
Drawing on photographic documentation of street art, contextual analysis and artist interviews, this essay examines the work of two prolific street artists: The Velvet Bandit, a wheatpaste artist in the Bay Area (California, USA) and SudaLove, a muralist working in Khartoum (Sudan). Both The Velvet Bandit and SudaLove create artistic interventions in the street as a means of engaging with Covid-19 in a manner that is light and playful but also serious and political. As is typical of street art, their work is highly accessible, using simple visual language. At the same time, each piece requires deeper contextual knowledge to understand the underlying political and social significance.
APA, Harvard, Vancouver, ISO and other styles
6

Liu, Weiwen, Shuai Li, and Shengyu Zhang. "Contextual Dependent Click Bandit Algorithm for Web Recommendation." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94776-1_4.

Full text
APA, Harvard, Vancouver, ISO and other styles
7

Bouneffouf, Djallel, Romain Laroche, Tanguy Urvoy, Raphael Feraud, and Robin Allesiardo. "Contextual Bandit for Active Learning: Active Thompson Sampling." In Neural Information Processing. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12637-1_51.

Full text
APA, Harvard, Vancouver, ISO and other styles
8

Bouneffouf, Djallel, Amel Bouzeghoub, and Alda Lopes Gançarski. "Contextual Bandits for Context-Based Information Retrieval." In Neural Information Processing. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42042-9_5.

Full text
APA, Harvard, Vancouver, ISO and other styles
9

Delande, David, Patricia Stolf, Raphaël Feraud, Jean-Marc Pierson, and André Bottaro. "Horizontal Scaling in Cloud Using Contextual Bandits." In Euro-Par 2021: Parallel Processing. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85665-6_18.

Full text
APA, Harvard, Vancouver, ISO and other styles
10

Gampa, Phanideep, and Sumio Fujita. "BanditRank: Learning to Rank Using Contextual Bandits." In Advances in Knowledge Discovery and Data Mining. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75768-7_21.

Full text
APA, Harvard, Vancouver, ISO and other styles

Conference proceedings on the topic "Bandit Contextuel"

1

Chen, Zhaoxin. "Enhancing Recommendation Systems Through Contextual Bandit Models." In International Conference on Engineering Management, Information Technology and Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012960800004508.

Full text
APA, Harvard, Vancouver, ISO and other styles
2

Liu, Fangzhou, Zehua Pei, Ziyang Yu, et al. "CBTune: Contextual Bandit Tuning for Logic Synthesis." In 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2024. http://dx.doi.org/10.23919/date58400.2024.10546766.

Full text
APA, Harvard, Vancouver, ISO and other styles
3

Zhang, Yufan, Honglin Wen, and Qiuwei Wu. "A Contextual Bandit Approach for Value-oriented Prediction Interval Forecasting." In 2024 IEEE Power & Energy Society General Meeting (PESGM). IEEE, 2024. http://dx.doi.org/10.1109/pesgm51994.2024.10688595.

Full text
APA, Harvard, Vancouver, ISO and other styles
4

Li, Haowei, Mufeng Wang, Jiarui Zhang, Tianyu Shi, and Alaa Khamis. "A Contextual Multi-armed Bandit Approach to Personalized Trip Itinerary Planning." In 2024 IEEE International Conference on Smart Mobility (SM). IEEE, 2024. http://dx.doi.org/10.1109/sm63044.2024.10733530.

Full text
APA, Harvard, Vancouver, ISO and other styles
5

Bouneffouf, Djallel, Irina Rish, Guillermo Cecchi, and Raphaël Féraud. "Context Attentive Bandits: Contextual Bandit with Restricted Context." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/203.

Full text
Abstract (summary):
We consider a novel formulation of the multi-armed bandit model, which we call the contextual bandit with restricted context, where only a limited number of features can be accessed by the learner at every iteration. This novel formulation is motivated by different online problems arising in clinical trials, recommender systems and attention modeling. Herein, we adapt the standard multi-armed bandit algorithm known as Thompson Sampling to take advantage of our restricted context setting, and propose two novel algorithms, called the Thompson Sampling with Restricted Context (TSRC) and the Window
APA, Harvard, Vancouver, ISO and other styles
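The paper adapts Thompson Sampling to a restricted-context setting; for orientation, here is a minimal Bernoulli Thompson Sampling sketch of the base algorithm only, with Beta(1, 1) priors. It does not implement the TSRC feature-restriction mechanism, and the array-based interface and function names are illustrative assumptions.

```python
import numpy as np

# Plain Bernoulli Thompson Sampling sketch (the base algorithm the paper
# adapts), with Beta(1, 1) priors. Illustrative only; no restricted context.
def thompson_select(successes, failures, rng):
    samples = rng.beta(successes + 1, failures + 1)  # one posterior draw per arm
    return int(np.argmax(samples))

def thompson_update(successes, failures, arm, reward):
    if reward > 0:
        successes[arm] += 1
    else:
        failures[arm] += 1

# Usage sketch: successes = np.zeros(n_arms); failures = np.zeros(n_arms)
#               rng = np.random.default_rng(0)
#               arm = thompson_select(successes, failures, rng)
```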
6

Pase, Francesco, Deniz Gunduz, and Michele Zorzi. "Remote Contextual Bandits." In 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022. http://dx.doi.org/10.1109/isit50566.2022.9834399.

Full text
APA, Harvard, Vancouver, ISO and other styles
7

Lin, Baihan, Djallel Bouneffouf, Guillermo A. Cecchi, and Irina Rish. "Contextual Bandit with Adaptive Feature Extraction." In 2018 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2018. http://dx.doi.org/10.1109/icdmw.2018.00136.

Full text
APA, Harvard, Vancouver, ISO and other styles
8

Peng, Yi, Miao Xie, Jiahao Liu, et al. "A Practical Semi-Parametric Contextual Bandit." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/450.

Full text
Abstract (summary):
Classic multi-armed bandit algorithms are inefficient for a large number of arms. On the other hand, contextual bandit algorithms are more efficient, but they suffer from a large regret due to the bias of reward estimation with finite dimensional features. Although recent studies proposed semi-parametric bandits to overcome these defects, they assume arms' features are constant over time. However, this assumption rarely holds in practice, since real-world problems often involve underlying processes that are dynamically evolving over time especially for the special promotions like Singles' Day
APA, Harvard, Vancouver, ISO and other styles
9

Zhang, Xiaoying, Hong Xie, Hang Li, and John C.S. Lui. "Conversational Contextual Bandit: Algorithm and Application." In WWW '20: The Web Conference 2020. ACM, 2020. http://dx.doi.org/10.1145/3366423.3380148.

Full text
APA, Harvard, Vancouver, ISO and other styles
10

Ban, Yikun, Jingrui He, and Curtiss B. Cook. "Multi-facet Contextual Bandits." In KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 2021. http://dx.doi.org/10.1145/3447548.3467299.

Full text
APA, Harvard, Vancouver, ISO and other styles

Reports of organizations on the topic "Bandit Contextuel"

1

Yun, Seyoung, Jun Hyun Nam, Sangwoo Mo, and Jinwoo Shin. Contextual Multi-armed Bandits under Feature Uncertainty. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1345927.

Full text
APA, Harvard, Vancouver, ISO and other styles