Dissertations on the topic "Bandit learning"
Cite sources in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations for your research on the topic "Bandit learning".
Browse dissertations across a wide variety of disciplines and organize your bibliography correctly.
Liu, Fang. "Efficient Online Learning with Bandit Feedback." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587680990430268.
Klein, Nicolas. "Learning and Experimentation in Strategic Bandit Problems." Diss., lmu, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-122728.
Talebi, Mazraeh Shahi Mohammad Sadegh. "Online Combinatorial Optimization under Bandit Feedback." Licentiate thesis, KTH, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-181321.
Lomax, S. E. "Cost-sensitive decision tree learning using a multi-armed bandit framework." Thesis, University of Salford, 2013. http://usir.salford.ac.uk/29308/.
Повний текст джерелаJedor, Matthieu. "Bandit algorithms for recommender system optimization." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM027.
In this PhD thesis, we study the optimization of recommender systems with the objective of providing users with more refined item suggestions. The task is modeled using the multi-armed bandit framework. In a first part, we look at two problems that commonly occur in recommendation systems: the large number of items to handle and the management of sponsored contents. In a second part, we investigate the empirical performance of bandit algorithms and, in particular, how to tune conventional algorithms to improve results in the stationary and non-stationary environments that arise in practice. This leads us to analyze, both theoretically and empirically, the greedy algorithm, which in some cases outperforms the state of the art
Louëdec, Jonathan. "Stratégies de bandit pour les systèmes de recommandation." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30257/document.
Current recommender systems need to recommend items that are relevant to users (exploitation), but they must also be able to continuously obtain new information about items and users (exploration). This is the exploration/exploitation dilemma. Such an environment falls within what is called "reinforcement learning". In the statistical literature, bandit strategies are known to provide solutions to this dilemma. The contributions of this multidisciplinary thesis include the adaptation of these strategies to deal with particular problems of recommendation systems, such as recommending several items simultaneously, taking into account the aging of an item's popularity, and recommending in real time
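The exploration/exploitation dilemma described above can be illustrated with a minimal ε-greedy sketch (an illustration only, not code from the thesis; the click/show counters and the `epsilon` parameter are assumptions of the example):

```python
import random

def epsilon_greedy(clicks, shows, epsilon, rng):
    """Recommend a random item with probability epsilon (exploration),
    otherwise the item with the highest empirical click rate (exploitation)."""
    if rng.random() < epsilon or 0 in shows:
        # Explore: also forced while some item has never been shown.
        return rng.randrange(len(shows))
    rates = [c / s for c, s in zip(clicks, shows)]
    return rates.index(max(rates))
```

For example, with `epsilon=0` the rule is purely greedy and always returns the item with the best observed click-through rate.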
Nakhe, Paresh [Verfasser], Martin [Gutachter] Hoefer, and Georg [Gutachter] Schnitger. "On bandit learning and pricing in markets / Paresh Nakhe ; Gutachter: Martin Hoefer, Georg Schnitger." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2018. http://d-nb.info/1167856740/34.
Повний текст джерелаBesson, Lilian. "Multi-Players Bandit Algorithms for Internet of Things Networks." Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0005.
In this PhD thesis, we study wireless networks and reconfigurable end-devices that can access Cognitive Radio networks, in unlicensed bands and without central control. We focus on Internet of Things (IoT) networks, with the objective of extending the devices' battery life by equipping them with low-cost but efficient machine learning algorithms, in order to let them automatically improve the efficiency of their wireless communications. We propose different models of IoT networks, and we show empirically, on both numerical simulations and real-world validation, the possible gain of our methods, which use Reinforcement Learning. The different network access problems are modeled as Multi-Armed Bandits (MAB), but we found that analyzing the realistic models was intractable, because proving the convergence of many IoT devices playing a collaborative game, without communication or coordination, is hard when they all follow random activation patterns. The rest of this manuscript thus studies two restricted models: first multi-player bandits in stationary problems, then non-stationary single-player bandits. We also detail another contribution, SMPyBandits, our open-source Python library for numerical MAB simulations, which covers all the studied models and more
Racey, Deborah Elaine. "EFFECTS OF RESPONSE FREQUENCY CONSTRAINTS ON LEARNING IN A NON-STATIONARY MULTI-ARMED BANDIT TASK." OpenSIUC, 2009. https://opensiuc.lib.siu.edu/dissertations/86.
Повний текст джерелаHren, Jean-Francois. "Planification Optimiste pour Systèmes Déterministes." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00845898.
Повний текст джерелаAchab, Mastane. "Ranking and risk-aware reinforcement learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT020.
This thesis is divided into two parts: the first part is on ranking and the second on risk-aware reinforcement learning. While binary classification is the flagship application of empirical risk minimization (ERM), the main paradigm of machine learning, more challenging problems such as bipartite ranking can also be expressed through that setup. In bipartite ranking, the goal is to order, by means of scoring methods, all the elements of some feature space based on a training dataset composed of feature vectors with their binary labels. This thesis extends this setting to the continuous ranking problem, a variant where the labels take continuous values instead of being simply binary. The analysis of ranking data, initiated in the 18th century in the context of elections, has led to another ranking problem using ERM, namely ranking aggregation and more precisely Kemeny's consensus approach. From a training dataset made of ranking data, such as permutations or pairwise comparisons, the goal is to find the single "median permutation" that best corresponds to a consensus order. We present a less drastic dimensionality reduction approach where a distribution on rankings is approximated by a simpler distribution, which is not necessarily reduced to a Dirac mass as in ranking aggregation. For that purpose, we rely on mathematical tools from the theory of optimal transport, such as Wasserstein metrics. The second part of this thesis focuses on risk-aware versions of the stochastic multi-armed bandit problem and of reinforcement learning (RL), where an agent interacts with a dynamic environment by taking actions and receiving rewards, the objective being to maximize the total payoff. In particular, a novel atomic distributional RL approach is provided: the distribution of the total payoff is approximated by particles that correspond to trimmed means
Degenne, Rémy. "Impact of structure on the design and analysis of bandit algorithms." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7179.
In this thesis, we study sequential learning problems called stochastic multi-armed bandits. First, a new bandit algorithm is presented. The analysis of that algorithm uses confidence intervals on the means of the arms' reward distributions, as most bandit proofs do. In a parametric setting, we derive concentration inequalities which quantify the deviation between the mean parameter of a distribution and its empirical estimate in order to obtain confidence intervals. These inequalities are presented as bounds on the Kullback-Leibler divergence. Three extensions of the stochastic multi-armed bandit problem are then studied. First, we study the so-called combinatorial semi-bandit problem, in which an algorithm chooses a set of arms and the reward of each of these arms is observed. The minimal attainable regret then depends on the correlation between the arm distributions. We then consider a setting in which the observation mechanism changes. One source of difficulty of the bandit problem is the scarcity of information: only the arm pulled is observed. We show how to efficiently use eventual supplementary free information (which does not influence the regret). Finally, a new family of algorithms is introduced to obtain both regret minimization and best arm identification guarantees. Each algorithm of the family realizes a trade-off between regret and the time needed to identify the best arm. In a second part, we study the so-called pure exploration problem, in which an algorithm is evaluated not on its regret but on the probability that it returns a wrong answer to a question on the arm distributions. We determine the complexity of such problems and design algorithms with performance close to that complexity
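The Kullback-Leibler confidence bounds mentioned in this abstract can be sketched, for Bernoulli rewards, in the style of the kl-UCB family (a hedged illustration of the general idea, not the thesis's own code; the fixed bisection iteration count is an assumption):

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), with clipping."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_bound(mean, pulls, t, iters=50):
    """Upper confidence bound: the largest q >= mean such that
    pulls * KL(mean, q) <= log(t), found by bisection on [mean, 1]."""
    level = math.log(max(t, 1)) / pulls
    lo, hi = mean, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl_bernoulli(mean, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo
```

As expected, the bound shrinks toward the empirical mean as the number of pulls grows, since the deviation allowance `log(t)/pulls` tightens.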
Banda, Brandon Mathewe. "General Game Playing as a Bandit-Arms Problem: A Multiagent Monte-Carlo Solution Exploiting Nash Equilibria." Oberlin College Honors Theses / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1559142912626158.
Bouneffouf, Djallel. "DRARS, A Dynamic Risk-Aware Recommender System." Phd thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-01026136.
Повний текст джерелаClement, Benjamin. "Adaptive Personalization of Pedagogical Sequences using Machine Learning." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0373/document.
Can computers teach people? To answer this question, Intelligent Tutoring Systems are a rapidly expanding field of research among the Information and Communication Technologies for the Education community. This subject brings together different issues and researchers from various fields, such as psychology, didactics, neurosciences and, particularly, machine learning. Digital technologies are becoming more and more a part of everyday life with the development of tablets and smartphones. It seems natural to consider using these technologies for educational purposes. This raises several questions, such as how to make user interfaces accessible to everyone, how to make educational content motivating and how to customize it to individual learners. In this PhD, we developed methods, grouped in the aptly-named HMABITS framework, to adapt pedagogical activity sequences based on learners' performances and preferences to maximize their learning speed and motivation. These methods use computational models of intrinsic motivation and curiosity-driven learning to identify the activities providing the highest learning progress and use Multi-Armed Bandit algorithms to manage the exploration/exploitation trade-off inside the activity space. Activities of optimal interest are thus privileged with the target to keep the learner in a state of Flow or in his or her Zone of Proximal Development. Moreover, some of our methods allow the student to make choices about contextual features or pedagogical content, which is a vector of self-determination and motivation. To evaluate the effectiveness and relevance of our algorithms, we carried out several types of experiments. We first evaluated these methods with numerical simulations before applying them to real teaching conditions. To do this, we developed multiple models of learners, since a single model never exactly replicates the behavior of a real learner.
The simulation results show that the HMABITS framework achieves comparable, and in some cases better, learning results than an optimal solution or an expert sequence. We then developed our own pedagogical scenario and serious game to test our algorithms in classrooms with real students. We developed a game on the theme of number decomposition, through the manipulation of money, for children aged 6 to 8. We then worked with educational institutions and several schools in the Bordeaux school district. Overall, about 1000 students participated in trial lessons using the tablet application. The results of the real-world studies show that the HMABITS framework allows students to do more diverse and difficult activities, to achieve better learning, and to be more motivated than with an expert sequence. The results show that this effect is even greater when the students have the possibility to make choices
Kaufmann, Emilie. "Analyse de stratégies bayésiennes et fréquentistes pour l'allocation séquentielle de ressources." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0056/document.
In this thesis, we study strategies for sequential resource allocation under the so-called stochastic multi-armed bandit model. In this model, when an agent draws an arm, he receives as a reward a realization from a probability distribution associated with the arm. In this document, we consider two different bandit problems. In the reward maximization objective, the agent aims at maximizing the sum of rewards obtained during his interaction with the bandit, whereas in the best arm identification objective, his goal is to find the set of m best arms (i.e. arms with highest mean reward) without suffering a loss when drawing 'bad' arms. For these two objectives, we propose strategies, also called bandit algorithms, that are optimal (or close to optimal) in a sense made precise below. Maximizing the sum of rewards is equivalent to minimizing a quantity called regret. Thanks to an asymptotic lower bound on the regret of any uniformly efficient algorithm given by Lai and Robbins, one can define asymptotically optimal algorithms as algorithms whose regret reaches this lower bound. In this thesis, we propose, for two Bayesian algorithms, Bayes-UCB and Thompson Sampling, a finite-time analysis, that is, a non-asymptotic upper bound on their regret, in the particular case of bandits with binary rewards. This upper bound allows us to establish the asymptotic optimality of both algorithms. In the best arm identification framework, a possible goal is to determine the number of samples of the arms needed to identify, with high probability, the set of m best arms. We define a notion of complexity for best arm identification in two different settings considered in the literature: the fixed-budget and fixed-confidence settings. We provide new lower bounds on these complexity terms and we analyse new algorithms, some of which reach the lower bound in particular cases of two-armed bandit models and are therefore optimal
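Thompson Sampling for bandits with binary rewards, as analyzed in this thesis, admits a short sketch (illustrative only, not the thesis's code; the Beta(1,1) priors, the simulated Bernoulli arms, and the fixed seed are assumptions of the example):

```python
import random

def thompson_sampling(arm_means, horizon, seed=1):
    """Thompson Sampling for Bernoulli bandits: keep a Beta posterior per arm,
    sample one value from each posterior, pull the argmax, update with the reward."""
    rng = random.Random(seed)
    n = len(arm_means)
    successes = [0] * n
    failures = [0] * n
    total_reward = 0
    for _ in range(horizon):
        # One posterior sample per arm: Beta(1 + successes, 1 + failures).
        samples = [rng.betavariate(1 + successes[i], 1 + failures[i])
                   for i in range(n)]
        i = samples.index(max(samples))
        reward = 1 if rng.random() < arm_means[i] else 0  # simulated Bernoulli arm
        successes[i] += reward
        failures[i] += 1 - reward
        total_reward += reward
    return total_reward, successes, failures
```

Over a long enough horizon, the posterior of a clearly suboptimal arm concentrates below that of the best arm, so pulls concentrate on the best arm, which is the behavior the regret bounds quantify.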
Audibert, Jean-Yves. "PAC-Bayesian aggregation and multi-armed bandits." Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00843972.
Jouini, Wassim. "Contribution to learning and decision making under uncertainty for Cognitive Radio." Thesis, Supélec, 2012. http://www.theses.fr/2012SUPL0010/document.
During the last century, most of the meaningful frequency bands were licensed to emerging wireless applications. Because of this static model of frequency allocation, the growing number of spectrum-demanding services led to spectrum scarcity. Recently, however, series of measurements on spectrum utilization showed that the different frequency bands were underutilized (sometimes even unoccupied), and thus that the scarcity of the spectrum resource is virtual and only due to the static allocation of the different bands to specific wireless services. Moreover, the underutilization of the spectrum resource varies on different scales in time and space, offering many opportunities for an unlicensed user or network to access the spectrum. Cognitive Radio (CR) and Opportunistic Spectrum Access (OSA) were introduced as possible solutions to alleviate the spectrum scarcity issue. In this dissertation, we aim at enabling CR equipment to autonomously exploit communication opportunities found in its vicinity. For that purpose, we suggest decision-making mechanisms designed and/or adapted to answer CR-related problems in general and, more specifically, OSA-related scenarios. Thus, we argue that OSA scenarios can be modeled as Multi-Armed Bandit (MAB) problems. As a matter of fact, within OSA contexts, CR equipment is assumed to have no prior knowledge of its environment. Acquiring the necessary information relies on a sequential interaction between the CR equipment and its environment. Finally, the CR equipment is modeled as a cognitive agent whose purpose is to learn while providing an improving service to its user. Thus, we first analyze the performance of the UCB1 algorithm when dealing with OSA problems with imperfect sensing. More specifically, we show that UCB1 can efficiently cope with sensing errors. We prove its convergence to the optimal channel and quantify its loss of performance compared to the case with perfect sensing.
Secondly, we combine the UCB1 algorithm with collaboration and coordination mechanisms to model a secondary network (i.e. several SUs). We show that within this complex scenario, a coordinated learning mechanism can lead to efficient secondary networks. These scenarios assume that an SU can efficiently detect incumbent users' activity while having no prior knowledge of their characteristics. Usually, energy detection is suggested as a possible approach to handle such a task. Unfortunately, energy detection is known to perform poorly when dealing with uncertainty. Consequently, we revisit in this Ph.D. the problem of energy detection limits under uncertainty. We present new results on its performance as well as its limits when the noise level is uncertain and the uncertainty is modeled by a log-normal distribution (as suggested by Alexander Sonnenschein and Philip M. Fishman in 1992). Within OSA contexts, we address a final problem where a sensor aims at quantifying the quality of a channel in fading environments. In such contexts, UCB1 algorithms seem to fail. Consequently, we designed a new algorithm called Multiplicative UCB (MUCB) and proved its convergence. Moreover, we prove that MUCB algorithms are order optimal (i.e., the order of their learning rate is optimal). This last work provides a contribution that goes beyond CR and OSA. As a matter of fact, MUCB algorithms are introduced and solved within a general MAB framework
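The UCB1 algorithm analyzed in this dissertation can be sketched in a few lines (a generic illustration of the standard UCB1 index, not the thesis's code; modeling channels as simulated Bernoulli arms is an assumption of the example):

```python
import math
import random

def ucb1_index(mean, pulls, t):
    """UCB1 index: empirical mean plus exploration bonus sqrt(2 ln t / pulls)."""
    return mean + math.sqrt(2 * math.log(t) / pulls)

def ucb1(arm_means, horizon, rng):
    """Run UCB1 on simulated Bernoulli arms; return the pull counts per arm."""
    n = len(arm_means)
    pulls = [0] * n
    sums = [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            i = t - 1  # initialization: pull each arm once
        else:
            i = max(range(n),
                    key=lambda a: ucb1_index(sums[a] / pulls[a], pulls[a], t))
        reward = 1.0 if rng.random() < arm_means[i] else 0.0
        pulls[i] += 1
        sums[i] += reward
    return pulls
```

The exploration bonus shrinks as an arm accumulates pulls, so suboptimal arms are sampled only logarithmically often, which matches the logarithmic regret guarantees discussed in the abstract.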
Fruit, Ronan. "Exploration-exploitation dilemma in reinforcement learning under various form of prior knowledge." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I086.
In combination with Deep Neural Networks (DNNs), several Reinforcement Learning (RL) algorithms such as Q-learning or Policy Gradient are now able to achieve super-human performances on most Atari games as well as the game of Go. Despite these outstanding and promising achievements, such Deep Reinforcement Learning (DRL) algorithms require millions of samples to perform well, thus limiting their deployment to applications where data acquisition is costly. The lack of sample efficiency of DRL can partly be attributed to the use of DNNs, which are known to be data-intensive in the training phase. But more importantly, it can be attributed to the type of Reinforcement Learning algorithm used, which performs only a very inefficient, undirected exploration of the environment. For instance, Q-learning and Policy Gradient rely on randomization for exploration. In most cases, this strategy turns out to be very ineffective at properly balancing the exploration needed to discover unknown and potentially highly rewarding regions of the environment with the exploitation of rewarding regions already identified as such. Other RL approaches with theoretical guarantees on the exploration-exploitation trade-off have been investigated. It is sometimes possible to formally prove that their performance almost matches the theoretical optimum. This line of research is inspired by the Multi-Armed Bandit literature, with many algorithms relying on the same underlying principle, often referred to as "optimism in the face of uncertainty". Even if a significant effort has been made towards understanding the exploration-exploitation dilemma in general, many questions still remain open. In this thesis, we generalize existing work on exploration-exploitation to different contexts with different amounts of prior knowledge on the learning problem.
We introduce several algorithmic improvements to current state-of-the-art approaches and derive a new theoretical analysis which allows us to answer several open questions of the literature. We then relax the (very common although not very realistic) assumption that a path between any two distinct regions of the environment should always exist. Relaxing this assumption highlights the impact of prior knowledge on the intrinsic limitations of the exploration-exploitation dilemma. Finally, we show how some prior knowledge such as the range of the value function or a set of macro-actions can be efficiently exploited to speed-up learning. In this thesis, we always strive to take the algorithmic complexity of the proposed algorithms into account. Although all these algorithms are somehow computationally "efficient", they all require a planning phase and therefore suffer from the well-known "curse of dimensionality" which limits their applicability to real-world problems. Nevertheless, the main focus of this work is to derive general principles that may be combined with more heuristic approaches to help overcome current DRL flaws
Barkino, Iliam. "Summary Statistic Selection with Reinforcement Learning." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-390838.
Brégère, Margaux. "Stochastic bandit algorithms for demand side management Simulating Tariff Impact in Electrical Energy Consumption Profiles with Conditional Variational Autoencoders Online Hierarchical Forecasting for Power Consumption Data Target Tracking for Contextual Bandits : Application to Demand Side Management." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM022.
As electricity is hard to store, the balance between production and consumption must be strictly maintained. With the integration of intermittent renewable energies into the production mix, managing this balance becomes complex. At the same time, the deployment of smart meters suggests demand response. More precisely, sending signals, such as changes in the price of electricity, would encourage users to modulate their consumption according to the production of electricity. The algorithms used to choose these signals have to learn consumer reactions and, at the same time, to optimize them (exploration-exploitation trade-off). Our approach is based on bandit theory and formalizes this sequential learning problem. We propose a first algorithm to control the electrical demand of a homogeneous population of consumers and prove a T^(2/3) upper bound on its regret. Experiments on a real data set in which price incentives were offered illustrate these theoretical results. As a "full information" dataset is required to test bandit algorithms, a consumption data generator based on variational autoencoders is built. In order to drop the assumption of population homogeneity, we propose an approach to cluster households according to their consumption profiles. These different works are finally combined to propose and test a bandit algorithm for personalized demand side management
Modi, Navikkumar. "Machine Learning and Statistical Decision Making for Green Radio." Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017SUPL0002/document.
Future cellular network technologies are targeted at delivering self-organizable and ultra-high-capacity networks while reducing their energy consumption. This thesis studies intelligent spectrum and topology management through cognitive radio techniques to improve the capacity density and Quality of Service (QoS) as well as to reduce the cooperation overhead and energy consumption. It investigates how reinforcement learning can be used to improve the performance of a cognitive radio system. In this dissertation, we deal with the problem of opportunistic spectrum access in infrastructureless cognitive networks. We assume that there is no information exchange between users, and that they have no knowledge of channel statistics or of other users' actions. This particular problem is modeled as a multi-user restless Markov multi-armed bandit framework, in which multiple users collect a priori unknown rewards by selecting channels. The main contribution of the dissertation is to propose a learning policy for distributed users that takes into account not only the availability criterion of a band but also a quality metric linked to the interference power from the neighboring cells experienced on the sensed band. We also prove that this policy, named distributed restless QoS-UCB (RQoS-UCB), achieves at most logarithmic-order regret. Moreover, numerical studies show that the performance of the cognitive radio system can be significantly enhanced by utilizing the proposed learning policies, since the cognitive devices are able to identify the appropriate resources more efficiently. This dissertation also introduces reinforcement learning and transfer learning frameworks to improve the energy efficiency (EE) of heterogeneous cellular networks.
Specifically, we formulate and solve an energy efficiency maximization problem pertaining to dynamic base station (BS) switching operation, which is identified as a combinatorial learning problem, within the restless Markov multi-armed bandit framework. Furthermore, a dynamic topology management scheme using the previously defined algorithm, RQoS-UCB, is introduced to intelligently control the working modes of BSs, based on traffic load and capacity in multiple cells. Moreover, to cope with the initial reward loss and to speed up the learning process, a transfer RQoS-UCB policy, which benefits from the knowledge observed in historical periods, is proposed and provably converges. The proposed dynamic BS switching operation is then demonstrated to reduce the number of activated BSs while maintaining an adequate QoS. Extensive numerical simulations demonstrate that transfer learning significantly reduces the QoS fluctuation during traffic variation, contributes to a performance jump-start, and presents significant EE improvement under various practical traffic load profiles. Finally, a proof-of-concept is developed to verify the performance of the proposed learning policies on a real radio environment and a real measurement database of the HF band. Results show that the proposed multi-armed bandit learning policies using dual-criterion (e.g. availability and quality) optimization for opportunistic spectrum access are not only superior in terms of spectrum utilization but also energy efficient
Gutowski, Nicolas. "Recommandation contextuelle de services : application à la recommandation d'évènements culturels dans la ville intelligente." Thesis, Angers, 2019. http://www.theses.fr/2019ANGE0030.
Nowadays, Multi-Armed Bandit algorithms for context-aware recommendation systems are extensively studied. In order to meet the challenges underlying this field of research, our work and contributions have been organised according to three research directions: 1) recommendation systems; 2) Multi-Armed Bandit (MAB) and Contextual Multi-Armed Bandit (CMAB) algorithms; 3) context. The first part of our contributions focuses on MAB and CMAB algorithms for recommendation. It particularly addresses the diversification of recommendations for improving individual accuracy. The second part focuses on context acquisition, on context reasoning for cultural event recommendation systems for Smart Cities, and on dynamic context enrichment for CMAB algorithms
Allmendinger, Richard. "Tuning evolutionary search for closed-loop optimization." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/tuning-evolutionary-search-for-closedloop-optimization(d54e63e2-7927-42aa-b974-c41e717298cb).html.
Cayuela Rafols, Marc. "Algorithmic Study on Prediction with Expert Advice : Study of 3 novel paradigms with Grouped Experts." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254344.
The main work of this thesis has been a thorough study of the novel Prediction with Partially Monitored Grouped Expert Advice and Side Information paradigm. It is newly proposed in this thesis and extends the widely studied Prediction with Expert Advice paradigm. The extension is based on two assumptions and one restriction that modify the original problem. The first assumption, Grouped, assumes that the experts are embedded in groups. The second assumption, Side Information, introduces additional information that can be used to relate predictions to groups over time. Finally, the restriction, Partially Monitored, means that the groups' predictions are only known for one group at a time. The study of this paradigm includes the design of a complete prediction algorithm, the proof of a theoretical bound on its worst-case cumulative regret, and an experimental evaluation of the algorithm (demonstrating the existence of cases where this paradigm outperforms Prediction with Expert Advice). Moreover, since the development of the algorithm is constructive, it also allows two further prediction algorithms to be easily built for the Prediction with Grouped Expert Advice and Prediction with Grouped Expert Advice and Side Information paradigms. This thesis therefore presents three novel prediction algorithms with corresponding regret bounds, and a comparative experimental evaluation that includes the original Prediction with Expert Advice paradigm.
Maillard, Odalric-Ambrym. "APPRENTISSAGE SÉQUENTIEL : Bandits, Statistique et Renforcement." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00845410.
Collet, Timothé. "Méthodes optimistes d'apprentissage actif pour la classification." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0084/document.
A classification problem makes use of a training set consisting of data labeled by an oracle. The larger the training set, the better the performance. However, requesting the oracle may be costly. The goal of Active Learning is thus to minimize the number of requests to the oracle while achieving the best performance. To do so, the data presented to the oracle must be carefully selected among a large number of unlabeled instances acquired at no cost. However, the true profitability of labeling a particular instance may not be known perfectly. It can therefore be estimated, along with a measure of uncertainty. To increase the precision of this estimate, we need to label more data. Thus, there is a dilemma between labeling data in order to increase the performance of the classifier and labeling data to better know how to select data. This dilemma is well studied in the context of finite-budget optimization under the name of the exploration-versus-exploitation dilemma. The most famous solutions make use of the principle of Optimism in the Face of Uncertainty. In this thesis, we show that it is possible to adapt this principle to the active learning problem for classification. Several algorithms have been developed for classifiers of increasing complexity, each of them using the principle of Optimism in the Face of Uncertainty, and their performance has been empirically evaluated
Magureanu, Stefan. "Structured Stochastic Bandits." Licentiate thesis, KTH, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-182816.
Guillou, Frédéric. "On recommendation systems in a sequential context." Thesis, Lille 3, 2016. http://www.theses.fr/2016LIL30041/document.
This thesis is dedicated to the study of recommendation systems in a sequential setting, where user feedback on items arrives in the system one piece after another. After each piece of feedback, the system has to integrate it and try to improve future recommendations. Many techniques and evaluation methods have already been proposed for the recommendation problem; despite that, this sequential setting, which is more realistic and closer to the evaluation of a real recommendation system, has surprisingly been left aside. In a sequential context, recommendation techniques need to take into consideration several aspects that are not visible in a fixed setting. The first is the exploration-exploitation dilemma: the recommending model needs to strike a good balance between gathering information about users' tastes and items through exploratory recommendation steps, and exploiting its current knowledge of users and items to try to maximize the feedback received. We highlight the importance of this point through a first evaluation study and propose a simple yet efficient approach to effective recommendation, based on matrix factorization and multi-armed bandit algorithms. The second aspect emphasized by the sequential context appears when a list of items is recommended to the user instead of a single item. In such a case, the feedback given by the user has two parts: the explicit feedback, i.e. the rating, but also the implicit feedback given by clicking (or not clicking) on other items of the list. By integrating both kinds of feedback into a matrix factorization model, we propose an approach that can suggest better-ranked lists of items, and we evaluate it in a particular setting.
Caelen, Olivier. "Sélection séquentielle en environnement aléatoire appliquée à l'apprentissage supervisé." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210265.
In a first case, the trials aim to maximize the sum of the collected rewards. A proper compromise must then be found between exploitation and exploration. This problem is commonly known in the scientific literature as the "multi-armed bandit problem".
In a second case, a maximal number of selections is imposed and the objective is to distribute these selections so as to increase the chances of finding the alternative with the highest mean reward. This second problem is commonly referred to in the scientific literature as "selecting the best".
Greedy selection plays an important role in solving these decision problems: it operates by choosing the alternative that has appeared optimal so far. However, the generally stochastic nature of the environment makes the outcome of such a selection uncertain.
In this thesis, we introduce a new quantity, called the "expected gain of a greedy action". Based on several properties of this quantity, new algorithms for solving the two aforementioned decision problems are proposed.
Particular attention is paid to applying the presented techniques to model selection in supervised machine learning.
A collaboration with the anaesthesiology department of the Erasme Hospital allowed us to apply the proposed algorithms to real data from the medical domain. We also developed a decision-support system, a prototype of which has already been tested under real conditions on a small sample of patients.
Doctorate in Sciences
Talebi, Mazraeh Shahi Mohammad Sadegh. "Minimizing Regret in Combinatorial Bandits and Reinforcement Learning." Doctoral thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219970.
Ameen, S. A. "Optimizing deep learning networks using multi-armed bandits." Thesis, University of Salford, 2017. http://usir.salford.ac.uk/45018/.
Повний текст джерелаOlkhovskaya, Julia. "Large-scale online learning under partial feedback." Doctoral thesis, Universitat Pompeu Fabra, 2022. http://hdl.handle.net/10803/673926.
Sequential decision making under uncertainty covers a broad class of problems, and real-world applications require algorithms that are computationally efficient and scalable. We study a range of sequential learning problems in which the learner observes only partial information about the rewards, and we develop algorithms that are robust and computationally efficient in large-scale settings. The first problem we consider is online influence maximization, in which a decision maker sequentially selects a node of a graph in order to spread information through the graph by placing it at the chosen node. The available feedback is only some information about a small neighbourhood of the selected vertex. Our results show that such partial local observations can be sufficient for maximizing global influence. We propose sequential learning algorithms that aim at maximizing influence, and provide their theoretical analysis in both the subcritical and supercritical regimes of broadly studied graph models. These are the first algorithms in the sequential influence maximization setting that perform efficiently on graphs with a huge number of nodes. In another line of work, we study the contextual bandit problem, where the reward function is allowed to change in an adversarial manner and the learner only observes the rewards associated with its actions. We assume that the number of arms is finite, while the context space can be infinite. We develop a computationally efficient algorithm under the assumption that the d-dimensional contexts are generated i.i.d. at random from a known distribution. We also propose an algorithm that is shown to be robust to misspecification in the setting where the true reward function is linear up to an additive nonlinear error. To our knowledge, our performance guarantees constitute the very first results in this problem setting.
We also provide an extension to the case where the context is an element of a reproducing kernel Hilbert space. Finally, we consider an extension of the contextual bandit problem described above: the learner interacts with a Markov decision process over a sequence of episodes, where an adversary chooses the reward function and reward observations are available only for the selected action. We allow the state space to be arbitrarily large, but we assume that all action-value functions can be represented as linear functions of a known low-dimensional feature map, and that the learner has at least access to a simulator of trajectories in the MDP. Our main contributions are the first algorithms shown to be robust and efficient in this problem setting.
Allesiardo, Robin. "Bandits Manchots sur Flux de Données Non Stationnaires." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS334/document.
The multi-armed bandit is a framework allowing the study of the trade-off between exploration and exploitation under partial feedback. At each turn t ∈ [1, T] of the game, a player chooses an arm k_t from a set of K arms and receives a reward y_{k_t} drawn from a reward distribution D(µ_{k_t}) with mean µ_{k_t} and support [0,1]. This is a challenging problem, as the player only knows the reward associated with the played arm and does not know what the reward would have been had she played another arm. Before each play, she faces the dilemma between exploration and exploitation: exploring increases the confidence of the reward estimators, while exploiting increases the cumulative reward by playing the empirically best arm (under the assumption that the empirically best arm is indeed the actual best arm). In the first part of the thesis, we tackle the multi-armed bandit problem when reward distributions are non-stationary. We first study the case where, even if reward distributions change during the game, the best arm stays the same, and then the case where the best arm changes during the game. The second part of the thesis tackles the contextual bandit problem, where the means of the reward distributions now depend on the environment's current state. We study the use of neural networks and random forests for contextual bandits, and then propose a meta-bandit approach for selecting online the best-performing expert during its learning.
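The stochastic setting just described (K arms, rewards in [0,1], exploration vs. exploitation) can be made concrete with a short sketch. The following UCB1-style index policy (Auer 2002, cited in the abstract above) is a generic textbook illustration, not one of the algorithms proposed in the thesis:

```python
import math
import random

def ucb1(pull, K, T):
    """Play T rounds of a K-armed bandit with the UCB1 index policy.

    `pull(k)` returns a reward in [0, 1] for arm k. Returns the empirical
    mean reward of each arm and how often each arm was played.
    """
    counts = [0] * K      # number of times each arm was played
    means = [0.0] * K     # empirical mean reward of each arm
    for t in range(1, T + 1):
        if t <= K:
            k = t - 1     # play each arm once to initialize the estimates
        else:
            # optimism: empirical mean plus an exploration bonus that
            # shrinks as an arm accumulates samples
            k = max(range(K),
                    key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(k)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]  # incremental mean update
    return means, counts

# Toy Bernoulli bandit: arm 1 is best (mean 0.8).
rng = random.Random(0)
probs = [0.3, 0.8, 0.5]
means, counts = ucb1(lambda k: float(rng.random() < probs[k]), K=3, T=5000)
```

Over 5000 rounds the exploration bonus of the suboptimal arms decays without vanishing, so they keep being sampled occasionally while the best arm collects almost all the pulls.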
Das, Sanmay 1979. "Dealers, insiders and bandits : learning and its effects on market outcomes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37916.
Includes bibliographical references (p. 145-149).
This thesis seeks to contribute to the understanding of markets populated by boundedly rational agents who learn from experience. Bounded rationality and learning have both been the focus of much research in computer science, economics and finance theory. However, we are at a critical stage in defining the direction of future research in these areas. It is now clear that realistic learning problems faced by agents in market environments are often too hard to solve in a classically rational fashion. At the same time, the greatly increased computational power available today allows us to develop and analyze richer market models and to evaluate different learning procedures and algorithms within these models. The danger is that the ease with which complex markets can be simulated could lead to a plethora of models that attempt to explain every known fact about different markets. The first two chapters of this thesis define a principled approach to studying learning in rich models of market environments, and the rest of the thesis provides a proof of concept by demonstrating the applicability of this approach in modeling settings drawn from two different broad domains, financial market microstructure and search theory. In the domain of market microstructure, this thesis extends two important models from the theoretical finance literature.
The third chapter introduces an algorithm for setting prices in dealer markets based on the model of Glosten and Milgrom (1985), and produces predictions about the behavior of prices in securities markets. In some cases, these results confirm economic intuitions in a significantly more complex setting (like the existence of a local profit maximum for a monopolistic market-maker) and in others they can be used to provide quantitative guesses for variables such as rates of convergence to efficient market conditions following price jumps that provide insider information. The fourth chapter studies the problem faced by a trader with insider information in Kyle's (1985) model. I show how the insider trading problem can be usefully analyzed from the perspective of reinforcement learning when some important market parameters are unknown, and that the equilibrium behavior of an insider who knows these parameters can be learned by one who does not, but also that the time scale of convergence to the equilibrium behavior may be impractical, and agents with limited time horizons may be better off using approximate algorithms that do not converge to equilibrium behavior. The fifth and sixth chapters relate to search problems. Chapter 5 introduces models for a class of problems in which there is a search "season" prior to hiring or matching, like academic job markets.
It solves for expected values in many cases, and studies the difference between a "high information" process where applicants are immediately told when they have been rejected and a "low information" process where employers do not send any signal when they reject an applicant. The most important intuition to emerge from the results is that the relative benefit of the high information process is much greater when applicants do not know their own "attractiveness," which implies that search markets might be able to eliminate inefficiencies effectively by providing good information, and we do not always have to think about redesigning markets as a whole. Chapter 6 studies two-sided search explicitly and introduces a new class of multi-agent learning problems, two-sided bandit problems, that capture the learning and decision problems of agents in matching markets in which agents must learn their preferences. It also empirically studies outcomes under different periodwise matching mechanisms and shows that some basic intuitions about the asymptotic stability of matchings are preserved in the model. For example, when agents are matched in each period using the Gale-Shapley algorithm, asymptotic outcomes are always stable, while a matching mechanism that induces a stopping problem for some agents leads to the lowest probabilities of stability.
By contributing to the state of the art in modeling different domains using computational techniques, this thesis demonstrates the success of the approach to modeling complex economic and social systems that is prescribed in the first two chapters.
by Sanmay Das.
Ph.D.
Hauser, Kristen. "Hyperparameter Tuning for Reinforcement Learning with Bandits and Off-Policy Sampling." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1613034993418088.
McInerney, Robert E. "Decision making under uncertainty." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:a34e87ad-8330-42df-8ba6-d55f10529331.
Повний текст джерелаGalichet, Nicolas. "Contributions to Multi-Armed Bandits : Risk-Awareness and Sub-Sampling for Linear Contextual Bandits." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112242/document.
This thesis focuses on sequential decision making in unknown environments, and more particularly on the Multi-Armed Bandit (MAB) setting, introduced by Robbins in the 1950s. During the last decade, many theoretical and algorithmic studies have been aimed at the exploration vs. exploitation tradeoff at the core of MABs, where exploitation is biased toward the best options visited so far while exploration is biased toward options rarely visited, to enforce the discovery of the true best choices. MAB applications range from medicine (the elicitation of the best prescriptions) to e-commerce (recommendations, advertisements) and optimal policies (e.g., in the energy domain). The contributions presented in this dissertation tackle the exploration vs. exploitation dilemma from two angles. The first contribution is centered on risk avoidance. Exploration in unknown environments often has adverse effects: for instance, exploratory trajectories of a robot can entail physical damage to the robot or its environment. We thus define the exploration vs. exploitation vs. safety (EES) tradeoff and propose three new algorithms addressing the EES dilemma. Firstly, and under strong assumptions, the MIN algorithm provides a robust behavior with guarantees of logarithmic regret, matching the state of the art with high robustness w.r.t. hyper-parameter setting (as opposed to, e.g., UCB (Auer 2002)). Secondly, the MARAB algorithm aims at optimizing the cumulative Conditional Value at Risk (CVaR) of the rewards, a criterion originating from the economics domain, with excellent empirical performance compared to (Sani et al. 2012), though without theoretical guarantees. Finally, the MARABOUT algorithm modifies the CVaR estimation and yields both theoretical guarantees and good empirical behavior.
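The CVaR criterion optimized by MARAB and MARABOUT can be illustrated with a small sketch. The empirical-CVaR helper below follows the usual convention of averaging the alpha-fraction worst rewards; it is a generic illustration, not the thesis's estimator:

```python
def cvar(rewards, alpha):
    """Empirical Conditional Value at Risk at level alpha.

    Mean of the alpha-fraction worst (lowest) rewards: a risk-sensitive
    criterion that penalizes arms with heavy lower tails even when
    their overall mean looks good.
    """
    assert 0 < alpha <= 1
    ordered = sorted(rewards)              # worst rewards first
    k = max(1, int(len(ordered) * alpha))  # size of the lower tail
    tail = ordered[:k]
    return sum(tail) / len(tail)

# Two arms with the same mean (0.5) but very different risk profiles:
# a risk-averse bandit should prefer the safe arm.
safe = [0.5] * 10
risky = [0.0] * 5 + [1.0] * 5
```

Here `cvar(safe, 0.2)` stays at 0.5 while `cvar(risky, 0.2)` drops to 0.0, which is exactly the distinction a cumulative-CVaR objective exploits when ranking arms.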
The second contribution concerns the contextual bandit setting, where additional information is provided to support decision making, such as user details in the content recommendation domain, or patient history in the medical domain. The study focuses on how to choose between two arms with different numbers of samples. Traditionally, a confidence region is derived for each arm based on the associated samples, and the principle of optimism in the face of uncertainty implements the choice of the arm with maximal upper confidence bound. An alternative, pioneered by (Baransi et al. 2014) and called BESA, proceeds instead by subsampling without replacement the larger sample set. In this framework, we designed a contextual bandit algorithm based on sub-sampling without replacement, relaxing the (unrealistic) assumption that all arm reward distributions rely on the same parameter. The resulting CL-BESA algorithm yields both theoretical guarantees of logarithmic regret and good empirical behavior.
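The sub-sampling idea behind BESA can be sketched in a few lines. The two-arm duel below is a simplified illustration of the principle (sub-sample the longer reward history down to the size of the shorter one, then compare empirical means), not the CL-BESA algorithm itself:

```python
import random

def besa_duel(samples_a, samples_b, rng):
    """BESA-style comparison of two arms (after Baransi et al. 2014).

    Sub-sample, without replacement, each reward history down to the
    size of the smaller one, then pick the arm with the higher
    empirical mean on these equally sized samples. Returns 0 or 1.
    """
    n = min(len(samples_a), len(samples_b))
    sub_a = rng.sample(samples_a, n)  # sampling without replacement
    sub_b = rng.sample(samples_b, n)
    mean_a = sum(sub_a) / n
    mean_b = sum(sub_b) / n
    return 0 if mean_a >= mean_b else 1

rng = random.Random(2)
arm_a = [0.9, 0.8, 1.0]            # short but strong reward history
arm_b = [0.1, 0.2, 0.0, 0.3, 0.1]  # longer but weaker history
choice = besa_duel(arm_a, arm_b, rng)
```

Comparing equally sized sub-samples removes the advantage a heavily sampled arm would otherwise get from its tighter confidence interval, which is the core intuition the abstract describes.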
Alghamdi, Bandar Abdulrahman. "Topic-based feature selection and a hybrid approach for detecting spammers on twitter." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/204112/1/Bandar%20Abdulrahman%20A_Alghamdi_Thesis.pdf.
Повний текст джерелаSieh, May-po Mabel, and 薛美寶. "Approaches to learning in school and the banding system in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31956701.
Bubeck, Sébastien. "JEUX DE BANDITS ET FONDATIONS DU CLUSTERING." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2010. http://tel.archives-ouvertes.fr/tel-00845565.
Повний текст джерелаSilva, Francinaldo Rodrigues da Silva. "A aprendizagem musical e as contribuições sociais nas bandas de música: um estudo com duas bandas escolares." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/3533.
Brass bands are an important space for musical learning, involving many teaching perspectives: individual and group instrumental teaching, music theory lessons, and marching discipline. Brass bands take an active part in Brazilian communities, playing in public ceremonies, civic and military parades, religious festivals, and many other events. Taking part in a band gives the members a kind of learning that goes beyond playing an instrument. With these particularities of bands in view, this study examined two marching bands of the city of Aparecida de Goiânia/GO, in order to investigate how bands contribute to the musical and social development of the students who take part in them. The research combines qualitative and quantitative data. To obtain meaningful results, the study was carried out through a literature review and direct observation of theory lessons, rehearsals, and performances of the selected bands. Questionnaires were administered to the students taking part in these bands, and semi-structured interviews were conducted with the conductors, assistant teachers, and the management group of the institutions hosting the bands. The answers to the questionnaires and interviews yielded data that support the objectives of this research.
Wan, Hao. "Tutoring Students with Adaptive Strategies." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/36.
Wang, Yu-Xiang. "New Paradigms and Optimality Guarantees in Statistical Learning and Estimation." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1113.
Повний текст джерелаChafaa, Irched. "Machine learning for beam alignment in mmWave networks." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG044.
To cope with the ever-increasing mobile data traffic, an envisioned solution for future wireless networks is to exploit the large spectrum available in the millimeter wave (mmWave) band. However, communicating at these high frequencies is very challenging, as the transmitted signal suffers from strong attenuation, which leads to a limited propagation range and few multipath components (sparse mmWave channels). Hence, highly directional beams have to be employed to focus the signal energy towards the intended user and compensate for those losses. Such beams need to be steered appropriately to guarantee a reliable communication link. This is the so-called beam alignment problem: the beams of the transmitter and the receiver need to be constantly aligned. Moreover, beam alignment policies need to support device mobility and the unpredictable dynamics of the network, which result in significant signaling and training overhead affecting the overall performance. In the first part of the thesis, we formulate the beam alignment problem via the adversarial multi-armed bandit framework, which copes with arbitrary network dynamics, including non-stationary or adversarial components. We propose online and adaptive beam alignment policies relying only on one-bit feedback to steer the beams of both nodes of the communication link in a distributed manner. Building on the well-known exponential weights algorithm (EXP3) and exploiting the sparse nature of mmWave channels, we propose a modified policy (MEXP3) with optimal theoretical guarantees in terms of asymptotic regret. Moreover, for finite horizons, our regret upper bound is tighter than that of the original EXP3, suggesting better performance in practice. We then introduce an additional modification that accounts for the temporal correlation between successive beams and propose another beam alignment policy (NBT-MEXP3).
In the second part of the thesis, deep learning tools are investigated to select mmWave beams in an access point -- user link. We leverage unsupervised deep learning to exploit channel knowledge at sub-6 GHz and predict beamforming vectors in the mmWave band; this complex channel-beam mapping is learned from data drawn from the DeepMIMO dataset without ground-truth labels. We also show how to choose an optimal size for the neural network depending on the number of transmit and receive antennas at the access point. Furthermore, we investigate the impact of training data availability and introduce a federated learning (FL) approach to predict the beams of multiple links by sharing only the parameters of the locally trained neural networks (and not the local data), investigating both synchronous and asynchronous FL methods. Our numerical simulations show the high potential of this approach, especially when the locally available data is scarce or imperfect (noisy). Finally, we compare the proposed deep learning methods with the reinforcement learning methods derived in the first part: simulations show that choosing an appropriate beam steering method depends on the target application and involves a tradeoff between rate performance and computational complexity.
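The exponential-weights machinery on which MEXP3 builds can be sketched as follows. This is the generic EXP3 algorithm for adversarial bandits, not the modified policies proposed in the thesis:

```python
import math
import random

def exp3(reward, K, T, gamma, seed=0):
    """Generic EXP3: exponential weights for adversarial bandits.

    `reward(t, k)` returns the reward in [0, 1] of arm k at round t;
    only the played arm's reward is observed. Returns the sequence of
    played arms.
    """
    rng = random.Random(seed)
    weights = [1.0] * K
    picks = []
    for t in range(T):
        total = sum(weights)
        # mix the exponential-weights distribution with uniform exploration
        probs = [(1 - gamma) * w / total + gamma / K for w in weights]
        k = rng.choices(range(K), weights=probs)[0]
        picks.append(k)
        x = reward(t, k) / probs[k]          # importance-weighted reward estimate
        weights[k] *= math.exp(gamma * x / K)
    return picks

# Toy adversary-free check: arm 2 always pays 1, the others 0,
# so EXP3 should concentrate its plays on arm 2.
picks = exp3(lambda t, k: 1.0 if k == 2 else 0.0, K=4, T=2000, gamma=0.1)
```

The importance-weighted estimate keeps the update unbiased despite the bandit (one-arm-per-round) feedback, and the uniform mixing term `gamma / K` guarantees every arm retains a minimum sampling probability.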
King, Tyler C. "Factors influencing adults' participation in community bands of Central Ohio." Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1243881978.
Cayci, Semih. "Online Learning for Optimal Control of Communication and Computing Systems." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595516470389826.
Повний текст джерелаVitorino, Luís Miguel do Nascimento. "Composição de bandas sonoras para filmes de animação: aprendizagens musicais e sociais em jovens do 2º ciclo." Master's thesis, Escola Superior de Educação, Instituto Politécnico de Setúbal, 2014. http://hdl.handle.net/10400.26/6489.
Composing soundtracks for short animated films proved an appealing challenge for promoting creative practice in the Music Education classes of the 2nd cycle of basic education. Implemented with a class of sixth-graders over eleven lessons, this project sought to foster a significant set of musical learnings and to encourage, through cooperative work, a more collaborative side of students' creativity. Starting from planned improvisation sessions, students explored different ways and techniques of expressing themselves musically, with the goal of creating a musical product that enhanced the animated film. Several theoretical concepts are mobilized to support and frame this study. While the concepts of soundtrack, sound design, improvisation, and musical composition are inseparable from the universe explored in this work, concepts such as creativity and collaborative creativity are central to understanding the creative process. Swanwick and Tillman's Spiral of Musical Development serves as the benchmark for evaluating the compositions, and concepts related to the dynamics of cooperative work and peer learning are also invoked, namely Vygotsky's concept of the Zone of Proximal Development. A qualitative investigation, developed in parallel, sought to identify the musical and social learnings achieved by the students throughout the composition process. Analysis of the collected data showed that composing soundtracks in a cooperative work context favors a significant number of musical and social learnings, although the students' compositional products fall slightly short of what would be expected at this age level.
Wilhelmi, Roca Francesc. "Towards spatial reuse in future wireless local area networks: a sequential learning approach." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/669970.
The spatial reuse (SR) operation is gaining momentum for the latest family of IEEE 802.11 standards because of the overwhelming requirements of next-generation wireless networks. In particular, the growing traffic demand and the number of concurrent devices compromise the efficiency of increasingly crowded wireless local area networks (WLANs) and call their decentralized nature into question. The SR operation, initially introduced by the IEEE 802.11ax-2021 standard and subsequently studied in IEEE 802.11be-2024, aims to increase the number of concurrent transmissions in an overlapping basic service set (OBSS) by adjusting sensitivity and transmit power control, thereby improving spectral efficiency. Our study of the SR operation shows remarkable potential for improving the number of simultaneous transmissions in crowded deployments, thus contributing to the development of low-latency next-generation applications. However, the potential benefits of SR are currently limited by the rigidity of the mechanism introduced for 11ax and by the lack of coordination among the BSSs that implement it. The SR operation is evolving toward coordinated schemes in which different BSSs cooperate. Coordination, however, entails communication and synchronization overhead, which impacts WLAN performance. Moreover, the coordinated scheme is incompatible with devices using earlier IEEE 802.11 versions, which could degrade the performance of existing networks. For these reasons, this thesis evaluates the feasibility of decentralized mechanisms for SR and thoroughly analyzes the main impediments and shortcomings that may arise from them.
Our goal is to shed light on the future shape of WLANs with respect to SR optimization: whether their decentralized character should be retained, or whether an evolution toward coordinated, centralized deployments is preferable. To address SR in a decentralized way, we focus on Artificial Intelligence (AI) and employ a class of sequential learning-based methods called Multi-Armed Bandits (MABs). The MAB framework suits the decentralized SR problem because it addresses the uncertainty caused by the simultaneous operation of multiple devices (i.e., a multi-player environment) and the resulting lack of information. MABs can cope with the complexity behind the spatial interactions among devices that result from modifying their sensitivity and transmit power. In this regard, our results indicate significant performance gains (up to 100%) in highly dense deployments. However, applying multi-agent machine learning raises several issues that can compromise the performance of the devices of a network (defining joint objectives, convergence horizon, scalability aspects, or non-stationarity). In addition, our study of multi-agent learning for SR includes infrastructure aspects for next-generation networks that intrinsically integrate AI.
Selent, Douglas A. "Creating Systems and Applying Large-Scale Methods to Improve Student Remediation in Online Tutoring Systems in Real-time and at Scale." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/308.