Ready-made bibliography on the topic "Reinforcement Motor Learning"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Reinforcement Motor Learning".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the relevant details are available in the record's metadata.

Journal articles on the topic "Reinforcement Motor Learning"

1

Vassiliadis, Pierre, Gerard Derosiere, Cecile Dubuc, Aegryan Lete, Frederic Crevecoeur, Friedhelm C. Hummel, and Julie Duque. "Reward boosts reinforcement-based motor learning". iScience 24, no. 7 (July 2021): 102821. http://dx.doi.org/10.1016/j.isci.2021.102821.

2

Uehara, Shintaro, Firas Mawase, Amanda S. Therrien, Kendra M. Cherry-Allen, and Pablo Celnik. "Interactions between motor exploration and reinforcement learning". Journal of Neurophysiology 122, no. 2 (August 1, 2019): 797–808. http://dx.doi.org/10.1152/jn.00390.2018.

Abstract:
Motor exploration, a trial-and-error process in search for better motor outcomes, is known to serve a critical role in motor learning. This is particularly relevant during reinforcement learning, where actions leading to a successful outcome are reinforced while unsuccessful actions are avoided. Although early on motor exploration is beneficial to finding the correct solution, maintaining high levels of exploration later in the learning process might be deleterious. Whether and how the level of exploration changes over the course of reinforcement learning, however, remains poorly understood. Here we evaluated temporal changes in motor exploration while healthy participants learned a reinforcement-based motor task. We defined exploration as the magnitude of trial-to-trial change in movements as a function of whether the preceding trial resulted in success or failure. Participants were required to find the optimal finger-pointing direction using binary feedback of success or failure. We found that the magnitude of exploration gradually increased over time when participants were learning the task. Conversely, exploration remained low in participants who were unable to correctly adjust their pointing direction. Interestingly, exploration remained elevated when participants underwent a second training session, which was associated with faster relearning. These results indicate that the motor system may flexibly upregulate the extent of exploration during reinforcement learning as if acquiring a specific strategy to facilitate subsequent learning. Also, our findings showed that exploration affects reinforcement learning and vice versa, indicating an interactive relationship between them. Reinforcement-based tasks could be used as primers to increase exploratory behavior leading to more efficient subsequent learning. NEW & NOTEWORTHY Motor exploration, the ability to search for the correct actions, is critical to learning motor skills. Despite this, whether and how the level of exploration changes over the course of training remains poorly understood. We showed that exploration increased and remained high throughout training of a reinforcement-based motor task. Interestingly, elevated exploration persisted and facilitated subsequent learning. These results suggest that the motor system upregulates exploration as if learning a strategy to facilitate subsequent learning.
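
Since this abstract defines exploration operationally (the magnitude of trial-to-trial change in movement, conditioned on the preceding trial's outcome), a minimal sketch of that metric may be useful. It uses synthetic data and invented numbers for illustration only; it is not the authors' code:

```python
import numpy as np

# Synthetic stand-ins for the task data: one pointing direction (degrees)
# and one binary success flag per trial.
rng = np.random.default_rng(0)
directions = rng.normal(0.0, 5.0, size=200)
success = rng.random(200) < 0.4

# Exploration metric as described in the abstract: |change between consecutive
# trials|, split by whether the preceding trial succeeded or failed.
change = np.abs(np.diff(directions))
after_failure = change[~success[:-1]]
after_success = change[success[:-1]]

print(f"mean change after failure: {after_failure.mean():.2f} deg")
print(f"mean change after success: {after_success.mean():.2f} deg")
```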
3

Sistani, Mohammad Bagher Naghibi, and Sadegh Hesari. "Decreasing Induction Motor Loss Using Reinforcement Learning". Journal of Automation and Control Engineering 3, no. 6 (2015): 13–17. http://dx.doi.org/10.12720/joace.4.1.13-17.

4

Palidis, Dimitrios J., Heather R. McGregor, Andrew Vo, Penny A. MacDonald, and Paul L. Gribble. "Null effects of levodopa on reward- and error-based motor adaptation, savings, and anterograde interference". Journal of Neurophysiology 126, no. 1 (July 1, 2021): 47–67. http://dx.doi.org/10.1152/jn.00696.2020.

Abstract:
Motor adaptation relies on multiple processes including reinforcement of successful actions. Cognitive reinforcement learning is impaired by levodopa-induced disruption of dopamine function. We administered levodopa to healthy adults who participated in multiple motor adaptation tasks. We found no effects of levodopa on any component of motor adaptation. This suggests that motor adaptation may not depend on the same dopaminergic mechanisms as the cognitive forms of reinforcement learning that have been shown to be impaired by levodopa.
5

IZAWA, Jun, Toshiyuki KONDO, and Koji ITO. "Motor Learning Model through Reinforcement Learning with Neural Internal Model". Transactions of the Society of Instrument and Control Engineers 39, no. 7 (2003): 679–87. http://dx.doi.org/10.9746/sicetr1965.39.679.

6

Peters, Jan, and Stefan Schaal. "Reinforcement learning of motor skills with policy gradients". Neural Networks 21, no. 4 (May 2008): 682–97. http://dx.doi.org/10.1016/j.neunet.2008.02.003.

7

Sidarta, Ananda, John Komar, and David J. Ostry. "Clustering analysis of movement kinematics in reinforcement learning". Journal of Neurophysiology 127, no. 2 (February 1, 2022): 341–53. http://dx.doi.org/10.1152/jn.00229.2021.

Abstract:
The choice of exploration versus exploitation is a fundamental problem in learning new motor skills through reinforcement. In this study, we employed a data-driven approach to characterize movements on a trial-by-trial basis with an unsupervised clustering algorithm. Using this technique, we found that changes in task demands and, in particular, in the required accuracy of movements, influenced the ratio of exploration to exploitation. This analysis framework provides an attractive tool to investigate mechanisms of explorative and exploitative behavior while studying motor learning.
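
As a rough illustration of the data-driven approach this abstract describes, one could cluster per-trial kinematic features with an off-the-shelf unsupervised algorithm and read the cluster-switch rate as an exploration proxy. The sketch below uses scikit-learn's KMeans on made-up features; it is an assumption-laden stand-in, not necessarily the algorithm used in the study:

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up per-trial kinematic features (e.g. peak speed, curvature, end-point x/y).
rng = np.random.default_rng(1)
features = rng.normal(size=(300, 4))

# Unsupervised clustering assigns each trial a movement "type".
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

# Frequent switching between clusters suggests exploration; repeatedly
# revisiting one cluster suggests exploitation.
switch_rate = np.mean(labels[1:] != labels[:-1])
print(f"cluster-switch rate (exploration proxy): {switch_rate:.2f}")
```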
8

Sidarta, Ananda, Floris T. van Vugt, and David J. Ostry. "Somatosensory working memory in human reinforcement-based motor learning". Journal of Neurophysiology 120, no. 6 (December 1, 2018): 3275–86. http://dx.doi.org/10.1152/jn.00442.2018.

Abstract:
Recent studies using visuomotor adaptation and sequence learning tasks have assessed the involvement of working memory in the visuospatial domain. The capacity to maintain previously performed movements in working memory is perhaps even more important in reinforcement-based learning to repeat accurate movements and avoid mistakes. Using this kind of task in the present work, we tested the relationship between somatosensory working memory and motor learning. The first experiment involved separate memory and motor learning tasks. In the memory task, the participant’s arm was displaced in different directions by a robotic arm, and the participant was asked to judge whether a subsequent test direction was one of the previously presented directions. In the motor learning task, participants made reaching movements to a hidden visual target and were provided with positive feedback as reinforcement when the movement ended in the target zone. It was found that participants that had better somatosensory working memory showed greater motor learning. In a second experiment, we designed a new task in which learning and working memory trials were interleaved, allowing us to study participants’ memory for movements they performed as part of learning. As in the first experiment, we found that participants with better somatosensory working memory also learned more. Moreover, memory performance for successful movements was better than for movements that failed to reach the target. These results suggest that somatosensory working memory is involved in reinforcement motor learning and that this memory preferentially keeps track of reinforced movements. NEW & NOTEWORTHY The present work examined somatosensory working memory in reinforcement-based motor learning. Working memory performance was reliably correlated with the extent of learning. With the use of a paradigm in which learning and memory trials were interleaved, memory was assessed for movements performed during learning. Movements that received positive feedback were better remembered than movements that did not. Thus working memory does not track all movements equally but is biased to retain movements that were rewarded.
9

Tian, Mengqi, Ke Wang, Hongyu Lv, and Wubin Shi. "Reinforcement learning control method of torque stability of three-phase permanent magnet synchronous motor". Journal of Physics: Conference Series 2183, no. 1 (January 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2183/1/012024.

Abstract:
Regarding the control strategy of the permanent magnet synchronous motor, field-oriented control based on a PI controller suffers from instability of the output torque. In order to stabilize the output torque of the permanent magnet synchronous motor, this paper adopts reinforcement learning to improve the traditional PI controller. Finally, in the MATLAB/Simulink simulation environment, a new control method based on reinforcement learning is established. The simulation results show that the reinforcement learning control method used in this paper can improve the stability of the output torque.
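
The paper's controller is built in MATLAB/Simulink; purely as a hedged sketch of the general idea (reinforcement learning selecting PI gains so as to reduce torque ripple), here is a bandit-style toy in Python. The first-order plant, the gain grid, and every constant are invented for illustration and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

def torque_ripple(kp, ki, steps=400, dt=1e-3, target=1.0):
    """Toy first-order motor under a PI loop; returns steady-state torque ripple."""
    integ, torque, trace = 0.0, 0.0, []
    for _ in range(steps):
        err = target - torque
        integ += err * dt
        u = kp * err + ki * integ                                   # PI control law
        torque += dt * 10.0 * (u - torque) + rng.normal(0.0, 1e-3)  # plant + load noise
        trace.append(torque)
    return float(np.std(trace[steps // 2:]))                        # ripple after settling

gains = [(kp, ki) for kp in (1.0, 5.0, 20.0) for ki in (10.0, 50.0, 200.0)]
q = np.zeros(len(gains))   # action value per gain pair
n = np.zeros(len(gains))   # visit count per gain pair
for _ in range(200):
    # Epsilon-greedy choice of a gain pair; reward = negative torque ripple.
    a = int(rng.integers(len(gains))) if rng.random() < 0.1 else int(np.argmax(q))
    reward = -torque_ripple(*gains[a])
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]   # incremental mean update

print("best gains found:", gains[int(np.argmax(q))])
```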
10

Uehara, Shintaro, Firas Mawase, and Pablo Celnik. "Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms". Cerebral Cortex 28, no. 10 (September 14, 2017): 3478–90. http://dx.doi.org/10.1093/cercor/bhx214.

Abstract:
Humans can acquire knowledge of new motor behavior via different forms of learning. The two forms most commonly studied have been the development of internal models based on sensory-prediction errors (error-based learning) and success-based feedback (reinforcement learning). Human behavioral studies suggest these are distinct learning processes, though the neurophysiological mechanisms that are involved have not been characterized. Here, we evaluated physiological markers from the cerebellum and the primary motor cortex (M1) using noninvasive brain stimulation while healthy participants trained finger-reaching tasks. We manipulated the extent to which subjects rely on error-based or reinforcement learning by providing either vector or binary feedback about task performance. Our results demonstrated a double dissociation where learning the task mainly via error-based mechanisms leads to cerebellar plasticity modifications but not long-term potentiation (LTP)-like plasticity changes in M1, while learning a similar action via reinforcement mechanisms elicited M1 LTP-like plasticity but not cerebellar plasticity changes. Our findings indicate that learning complex motor behavior is mediated by the interplay of different forms of learning, weighing distinct neural mechanisms in M1 and the cerebellum. Our study provides insights for designing effective interventions to enhance human motor learning.

Doctoral dissertations on the topic "Reinforcement Motor Learning"

1

Zhang, Fangyi. "Learning real-world visuo-motor policies from simulation". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121471/1/Fangyi%20Zhang%20Thesis.pdf.

Abstract:
This thesis explores how simulation can be used to create the large amount of data required to teach a robot certain hand-eye coordination skills. It advances the state-of-the-art of deep visuo-motor policy learning by introducing a new modular architecture, a novel reinforcement learning exploration strategy, and adversarial discriminative transfer.
2

De La Bourdonnaye, François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics". Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.

Abstract:
The thesis is focused on learning a complex manipulation robotics task using little knowledge. More precisely, the task consists in reaching an object with a serial arm, and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms suit this objective well: reinforcement learning allows sensori-motor mappings to be learned while dispensing with dynamics models, and deep learning dispenses with handcrafted features for the state space representation. However, it is difficult to specify the objectives of the learned task without human supervision. Some solutions rely on expert demonstrations or shaping rewards to guide robots towards their objective; the latter are generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires knowledge of a goal state. Decomposing the whole complex task into simpler sub-tasks (hierarchical learning) can also be utilized, but does not necessarily imply a lack of human supervision. Approaches that use several agents in parallel to increase the probability of success exist as well, but are costly. In our approach, we decompose the whole reaching task into three simpler sub-tasks, taking inspiration from human behavior: humans first look at an object before reaching for it. The first learned task is an object fixation task aimed at localizing the object in 3D space; it is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in jointly learning end-effector binocular fixations and a hand-eye coordination function, using a similar set-up, and is aimed at localizing the end-effector in 3D space. The third task uses the two previously learned skills to learn to reach an object under the same requirements: it hardly requires supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision.
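
To make the "weakly supervised reward" idea concrete, the following is a minimal sketch of the kind of binary fixation reward the thesis describes for the first sub-task: reward only when the object appears near the image center of both cameras. The function name, coordinates, and tolerance are hypothetical, not taken from the thesis:

```python
def fixation_reward(obj_px_left, obj_px_right, center=(32, 32), tol=2.0):
    """Return 1.0 when the object is centered in both camera views, else 0.0."""
    def dist(p):
        return ((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2) ** 0.5
    return 1.0 if dist(obj_px_left) < tol and dist(obj_px_right) < tol else 0.0

print(fixation_reward((32, 33), (31, 32)))  # centered in both views -> 1.0
print(fixation_reward((10, 50), (31, 32)))  # off-center in the left view -> 0.0
```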
3

Wang, Jiexin. "Policy Hyperparameter Exploration for Behavioral Learning of Smartphone Robots". 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225744.

4

Frömer, Romy. "Learning to throw". Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17427.

Abstract:
Feedback, training schedule, and individual differences between learners influence the acquisition of motor skills and were investigated in the present thesis. A special focus was on the brain processes underlying feedback processing and motor preparation, investigated using event-related potentials (ERPs). 120 participants trained to throw at virtual targets and were tested for retention and transfer. The training schedule was manipulated, with half of the participants practicing under high contextual interference (CI) (randomized training) and the other half under low CI (blocked training). In a follow-up online study, 80% of the participants completed a subset of the Raven advanced progressive matrices, testing reasoning ability. Under high CI, participants' reasoning ability was related to a higher performance increase during training and higher subsequent performance in retention and transfer. Similar effects in late stages of low CI training indicate that variability is a necessary prerequisite for beneficial effects of reasoning ability. We conclude that CI affects the amount of variability of practice across the course of training and the abstraction of rules (Study 1). Differential learning effects on ERPs in the preparatory phase support this interpretation. High CI shows a larger decline in attention- and control-related ERPs than low CI. CNV amplitude, as a measure of motor preparatory activity, increases with learning only when the attention demands of training and retention are similar, as in low CI training. This points to two parallel mechanisms in motor learning, with a cognitive and a motor processor mutually contributing to CNV amplitude (Study 2). In the framework of the reinforcement learning theory of the error-related negativity, we showed that positive performance feedback is processed gradually and that this processing is reflected in varying amplitudes of the reward positivity (Study 3). Together these results provide new insights into motor learning.
5

PAQUIER, Williams. "Apprentissage ouvert de représentations et de fonctionnalités en robotique : analyse, modèles et implémentation". PhD thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00009324.

Abstract:
The autonomous acquisition of representations and functionalities in robotics raises numerous theoretical problems. Today, autonomous robotic systems are designed around a set of functionalities. Their representations of the world derive from the analysis of a problem and from a modeling given in advance by the designers. This approach limits learning capabilities. In this thesis we propose an open system of representations and functionalities. The system learns by experimenting with its environment and is guided by the increase of a value function. Its objective is to act on its environment in order to reactivate the representations for which it has learned a positive connotation. An analysis of the capacity to generalize the production of appropriate actions for these reactivations leads to a set of properties that such a system must satisfy. The representation system consists of a network of similar processing units and uses position coding: the meaning of a unit's state depends on its position in the network, which bears similarities to the principle of positional notation. A representation corresponds to the activation of a set of units. This system was implemented in a software suite called NeuSter, which can simulate networks of several million units and a billion connections on heterogeneous clusters of POSIX machines. The first results validate the constraints derived from the analysis. Such a system can learn, within a single network, hierarchically and without supervision, detectors of edges and lines, corners, line terminations, faces, motion directions, rotations, expansions, and phonemes. NeuSter learns online using only its sensor data. It was tested on mobile robots for learning and tracking objects.
6

Trska, Robert. "Motor expectancy: the modulation of the reward positivity in a reinforcement learning motor task". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9992.

Abstract:
An adage posits that we learn from our mistakes; however, this is not entirely true. According to reinforcement learning theory, we learn when the expectation of our actions differs from outcomes. Here, we examined whether expectancy-driven learning plays a role in motor learning. Given the vast amount of overlapping anatomy and circuitry within the brain with respect to reward and motor processes, it is appropriate to examine both motor control and expectancy processes within a single task. In the current study, participants performed a line drawing task via tablet under conditions of changing expectancies. Participants were provided feedback in a reinforcement-learning manner, as positive (✓) or negative (x), based on their performance. Modulation of expected outcomes was reflected by changes in amplitude of the human event-related potential (ERP) known as the reward positivity. The reward positivity is thought to reflect phasic dopamine release from the mesolimbic dopaminergic system to the basal ganglia and cingulate cortex. Due to the overlapping circuitry of reward and motor pathways, another human ERP, the Bereitschaftspotential (BP), was examined. The BP is implicated in motor planning and execution; however, the late aspect of the BP shares similarity with the contingent negative variation (CNV). Current evidence demonstrates a relationship between expectancy and reward positivity amplitude in a motor learning context, as well as modulation of the BP under difficult task conditions. Behavioural data support prior literature and may suggest a connection between sensorimotor prediction errors working in concert with reward prediction errors. Further evidence supports a frontal-medial evaluation system for motor errors. Additionally, results support prior evidence of motor plans being formed upon target observation and held in memory until motor execution, rather than being formed just before movement onset.
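
The reinforcement learning account invoked here holds that the reward positivity scales with the reward prediction error delta = r - V, which is large for unexpected outcomes and shrinks as they become expected. A minimal sketch of that update, with illustrative numbers only:

```python
# Reward prediction error: delta starts large and shrinks as the outcome becomes
# expected, mirroring the predicted decline in reward positivity amplitude.
alpha, value = 0.2, 0.0                      # learning rate, expected outcome value
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]    # illustrative feedback (1 = positive, 0 = negative)
for trial, r in enumerate(outcomes, start=1):
    delta = r - value
    value += alpha * delta
    print(f"trial {trial}: outcome={r} delta={delta:+.2f} value={value:.2f}")
```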
7

Sendhilnathan, Naveen. "The role of the cerebellum in reinforcement learning". Thesis, 2021. https://doi.org/10.7916/d8-p13c-3955.

Abstract:
How do we learn to establish associations between arbitrary visual cues (like a red light) and movements (like braking the car)? We investigated the neural correlates of visuomotor association learning in the mid-lateral cerebellum. Although the cerebellum has been considered a motor control center involved in monitoring and correcting motor error through supervised learning, in this thesis we show that its role can also be extended to non-motor learning. Specifically, when primates learned to associate arbitrary visual cues with well-learned stereotypic movements, the simple spikes of mid-lateral cerebellar Purkinje cells reported the outcome of the monkey's most recent decision during learning. The magnitude of this reinforcement error signal changed with learning, finally disappearing when the association had been overlearned. We modeled this change in neural activity with a drift-diffusion, reinforcement-learning-based model. The concurrent complex spikes, contrary to traditional theories, did not play the role of a teaching signal, but encoded the probability of error as a function of the state of learning. They also encoded features that indicate the beginning of a trial. Inactivating the mid-lateral cerebellum significantly affected the monkey's learning performance while it did not affect motor performance. This is because the mid-lateral cerebellum is in a loop with other cognitive processing centers of the brain, including the prefrontal cortex and the basal ganglia. Finally, we verified that the features we identified in primate experiments also extend to humans by studying visuomotor association learning in humans through functional magnetic resonance imaging. In summary, through electrophysiological and causal experiments in monkeys, imaging in humans, computational models, and an anatomical framework, we delineate mechanisms through which the cerebellum can be involved in reinforcement learning and, specifically, in learning new visuomotor associations.
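
The abstract mentions a drift-diffusion model combined with reinforcement learning; a loose, generic sketch of one such coupling (association strength learned from outcomes sets the drift rate of evidence accumulation) is shown below. This is a textbook-style construction with invented parameters, not the thesis model:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, strength = 0.15, 0.0                  # learning rate, association strength
for trial in range(1, 11):
    drift = 0.1 + 0.4 * strength             # stronger association -> faster drift
    evidence, steps = 0.0, 0
    while abs(evidence) < 1.0:               # accumulate noisy evidence to a bound
        evidence += drift * 0.01 + rng.normal(0.0, 0.1)
        steps += 1
    correct = evidence > 0
    strength += alpha * (float(correct) - strength)  # reinforcement update
    print(f"trial {trial}: steps={steps} correct={correct} strength={strength:.2f}")
```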
8

Krigolson, Olave. "Hierarchical error processing during motor control". Thesis, 2007. http://hdl.handle.net/1828/239.

Abstract:
The successful execution of goal-directed movement requires the evaluation of many levels of errors. On one hand, the motor system needs to be able to evaluate 'high-level' errors indicating the success or failure of a given movement. On the other hand, as a movement is executed the motor system also has to be able to correct for 'low-level' errors - an error in the initial motor command, or a change in the motor command necessary to compensate for an unexpected change in the movement environment. The goal of the present research was to provide electroencephalographic evidence that error processing during motor control is organized hierarchically. The present research demonstrated that high-level motor errors indicating the failure of a system goal elicited the error-related negativity, a component of the event-related brain potential (ERP) evoked by incorrect responses and error feedback. The present research also demonstrated that low-level motor errors are associated with a parietally distributed ERP component related to the focusing of visuo-spatial attention and context-updating. Finally, the present research includes a viable neural model for hierarchical error processing during motor control.

Books on the topic "Reinforcement Motor Learning"

1

The contextual interference effect in learning an open motor skill. 1986.

2

The contextual interference effect in learning an open motor skill. 1988.

3

The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1991.

4

The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1990.

5

Herreros, Ivan. Learning and control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0026.

Abstract:
This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and later introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback vs anticipatory and adaptive control. Finally, it argues that this framework of translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function but also to enrich engineering solutions at the level of robot learning and control with insights coming from biology.
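
Since the chapter's first distinction is feedback versus feed-forward control, a compact contrast on an invented linear plant may help; the plant, gain, and iteration count below are assumptions chosen for clarity, not taken from the chapter:

```python
def plant(u):
    return 0.5 * u  # toy linear plant: output is half the command

target = 2.0

# Feed-forward: compute the command from an (assumed perfect) inverse model;
# fast, but blind to disturbances and model error.
u_ff = target / 0.5
print("feed-forward output:", plant(u_ff))

# Feedback: iteratively correct the command from the measured error;
# needs no inverse model, but only reacts after errors occur.
u_fb, gain = 0.0, 0.8
for _ in range(20):
    u_fb += gain * (target - plant(u_fb))
print("feedback output after 20 corrections:", plant(u_fb))
```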
6

Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.

7

Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1990.

8

Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.

9

Effects of cognitive learning strategies and reinforcement: On the acquisition of closed motor skills in older adults. 1991.

10

Yun, Chi-Hong. Pre- and post-knowledge of results intervals and motor performance of mentally retarded individuals. 1989.


Book chapters on the topic "Reinforcement Motor Learning"

1

Mannes, Christian. "Learning Sensory-Motor Coordination by Experimentation and Reinforcement Learning". In Konnektionismus in Artificial Intelligence und Kognitionsforschung, 95–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-76070-9_10.

2

Manjunatha, Hemanth, and Ehsan T. Esfahani. "Application of Reinforcement and Deep Learning Techniques in Brain–Machine Interfaces". In Advances in Motor Neuroprostheses, 1–14. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38740-2_1.

3

Lohse, Keith, Matthew Miller, Mariane Bacelar, and Olav Krigolson. "Errors, rewards, and reinforcement in motor skill learning". In Skill Acquisition in Sport, 3rd ed., 39–60. New York: Routledge, 2019. http://dx.doi.org/10.4324/9781351189750-3.

4

Lane, Stephen H., David A. Handelman, and Jack J. Gelfand. "Modulation of Robotic Motor Synergies Using Reinforcement Learning Optimization". In Neural Networks in Robotics, 521–38. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3180-7_29.

5

Kober, Jens, and Jan Peters. "Reinforcement Learning to Adjust Parametrized Motor Primitives to New Situations". In Springer Tracts in Advanced Robotics, 119–47. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03194-1_5.

6

Kober, Jens, Betty Mohler, and Jan Peters. "Imitation and Reinforcement Learning for Motor Primitives with Perceptual Coupling". In Studies in Computational Intelligence, 209–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05181-4_10.

7

Patil, Gaurav, Patrick Nalepka, Lillian Rigoli, Rachel W. Kallen, and Michael J. Richardson. "Dynamical Perceptual-Motor Primitives for Better Deep Reinforcement Learning Agents". In Lecture Notes in Computer Science, 176–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85739-4_15.

8

Coulom, Rémi. "Feedforward Neural Networks in Reinforcement Learning Applied to High-Dimensional Motor Control". In Lecture Notes in Computer Science, 403–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36169-3_32.

9

Koprinkova-Hristova, Petia, and Nadejda Bocheva. "Spike Timing Neural Model of Eye Movement Motor Response with Reinforcement Learning". In Advanced Computing in Industrial Mathematics, 139–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71616-5_14.

10

Simpson, Thomas G., and Karen Rafferty. "Evaluating the Effect of Reinforcement Haptics on Motor Learning and Cognitive Workload in Driver Training". In Lecture Notes in Computer Science, 203–11. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58465-8_16.


Conference abstracts on the topic "Reinforcement Motor Learning"

1

Peters, J., and S. Schaal. "Reinforcement Learning for Parameterized Motor Primitives". In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246662.

2

Fung, Bowen, Xin Sui, Colin Camerer, and Dean Mobbs. "Reinforcement learning predicts frustration-related motor invigoration". In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1020-0.

3

Liu, Kainan, Xiaoshi Cai, Xiaojun Ban, and Jian Zhang. "Galvanometer Motor Control Based on Reinforcement Learning". In 2022 5th International Conference on Intelligent Autonomous Systems (ICoIAS). IEEE, 2022. http://dx.doi.org/10.1109/icoias56028.2022.9931291.

4

Shinohara, Daisuke, Takamitsu Matsubara, and Masatsugu Kidode. "Learning motor skills with non-rigid materials by reinforcement learning". In 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2011. http://dx.doi.org/10.1109/robio.2011.6181709.

5

Bujgoi, Gheorghe, and Dorin Sendrescu. "DC Motor Control based on Integral Reinforcement Learning". In 2022 23rd International Carpathian Control Conference (ICCC). IEEE, 2022. http://dx.doi.org/10.1109/iccc54292.2022.9805935.

6

Stulp, Freek, Jonas Buchli, Evangelos Theodorou, and Stefan Schaal. "Reinforcement learning of full-body humanoid motor skills". In 2010 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010). IEEE, 2010. http://dx.doi.org/10.1109/ichr.2010.5686320.

7

Cosmin, Bucur, and Tasu Sorin. "Reinforcement Learning for a Continuous DC Motor Controller". In 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2023. http://dx.doi.org/10.1109/ecai58194.2023.10193912.

8

Huh, Dongsung, and Emanuel Todorov. "Real-time motor control using recurrent neural networks". In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2009. http://dx.doi.org/10.1109/adprl.2009.4927524.

9

WARLAUMONT, ANNE S. "REINFORCEMENT-MODULATED SELF-ORGANIZATION IN INFANT MOTOR SPEECH LEARNING". In Proceedings of the 13th Neural Computation and Psychology Workshop. WORLD SCIENTIFIC, 2013. http://dx.doi.org/10.1142/9789814458849_0009.

10

Kormushev, Petar, Sylvain Calinon, and Darwin G. Caldwell. "Robot motor skill coordination with EM-based Reinforcement Learning". In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5649089.

