A selection of scholarly literature on the topic "Reinforcement Motor Learning"

Format your citation in APA, MLA, Chicago, Harvard, and other styles

Browse lists of recent articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Reinforcement Motor Learning".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Reinforcement Motor Learning"

1

Vassiliadis, Pierre, Gerard Derosiere, Cecile Dubuc, Aegryan Lete, Frederic Crevecoeur, Friedhelm C. Hummel, and Julie Duque. "Reward boosts reinforcement-based motor learning." iScience 24, no. 7 (July 2021): 102821. http://dx.doi.org/10.1016/j.isci.2021.102821.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Uehara, Shintaro, Firas Mawase, Amanda S. Therrien, Kendra M. Cherry-Allen, and Pablo Celnik. "Interactions between motor exploration and reinforcement learning." Journal of Neurophysiology 122, no. 2 (August 1, 2019): 797–808. http://dx.doi.org/10.1152/jn.00390.2018.

Full text of the source
Abstract:
Motor exploration, a trial-and-error process in search for better motor outcomes, is known to serve a critical role in motor learning. This is particularly relevant during reinforcement learning, where actions leading to a successful outcome are reinforced while unsuccessful actions are avoided. Although early on motor exploration is beneficial to finding the correct solution, maintaining high levels of exploration later in the learning process might be deleterious. Whether and how the level of exploration changes over the course of reinforcement learning, however, remains poorly understood. Here we evaluated temporal changes in motor exploration while healthy participants learned a reinforcement-based motor task. We defined exploration as the magnitude of trial-to-trial change in movements as a function of whether the preceding trial resulted in success or failure. Participants were required to find the optimal finger-pointing direction using binary feedback of success or failure. We found that the magnitude of exploration gradually increased over time when participants were learning the task. Conversely, exploration remained low in participants who were unable to correctly adjust their pointing direction. Interestingly, exploration remained elevated when participants underwent a second training session, which was associated with faster relearning. These results indicate that the motor system may flexibly upregulate the extent of exploration during reinforcement learning as if acquiring a specific strategy to facilitate subsequent learning. Also, our findings showed that exploration affects reinforcement learning and vice versa, indicating an interactive relationship between them. Reinforcement-based tasks could be used as primers to increase exploratory behavior leading to more efficient subsequent learning. NEW & NOTEWORTHY Motor exploration, the ability to search for the correct actions, is critical to learning motor skills. Despite this, whether and how the level of exploration changes over the course of training remains poorly understood. We showed that exploration increased and remained high throughout training of a reinforcement-based motor task. Interestingly, elevated exploration persisted and facilitated subsequent learning. These results suggest that the motor system upregulates exploration as if learning a strategy to facilitate subsequent learning.
APA, Harvard, Vancouver, ISO, and other styles
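The exploration measure described in the abstract above, the trial-to-trial change in movement as a function of the preceding trial's outcome, is straightforward to compute. Below is a minimal Python sketch; the function name, array layout, and toy data are our own illustration, not the authors' code.

```python
import numpy as np

def exploration_by_outcome(angles, success):
    """Mean |trial-to-trial change| in pointing direction, split by
    whether the preceding trial was a success or a failure."""
    deltas = np.abs(np.diff(angles))           # change from trial t to t+1
    prev_success = success[:-1].astype(bool)   # outcome of trial t
    return {
        "after_success": deltas[prev_success].mean(),
        "after_failure": deltas[~prev_success].mean(),
    }

# Toy data: 200 pointing directions (degrees) and binary outcomes.
rng = np.random.default_rng(0)
angles = rng.normal(0.0, 5.0, size=200)
success = rng.random(200) < 0.5
print(exploration_by_outcome(angles, success))
```

On this account, larger changes after failure than after success indicate outcome-sensitive exploration of the kind the study tracked over training.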
3

Sistani, Mohammad Bagher Naghibi, and Sadegh Hesari. "Decreasing Induction Motor Loss Using Reinforcement Learning." Journal of Automation and Control Engineering 3, no. 6 (2015): 13–17. http://dx.doi.org/10.12720/joace.4.1.13-17.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Palidis, Dimitrios J., Heather R. McGregor, Andrew Vo, Penny A. MacDonald, and Paul L. Gribble. "Null effects of levodopa on reward- and error-based motor adaptation, savings, and anterograde interference." Journal of Neurophysiology 126, no. 1 (July 1, 2021): 47–67. http://dx.doi.org/10.1152/jn.00696.2020.

Full text of the source
Abstract:
Motor adaptation relies on multiple processes, including reinforcement of successful actions. Cognitive reinforcement learning is impaired by levodopa-induced disruption of dopamine function. We administered levodopa to healthy adults who participated in multiple motor adaptation tasks. We found no effects of levodopa on any component of motor adaptation. This suggests that motor adaptation may not depend on the same dopaminergic mechanisms as the cognitive forms of reinforcement learning that have been shown to be impaired by levodopa.
APA, Harvard, Vancouver, ISO, and other styles
5

Izawa, Jun, Toshiyuki Kondo, and Koji Ito. "Motor Learning Model through Reinforcement Learning with Neural Internal Model." Transactions of the Society of Instrument and Control Engineers 39, no. 7 (2003): 679–87. http://dx.doi.org/10.9746/sicetr1965.39.679.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Peters, Jan, and Stefan Schaal. "Reinforcement learning of motor skills with policy gradients." Neural Networks 21, no. 4 (May 2008): 682–97. http://dx.doi.org/10.1016/j.neunet.2008.02.003.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Sidarta, Ananda, John Komar, and David J. Ostry. "Clustering analysis of movement kinematics in reinforcement learning." Journal of Neurophysiology 127, no. 2 (February 1, 2022): 341–53. http://dx.doi.org/10.1152/jn.00229.2021.

Full text of the source
Abstract:
The choice of exploration versus exploitation is a fundamental problem in learning new motor skills through reinforcement. In this study, we employed a data-driven approach to characterize movements on a trial-by-trial basis with an unsupervised clustering algorithm. Using this technique, we found that changes in task demands and, in particular, in the required accuracy of movements, influenced the ratio of exploration to exploitation. This analysis framework provides an attractive tool to investigate mechanisms of explorative and exploitative behavior while studying motor learning.
APA, Harvard, Vancouver, ISO, and other styles
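As an illustration of the data-driven approach this abstract describes, the sketch below clusters hypothetical per-trial kinematic features with k-means and treats switches between movement clusters as a rough proxy for exploration. The feature set, the choice of k-means, and the switch-rate measure are our assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical kinematic features, one row per reaching trial
# (e.g., peak speed, movement time, endpoint x and y).
rng = np.random.default_rng(1)
trials = rng.normal(size=(300, 4))

# Unsupervised clustering assigns each trial to a movement type.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trials)

# Crude exploration proxy: how often consecutive trials fall into
# different movement clusters (exploitation would repeat a cluster).
switch_rate = np.mean(labels[1:] != labels[:-1])
print(f"cluster switch rate: {switch_rate:.2f}")
```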
8

Sidarta, Ananda, Floris T. van Vugt, and David J. Ostry. "Somatosensory working memory in human reinforcement-based motor learning." Journal of Neurophysiology 120, no. 6 (December 1, 2018): 3275–86. http://dx.doi.org/10.1152/jn.00442.2018.

Full text of the source
Abstract:
Recent studies using visuomotor adaptation and sequence learning tasks have assessed the involvement of working memory in the visuospatial domain. The capacity to maintain previously performed movements in working memory is perhaps even more important in reinforcement-based learning to repeat accurate movements and avoid mistakes. Using this kind of task in the present work, we tested the relationship between somatosensory working memory and motor learning. The first experiment involved separate memory and motor learning tasks. In the memory task, the participant’s arm was displaced in different directions by a robotic arm, and the participant was asked to judge whether a subsequent test direction was one of the previously presented directions. In the motor learning task, participants made reaching movements to a hidden visual target and were provided with positive feedback as reinforcement when the movement ended in the target zone. It was found that participants that had better somatosensory working memory showed greater motor learning. In a second experiment, we designed a new task in which learning and working memory trials were interleaved, allowing us to study participants’ memory for movements they performed as part of learning. As in the first experiment, we found that participants with better somatosensory working memory also learned more. Moreover, memory performance for successful movements was better than for movements that failed to reach the target. These results suggest that somatosensory working memory is involved in reinforcement motor learning and that this memory preferentially keeps track of reinforced movements. NEW & NOTEWORTHY The present work examined somatosensory working memory in reinforcement-based motor learning. Working memory performance was reliably correlated with the extent of learning. With the use of a paradigm in which learning and memory trials were interleaved, memory was assessed for movements performed during learning. Movements that received positive feedback were better remembered than movements that did not. Thus working memory does not track all movements equally but is biased to retain movements that were rewarded.
APA, Harvard, Vancouver, ISO, and other styles
9

Tian, Mengqi, Ke Wang, Hongyu Lv, and Wubin Shi. "Reinforcement learning control method of torque stability of three-phase permanent magnet synchronous motor." Journal of Physics: Conference Series 2183, no. 1 (January 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2183/1/012024.

Full text of the source
Abstract:
Regarding the control strategy of the permanent magnet synchronous motor, field-oriented control based on a PI controller suffers from instability of the output torque. In order to stabilize the output torque of the permanent magnet synchronous motor, this paper adopts reinforcement learning to improve the traditional PI controller. Finally, a new control method based on reinforcement learning is established in the MATLAB/Simulink simulation environment. The simulation results show that the reinforcement learning control method used in this paper can improve the stability of the output torque.
APA, Harvard, Vancouver, ISO, and other styles
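The abstract does not specify the reinforcement learning agent, so as a rough sketch of the idea the Python snippet below tunes PI gains against a stand-in torque-ripple cost using a simple accept-if-better search in place of a full RL algorithm. The plant model, gain values, and update rule are placeholders, not the authors' method.

```python
import numpy as np

def torque_ripple(kp, ki):
    """Stand-in plant: returns a torque-ripple cost for given PI gains.
    The actual study evaluates a PMSM model in MATLAB/Simulink."""
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2 + 0.01 * np.random.rand()

# Trial-and-error gain tuning: perturb the gains and keep the new
# values whenever they reduce the ripple cost (reward = -cost).
rng = np.random.default_rng(0)
kp, ki, best = 1.0, 0.1, float("inf")
for episode in range(500):
    dkp, dki = rng.normal(0.0, 0.1, size=2)
    cost = torque_ripple(kp + dkp, ki + dki)
    if cost < best:                      # improvement: accept new gains
        kp, ki, best = kp + dkp, ki + dki, cost
print(f"tuned gains: kp={kp:.2f}, ki={ki:.2f}, ripple cost={best:.4f}")
```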
10

Uehara, Shintaro, Firas Mawase, and Pablo Celnik. "Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms." Cerebral Cortex 28, no. 10 (September 14, 2017): 3478–90. http://dx.doi.org/10.1093/cercor/bhx214.

Full text of the source
Abstract:
Humans can acquire knowledge of new motor behavior via different forms of learning. The two forms most commonly studied have been the development of internal models based on sensory-prediction errors (error-based learning) and success-based feedback (reinforcement learning). Human behavioral studies suggest these are distinct learning processes, though the neurophysiological mechanisms involved have not been characterized. Here, we evaluated physiological markers from the cerebellum and the primary motor cortex (M1) using noninvasive brain stimulation while healthy participants trained on finger-reaching tasks. We manipulated the extent to which subjects relied on error-based or reinforcement mechanisms by providing either vector or binary feedback about task performance. Our results demonstrated a double dissociation: learning the task mainly via error-based mechanisms led to cerebellar plasticity modifications but not long-term potentiation (LTP)-like plasticity changes in M1, while learning a similar action via reinforcement mechanisms elicited M1 LTP-like plasticity but not cerebellar plasticity changes. Our findings indicate that learning complex motor behavior is mediated by the interplay of different forms of learning, weighing distinct neural mechanisms in M1 and the cerebellum. Our study provides insights for designing effective interventions to enhance human motor learning.
APA, Harvard, Vancouver, ISO, and other styles
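The key manipulation in this study, vector versus binary feedback, is easy to picture in code. Here is a minimal sketch of how the two feedback modes might be generated in a reaching task; the function name, the 5-degree tolerance, and the return formats are illustrative assumptions, not the authors' implementation.

```python
def feedback(cursor_angle, target_angle, mode, tol=5.0):
    """Vector feedback exposes the signed error (error-based learning);
    binary feedback only signals success (reinforcement learning)."""
    error = cursor_angle - target_angle
    if mode == "vector":
        return {"error_deg": error}        # full signed error is shown
    return {"success": abs(error) <= tol}  # only hit/miss is shown

print(feedback(47.0, 45.0, "vector"))  # {'error_deg': 2.0}
print(feedback(47.0, 45.0, "binary"))  # {'success': True}
```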

Dissertations on the topic "Reinforcement Motor Learning"

1

Zhang, Fangyi. "Learning real-world visuo-motor policies from simulation." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121471/1/Fangyi%20Zhang%20Thesis.pdf.

Full text of the source
Abstract:
This thesis explores how simulation can be used to create the large amount of data required to teach a robot certain hand-eye coordination skills. It advances the state-of-the-art of deep visuo-motor policy learning by introducing a new modular architecture, a novel reinforcement learning exploration strategy, and adversarial discriminative transfer.
APA, Harvard, Vancouver, ISO, and other styles
2

De La Bourdonnaye, François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.

Full text of the source
Abstract:
The thesis is focused on learning a complex manipulation robotics task using little prior knowledge. More precisely, the task consists in reaching an object with a serial arm, and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms suit this objective well. Indeed, reinforcement learning allows sensori-motor mappings to be learned while dispensing with dynamics models, and deep learning dispenses with handcrafted features for the state-space representation. However, it is difficult to specify the objectives of the learned task without human supervision. Some solutions rely on expert demonstrations or shaping rewards to guide robots towards the objective; the latter are generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires knowledge of a goal state. Decomposing the whole complex task into simpler subtasks (hierarchical learning) can also be utilized but does not necessarily imply a lack of human supervision. Alternative approaches that use several agents in parallel to increase the probability of success exist but are costly. In our approach, we decompose the whole reaching task into three simpler subtasks, taking inspiration from human behavior: humans first look at an object before reaching for it. The first learned task is an object fixation task, aimed at localizing the object in 3D space; it is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in jointly learning end-effector binocular fixations and a hand-eye coordination function; it is learned with a similar set-up and is aimed at localizing the end-effector in 3D space. The third task uses the two previously learned skills to learn to reach an object, under the same requirements as the two prior tasks: it hardly requires supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision.
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Jiexin. "Policy Hyperparameter Exploration for Behavioral Learning of Smartphone Robots." 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225744.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Frömer, Romy. "Learning to throw." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17427.

Full text of the source
Abstract:
Feedback, training schedule, and individual differences between learners influence the acquisition of motor skills and were investigated in the present thesis. A special focus was on the brain processes underlying feedback processing and motor preparation, investigated using event-related potentials (ERPs). 120 participants trained to throw at virtual targets and were tested for retention and transfer in a follow-up session. Training schedule was manipulated, with half of the participants practicing under high contextual interference (CI) (randomized training) and the other half under low CI (blocked training). In a follow-up online study, 80% of the participants completed a subset of the Raven advanced progressive matrices, testing reasoning ability. Under high CI, participants' reasoning ability was related to a higher performance increase during training and higher subsequent performance in retention and transfer. Similar effects in late stages of low CI training indicate that variability is a necessary prerequisite for beneficial effects of reasoning ability. We conclude that CI affects the amount of variability of practice across the course of training and thereby the abstraction of rules (Study 1). Differential learning effects on ERPs in the preparatory phase support this interpretation. High CI shows a larger decline in attention- and control-related ERPs than low CI. CNV amplitude, as a measure of motor preparatory activity, increases with learning only when the attention demands of training and retention are similar, as in low CI training. This points to two parallel mechanisms in motor learning, with a cognitive and a motor processor mutually contributing to CNV amplitude (Study 2). Within the framework of the reinforcement learning theory of the error-related negativity, we showed that positive performance feedback is processed gradually and that this processing is reflected in varying amplitudes of the reward positivity (Study 3). Together these results provide new insights into motor learning.
APA, Harvard, Vancouver, ISO, and other styles
5

Paquier, Williams. "Apprentissage ouvert de représentations et de fonctionnalités en robotique : analyse, modèles et implémentation." PhD thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00009324.

Full text of the source
Abstract:
The autonomous acquisition of representations and functionalities in robotics raises numerous theoretical problems. Today, autonomous robotic systems are designed around a predefined set of functionalities, and their representations of the world derive from a problem analysis and a modeling given in advance by the designers. This approach limits learning capabilities. In this thesis we propose an open-ended system of representations and functionalities. The system learns by experimenting with its environment and is guided by the increase of a value function; its objective is to act on the environment so as to reactivate representations for which it has learned a positive connotation. An analysis of the capacity to generalize the production of appropriate actions for these reactivations leads to a set of properties such a system must satisfy. The representation system consists of a network of similar processing units and uses position-based coding: the meaning of a unit's state depends on its position in the network, a principle with similarities to positional notation in numeration. A representation corresponds to the activation of a set of units. The system has been implemented in a software suite called NeuSter, which can simulate networks of several million units and a billion connections on heterogeneous clusters of POSIX machines. The first results validate the constraints derived from the analysis. Such a system can learn, within a single network, hierarchically and without supervision, detectors of edges and lines, corners, line terminations, faces, motion directions, rotations, expansions, and phonemes. NeuSter learns online using only its sensor data. It was tested on mobile robots for learning and tracking objects.
APA, Harvard, Vancouver, ISO, and other styles
6

Trska, Robert. "Motor expectancy: the modulation of the reward positivity in a reinforcement learning motor task." Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9992.

Full text of the source
Abstract:
An adage posits that we learn from our mistakes; however, this is not entirely true. According to reinforcement learning theory, we learn when the expectation of our actions differs from outcomes. Here, we examined whether expectancy-driven learning plays a role in motor learning. Given the vast amount of overlapping anatomy and circuitry within the brain with respect to reward and motor processes, it is appropriate to examine both motor control and expectancy processes within a single task. In the current study, participants performed a line-drawing task via tablet under conditions of changing expectancies. Participants were provided feedback in a reinforcement-learning manner, as positive (✓) or negative (x), based on their performance. Modulation of expected outcomes was reflected by changes in amplitude of the human event-related potential (ERP) known as the reward positivity. The reward positivity is thought to reflect phasic dopamine release from the mesolimbic dopaminergic system to the basal ganglia and cingulate cortex. Due to the overlapping circuitry of reward and motor pathways, another human ERP, the Bereitschaftspotential (BP), was examined. The BP is implicated in motor planning and execution; however, the late aspect of the BP shares similarity with the contingent negative variation (CNV). Current evidence demonstrates a relationship between expectancy and reward positivity amplitude in a motor learning context, as well as modulation of the BP under difficult task conditions. Behavioural data support prior literature and may suggest a connection between sensory motor prediction errors working in concert with reward prediction errors. Further evidence supports a frontal-medial evaluation system for motor errors. Additionally, results support prior evidence of motor plans being formed upon target observation and held in memory until motor execution, rather than being formed just before movement onset.
APA, Harvard, Vancouver, ISO, and other styles
7

Sendhilnathan, Naveen. "The role of the cerebellum in reinforcement learning." Thesis, 2021. https://doi.org/10.7916/d8-p13c-3955.

Full text of the source
Abstract:
How do we learn to establish associations between arbitrary visual cues (like a red light) and movements (like braking the car)? We investigated the neural correlates of visuomotor association learning in the mid-lateral cerebellum. Although the cerebellum has been considered to be a motor control center involved in monitoring and correcting motor error through supervised learning, in this thesis we show that its role can also be extended to non-motor learning. Specifically, when primates learned to associate arbitrary visual cues with well-learned stereotypic movements, the simple spikes of mid-lateral cerebellar Purkinje cells reported the outcome of the monkey's most recent decision during learning. The magnitude of this reinforcement error signal changed with learning, finally disappearing when the association had been overlearned. We modeled this change in neural activity with a model combining drift diffusion and reinforcement learning. The concurrent complex spikes, contrary to traditional theories, did not play the role of a teaching signal, but encoded the probability of error as a function of the state of learning; they also encoded features that indicate the beginning of a trial. Inactivating the mid-lateral cerebellum significantly affected the monkey's learning performance while leaving motor performance intact, because the mid-lateral cerebellum is in a loop with other cognitive processing centers of the brain, including the prefrontal cortex and the basal ganglia. Finally, we verified that the features we identified in primate experiments extend to humans by studying visuomotor association learning in humans with functional magnetic resonance imaging. In summary, through electrophysiological and causal experiments in monkeys, imaging in humans, computational models, and an anatomical framework, we delineate mechanisms through which the cerebellum can be involved in reinforcement learning and, specifically, in learning new visuomotor associations.
APA, Harvard, Vancouver, ISO, and other styles
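The thesis models the learning-related change in simple-spike activity with a combined drift diffusion and reinforcement learning model; that model is not reproduced here. As a much simpler stand-in for the behavioral side of the task, the sketch below shows a textbook reward-prediction-error learner acquiring arbitrary cue-to-movement associations. All parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cues, n_actions, alpha, beta = 2, 2, 0.2, 5.0
Q = np.zeros((n_cues, n_actions))   # cue-to-action association strengths
correct = np.array([0, 1])          # hidden visuomotor mapping to learn

for trial in range(200):
    cue = rng.integers(n_cues)
    logits = np.exp(beta * Q[cue])
    action = rng.choice(n_actions, p=logits / logits.sum())  # softmax choice
    reward = float(action == correct[cue])
    Q[cue, action] += alpha * (reward - Q[cue, action])      # RPE update
print(np.round(Q, 2))  # strengths converge toward the correct mapping
```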
8

Krigolson, Olave. "Hierarchical error processing during motor control." Thesis, 2007. http://hdl.handle.net/1828/239.

Full text of the source
Abstract:
The successful execution of goal-directed movement requires the evaluation of many levels of errors. On one hand, the motor system needs to be able to evaluate 'high-level' errors indicating the success or failure of a given movement. On the other hand, as a movement is executed the motor system also has to be able to correct for 'low-level' errors - an error in the initial motor command, or a change in the motor command necessary to compensate for an unexpected change in the movement environment. The goal of the present research was to provide electroencephalographic evidence that error processing during motor control is evaluated hierarchically. The present research demonstrated that high-level motor errors indicating the failure of a system goal elicited the error-related negativity, a component of the event-related brain potential (ERP) evoked by incorrect responses and error feedback. It also demonstrated that low-level motor errors are associated with a parietally distributed ERP component related to the focusing of visuo-spatial attention and context-updating. Finally, the present research includes a viable neural model for hierarchical error processing during motor control.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Reinforcement Motor Learning"

1

The contextual interference effect in learning an open motor skill. 1986.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

The contextual interference effect in learning an open motor skill. 1988.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1990.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Herreros, Ivan. Learning and control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0026.

Full text of the source
Abstract:
This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, reinforcement learning and operant conditioning, and unsupervised and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback versus anticipatory and adaptive control. Finally, it argues that this framework for translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function but also to enrich engineering solutions at the level of robot learning and control with insights coming from biology.
APA, Harvard, Vancouver, ISO, and other styles
6

Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1990.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Effects of cognitive learning strategies and reinforcement: On the acquisition of closed motor skills in older adults. 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Yun, Chi-Hong. Pre- and post-knowledge of results intervals and motor performance of mentally retarded individuals. 1989.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Reinforcement Motor Learning"

1

Mannes, Christian. "Learning Sensory-Motor Coordination by Experimentation and Reinforcement Learning." In Konnektionismus in Artificial Intelligence und Kognitionsforschung, 95–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-76070-9_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Manjunatha, Hemanth, and Ehsan T. Esfahani. "Application of Reinforcement and Deep Learning Techniques in Brain–Machine Interfaces." In Advances in Motor Neuroprostheses, 1–14. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38740-2_1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Lohse, Keith, Matthew Miller, Mariane Bacelar, and Olav Krigolson. "Errors, rewards, and reinforcement in motor skill learning." In Skill Acquisition in Sport, 3rd ed., 39–60. New York: Routledge, 2019. http://dx.doi.org/10.4324/9781351189750-3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Lane, Stephen H., David A. Handelman, and Jack J. Gelfand. "Modulation of Robotic Motor Synergies Using Reinforcement Learning Optimization." In Neural Networks in Robotics, 521–38. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3180-7_29.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Kober, Jens, and Jan Peters. "Reinforcement Learning to Adjust Parametrized Motor Primitives to New Situations." In Springer Tracts in Advanced Robotics, 119–47. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03194-1_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Kober, Jens, Betty Mohler, and Jan Peters. "Imitation and Reinforcement Learning for Motor Primitives with Perceptual Coupling." In Studies in Computational Intelligence, 209–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05181-4_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Patil, Gaurav, Patrick Nalepka, Lillian Rigoli, Rachel W. Kallen, and Michael J. Richardson. "Dynamical Perceptual-Motor Primitives for Better Deep Reinforcement Learning Agents." In Lecture Notes in Computer Science, 176–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85739-4_15.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Coulom, Rémi. "Feedforward Neural Networks in Reinforcement Learning Applied to High-Dimensional Motor Control." In Lecture Notes in Computer Science, 403–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36169-3_32.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Koprinkova-Hristova, Petia, and Nadejda Bocheva. "Spike Timing Neural Model of Eye Movement Motor Response with Reinforcement Learning." In Advanced Computing in Industrial Mathematics, 139–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71616-5_14.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Simpson, Thomas G., and Karen Rafferty. "Evaluating the Effect of Reinforcement Haptics on Motor Learning and Cognitive Workload in Driver Training." In Lecture Notes in Computer Science, 203–11. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58465-8_16.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Reinforcement Motor Learning"

1

Peters, J., and S. Schaal. "Reinforcement Learning for Parameterized Motor Primitives." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246662.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Fung, Bowen, Xin Sui, Colin Camerer, and Dean Mobbs. "Reinforcement learning predicts frustration-related motor invigoration." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1020-0.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Kainan, Xiaoshi Cai, Xiaojun Ban, and Jian Zhang. "Galvanometer Motor Control Based on Reinforcement Learning." In 2022 5th International Conference on Intelligent Autonomous Systems (ICoIAS). IEEE, 2022. http://dx.doi.org/10.1109/icoias56028.2022.9931291.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Shinohara, Daisuke, Takamitsu Matsubara, and Masatsugu Kidode. "Learning motor skills with non-rigid materials by reinforcement learning." In 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2011. http://dx.doi.org/10.1109/robio.2011.6181709.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Bujgoi, Gheorghe, and Dorin Sendrescu. "DC Motor Control based on Integral Reinforcement Learning." In 2022 23rd International Carpathian Control Conference (ICCC). IEEE, 2022. http://dx.doi.org/10.1109/iccc54292.2022.9805935.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Stulp, Freek, Jonas Buchli, Evangelos Theodorou, and Stefan Schaal. "Reinforcement learning of full-body humanoid motor skills." In 2010 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010). IEEE, 2010. http://dx.doi.org/10.1109/ichr.2010.5686320.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Cosmin, Bucur, and Tasu Sorin. "Reinforcement Learning for a Continuous DC Motor Controller." In 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2023. http://dx.doi.org/10.1109/ecai58194.2023.10193912.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Huh, Dongsung, and Emanuel Todorov. "Real-time motor control using recurrent neural networks." In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2009. http://dx.doi.org/10.1109/adprl.2009.4927524.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Warlaumont, Anne S. "Reinforcement-modulated self-organization in infant motor speech learning." In Proceedings of the 13th Neural Computation and Psychology Workshop. World Scientific, 2013. http://dx.doi.org/10.1142/9789814458849_0009.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Kormushev, Petar, Sylvain Calinon, and Darwin G. Caldwell. "Robot motor skill coordination with EM-based Reinforcement Learning." In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5649089.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles