Academic literature on the topic 'Reinforcement Motor Learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reinforcement Motor Learning.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
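Under the hood, a generator like this renders one structured metadata record into many citation styles. As a minimal illustrative sketch (the field names and the `format_apa` helper below are assumptions for illustration, not this site's actual schema or code), here is how a journal-article record could be rendered in APA style:

```python
# Minimal sketch of citation formatting from structured metadata.
# The field names and function below are illustrative assumptions,
# not the site's actual schema or implementation.

def format_apa(e):
    """Render a journal-article entry as an APA-style reference string."""
    authors = e["authors"]  # e.g. ["Peters, J.", "Schaal, S."]
    if len(authors) == 1:
        author_str = authors[0]
    else:
        author_str = ", ".join(authors[:-1]) + ", & " + authors[-1]
    return (f"{author_str} ({e['year']}). {e['title']}. {e['journal']}, "
            f"{e['volume']}({e['issue']}), {e['pages']}. https://doi.org/{e['doi']}")

# Sample record, taken from the Peters & Schaal (2008) entry listed below.
PETERS_2008 = {
    "authors": ["Peters, J.", "Schaal, S."],
    "year": 2008,
    "title": "Reinforcement learning of motor skills with policy gradients",
    "journal": "Neural Networks",
    "volume": 21,
    "issue": 4,
    "pages": "682–697",
    "doi": "10.1016/j.neunet.2008.02.003",
}

print(format_apa(PETERS_2008))
```

Each additional style (MLA, Chicago, Harvard, Vancouver) would need its own template and its own author-name rules, which is why a per-style generator is more convenient than hand-editing references.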
Journal articles on the topic "Reinforcement Motor Learning"
Vassiliadis, Pierre, Gerard Derosiere, Cecile Dubuc, Aegryan Lete, Frederic Crevecoeur, Friedhelm C. Hummel, and Julie Duque. "Reward boosts reinforcement-based motor learning." iScience 24, no. 7 (July 2021): 102821. http://dx.doi.org/10.1016/j.isci.2021.102821.
Uehara, Shintaro, Firas Mawase, Amanda S. Therrien, Kendra M. Cherry-Allen, and Pablo Celnik. "Interactions between motor exploration and reinforcement learning." Journal of Neurophysiology 122, no. 2 (August 1, 2019): 797–808. http://dx.doi.org/10.1152/jn.00390.2018.
Sistani, Mohammad Bagher Naghibi, and Sadegh Hesari. "Decreasing Induction Motor Loss Using Reinforcement Learning." Journal of Automation and Control Engineering 3, no. 6 (2015): 13–17. http://dx.doi.org/10.12720/joace.4.1.13-17.
Palidis, Dimitrios J., Heather R. McGregor, Andrew Vo, Penny A. MacDonald, and Paul L. Gribble. "Null effects of levodopa on reward- and error-based motor adaptation, savings, and anterograde interference." Journal of Neurophysiology 126, no. 1 (July 1, 2021): 47–67. http://dx.doi.org/10.1152/jn.00696.2020.
Izawa, Jun, Toshiyuki Kondo, and Koji Ito. "Motor Learning Model through Reinforcement Learning with Neural Internal Model." Transactions of the Society of Instrument and Control Engineers 39, no. 7 (2003): 679–87. http://dx.doi.org/10.9746/sicetr1965.39.679.
Peters, Jan, and Stefan Schaal. "Reinforcement learning of motor skills with policy gradients." Neural Networks 21, no. 4 (May 2008): 682–97. http://dx.doi.org/10.1016/j.neunet.2008.02.003.
Sidarta, Ananda, John Komar, and David J. Ostry. "Clustering analysis of movement kinematics in reinforcement learning." Journal of Neurophysiology 127, no. 2 (February 1, 2022): 341–53. http://dx.doi.org/10.1152/jn.00229.2021.
Sidarta, Ananda, Floris T. van Vugt, and David J. Ostry. "Somatosensory working memory in human reinforcement-based motor learning." Journal of Neurophysiology 120, no. 6 (December 1, 2018): 3275–86. http://dx.doi.org/10.1152/jn.00442.2018.
Tian, Mengqi, Ke Wang, Hongyu Lv, and Wubin Shi. "Reinforcement learning control method of torque stability of three-phase permanent magnet synchronous motor." Journal of Physics: Conference Series 2183, no. 1 (January 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2183/1/012024.
Uehara, Shintaro, Firas Mawase, and Pablo Celnik. "Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms." Cerebral Cortex 28, no. 10 (September 14, 2017): 3478–90. http://dx.doi.org/10.1093/cercor/bhx214.
Dissertations / Theses on the topic "Reinforcement Motor Learning"
Zhang, Fangyi. "Learning real-world visuo-motor policies from simulation." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121471/1/Fangyi%20Zhang%20Thesis.pdf.
Full textDe, La Bourdonnaye François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.
Full textThe thesis is focused on learning a complex manipulation robotics task using little knowledge. More precisely, the concerned task consists in reaching an object with a serial arm and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms suit well to this objective. Indeed, reinforcement learning allows to learn sensori-motor mappings while dispensing with dynamics. Besides, deep learning allows to dispense with handcrafted features for the state spacerepresentation. However, it is difficult to specify the objectives of the learned task without requiring human supervision. Some solutions imply expert demonstrations or shaping rewards to guiderobots towards its objective. The latter is generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires the knowledge of a goal state. Decomposing the whole complex into simpler sub tasks can also be utilized (hierarchical learning) but does notnecessarily imply a lack of human supervision. Alternate approaches which use several agents in parallel to increase the probability of success can be used but are costly. In our approach,we decompose the whole reaching task into three simpler sub tasks while taking inspiration from the human behavior. Indeed, humans first look at an object before reaching it. The first learned task is an object fixation task which is aimed at localizing the object in the 3D space. This is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in learning jointly end-effector binocular fixations and a hand-eye coordination function. This is also learned using a similar set-up and is aimed at localizing the end-effector in the 3D space. 
The third task uses the two prior learned skills to learn to reach an object and uses the same requirements as the two prior tasks: it hardly requires supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision
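The core idea the abstract relies on, learning a reaching policy from reward alone, with no kinematic model, can be illustrated with a toy example. The sketch below is not the thesis's actual deep-RL implementation; it is a minimal REINFORCE-style policy-gradient loop on a 2-D reaching problem, and all constants (target location, exploration noise, learning rate) are illustrative assumptions:

```python
# Minimal REINFORCE sketch: learn to reach a 2-D target from reward alone,
# with no kinematic model -- only a scalar reward per episode.
# All constants (target, noise, learning rate) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.5, -0.3])   # unknown to the learner; only rewards are seen
theta = np.zeros(2)              # mean of the Gaussian policy over reach endpoints
sigma = 0.1                      # exploration noise (std of the policy)
alpha = 0.01                     # learning rate
baseline = 0.0                   # running average reward, for variance reduction

for episode in range(3000):
    action = theta + sigma * rng.standard_normal(2)   # sample a reach endpoint
    reward = -np.sum((action - target) ** 2)          # closer is better
    baseline += 0.05 * (reward - baseline)            # track the average reward
    # REINFORCE update: step along (reward - baseline) * grad log pi(action)
    theta += alpha * (reward - baseline) * (action - theta) / sigma**2

print("learned endpoint:", theta)   # ends up near the hidden target
```

The policy mean drifts toward the rewarded region purely through trial-and-error exploration, which is the sense in which reinforcement learning "dispenses with dynamics": no forward model of the arm appears anywhere in the update.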
Wang, Jiexin. "Policy Hyperparameter Exploration for Behavioral Learning of Smartphone Robots." Kyoto University, 2017. http://hdl.handle.net/2433/225744.
Frömer, Romy. "Learning to throw." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17427.
Full textFeedback, training schedule and individual differences between learners influence the acquisition of motor skills and were investigated in the present thesis. A special focus was on brain processes underlying feedback processing and motor preparation, investigated using event related potentials (ERPs). 120 participants trained to throw at virtual targets and were tested for retention and transfer. Training schedule was manipulated with half of the participants practicing under high contextual interference (CI) (randomized training) and the other half under low CI (blocked training). In a follow-up online study, 80% of the participants completed a subset of the Raven advanced progressive matrices, testing reasoning ability. Under high CI, participants’ reasoning ability was related to higher performance increase during training and higher subsequent performance in retention and transfer. Similar effects in late stages of low CI training indicate, that variability is a necessary prerequisite for beneficial effects of reasoning ability. We conclude, that CI affects the amount of variability of practice across the course of training and the abstraction of rules (Study 1). Differential learning effects on ERPs in the preparatory phase foster this interpretation. High CI shows a larger decline in attention- and control-related ERPs than low CI. CNV amplitude, as a measure of motor preparatory activity, increases with learning only, when attention demands of training and retention are similar, as in low CI training. This points to two parallel mechanisms in motor learning, with a cognitive and a motor processor, mutually contributing to CNV amplitude (Study 2). In the framework of the “reinforcement learning theory of the error related negativity”, we showed, that positive performance feedback is processed gradually and that this processing is reflected in varying amplitudes of reward positivity (Study 3). 
Together these results provide new insights on motor learning.
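The reinforcement learning theory invoked in Study 3 treats the reward positivity as scaling with a reward prediction error. A minimal Rescorla-Wagner sketch (the learning rate and reward values below are illustrative assumptions, not the thesis's fitted parameters) shows why the predicted ERP signal shrinks as an outcome becomes expected:

```python
# Rescorla-Wagner sketch of a reward prediction error (RPE).
# Under the RL theory of the ERN, reward-positivity amplitude is taken to
# scale with the RPE, so it should shrink as an outcome becomes predictable.
# Learning rate and reward values are illustrative.

alpha = 0.2       # learning rate
value = 0.0       # learned reward prediction
rpe_trace = []    # trial-by-trial prediction errors

for trial in range(30):
    reward = 1.0                  # a consistently rewarded action
    rpe = reward - value          # prediction error driving the ERP signal
    value += alpha * rpe          # value update toward the observed reward
    rpe_trace.append(rpe)

print(rpe_trace[0], rpe_trace[-1])  # RPE decays from 1.0 toward 0
```

Early rewards are surprising (large RPE, large predicted reward positivity); once the prediction converges on the delivered reward, the same feedback elicits almost no prediction error.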
Paquier, Williams. "Apprentissage ouvert de représentations et de fonctionnalités en robotique : analyse, modèles et implémentation" [Open-ended learning of representations and functionalities in robotics: analysis, models, and implementation]. PhD thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00009324.
Trska, Robert. "Motor expectancy: the modulation of the reward positivity in a reinforcement learning motor task." Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9992.
Sendhilnathan, Naveen. "The role of the cerebellum in reinforcement learning." Thesis, 2021. https://doi.org/10.7916/d8-p13c-3955.
Krigolson, Olave. "Hierarchical error processing during motor control." Thesis, 2007. http://hdl.handle.net/1828/239.
Books on the topic "Reinforcement Motor Learning"
The contextual interference effect in learning an open motor skill. 1986.
The contextual interference effect in learning an open motor skill. 1988.
The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1991.
The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1990.
Herreros, Ivan. Learning and control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0026.
Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.
Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1990.
Yun, Chi-Hong. Pre- and post-knowledge of results intervals and motor performance of mentally retarded individuals. 1989.
Book chapters on the topic "Reinforcement Motor Learning"
Mannes, Christian. "Learning Sensory-Motor Coordination by Experimentation and Reinforcement Learning." In Konnektionismus in Artificial Intelligence und Kognitionsforschung, 95–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-76070-9_10.
Manjunatha, Hemanth, and Ehsan T. Esfahani. "Application of Reinforcement and Deep Learning Techniques in Brain–Machine Interfaces." In Advances in Motor Neuroprostheses, 1–14. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38740-2_1.
Lohse, Keith, Matthew Miller, Mariane Bacelar, and Olav Krigolson. "Errors, rewards, and reinforcement in motor skill learning." In Skill Acquisition in Sport, 3rd ed., 39–60. New York: Routledge, 2019. http://dx.doi.org/10.4324/9781351189750-3.
Lane, Stephen H., David A. Handelman, and Jack J. Gelfand. "Modulation of Robotic Motor Synergies Using Reinforcement Learning Optimization." In Neural Networks in Robotics, 521–38. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3180-7_29.
Kober, Jens, and Jan Peters. "Reinforcement Learning to Adjust Parametrized Motor Primitives to New Situations." In Springer Tracts in Advanced Robotics, 119–47. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03194-1_5.
Kober, Jens, Betty Mohler, and Jan Peters. "Imitation and Reinforcement Learning for Motor Primitives with Perceptual Coupling." In Studies in Computational Intelligence, 209–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05181-4_10.
Patil, Gaurav, Patrick Nalepka, Lillian Rigoli, Rachel W. Kallen, and Michael J. Richardson. "Dynamical Perceptual-Motor Primitives for Better Deep Reinforcement Learning Agents." In Lecture Notes in Computer Science, 176–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85739-4_15.
Coulom, Rémi. "Feedforward Neural Networks in Reinforcement Learning Applied to High-Dimensional Motor Control." In Lecture Notes in Computer Science, 403–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36169-3_32.
Koprinkova-Hristova, Petia, and Nadejda Bocheva. "Spike Timing Neural Model of Eye Movement Motor Response with Reinforcement Learning." In Advanced Computing in Industrial Mathematics, 139–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71616-5_14.
Simpson, Thomas G., and Karen Rafferty. "Evaluating the Effect of Reinforcement Haptics on Motor Learning and Cognitive Workload in Driver Training." In Lecture Notes in Computer Science, 203–11. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58465-8_16.
Conference papers on the topic "Reinforcement Motor Learning"
Peters, J., and S. Schaal. "Reinforcement Learning for Parameterized Motor Primitives." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246662.
Fung, Bowen, Xin Sui, Colin Camerer, and Dean Mobbs. "Reinforcement learning predicts frustration-related motor invigoration." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1020-0.
Liu, Kainan, Xiaoshi Cai, Xiaojun Ban, and Jian Zhang. "Galvanometer Motor Control Based on Reinforcement Learning." In 2022 5th International Conference on Intelligent Autonomous Systems (ICoIAS). IEEE, 2022. http://dx.doi.org/10.1109/icoias56028.2022.9931291.
Shinohara, Daisuke, Takamitsu Matsubara, and Masatsugu Kidode. "Learning motor skills with non-rigid materials by reinforcement learning." In 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2011. http://dx.doi.org/10.1109/robio.2011.6181709.
Bujgoi, Gheorghe, and Dorin Sendrescu. "DC Motor Control based on Integral Reinforcement Learning." In 2022 23rd International Carpathian Control Conference (ICCC). IEEE, 2022. http://dx.doi.org/10.1109/iccc54292.2022.9805935.
Stulp, Freek, Jonas Buchli, Evangelos Theodorou, and Stefan Schaal. "Reinforcement learning of full-body humanoid motor skills." In 2010 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010). IEEE, 2010. http://dx.doi.org/10.1109/ichr.2010.5686320.
Cosmin, Bucur, and Tasu Sorin. "Reinforcement Learning for a Continuous DC Motor Controller." In 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2023. http://dx.doi.org/10.1109/ecai58194.2023.10193912.
Huh, Dongsung, and Emanuel Todorov. "Real-time motor control using recurrent neural networks." In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2009. http://dx.doi.org/10.1109/adprl.2009.4927524.
Warlaumont, Anne S. "Reinforcement-modulated self-organization in infant motor speech learning." In Proceedings of the 13th Neural Computation and Psychology Workshop. World Scientific, 2013. http://dx.doi.org/10.1142/9789814458849_0009.
Kormushev, Petar, Sylvain Calinon, and Darwin G. Caldwell. "Robot motor skill coordination with EM-based Reinforcement Learning." In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5649089.