Academic literature on the topic "Reinforcement Motor Learning"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Reinforcement Motor Learning".
Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Reinforcement Motor Learning"
Vassiliadis, Pierre, Gerard Derosiere, Cecile Dubuc, Aegryan Lete, Frederic Crevecoeur, Friedhelm C. Hummel, and Julie Duque. "Reward boosts reinforcement-based motor learning". iScience 24, no. 7 (July 2021): 102821. http://dx.doi.org/10.1016/j.isci.2021.102821.
Uehara, Shintaro, Firas Mawase, Amanda S. Therrien, Kendra M. Cherry-Allen, and Pablo Celnik. "Interactions between motor exploration and reinforcement learning". Journal of Neurophysiology 122, no. 2 (August 1, 2019): 797–808. http://dx.doi.org/10.1152/jn.00390.2018.
Sistani, Mohammad Bagher Naghibi, and Sadegh Hesari. "Decreasing Induction Motor Loss Using Reinforcement Learning". Journal of Automation and Control Engineering 3, no. 6 (2015): 13–17. http://dx.doi.org/10.12720/joace.4.1.13-17.
Palidis, Dimitrios J., Heather R. McGregor, Andrew Vo, Penny A. MacDonald, and Paul L. Gribble. "Null effects of levodopa on reward- and error-based motor adaptation, savings, and anterograde interference". Journal of Neurophysiology 126, no. 1 (July 1, 2021): 47–67. http://dx.doi.org/10.1152/jn.00696.2020.
Izawa, Jun, Toshiyuki Kondo, and Koji Ito. "Motor Learning Model through Reinforcement Learning with Neural Internal Model". Transactions of the Society of Instrument and Control Engineers 39, no. 7 (2003): 679–87. http://dx.doi.org/10.9746/sicetr1965.39.679.
Peters, Jan, and Stefan Schaal. "Reinforcement learning of motor skills with policy gradients". Neural Networks 21, no. 4 (May 2008): 682–97. http://dx.doi.org/10.1016/j.neunet.2008.02.003.
Sidarta, Ananda, John Komar, and David J. Ostry. "Clustering analysis of movement kinematics in reinforcement learning". Journal of Neurophysiology 127, no. 2 (February 1, 2022): 341–53. http://dx.doi.org/10.1152/jn.00229.2021.
Sidarta, Ananda, Floris T. van Vugt, and David J. Ostry. "Somatosensory working memory in human reinforcement-based motor learning". Journal of Neurophysiology 120, no. 6 (December 1, 2018): 3275–86. http://dx.doi.org/10.1152/jn.00442.2018.
Tian, Mengqi, Ke Wang, Hongyu Lv, and Wubin Shi. "Reinforcement learning control method of torque stability of three-phase permanent magnet synchronous motor". Journal of Physics: Conference Series 2183, no. 1 (January 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2183/1/012024.
Uehara, Shintaro, Firas Mawase, and Pablo Celnik. "Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms". Cerebral Cortex 28, no. 10 (September 14, 2017): 3478–90. http://dx.doi.org/10.1093/cercor/bhx214.
Theses on the topic "Reinforcement Motor Learning"
Zhang, Fangyi. "Learning real-world visuo-motor policies from simulation". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121471/1/Fangyi%20Zhang%20Thesis.pdf.
De La Bourdonnaye, François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics". Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.
Texto completoThe thesis is focused on learning a complex manipulation robotics task using little knowledge. More precisely, the concerned task consists in reaching an object with a serial arm and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms suit well to this objective. Indeed, reinforcement learning allows to learn sensori-motor mappings while dispensing with dynamics. Besides, deep learning allows to dispense with handcrafted features for the state spacerepresentation. However, it is difficult to specify the objectives of the learned task without requiring human supervision. Some solutions imply expert demonstrations or shaping rewards to guiderobots towards its objective. The latter is generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires the knowledge of a goal state. Decomposing the whole complex into simpler sub tasks can also be utilized (hierarchical learning) but does notnecessarily imply a lack of human supervision. Alternate approaches which use several agents in parallel to increase the probability of success can be used but are costly. In our approach,we decompose the whole reaching task into three simpler sub tasks while taking inspiration from the human behavior. Indeed, humans first look at an object before reaching it. The first learned task is an object fixation task which is aimed at localizing the object in the 3D space. This is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in learning jointly end-effector binocular fixations and a hand-eye coordination function. This is also learned using a similar set-up and is aimed at localizing the end-effector in the 3D space. 
The third task uses the two prior learned skills to learn to reach an object and uses the same requirements as the two prior tasks: it hardly requires supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision
Wang, Jiexin. "Policy Hyperparameter Exploration for Behavioral Learning of Smartphone Robots". 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225744.
Frömer, Romy. "Learning to throw". Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17427.
Feedback, training schedule, and individual differences between learners influence the acquisition of motor skills and were investigated in the present thesis. A special focus was on the brain processes underlying feedback processing and motor preparation, investigated using event-related potentials (ERPs). 120 participants trained to throw at virtual targets and were tested for retention and transfer. Training schedule was manipulated, with half of the participants practicing under high contextual interference (CI) (randomized training) and the other half under low CI (blocked training). In a follow-up online study, 80% of the participants completed a subset of the Raven advanced progressive matrices, testing reasoning ability. Under high CI, participants' reasoning ability was related to a higher performance increase during training and higher subsequent performance in retention and transfer. Similar effects in late stages of low CI training indicate that variability is a necessary prerequisite for the beneficial effects of reasoning ability. We conclude that CI affects the amount of variability of practice across the course of training and the abstraction of rules (Study 1). Differential learning effects on ERPs in the preparatory phase support this interpretation. High CI shows a larger decline in attention- and control-related ERPs than low CI. CNV amplitude, as a measure of motor preparatory activity, increases with learning only when the attention demands of training and retention are similar, as in low CI training. This points to two parallel mechanisms in motor learning, with a cognitive and a motor processor mutually contributing to CNV amplitude (Study 2). Within the framework of the "reinforcement learning theory of the error-related negativity", we showed that positive performance feedback is processed gradually and that this processing is reflected in varying amplitudes of the reward positivity (Study 3).
Together, these results provide new insights into motor learning.
Paquier, Williams. "Apprentissage ouvert de représentations et de fonctionnalités en robotique : analyse, modèles et implémentation" [Open-ended learning of representations and functionalities in robotics: analysis, models, and implementation]. PhD thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00009324.
Trska, Robert. "Motor expectancy: the modulation of the reward positivity in a reinforcement learning motor task". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9992.
Sendhilnathan, Naveen. "The role of the cerebellum in reinforcement learning". Thesis, 2021. https://doi.org/10.7916/d8-p13c-3955.
Krigolson, Olave. "Hierarchical error processing during motor control". Thesis, 2007. http://hdl.handle.net/1828/239.
Texto completoLibros sobre el tema "Reinforcement Motor Learning"
The contextual interference effect in learning an open motor skill. 1986.
The contextual interference effect in learning an open motor skill. 1988.
The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1991.
The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1990.
Herreros, Ivan. Learning and control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0026.
Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.
Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1990.
Yun, Chi-Hong. Pre- and post-knowledge of results intervals and motor performance of mentally retarded individuals. 1989.
Book chapters on the topic "Reinforcement Motor Learning"
Mannes, Christian. "Learning Sensory-Motor Coordination by Experimentation and Reinforcement Learning". In Konnektionismus in Artificial Intelligence und Kognitionsforschung, 95–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-76070-9_10.
Manjunatha, Hemanth, and Ehsan T. Esfahani. "Application of Reinforcement and Deep Learning Techniques in Brain–Machine Interfaces". In Advances in Motor Neuroprostheses, 1–14. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38740-2_1.
Lohse, Keith, Matthew Miller, Mariane Bacelar, and Olav Krigolson. "Errors, rewards, and reinforcement in motor skill learning". In Skill Acquisition in Sport, 3rd ed., 39–60. New York: Routledge, 2019. http://dx.doi.org/10.4324/9781351189750-3.
Lane, Stephen H., David A. Handelman, and Jack J. Gelfand. "Modulation of Robotic Motor Synergies Using Reinforcement Learning Optimization". In Neural Networks in Robotics, 521–38. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3180-7_29.
Kober, Jens, and Jan Peters. "Reinforcement Learning to Adjust Parametrized Motor Primitives to New Situations". In Springer Tracts in Advanced Robotics, 119–47. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03194-1_5.
Kober, Jens, Betty Mohler, and Jan Peters. "Imitation and Reinforcement Learning for Motor Primitives with Perceptual Coupling". In Studies in Computational Intelligence, 209–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05181-4_10.
Patil, Gaurav, Patrick Nalepka, Lillian Rigoli, Rachel W. Kallen, and Michael J. Richardson. "Dynamical Perceptual-Motor Primitives for Better Deep Reinforcement Learning Agents". In Lecture Notes in Computer Science, 176–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85739-4_15.
Coulom, Rémi. "Feedforward Neural Networks in Reinforcement Learning Applied to High-Dimensional Motor Control". In Lecture Notes in Computer Science, 403–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36169-3_32.
Koprinkova-Hristova, Petia, and Nadejda Bocheva. "Spike Timing Neural Model of Eye Movement Motor Response with Reinforcement Learning". In Advanced Computing in Industrial Mathematics, 139–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71616-5_14.
Simpson, Thomas G., and Karen Rafferty. "Evaluating the Effect of Reinforcement Haptics on Motor Learning and Cognitive Workload in Driver Training". In Lecture Notes in Computer Science, 203–11. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58465-8_16.
Conference papers on the topic "Reinforcement Motor Learning"
Peters, J., and S. Schaal. "Reinforcement Learning for Parameterized Motor Primitives". In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246662.
Fung, Bowen, Xin Sui, Colin Camerer, and Dean Mobbs. "Reinforcement learning predicts frustration-related motor invigoration". In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1020-0.
Liu, Kainan, Xiaoshi Cai, Xiaojun Ban, and Jian Zhang. "Galvanometer Motor Control Based on Reinforcement Learning". In 2022 5th International Conference on Intelligent Autonomous Systems (ICoIAS). IEEE, 2022. http://dx.doi.org/10.1109/icoias56028.2022.9931291.
Shinohara, Daisuke, Takamitsu Matsubara, and Masatsugu Kidode. "Learning motor skills with non-rigid materials by reinforcement learning". In 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2011. http://dx.doi.org/10.1109/robio.2011.6181709.
Bujgoi, Gheorghe, and Dorin Sendrescu. "DC Motor Control based on Integral Reinforcement Learning". In 2022 23rd International Carpathian Control Conference (ICCC). IEEE, 2022. http://dx.doi.org/10.1109/iccc54292.2022.9805935.
Stulp, Freek, Jonas Buchli, Evangelos Theodorou, and Stefan Schaal. "Reinforcement learning of full-body humanoid motor skills". In 2010 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010). IEEE, 2010. http://dx.doi.org/10.1109/ichr.2010.5686320.
Cosmin, Bucur, and Tasu Sorin. "Reinforcement Learning for a Continuous DC Motor Controller". In 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2023. http://dx.doi.org/10.1109/ecai58194.2023.10193912.
Huh, Dongsung, and Emanuel Todorov. "Real-time motor control using recurrent neural networks". In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2009. http://dx.doi.org/10.1109/adprl.2009.4927524.
Warlaumont, Anne S. "Reinforcement-Modulated Self-Organization in Infant Motor Speech Learning". In Proceedings of the 13th Neural Computation and Psychology Workshop. World Scientific, 2013. http://dx.doi.org/10.1142/9789814458849_0009.
Kormushev, Petar, Sylvain Calinon, and Darwin G. Caldwell. "Robot motor skill coordination with EM-based Reinforcement Learning". In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5649089.