A ready-made bibliography on "Reinforcement Motor Learning"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current journal articles, books, dissertations, abstracts, and other scholarly sources on "Reinforcement Motor Learning".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.
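The automatic reference generation described above amounts to rendering a work's metadata through per-style templates. A minimal, hypothetical sketch of that idea (the function name, field names, and the two simplified style templates are illustrative, not this site's actual implementation):

```python
def format_citation(meta: dict, style: str) -> str:
    """Render bibliographic metadata in a (simplified) citation style."""
    authors = meta["authors"]  # list of "Family, Given" strings
    if style == "APA":
        return (f"{'; '.join(authors)} ({meta['year']}). {meta['title']}. "
                f"{meta['journal']}, {meta['volume']}({meta['issue']}), "
                f"{meta['pages']}. {meta['doi']}")
    if style == "MLA":
        return (f"{authors[0]}, et al. \"{meta['title']}.\" {meta['journal']}, "
                f"vol. {meta['volume']}, no. {meta['issue']}, {meta['year']}, "
                f"pp. {meta['pages']}.")
    raise ValueError(f"unsupported style: {style}")

# Metadata taken from the first entry in the list below.
work = {
    "authors": ["Vassiliadis, Pierre", "Derosiere, Gerard"],
    "year": 2021,
    "title": "Reward boosts reinforcement-based motor learning",
    "journal": "iScience",
    "volume": 24, "issue": 7, "pages": "102821",
    "doi": "http://dx.doi.org/10.1016/j.isci.2021.102821",
}
print(format_citation(work, "APA"))
```

Real style guides have many more rules (author counts, italics, edition statements), so a production formatter would dispatch on far more metadata than this sketch does.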
Journal articles on "Reinforcement Motor Learning"
Vassiliadis, Pierre, Gerard Derosiere, Cecile Dubuc, Aegryan Lete, Frederic Crevecoeur, Friedhelm C. Hummel, and Julie Duque. "Reward boosts reinforcement-based motor learning". iScience 24, no. 7 (July 2021): 102821. http://dx.doi.org/10.1016/j.isci.2021.102821.
Uehara, Shintaro, Firas Mawase, Amanda S. Therrien, Kendra M. Cherry-Allen, and Pablo Celnik. "Interactions between motor exploration and reinforcement learning". Journal of Neurophysiology 122, no. 2 (August 1, 2019): 797–808. http://dx.doi.org/10.1152/jn.00390.2018.
Sistani, Mohammad Bagher Naghibi, and Sadegh Hesari. "Decreasing Induction Motor Loss Using Reinforcement Learning". Journal of Automation and Control Engineering 3, no. 6 (2015): 13–17. http://dx.doi.org/10.12720/joace.4.1.13-17.
Palidis, Dimitrios J., Heather R. McGregor, Andrew Vo, Penny A. MacDonald, and Paul L. Gribble. "Null effects of levodopa on reward- and error-based motor adaptation, savings, and anterograde interference". Journal of Neurophysiology 126, no. 1 (July 1, 2021): 47–67. http://dx.doi.org/10.1152/jn.00696.2020.
Izawa, Jun, Toshiyuki Kondo, and Koji Ito. "Motor Learning Model through Reinforcement Learning with Neural Internal Model". Transactions of the Society of Instrument and Control Engineers 39, no. 7 (2003): 679–87. http://dx.doi.org/10.9746/sicetr1965.39.679.
Peters, Jan, and Stefan Schaal. "Reinforcement learning of motor skills with policy gradients". Neural Networks 21, no. 4 (May 2008): 682–97. http://dx.doi.org/10.1016/j.neunet.2008.02.003.
Sidarta, Ananda, John Komar, and David J. Ostry. "Clustering analysis of movement kinematics in reinforcement learning". Journal of Neurophysiology 127, no. 2 (February 1, 2022): 341–53. http://dx.doi.org/10.1152/jn.00229.2021.
Sidarta, Ananda, Floris T. van Vugt, and David J. Ostry. "Somatosensory working memory in human reinforcement-based motor learning". Journal of Neurophysiology 120, no. 6 (December 1, 2018): 3275–86. http://dx.doi.org/10.1152/jn.00442.2018.
Tian, Mengqi, Ke Wang, Hongyu Lv, and Wubin Shi. "Reinforcement learning control method of torque stability of three-phase permanent magnet synchronous motor". Journal of Physics: Conference Series 2183, no. 1 (January 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2183/1/012024.
Uehara, Shintaro, Firas Mawase, and Pablo Celnik. "Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms". Cerebral Cortex 28, no. 10 (September 14, 2017): 3478–90. http://dx.doi.org/10.1093/cercor/bhx214.
Doctoral dissertations on "Reinforcement Motor Learning"
Zhang, Fangyi. "Learning real-world visuo-motor policies from simulation". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/121471/1/Fangyi%20Zhang%20Thesis.pdf.
De La Bourdonnaye, François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics". Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.
The thesis is focused on learning a complex manipulation robotics task using little knowledge. More precisely, the task consists in reaching an object with a serial arm, and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms suit this objective well: reinforcement learning allows sensori-motor mappings to be learned while dispensing with dynamics models, and deep learning dispenses with handcrafted features for the state space representation. However, it is difficult to specify the objectives of the learned task without human supervision. Some solutions rely on expert demonstrations or on shaping rewards to guide the robot towards its objective; the latter are generally computed using forward kinematics and handcrafted visual modules. Another class of solutions decomposes the complex task. Learning from easy missions can be used, but this requires knowledge of a goal state. Decomposing the whole complex task into simpler subtasks can also be used (hierarchical learning), but does not necessarily imply a lack of human supervision. Approaches that run several agents in parallel to increase the probability of success exist but are costly. In our approach, we decompose the whole reaching task into three simpler subtasks, taking inspiration from human behavior: humans first look at an object before reaching for it. The first learned task is an object fixation task aimed at localizing the object in 3D space; it is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in jointly learning end-effector binocular fixations and a hand-eye coordination function; it is learned with a similar set-up and is aimed at localizing the end-effector in 3D space.
The third task uses the two previously learned skills to learn to reach an object under the same requirements as the two prior tasks: it requires hardly any supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision.
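Deep reinforcement learning approaches like the one sketched in this abstract are built on policy-gradient updates. As a minimal, hypothetical illustration of that family of methods, here is a REINFORCE update with a batch-mean baseline on a toy 1-D "reaching" task (all names and numbers are illustrative, not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0            # policy mean: motor command ~ N(theta, sigma^2)
sigma, alpha = 0.5, 0.05
goal = 1.0             # target position in the toy 1-D reaching task

for step in range(500):
    actions = rng.normal(theta, sigma, size=20)   # sample motor commands
    rewards = -(actions - goal) ** 2              # shaped reward: closer is better
    grads = (actions - theta) / sigma ** 2        # d/dtheta of log N(action | theta)
    # subtracting the batch-mean reward is a simple variance-reducing baseline
    theta += alpha * np.mean((rewards - rewards.mean()) * grads)

print(f"{theta:.2f}")  # the policy mean ends up near the goal position
```

The shaped quadratic reward here is exactly the kind of hand-designed signal the thesis tries to avoid; its weakly supervised reward functions replace it, while the underlying gradient machinery stays of this form.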
Wang, Jiexin. "Policy Hyperparameter Exploration for Behavioral Learning of Smartphone Robots". 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225744.
Frömer, Romy. "Learning to throw". Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17427.
Feedback, training schedule, and individual differences between learners influence the acquisition of motor skills and were investigated in the present thesis. A special focus was on the brain processes underlying feedback processing and motor preparation, investigated using event-related potentials (ERPs). 120 participants trained to throw at virtual targets and were tested for retention and transfer. Training schedule was manipulated, with half of the participants practicing under high contextual interference (CI) (randomized training) and the other half under low CI (blocked training). In a follow-up online study, 80% of the participants completed a subset of the Raven advanced progressive matrices, testing reasoning ability. Under high CI, participants' reasoning ability was related to a higher performance increase during training and higher subsequent performance in retention and transfer. Similar effects in late stages of low CI training indicate that variability is a necessary prerequisite for the beneficial effects of reasoning ability. We conclude that CI affects the amount of variability of practice across the course of training and the abstraction of rules (Study 1). Differential learning effects on ERPs in the preparatory phase support this interpretation. High CI shows a larger decline in attention- and control-related ERPs than low CI. CNV amplitude, as a measure of motor preparatory activity, increases with learning only when the attention demands of training and retention are similar, as in low CI training. This points to two parallel mechanisms in motor learning, with a cognitive and a motor processor mutually contributing to CNV amplitude (Study 2). In the framework of the "reinforcement learning theory of the error-related negativity", we showed that positive performance feedback is processed gradually and that this processing is reflected in varying amplitudes of the reward positivity (Study 3).
Together these results provide new insights into motor learning.
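Study 3 in the abstract above appeals to the reinforcement learning theory of the error-related negativity, in which feedback-locked ERP amplitude is taken to track a reward prediction error. A minimal, illustrative prediction-error update of that general form (toy numbers, not data or a model from the thesis):

```python
value = 0.0     # learned value estimate of the outcome
alpha = 0.3     # learning rate

for reward in [1.0, 1.0, 0.0, 1.0]:   # a toy sequence of feedback outcomes
    delta = reward - value            # signed reward prediction error
    value += alpha * delta            # value estimate moves toward the outcome
    # on this account, reward positivity amplitude scales with positive delta,
    # so repeated successes (shrinking delta) predict shrinking amplitudes
```

This is why "positive performance feedback is processed gradually" is a natural prediction of the framework: as the value estimate converges, the same objective feedback produces ever smaller prediction errors.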
PAQUIER, Williams. "Apprentissage ouvert de représentations et de fonctionnalités en robotique : analyse, modèles et implémentation". PhD thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00009324.
Trska, Robert. "Motor expectancy: the modulation of the reward positivity in a reinforcement learning motor task". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9992.
Sendhilnathan, Naveen. "The role of the cerebellum in reinforcement learning". Thesis, 2021. https://doi.org/10.7916/d8-p13c-3955.
Krigolson, Olave. "Hierarchical error processing during motor control". Thesis, 2007. http://hdl.handle.net/1828/239.
Books on "Reinforcement Motor Learning"
The contextual interference effect in learning an open motor skill. 1986.
The contextual interference effect in learning an open motor skill. 1988.
The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1991.
The effect of competitive anxiety and reinforcement on the performance of collegiate student-athletes. 1990.
Herreros, Ivan. Learning and control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0026.
Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.
Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1990.
Effects of cognitive learning strategies and reinforcement on the acquisition of closed motor skills in older adults. 1991.
Effects of cognitive learning strategies and reinforcement: On the acquisition of closed motor skills in older adults. 1991.
Yun, Chi-Hong. Pre- and post-knowledge of results intervals and motor performance of mentally retarded individuals. 1989.
Book chapters on "Reinforcement Motor Learning"
Mannes, Christian. "Learning Sensory-Motor Coordination by Experimentation and Reinforcement Learning". In Konnektionismus in Artificial Intelligence und Kognitionsforschung, 95–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-76070-9_10.
Manjunatha, Hemanth, and Ehsan T. Esfahani. "Application of Reinforcement and Deep Learning Techniques in Brain–Machine Interfaces". In Advances in Motor Neuroprostheses, 1–14. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38740-2_1.
Lohse, Keith, Matthew Miller, Mariane Bacelar, and Olav Krigolson. "Errors, rewards, and reinforcement in motor skill learning". In Skill Acquisition in Sport, 39–60. Third edition. New York: Routledge, 2019. http://dx.doi.org/10.4324/9781351189750-3.
Lane, Stephen H., David A. Handelman, and Jack J. Gelfand. "Modulation of Robotic Motor Synergies Using Reinforcement Learning Optimization". In Neural Networks in Robotics, 521–38. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3180-7_29.
Kober, Jens, and Jan Peters. "Reinforcement Learning to Adjust Parametrized Motor Primitives to New Situations". In Springer Tracts in Advanced Robotics, 119–47. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03194-1_5.
Kober, Jens, Betty Mohler, and Jan Peters. "Imitation and Reinforcement Learning for Motor Primitives with Perceptual Coupling". In Studies in Computational Intelligence, 209–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05181-4_10.
Patil, Gaurav, Patrick Nalepka, Lillian Rigoli, Rachel W. Kallen, and Michael J. Richardson. "Dynamical Perceptual-Motor Primitives for Better Deep Reinforcement Learning Agents". In Lecture Notes in Computer Science, 176–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85739-4_15.
Coulom, Rémi. "Feedforward Neural Networks in Reinforcement Learning Applied to High-Dimensional Motor Control". In Lecture Notes in Computer Science, 403–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36169-3_32.
Koprinkova-Hristova, Petia, and Nadejda Bocheva. "Spike Timing Neural Model of Eye Movement Motor Response with Reinforcement Learning". In Advanced Computing in Industrial Mathematics, 139–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71616-5_14.
Simpson, Thomas G., and Karen Rafferty. "Evaluating the Effect of Reinforcement Haptics on Motor Learning and Cognitive Workload in Driver Training". In Lecture Notes in Computer Science, 203–11. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58465-8_16.
Conference abstracts on "Reinforcement Motor Learning"
Peters, J., and S. Schaal. "Reinforcement Learning for Parameterized Motor Primitives". In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246662.
Fung, Bowen, Xin Sui, Colin Camerer, and Dean Mobbs. "Reinforcement learning predicts frustration-related motor invigoration". In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1020-0.
Liu, Kainan, Xiaoshi Cai, Xiaojun Ban, and Jian Zhang. "Galvanometer Motor Control Based on Reinforcement Learning". In 2022 5th International Conference on Intelligent Autonomous Systems (ICoIAS). IEEE, 2022. http://dx.doi.org/10.1109/icoias56028.2022.9931291.
Shinohara, Daisuke, Takamitsu Matsubara, and Masatsugu Kidode. "Learning motor skills with non-rigid materials by reinforcement learning". In 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2011. http://dx.doi.org/10.1109/robio.2011.6181709.
Bujgoi, Gheorghe, and Dorin Sendrescu. "DC Motor Control based on Integral Reinforcement Learning". In 2022 23rd International Carpathian Control Conference (ICCC). IEEE, 2022. http://dx.doi.org/10.1109/iccc54292.2022.9805935.
Stulp, Freek, Jonas Buchli, Evangelos Theodorou, and Stefan Schaal. "Reinforcement learning of full-body humanoid motor skills". In 2010 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010). IEEE, 2010. http://dx.doi.org/10.1109/ichr.2010.5686320.
Cosmin, Bucur, and Tasu Sorin. "Reinforcement Learning for a Continuous DC Motor Controller". In 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2023. http://dx.doi.org/10.1109/ecai58194.2023.10193912.
Huh, Dongsung, and Emanuel Todorov. "Real-time motor control using recurrent neural networks". In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2009. http://dx.doi.org/10.1109/adprl.2009.4927524.
Warlaumont, Anne S. "Reinforcement-modulated self-organization in infant motor speech learning". In Proceedings of the 13th Neural Computation and Psychology Workshop. World Scientific, 2013. http://dx.doi.org/10.1142/9789814458849_0009.
Kormushev, Petar, Sylvain Calinon, and Darwin G. Caldwell. "Robot motor skill coordination with EM-based Reinforcement Learning". In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5649089.