Journal articles on the topic "Reinforcement Motor Learning"

To view other types of publications on this topic, follow the link: Reinforcement Motor Learning.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Explore the top 50 journal articles for research on the topic "Reinforcement Motor Learning".

Next to each entry in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its online abstract, provided that these details are present in its metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Vassiliadis, Pierre, Gerard Derosiere, Cecile Dubuc, Aegryan Lete, Frederic Crevecoeur, Friedhelm C. Hummel, and Julie Duque. "Reward boosts reinforcement-based motor learning." iScience 24, no. 7 (July 2021): 102821. http://dx.doi.org/10.1016/j.isci.2021.102821.

2

Uehara, Shintaro, Firas Mawase, Amanda S. Therrien, Kendra M. Cherry-Allen, and Pablo Celnik. "Interactions between motor exploration and reinforcement learning." Journal of Neurophysiology 122, no. 2 (August 1, 2019): 797–808. http://dx.doi.org/10.1152/jn.00390.2018.

Abstract:
Motor exploration, a trial-and-error process in search for better motor outcomes, is known to serve a critical role in motor learning. This is particularly relevant during reinforcement learning, where actions leading to a successful outcome are reinforced while unsuccessful actions are avoided. Although early on motor exploration is beneficial to finding the correct solution, maintaining high levels of exploration later in the learning process might be deleterious. Whether and how the level of exploration changes over the course of reinforcement learning, however, remains poorly understood. Here we evaluated temporal changes in motor exploration while healthy participants learned a reinforcement-based motor task. We defined exploration as the magnitude of trial-to-trial change in movements as a function of whether the preceding trial resulted in success or failure. Participants were required to find the optimal finger-pointing direction using binary feedback of success or failure. We found that the magnitude of exploration gradually increased over time when participants were learning the task. Conversely, exploration remained low in participants who were unable to correctly adjust their pointing direction. Interestingly, exploration remained elevated when participants underwent a second training session, which was associated with faster relearning. These results indicate that the motor system may flexibly upregulate the extent of exploration during reinforcement learning as if acquiring a specific strategy to facilitate subsequent learning. Also, our findings showed that exploration affects reinforcement learning and vice versa, indicating an interactive relationship between them. Reinforcement-based tasks could be used as primers to increase exploratory behavior leading to more efficient subsequent learning. NEW & NOTEWORTHY Motor exploration, the ability to search for the correct actions, is critical to learning motor skills. Despite this, whether and how the level of exploration changes over the course of training remains poorly understood. We showed that exploration increased and remained high throughout training of a reinforcement-based motor task. Interestingly, elevated exploration persisted and facilitated subsequent learning. These results suggest that the motor system upregulates exploration as if learning a strategy to facilitate subsequent learning.
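
The exploration measure described above (trial-to-trial change in movement conditioned on the outcome of the preceding trial) is easy to illustrate. The numpy sketch below computes it on synthetic data; the target direction, step sizes and trial counts are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def exploration_by_outcome(angles, rewards):
    """Trial-to-trial change in pointing direction, split by the outcome
    of the preceding trial (1 = success, 0 = failure)."""
    angles = np.asarray(angles, dtype=float)
    rewards = np.asarray(rewards, dtype=int)
    delta = np.abs(np.diff(angles))          # |change| from trial t to t+1
    prev = rewards[:-1]                      # outcome of trial t
    return {
        "after_success": delta[prev == 1].mean() if np.any(prev == 1) else np.nan,
        "after_failure": delta[prev == 0].mean() if np.any(prev == 0) else np.nan,
    }

# Synthetic session: the simulated learner makes larger changes after failures.
rng = np.random.default_rng(0)
angles, rewards = [0.0], []
for t in range(200):
    hit = abs(angles[-1] - 10.0) < 5.0       # hypothetical target direction of 10 deg
    rewards.append(int(hit))
    step = 0.5 if hit else 4.0               # exploit after success, explore after failure
    angles.append(angles[-1] + rng.normal(0, step))
print(exploration_by_outcome(angles[:-1], rewards))
```
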
3

Sistani, Mohammad Bagher Naghibi, and Sadegh Hesari. "Decreasing Induction Motor Loss Using Reinforcement Learning." Journal of Automation and Control Engineering 3, no. 6 (2015): 13–17. http://dx.doi.org/10.12720/joace.4.1.13-17.

4

Palidis, Dimitrios J., Heather R. McGregor, Andrew Vo, Penny A. MacDonald, and Paul L. Gribble. "Null effects of levodopa on reward- and error-based motor adaptation, savings, and anterograde interference." Journal of Neurophysiology 126, no. 1 (July 1, 2021): 47–67. http://dx.doi.org/10.1152/jn.00696.2020.

Abstract:
Motor adaptation relies on multiple processes including reinforcement of successful actions. Cognitive reinforcement learning is impaired by levodopa-induced disruption of dopamine function. We administered levodopa to healthy adults who participated in multiple motor adaptation tasks. We found no effects of levodopa on any component of motor adaptation. This suggests that motor adaptation may not depend on the same dopaminergic mechanisms as cognitive forms of reinforcement learning that have been shown to be impaired by levodopa.
5

IZAWA, Jun, Toshiyuki KONDO, and Koji ITO. "Motor Learning Model through Reinforcement Learning with Neural Internal Model." Transactions of the Society of Instrument and Control Engineers 39, no. 7 (2003): 679–87. http://dx.doi.org/10.9746/sicetr1965.39.679.

6

Peters, Jan, and Stefan Schaal. "Reinforcement learning of motor skills with policy gradients." Neural Networks 21, no. 4 (May 2008): 682–97. http://dx.doi.org/10.1016/j.neunet.2008.02.003.

7

Sidarta, Ananda, John Komar, and David J. Ostry. "Clustering analysis of movement kinematics in reinforcement learning." Journal of Neurophysiology 127, no. 2 (February 1, 2022): 341–53. http://dx.doi.org/10.1152/jn.00229.2021.

Abstract:
The choice of exploration versus exploitation is a fundamental problem in learning new motor skills through reinforcement. In this study, we employed a data-driven approach to characterize movements on a trial-by-trial basis with an unsupervised clustering algorithm. Using this technique, we found that changes in task demands and, in particular, in the required accuracy of movements, influenced the ratio of exploration to exploitation. This analysis framework provides an attractive tool to investigate mechanisms of explorative and exploitative behavior while studying motor learning.
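
As an illustration of the data-driven approach described above, the sketch below clusters synthetic per-trial kinematic features with k-means (scikit-learn) and labels the more variable cluster as "exploration". The feature set, cluster count and interpretation rule are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic per-trial features: [endpoint error (deg), peak speed, path curvature]
exploit = rng.normal([2.0, 0.8, 1.1], [1.0, 0.1, 0.1], size=(150, 3))   # tight, repeated solution
explore = rng.normal([8.0, 1.0, 1.6], [4.0, 0.3, 0.4], size=(50, 3))    # variable, searching
trials = np.vstack([exploit, explore])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trials)

# Interpret the cluster with the larger endpoint-error spread as "exploration".
spread = [trials[labels == k][:, 0].std() for k in (0, 1)]
explore_cluster = int(np.argmax(spread))
print(f"estimated exploration fraction: {np.mean(labels == explore_cluster):.2f}")
```
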
8

Sidarta, Ananda, Floris T. van Vugt, and David J. Ostry. "Somatosensory working memory in human reinforcement-based motor learning." Journal of Neurophysiology 120, no. 6 (December 1, 2018): 3275–86. http://dx.doi.org/10.1152/jn.00442.2018.

Abstract:
Recent studies using visuomotor adaptation and sequence learning tasks have assessed the involvement of working memory in the visuospatial domain. The capacity to maintain previously performed movements in working memory is perhaps even more important in reinforcement-based learning to repeat accurate movements and avoid mistakes. Using this kind of task in the present work, we tested the relationship between somatosensory working memory and motor learning. The first experiment involved separate memory and motor learning tasks. In the memory task, the participant’s arm was displaced in different directions by a robotic arm, and the participant was asked to judge whether a subsequent test direction was one of the previously presented directions. In the motor learning task, participants made reaching movements to a hidden visual target and were provided with positive feedback as reinforcement when the movement ended in the target zone. It was found that participants that had better somatosensory working memory showed greater motor learning. In a second experiment, we designed a new task in which learning and working memory trials were interleaved, allowing us to study participants’ memory for movements they performed as part of learning. As in the first experiment, we found that participants with better somatosensory working memory also learned more. Moreover, memory performance for successful movements was better than for movements that failed to reach the target. These results suggest that somatosensory working memory is involved in reinforcement motor learning and that this memory preferentially keeps track of reinforced movements. NEW & NOTEWORTHY The present work examined somatosensory working memory in reinforcement-based motor learning. Working memory performance was reliably correlated with the extent of learning. With the use of a paradigm in which learning and memory trials were interleaved, memory was assessed for movements performed during learning. Movements that received positive feedback were better remembered than movements that did not. Thus working memory does not track all movements equally but is biased to retain movements that were rewarded.
9

Tian, Mengqi, Ke Wang, Hongyu Lv, and Wubin Shi. "Reinforcement learning control method of torque stability of three-phase permanent magnet synchronous motor." Journal of Physics: Conference Series 2183, no. 1 (January 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2183/1/012024.

Abstract:
Regarding the control strategy of the permanent magnet synchronous motor, field-oriented control based on a PI controller suffers from instability of the output torque. In order to stabilize the output torque of the permanent magnet synchronous motor, this paper adopts reinforcement learning to improve the traditional PI controller. Finally, in the MATLAB/Simulink simulation environment, a new control method based on reinforcement learning is established. The simulation results show that the reinforcement learning control method used in this paper can improve the stability of the output torque.
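
The paper's controller lives in MATLAB/Simulink; as a rough Python analogue of the idea (reward-driven improvement of a PI controller), the sketch below tunes PI gains on an assumed first-order plant, with a simple reward-guided random search standing in for the RL agent.

```python
import numpy as np

def run_episode(kp, ki, dt=1e-3, T=1.0, ref=1.0):
    """Simulate a PI controller on a toy first-order 'torque' plant and
    return the negative tracking cost as reward."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        err = ref - y
        integ += err * dt
        u = kp * err + ki * integ
        y += dt * (-5.0 * y + 5.0 * u)        # assumed plant: dy/dt = -5y + 5u
        cost += dt * err ** 2
    return -cost

# Reward-driven random search over the PI gains (stand-in for the RL agent).
rng = np.random.default_rng(0)
gains = np.array([1.0, 1.0])                  # [Kp, Ki]
best_r = run_episode(*gains)
for _ in range(200):
    cand = np.clip(gains + rng.normal(0, 0.2, 2), 0.0, None)
    r = run_episode(*cand)
    if r > best_r:                            # keep gains that reduce tracking cost
        gains, best_r = cand, r
print("tuned gains:", gains.round(3), "reward:", round(best_r, 5))
```
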
10

Uehara, Shintaro, Firas Mawase, and Pablo Celnik. "Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms." Cerebral Cortex 28, no. 10 (September 14, 2017): 3478–90. http://dx.doi.org/10.1093/cercor/bhx214.

Abstract:
Abstract Humans can acquire knowledge of new motor behavior via different forms of learning. The two forms most commonly studied have been the development of internal models based on sensory-prediction errors (error-based learning) and success-based feedback (reinforcement learning). Human behavioral studies suggest these are distinct learning processes, though the neurophysiological mechanisms that are involved have not been characterized. Here, we evaluated physiological markers from the cerebellum and the primary motor cortex (M1) using noninvasive brain stimulations while healthy participants trained finger-reaching tasks. We manipulated the extent to which subjects rely on error-based or reinforcement by providing either vector or binary feedback about task performance. Our results demonstrated a double dissociation where learning the task mainly via error-based mechanisms leads to cerebellar plasticity modifications but not long-term potentiation (LTP)-like plasticity changes in M1; while learning a similar action via reinforcement mechanisms elicited M1 LTP-like plasticity but not cerebellar plasticity changes. Our findings indicate that learning complex motor behavior is mediated by the interplay of different forms of learning, weighing distinct neural mechanisms in M1 and the cerebellum. Our study provides insights for designing effective interventions to enhance human motor learning.
11

Holland, Peter, Olivier Codol, and Joseph M. Galea. "Contribution of explicit processes to reinforcement-based motor learning." Journal of Neurophysiology 119, no. 6 (June 1, 2018): 2241–55. http://dx.doi.org/10.1152/jn.00901.2017.

Abstract:
Despite increasing interest in the role of reward in motor learning, the underlying mechanisms remain ill defined. In particular, the contribution of explicit processes to reward-based motor learning is unclear. To address this, we examined subjects’ ( n = 30) ability to learn to compensate for a gradually introduced 25° visuomotor rotation with only reward-based feedback (binary success/failure). Only two-thirds of subjects ( n = 20) were successful at the maximum angle. The remaining subjects initially followed the rotation but after a variable number of trials began to reach at an insufficiently large angle and subsequently returned to near-baseline performance ( n = 10). Furthermore, those who were successful accomplished this via a large explicit component, evidenced by a reduction in reach angle when they were asked to remove any strategy they employed. However, both groups displayed a small degree of remaining retention even after the removal of this explicit component. All subjects made greater and more variable changes in reach angle after incorrect (unrewarded) trials. However, subjects who failed to learn showed decreased sensitivity to errors, even in the initial period in which they followed the rotation, a pattern previously found in parkinsonian patients. In a second experiment, the addition of a secondary mental rotation task completely abolished learning ( n = 10), while a control group replicated the results of the first experiment ( n = 10). These results emphasize a pivotal role of explicit processes during reinforcement-based motor learning, and the susceptibility of this form of learning to disruption has important implications for its potential therapeutic benefits. NEW & NOTEWORTHY We demonstrate that learning a visuomotor rotation with only reward-based feedback is principally accomplished via the development of a large explicit component. Furthermore, this form of learning is susceptible to disruption with a secondary task. The results suggest that future experiments utilizing reward-based feedback should aim to dissect the roles of implicit and explicit reinforcement learning systems. Therapeutic motor learning approaches based on reward should be aware of the sensitivity to disruption.
12

Warlaumont, Anne S., Gert Westermann, Eugene H. Buder, and D. Kimbrough Oller. "Prespeech motor learning in a neural network using reinforcement." Neural Networks 38 (February 2013): 64–75. http://dx.doi.org/10.1016/j.neunet.2012.11.012.

13

Therrien, Amanda S., Daniel M. Wolpert, and Amy J. Bastian. "Increasing Motor Noise Impairs Reinforcement Learning in Healthy Individuals." eNeuro 5, no. 3 (May 2018): ENEURO.0050-18.2018. http://dx.doi.org/10.1523/eneuro.0050-18.2018.

14

Hahnloser, Richard, and Anja Zai. "A computational view on motor exploration during reinforcement learning." IBRO Reports 6 (September 2019): S50. http://dx.doi.org/10.1016/j.ibror.2019.07.155.

15

Pyle, Ryan, and Robert Rosenbaum. "A Reservoir Computing Model of Reward-Modulated Motor Learning and Automaticity." Neural Computation 31, no. 7 (July 2019): 1430–61. http://dx.doi.org/10.1162/neco_a_01198.

Abstract:
Reservoir computing is a biologically inspired class of learning algorithms in which the intrinsic dynamics of a recurrent neural network are mined to produce target time series. Most existing reservoir computing algorithms rely on fully supervised learning rules, which require access to an exact copy of the target response, greatly reducing the utility of the system. Reinforcement learning rules have been developed for reservoir computing, but we find that they fail to converge on complex motor tasks. Current theories of biological motor learning pose that early learning is controlled by dopamine-modulated plasticity in the basal ganglia that trains parallel cortical pathways through unsupervised plasticity as a motor task becomes well learned. We developed a novel learning algorithm for reservoir computing that models the interaction between reinforcement and unsupervised learning observed in experiments. This novel learning algorithm converges on simulated motor tasks on which previous reservoir computing algorithms fail and reproduces experimental findings that relate Parkinson's disease and its treatments to motor learning. Hence, incorporating biological theories of motor learning improves the effectiveness and biological relevance of reservoir computing models.
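
A minimal sketch of the general idea of reward-modulated (rather than fully supervised) reservoir learning is given below: a node-perturbation-style rule updates a linear readout from a scalar reward, never seeing an exact copy of the target. The reservoir size, input signal, target trace and learning rates are assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, lr = 100, 200, 0.1

# Fixed random reservoir (echo-state style) driven by a simple phase input.
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N)) * 0.9
W_in = rng.normal(0, 1.0, (N, 2))
w_out = np.zeros(N)
phase = np.linspace(0, 2 * np.pi, T)
target = np.sin(2 * phase)                         # assumed target "motor" trace
r_bar = 0.0

for epoch in range(200):
    x = np.zeros(N)
    for t in range(T):
        u = np.array([np.sin(phase[t]), np.cos(phase[t])])
        x = np.tanh(W @ x + W_in @ u)              # reservoir state
        xi = 0.1 * rng.normal()                    # exploratory output noise
        z = w_out @ x + xi
        r = -(z - target[t]) ** 2                  # scalar reward; no explicit error vector
        w_out += lr * (r - r_bar) * xi * x         # reward-modulated (node-perturbation) update
        r_bar = 0.99 * r_bar + 0.01 * r            # running reward baseline
print("final mean squared error estimate:", round(-r_bar, 4))
```
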
16

Yu, Wanming, Chuanyu Yang, Christopher McGreavy, Eleftherios Triantafyllidis, Guillaume Bellegarda, Milad Shafiee, Auke Jan Ijspeert, and Zhibin Li. "Identifying important sensory feedback for learning locomotion skills." Nature Machine Intelligence 5, no. 8 (August 21, 2023): 919–32. http://dx.doi.org/10.1038/s42256-023-00701-w.

Abstract:
Robot motor skills can be acquired by deep reinforcement learning as neural networks to reflect state–action mapping. The selection of states has been demonstrated to be crucial for successful robot motor learning. However, because of the complexity of neural networks, human insights and engineering efforts are often required to select appropriate states through qualitative approaches, such as ablation studies, without a quantitative analysis of the state importance. Here we present a systematic saliency analysis that quantitatively evaluates the relative importance of different feedback states for motor skills learned through deep reinforcement learning. Our approach provides a guideline to identify the most essential feedback states for robot motor learning. By using only the important states including joint positions, gravity vector and base linear and angular velocities, we demonstrate that a simulated quadruped robot can learn various robust locomotion skills. We find that locomotion skills learned only with important states can achieve task performance comparable to the performance of those with more states. This work provides quantitative insights into the impacts of state observations on specific types of motor skills, enabling the learning of a wide range of motor skills with minimal sensing dependencies.
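
A toy stand-in for the saliency analysis described above: finite-difference sensitivity of a small, randomly initialized policy network's outputs with respect to each input state, averaged over sampled states. The state groups, dimensions and network are assumptions; a trained locomotion policy would replace the random one.

```python
import numpy as np

rng = np.random.default_rng(0)
state_names = ["joint_pos", "joint_vel", "gravity_vec", "base_lin_vel", "base_ang_vel"]
dims = [12, 12, 3, 3, 3]                      # assumed per-group dimensions
D = sum(dims)

# Tiny stand-in policy: two-layer tanh network mapping states to 12 "joint targets".
W1, b1 = rng.normal(0, 0.3, (64, D)), np.zeros(64)
W2, b2 = rng.normal(0, 0.3, (12, 64)), np.zeros(12)
policy = lambda s: W2 @ np.tanh(W1 @ s + b1) + b2

def saliency(policy, states, eps=1e-3):
    """Mean |d action / d state| per input dimension, by central finite differences."""
    sal = np.zeros(states.shape[1])
    for s in states:
        for i in range(len(s)):
            dp, dm = s.copy(), s.copy()
            dp[i] += eps
            dm[i] -= eps
            sal[i] += np.abs(policy(dp) - policy(dm)).mean() / (2 * eps)
    return sal / len(states)

sal = saliency(policy, rng.normal(size=(50, D)))
start = 0
for name, d in zip(state_names, dims):
    print(f"{name:13s} mean saliency = {sal[start:start + d].mean():.3f}")
    start += d
```
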
17

Yin, Fengyuan, Xiaoming Yuan, Zhiao Ma, and Xinyu Xu. "Vector Control of PMSM Using TD3 Reinforcement Learning Algorithm." Algorithms 16, no. 9 (August 24, 2023): 404. http://dx.doi.org/10.3390/a16090404.

Abstract:
Permanent magnet synchronous motor (PMSM) drive systems are commonly utilized in mobile electric drive systems due to their high efficiency, high power density, and low maintenance cost. To reduce the tracking error of the permanent magnet synchronous motor, a reinforcement learning (RL) control algorithm based on double delay deterministic gradient algorithm (TD3) is proposed. The physical modeling of PMSM is carried out in Simulink, and the current controller controlling id-axis and iq-axis in the current loop is replaced by a reinforcement learning controller. The optimal control network parameters were obtained through simulation learning, and DDPG, BP, and LQG algorithms were simulated and compared under the same conditions. In the experiment part, the trained RL network was compiled into C code according to the workflow with the help of rapid prototyping control, and then downloaded to the controller for testing. The measured output signal is consistent with the simulation results, which shows that the algorithm can significantly reduce the tracking error under the variable speed of the motor, making the system have a fast response.
18

McDougle, Samuel D., Matthew J. Boggess, Matthew J. Crossley, Darius Parvin, Richard B. Ivry, and Jordan A. Taylor. "Credit assignment in movement-dependent reinforcement learning." Proceedings of the National Academy of Sciences 113, no. 24 (May 31, 2016): 6797–802. http://dx.doi.org/10.1073/pnas.1523669113.

Abstract:
When a person fails to obtain an expected reward from an object in the environment, they face a credit assignment problem: Did the absence of reward reflect an extrinsic property of the environment or an intrinsic error in motor execution? To explore this problem, we modified a popular decision-making task used in studies of reinforcement learning, the two-armed bandit task. We compared a version in which choices were indicated by key presses, the standard response in such tasks, to a version in which the choices were indicated by reaching movements, which affords execution failures. In the key press condition, participants exhibited a strong risk aversion bias; strikingly, this bias reversed in the reaching condition. This result can be explained by a reinforcement model wherein movement errors influence decision-making, either by gating reward prediction errors or by modifying an implicit representation of motor competence. Two further experiments support the gating hypothesis. First, we used a condition in which we provided visual cues indicative of movement errors but informed the participants that trial outcomes were independent of their actual movements. The main result was replicated, indicating that the gating process is independent of participants’ explicit sense of control. Second, individuals with cerebellar degeneration failed to modulate their behavior between the key press and reach conditions, providing converging evidence of an implicit influence of movement error signals on reinforcement learning. These results provide a mechanistically tractable solution to the credit assignment problem.
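
The gating hypothesis described above can be sketched as a small modification of a standard bandit learner: reward prediction errors are simply not applied on trials attributed to a motor execution failure. All parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.zeros(2)                    # value of the two "arms" (reach targets)
alpha, beta = 0.3, 3.0             # learning rate, softmax inverse temperature
p_reward = [0.7, 0.3]              # latent reward probabilities
p_exec_fail = 0.2                  # probability the reach itself misses

for trial in range(500):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()
    a = rng.choice(2, p=p)
    exec_error = rng.random() < p_exec_fail          # did the movement miss the target?
    reward = 0.0 if exec_error else float(rng.random() < p_reward[a])
    rpe = reward - Q[a]
    if not exec_error:                               # gating: motor errors do not update value
        Q[a] += alpha * rpe
print("learned values:", Q.round(2))
```
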
19

Maia, T. "A Reinforcement-learning Account of Tourette Syndrome." European Psychiatry 41, S1 (April 2017): S10. http://dx.doi.org/10.1016/j.eurpsy.2017.01.083.

Abstract:
Background: Tourette syndrome (TS) has long been thought to involve dopaminergic disturbances, given the effectiveness of antipsychotics in diminishing tics. Molecular-imaging studies have, by and large, confirmed that there are specific alterations in the dopaminergic system in TS. In parallel, multiple lines of evidence have implicated the motor cortico-basal ganglia-thalamo-cortical (CBGTC) loop in TS. Finally, several studies demonstrate that patients with TS exhibit exaggerated habit learning. This talk will present a computational theory of TS that ties together these multiple findings. Methods: The computational theory builds on computational reinforcement-learning models, and more specifically on a recent model of the role of the direct and indirect basal-ganglia pathways in learning from positive and negative outcomes, respectively. Results: A model defined by a small set of equations that characterize the role of dopamine in modulating learning and excitability in the direct and indirect pathways explains, in an integrated way: (1) the role of dopamine in the development of tics; (2) the relation between dopaminergic disturbances, involvement of the motor CBGTC loop, and excessive habit learning in TS; (3) the mechanism of action of antipsychotics in TS; and (4) the psychological and neural mechanisms of action of habit-reversal training, the main behavioral therapy for TS. Conclusions: A simple computational model, thoroughly grounded on computational theory and basic-science findings concerning dopamine and the basal ganglia, provides an integrated, rigorous mathematical explanation for a broad range of empirical findings in TS. Disclosure of interest: The author has not supplied his declaration of competing interest.
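
The class of model referred to above (direct and indirect pathways learning preferentially from positive and negative outcomes, with dopamine scaling their excitability) can be caricatured in a few lines. The sketch below is not the published equations; the gains, learning rule and prediction-error definition are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 2
G = np.ones(n_actions) * 0.5        # direct-pathway ("Go") weights
N = np.ones(n_actions) * 0.5        # indirect-pathway ("NoGo") weights
alpha, beta = 0.1, 3.0
dopamine = 1.5                      # >1 boosts Go excitability, dampens NoGo (assumed gain)
p_reward = [0.8, 0.2]

for trial in range(500):
    act_value = dopamine * G - (1.0 / dopamine) * N     # dopamine modulates pathway excitability
    p = np.exp(beta * act_value) / np.exp(beta * act_value).sum()
    a = rng.choice(n_actions, p=p)
    delta = float(rng.random() < p_reward[a]) - 0.5      # crude prediction error around 0.5
    G[a] += alpha * max(delta, 0.0)                      # direct pathway learns from positive outcomes
    N[a] += alpha * max(-delta, 0.0)                     # indirect pathway learns from negative outcomes
print("Go weights:", G.round(2), "NoGo weights:", N.round(2))
```
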
20

Guzman, Luis, Vassilios Morellas, and Nikolaos Papanikolopoulos. "Robotic Embodiment of Human-Like Motor Skills via Reinforcement Learning." IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 3711–17. http://dx.doi.org/10.1109/lra.2022.3147453.

21

Chai, Jiazheng, and Mitsuhiro Hayashibe. "Motor Synergy Development in High-Performing Deep Reinforcement Learning Algorithms." IEEE Robotics and Automation Letters 5, no. 2 (April 2020): 1271–78. http://dx.doi.org/10.1109/lra.2020.2968067.

22

Arie, Hiroaki, Tetsuya Ogata, Jun Tani, and Shigeki Sugano. "Reinforcement learning of a continuous motor sequence with hidden states." Advanced Robotics 21, no. 10 (January 2007): 1215–29. http://dx.doi.org/10.1163/156855307781389365.

23

Lu, Huimin, Yujie Li, Shenglin Mu, Dong Wang, Hyoungseop Kim, and Seiichi Serikawa. "Motor Anomaly Detection for Unmanned Aerial Vehicles Using Reinforcement Learning." IEEE Internet of Things Journal 5, no. 4 (August 2018): 2315–22. http://dx.doi.org/10.1109/jiot.2017.2737479.

24

Kober, Jens, Andreas Wilhelm, Erhan Oztop, and Jan Peters. "Reinforcement learning to adjust parametrized motor primitives to new situations." Autonomous Robots 33, no. 4 (April 5, 2012): 361–79. http://dx.doi.org/10.1007/s10514-012-9290-3.

25

Yang, Yuguang, Michael A. Bevan, and Bo Li. "Micro/Nano Motor Navigation and Localization via Deep Reinforcement Learning." Advanced Theory and Simulations 3, no. 6 (April 28, 2020): 2000034. http://dx.doi.org/10.1002/adts.202000034.

26

Korivand, Soroush, Nader Jalili, and Jiaqi Gong. "Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling." Sensors 23, no. 5 (March 1, 2023): 2698. http://dx.doi.org/10.3390/s23052698.

Abstract:
Locomotor impairment is a highly prevalent and significant source of disability and significantly impacts the quality of life of a large portion of the population. Despite decades of research on human locomotion, challenges remain in simulating human movement to study the features of musculoskeletal drivers and clinical conditions. Most recent efforts to utilize reinforcement learning (RL) techniques are promising in the simulation of human locomotion and reveal musculoskeletal drives. However, these simulations often fail to mimic natural human locomotion because most reinforcement strategies have yet to consider any reference data regarding human movement. To address these challenges, in this study, we designed a reward function based on the trajectory optimization rewards (TOR) and bio-inspired rewards, which includes the rewards obtained from reference motion data captured by a single Inertial Moment Unit (IMU) sensor. The sensor was equipped on the participants’ pelvis to capture reference motion data. We also adapted the reward function by leveraging previous research on walking simulations for TOR. The experimental results showed that the simulated agents with the modified reward function performed better in mimicking the collected IMU data from participants, which means that the simulated human locomotion was more realistic. As a bio-inspired defined cost, IMU data enhanced the agent’s capacity to converge during the training process. As a result, the models’ convergence was faster than those developed without reference motion data. Consequently, human locomotion can be simulated more quickly and in a broader range of environments, with a better simulation performance.
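
A minimal sketch of a composite reward of the kind described above, mixing a trajectory-optimization-style term with an imitation term computed from reference pelvis IMU data. The weights, the exponential similarity kernel and the signal names are assumptions, not the paper's exact reward.

```python
import numpy as np

def composite_reward(sim_pelvis_acc, ref_pelvis_acc, effort, forward_vel,
                     w_imu=0.5, w_tor=0.5):
    """Combine a trajectory-optimization-style reward (speed minus effort)
    with an imitation reward that tracks reference IMU data."""
    imu_error = np.linalg.norm(np.asarray(sim_pelvis_acc) - np.asarray(ref_pelvis_acc))
    r_imitation = np.exp(-2.0 * imu_error ** 2)        # assumed similarity kernel
    r_tor = forward_vel - 0.1 * effort                 # assumed task/effort trade-off
    return w_imu * r_imitation + w_tor * r_tor

# Example step: simulated agent slightly off the recorded pelvis acceleration.
print(composite_reward(sim_pelvis_acc=[0.1, 0.0, 9.9],
                       ref_pelvis_acc=[0.0, 0.0, 9.8],
                       effort=0.4, forward_vel=1.2))
```
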
27

Cho, Nam Jun, Sang Hyoung Lee, Jong Bok Kim, and Il Hong Suh. "Learning, Improving, and Generalizing Motor Skills for the Peg-in-Hole Tasks Based on Imitation Learning and Self-Learning." Applied Sciences 10, no. 8 (April 15, 2020): 2719. http://dx.doi.org/10.3390/app10082719.

Abstract:
We propose a framework based on imitation learning and self-learning to enable robots to learn, improve, and generalize motor skills. The peg-in-hole task is important in manufacturing assembly work. Two motor skills for the peg-in-hole task are targeted: “hole search” and “peg insertion”. The robots learn initial motor skills from human demonstrations and then improve and/or generalize them through reinforcement learning (RL). An initial motor skill is represented as a concatenation of the parameters of a hidden Markov model (HMM) and a dynamic movement primitive (DMP) to classify input signals and generate motion trajectories. Reactions are classified as familiar or unfamiliar (i.e., modeled or not modeled), and initial motor skills are improved to solve familiar reactions and generalized to solve unfamiliar reactions. The proposed framework includes processes, algorithms, and reward functions that can be used for various motor skill types. To evaluate our framework, the motor skills were performed using an actual robotic arm and two reward functions for RL. To verify the learning and improving/generalizing processes, we successfully applied our framework to different shapes of pegs and holes. Moreover, the execution time steps and path optimization of RL were evaluated experimentally.
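
The trajectory-generating half of the skill representation, the dynamic movement primitive (DMP), can be sketched compactly. Below is a standard one-dimensional discrete DMP integrator; in the framework described above, the basis-function weights would come from demonstration and then be refined by RL, whereas here they are simply set to zero.

```python
import numpy as np

def dmp_rollout(w, x0=0.0, g=1.0, tau=1.0, dt=0.002, alpha=25.0, beta=25.0 / 4, alpha_s=4.0):
    """Integrate a 1-D discrete dynamic movement primitive with basis weights w."""
    n = len(w)
    centers = np.exp(-alpha_s * np.linspace(0, 1, n))          # basis centers in phase space
    widths = 1.0 / (np.diff(centers, append=centers[-1] / 2) ** 2 + 1e-8)
    x, v, s, traj = x0, 0.0, 1.0, []
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * s * (g - x0)     # nonlinear forcing term
        dv = (alpha * (beta * (g - x) - v) + f) / tau
        v += dv * dt
        x += v / tau * dt
        s += -alpha_s * s / tau * dt                           # canonical system (phase)
        traj.append(x)
    return np.array(traj)

traj = dmp_rollout(w=np.zeros(10))       # zero weights -> plain point attractor toward the goal
print("start:", traj[0].round(3), "end:", traj[-1].round(3))
```
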
28

Bacon, Pierre-Luc, and Doina Precup. "Constructing Temporal Abstractions Autonomously in Reinforcement Learning." AI Magazine 39, no. 1 (March 27, 2018): 39–50. http://dx.doi.org/10.1609/aimag.v39i1.2780.

Abstract:
The idea of temporal abstraction, i.e. learning, planning and representing the world at multiple time scales, has been a constant thread in AI research, spanning sub-fields from classical planning and search to control and reinforcement learning. For example, programming a robot typically involves making decisions over a set of controllers, rather than working at the level of motor torques. While temporal abstraction is a very natural concept, learning such abstractions with no human input has proved quite daunting. In this paper, we present a general architecture, called option-critic, which allows learning temporal abstractions automatically, end-to-end, simply from the agent’s experience. This approach allows continual learning and provides interesting qualitative and quantitative results in several tasks.
29

Dominey, Peter F. "Complex sensory-motor sequence learning based on recurrent state representation and reinforcement learning." Biological Cybernetics 73, no. 3 (August 1995): 265–74. http://dx.doi.org/10.1007/bf00201428.

30

Fan, Jiang, Zhu Yunpu, Zou Quan, and Wang Manyi. "Reinforcement Learning based position control of shell-fetching manipulator with extreme random trees." Journal of Physics: Conference Series 2460, no. 1 (April 1, 2023): 012160. http://dx.doi.org/10.1088/1742-6596/2460/1/012160.

Abstract:
Abstract The automatic loading system of artillery includes the gun-fetching manipulator, which is crucial. The stability and control accuracy of the manipulator, however, are comparatively subpar when following the position trajectory as a result of the changes in the settings of the permanent magnet synchronous motor. This research suggests a reinforcement learning-based strategy for controlling a gun manipulator’s motor behavior in light of the current scenario. In this algorithm, the state variable is the feedback output of the control variable, and the reward function is utilized to calculate the associated reward value. The reinforcement learning agent then makes decisions based on the environment’s state and reward value, adjusting the manipulator’s trajectory in real-time. By comparing different simulation results, the gun-taking manipulator can achieve more accurate position trajectory control based on reinforcement learning.
31

Bucur, C. "Artificial intelligence driven speed controller for DC motor in series." Scientific Bulletin of Naval Academy XIV, no. 2 (December 15, 2021): 83–88. http://dx.doi.org/10.21279/1454-864x-21-i2-007.

Abstract:
Recently, a lot of work has been done to implement artificial intelligence controllers in the field of electrical motors. This paper presents a novel speed controller, developed through reinforcement learning techniques, applied to series DC motors. We emphasize the ease of deploying the developed controller on available off-the-shelf hardware for industrial use. We used the open-source Python package gym-electric-motor [1] for environment setup, the pytorch framework for developing the controller, and .NET for performance evaluation.
32

Hwangbo, Jemin, Joonho Lee, Alexey Dosovitskiy, Dario Bellicoso, Vassilios Tsounis, Vladlen Koltun, and Marco Hutter. "Learning agile and dynamic motor skills for legged robots." Science Robotics 4, no. 26 (January 16, 2019): eaau5872. http://dx.doi.org/10.1126/scirobotics.aau5872.

Abstract:
Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots is mainly limited to simulation, and only few and comparably simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. In the present work, we introduce a method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby leveraging fast, automated, and cost-effective data generation schemes. The approach is applied to the ANYmal robot, a sophisticated medium-dog–sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than before, and recovering from falling even in complex configurations.
33

Celemin, Carlos, Guilherme Maeda, Javier Ruiz-del-Solar, Jan Peters, and Jens Kober. "Reinforcement learning of motor skills using Policy Search and human corrective advice." International Journal of Robotics Research 38, no. 14 (September 12, 2019): 1560–80. http://dx.doi.org/10.1177/0278364919871998.

Abstract:
Robot learning problems are limited by physical constraints, which make learning successful policies for complex motor skills on real systems unfeasible. Some reinforcement learning methods, like Policy Search, offer stable convergence toward locally optimal solutions, whereas interactive machine learning or learning-from-demonstration methods allow fast transfer of human knowledge to the agents. However, most methods require expert demonstrations. In this work, we propose the use of human corrective advice in the actions domain for learning motor trajectories. Additionally, we combine this human feedback with reward functions in a Policy Search learning scheme. The use of both sources of information speeds up the learning process, since the intuitive knowledge of the human teacher can be easily transferred to the agent, while the Policy Search method with the cost/reward function take over for supervising the process and reducing the influence of occasional wrong human corrections. This interactive approach has been validated for learning movement primitives with simulated arms with several degrees of freedom in reaching via-point movements, and also using real robots in such tasks as “writing characters” and the ball-in-a-cup game. Compared with standard reinforcement learning without human advice, the results show that the proposed method not only converges to higher rewards when learning movement primitives, but also that the learning is sped up by a factor of 4–40 times, depending on the task.
34

Wang, Quan, Juan Ying Qin, and Jun Hua Zhou. "Reinforcement Learning Based Self-Constructing Fuzzy Neural Network Controller for AC Motor Drives." Advanced Materials Research 139-141 (October 2010): 1763–68. http://dx.doi.org/10.4028/www.scientific.net/amr.139-141.1763.

Abstract:
A self-constructing fuzzy neural network (SCFNN) based on reinforcement learning is proposed in this study. In the SCFNN, structure and parameter learning are implemented simultaneously. Structure learning is based on uniform division of the input space and the distribution of membership functions. The structure and membership parameters are organized as real-valued chromosomes, and the chromosomes are trained by reinforcement learning based on a genetic algorithm. This paper uses Matlab/Simulink to establish a simulation platform, and several simulations are provided to demonstrate the effectiveness of the proposed SCFNN control strategy with the implementation of an AC motor speed drive. The simulation results show that the AC drive system with the SCFNN has good anti-disturbance performance while the load changes randomly.
35

Naros, G., I. Naros, F. Grimm, U. Ziemann, and A. Gharabaghi. "Reinforcement learning of self-regulated sensorimotor β-oscillations improves motor performance." NeuroImage 134 (July 2016): 142–52. http://dx.doi.org/10.1016/j.neuroimage.2016.03.016.

36

Colino, Francisco L., Matthew Heath, Cameron D. Hassall, and Olave E. Krigolson. "Electroencephalographic evidence for a reinforcement learning advantage during motor skill acquisition." Biological Psychology 151 (March 2020): 107849. http://dx.doi.org/10.1016/j.biopsycho.2020.107849.

37

Ting, Chih-Chung, Stefano Palminteri, Jan B. Engelmann, and Maël Lebreton. "Robust valence-induced biases on motor response and confidence in human reinforcement learning." Cognitive, Affective, & Behavioral Neuroscience 20, no. 6 (September 1, 2020): 1184–99. http://dx.doi.org/10.3758/s13415-020-00826-0.

Abstract:
In simple instrumental-learning tasks, humans learn to seek gains and to avoid losses equally well. Yet, two effects of valence are observed. First, decisions in loss-contexts are slower. Second, loss contexts decrease individuals’ confidence in their choices. Whether these two effects are two manifestations of a single mechanism or whether they can be partially dissociated is unknown. Across six experiments, we attempted to disrupt the valence-induced motor bias effects by manipulating the mapping between decisions and actions and imposing constraints on response times (RTs). Our goal was to assess the presence of the valence-induced confidence bias in the absence of the RT bias. We observed both motor and confidence biases despite our disruption attempts, establishing that the effects of valence on motor and metacognitive responses are very robust and replicable. Nonetheless, within- and between-individual inferences reveal that the confidence bias resists the disruption of the RT bias. Therefore, although concomitant in most cases, valence-induced motor and confidence biases seem to be partly dissociable. These results highlight new important mechanistic constraints that should be incorporated in learning models to jointly explain choice, reaction times and confidence.
38

Li, Zhongxing, Jiufeng He, Haixia Ma, Guojian Huang, and Junyu Li. "Research on motor speed control algorithm of UAV Based on Reinforcement Learning." Journal of Physics: Conference Series 2396, no. 1 (December 1, 2022): 012041. http://dx.doi.org/10.1088/1742-6596/2396/1/012041.

Abstract:
Abstract Aiming at the hovering control problem of four-rotor UAVs, this paper proposes to use the reinforcement learning method to control the motor speed of UAVs to improve the intellectual control level of four-rotor UAVs. Firstly, two reinforcement learning algorithm models of DDPG (deep deterministic policy gradient) and PPO (proximal policy optimization) are built using the PARL framework, and the super parameters of the algorithm model are set. Secondly, the algorithms are trained and optimized to obtain scores in the RLSchool simulation environment. Finally, the performance differences between the two algorithm models in the same simulation environment are analyzed. The test and analysis results show that the two algorithms can realize the four-axis aircraft suspension control in the simulation environment. Between them, the PPO algorithm features high scores, simple and convenient parameter adjustment, stable control, and good performance.
39

Rusanen, Anna-Mari, Otto Lappi, Jesse Kuokkanen, and Jami Pekkanen. "Action control, forward models and expected rewards: representations in reinforcement learning." Synthese 199, no. 5-6 (November 1, 2021): 14017–33. http://dx.doi.org/10.1007/s11229-021-03408-w.

Abstract:
The fundamental cognitive problem for active organisms is to decide what to do next in a changing environment. In this article, we analyze motor and action control in computational models that utilize reinforcement learning (RL) algorithms. In reinforcement learning, action control is governed by an action selection policy that maximizes the expected future reward in light of a predictive world model. In this paper we argue that RL provides a way to explicate the so-called action-oriented views of cognitive systems in representational terms.
40

Wang, Jinsung, Yuming Lei, and Jeffrey R. Binder. "Performing a reaching task with one arm while adapting to a visuomotor rotation with the other can lead to complete transfer of motor learning across the arms." Journal of Neurophysiology 113, no. 7 (April 2015): 2302–8. http://dx.doi.org/10.1152/jn.00974.2014.

Abstract:
The extent to which motor learning is generalized across the limbs is typically very limited. Here, we investigated how two motor learning hypotheses could be used to enhance the extent of interlimb transfer. According to one hypothesis, we predicted that reinforcement of successful actions by providing binary error feedback regarding task success or failure, in addition to terminal error feedback, during initial training would increase the extent of interlimb transfer following visuomotor adaptation ( experiment 1). According to the other hypothesis, we predicted that performing a reaching task repeatedly with one arm without providing performance feedback (which prevented learning the task with this arm), while concurrently adapting to a visuomotor rotation with the other arm, would increase the extent of transfer ( experiment 2). Results indicate that providing binary error feedback, compared with continuous visual feedback that provided movement direction and amplitude information, had no influence on the extent of transfer. In contrast, repeatedly performing (but not learning) a specific task with one arm while visuomotor adaptation occurred with the other arm led to nearly complete transfer. This suggests that the absence of motor instances associated with specific effectors and task conditions is the major reason for limited interlimb transfer and that reinforcement of successful actions during initial training is not beneficial for interlimb transfer. These findings indicate crucial contributions of effector- and task-specific motor instances, which are thought to underlie (a type of) model-free learning, to optimal motor learning and interlimb transfer.
41

Zhang, Niao Na, and Guo Liang Wang. "Rolling Bag Station Motor Decoupling Control Based on Multi-Agent on Automobile Safety Airbag." Applied Mechanics and Materials 668-669 (October 2014): 370–73. http://dx.doi.org/10.4028/www.scientific.net/amm.668-669.370.

Abstract:
Based on multi-agent technology and the production process of the automobile safety airbag rolling-bag station, this paper first adopts a TS fuzzy neural regression network for distributed modeling of the controlled object. Supervised learning and reinforcement learning are combined: according to the multi-agent external reinforcement signals and the value function of the evaluation network, an adaptive genetic co-evolution algorithm is used to optimize the action network, so that the system can adapt to a changing environment. An engineering application supplies proof of the effectiveness of the control strategy.
42

Muthurajan, S., Rajaji Loganathan, and R. Rani Hemamalini. "Deep Reinforcement Learning Algorithm based PMSM Motor Control for Energy Management of Hybrid Electric Vehicles." WSEAS TRANSACTIONS ON POWER SYSTEMS 18 (March 7, 2023): 18–25. http://dx.doi.org/10.37394/232016.2023.18.3.

Abstract:
Hybrid electric vehicles (HEV) have great potential to reduce emissions and improve fuel economy. The application of artificial intelligence-based control algorithms for controlling the electric motor speed and torque yields excellent fuel economy by reducing the losses drastically. In this paper, a novel strategy to improve the performance of an electric motor-like control system for Permanent Magnet Synchronous Motor (PMSM) with the help of a sensorless vector control method where a trained reinforcement learning agent is used and provides accurate signals which will be added to the control signals. Control Signals referred to here are direct and quadrature voltage signals with reference quadrature current signals. The types of reinforcement learning used are the Deep Deterministic Policy Gradient (DDPG) and Deep Q Network (DQN) agents. Integration and implementation of these control systems are presented, and results are published in this paper. The advantages of the proposed method over the conventional vector control strategy are validated by numerical simulation results.
43

Manohar, Sanjay G. "Tremor in Parkinson's disease inverts the effect of dopamine on reinforcement." Brain 143, no. 11 (November 2020): 3178–80. http://dx.doi.org/10.1093/brain/awaa363.

44

Izawa, J., T. Kondou, and K. Itou. "2P1-G5 motor learning model of upper limbs based on reinforcement learning." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2001 (2001): 60. http://dx.doi.org/10.1299/jsmermd.2001.60_4.

45

Neymotin, Samuel A., George L. Chadderdon, Cliff C. Kerr, Joseph T. Francis, and William W. Lytton. "Reinforcement Learning of Two-Joint Virtual Arm Reaching in a Computer Model of Sensorimotor Cortex." Neural Computation 25, no. 12 (December 2013): 3263–93. http://dx.doi.org/10.1162/neco_a_00521.

Abstract:
Neocortical mechanisms of learning sensorimotor control involve a complex series of interactions at multiple levels, from synaptic mechanisms to cellular dynamics to network connectomics. We developed a model of sensory and motor neocortex consisting of 704 spiking model neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a two-joint virtual arm to reach to a fixed target. For each of 125 trained networks, we used 200 training sessions, each involving 15 s reaches to the target from 16 starting positions. Learning altered network dynamics, with enhancements to neuronal synchrony and behaviorally relevant information flow between neurons. After learning, networks demonstrated retention of behaviorally relevant memories by using proprioceptive information to perform reach-to-target from multiple starting positions. Networks dynamically controlled which joint rotations to use to reach a target, depending on current arm position. Learning-dependent network reorganization was evident in both sensory and motor populations: learned synaptic weights showed target-specific patterning optimized for particular reach movements. Our model embodies an integrative hypothesis of sensorimotor cortical learning that could be used to interpret future electrophysiological data recorded in vivo from sensorimotor learning experiments. We used our model to make the following predictions: learning enhances synchrony in neuronal populations and behaviorally relevant information flow across neuronal populations, enhanced sensory processing aids task-relevant motor performance and the relative ease of a particular movement in vivo depends on the amount of sensory information required to complete the movement.
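
The learning rule in this class of model, spike-timing-dependent reinforcement learning, tags each synapse with an STDP-driven eligibility trace and commits a weight change only when a global reward arrives. A single-synapse scalar sketch follows; the time constants, rates and reward schedule are assumptions, not the published network.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1.0, 2000                 # ms
tau_pre, tau_post, tau_elig = 20.0, 20.0, 1000.0
w, elig, pre_trace, post_trace = 0.5, 0.0, 0.0, 0.0
lr = 0.01

for t in range(T):
    pre = rng.random() < 0.02                      # Poisson-like presynaptic spike (~20 Hz)
    post = rng.random() < 0.02                     # postsynaptic spike (independent here)
    pre_trace += -pre_trace / tau_pre * dt + pre
    post_trace += -post_trace / tau_post * dt + post
    # STDP is tagged onto an eligibility trace instead of changing the weight directly.
    stdp = pre * (-0.5 * post_trace) + post * (1.0 * pre_trace)
    elig += -elig / tau_elig * dt + stdp
    if t > 0 and t % 500 == 0:                     # sparse global reward signal
        reward = rng.choice([0.0, 1.0])
        w = np.clip(w + lr * reward * elig, 0.0, 1.0)
print("final weight:", round(float(w), 3))
```
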
46

van Nuland, Annelies J., Rick C. Helmich, Michiel F. Dirkx, Heidemarie Zach, Ivan Toni, Roshan Cools, and Hanneke E. M. den Ouden. "Effects of dopamine on reinforcement learning in Parkinson’s disease depend on motor phenotype." Brain 143, no. 11 (November 2020): 3422–34. http://dx.doi.org/10.1093/brain/awaa335.

Abstract:
Abstract Parkinson’s disease is clinically defined by bradykinesia, along with rigidity and tremor. However, the severity of these motor signs is greatly variable between individuals, particularly the presence or absence of tremor. This variability in tremor relates to variation in cognitive/motivational impairment, as well as the spatial distribution of neurodegeneration in the midbrain and dopamine depletion in the striatum. Here we ask whether interindividual heterogeneity in tremor symptoms could account for the puzzlingly large variability in the effects of dopaminergic medication on reinforcement learning, a fundamental cognitive function known to rely on dopamine. Given that tremor-dominant and non-tremor Parkinson’s disease patients have different dopaminergic phenotypes, we hypothesized that effects of dopaminergic medication on reinforcement learning differ between tremor-dominant and non-tremor patients. Forty-three tremor-dominant and 20 non-tremor patients with Parkinson’s disease were recruited to be tested both OFF and ON dopaminergic medication (200/50 mg levodopa-benserazide), while 22 age-matched control subjects were recruited to be tested twice OFF medication. Participants performed a reinforcement learning task designed to dissociate effects on learning rate from effects on motivational choice (i.e. the tendency to ‘Go/NoGo’ in the face of reward/threat of punishment). In non-tremor patients, dopaminergic medication improved reward-based choice, replicating previous studies. In contrast, in tremor-dominant patients, dopaminergic medication improved learning from punishment. Formal modelling showed divergent computational effects of dopaminergic medication as a function of Parkinson’s disease motor phenotype, with a modulation of motivational choice bias and learning rate in non-tremor and tremor patients, respectively. This finding establishes a novel cognitive/motivational difference between tremor and non-tremor Parkinson’s disease patients, and highlights the importance of considering motor phenotype in future work.
47

Hodgkinson, S., J. Steyer, M. Jandl, and W. P. Kaschka. "Action-inhibition hierarchies: Using a simple gastropod model to investigate serotonergic and dopaminergic control of action selection and reinforcement learning." European Psychiatry 26, S2 (March 2011): 905. http://dx.doi.org/10.1016/s0924-9338(11)72610-4.

Abstract:
Introduction: Basal ganglia (BG) activity plays an important role in action selection and reinforcement learning. Inputs from and to other areas of the brain are modulated by a number of neurotransmitter pathways in the BG. Disturbances in the normal function of the BG may play a role in the aetiology of psychiatric disorders such as schizophrenia and bipolar disorder. Aims: Develop a simple animal model to evaluate interactions between glutamatergic, dopaminergic, serotonergic and GABAergic neurones in the modulation of action selection and reinforcement learning. Objectives: To characterise the effects of changing dopaminergic and serotonergic activity on action selection and reinforcement learning in an animal model. Methods: The food seeking / consummation (FSC) activity of the gastropod Planorbis corneus was suppressed by operant conditioning using a repeated unconditioned stimulus-punishment regime. The effects of elevated serotonin or dopamine levels (administration into cerebral, pedal and buccal ganglia) on operantly-conditioned FSC activity were assessed. Results: Operantly-conditioned behaviour was reversed by elevated ganglia serotonin levels but snails showed no food consummation motor activity in the absence of food. In contrast, elevated ganglia dopamine levels in conditioned snails elicited food consummation motor movements in the absence of food but not orientation towards a food source. Conclusions: The modulation of FSC activity elicited by reinforcement learning is subject to hierarchical control in gastropods. Serotoninergic activity is responsible for establishing the general activity level whilst dopaminergic activity appears to play a more localised and subordinate ‘command’ role.
48

Taylor, Heather B., Marcia A. Barnes, Susan H. Landry, Paul Swank, Jack M. Fletcher, and Furong Huang. "Motor Contingency Learning and Infants with Spina Bifida." Journal of the International Neuropsychological Society 19, no. 2 (January 8, 2013): 206–15. http://dx.doi.org/10.1017/s1355617712001233.

Abstract:
Infants with Spina Bifida (SB) were compared to typically developing infants (TD) using a conjugate reinforcement paradigm at 6 months-of-age (n = 98) to evaluate learning, and retention of a sensory-motor contingency. Analyses evaluated infant arm-waving rates at baseline (wrist not tethered to mobile), during acquisition of the sensory-motor contingency (wrist tethered), and immediately after the acquisition phase and then after a delay (wrist not tethered), controlling for arm reaching ability, gestational age, and socioeconomic status. Although both groups responded to the contingency with increased arm-waving from baseline to acquisition, 15% to 29% fewer infants with SB than TD were found to learn the contingency depending on the criterion used to determine contingency learning. In addition, infants with SB who had learned the contingency had more difficulty retaining the contingency over time when sensory feedback was absent. The findings suggest that infants with SB do not learn motor contingencies as easily or at the same rate as TD infants, and are more likely to decrease motor responses when sensory feedback is absent. Results are discussed with reference to research on contingency learning in infants with and without neurodevelopmental disorders, and with reference to motor learning in school-age children with SB. (JINS, 2013, 19, 1–10)
49

Wang, Fang, Kai Xu, Qiao Sheng Zhang, Yi Wen Wang, and Xiao Xiang Zheng. "A Multi-Step Neural Control for Motor Brain-Machine Interface by Reinforcement Learning." Applied Mechanics and Materials 461 (November 2013): 565–69. http://dx.doi.org/10.4028/www.scientific.net/amm.461.565.

Abstract:
Brain-machine interfaces (BMIs) decode cortical neural spikes of paralyzed patients to control external devices for the purpose of movement restoration. Neuroplasticity induced by conducting a relatively complex task within multistep, is helpful to performance improvements of BMI system. Reinforcement learning (RL) allows the BMI system to interact with the environment to learn the task adaptively without a teacher signal, which is more appropriate to the case for paralyzed patients. In this work, we proposed to apply Q(λ)-learning to multistep goal-directed tasks using users neural activity. Neural data were recorded from M1 of a monkey manipulating a joystick in a center-out task. Compared with a supervised learning approach, significant BMI control was achieved with correct directional decoding in 84.2% and 81% of the trials from naïve states. The results demonstrate that the BMI system was able to complete a task by interacting with the environment, indicating that RL-based methods have the potential to develop more natural BMI systems.
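
Q(λ)-learning, the algorithm named above, is Q-learning with eligibility traces, so that one temporal-difference error updates all recently visited state–action pairs. The tabular sketch below runs it on a toy multistep chain task as a stand-in for decoding from neural state; the task, sizes and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 6, 2                       # toy multistep chain task
Q = np.zeros((n_states, n_actions))
alpha, gamma, lam, eps = 0.2, 0.95, 0.8, 0.1

for episode in range(300):
    e = np.zeros_like(Q)                         # eligibility traces
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else -0.01
        delta = r + gamma * Q[s_next].max() - Q[s, a]
        e[s, a] += 1.0                           # accumulate trace for the visited pair
        Q += alpha * delta * e                   # Q(lambda): one TD error updates all traced pairs
        greedy = a == int(Q[s].argmax())
        e *= gamma * lam if greedy else 0.0      # Watkins-style trace cut after exploratory actions
        s = s_next
print("greedy actions per state:", Q.argmax(axis=1))
```
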
50

Alharkan, Hamad. "Torque Ripple Minimization of Variable Reluctance Motor Using Reinforcement Dual NNs Learning Architecture." Energies 16, no. 13 (June 21, 2023): 4839. http://dx.doi.org/10.3390/en16134839.

Abstract:
The torque ripples in a switched reluctance motor (SRM) are minimized via an optimal adaptive dynamic regulator that is presented in this research. A novel reinforcement neural network learning approach based on machine learning is adopted to find the best solution for the tracking problem of the SRM drive in real time. The reference signal model which minimizes the torque pulsations is combined with tracking error to construct the augmented structure of the SRM drive. A discounted cost function for the augmented SRM model is described to assess the tracking performance of the signal. In order to track the optimal trajectory, a neural network (NN)-based RL approach has been developed. This method achieves the optimal tracking response to the Hamilton–Jacobi–Bellman (HJB) equation for a nonlinear tracking system. To do so, two neural networks (NNs) have been trained online individually to acquire the best control policy to allow tracking performance for the motor. Simulation findings have been undertaken for SRM to confirm the viability of the suggested control strategy.