
Journal articles on the topic 'Learning dynamical systems'

Consult the top 50 journal articles for your research on the topic 'Learning dynamical systems.'

You can also download the full text of each publication as a PDF and read its abstract online whenever one is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1. Hein, Helle, and Ülo Lepik. "Learning Trajectories of Dynamical Systems." Mathematical Modelling and Analysis 17, no. 4 (September 1, 2012): 519–31. http://dx.doi.org/10.3846/13926292.2012.706654.

Abstract:
The aim of the present paper is to describe a method that is capable of adjusting the parameters of a dynamical system so that the trajectories gain certain specified properties. Three problems are considered: (i) learning fixed points, (ii) learning periodic trajectories, and (iii) imposing restrictions on the trajectories. An error function, which measures the discrepancy between the actual and desired trajectories, is introduced. Numerical results of several examples, which illustrate the efficiency of the method, are presented.

2. Khadivar, Farshad, Ilaria Lauzana, and Aude Billard. "Learning dynamical systems with bifurcations." Robotics and Autonomous Systems 136 (February 2021): 103700. http://dx.doi.org/10.1016/j.robot.2020.103700.

3. Berry, Tyrus, and Suddhasattwa Das. "Learning Theory for Dynamical Systems." SIAM Journal on Applied Dynamical Systems 22, no. 3 (August 8, 2023): 2082–122. http://dx.doi.org/10.1137/22m1516865.

4. Roy, Sayan, and Debanjan Rana. "Machine Learning in Nonlinear Dynamical Systems." Resonance 26, no. 7 (July 2021): 953–70. http://dx.doi.org/10.1007/s12045-021-1194-0.

5. Wang, Cong, Tianrui Chen, Guanrong Chen, and David J. Hill. "Deterministic Learning of Nonlinear Dynamical Systems." International Journal of Bifurcation and Chaos 19, no. 4 (April 2009): 1307–28. http://dx.doi.org/10.1142/s0218127409023640.

Abstract:
In this paper, we investigate the problem of identifying or modeling nonlinear dynamical systems undergoing periodic and period-like (recurrent) motions. For accurate identification of nonlinear dynamical systems, the persistent excitation condition is normally required to be satisfied. Firstly, by using localized radial basis function networks, a relationship between the recurrent trajectories and the persistence of excitation condition is established. Secondly, for a broad class of recurrent trajectories generated from nonlinear dynamical systems, a deterministic learning approach is presented which achieves locally-accurate identification of the underlying system dynamics in a local region along the recurrent trajectory. This study reveals that even for a random-like chaotic trajectory, which is extremely sensitive to initial conditions and is long-term unpredictable, the dynamics of a nonlinear chaotic system can still be locally and accurately identified along the chaotic trajectory in a deterministic way. Numerical experiments on the Rössler system are included to demonstrate the effectiveness of the proposed approach.

6. Ahmadi, Amir Ali, and Bachir El Khadir. "Learning Dynamical Systems with Side Information." SIAM Review 65, no. 1 (February 2023): 183–223. http://dx.doi.org/10.1137/20m1388644.

7. Grigoryeva, Lyudmila, Allen Hart, and Juan-Pablo Ortega. "Learning strange attractors with reservoir systems." Nonlinearity 36, no. 9 (July 27, 2023): 4674–708. http://dx.doi.org/10.1088/1361-6544/ace492.

Abstract:
This paper shows that the celebrated embedding theorem of Takens is a particular case of a much more general statement, according to which randomly generated linear state-space representations of generic observations of an invertible dynamical system carry in their wake an embedding of the phase space dynamics into the chosen Euclidean state space. This embedding coincides with a natural generalized synchronization that arises in this setup and that yields a topological conjugacy between the state-space dynamics driven by the generic observations of the dynamical system and the dynamical system itself. This result provides additional tools for the representation, learning, and analysis of chaotic attractors and sheds additional light on the reservoir computing phenomenon that appears in the context of recurrent neural networks.

8. Davids, Keith. "Learning design for Nonlinear Dynamical Movement Systems." Open Sports Sciences Journal 5, no. 1 (September 13, 2012): 9–16. http://dx.doi.org/10.2174/1875399x01205010009.

9. Campi, M. C., and P. R. Kumar. "Learning dynamical systems in a stationary environment." Systems & Control Letters 34, no. 3 (June 1998): 125–32. http://dx.doi.org/10.1016/s0167-6911(98)00005-x.

10. Rajendra, P., and V. Brahmajirao. "Modeling of dynamical systems through deep learning." Biophysical Reviews 12, no. 6 (November 22, 2020): 1311–20. http://dx.doi.org/10.1007/s12551-020-00776-4.

11. Cheng, Sen, and Philip N. Sabes. "Modeling Sensorimotor Learning with Linear Dynamical Systems." Neural Computation 18, no. 4 (April 1, 2006): 760–93. http://dx.doi.org/10.1162/neco.2006.18.4.760.

Abstract:
Recent studies have employed simple linear dynamical systems to model trial-by-trial dynamics in various sensorimotor learning tasks. Here we explore the theoretical and practical considerations that arise when employing the general class of linear dynamical systems (LDS) as a model for sensorimotor learning. In this framework, the state of the system is a set of parameters that define the current sensorimotor transformation—the function that maps sensory inputs to motor outputs. The class of LDS models provides a first-order approximation for any Markovian (state-dependent) learning rule that specifies the changes in the sensorimotor transformation that result from sensory feedback on each movement. We show that modeling the trial-by-trial dynamics of learning provides a substantially enhanced picture of the process of adaptation compared to measurements of the steady state of adaptation derived from more traditional blocked-exposure experiments. Specifically, these models can be used to quantify sensory and performance biases, the extent to which learned changes in the sensorimotor transformation decay over time, and the portion of motor variability due to either learning or performance variability. We show that previous attempts to fit such models with linear regression have not generally yielded consistent parameter estimates. Instead, we present an expectation-maximization algorithm for fitting LDS models to experimental data and describe the difficulties inherent in estimating the parameters associated with feedback-driven learning. Finally, we demonstrate the application of these methods in a simple sensorimotor learning experiment: adaptation to shifted visual feedback during reaching.

12. Qiu, Zirou, Abhijin Adiga, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard E. Stearns, and Anil Vullikanti. "Learning the Topology and Behavior of Discrete Dynamical Systems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14722–30. http://dx.doi.org/10.1609/aaai.v38i13.29390.

Abstract:
Discrete dynamical systems are commonly used to model the spread of contagions on real-world networks. Under the PAC framework, existing research has studied the problem of learning the behavior of a system, assuming that the underlying network is known. In this work, we focus on a more challenging setting: to learn both the behavior and the underlying topology of a black-box system. We show that, in general, this learning problem is computationally intractable. On the positive side, we present efficient learning methods under the PAC model when the underlying graph of the dynamical system belongs to certain classes. Further, we examine a relaxed setting where the topology of an unknown system is partially observed. For this case, we develop an efficient PAC learner to infer the system and establish the sample complexity. Lastly, we present a formal analysis of the expressive power of the hypothesis class of dynamical systems where both the topology and behavior are unknown, using the well-known Natarajan dimension formalism. Our results provide a theoretical foundation for learning both the topology and behavior of discrete dynamical systems.

13. Bavandpour, Mohammad, Hamid Soleimani, Saeed Bagheri-Shouraki, Arash Ahmadi, Derek Abbott, and Leon O. Chua. "Cellular Memristive Dynamical Systems (CMDS)." International Journal of Bifurcation and Chaos 24, no. 5 (May 2014): 1430016. http://dx.doi.org/10.1142/s021812741430016x.

Abstract:
This study presents a cellular-based mapping for a special class of dynamical systems for embedding neuron models, by exploiting an efficient memristor crossbar-based circuit for its implementation. The resultant reconfigurable memristive dynamical circuit exhibits various bifurcation phenomena and responses that are characteristic of dynamical systems. High programmability of the circuit enables it to be applied to real-time applications, learning systems, and analytically indescribable dynamical systems. Moreover, its efficient implementation platform makes it an appropriate choice for on-chip applications and prostheses. We apply this method to the Izhikevich and FitzHugh–Nagumo neuron models as case studies, and investigate the dynamical behaviors of these circuits.

14. Zhou, Quan, Jakub Marecek, and Robert N. Shorten. "Fairness in Forecasting and Learning Linear Dynamical Systems." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11134–42. http://dx.doi.org/10.1609/aaai.v35i12.17328.

Abstract:
In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. When the amounts of training data for the subgroups are not controlled carefully, under-representation bias arises. We introduce two natural notions of subgroup fairness and instantaneous fairness to address such under-representation bias in time-series forecasting problems. In particular, we consider the subgroup-fair and instant-fair learning of a linear dynamical system (LDS) from multiple trajectories of varying lengths and the associated forecasting problems. We provide globally convergent methods for the learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate both the beneficial impact of fairness considerations on statistical performance and the encouraging effects of exploiting sparsity on run time.

15. Mezić, Igor. "Koopman Operator, Geometry, and Learning of Dynamical Systems." Notices of the American Mathematical Society 68, no. 7 (August 1, 2021): 1. http://dx.doi.org/10.1090/noti2306.

16. Monga, Bharat, and Jeff Moehlis. "Supervised learning algorithms for controlling underactuated dynamical systems." Physica D: Nonlinear Phenomena 412 (November 2020): 132621. http://dx.doi.org/10.1016/j.physd.2020.132621.

17. Kronander, K., M. Khansari, and A. Billard. "Incremental motion learning with locally modulated dynamical systems." Robotics and Autonomous Systems 70 (August 2015): 52–62. http://dx.doi.org/10.1016/j.robot.2015.03.010.

18. Tokuda, Isao, Ryuji Tokunaga, and Kazuyuki Aihara. "Back-propagation learning of infinite-dimensional dynamical systems." Neural Networks 16, no. 8 (October 2003): 1179–93. http://dx.doi.org/10.1016/s0893-6080(03)00076-5.

19. Sugie, Toshiharu, and Toshiro Ono. "An iterative learning control law for dynamical systems." Automatica 27, no. 4 (July 1991): 729–32. http://dx.doi.org/10.1016/0005-1098(91)90066-b.

20. Beek, P. J., and A. A. M. van Santvoord. "Learning the Cascade Juggle: A Dynamical Systems Analysis." Journal of Motor Behavior 24, no. 1 (March 1992): 85–94. http://dx.doi.org/10.1080/00222895.1992.9941604.

21. E, Weinan. "A Proposal on Machine Learning via Dynamical Systems." Communications in Mathematics and Statistics 5, no. 1 (March 2017): 1–11. http://dx.doi.org/10.1007/s40304-017-0103-z.

22. Brugarolas, Paul B., and Michael G. Safonov. "Learning about dynamical systems via unfalsification of hypotheses." International Journal of Robust and Nonlinear Control 14, no. 11 (April 20, 2004): 933–43. http://dx.doi.org/10.1002/rnc.924.

23. Giannakis, Dimitrios, Amelia Henriksen, Joel A. Tropp, and Rachel Ward. "Learning to Forecast Dynamical Systems from Streaming Data." SIAM Journal on Applied Dynamical Systems 22, no. 2 (May 5, 2023): 527–58. http://dx.doi.org/10.1137/21m144983x.

24. Modi, Aditya, Mohamad Kazem Shirani Faradonbeh, Ambuj Tewari, and George Michailidis. "Joint learning of linear time-invariant dynamical systems." Automatica 164 (June 2024): 111635. http://dx.doi.org/10.1016/j.automatica.2024.111635.

25. Horbacz, Katarzyna. "Random dynamical systems with jumps." Journal of Applied Probability 41, no. 3 (September 2004): 890–910. http://dx.doi.org/10.1239/jap/1091543432.

Abstract:
We consider random dynamical systems with randomly chosen jumps on infinite-dimensional spaces. The choice of deterministic dynamical systems and jumps depends on a position. The system generalizes dynamical systems corresponding to learning systems, Poisson driven stochastic differential equations, iterated function system with infinite family of transformations and random evolutions. We will show that distributions which describe the dynamics of this system converge to an invariant distribution. We use recent results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by T. Szarek.

26. Horbacz, Katarzyna. "Random dynamical systems with jumps." Journal of Applied Probability 41, no. 3 (September 2004): 890–910. http://dx.doi.org/10.1017/s0021900200020611.

27. Jena, Amit, Dileep Kalathil, and Le Xie. "Meta-Learning-Based Adaptive Stability Certificates for Dynamical Systems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12801–9. http://dx.doi.org/10.1609/aaai.v38i11.29176.

Abstract:
This paper addresses the problem of Neural Network (NN) based adaptive stability certification in a dynamical system. The state-of-the-art methods, such as Neural Lyapunov Functions (NLFs), use NN-based formulations to assess the stability of a non-linear dynamical system and compute a Region of Attraction (ROA) in the state space. However, under parametric uncertainty, if the values of system parameters vary over time, the NLF methods fail to adapt to such changes and may lead to conservative stability assessment performance. We circumvent this issue by integrating Model Agnostic Meta-learning (MAML) with NLFs and propose meta-NLFs. In this process, we train a meta-function that adapts to any parametric shifts and updates into an NLF for the system with new test-time parameter values. We demonstrate the stability assessment performance of meta-NLFs on some standard benchmark autonomous dynamical systems.

28. Feng, Lingyu, Ting Gao, Min Dai, and Jinqiao Duan. "Learning effective dynamics from data-driven stochastic systems." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 4 (April 2023): 043131. http://dx.doi.org/10.1063/5.0126667.

Abstract:
Multiscale stochastic dynamical systems have been widely adopted to a variety of scientific and engineering problems due to their capability of depicting complex phenomena in many real-world applications. This work is devoted to investigating the effective dynamics for slow–fast stochastic dynamical systems. Given observation data on a short-term period satisfying some unknown slow–fast stochastic systems, we propose a novel algorithm, including a neural network called Auto-SDE, to learn an invariant slow manifold. Our approach captures the evolutionary nature of a series of time-dependent autoencoder neural networks with the loss constructed from a discretized stochastic differential equation. Our algorithm is also validated to be accurate, stable, and effective through numerical experiments under various evaluation metrics.

29. Ell, Shawn W., and F. Gregory Ashby. "Dynamical trajectories in category learning." Perception & Psychophysics 66, no. 8 (November 2004): 1318–40. http://dx.doi.org/10.3758/bf03195001.

30. Vereijken, B., H. T. A. Whiting, and W. J. Beek. "A Dynamical Systems Approach to Skill Acquisition." Quarterly Journal of Experimental Psychology Section A 45, no. 2 (August 1992): 323–44. http://dx.doi.org/10.1080/14640749208401329.

Abstract:
This paper argues that the answer to the question, what has to be learned, needs to be established before the question, how is it learned, can be meaningfully addressed. Based on this conviction, some of the limitations of current and past research on skill acquisition are discussed. Motivated by the dynamical systems approach, the question of “what has to be learned” was tackled by setting up a non-linear mathematical model of the task (i.e. learning to make sideways movements on a ski apparatus). On the basis of this model, the phase lag between movements of the platform of the apparatus and the actions of the subject was isolated as an ensemble variable reflecting the timing of the subject in relation to the dynamics of the apparatus. This variable was subsequently used to study “how” the task was learned in a discovery learning experiment, in which predictions stemming from the model were tested and confirmed. Overall, these findings provided support for the hypothesis, formulated by Bernstein (1967), that one of the important effects of practice is learning to make use of reactive forces, thereby reducing the need for active muscular forces. In addition, the data from a previous learning experiment on the ski apparatus—the results of which had been equivocal—were reconsidered. The use of phase lag as a dependent variable provided a resolution of those findings. On the basis of the confirmatory testing of predictions stemming from the model and the clarification of findings from a previous experiment, it is argued that the dynamical systems approach put forward here provides a powerful method for pursuing issues in skill acquisition. Suggestions are made as to how this approach can be used to systematically pursue the questions that arise as a natural outcome of the experimental evidence presented here.

31. Ijspeert, Auke Jan, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, and Stefan Schaal. "Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors." Neural Computation 25, no. 2 (February 2013): 328–73. http://dx.doi.org/10.1162/neco_a_00393.

Abstract:
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics.

32. Gabriel, Nicholas, and Neil F. Johnson. "Using Neural Architectures to Model Complex Dynamical Systems." Advances in Artificial Intelligence and Machine Learning 2, no. 2 (2022): 366–84. http://dx.doi.org/10.54364/aaiml.2022.1124.

Abstract:
The natural, physical and social worlds abound with feedback processes that make the challenge of modeling the underlying system an extremely complex one. This paper proposes an end-to-end deep learning approach to modelling such so-called complex systems which addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. It is well known that deep learning (DL) has had remarkable success in leveraging large amounts of unstructured data into downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to graph-structured data of complex systems. However, DL methods still appear largely disjointed from established scientific knowledge, and their contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning and, more generally, the emerging discipline of SciML. Modelling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach to the previously distinct fields of graph representation learning and SciML.

33. Forgione, Marco, and Dario Piga. "dynoNet: A neural network architecture for learning dynamical systems." International Journal of Adaptive Control and Signal Processing 35, no. 4 (January 14, 2021): 612–26. http://dx.doi.org/10.1002/acs.3216.

34. Xiao, Wenxin, Armin Lederer, and Sandra Hirche. "Learning Stable Nonparametric Dynamical Systems with Gaussian Process Regression." IFAC-PapersOnLine 53, no. 2 (2020): 1194–99. http://dx.doi.org/10.1016/j.ifacol.2020.12.1335.

35. Chen, Ruilin, Xiaowei Jin, Shujin Laima, Yong Huang, and Hui Li. "Intelligent modeling of nonlinear dynamical systems by machine learning." International Journal of Non-Linear Mechanics 142 (June 2022): 103984. http://dx.doi.org/10.1016/j.ijnonlinmec.2022.103984.

36. Qin, Zengyi, Dawei Sun, and Chuchu Fan. "Sablas: Learning Safe Control for Black-Box Dynamical Systems." IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 1928–35. http://dx.doi.org/10.1109/lra.2022.3142743.

37. Pulch, Roland, and Maha Youssef. "Machine Learning for Trajectories of Parametric Nonlinear Dynamical Systems." Journal of Machine Learning for Modeling and Computing 1, no. 1 (2020): 75–95. http://dx.doi.org/10.1615/jmachlearnmodelcomput.2020034093.

38. Chu, S. R., and R. Shoureshi. "Applications of neural networks in learning of dynamical systems." IEEE Transactions on Systems, Man, and Cybernetics 22, no. 1 (1992): 161–64. http://dx.doi.org/10.1109/21.141320.

39. Khansari-Zadeh, S. Mohammad, and Aude Billard. "Learning Stable Nonlinear Dynamical Systems With Gaussian Mixture Models." IEEE Transactions on Robotics 27, no. 5 (October 2011): 943–57. http://dx.doi.org/10.1109/tro.2011.2159412.

40. Mukhopadhyay, Sumona, and Santo Banerjee. "Learning dynamical systems in noise using convolutional neural networks." Chaos: An Interdisciplinary Journal of Nonlinear Science 30, no. 10 (October 2020): 103125. http://dx.doi.org/10.1063/5.0009326.

41. Sugie, T., and T. Ono. "On an Iterative Learning Control Law for Dynamical Systems." IFAC Proceedings Volumes 20, no. 5 (July 1987): 339–44. http://dx.doi.org/10.1016/s1474-6670(17)55109-5.

42. Kimura, M., and R. Nakano. "Learning dynamical systems by recurrent neural networks from orbits." Neural Networks 11, no. 9 (December 1998): 1589–99. http://dx.doi.org/10.1016/s0893-6080(98)00098-7.

43. Berwald, Jesse, Tomáš Gedeon, and John Sheppard. "Using machine learning to predict catastrophes in dynamical systems." Journal of Computational and Applied Mathematics 236, no. 9 (March 2012): 2235–45. http://dx.doi.org/10.1016/j.cam.2011.11.006.

44. Talmon, Ronen, Stephane Mallat, Hitten Zaveri, and Ronald R. Coifman. "Manifold Learning for Latent Variable Inference in Dynamical Systems." IEEE Transactions on Signal Processing 63, no. 15 (August 2015): 3843–56. http://dx.doi.org/10.1109/tsp.2015.2432731.

45. Zhao, Qingye, Yi Zhang, and Xuandong Li. "Safe reinforcement learning for dynamical systems using barrier certificates." Connection Science 34, no. 1 (December 12, 2022): 2822–44. http://dx.doi.org/10.1080/09540091.2022.2151567.

46. Kelso, J. A. S. "Anticipatory dynamical systems, intrinsic pattern dynamics and skill learning." Human Movement Science 10, no. 1 (February 1991): 93–111. http://dx.doi.org/10.1016/0167-9457(91)90034-u.

47. Gauthier, Daniel J., Ingo Fischer, and André Röhm. "Learning unseen coexisting attractors." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 11 (November 2022): 113107. http://dx.doi.org/10.1063/5.0116784.

Abstract:
Reservoir computing is a machine learning approach that can generate a surrogate model of a dynamical system. It can learn the underlying dynamical system using fewer trainable parameters and, hence, smaller training data sets than competing approaches. Recently, a simpler formulation, known as next-generation reservoir computing, removed many algorithm metaparameters and identified a well-performing traditional reservoir computer, thus simplifying training even further. Here, we study a particularly challenging problem of learning a dynamical system that has both disparate time scales and multiple co-existing dynamical states (attractors). We compare the next-generation and traditional reservoir computer using metrics quantifying the geometry of the ground-truth and forecasted attractors. For the studied four-dimensional system, the next-generation reservoir computing approach uses [Formula: see text] less training data, requires [Formula: see text] shorter “warmup” time, has fewer metaparameters, and has an [Formula: see text] higher accuracy in predicting the co-existing attractor characteristics in comparison to a traditional reservoir computer. Furthermore, we demonstrate that it predicts the basin of attraction with high accuracy. This work lends further support to the superior learning ability of this new machine learning algorithm for dynamical systems.

48. Pontes-Filho, Sidney, Pedro Lind, Anis Yazidi, Jianhua Zhang, Hugo Hammer, Gustavo B. M. Mello, Ioanna Sandvig, Gunnar Tufte, and Stefano Nichele. "A neuro-inspired general framework for the evolution of stochastic dynamical systems: Cellular automata, random Boolean networks and echo state networks towards criticality." Cognitive Neurodynamics 14, no. 5 (June 11, 2020): 657–74. http://dx.doi.org/10.1007/s11571-020-09600-x.

Abstract:
Although deep learning has recently increased in popularity, it suffers from various problems including high computational complexity, energy-greedy computation, and lack of scalability, to mention a few. In this paper, we investigate an alternative brain-inspired method for data analysis that circumvents the deep learning drawbacks by taking the actual dynamical behavior of biological neural networks into account. For this purpose, we develop a general framework for dynamical systems that can evolve and model a variety of substrates that possess computational capacity. Therefore, dynamical systems can be exploited in the reservoir computing paradigm, i.e., an untrained recurrent nonlinear network with a trained linear readout layer. Moreover, our general framework, called EvoDynamic, is based on an optimized deep neural network library. Hence, generalization and performance can be balanced. The EvoDynamic framework contains three kinds of dynamical systems already implemented, namely cellular automata, random Boolean networks, and echo state networks. The evolution of such systems towards a dynamical behavior, called criticality, is investigated because systems with such behavior may be better suited to do useful computation. The implemented dynamical systems are stochastic, and their evolution with a genetic algorithm mutates their update rules or network initialization. The obtained results are promising and demonstrate that criticality is achieved. In addition to the presented results, our framework can also be utilized to evolve the dynamical systems' connectivity, update rules, and learning rules to improve the quality of the reservoir used for solving computational tasks and physical substrate modeling.

49. Sharma, Shalini, and Angshul Majumdar. "Sequential Transform Learning." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (June 26, 2021): 1–18. http://dx.doi.org/10.1145/3447394.

Abstract:
This work proposes a new approach for dynamical modeling; we call it sequential transform learning. This is loosely based on the transform (analysis dictionary) learning formulation. This is the first work on this topic. Transform learning was originally developed for static problems; we modify it to model dynamical systems by introducing a feedback loop. The learnt transform coefficients for the t-th instant are fed back along with the (t+1)-th sample, thereby establishing a Markovian relationship. Furthermore, the formulation is made supervised by the label consistency cost. Our approach keeps the best of two worlds, marrying the interpretability and uncertainty measure of signal processing with the function approximation ability of neural networks. We have carried out experiments on one of the most challenging problems in dynamical modeling: stock forecasting. Benchmarking with the state-of-the-art has shown that our method excels over the rest.

50. Duan, Jianghua, Yongsheng Ou, Jianbing Hu, Zhiyang Wang, Shaokun Jin, and Chao Xu. "Fast and Stable Learning of Dynamical Systems Based on Extreme Learning Machine." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 6 (June 2019): 1175–85. http://dx.doi.org/10.1109/tsmc.2017.2705279.
