Journal articles on the topic "Learning dynamical systems"

To see the other types of publications on this topic, follow this link: Learning dynamical systems.

Browse the top 50 journal articles for your research on the topic "Learning dynamical systems".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of subject areas and compile your bibliography correctly.

1

Hein, Helle, and Ulo Lepik. "LEARNING TRAJECTORIES OF DYNAMICAL SYSTEMS". Mathematical Modelling and Analysis 17, no. 4 (September 1, 2012): 519–31. http://dx.doi.org/10.3846/13926292.2012.706654.

Abstract:
The aim of the present paper is to describe a method that is capable of adjusting the parameters of a dynamical system so that its trajectories gain certain specified properties. Three problems are considered: (i) learning fixed points, (ii) learning periodic trajectories, (iii) imposing restrictions on the trajectories. An error function, which measures the discrepancy between the actual and desired trajectories, is introduced. Numerical results of several examples, which illustrate the efficiency of the method, are presented.
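The parameter-adjustment idea this abstract describes can be illustrated with a toy sketch (this is not the authors' method; the logistic map, the learning rate, and the finite-difference gradient are all assumptions made for illustration): minimise an error function that measures the discrepancy between the fixed point a trajectory actually settles on and a desired one.

```python
import numpy as np

def fixed_point(a, x0=0.3, n_steps=200):
    """Iterate the logistic map x <- a*x*(1-x) and return where it settles."""
    x = x0
    for _ in range(n_steps):
        x = a * x * (1.0 - x)
    return x

def learn_parameter(target, a=1.5, lr=0.5, eps=1e-4, n_iter=500):
    """Descend the squared discrepancy between attained and desired fixed
    point, using a finite-difference gradient in the parameter a."""
    for _ in range(n_iter):
        err = fixed_point(a) - target
        grad = (fixed_point(a + eps) - fixed_point(a - eps)) / (2 * eps)
        a -= lr * 2 * err * grad
    return a

# The logistic map has the non-trivial fixed point x* = 1 - 1/a,
# so a desired fixed point x* = 0.5 corresponds to a = 2.
a_learned = learn_parameter(target=0.5)
```

Gradient-free or adjoint-based variants would follow the same pattern: simulate, measure the trajectory discrepancy, update the parameters.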
2

Khadivar, Farshad, Ilaria Lauzana, and Aude Billard. "Learning dynamical systems with bifurcations". Robotics and Autonomous Systems 136 (February 2021): 103700. http://dx.doi.org/10.1016/j.robot.2020.103700.
3

Berry, Tyrus, and Suddhasattwa Das. "Learning Theory for Dynamical Systems". SIAM Journal on Applied Dynamical Systems 22, no. 3 (August 8, 2023): 2082–122. http://dx.doi.org/10.1137/22m1516865.
4

Roy, Sayan, and Debanjan Rana. "Machine Learning in Nonlinear Dynamical Systems". Resonance 26, no. 7 (July 2021): 953–70. http://dx.doi.org/10.1007/s12045-021-1194-0.
5

WANG, CONG, TIANRUI CHEN, GUANRONG CHEN, and DAVID J. HILL. "DETERMINISTIC LEARNING OF NONLINEAR DYNAMICAL SYSTEMS". International Journal of Bifurcation and Chaos 19, no. 04 (April 2009): 1307–28. http://dx.doi.org/10.1142/s0218127409023640.

Abstract:
In this paper, we investigate the problem of identifying or modeling nonlinear dynamical systems undergoing periodic and period-like (recurrent) motions. For accurate identification of nonlinear dynamical systems, the persistent excitation condition is normally required to be satisfied. First, by using localized radial basis function networks, a relationship between the recurrent trajectories and the persistence of excitation condition is established. Second, for a broad class of recurrent trajectories generated from nonlinear dynamical systems, a deterministic learning approach is presented which achieves locally accurate identification of the underlying system dynamics in a local region along the recurrent trajectory. This study reveals that even for a random-like chaotic trajectory, which is extremely sensitive to initial conditions and is long-term unpredictable, the dynamics of a nonlinear chaotic system can still be identified to local accuracy along the chaotic trajectory in a deterministic way. Numerical experiments on the Rössler system are included to demonstrate the effectiveness of the proposed approach.
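The localized-RBF construction mentioned in the abstract can be loosely illustrated as follows (a minimal sketch under simplifying assumptions, not the authors' algorithm: the harmonic oscillator, the centre placement, and the width are all chosen for illustration). Output weights of Gaussian RBFs centred along one recurrent trajectory are fitted by least squares so that the network reproduces the vector field along that orbit.

```python
import numpy as np

# A recurrent (periodic) trajectory: the harmonic oscillator
# dx1/dt = x2, dx2/dt = -x1, sampled along the unit circle.
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
X = np.stack([np.cos(t), np.sin(t)], axis=1)   # states along the orbit
F = np.stack([np.sin(t), -np.cos(t)], axis=1)  # true vector field f(x) at X

# Localized Gaussian RBFs with centres placed along the trajectory,
# so accuracy is only claimed in a neighbourhood of the orbit.
centres = X[::10]                              # 20 centres on the circle
sigma = 0.5

def rbf(x):
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Least-squares output weights, one column per state dimension.
W, *_ = np.linalg.lstsq(rbf(X), F, rcond=None)

# The learned network reproduces f(x) along the recurrent trajectory.
max_err = np.abs(rbf(X) @ W - F).max()
```

In the paper the weights are adapted online under a persistence-of-excitation argument rather than fitted in batch; the batch least-squares fit above only conveys the "localized basis along a recurrent orbit" picture.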
6

Ahmadi, Amir Ali, and Bachir El Khadir. "Learning Dynamical Systems with Side Information". SIAM Review 65, no. 1 (February 2023): 183–223. http://dx.doi.org/10.1137/20m1388644.
7

Grigoryeva, Lyudmila, Allen Hart, and Juan-Pablo Ortega. "Learning strange attractors with reservoir systems". Nonlinearity 36, no. 9 (July 27, 2023): 4674–708. http://dx.doi.org/10.1088/1361-6544/ace492.

Abstract:
This paper shows that the celebrated embedding theorem of Takens is a particular case of a much more general statement, according to which randomly generated linear state-space representations of generic observations of an invertible dynamical system carry in their wake an embedding of the phase-space dynamics into the chosen Euclidean state space. This embedding coincides with a natural generalized synchronization that arises in this setup and yields a topological conjugacy between the state-space dynamics driven by the generic observations of the dynamical system and the dynamical system itself. This result provides additional tools for the representation, learning, and analysis of chaotic attractors and sheds additional light on the reservoir computing phenomenon that appears in the context of recurrent neural networks.
8

Davids, Keith. "Learning design for Nonlinear Dynamical Movement Systems". Open Sports Sciences Journal 5, no. 1 (September 13, 2012): 9–16. http://dx.doi.org/10.2174/1875399x01205010009.
9

Campi, M. C., and P. R. Kumar. "Learning dynamical systems in a stationary environment". Systems & Control Letters 34, no. 3 (June 1998): 125–32. http://dx.doi.org/10.1016/s0167-6911(98)00005-x.
10

Rajendra, P., and V. Brahmajirao. "Modeling of dynamical systems through deep learning". Biophysical Reviews 12, no. 6 (November 22, 2020): 1311–20. http://dx.doi.org/10.1007/s12551-020-00776-4.
11

Cheng, Sen, and Philip N. Sabes. "Modeling Sensorimotor Learning with Linear Dynamical Systems". Neural Computation 18, no. 4 (April 1, 2006): 760–93. http://dx.doi.org/10.1162/neco.2006.18.4.760.

Abstract:
Recent studies have employed simple linear dynamical systems to model trial-by-trial dynamics in various sensorimotor learning tasks. Here we explore the theoretical and practical considerations that arise when employing the general class of linear dynamical systems (LDS) as a model for sensorimotor learning. In this framework, the state of the system is a set of parameters that define the current sensorimotor transformation: the function that maps sensory inputs to motor outputs. The class of LDS models provides a first-order approximation for any Markovian (state-dependent) learning rule that specifies the changes in the sensorimotor transformation that result from sensory feedback on each movement. We show that modeling the trial-by-trial dynamics of learning provides a substantially enhanced picture of the process of adaptation compared to measurements of the steady state of adaptation derived from more traditional blocked-exposure experiments. Specifically, these models can be used to quantify sensory and performance biases, the extent to which learned changes in the sensorimotor transformation decay over time, and the portion of motor variability due to either learning or performance variability. We show that previous attempts to fit such models with linear regression have not generally yielded consistent parameter estimates. Instead, we present an expectation-maximization algorithm for fitting LDS models to experimental data and describe the difficulties inherent in estimating the parameters associated with feedback-driven learning. Finally, we demonstrate the application of these methods in a simple sensorimotor learning experiment: adaptation to shifted visual feedback during reaching.
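The trial-by-trial state-space picture can be made concrete with a minimal simulation (an illustrative sketch with made-up parameter values, not the authors' model fits): a scalar state parametrises the current sensorimotor mapping, feedback error drives learning, and a retention factor below one makes learned changes decay between trials.

```python
import numpy as np

def simulate_lds(shift, a=0.98, b=0.1, n_trials=200, noise=0.0, seed=0):
    """Trial-by-trial LDS: state z_t is the current sensorimotor mapping
    parameter, e_t is the feedback error on each movement, a is the
    retention (decay) factor and b the learning rate."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_trials + 1)
    for t in range(n_trials):
        e = shift - z[t]                      # error observed on this trial
        z[t + 1] = a * z[t] + b * e + noise * rng.standard_normal()
    return z

z = simulate_lds(shift=10.0)
# With retention a < 1, adaptation undershoots the imposed shift at
# steady state: z* = b*shift / (1 - a + b).
z_star = 0.1 * 10.0 / (1 - 0.98 + 0.1)
```

In the noiseless case the fit is trivial; the paper's point is that with noise, naive linear regression on such data gives inconsistent estimates of a and b, which is why an EM approach is needed.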
12

Qiu, Zirou, Abhijin Adiga, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard E. Stearns, and Anil Vullikanti. "Learning the Topology and Behavior of Discrete Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14722–30. http://dx.doi.org/10.1609/aaai.v38i13.29390.

Abstract:
Discrete dynamical systems are commonly used to model the spread of contagions on real-world networks. Under the PAC framework, existing research has studied the problem of learning the behavior of a system, assuming that the underlying network is known. In this work, we focus on a more challenging setting: to learn both the behavior and the underlying topology of a black-box system. We show that, in general, this learning problem is computationally intractable. On the positive side, we present efficient learning methods under the PAC model when the underlying graph of the dynamical system belongs to certain classes. Further, we examine a relaxed setting where the topology of an unknown system is partially observed. For this case, we develop an efficient PAC learner to infer the system and establish the sample complexity. Lastly, we present a formal analysis of the expressive power of the hypothesis class of dynamical systems where both the topology and behavior are unknown, using the well-known Natarajan dimension formalism. Our results provide a theoretical foundation for learning both the topology and behavior of discrete dynamical systems.
13

Bavandpour, Mohammad, Hamid Soleimani, Saeed Bagheri-Shouraki, Arash Ahmadi, Derek Abbott, and Leon O. Chua. "Cellular Memristive Dynamical Systems (CMDS)". International Journal of Bifurcation and Chaos 24, no. 05 (May 2014): 1430016. http://dx.doi.org/10.1142/s021812741430016x.

Abstract:
This study presents a cellular-based mapping for a special class of dynamical systems for embedding neuron models, by exploiting an efficient memristor crossbar-based circuit for its implementation. The resultant reconfigurable memristive dynamical circuit exhibits various bifurcation phenomena, and responses that are characteristic of dynamical systems. High programmability of the circuit enables it to be applied to real-time applications, learning systems, and analytically indescribable dynamical systems. Moreover, its efficient implementation platform makes it an appropriate choice for on-chip applications and prostheses. We apply this method to the Izhikevich, and FitzHugh–Nagumo neuron models as case studies, and investigate the dynamical behaviors of these circuits.
14

Zhou, Quan, Jakub Marecek, and Robert N. Shorten. "Fairness in Forecasting and Learning Linear Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11134–42. http://dx.doi.org/10.1609/aaai.v35i12.17328.

Abstract:
In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. When the amounts of training data for the subgroups are not controlled carefully, under-representation bias arises. We introduce two natural notions of subgroup fairness and instantaneous fairness to address such under-representation bias in time-series forecasting problems. In particular, we consider the subgroup-fair and instant-fair learning of a linear dynamical system (LDS) from multiple trajectories of varying lengths and the associated forecasting problems. We provide globally convergent methods for the learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate both the beneficial impact of fairness considerations on statistical performance and the encouraging effects of exploiting sparsity on run time.
15

Mezić, Igor. "Koopman Operator, Geometry, and Learning of Dynamical Systems". Notices of the American Mathematical Society 68, no. 07 (August 1, 2021): 1. http://dx.doi.org/10.1090/noti2306.
16

Monga, Bharat, and Jeff Moehlis. "Supervised learning algorithms for controlling underactuated dynamical systems". Physica D: Nonlinear Phenomena 412 (November 2020): 132621. http://dx.doi.org/10.1016/j.physd.2020.132621.
17

Kronander, K., M. Khansari, and A. Billard. "Incremental motion learning with locally modulated dynamical systems". Robotics and Autonomous Systems 70 (August 2015): 52–62. http://dx.doi.org/10.1016/j.robot.2015.03.010.
18

Tokuda, Isao, Ryuji Tokunaga, and Kazuyuki Aihara. "Back-propagation learning of infinite-dimensional dynamical systems". Neural Networks 16, no. 8 (October 2003): 1179–93. http://dx.doi.org/10.1016/s0893-6080(03)00076-5.
19

Sugie, Toshiharu, and Toshiro Ono. "An iterative learning control law for dynamical systems". Automatica 27, no. 4 (July 1991): 729–32. http://dx.doi.org/10.1016/0005-1098(91)90066-b.
20

Beek, P. J., and A. A. M. van Santvoord. "Learning the Cascade Juggle: A Dynamical Systems Analysis". Journal of Motor Behavior 24, no. 1 (March 1992): 85–94. http://dx.doi.org/10.1080/00222895.1992.9941604.
21

E, Weinan. "A Proposal on Machine Learning via Dynamical Systems". Communications in Mathematics and Statistics 5, no. 1 (March 2017): 1–11. http://dx.doi.org/10.1007/s40304-017-0103-z.
22

Brugarolas, Paul B., and Michael G. Safonov. "Learning about dynamical systems via unfalsification of hypotheses". International Journal of Robust and Nonlinear Control 14, no. 11 (April 20, 2004): 933–43. http://dx.doi.org/10.1002/rnc.924.
23

Giannakis, Dimitrios, Amelia Henriksen, Joel A. Tropp, and Rachel Ward. "Learning to Forecast Dynamical Systems from Streaming Data". SIAM Journal on Applied Dynamical Systems 22, no. 2 (May 5, 2023): 527–58. http://dx.doi.org/10.1137/21m144983x.
24

Modi, Aditya, Mohamad Kazem Shirani Faradonbeh, Ambuj Tewari, and George Michailidis. "Joint learning of linear time-invariant dynamical systems". Automatica 164 (June 2024): 111635. http://dx.doi.org/10.1016/j.automatica.2024.111635.
25

Horbacz, Katarzyna. "Random dynamical systems with jumps". Journal of Applied Probability 41, no. 3 (September 2004): 890–910. http://dx.doi.org/10.1239/jap/1091543432.

Abstract:
We consider random dynamical systems with randomly chosen jumps on infinite-dimensional spaces. The choice of deterministic dynamical systems and jumps depends on a position. The system generalizes dynamical systems corresponding to learning systems, Poisson driven stochastic differential equations, iterated function system with infinite family of transformations and random evolutions. We will show that distributions which describe the dynamics of this system converge to an invariant distribution. We use recent results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by T. Szarek.
26

Horbacz, Katarzyna. "Random dynamical systems with jumps". Journal of Applied Probability 41, no. 03 (September 2004): 890–910. http://dx.doi.org/10.1017/s0021900200020611.

Abstract:
We consider random dynamical systems with randomly chosen jumps on infinite-dimensional spaces. The choice of deterministic dynamical systems and jumps depends on a position. The system generalizes dynamical systems corresponding to learning systems, Poisson driven stochastic differential equations, iterated function system with infinite family of transformations and random evolutions. We will show that distributions which describe the dynamics of this system converge to an invariant distribution. We use recent results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by T. Szarek.
27

Jena, Amit, Dileep Kalathil, and Le Xie. "Meta-Learning-Based Adaptive Stability Certificates for Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12801–9. http://dx.doi.org/10.1609/aaai.v38i11.29176.

Abstract:
This paper addresses the problem of Neural Network (NN) based adaptive stability certification in a dynamical system. The state-of-the-art methods, such as Neural Lyapunov Functions (NLFs), use NN-based formulations to assess the stability of a non-linear dynamical system and compute a Region of Attraction (ROA) in the state space. However, under parametric uncertainty, if the values of system parameters vary over time, the NLF methods fail to adapt to such changes and may lead to conservative stability assessment performance. We circumvent this issue by integrating Model Agnostic Meta-learning (MAML) with NLFs and propose meta-NLFs. In this process, we train a meta-function that adapts to any parametric shifts and updates into an NLF for the system with new test-time parameter values. We demonstrate the stability assessment performance of meta-NLFs on some standard benchmark autonomous dynamical systems.
28

Feng, Lingyu, Ting Gao, Min Dai, and Jinqiao Duan. "Learning effective dynamics from data-driven stochastic systems". Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 4 (April 2023): 043131. http://dx.doi.org/10.1063/5.0126667.

Abstract:
Multiscale stochastic dynamical systems have been widely adopted to a variety of scientific and engineering problems due to their capability of depicting complex phenomena in many real-world applications. This work is devoted to investigating the effective dynamics for slow–fast stochastic dynamical systems. Given observation data on a short-term period satisfying some unknown slow–fast stochastic systems, we propose a novel algorithm, including a neural network called Auto-SDE, to learn an invariant slow manifold. Our approach captures the evolutionary nature of a series of time-dependent autoencoder neural networks with the loss constructed from a discretized stochastic differential equation. Our algorithm is also validated to be accurate, stable, and effective through numerical experiments under various evaluation metrics.
29

Ell, Shawn W., and F. Gregory Ashby. "Dynamical trajectories in category learning". Perception & Psychophysics 66, no. 8 (November 2004): 1318–40. http://dx.doi.org/10.3758/bf03195001.
30

Vereijken, B., H. T. A. Whiting, and W. J. Beek. "A Dynamical Systems Approach to Skill Acquisition". Quarterly Journal of Experimental Psychology Section A 45, no. 2 (August 1992): 323–44. http://dx.doi.org/10.1080/14640749208401329.

Abstract:
This paper argues that the answer to the question, what has to be learned, needs to be established before the question, how is it learned, can be meaningfully addressed. Based on this conviction, some of the limitations of current and past research on skill acquisition are discussed. Motivated by the dynamical systems approach, the question of “what has to be learned” was tackled by setting up a non-linear mathematical model of the task (i.e. learning to make sideways movements on a ski apparatus). On the basis of this model, the phase lag between movements of the platform of the apparatus and the actions of the subject was isolated as an ensemble variable reflecting the timing of the subject in relation to the dynamics of the apparatus. This variable was subsequently used to study “how” the task was learned in a discovery learning experiment, in which predictions stemming from the model were tested and confirmed. Overall, these findings provided support for the hypothesis, formulated by Bernstein (1967), that one of the important effects of practice is learning to make use of reactive forces, thereby reducing the need for active muscular forces. In addition, the data from a previous learning experiment on the ski apparatus—the results of which had been equivocal—were reconsidered. The use of phase lag as a dependent variable provided a resolution of those findings. On the basis of the confirmatory testing of predictions stemming from the model and the clarification of findings from a previous experiment, it is argued that the dynamical systems approach put forward here provides a powerful method for pursuing issues in skill acquisition. Suggestions are made as to how this approach can be used to systematically pursue the questions that arise as a natural outcome of the experimental evidence presented here.
31

Ijspeert, Auke Jan, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, and Stefan Schaal. "Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors". Neural Computation 25, no. 2 (February 2013): 328–73. http://dx.doi.org/10.1162/neco_a_00393.

Abstract:
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics.
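The letter's core construction, a linear point attractor plus a learnable phase-dependent forcing term, can be sketched in a few dozen lines. This is a simplified one-dimensional variant with arbitrarily chosen gains and a synthetic minimum-jerk demonstration, not the authors' reference implementation:

```python
import numpy as np

# Demonstration: a minimum-jerk reach from 0 to 1 over one second.
ts = np.linspace(0.0, 1.0, 1001)
dt = ts[1] - ts[0]
u = ts
x_demo = 10 * u**3 - 15 * u**4 + 6 * u**5
xd_demo = np.gradient(x_demo, dt)
xdd_demo = np.gradient(xd_demo, dt)
x0, g = x_demo[0], x_demo[-1]

# Transformation system: xdd = K*(g - x) - D*xd + f(s),
# a critically damped spring-damper plus a learnable forcing term.
K, D = 100.0, 20.0
alpha_s = 4.0
s = np.exp(-alpha_s * ts)          # canonical phase, decays 1 -> ~0

# Normalised Gaussian basis functions of the phase; the extra factor s
# makes the forcing vanish near the goal, leaving a pure point attractor.
n_basis = 20
c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
h = 1.0 / (np.gradient(c) ** 2)

def forcing_features(s_val):
    psi = np.exp(-h * (s_val - c) ** 2)
    return psi * s_val / psi.sum()

# Invert the transformation system on the demo to get the target forcing,
# then learn the basis weights by least squares.
f_target = xdd_demo - K * (g - x_demo) + D * xd_demo
Phi = np.array([forcing_features(sv) for sv in s])
w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)

# Rollout: integrating the learned DMP reproduces the demonstrated reach.
x, xd = x0, 0.0
traj = [x]
for sv in s[1:]:
    xdd = K * (g - x) - D * xd + forcing_features(sv) @ w
    xd += xdd * dt
    x += xd * dt
    traj.append(x)
traj = np.array(traj)
```

Because the forcing term decays with the phase, the rollout always converges to the goal g even if the learned weights are imperfect, which is the stability property the abstract emphasises.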
32

Gabriel, Nicholas, and Neil F. Johnson. "Using Neural Architectures to Model Complex Dynamical Systems". Advances in Artificial Intelligence and Machine Learning 02, no. 02 (2022): 366–84. http://dx.doi.org/10.54364/aaiml.2022.1124.

Abstract:
The natural, physical, and social worlds abound with feedback processes that make the challenge of modeling the underlying system an extremely complex one. This paper proposes an end-to-end deep learning approach to modelling such so-called complex systems, which addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. It is well known that deep learning (DL) has had remarkable success in leveraging large amounts of unstructured data into downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to the graph-structured data of complex systems. However, DL methods still appear largely disjointed from established scientific knowledge, and the contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning and, more generally, the emerging discipline of SciML. Modelling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach to the previously distinct fields of graph representation learning and SciML.
33

Forgione, Marco, and Dario Piga. "dynoNet: A neural network architecture for learning dynamical systems". International Journal of Adaptive Control and Signal Processing 35, no. 4 (January 14, 2021): 612–26. http://dx.doi.org/10.1002/acs.3216.
34

Xiao, Wenxin, Armin Lederer, and Sandra Hirche. "Learning Stable Nonparametric Dynamical Systems with Gaussian Process Regression". IFAC-PapersOnLine 53, no. 2 (2020): 1194–99. http://dx.doi.org/10.1016/j.ifacol.2020.12.1335.
35

Chen, Ruilin, Xiaowei Jin, Shujin Laima, Yong Huang, and Hui Li. "Intelligent modeling of nonlinear dynamical systems by machine learning". International Journal of Non-Linear Mechanics 142 (June 2022): 103984. http://dx.doi.org/10.1016/j.ijnonlinmec.2022.103984.
36

Qin, Zengyi, Dawei Sun, and Chuchu Fan. "Sablas: Learning Safe Control for Black-Box Dynamical Systems". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 1928–35. http://dx.doi.org/10.1109/lra.2022.3142743.
37

Pulch, Roland, and Maha Youssef. "MACHINE LEARNING FOR TRAJECTORIES OF PARAMETRIC NONLINEAR DYNAMICAL SYSTEMS". Journal of Machine Learning for Modeling and Computing 1, no. 1 (2020): 75–95. http://dx.doi.org/10.1615/jmachlearnmodelcomput.2020034093.
38

Chu, S. R., and R. Shoureshi. "Applications of neural networks in learning of dynamical systems". IEEE Transactions on Systems, Man, and Cybernetics 22, no. 1 (1992): 161–64. http://dx.doi.org/10.1109/21.141320.
39

Khansari-Zadeh, S. Mohammad, and Aude Billard. "Learning Stable Nonlinear Dynamical Systems With Gaussian Mixture Models". IEEE Transactions on Robotics 27, no. 5 (October 2011): 943–57. http://dx.doi.org/10.1109/tro.2011.2159412.
40

Mukhopadhyay, Sumona, and Santo Banerjee. "Learning dynamical systems in noise using convolutional neural networks". Chaos: An Interdisciplinary Journal of Nonlinear Science 30, no. 10 (October 2020): 103125. http://dx.doi.org/10.1063/5.0009326.
41

Sugie, T., and T. Ono. "On an Iterative Learning Control Law for Dynamical Systems". IFAC Proceedings Volumes 20, no. 5 (July 1987): 339–44. http://dx.doi.org/10.1016/s1474-6670(17)55109-5.
42

Kimura, M., and R. Nakano. "Learning dynamical systems by recurrent neural networks from orbits". Neural Networks 11, no. 9 (December 1998): 1589–99. http://dx.doi.org/10.1016/s0893-6080(98)00098-7.
43

Berwald, Jesse, Tomáš Gedeon, and John Sheppard. "Using machine learning to predict catastrophes in dynamical systems". Journal of Computational and Applied Mathematics 236, no. 9 (March 2012): 2235–45. http://dx.doi.org/10.1016/j.cam.2011.11.006.
44

Talmon, Ronen, Stephane Mallat, Hitten Zaveri, and Ronald R. Coifman. "Manifold Learning for Latent Variable Inference in Dynamical Systems". IEEE Transactions on Signal Processing 63, no. 15 (August 2015): 3843–56. http://dx.doi.org/10.1109/tsp.2015.2432731.
45

Zhao, Qingye, Yi Zhang, and Xuandong Li. "Safe reinforcement learning for dynamical systems using barrier certificates". Connection Science 34, no. 1 (December 12, 2022): 2822–44. http://dx.doi.org/10.1080/09540091.2022.2151567.
46

Kelso, J. A. S. "Anticipatory dynamical systems, intrinsic pattern dynamics and skill learning". Human Movement Science 10, no. 1 (February 1991): 93–111. http://dx.doi.org/10.1016/0167-9457(91)90034-u.
47

Gauthier, Daniel J., Ingo Fischer, and André Röhm. "Learning unseen coexisting attractors". Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 11 (November 2022): 113107. http://dx.doi.org/10.1063/5.0116784.

Abstract:
Reservoir computing is a machine learning approach that can generate a surrogate model of a dynamical system. It can learn the underlying dynamical system using fewer trainable parameters and, hence, smaller training data sets than competing approaches. Recently, a simpler formulation, known as next-generation reservoir computing, removed many algorithm metaparameters and identified a well-performing traditional reservoir computer, thus simplifying training even further. Here, we study a particularly challenging problem of learning a dynamical system that has both disparate time scales and multiple co-existing dynamical states (attractors). We compare the next-generation and traditional reservoir computer using metrics quantifying the geometry of the ground-truth and forecasted attractors. For the studied four-dimensional system, the next-generation reservoir computing approach uses [Formula: see text] less training data, requires [Formula: see text] shorter “warmup” time, has fewer metaparameters, and has an [Formula: see text] higher accuracy in predicting the co-existing attractor characteristics in comparison to a traditional reservoir computer. Furthermore, we demonstrate that it predicts the basin of attraction with high accuracy. This work lends further support to the superior learning ability of this new machine learning algorithm for dynamical systems.
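Roughly speaking, the next-generation formulation mentioned in this abstract replaces a recurrent reservoir with a regression on time-delay taps and their polynomial products. A toy sketch (illustrative only: the logistic map, two taps, and the ridge parameter are assumptions, not the paper's multi-attractor benchmark):

```python
import numpy as np

# Data: a chaotic scalar series from the logistic map x <- r*x*(1-x).
r = 3.9
x = np.empty(600)
x[0] = 0.4
for n in range(599):
    x[n + 1] = r * x[n] * (1.0 - x[n])

# NG-RC style feature vector: constant + k delay taps + their unique
# pairwise products (6 features for k = 2). No recurrent reservoir.
k = 2
def features(series, n):
    taps = series[n - k + 1 : n + 1][::-1]             # x_n, x_{n-1}
    quad = np.outer(taps, taps)[np.triu_indices(k)]    # x_n^2, x_n*x_{n-1}, x_{n-1}^2
    return np.concatenate(([1.0], taps, quad))

# Ridge regression for the one-step forecast x_{n+1} ~= W . phi_n.
n_train = 400
Phi = np.array([features(x, n) for n in range(k - 1, n_train)])
Y = x[k:n_train + 1]
lam = 1e-8
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y)

# Held-out one-step forecasts: the quadratic map lies inside the
# feature class, so the fit is essentially exact.
pred = np.array([features(x, n) @ W for n in range(n_train, 599)])
test_err = np.abs(pred - x[n_train + 1:600]).max()
```

The trainable part is a single linear solve, which is why the approach needs so little training data and has so few metaparameters compared with a traditional reservoir computer.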
48

Pontes-Filho, Sidney, Pedro Lind, Anis Yazidi, Jianhua Zhang, Hugo Hammer, Gustavo B. M. Mello, Ioanna Sandvig, Gunnar Tufte und Stefano Nichele. „A neuro-inspired general framework for the evolution of stochastic dynamical systems: Cellular automata, random Boolean networks and echo state networks towards criticality“. Cognitive Neurodynamics 14, Nr. 5 (11.06.2020): 657–74. http://dx.doi.org/10.1007/s11571-020-09600-x.

Full text of the source
Annotation:
Although deep learning has recently increased in popularity, it suffers from various problems, including high computational complexity, energy-greedy computation, and lack of scalability, to mention a few. In this paper, we investigate an alternative brain-inspired method for data analysis that circumvents the deep learning drawbacks by taking the actual dynamical behavior of biological neural networks into account. For this purpose, we develop a general framework for dynamical systems that can evolve and model a variety of substrates that possess computational capacity. Such dynamical systems can then be exploited in the reservoir computing paradigm, i.e., an untrained recurrent nonlinear network with a trained linear readout layer. Moreover, our general framework, called EvoDynamic, is based on an optimized deep neural network library, so generalization and performance can be balanced. The EvoDynamic framework already implements three kinds of dynamical systems, namely cellular automata, random Boolean networks, and echo state networks. The evolution of such systems towards a dynamical behavior called criticality is investigated, because systems with such behavior may be better suited to perform useful computation. The implemented dynamical systems are stochastic, and their evolution with a genetic algorithm mutates their update rules or network initialization. The obtained results are promising and demonstrate that criticality is achieved. In addition to the presented results, our framework can also be utilized to evolve the dynamical systems' connectivity, update rules, and learning rules, improving the quality of the reservoir used for solving computational tasks and modeling physical substrates.
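A toy version of the evolutionary setup described above — a population of stochastic dynamical systems (here, elementary cellular automata) whose rule tables a genetic algorithm mutates toward a criticality proxy — might look like the sketch below. The fitness measure (mean activity near 0.5, i.e. neither frozen nor saturated) is a stand-in assumption, far simpler than the criticality measures EvoDynamic actually uses.

```python
import numpy as np

def run_eca(rule_bits, state, steps):
    """Iterate an elementary (radius-1, binary) cellular automaton whose
    update rule is given as an 8-entry lookup table."""
    history = [state]
    for _ in range(steps):
        left, centre, right = np.roll(state, 1), state, np.roll(state, -1)
        state = rule_bits[4 * left + 2 * centre + right]
        history.append(state)
    return np.array(history)

def fitness(rule_bits, rng, width=64, steps=64):
    """Assumed criticality proxy: reward long-run mean activity near 0.5."""
    state = rng.integers(0, 2, width)
    hist = run_eca(rule_bits, state, steps)
    return -abs(hist[steps // 2:].mean() - 0.5)

# Mutation-only genetic algorithm over 8-bit rule tables
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, (20, 8))
for generation in range(30):
    scores = np.array([fitness(rule, rng) for rule in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the fitter half
    children = parents.copy()
    children[rng.random(children.shape) < 0.1] ^= 1  # mutate: flip rule bits
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(rule, rng) for rule in pop])]
```

The same loop structure carries over to larger substrates (random Boolean networks, echo state networks) by swapping out `run_eca` and the mutated genome.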
APA, Harvard, Vancouver, ISO and other citation styles
49

Sharma, Shalini, and Angshul Majumdar. "Sequential Transform Learning". ACM Transactions on Knowledge Discovery from Data 15, no. 5 (26.06.2021): 1–18. http://dx.doi.org/10.1145/3447394.

Full text of the source
Annotation:
This work proposes a new approach for dynamical modeling, which we call sequential transform learning. It is loosely based on the transform (analysis dictionary) learning formulation and is the first work on this topic. Transform learning was originally developed for static problems; we modify it to model dynamical systems by introducing a feedback loop. The learnt transform coefficients for the t-th instant are fed back along with the (t+1)-th sample, thereby establishing a Markovian relationship. Furthermore, the formulation is made supervised by a label-consistency cost. Our approach keeps the best of both worlds, marrying the interpretability and uncertainty measures of signal processing with the function-approximation ability of neural networks. We have carried out experiments on one of the most challenging problems in dynamical modeling, stock forecasting. Benchmarking against the state of the art shows that our method outperforms the rest.
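The Markovian feedback recursion described above can be sketched as a forward pass in which a transform analyses the current sample while the previous coefficients are fed back. The soft-thresholding nonlinearity and the names `T`, `F`, and `lam` are illustrative assumptions; the paper's actual training objective (including the label-consistency cost) is not reproduced here.

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise shrinkage, a common way to keep coefficients sparse."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sequential_transform(X, T, F, lam=0.1):
    """Forward pass of the assumed recursion: the transform T analyses
    sample x_t while F feeds back the previous coefficients,
        z_t = soft_threshold(T x_t + F z_{t-1}, lam),
    establishing the Markovian relationship between instants."""
    z = np.zeros(T.shape[0])
    Z = []
    for x in X:
        z = soft_threshold(T @ x + F @ z, lam)
        Z.append(z)
    return np.array(Z)

# Usage: map a sequence of 6-D samples to 4-D sparse coefficient vectors
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 6))
T = 0.1 * rng.standard_normal((4, 6))
F = 0.1 * rng.standard_normal((4, 4))
Z = sequential_transform(X, T, F)
```

In a supervised setting, `T`, `F`, and a readout on `Z` would be learned jointly; the sketch only shows the inference-time recursion.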
APA, Harvard, Vancouver, ISO and other citation styles
50

Duan, Jianghua, Yongsheng Ou, Jianbing Hu, Zhiyang Wang, Shaokun Jin and Chao Xu. "Fast and Stable Learning of Dynamical Systems Based on Extreme Learning Machine". IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 6 (June 2019): 1175–85. http://dx.doi.org/10.1109/tsmc.2017.2705279.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
