Journal articles on the topic "Learning dynamical systems"

Click this link to see other types of publications on this topic: Learning dynamical systems.

Create a correct citation in APA, MLA, Chicago, Harvard, and many other styles.


Check out the top 50 journal articles on the topic "Learning dynamical systems".

An "Add to bibliography" button appears next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the work's abstract online, if the relevant details are available in the record's metadata.

Browse journal articles from various disciplines and compile an appropriate bibliography.

1

Hein, Helle, and Ulo Lepik. "Learning Trajectories of Dynamical Systems". Mathematical Modelling and Analysis 17, no. 4 (September 1, 2012): 519–31. http://dx.doi.org/10.3846/13926292.2012.706654.

Abstract:
The aim of the present paper is to describe a method that is capable of adjusting the parameters of a dynamical system so that its trajectories gain certain specified properties. Three problems are considered: (i) learning fixed points, (ii) learning periodic trajectories, (iii) restrictions on the trajectories. An error function, which measures the discrepancy between the actual and desired trajectories, is introduced. Numerical results for several examples, which illustrate the efficiency of the method, are presented.
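The error-function idea in the abstract above can be illustrated with a minimal, self-contained sketch (this is not the authors' method; the logistic map, the target fixed point, and all constants below are hypothetical choices for illustration): a parameter of a one-dimensional map is adjusted by finite-difference gradient descent until its trajectory settles on a desired fixed point.

```python
import numpy as np

def trajectory(r, x0=0.2, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

def error(r, target=0.5, tail=10):
    """Squared discrepancy between the trajectory's tail and the desired fixed point."""
    xs = trajectory(r)
    return float(np.mean((xs[-tail:] - target) ** 2))

# Adjust the parameter r by finite-difference gradient descent on the error function.
r, lr, h = 1.5, 2.0, 1e-5
for _ in range(200):
    g = (error(r + h) - error(r - h)) / (2 * h)
    r -= lr * g

print(r, error(r))  # r approaches 2.0, where the map's fixed point 1 - 1/r equals 0.5
```

Since the nonzero fixed point of the logistic map is 1 - 1/r, driving the error to zero for target 0.5 recovers r = 2; the same loop structure extends to the periodic-trajectory and constrained-trajectory cases by changing the error function.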
2

Khadivar, Farshad, Ilaria Lauzana, and Aude Billard. "Learning dynamical systems with bifurcations". Robotics and Autonomous Systems 136 (February 2021): 103700. http://dx.doi.org/10.1016/j.robot.2020.103700.

3

Berry, Tyrus, and Suddhasattwa Das. "Learning Theory for Dynamical Systems". SIAM Journal on Applied Dynamical Systems 22, no. 3 (August 8, 2023): 2082–122. http://dx.doi.org/10.1137/22m1516865.

4

Roy, Sayan, and Debanjan Rana. "Machine Learning in Nonlinear Dynamical Systems". Resonance 26, no. 7 (July 2021): 953–70. http://dx.doi.org/10.1007/s12045-021-1194-0.

5

Wang, Cong, Tianrui Chen, Guanrong Chen, and David J. Hill. "Deterministic Learning of Nonlinear Dynamical Systems". International Journal of Bifurcation and Chaos 19, no. 04 (April 2009): 1307–28. http://dx.doi.org/10.1142/s0218127409023640.

Abstract:
In this paper, we investigate the problem of identifying or modeling nonlinear dynamical systems undergoing periodic and period-like (recurrent) motions. For accurate identification of nonlinear dynamical systems, the persistent excitation condition is normally required to be satisfied. Firstly, by using localized radial basis function networks, a relationship between the recurrent trajectories and the persistence of excitation condition is established. Secondly, for a broad class of recurrent trajectories generated from nonlinear dynamical systems, a deterministic learning approach is presented which achieves locally accurate identification of the underlying system dynamics in a local region along the recurrent trajectory. This study reveals that even for a random-like chaotic trajectory, which is extremely sensitive to initial conditions and is long-term unpredictable, the dynamics of a nonlinear chaotic system can still be locally accurately identified along the chaotic trajectory in a deterministic way. Numerical experiments on the Rossler system are included to demonstrate the effectiveness of the proposed approach.
6

Ahmadi, Amir Ali, and Bachir El Khadir. "Learning Dynamical Systems with Side Information". SIAM Review 65, no. 1 (February 2023): 183–223. http://dx.doi.org/10.1137/20m1388644.

7

Grigoryeva, Lyudmila, Allen Hart, and Juan-Pablo Ortega. "Learning strange attractors with reservoir systems". Nonlinearity 36, no. 9 (July 27, 2023): 4674–708. http://dx.doi.org/10.1088/1361-6544/ace492.

Abstract:
This paper shows that the celebrated embedding theorem of Takens is a particular case of a much more general statement, according to which randomly generated linear state-space representations of generic observations of an invertible dynamical system carry in their wake an embedding of the phase space dynamics into the chosen Euclidean state space. This embedding coincides with a natural generalized synchronization that arises in this setup and that yields a topological conjugacy between the state-space dynamics driven by the generic observations of the dynamical system and the dynamical system itself. This result provides additional tools for the representation, learning, and analysis of chaotic attractors and sheds additional light on the reservoir computing phenomenon that appears in the context of recurrent neural networks.
8

Davids, Keith. "Learning design for Nonlinear Dynamical Movement Systems". Open Sports Sciences Journal 5, no. 1 (September 13, 2012): 9–16. http://dx.doi.org/10.2174/1875399x01205010009.

9

Campi, M. C., and P. R. Kumar. "Learning dynamical systems in a stationary environment". Systems & Control Letters 34, no. 3 (June 1998): 125–32. http://dx.doi.org/10.1016/s0167-6911(98)00005-x.

10

Rajendra, P., and V. Brahmajirao. "Modeling of dynamical systems through deep learning". Biophysical Reviews 12, no. 6 (November 22, 2020): 1311–20. http://dx.doi.org/10.1007/s12551-020-00776-4.

11

Cheng, Sen, and Philip N. Sabes. "Modeling Sensorimotor Learning with Linear Dynamical Systems". Neural Computation 18, no. 4 (April 1, 2006): 760–93. http://dx.doi.org/10.1162/neco.2006.18.4.760.

Abstract:
Recent studies have employed simple linear dynamical systems to model trial-by-trial dynamics in various sensorimotor learning tasks. Here we explore the theoretical and practical considerations that arise when employing the general class of linear dynamical systems (LDS) as a model for sensorimotor learning. In this framework, the state of the system is a set of parameters that define the current sensorimotor transformation: the function that maps sensory inputs to motor outputs. The class of LDS models provides a first-order approximation for any Markovian (state-dependent) learning rule that specifies the changes in the sensorimotor transformation that result from sensory feedback on each movement. We show that modeling the trial-by-trial dynamics of learning provides a substantially enhanced picture of the process of adaptation compared to measurements of the steady state of adaptation derived from more traditional blocked-exposure experiments. Specifically, these models can be used to quantify sensory and performance biases, the extent to which learned changes in the sensorimotor transformation decay over time, and the portion of motor variability due to either learning or performance variability. We show that previous attempts to fit such models with linear regression have not generally yielded consistent parameter estimates. Instead, we present an expectation-maximization algorithm for fitting LDS models to experimental data and describe the difficulties inherent in estimating the parameters associated with feedback-driven learning. Finally, we demonstrate the application of these methods in a simple sensorimotor learning experiment: adaptation to shifted visual feedback during reaching.
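To make the abstract's setup concrete, here is a minimal trial-by-trial simulation in the same spirit (a generic illustration only, not the paper's model or its expectation-maximization fitting; the scalar state, the constants, and the regression step are all invented for this sketch): the state is an internal estimate of a visual shift, updated on each trial from feedback error, and the learning rate is then recovered from the simulated trials by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
shift, B, noise = 8.0, 0.2, 0.05   # true visual shift, learning rate, state-noise scale
x = np.zeros(201)                  # internal estimate of the shift (the LDS state), trial by trial
for t in range(200):
    err = shift - x[t]             # feedback error observed on trial t
    x[t + 1] = x[t] + B * err + noise * rng.standard_normal()

# Recover the learning rate by least squares: delta-x is approximately B * error.
errors = shift - x[:-1]
deltas = np.diff(x)
B_hat = float(errors @ deltas / (errors @ errors))
print(B_hat)                       # close to the true learning rate B = 0.2
```

In this toy scalar case the regression is well behaved; with the richer LDS models the abstract describes (vector states, observation noise), naive linear regression no longer yields consistent estimates, which is why the paper develops an expectation-maximization fit instead.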
12

Qiu, Zirou, Abhijin Adiga, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard E. Stearns, and Anil Vullikanti. "Learning the Topology and Behavior of Discrete Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14722–30. http://dx.doi.org/10.1609/aaai.v38i13.29390.

Abstract:
Discrete dynamical systems are commonly used to model the spread of contagions on real-world networks. Under the PAC framework, existing research has studied the problem of learning the behavior of a system, assuming that the underlying network is known. In this work, we focus on a more challenging setting: to learn both the behavior and the underlying topology of a black-box system. We show that, in general, this learning problem is computationally intractable. On the positive side, we present efficient learning methods under the PAC model when the underlying graph of the dynamical system belongs to certain classes. Further, we examine a relaxed setting where the topology of an unknown system is partially observed. For this case, we develop an efficient PAC learner to infer the system and establish the sample complexity. Lastly, we present a formal analysis of the expressive power of the hypothesis class of dynamical systems where both the topology and behavior are unknown, using the well-known Natarajan dimension formalism. Our results provide a theoretical foundation for learning both the topology and behavior of discrete dynamical systems.
13

Bavandpour, Mohammad, Hamid Soleimani, Saeed Bagheri-Shouraki, Arash Ahmadi, Derek Abbott, and Leon O. Chua. "Cellular Memristive Dynamical Systems (CMDS)". International Journal of Bifurcation and Chaos 24, no. 05 (May 2014): 1430016. http://dx.doi.org/10.1142/s021812741430016x.

Abstract:
This study presents a cellular-based mapping for a special class of dynamical systems for embedding neuron models, by exploiting an efficient memristor crossbar-based circuit for its implementation. The resultant reconfigurable memristive dynamical circuit exhibits various bifurcation phenomena, and responses that are characteristic of dynamical systems. High programmability of the circuit enables it to be applied to real-time applications, learning systems, and analytically indescribable dynamical systems. Moreover, its efficient implementation platform makes it an appropriate choice for on-chip applications and prostheses. We apply this method to the Izhikevich, and FitzHugh–Nagumo neuron models as case studies, and investigate the dynamical behaviors of these circuits.
14

Zhou, Quan, Jakub Marecek, and Robert N. Shorten. "Fairness in Forecasting and Learning Linear Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11134–42. http://dx.doi.org/10.1609/aaai.v35i12.17328.

Abstract:
In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. When the amounts of training data for the subgroups are not controlled carefully, under-representation bias arises. We introduce two natural notions of subgroup fairness and instantaneous fairness to address such under-representation bias in time-series forecasting problems. In particular, we consider the subgroup-fair and instant-fair learning of a linear dynamical system (LDS) from multiple trajectories of varying lengths and the associated forecasting problems. We provide globally convergent methods for the learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate both the beneficial impact of fairness considerations on statistical performance and the encouraging effects of exploiting sparsity on run time.
15

Mezić, Igor. "Koopman Operator, Geometry, and Learning of Dynamical Systems". Notices of the American Mathematical Society 68, no. 07 (August 1, 2021): 1. http://dx.doi.org/10.1090/noti2306.

16

Monga, Bharat, and Jeff Moehlis. "Supervised learning algorithms for controlling underactuated dynamical systems". Physica D: Nonlinear Phenomena 412 (November 2020): 132621. http://dx.doi.org/10.1016/j.physd.2020.132621.

17

Kronander, K., M. Khansari, and A. Billard. "Incremental motion learning with locally modulated dynamical systems". Robotics and Autonomous Systems 70 (August 2015): 52–62. http://dx.doi.org/10.1016/j.robot.2015.03.010.

18

Tokuda, Isao, Ryuji Tokunaga, and Kazuyuki Aihara. "Back-propagation learning of infinite-dimensional dynamical systems". Neural Networks 16, no. 8 (October 2003): 1179–93. http://dx.doi.org/10.1016/s0893-6080(03)00076-5.

19

Sugie, Toshiharu, and Toshiro Ono. "An iterative learning control law for dynamical systems". Automatica 27, no. 4 (July 1991): 729–32. http://dx.doi.org/10.1016/0005-1098(91)90066-b.

20

Beek, P. J., and A. A. M. van Santvoord. "Learning the Cascade Juggle: A Dynamical Systems Analysis". Journal of Motor Behavior 24, no. 1 (March 1992): 85–94. http://dx.doi.org/10.1080/00222895.1992.9941604.

21

E, Weinan. "A Proposal on Machine Learning via Dynamical Systems". Communications in Mathematics and Statistics 5, no. 1 (March 2017): 1–11. http://dx.doi.org/10.1007/s40304-017-0103-z.

22

Brugarolas, Paul B., and Michael G. Safonov. "Learning about dynamical systems via unfalsification of hypotheses". International Journal of Robust and Nonlinear Control 14, no. 11 (April 20, 2004): 933–43. http://dx.doi.org/10.1002/rnc.924.

23

Giannakis, Dimitrios, Amelia Henriksen, Joel A. Tropp, and Rachel Ward. "Learning to Forecast Dynamical Systems from Streaming Data". SIAM Journal on Applied Dynamical Systems 22, no. 2 (May 5, 2023): 527–58. http://dx.doi.org/10.1137/21m144983x.

24

Modi, Aditya, Mohamad Kazem Shirani Faradonbeh, Ambuj Tewari, and George Michailidis. "Joint learning of linear time-invariant dynamical systems". Automatica 164 (June 2024): 111635. http://dx.doi.org/10.1016/j.automatica.2024.111635.

25

Horbacz, Katarzyna. "Random dynamical systems with jumps". Journal of Applied Probability 41, no. 3 (September 2004): 890–910. http://dx.doi.org/10.1239/jap/1091543432.

Abstract:
We consider random dynamical systems with randomly chosen jumps on infinite-dimensional spaces. The choice of deterministic dynamical systems and jumps depends on a position. The system generalizes dynamical systems corresponding to learning systems, Poisson driven stochastic differential equations, iterated function system with infinite family of transformations and random evolutions. We will show that distributions which describe the dynamics of this system converge to an invariant distribution. We use recent results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by T. Szarek.
26

Horbacz, Katarzyna. "Random dynamical systems with jumps". Journal of Applied Probability 41, no. 03 (September 2004): 890–910. http://dx.doi.org/10.1017/s0021900200020611.

Abstract:
We consider random dynamical systems with randomly chosen jumps on infinite-dimensional spaces. The choice of deterministic dynamical systems and jumps depends on a position. The system generalizes dynamical systems corresponding to learning systems, Poisson driven stochastic differential equations, iterated function system with infinite family of transformations and random evolutions. We will show that distributions which describe the dynamics of this system converge to an invariant distribution. We use recent results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by T. Szarek.
27

Jena, Amit, Dileep Kalathil, and Le Xie. "Meta-Learning-Based Adaptive Stability Certificates for Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12801–9. http://dx.doi.org/10.1609/aaai.v38i11.29176.

Abstract:
This paper addresses the problem of Neural Network (NN) based adaptive stability certification in a dynamical system. The state-of-the-art methods, such as Neural Lyapunov Functions (NLFs), use NN-based formulations to assess the stability of a non-linear dynamical system and compute a Region of Attraction (ROA) in the state space. However, under parametric uncertainty, if the values of system parameters vary over time, the NLF methods fail to adapt to such changes and may lead to conservative stability assessment performance. We circumvent this issue by integrating Model Agnostic Meta-learning (MAML) with NLFs and propose meta-NLFs. In this process, we train a meta-function that adapts to any parametric shifts and updates into an NLF for the system with new test-time parameter values. We demonstrate the stability assessment performance of meta-NLFs on some standard benchmark autonomous dynamical systems.
28

Feng, Lingyu, Ting Gao, Min Dai, and Jinqiao Duan. "Learning effective dynamics from data-driven stochastic systems". Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 4 (April 2023): 043131. http://dx.doi.org/10.1063/5.0126667.

Abstract:
Multiscale stochastic dynamical systems have been widely adopted to a variety of scientific and engineering problems due to their capability of depicting complex phenomena in many real-world applications. This work is devoted to investigating the effective dynamics for slow–fast stochastic dynamical systems. Given observation data on a short-term period satisfying some unknown slow–fast stochastic systems, we propose a novel algorithm, including a neural network called Auto-SDE, to learn an invariant slow manifold. Our approach captures the evolutionary nature of a series of time-dependent autoencoder neural networks with the loss constructed from a discretized stochastic differential equation. Our algorithm is also validated to be accurate, stable, and effective through numerical experiments under various evaluation metrics.
29

Ell, Shawn W., and F. Gregory Ashby. "Dynamical trajectories in category learning". Perception & Psychophysics 66, no. 8 (November 2004): 1318–40. http://dx.doi.org/10.3758/bf03195001.

30

Vereijken, B., H. T. A. Whiting, and W. J. Beek. "A Dynamical Systems Approach to Skill Acquisition". Quarterly Journal of Experimental Psychology Section A 45, no. 2 (August 1992): 323–44. http://dx.doi.org/10.1080/14640749208401329.

Abstract:
This paper argues that the answer to the question, what has to be learned, needs to be established before the question, how is it learned, can be meaningfully addressed. Based on this conviction, some of the limitations of current and past research on skill acquisition are discussed. Motivated by the dynamical systems approach, the question of “what has to be learned” was tackled by setting up a non-linear mathematical model of the task (i.e. learning to make sideways movements on a ski apparatus). On the basis of this model, the phase lag between movements of the platform of the apparatus and the actions of the subject was isolated as an ensemble variable reflecting the timing of the subject in relation to the dynamics of the apparatus. This variable was subsequently used to study “how” the task was learned in a discovery learning experiment, in which predictions stemming from the model were tested and confirmed. Overall, these findings provided support for the hypothesis, formulated by Bernstein (1967), that one of the important effects of practice is learning to make use of reactive forces, thereby reducing the need for active muscular forces. In addition, the data from a previous learning experiment on the ski apparatus—the results of which had been equivocal—were reconsidered. The use of phase lag as a dependent variable provided a resolution of those findings. On the basis of the confirmatory testing of predictions stemming from the model and the clarification of findings from a previous experiment, it is argued that the dynamical systems approach put forward here provides a powerful method for pursuing issues in skill acquisition. Suggestions are made as to how this approach can be used to systematically pursue the questions that arise as a natural outcome of the experimental evidence presented here.
31

Ijspeert, Auke Jan, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, and Stefan Schaal. "Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors". Neural Computation 25, no. 2 (February 2013): 328–73. http://dx.doi.org/10.1162/neco_a_00393.

Abstract:
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics.
32

Gabriel, Nicholas, and Neil F. Johnson. "Using Neural Architectures to Model Complex Dynamical Systems". Advances in Artificial Intelligence and Machine Learning 02, no. 02 (2022): 366–84. http://dx.doi.org/10.54364/aaiml.2022.1124.

Abstract:
The natural, physical and social worlds abound with feedback processes that make the challenge of modeling the underlying system an extremely complex one. This paper proposes an end-to-end deep learning approach to modelling such so-called complex systems, which addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. It is well known that deep learning (DL) has had remarkable success in leveraging large amounts of unstructured data into downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to graph structured data of complex systems. However, DL methods still appear largely disjointed with established scientific knowledge, and the contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning, and more generally, the emerging discipline of SciML. Modelling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach to the previously distinct fields of graph representation learning and SciML.
33

Forgione, Marco, and Dario Piga. "dynoNet: A neural network architecture for learning dynamical systems". International Journal of Adaptive Control and Signal Processing 35, no. 4 (January 14, 2021): 612–26. http://dx.doi.org/10.1002/acs.3216.

34

Xiao, Wenxin, Armin Lederer, and Sandra Hirche. "Learning Stable Nonparametric Dynamical Systems with Gaussian Process Regression". IFAC-PapersOnLine 53, no. 2 (2020): 1194–99. http://dx.doi.org/10.1016/j.ifacol.2020.12.1335.

35

Chen, Ruilin, Xiaowei Jin, Shujin Laima, Yong Huang, and Hui Li. "Intelligent modeling of nonlinear dynamical systems by machine learning". International Journal of Non-Linear Mechanics 142 (June 2022): 103984. http://dx.doi.org/10.1016/j.ijnonlinmec.2022.103984.

36

Qin, Zengyi, Dawei Sun, and Chuchu Fan. "Sablas: Learning Safe Control for Black-Box Dynamical Systems". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 1928–35. http://dx.doi.org/10.1109/lra.2022.3142743.

37

Pulch, Roland, and Maha Youssef. "Machine Learning for Trajectories of Parametric Nonlinear Dynamical Systems". Journal of Machine Learning for Modeling and Computing 1, no. 1 (2020): 75–95. http://dx.doi.org/10.1615/jmachlearnmodelcomput.2020034093.

38

Chu, S. R., and R. Shoureshi. "Applications of neural networks in learning of dynamical systems". IEEE Transactions on Systems, Man, and Cybernetics 22, no. 1 (1992): 161–64. http://dx.doi.org/10.1109/21.141320.

39

Khansari-Zadeh, S. Mohammad, and Aude Billard. "Learning Stable Nonlinear Dynamical Systems With Gaussian Mixture Models". IEEE Transactions on Robotics 27, no. 5 (October 2011): 943–57. http://dx.doi.org/10.1109/tro.2011.2159412.

40

Mukhopadhyay, Sumona, and Santo Banerjee. "Learning dynamical systems in noise using convolutional neural networks". Chaos: An Interdisciplinary Journal of Nonlinear Science 30, no. 10 (October 2020): 103125. http://dx.doi.org/10.1063/5.0009326.

41

Sugie, T., and T. Ono. "On an Iterative Learning Control Law for Dynamical Systems". IFAC Proceedings Volumes 20, no. 5 (July 1987): 339–44. http://dx.doi.org/10.1016/s1474-6670(17)55109-5.

42

Kimura, M., and R. Nakano. "Learning dynamical systems by recurrent neural networks from orbits". Neural Networks 11, no. 9 (December 1998): 1589–99. http://dx.doi.org/10.1016/s0893-6080(98)00098-7.

43

Berwald, Jesse, Tomáš Gedeon, and John Sheppard. "Using machine learning to predict catastrophes in dynamical systems". Journal of Computational and Applied Mathematics 236, no. 9 (March 2012): 2235–45. http://dx.doi.org/10.1016/j.cam.2011.11.006.

44

Talmon, Ronen, Stephane Mallat, Hitten Zaveri, and Ronald R. Coifman. "Manifold Learning for Latent Variable Inference in Dynamical Systems". IEEE Transactions on Signal Processing 63, no. 15 (August 2015): 3843–56. http://dx.doi.org/10.1109/tsp.2015.2432731.

45

Zhao, Qingye, Yi Zhang, and Xuandong Li. "Safe reinforcement learning for dynamical systems using barrier certificates". Connection Science 34, no. 1 (December 12, 2022): 2822–44. http://dx.doi.org/10.1080/09540091.2022.2151567.

46

Kelso, J. A. S. "Anticipatory dynamical systems, intrinsic pattern dynamics and skill learning". Human Movement Science 10, no. 1 (February 1991): 93–111. http://dx.doi.org/10.1016/0167-9457(91)90034-u.

47

Gauthier, Daniel J., Ingo Fischer, and André Röhm. "Learning unseen coexisting attractors". Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 11 (November 2022): 113107. http://dx.doi.org/10.1063/5.0116784.

Abstract:
Reservoir computing is a machine learning approach that can generate a surrogate model of a dynamical system. It can learn the underlying dynamical system using fewer trainable parameters and, hence, smaller training data sets than competing approaches. Recently, a simpler formulation, known as next-generation reservoir computing, removed many algorithm metaparameters and identified a well-performing traditional reservoir computer, thus simplifying training even further. Here, we study a particularly challenging problem of learning a dynamical system that has both disparate time scales and multiple co-existing dynamical states (attractors). We compare the next-generation and traditional reservoir computer using metrics quantifying the geometry of the ground-truth and forecasted attractors. For the studied four-dimensional system, the next-generation reservoir computing approach uses [Formula: see text] less training data, requires [Formula: see text] shorter “warmup” time, has fewer metaparameters, and has an [Formula: see text] higher accuracy in predicting the co-existing attractor characteristics in comparison to a traditional reservoir computer. Furthermore, we demonstrate that it predicts the basin of attraction with high accuracy. This work lends further support to the superior learning ability of this new machine learning algorithm for dynamical systems.
48

Pontes-Filho, Sidney, Pedro Lind, Anis Yazidi, Jianhua Zhang, Hugo Hammer, Gustavo B. M. Mello, Ioanna Sandvig, Gunnar Tufte, and Stefano Nichele. "A neuro-inspired general framework for the evolution of stochastic dynamical systems: Cellular automata, random Boolean networks and echo state networks towards criticality". Cognitive Neurodynamics 14, no. 5 (June 11, 2020): 657–74. http://dx.doi.org/10.1007/s11571-020-09600-x.

Abstract:
Although deep learning has recently increased in popularity, it suffers from various problems including high computational complexity, energy-greedy computation, and lack of scalability, to mention a few. In this paper, we investigate an alternative brain-inspired method for data analysis that circumvents the deep learning drawbacks by taking the actual dynamical behavior of biological neural networks into account. For this purpose, we develop a general framework for dynamical systems that can evolve and model a variety of substrates that possess computational capacity. Therefore, dynamical systems can be exploited in the reservoir computing paradigm, i.e., an untrained recurrent nonlinear network with a trained linear readout layer. Moreover, our general framework, called EvoDynamic, is based on an optimized deep neural network library. Hence, generalization and performance can be balanced. The EvoDynamic framework contains three kinds of dynamical systems already implemented, namely cellular automata, random Boolean networks, and echo state networks. The evolution of such systems towards a dynamical behavior, called criticality, is investigated because systems with such behavior may be better suited to do useful computation. The implemented dynamical systems are stochastic and their evolution with genetic algorithm mutates their update rules or network initialization. The obtained results are promising and demonstrate that criticality is achieved. In addition to the presented results, our framework can also be utilized to evolve the dynamical systems connectivity, update and learning rules to improve the quality of the reservoir used for solving computational tasks and physical substrate modeling.
49

Sharma, Shalini, and Angshul Majumdar. "Sequential Transform Learning". ACM Transactions on Knowledge Discovery from Data 15, no. 5 (June 26, 2021): 1–18. http://dx.doi.org/10.1145/3447394.

Abstract:
This work proposes a new approach for dynamical modeling; we call it sequential transform learning. This is loosely based on the transform (analysis dictionary) learning formulation. This is the first work on this topic. Transform learning was originally developed for static problems; we modify it to model dynamical systems by introducing a feedback loop. The learnt transform coefficients for the t-th instant are fed back along with the (t+1)-th sample, thereby establishing a Markovian relationship. Furthermore, the formulation is made supervised by the label consistency cost. Our approach keeps the best of two worlds, marrying the interpretability and uncertainty measure of signal processing with the function approximation ability of neural networks. We have carried out experiments on one of the most challenging problems in dynamical modeling: stock forecasting. Benchmarking with the state-of-the-art has shown that our method excels over the rest.
50

Duan, Jianghua, Yongsheng Ou, Jianbing Hu, Zhiyang Wang, Shaokun Jin, and Chao Xu. "Fast and Stable Learning of Dynamical Systems Based on Extreme Learning Machine". IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 6 (June 2019): 1175–85. http://dx.doi.org/10.1109/tsmc.2017.2705279.
