Journal articles on the topic "Learning dynamical systems"

Follow this link to see other types of publications on the topic: Learning dynamical systems.

Create a precise reference in APA, MLA, Chicago, Harvard, and other styles.


Consult the top 50 journal articles for your research on the topic "Learning dynamical systems".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Hein, Helle, and Ulo Lepik. "LEARNING TRAJECTORIES OF DYNAMICAL SYSTEMS". Mathematical Modelling and Analysis 17, no. 4 (September 1, 2012): 519–31. http://dx.doi.org/10.3846/13926292.2012.706654.

Abstract:
The aim of the present paper is to describe a method that is capable of adjusting the parameters of a dynamical system so that the trajectories gain certain specified properties. Three problems are considered: (i) learning fixed points, (ii) learning periodic trajectories, (iii) restrictions on the trajectories. An error function, which measures the discrepancy between the actual and desired trajectories, is introduced. Numerical results of several examples, which illustrate the efficiency of the method, are presented.
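The error-function idea in this abstract can be sketched numerically. Below is a minimal, hypothetical stand-in (not the authors' method or code): a single parameter `a` of the toy system dx/dt = a − x² is tuned by finite-difference gradient descent so that the trajectory ends near a desired fixed point.

```python
def simulate(a, x0=0.5, dt=0.01, steps=500):
    # Euler-integrate the toy system dx/dt = a - x**2
    x = x0
    for _ in range(steps):
        x += dt * (a - x * x)
    return x

def trajectory_error(a, x_desired=1.0):
    # discrepancy between the actual end state and the desired fixed point
    return (simulate(a) - x_desired) ** 2

# adjust the parameter by finite-difference gradient descent on the error
a, lr, eps = 0.1, 0.5, 1e-5
for _ in range(200):
    grad = (trajectory_error(a + eps) - trajectory_error(a - eps)) / (2 * eps)
    a -= lr * grad

# dx/dt = a - x**2 has the stable fixed point sqrt(a), so a should end near 1
```

The same pattern (simulate, score against the desired trajectory property, adjust parameters) extends to the periodic and constrained cases the abstract lists, with a different error function.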
2

Khadivar, Farshad, Ilaria Lauzana, and Aude Billard. "Learning dynamical systems with bifurcations". Robotics and Autonomous Systems 136 (February 2021): 103700. http://dx.doi.org/10.1016/j.robot.2020.103700.
3

Berry, Tyrus, and Suddhasattwa Das. "Learning Theory for Dynamical Systems". SIAM Journal on Applied Dynamical Systems 22, no. 3 (August 8, 2023): 2082–122. http://dx.doi.org/10.1137/22m1516865.
4

Roy, Sayan, and Debanjan Rana. "Machine Learning in Nonlinear Dynamical Systems". Resonance 26, no. 7 (July 2021): 953–70. http://dx.doi.org/10.1007/s12045-021-1194-0.
5

WANG, CONG, TIANRUI CHEN, GUANRONG CHEN, and DAVID J. HILL. "DETERMINISTIC LEARNING OF NONLINEAR DYNAMICAL SYSTEMS". International Journal of Bifurcation and Chaos 19, no. 04 (April 2009): 1307–28. http://dx.doi.org/10.1142/s0218127409023640.

Abstract:
In this paper, we investigate the problem of identifying or modeling nonlinear dynamical systems undergoing periodic and period-like (recurrent) motions. For accurate identification of nonlinear dynamical systems, the persistent excitation condition is normally required to be satisfied. Firstly, by using localized radial basis function networks, a relationship between the recurrent trajectories and the persistence of excitation condition is established. Secondly, for a broad class of recurrent trajectories generated from nonlinear dynamical systems, a deterministic learning approach is presented which achieves locally accurate identification of the underlying system dynamics in a local region along the recurrent trajectory. This study reveals that even for a random-like chaotic trajectory, which is extremely sensitive to initial conditions and is long-term unpredictable, the dynamics of a nonlinear chaotic system can still be identified in a locally accurate manner along the chaotic trajectory in a deterministic way. Numerical experiments on the Rossler system are included to demonstrate the effectiveness of the proposed approach.
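As a rough illustration of identification with localized radial basis function networks along a recurrent trajectory (a sketch under simplifying assumptions, not the paper's deterministic learning algorithm), one can fit sampled derivatives on a circular orbit with Gaussian RBFs centered on the trajectory itself:

```python
import numpy as np

# True dynamics on the unit circle: d(x2)/dt = -x1 (harmonic oscillator).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])   # recurrent trajectory
dx2 = -X[:, 0]                                        # sampled derivative values

# localized Gaussian RBFs, centered on points of the trajectory itself
centers = X[::10]
width = 0.5

def phi(points):
    # design matrix of RBF activations for the given points
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = phi(X)
w, *_ = np.linalg.lstsq(Phi, dx2, rcond=None)         # linear-in-weights fit
fit_err = np.max(np.abs(Phi @ w - dx2))               # accuracy along the orbit
```

The approximation is only claimed along the sampled trajectory, mirroring the "locally accurate" flavor of the result; far from the orbit the RBF activations vanish and the fit says nothing.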
6

Ahmadi, Amir Ali, and Bachir El Khadir. "Learning Dynamical Systems with Side Information". SIAM Review 65, no. 1 (February 2023): 183–223. http://dx.doi.org/10.1137/20m1388644.
7

Grigoryeva, Lyudmila, Allen Hart, and Juan-Pablo Ortega. "Learning strange attractors with reservoir systems". Nonlinearity 36, no. 9 (July 27, 2023): 4674–708. http://dx.doi.org/10.1088/1361-6544/ace492.

Abstract:
This paper shows that the celebrated embedding theorem of Takens is a particular case of a much more general statement according to which randomly generated linear state-space representations of generic observations of an invertible dynamical system carry in their wake an embedding of the phase space dynamics into the chosen Euclidean state space. This embedding coincides with a natural generalized synchronization that arises in this setup and that yields a topological conjugacy between the state-space dynamics driven by the generic observations of the dynamical system and the dynamical system itself. This result provides additional tools for the representation, learning, and analysis of chaotic attractors and sheds additional light on the reservoir computing phenomenon that appears in the context of recurrent neural networks.
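The generalized synchronization the abstract describes can be illustrated with a randomly generated linear state-space system: driven by the same observation sequence, two copies started from different initial states converge, so the state becomes a function of the input history alone. A minimal toy check (all constants and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = rng.normal(size=(n, n))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius below 1
c = rng.normal(size=n)

def drive(state, inputs):
    # linear state-space system: x[k+1] = A @ x[k] + c * u[k]
    for u in inputs:
        state = A @ state + c * u
    return state

# scalar observations of a simple invertible system (irrational circle rotation)
theta = 2 * np.pi * np.sqrt(2) * np.arange(500)
u = np.cos(theta)

x1 = drive(rng.normal(size=n), u)
x2 = drive(rng.normal(size=n), u)
gap = np.linalg.norm(x1 - x2)   # initial conditions are forgotten
```

The vanishing gap is the echo-state/synchronization property; the paper's contribution is showing that the resulting input-to-state map is generically an embedding of the underlying attractor.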
8

Davids, Keith. "Learning design for Nonlinear Dynamical Movement Systems". Open Sports Sciences Journal 5, no. 1 (September 13, 2012): 9–16. http://dx.doi.org/10.2174/1875399x01205010009.
9

Campi, M. C., and P. R. Kumar. "Learning dynamical systems in a stationary environment". Systems & Control Letters 34, no. 3 (June 1998): 125–32. http://dx.doi.org/10.1016/s0167-6911(98)00005-x.
10

Rajendra, P., and V. Brahmajirao. "Modeling of dynamical systems through deep learning". Biophysical Reviews 12, no. 6 (November 22, 2020): 1311–20. http://dx.doi.org/10.1007/s12551-020-00776-4.
11

Cheng, Sen, and Philip N. Sabes. "Modeling Sensorimotor Learning with Linear Dynamical Systems". Neural Computation 18, no. 4 (April 1, 2006): 760–93. http://dx.doi.org/10.1162/neco.2006.18.4.760.

Abstract:
Recent studies have employed simple linear dynamical systems to model trial-by-trial dynamics in various sensorimotor learning tasks. Here we explore the theoretical and practical considerations that arise when employing the general class of linear dynamical systems (LDS) as a model for sensorimotor learning. In this framework, the state of the system is a set of parameters that define the current sensorimotor transformation, i.e., the function that maps sensory inputs to motor outputs. The class of LDS models provides a first-order approximation for any Markovian (state-dependent) learning rule that specifies the changes in the sensorimotor transformation that result from sensory feedback on each movement. We show that modeling the trial-by-trial dynamics of learning provides a substantially enhanced picture of the process of adaptation compared to measurements of the steady state of adaptation derived from more traditional blocked-exposure experiments. Specifically, these models can be used to quantify sensory and performance biases, the extent to which learned changes in the sensorimotor transformation decay over time, and the portion of motor variability due to either learning or performance variability. We show that previous attempts to fit such models with linear regression have not generally yielded consistent parameter estimates. Instead, we present an expectation-maximization algorithm for fitting LDS models to experimental data and describe the difficulties inherent in estimating the parameters associated with feedback-driven learning. Finally, we demonstrate the application of these methods in a simple sensorimotor learning experiment: adaptation to shifted visual feedback during reaching.
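A minimal sketch of the trial-by-trial LDS view (illustrative only: the paper fits latent-state models by EM, whereas this noise-free example recovers the parameters of a fully observed scalar model by plain least squares):

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar trial-by-trial model: x[t+1] = a*x[t] + b*err[t],
# where x is the sensorimotor state and err is feedback on trial t.
a_true, b_true = 0.95, -0.3
T = 200
err = rng.normal(size=T)          # feedback signal on each trial
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = a_true * x[t] + b_true * err[t]

# With fully observed, noise-free states, (a, b) drop out of least squares;
# with latent or noisy states one needs EM, as the paper argues.
X = np.column_stack([x[:-1], err])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
a_hat, b_hat = coef
```

Here `a_hat` captures retention (decay of the learned change) and `b_hat` the feedback-driven learning rate, the two quantities the abstract highlights.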
12

Qiu, Zirou, Abhijin Adiga, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard E. Stearns, and Anil Vullikanti. "Learning the Topology and Behavior of Discrete Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14722–30. http://dx.doi.org/10.1609/aaai.v38i13.29390.

Abstract:
Discrete dynamical systems are commonly used to model the spread of contagions on real-world networks. Under the PAC framework, existing research has studied the problem of learning the behavior of a system, assuming that the underlying network is known. In this work, we focus on a more challenging setting: to learn both the behavior and the underlying topology of a black-box system. We show that, in general, this learning problem is computationally intractable. On the positive side, we present efficient learning methods under the PAC model when the underlying graph of the dynamical system belongs to certain classes. Further, we examine a relaxed setting where the topology of an unknown system is partially observed. For this case, we develop an efficient PAC learner to infer the system and establish the sample complexity. Lastly, we present a formal analysis of the expressive power of the hypothesis class of dynamical systems where both the topology and behavior are unknown, using the well-known Natarajan dimension formalism. Our results provide a theoretical foundation for learning both the topology and behavior of discrete dynamical systems.
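The class of systems in question can be made concrete with a tiny synchronous threshold system on a known graph (an illustration of the model class only; the paper's subject, learning an unknown graph and behavior from observations, is not shown):

```python
# Adjacency lists and per-node thresholds of a hypothetical 4-node contagion.
edges = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
thresh = {0: 1, 1: 2, 2: 1, 3: 1}   # node fires once this many neighbors are on

def step(state):
    # synchronous update: a node turns (and stays) on once enough neighbors are on
    return tuple(
        1 if state[i] or sum(state[j] for j in edges[i]) >= thresh[i] else 0
        for i in range(len(state))
    )

# seed node 0 and iterate the progressive dynamics to a fixed point
state = (1, 0, 0, 0)
while (nxt := step(state)) != state:
    state = nxt
```

A PAC learner in the paper's partially observed setting would be shown only trajectories of such `state` sequences and would have to infer both `edges` and `thresh`.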
13

Bavandpour, Mohammad, Hamid Soleimani, Saeed Bagheri-Shouraki, Arash Ahmadi, Derek Abbott, and Leon O. Chua. "Cellular Memristive Dynamical Systems (CMDS)". International Journal of Bifurcation and Chaos 24, no. 05 (May 2014): 1430016. http://dx.doi.org/10.1142/s021812741430016x.

Abstract:
This study presents a cellular-based mapping for a special class of dynamical systems for embedding neuron models, by exploiting an efficient memristor crossbar-based circuit for its implementation. The resultant reconfigurable memristive dynamical circuit exhibits various bifurcation phenomena, and responses that are characteristic of dynamical systems. High programmability of the circuit enables it to be applied to real-time applications, learning systems, and analytically indescribable dynamical systems. Moreover, its efficient implementation platform makes it an appropriate choice for on-chip applications and prostheses. We apply this method to the Izhikevich, and FitzHugh–Nagumo neuron models as case studies, and investigate the dynamical behaviors of these circuits.
14

Zhou, Quan, Jakub Marecek, and Robert N. Shorten. "Fairness in Forecasting and Learning Linear Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11134–42. http://dx.doi.org/10.1609/aaai.v35i12.17328.

Abstract:
In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. When the amounts of training data for the subgroups are not controlled carefully, under-representation bias arises. We introduce two natural notions of subgroup fairness and instantaneous fairness to address such under-representation bias in time-series forecasting problems. In particular, we consider the subgroup-fair and instant-fair learning of a linear dynamical system (LDS) from multiple trajectories of varying lengths and the associated forecasting problems. We provide globally convergent methods for the learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate both the beneficial impact of fairness considerations on statistical performance and the encouraging effects of exploiting sparsity on run time.
15

Mezić, Igor. "Koopman Operator, Geometry, and Learning of Dynamical Systems". Notices of the American Mathematical Society 68, no. 07 (August 1, 2021): 1. http://dx.doi.org/10.1090/noti2306.
16

Monga, Bharat, and Jeff Moehlis. "Supervised learning algorithms for controlling underactuated dynamical systems". Physica D: Nonlinear Phenomena 412 (November 2020): 132621. http://dx.doi.org/10.1016/j.physd.2020.132621.
17

Kronander, K., M. Khansari, and A. Billard. "Incremental motion learning with locally modulated dynamical systems". Robotics and Autonomous Systems 70 (August 2015): 52–62. http://dx.doi.org/10.1016/j.robot.2015.03.010.
18

Tokuda, Isao, Ryuji Tokunaga, and Kazuyuki Aihara. "Back-propagation learning of infinite-dimensional dynamical systems". Neural Networks 16, no. 8 (October 2003): 1179–93. http://dx.doi.org/10.1016/s0893-6080(03)00076-5.
19

Sugie, Toshiharu, and Toshiro Ono. "An iterative learning control law for dynamical systems". Automatica 27, no. 4 (July 1991): 729–32. http://dx.doi.org/10.1016/0005-1098(91)90066-b.
20

Beek, P. J., and A. A. M. van Santvoord. "Learning the Cascade Juggle: A Dynamical Systems Analysis". Journal of Motor Behavior 24, no. 1 (March 1992): 85–94. http://dx.doi.org/10.1080/00222895.1992.9941604.
21

E, Weinan. "A Proposal on Machine Learning via Dynamical Systems". Communications in Mathematics and Statistics 5, no. 1 (March 2017): 1–11. http://dx.doi.org/10.1007/s40304-017-0103-z.
22

B. Brugarolas, Paul, and Michael G. Safonov. "Learning about dynamical systems via unfalsification of hypotheses". International Journal of Robust and Nonlinear Control 14, no. 11 (April 20, 2004): 933–43. http://dx.doi.org/10.1002/rnc.924.
23

Giannakis, Dimitrios, Amelia Henriksen, Joel A. Tropp, and Rachel Ward. "Learning to Forecast Dynamical Systems from Streaming Data". SIAM Journal on Applied Dynamical Systems 22, no. 2 (May 5, 2023): 527–58. http://dx.doi.org/10.1137/21m144983x.
24

Modi, Aditya, Mohamad Kazem Shirani Faradonbeh, Ambuj Tewari, and George Michailidis. "Joint learning of linear time-invariant dynamical systems". Automatica 164 (June 2024): 111635. http://dx.doi.org/10.1016/j.automatica.2024.111635.
25

Horbacz, Katarzyna. "Random dynamical systems with jumps". Journal of Applied Probability 41, no. 3 (September 2004): 890–910. http://dx.doi.org/10.1239/jap/1091543432.

Abstract:
We consider random dynamical systems with randomly chosen jumps on infinite-dimensional spaces. The choice of deterministic dynamical systems and jumps depends on a position. The system generalizes dynamical systems corresponding to learning systems, Poisson driven stochastic differential equations, iterated function system with infinite family of transformations and random evolutions. We will show that distributions which describe the dynamics of this system converge to an invariant distribution. We use recent results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by T. Szarek.
26

Horbacz, Katarzyna. "Random dynamical systems with jumps". Journal of Applied Probability 41, no. 03 (September 2004): 890–910. http://dx.doi.org/10.1017/s0021900200020611.

Abstract:
We consider random dynamical systems with randomly chosen jumps on infinite-dimensional spaces. The choice of deterministic dynamical systems and jumps depends on a position. The system generalizes dynamical systems corresponding to learning systems, Poisson driven stochastic differential equations, iterated function system with infinite family of transformations and random evolutions. We will show that distributions which describe the dynamics of this system converge to an invariant distribution. We use recent results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by T. Szarek.
27

Jena, Amit, Dileep Kalathil, and Le Xie. "Meta-Learning-Based Adaptive Stability Certificates for Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12801–9. http://dx.doi.org/10.1609/aaai.v38i11.29176.

Abstract:
This paper addresses the problem of Neural Network (NN) based adaptive stability certification in a dynamical system. The state-of-the-art methods, such as Neural Lyapunov Functions (NLFs), use NN-based formulations to assess the stability of a non-linear dynamical system and compute a Region of Attraction (ROA) in the state space. However, under parametric uncertainty, if the values of system parameters vary over time, the NLF methods fail to adapt to such changes and may lead to conservative stability assessment performance. We circumvent this issue by integrating Model Agnostic Meta-learning (MAML) with NLFs and propose meta-NLFs. In this process, we train a meta-function that adapts to any parametric shifts and updates into an NLF for the system with new test-time parameter values. We demonstrate the stability assessment performance of meta-NLFs on some standard benchmark autonomous dynamical systems.
28

Feng, Lingyu, Ting Gao, Min Dai, and Jinqiao Duan. "Learning effective dynamics from data-driven stochastic systems". Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 4 (April 2023): 043131. http://dx.doi.org/10.1063/5.0126667.

Abstract:
Multiscale stochastic dynamical systems have been widely adopted to a variety of scientific and engineering problems due to their capability of depicting complex phenomena in many real-world applications. This work is devoted to investigating the effective dynamics for slow–fast stochastic dynamical systems. Given observation data on a short-term period satisfying some unknown slow–fast stochastic systems, we propose a novel algorithm, including a neural network called Auto-SDE, to learn an invariant slow manifold. Our approach captures the evolutionary nature of a series of time-dependent autoencoder neural networks with the loss constructed from a discretized stochastic differential equation. Our algorithm is also validated to be accurate, stable, and effective through numerical experiments under various evaluation metrics.
29

Ell, Shawn W., and F. Gregory Ashby. "Dynamical trajectories in category learning". Perception & Psychophysics 66, no. 8 (November 2004): 1318–40. http://dx.doi.org/10.3758/bf03195001.
30

Vereijken, B., H. T. A. Whiting, and W. J. Beek. "A Dynamical Systems Approach to Skill Acquisition". Quarterly Journal of Experimental Psychology Section A 45, no. 2 (August 1992): 323–44. http://dx.doi.org/10.1080/14640749208401329.

Abstract:
This paper argues that the answer to the question, what has to be learned, needs to be established before the question, how is it learned, can be meaningfully addressed. Based on this conviction, some of the limitations of current and past research on skill acquisition are discussed. Motivated by the dynamical systems approach, the question of “what has to be learned” was tackled by setting up a non-linear mathematical model of the task (i.e. learning to make sideways movements on a ski apparatus). On the basis of this model, the phase lag between movements of the platform of the apparatus and the actions of the subject was isolated as an ensemble variable reflecting the timing of the subject in relation to the dynamics of the apparatus. This variable was subsequently used to study “how” the task was learned in a discovery learning experiment, in which predictions stemming from the model were tested and confirmed. Overall, these findings provided support for the hypothesis, formulated by Bernstein (1967), that one of the important effects of practice is learning to make use of reactive forces, thereby reducing the need for active muscular forces. In addition, the data from a previous learning experiment on the ski apparatus—the results of which had been equivocal—were reconsidered. The use of phase lag as a dependent variable provided a resolution of those findings. On the basis of the confirmatory testing of predictions stemming from the model and the clarification of findings from a previous experiment, it is argued that the dynamical systems approach put forward here provides a powerful method for pursuing issues in skill acquisition. Suggestions are made as to how this approach can be used to systematically pursue the questions that arise as a natural outcome of the experimental evidence presented here.
31

Ijspeert, Auke Jan, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, and Stefan Schaal. "Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors". Neural Computation 25, no. 2 (February 2013): 328–73. http://dx.doi.org/10.1162/neco_a_00393.

Abstract:
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics.
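The transformation-system core of a dynamical movement primitive can be sketched in a few lines. With the learnable forcing term set to zero, the system below reduces to a critically damped point attractor that converges to the goal g (constants follow common choices in the DMP literature; this is an illustration, not the authors' implementation):

```python
# Transformation system of a discrete DMP, forcing term omitted (f = 0):
#   tau * dz/dt = alpha_z * (beta_z * (g - y) - z) + f
#   tau * dy/dt = z
alpha_z, beta_z = 25.0, 25.0 / 4.0   # critically damped spring-damper choice
tau, g, dt = 1.0, 1.0, 0.001

y, z = 0.0, 0.0                      # start position and scaled velocity
for _ in range(10000):               # Euler-integrate for 10 seconds
    z += dt * alpha_z * (beta_z * (g - y) - z) / tau
    y += dt * z / tau
# y has converged to the goal attractor g
```

Learning enters by adding a phase-dependent forcing term f, fitted (e.g., by locally weighted regression over RBFs of the canonical phase) so the otherwise-linear attractor reproduces a demonstrated trajectory while keeping the guaranteed convergence to g.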
32

Gabriel, Nicholas, and Neil F. Johnson. "Using Neural Architectures to Model Complex Dynamical Systems". Advances in Artificial Intelligence and Machine Learning 02, no. 02 (2022): 366–84. http://dx.doi.org/10.54364/aaiml.2022.1124.

Abstract:
The natural, physical and social worlds abound with feedback processes that make the challenge of modeling the underlying system an extremely complex one. This paper proposes an end-to-end deep learning approach to modelling such so-called complex systems which addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. It is well known that deep learning (DL) has had remarkable success in leveraging large amounts of unstructured data into downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to graph-structured data of complex systems. However, DL methods still appear largely disjoint from established scientific knowledge, and their contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning, and more generally, the emerging discipline of SciML. Modelling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach to the previously distinct fields of graph representation learning and SciML.
33

Forgione, Marco, and Dario Piga. "dynoNet: A neural network architecture for learning dynamical systems". International Journal of Adaptive Control and Signal Processing 35, no. 4 (January 14, 2021): 612–26. http://dx.doi.org/10.1002/acs.3216.
34

Xiao, Wenxin, Armin Lederer, and Sandra Hirche. "Learning Stable Nonparametric Dynamical Systems with Gaussian Process Regression". IFAC-PapersOnLine 53, no. 2 (2020): 1194–99. http://dx.doi.org/10.1016/j.ifacol.2020.12.1335.
35

Chen, Ruilin, Xiaowei Jin, Shujin Laima, Yong Huang, and Hui Li. "Intelligent modeling of nonlinear dynamical systems by machine learning". International Journal of Non-Linear Mechanics 142 (June 2022): 103984. http://dx.doi.org/10.1016/j.ijnonlinmec.2022.103984.
36

Qin, Zengyi, Dawei Sun, and Chuchu Fan. "Sablas: Learning Safe Control for Black-Box Dynamical Systems". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 1928–35. http://dx.doi.org/10.1109/lra.2022.3142743.
37

Pulch, Roland, and Maha Youssef. "MACHINE LEARNING FOR TRAJECTORIES OF PARAMETRIC NONLINEAR DYNAMICAL SYSTEMS". Journal of Machine Learning for Modeling and Computing 1, no. 1 (2020): 75–95. http://dx.doi.org/10.1615/jmachlearnmodelcomput.2020034093.
38

Chu, S. R., and R. Shoureshi. "Applications of neural networks in learning of dynamical systems". IEEE Transactions on Systems, Man, and Cybernetics 22, no. 1 (1992): 161–64. http://dx.doi.org/10.1109/21.141320.
39

Khansari-Zadeh, S. Mohammad, and Aude Billard. "Learning Stable Nonlinear Dynamical Systems With Gaussian Mixture Models". IEEE Transactions on Robotics 27, no. 5 (October 2011): 943–57. http://dx.doi.org/10.1109/tro.2011.2159412.
40

Mukhopadhyay, Sumona, and Santo Banerjee. "Learning dynamical systems in noise using convolutional neural networks". Chaos: An Interdisciplinary Journal of Nonlinear Science 30, no. 10 (October 2020): 103125. http://dx.doi.org/10.1063/5.0009326.
41

Sugie, T., and T. Ono. "On an Iterative Learning Control Law for Dynamical Systems". IFAC Proceedings Volumes 20, no. 5 (July 1987): 339–44. http://dx.doi.org/10.1016/s1474-6670(17)55109-5.
42

Kimura, M., and R. Nakano. "Learning dynamical systems by recurrent neural networks from orbits". Neural Networks 11, no. 9 (December 1998): 1589–99. http://dx.doi.org/10.1016/s0893-6080(98)00098-7.
43

Berwald, Jesse, Tomáš Gedeon, and John Sheppard. "Using machine learning to predict catastrophes in dynamical systems". Journal of Computational and Applied Mathematics 236, no. 9 (March 2012): 2235–45. http://dx.doi.org/10.1016/j.cam.2011.11.006.
44

Talmon, Ronen, Stephane Mallat, Hitten Zaveri, and Ronald R. Coifman. "Manifold Learning for Latent Variable Inference in Dynamical Systems". IEEE Transactions on Signal Processing 63, no. 15 (August 2015): 3843–56. http://dx.doi.org/10.1109/tsp.2015.2432731.
45

Zhao, Qingye, Yi Zhang, and Xuandong Li. "Safe reinforcement learning for dynamical systems using barrier certificates". Connection Science 34, no. 1 (December 12, 2022): 2822–44. http://dx.doi.org/10.1080/09540091.2022.2151567.
46

Kelso, J. A. S. "Anticipatory dynamical systems, intrinsic pattern dynamics and skill learning". Human Movement Science 10, no. 1 (February 1991): 93–111. http://dx.doi.org/10.1016/0167-9457(91)90034-u.
47

Gauthier, Daniel J., Ingo Fischer, and André Röhm. "Learning unseen coexisting attractors". Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 11 (November 2022): 113107. http://dx.doi.org/10.1063/5.0116784.

Abstract:
Reservoir computing is a machine learning approach that can generate a surrogate model of a dynamical system. It can learn the underlying dynamical system using fewer trainable parameters and, hence, smaller training data sets than competing approaches. Recently, a simpler formulation, known as next-generation reservoir computing, removed many algorithm metaparameters and identified a well-performing traditional reservoir computer, thus simplifying training even further. Here, we study a particularly challenging problem of learning a dynamical system that has both disparate time scales and multiple co-existing dynamical states (attractors). We compare the next-generation and traditional reservoir computer using metrics quantifying the geometry of the ground-truth and forecasted attractors. For the studied four-dimensional system, the next-generation reservoir computing approach uses less training data, requires a shorter "warmup" time, has fewer metaparameters, and achieves higher accuracy in predicting the co-existing attractor characteristics in comparison to a traditional reservoir computer. Furthermore, we demonstrate that it predicts the basin of attraction with high accuracy. This work lends further support to the superior learning ability of this new machine learning algorithm for dynamical systems.
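A toy version of the next-generation reservoir computing idea (a sketch with illustrative choices, not the paper's setup or system): features are built from delayed states plus quadratic monomials, then trained by ridge regression to predict the next step of a known map.

```python
import numpy as np

# Generate data from the logistic map as a stand-in dynamical system.
N = 1000
x = np.empty(N)
x[0] = 0.4
for n in range(N - 1):
    x[n + 1] = 3.8 * x[n] * (1.0 - x[n])

def features(xn, xm):
    # constant + two delay taps + all quadratic monomials of the taps
    return np.array([1.0, xn, xm, xn * xn, xn * xm, xm * xm])

Phi = np.stack([features(x[n], x[n - 1]) for n in range(1, N - 1)])
y = x[2:]                            # one-step-ahead targets
ridge = 1e-8                         # Tikhonov regularization
W = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(6), Phi.T @ y)

rmse = np.sqrt(np.mean((Phi @ W - y) ** 2))
```

The map x → 3.8x(1 − x) lies exactly in the span of these monomials, so the one-step error is essentially zero; for systems outside the feature span, the same recipe gives an approximate surrogate model.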
48

Pontes-Filho, Sidney, Pedro Lind, Anis Yazidi, Jianhua Zhang, Hugo Hammer, Gustavo B. M. Mello, Ioanna Sandvig, Gunnar Tufte, and Stefano Nichele. "A neuro-inspired general framework for the evolution of stochastic dynamical systems: Cellular automata, random Boolean networks and echo state networks towards criticality". Cognitive Neurodynamics 14, no. 5 (June 11, 2020): 657–74. http://dx.doi.org/10.1007/s11571-020-09600-x.

Abstract:
Although deep learning has recently increased in popularity, it suffers from various problems including high computational complexity, energy greedy computation, and lack of scalability, to mention a few. In this paper, we investigate an alternative brain-inspired method for data analysis that circumvents the deep learning drawbacks by taking the actual dynamical behavior of biological neural networks into account. For this purpose, we develop a general framework for dynamical systems that can evolve and model a variety of substrates that possess computational capacity. Therefore, dynamical systems can be exploited in the reservoir computing paradigm, i.e., an untrained recurrent nonlinear network with a trained linear readout layer. Moreover, our general framework, called EvoDynamic, is based on an optimized deep neural network library. Hence, generalization and performance can be balanced. The EvoDynamic framework contains three kinds of dynamical systems already implemented, namely cellular automata, random Boolean networks, and echo state networks. The evolution of such systems towards a dynamical behavior, called criticality, is investigated because systems with such behavior may be better suited to do useful computation. The implemented dynamical systems are stochastic, and their evolution with a genetic algorithm mutates their update rules or network initialization. The obtained results are promising and demonstrate that criticality is achieved. In addition to the presented results, our framework can also be utilized to evolve the dynamical systems' connectivity, update and learning rules to improve the quality of the reservoir used for solving computational tasks and physical substrate modeling.
49

Sharma, Shalini, and Angshul Majumdar. "Sequential Transform Learning". ACM Transactions on Knowledge Discovery from Data 15, no. 5 (June 26, 2021): 1–18. http://dx.doi.org/10.1145/3447394.

Abstract:
This work proposes a new approach for dynamical modeling; we call it sequential transform learning. This is loosely based on the transform (analysis dictionary) learning formulation. This is the first work on this topic. Transform learning was originally developed for static problems; we modify it to model dynamical systems by introducing a feedback loop. The learnt transform coefficients for the t-th instant are fed back along with the (t+1)-th sample, thereby establishing a Markovian relationship. Furthermore, the formulation is made supervised by the label consistency cost. Our approach keeps the best of two worlds, marrying the interpretability and uncertainty measure of signal processing with the function approximation ability of neural networks. We have carried out experiments on one of the most challenging problems in dynamical modeling: stock forecasting. Benchmarking with the state-of-the-art has shown that our method excels over the rest.
50

Duan, Jianghua, Yongsheng Ou, Jianbing Hu, Zhiyang Wang, Shaokun Jin, and Chao Xu. "Fast and Stable Learning of Dynamical Systems Based on Extreme Learning Machine". IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 6 (June 2019): 1175–85. http://dx.doi.org/10.1109/tsmc.2017.2705279.