To view the other types of publications on this topic, follow the link: Chaotic Recurrent Neural Networks.

Journal articles on the topic "Chaotic Recurrent Neural Networks"

Cite a source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Chaotic Recurrent Neural Networks".

Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and a bibliographic entry for the chosen work is generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read its online abstract whenever the relevant details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Jeff, and Raymond Lee. "Chaotic Recurrent Neural Networks for Financial Forecast." American Journal of Neural Networks and Applications 7, no. 1 (2021): 7. http://dx.doi.org/10.11648/j.ajnna.20210701.12.
2

Marković, Dimitrije, and Claudius Gros. "Intrinsic Adaptation in Autonomous Recurrent Neural Networks." Neural Computation 24, no. 2 (February 2012): 523–40. http://dx.doi.org/10.1162/neco_a_00232.

Abstract:
A massively recurrent neural network responds, on one side, to input stimuli and is, on the other side, autonomously active in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
3

Wang, Xing-Yuan, and Yi Zhang. "Chaotic diagonal recurrent neural network." Chinese Physics B 21, no. 3 (March 2012): 038703. http://dx.doi.org/10.1088/1674-1056/21/3/038703.
4

Dong, En Zeng, Yang Du, Cheng Cheng Li, and Zai Ping Chen. "Image Encryption Scheme Based on Dual Hyper-Chaotic Recurrent Neural Networks." Key Engineering Materials 474-476 (April 2011): 599–604. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.599.

Abstract:
Based on two hyper-chaotic recurrent neural networks, a new image encryption scheme is presented in this paper. In the encryption scheme, the shuffling matrix is generated by using a Hopfield neural network, which is used to shuffle the pixel locations; the diffusing matrix is generated by using a cellular neural network, which is used to diffuse the pixel grey values by an XOR operation. Finally, through numerical simulation and security analysis, the effectiveness of the encryption scheme is verified. Due to the complex dynamical behavior of the hyper-chaotic systems, the encryption scheme has the advantage of a large secret key space and high security, and can resist brute-force attacks and statistical attacks effectively.
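The shuffle-then-diffuse structure described in this abstract can be illustrated with a toy sketch. This is not the paper's method: a plain logistic map stands in for the Hopfield and cellular neural networks, and all function names and key values are hypothetical.

```python
import numpy as np

def chaotic_orbit(x0, n, r=3.99):
    # Logistic-map orbit; a stand-in for the paper's hyper-chaotic networks.
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(img, key=(0.11, 0.42)):
    flat = img.ravel()
    n = flat.size
    perm = np.argsort(chaotic_orbit(key[0], n))                # shuffle pixel locations
    mask = (chaotic_orbit(key[1], n) * 256).astype(np.uint8)   # diffuse grey values by XOR
    return flat[perm] ^ mask

def decrypt(cipher, shape, key=(0.11, 0.42)):
    n = cipher.size
    perm = np.argsort(chaotic_orbit(key[0], n))                # regenerate from the key
    mask = (chaotic_orbit(key[1], n) * 256).astype(np.uint8)
    flat = np.empty_like(cipher)
    flat[perm] = cipher ^ mask                                 # undo diffusion, then shuffling
    return flat.reshape(shape)
```

Sensitivity to initial conditions is what such schemes rely on: a key differing in the tenth decimal place produces an unrelated permutation and mask, giving the large effective key space the authors mention.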
5

Kandıran, Engin, and Avadis Hacınlıyan. "Comparison of Feedforward and Recurrent Neural Network in Forecasting Chaotic Dynamical System." AJIT-e Online Academic Journal of Information Technology 10, no. 37 (April 1, 2019): 31–44. http://dx.doi.org/10.5824/1309-1581.2019.2.002.x.

Abstract:
Artificial neural networks are commonly accepted as a very successful tool for global function approximation. For this reason, they are considered a good approach to forecasting chaotic time series in many studies. For a given time series, the Lyapunov exponent is a good parameter to characterize the series as chaotic or not. In this study, we use three different neural network architectures to test the capabilities of the neural network in forecasting time series generated from different dynamical systems. In addition to forecasting time series, the Lyapunov exponents of the studied systems are forecasted using a feedforward neural network with a single hidden layer.
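The Lyapunov-exponent criterion the authors rely on can be checked directly on a one-dimensional example. A sketch (not code from the paper): for the logistic map, the largest Lyapunov exponent is the orbit average of log|f'(x)|, and a positive value marks the series as chaotic.

```python
import math

def logistic_lyapunov(r, x0=0.2, n_transient=1000, n_iter=50_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2*x)|."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n_iter

# r = 4.0 is fully chaotic (the exact exponent is ln 2, about 0.693);
# r = 2.5 settles onto a stable fixed point, so its exponent is negative.
print(logistic_lyapunov(4.0), logistic_lyapunov(2.5))
```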
6

Bertschinger, Nils, and Thomas Natschläger. "Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks." Neural Computation 16, no. 7 (July 1, 2004): 1413–36. http://dx.doi.org/10.1162/089976604323057443.

Abstract:
Depending on the connectivity, recurrent networks of simple computational units can show very different types of dynamics, ranging from totally ordered to chaotic. We analyze how the type of dynamics (ordered or chaotic) exhibited by randomly connected networks of threshold gates driven by a time-varying input signal depends on the parameters describing the distribution of the connectivity matrix. In particular, we calculate the critical boundary in parameter space where the transition from ordered to chaotic dynamics takes place. Employing a recently developed framework for analyzing real-time computations, we show that only near the critical boundary can such networks perform complex computations on time series. Hence, this result strongly supports conjectures that dynamical systems that are capable of doing complex computational tasks should operate near the edge of chaos, that is, the transition from ordered to chaotic dynamics.
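The ordered/chaotic distinction analyzed in this paper can be probed numerically with a perturbation test. A sketch only: it uses a random tanh-rate network with the gain g as the control parameter, rather than the threshold-gate networks and connectivity statistics the authors treat analytically.

```python
import numpy as np

def divergence(g, n=300, steps=100, eps=1e-6, seed=1):
    """Final distance between two trajectories of a random recurrent
    tanh network x <- tanh(g * W @ x) started a tiny perturbation apart.
    A distance that decays toward zero indicates ordered dynamics; one
    that grows to order one indicates chaos."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # random connectivity
    x1 = rng.normal(0.0, 1.0, n)
    x2 = x1 + eps * rng.normal(0.0, 1.0, n)             # perturbed copy
    for _ in range(steps):
        x1 = np.tanh(g * (W @ x1))
        x2 = np.tanh(g * (W @ x2))
    return float(np.linalg.norm(x1 - x2))

print(divergence(0.5))   # low gain: ordered, the perturbation dies out
print(divergence(3.0))   # high gain: chaotic, the perturbation blows up
```

Sweeping g and locating where the final distance jumps from ~0 to order one gives a crude numerical estimate of the critical boundary discussed above.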
7

Wen, Tan, and Wang Yao-Nan. "Synchronization of an uncertain chaotic system via recurrent neural networks." Chinese Physics 14, no. 1 (December 23, 2004): 72–76. http://dx.doi.org/10.1088/1009-1963/14/1/015.
8

Cechin, Adelmo L., Denise R. Pechmann, and Luiz P. L. de Oliveira. "Optimizing Markovian modeling of chaotic systems with recurrent neural networks." Chaos, Solitons & Fractals 37, no. 5 (September 2008): 1317–27. http://dx.doi.org/10.1016/j.chaos.2006.10.018.
9

Ryeu, Jin Kyung, and Ho Sun Chung. "Chaotic recurrent neural networks and their application to speech recognition." Neurocomputing 13, no. 2-4 (October 1996): 281–94. http://dx.doi.org/10.1016/0925-2312(95)00093-3.
10

Wu, Xiaoying, Yuanlong Chen, Jing Tian, and Liangliang Li. "Chaotic Dynamics of Discrete Multiple-Time Delayed Neural Networks of Ring Architecture Evoked by External Inputs." International Journal of Bifurcation and Chaos 26, no. 11 (October 2016): 1650179. http://dx.doi.org/10.1142/s0218127416501790.

Abstract:
In this paper, we consider a general class of discrete multiple-time delayed recurrent neural networks with external inputs. By applying a new transformation, we transform an m-neuron network model into a parameterized map from [Formula: see text] to [Formula: see text]. A chaotic invariant set of the neural networks system is obtained by using a family of projections from [Formula: see text] onto [Formula: see text]. Furthermore, we prove that the dynamics of this neural networks system restricted to the chaotic invariant set is topologically conjugate to the dynamics of the full shift map with two symbols, which indicates that chaos occurs. Numerical simulations are presented to illustrate the theoretical outcomes.
11

Anh, Duong Tuan, and Ta Ngoc Huy Nam. "Chaotic time series prediction with deep belief networks: an empirical evaluation." Science & Technology Development Journal - Engineering and Technology 3, SI1 (December 4, 2020): SI102–SI112. http://dx.doi.org/10.32508/stdjet.v3isi1.571.

Abstract:
Chaotic time series are widespread in several real-world areas such as finance, environment, meteorology, traffic flow, and weather. A chaotic time series is considered as generated from the deterministic dynamics of a nonlinear system. A chaotic system is sensitive to initial conditions; points that are arbitrarily close initially become exponentially further apart with progressing time. Therefore, it is challenging to make accurate predictions for chaotic time series. Predictions using conventional statistical techniques, the k-nearest neighbors algorithm, Multi-Layer Perceptron (MLP) neural networks, Recurrent Neural Networks, Radial-Basis-Function (RBF) Networks, and Support Vector Machines do not give reliable results for chaotic time series. In this paper, we investigate the use of a deep learning method, the Deep Belief Network (DBN), combined with chaos theory to forecast chaotic time series. First, the chaotic time series are analyzed by calculating the largest Lyapunov exponent, reconstructing the time series by phase-space reconstruction, and determining the best embedding dimension and the best delay time. When the forecasting model is constructed, the deep belief network is used for feature learning and the neural network is used for prediction. We also compare the DBN-based method to the RBF network-based method, which is the state-of-the-art method for forecasting chaotic time series. The predictive performance of the two models is examined using mean absolute error (MAE), mean squared error (MSE), and mean absolute percentage error (MAPE). Experimental results on several synthetic and real-world chaotic datasets revealed that the DBN model is applicable to the prediction of chaotic time series, since it achieves better performance than the RBF network.
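The phase-space reconstruction step in the pipeline above is plain time-delay embedding. A minimal sketch (in practice the embedding dimension and delay would come from false-nearest-neighbour and mutual-information analyses, as the abstract implies):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Phase-space reconstruction by time-delay embedding: row i is
    (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

x = np.sin(np.linspace(0.0, 20.0, 100))   # any scalar series
emb = delay_embed(x, dim=3, tau=5)
print(emb.shape)   # (90, 3)
```

Each row of `emb` is then a training input for the forecaster (DBN, RBF network, etc.), with the next sample of the series as the target.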
12

Zhang, Jia-Shu, and Xian-Ci Xiao. "Predicting Chaotic Time Series Using Recurrent Neural Network." Chinese Physics Letters 17, no. 2 (February 1, 2000): 88–90. http://dx.doi.org/10.1088/0256-307x/17/2/004.
13

Lu, Zhao, Leang-San Shieh, Guanrong Chen, and Jagdish Chandra. "Identification and Control Of Chaotic Systems Via Recurrent High-Order Neural Networks." Intelligent Automation & Soft Computing 13, no. 4 (January 2007): 357–72. http://dx.doi.org/10.1080/10798587.2007.10642969.
14

Chandra, Rohitash, and Mengjie Zhang. "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction." Neurocomputing 86 (June 2012): 116–23. http://dx.doi.org/10.1016/j.neucom.2012.01.014.
15

Serrano-Pérez, José de Jesús, Guillermo Fernández-Anaya, Salvador Carrillo-Moreno, and Wen Yu. "New Results for Prediction of Chaotic Systems Using Deep Recurrent Neural Networks." Neural Processing Letters 53, no. 2 (March 7, 2021): 1579–96. http://dx.doi.org/10.1007/s11063-021-10466-1.
16

Soma, Ken-ichiro, Ryota Mori, Ryuichi Sato, Noriyuki Furumai, and Shigetoshi Nara. "Simultaneous Multichannel Signal Transfers via Chaos in a Recurrent Neural Network." Neural Computation 27, no. 5 (May 2015): 1083–101. http://dx.doi.org/10.1162/neco_a_00715.

Abstract:
We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which it is supposed that synchronous behavior between small groups of input and output neurons has been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the signal is successfully transferred to the corresponding output group, although no correlation is observed between the input signal and the intermediary neurons. Signal transfer is also observed when multiple signals are applied simultaneously to separate input groups belonging to different memory attractors. In this sense, simultaneous multichannel communications are realized, and the chaotic neural dynamics acts as a signal transfer medium in which the signal appears to be hidden.
17

Seifter, Jared, and James A. Reggia. "Lambda and the Edge of Chaos in Recurrent Neural Networks." Artificial Life 21, no. 1 (February 2015): 55–71. http://dx.doi.org/10.1162/artl_a_00152.

Abstract:
The idea that there is an edge of chaos, a region in the space of dynamical systems having special meaning for complex living entities, has a long history in artificial life. The significance of this region was first emphasized in cellular automata models when a single simple measure, λCA, identified it as a transitional region between order and chaos. Here we introduce a parameter λNN that is inspired by λCA but is defined for recurrent neural networks. We show through a series of systematic computational experiments that λNN generally orders the dynamical behaviors of randomly connected/weighted recurrent neural networks in the same way that λCA does for cellular automata. By extending this ordering to larger values of λNN than has typically been done with λCA and cellular automata, we find that a second edge-of-chaos region exists on the opposite side of the chaotic region. These basic results are found to hold under different assumptions about network connectivity, but vary substantially in their details. The results show that the basic concept underlying the lambda parameter can usefully be extended to other types of complex dynamical systems than just cellular automata.
18

Tino, P., and M. Koteles. "Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences." IEEE Transactions on Neural Networks 10, no. 2 (March 1999): 284–302. http://dx.doi.org/10.1109/72.750555.
19

Li, Xiaofan, Jian-an Fang, and Huiyuan Li. "Exponential Synchronization of Memristive Chaotic Recurrent Neural Networks Via Alternate Output Feedback Control." Asian Journal of Control 20, no. 1 (June 15, 2017): 469–82. http://dx.doi.org/10.1002/asjc.1562.
20

Li, Zhanying, Jun Xing, Li Bo, and Jue Wang. "Prediction of Ship Roll Motion based on Optimized Chaotic Diagonal Recurrent Neural Networks." International Journal of Multimedia and Ubiquitous Engineering 10, no. 4 (April 30, 2015): 231–42. http://dx.doi.org/10.14257/ijmue.2015.10.4.22.
21

Wang, Dingsu, Huiyue Tang, Yuan Wang, and JingShen Wu. "Beautiful chaotic patterns generated using simple untrained recurrent neural networks under harmonic excitation." Nonlinear Dynamics 100, no. 4 (June 2020): 3887–905. http://dx.doi.org/10.1007/s11071-020-05640-4.
22

LU, Z., L. SHIEH, G. CHEN, and N. COLEMAN. "Adaptive feedback linearization control of chaotic systems via recurrent high-order neural networks." Information Sciences 176, no. 16 (August 22, 2006): 2337–54. http://dx.doi.org/10.1016/j.ins.2005.08.002.
23

Perez-Padron, J., C. Posadas-Castillo, J. Paz-Perez, E. Zambrano-Serrano, and M. A. Platas-Garza. "FPGA Realization and Lyapunov–Krasovskii Analysis for a Master-Slave Synchronization Scheme Involving Chaotic Systems and Time-Delay Neural Networks." Mathematical Problems in Engineering 2021 (September 23, 2021): 1–17. http://dx.doi.org/10.1155/2021/2604874.

Abstract:
In this paper, the trajectory tracking control between a recurrent neural network with time delay and a chaotic system, together with its field programmable gate array (FPGA) implementation, is presented. The tracking error is globally asymptotically stabilized by means of a control law generated from the Lyapunov–Krasovskii and Lur'e theory. The applicability of the approach is illustrated by considering two different chaotic systems: the Liu chaotic system and the Genesio–Tesi chaotic system. The numerical results have shown the effectiveness of the obtained theoretical results. Finally, the theoretical results are implemented on an FPGA, confirming the feasibility of the synchronization scheme and showing that it is hardware realizable.
24

Quoy, Mathias, Jean-Paul Banquet, and Emmanuel Daucé. "Learning and control with chaos: From biology to robotics." Behavioral and Brain Sciences 24, no. 5 (October 2001): 824–25. http://dx.doi.org/10.1017/s0140525x01380093.

Abstract:
After critical appraisal of mathematical and biological characteristics of the model, we discuss how a classical hippocampal neural network expresses functions similar to those of the chaotic model, and then present an alternative stimulus-driven chaotic random recurrent neural network (RRNN) that learns patterns as well as sequences, and controls the navigation of a mobile robot.
25

ZHOU, ZHAN, JINLIANG WANG, ZHUJUN JING, and RUQI WANG. "COMPLEX DYNAMICAL BEHAVIORS IN DISCRETE-TIME RECURRENT NEURAL NETWORKS WITH ASYMMETRIC CONNECTION MATRIX." International Journal of Bifurcation and Chaos 16, no. 08 (August 2006): 2221–33. http://dx.doi.org/10.1142/s0218127406016021.

Abstract:
This paper investigates discrete-time recurrent neural networks and aims to extend previous works on the symmetric connection matrix to the asymmetric connection matrix. We provide sufficient conditions for the asymptotic stability of fixed points, flip and fold bifurcations, and Marotto's chaos. Besides, we state conditions for the existence of a bounded trapping region containing many fixed points, and of an attracting set contained in the bounded region together with a chaotic set. To demonstrate the theoretical results of the paper, several numerical examples are provided. The theorems in this paper apply more broadly than those in the previous works.
26

Smith, Anthony W., and David Zipser. "LEARNING SEQUENTIAL STRUCTURE WITH THE REAL-TIME RECURRENT LEARNING ALGORITHM." International Journal of Neural Systems 01, no. 02 (January 1989): 125–31. http://dx.doi.org/10.1142/s0129065789000037.

Abstract:
Recurrent connections in neural networks potentially allow information about events occurring in the past to be preserved and used in current computations. How effectively this potential is realized depends on the power of the learning algorithm used. As an example of a task requiring recurrency, Servan-Schreiber, Cleeremans, and McClelland have applied a simple recurrent learning algorithm to the task of recognizing finite-state grammars of increasing difficulty. These nets showed considerable power and were able to learn fairly complex grammars by emulating the state machines that produced them. However, there was a limit to the difficulty of the grammars that could be learned. We have applied a more powerful recurrent learning procedure, called real-time recurrent learning (RTRL), to some of the same problems studied by Servan-Schreiber, Cleeremans, and McClelland. The RTRL algorithm solved more difficult forms of the task than the simple recurrent networks. The internal representations developed by RTRL networks revealed that they learn a rich set of internal states that represent more about the past than is required by the underlying grammar. The dynamics of the networks are determined by the state structure and are not chaotic.
27

Ríos-Rivera, Daniel, Alma Y. Alanis, and Edgar N. Sanchez. "Neural-Impulsive Pinning Control for Complex Networks Based on V-Stability." Mathematics 8, no. 9 (August 19, 2020): 1388. http://dx.doi.org/10.3390/math8091388.

Abstract:
In this work, a neural impulsive pinning controller for a twenty-node dynamical discrete complex network is presented. The node dynamics of the network are all different types of discrete versions of chaotic attractors of three dimensions. Using the V-stability method, we propose a criterion for selecting nodes to design pinning control, in which only a small fraction of the nodes is locally controlled in order to stabilize the network states at zero. A discrete recurrent high order neural network (RHONN) trained with extended Kalman filter (EKF) is used to identify the dynamics of controlled nodes and synthesize the control law.
28

Li, Qinghai, and Rui-Chang Lin. "A New Approach for Chaotic Time Series Prediction Using Recurrent Neural Network." Mathematical Problems in Engineering 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/3542898.

Abstract:
A self-constructing fuzzy neural network (SCFNN) has been successfully used for chaotic time series prediction in the literature. In this paper, we propose the strategy of adding a recurrent path in each node of the hidden layer of the SCFNN, resulting in a self-constructing recurrent fuzzy neural network (SCRFNN). This novel network does not increase complexity in fuzzy inference or the learning process. Specifically, the structure learning is based on partition of the input space, and the parameter learning is based on the supervised gradient descent method using a delta adaptation law. This novel network can also be applied to chaotic time series prediction, including the Logistic and Henon time series. More significantly, it features more rapid convergence and higher prediction accuracy.
29

Lee, Seungwon, Sung Hwan Won, Iickho Song, Seokho Yoon, and Sun Yong Kim. "On the Identification and Generation of Discrete-Time Chaotic Systems with Recurrent Neural Networks." Journal of Electrical Engineering & Technology 14, no. 4 (February 5, 2019): 1699–706. http://dx.doi.org/10.1007/s42835-019-00103-2.
30

Li, Xiaofan, Jian-an Fang, and Huiyuan Li. "Exponential adaptive synchronization of stochastic memristive chaotic recurrent neural networks with time-varying delays." Neurocomputing 267 (December 2017): 396–405. http://dx.doi.org/10.1016/j.neucom.2017.06.049.
31

Cui, Baotong, and Xuyang Lou. "Synchronization of chaotic recurrent neural networks with time-varying delays using nonlinear feedback control." Chaos, Solitons & Fractals 39, no. 1 (January 2009): 288–94. http://dx.doi.org/10.1016/j.chaos.2007.01.100.
32

Yan, Zhilian, Yamin Liu, Xia Huang, Jianping Zhou, and Hao Shen. "Mixed ℋ∞ and ℒ2 – ℒ∞ Anti-synchronization Control for Chaotic Delayed Recurrent Neural Networks." International Journal of Control, Automation and Systems 17, no. 12 (November 6, 2019): 3158–69. http://dx.doi.org/10.1007/s12555-019-0263-6.
33

Alomar, Miquel L., Vincent Canals, Nicolas Perez-Mora, Víctor Martínez-Moll, and Josep L. Rosselló. "FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting." Computational Intelligence and Neuroscience 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/3917892.

Abstract:
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
34

Molter, Colin, Utku Salihoglu, and Hugues Bersini. "The Road to Chaos by Time-Asymmetric Hebbian Learning in Recurrent Neural Networks." Neural Computation 19, no. 1 (January 2007): 80–110. http://dx.doi.org/10.1162/neco.2007.19.1.80.

Abstract:
This letter aims at studying the impact of iterative Hebbian learning algorithms on the recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information to be stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides “on its own” the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge enhancements in storing capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more “respectful” of the network intrinsic dynamics, maintains much more structure in the obtained chaos. It is still possible to observe the traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as “frustrated chaos.”
35

Vlachas, Pantelis R., Wonmin Byeon, Zhong Y. Wan, Themistoklis P. Sapsis, and Petros Koumoutsakos. "Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, no. 2213 (May 2018): 20170844. http://dx.doi.org/10.1098/rspa.2017.0844.

Abstract:
We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto–Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM–LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
36

Zhang, Lei. "Chaotic System Design Based on Recurrent Artificial Neural Network for the Simulation of EEG Time Series." International Journal of Cognitive Informatics and Natural Intelligence 13, no. 1 (January 2019): 25–35. http://dx.doi.org/10.4018/ijcini.2019010103.

Abstract:
Electroencephalogram (EEG) signals captured from brain activities demonstrate chaotic features, and can be simulated by nonlinear dynamic time series outputs of chaotic systems. This article presents the research work of chaotic system generator design based on an artificial neural network (ANN), for studying the chaotic features of human brain dynamics. The ANN training performances of the Nonlinear Auto-Regressive (NAR) model are evaluated for the generation and prediction of chaotic system time series outputs, based on varying the ANN architecture and the precision of the generated training data. The NAR model is trained in open-loop form with 1,000 training samples generated using the Lorenz system equations and the forward Euler method. The closed-loop NAR model is used for the generation and prediction of Lorenz chaotic time series outputs. The training results show that better training performance can be achieved by increasing the number of feedback delays and the number of hidden neurons, at the cost of increasing the computational load.
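Generating the kind of training data described here, Lorenz-system samples via the forward Euler method, takes only a few lines. A sketch: the step size and initial state are illustrative assumptions, not values taken from the article.

```python
import numpy as np

def lorenz_euler(n_steps, dt=0.01, state=(1.0, 1.0, 1.0),
                 sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-system trajectory integrated with the forward Euler method."""
    x, y, z = state
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = sigma * (y - x)           # Lorenz equations
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = (x, y, z)
    return out

samples = lorenz_euler(1000)   # 1,000 training samples, as in the article
print(samples.shape)           # (1000, 3)
```

Each row would serve as one training sample for the NAR model; a higher-order integrator (e.g. Runge–Kutta) would give more accurate data at the same step size.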
37

Meade, Andrew J., and Rafael Moreno. "Recurrent artificial neural network simulation of a chaotic system without training." Journal of Guidance, Control, and Dynamics 18, no. 6 (November 1995): 1463–66. http://dx.doi.org/10.2514/3.21570.
38

Wang, Libiao, Zhuo Meng, Yize Sun, Lei Guo, and Mingxing Zhou. "Design and analysis of a novel chaotic diagonal recurrent neural network." Communications in Nonlinear Science and Numerical Simulation 26, no. 1-3 (September 2015): 11–23. http://dx.doi.org/10.1016/j.cnsns.2015.01.021.
39

Suemitsu, Yoshikazu, and Shigetoshi Nara. "A Solution for Two-Dimensional Mazes with Use of Chaotic Dynamics in a Recurrent Neural Network Model." Neural Computation 16, no. 9 (September 1, 2004): 1943–57. http://dx.doi.org/10.1162/0899766041336440.

Abstract:
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from the position at t to t + 1 by a simply defined motion function calculated from firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

NARA, SHIGETOSHI, PETER DAVIS, MASAYOSHI KAWACHI and HIROO TOTSUJI. "CHAOTIC MEMORY DYNAMICS IN A RECURRENT NEURAL NETWORK WITH CYCLE MEMORIES EMBEDDED BY PSEUDO-INVERSE METHOD". International Journal of Bifurcation and Chaos 05, No. 04 (August 1995): 1205–12. http://dx.doi.org/10.1142/s0218127495000867.

Full text of the source
Annotation:
It is shown that hierarchical bifurcation of chaotic intermittency among memories can be induced by reducing neural connectivity when sequences of similar patterns are stored in a recurrent neural network using the pseudo-inverse method. This chaos is potentially useful for memory search and synthesis.
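The pseudo-inverse storage rule referenced in this entry can be sketched as follows. This is a generic illustration of embedding a pattern cycle in a recurrent weight matrix, not the paper's exact construction; the pattern dimension, cycle length, and sign-based recall step are illustrative assumptions.

```python
import numpy as np

def embed_cycle(patterns):
    """Build recurrent weights that map each stored pattern onto the next one
    in the cycle, using the pseudo-inverse rule W = F @ pinv(P)."""
    P = np.column_stack(patterns)                      # patterns as columns
    F = np.column_stack(patterns[1:] + patterns[:1])   # cyclically shifted successors
    return F @ np.linalg.pinv(P)

rng = np.random.default_rng(1)
pats = [np.sign(rng.normal(size=50)) for _ in range(4)]  # four random +/-1 patterns
W = embed_cycle(pats)
recalled = np.sign(W @ pats[0])  # one synchronous update recalls the next pattern
```

Because the pseudo-inverse gives W P = F exactly when the patterns are linearly independent, each update moves the state one step along the stored cycle; the intermittent chaos studied in the paper appears when this connectivity is subsequently diluted.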
APA, Harvard, Vancouver, ISO and other citation styles
41

WANG, JINLIANG, and ZHUJUN JING. "TOPOLOGICAL STRUCTURE OF CHAOS IN DISCRETE-TIME NEURAL NETWORKS WITH GENERALIZED INPUT–OUTPUT FUNCTION". International Journal of Bifurcation and Chaos 11, No. 07 (July 2001): 1835–51. http://dx.doi.org/10.1142/s0218127401003097.

Full text of the source
Annotation:
By analogy with [Chen & Aihara, 1995, 1997, 1999], we theoretically investigate the topologically chaotic structure, attracting set, and global searching ability of discrete-time recurrent neural networks of the form [Formula: see text], where the input–output function is defined as a generalized sigmoid function, such as vi = tanh (μiui), [Formula: see text] and [Formula: see text], etc. We first derive sufficient conditions for the existence of a fixed point and then prove that this fixed point eventually evolves into a snap-back repeller, which generates a chaotic structure when certain conditions are satisfied. Furthermore, we prove that there exists an attracting set that includes not only the homoclinic orbit but also the globally unstable set of fixed points, thereby ensuring that the neural networks have global searching ability. Numerical simulations are provided to demonstrate the theoretical results. The results in this paper can be viewed as an extension of the works of [Chen & Aihara, 1997, 1999] and others [Gopalsamy & He, 1994; Wang & Smith, 1998].
APA, Harvard, Vancouver, ISO and other citation styles
42

Jacobsson, Henrik. "The Crystallizing Substochastic Sequential Machine Extractor: CrySSMEx". Neural Computation 18, No. 9 (September 2006): 2211–55. http://dx.doi.org/10.1162/neco.2006.18.9.2211.

Full text of the source
Annotation:
This letter presents an algorithm, CrySSMEx, for extracting minimal finite state machine descriptions of dynamic systems such as recurrent neural networks. Unlike previous algorithms, CrySSMEx is parameter-free and deterministic, and it efficiently generates a series of increasingly refined models. A novel finite stochastic model of dynamic systems and a novel vector quantization function have been developed to take into account the state-space dynamics of the system. The experiments show that (1) extraction from systems that can be described as regular grammars is trivial, (2) extraction from high-dimensional systems is feasible, and (3) extraction of approximative models from chaotic systems is possible. The results are promising, and an analysis of shortcomings suggests some possible further improvements. Some largely overlooked connections between the field of rule extraction from recurrent neural networks and other fields are also identified.
APA, Harvard, Vancouver, ISO and other citation styles
43

Tran, Trang Thi Kieu, Sayed M. Bateni, Seo Jin Ki and Hamidreza Vosoughifar. "A Review of Neural Networks for Air Temperature Forecasting". Water 13, No. 9 (May 4, 2021): 1294. http://dx.doi.org/10.3390/w13091294.

Full text of the source
Annotation:
The accurate forecast of air temperature plays an important role in water resources management, land–atmosphere interaction, and agriculture. However, it is difficult to accurately predict air temperature due to its non-linear and chaotic nature. Several deep learning techniques have been proposed over the last few decades to forecast air temperature. This study provides a comprehensive review of artificial neural network (ANN)-based approaches (such as recurrent neural network (RNN), long short-term memory (LSTM), etc.), which were used to forecast air temperature. The focus is on the works during 2005–2020. The review shows that the neural network models can be employed as promising tools to forecast air temperature. Although the ANN-based approaches have been utilized widely to predict air temperature due to their fast computing speed and ability to deal with complex problems, no consensus yet exists on the best existing method. Additionally, it is found that the ANN methods are mainly viable for short-term air temperature forecasting. Finally, some future directions and recommendations are presented.
APA, Harvard, Vancouver, ISO and other citation styles
44

Büsing, Lars, Benjamin Schrauwen and Robert Legenstein. "Connectivity, Dynamics, and Memory in Reservoir Computing with Binary and Analog Neurons". Neural Computation 22, No. 5 (May 2010): 1272–311. http://dx.doi.org/10.1162/neco.2009.01-09-947.

Full text of the source
Annotation:
Reservoir computing (RC) systems are powerful models for online computations on input sequences. They consist of a memoryless readout neuron that is trained on top of a randomly connected recurrent neural network. RC systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuits. Previous work indicated a fundamental difference in the behavior of these two implementations of the RC idea. The performance of an RC system built from binary neurons seems to depend strongly on the network connectivity structure. In networks of analog neurons, such clear dependency has not been observed. In this letter, we address this apparent dichotomy by investigating the influence of the network connectivity (parameterized by the neuron in-degree) on a family of network models that interpolates between analog and binary networks. Our analyses are based on a novel estimation of the Lyapunov exponent of the network dynamics with the help of branching process theory, rank measures that estimate the kernel quality and generalization capabilities of recurrent networks, and a novel mean field predictor for computational performance. These analyses reveal that the phase transition between ordered and chaotic network behavior of binary circuits qualitatively differs from the one in analog circuits, leading to differences in the integration of information over short and long timescales. This explains the decreased computational performance observed in binary circuits that are densely connected. The mean field predictor is also used to bound the memory function of recurrent circuits of binary neurons.
APA, Harvard, Vancouver, ISO and other citation styles
45

Badjate, Sanjay L., and Sanjay V. Dudul. "Novel FTLRNN with Gamma Memory for Short-Term and Long-Term Predictions of Chaotic Time Series". Applied Computational Intelligence and Soft Computing 2009 (2009): 1–21. http://dx.doi.org/10.1155/2009/364532.

Full text of the source
Annotation:
Multistep-ahead prediction of a chaotic time series is a difficult task that has attracted increasing interest in recent years. This work develops nonlinear neural network models for multistep chaotic time series prediction. The literature offers a wide range of approaches, but their success depends on the predictive performance of the individual methods. The most popular neural models are based on statistical and traditional feedforward neural networks, which may present disadvantages when long-term prediction is required. In this paper, a focused time-lagged recurrent neural network (FTLRNN) model with gamma memory is developed for different prediction horizons. This predictor performs remarkably well for short-term as well as medium-term predictions. For chaotic time series generated from differential equations, such as Mackey-Glass and Duffing, the FTLRNN-based predictor performs consistently well for prediction depths ranging from short term to long term, with only slight deterioration after k is increased beyond 50. For real-world, highly complex, and nonstationary time series such as sunspots and laser data, the proposed predictor performs reasonably for short-term and medium-term predictions, but its prediction ability drops for long-term-ahead prediction. Still, these are the best prediction results achievable given that the time series are nonstationary; no other NN configuration matches the performance of the FTLRNN model. The authors evaluated the FTLRNN model on the dynamic behavior of the chaotic Mackey-Glass and Duffing time series and on two real-world chaotic time series, monthly sunspots and laser data.
A static multilayer perceptron (MLP) model is also attempted and compared against the proposed model on performance measures such as mean squared error (MSE), normalized mean squared error (NMSE), and correlation coefficient (r). The standard back-propagation algorithm with a momentum term has been used for both models.
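The Mackey-Glass benchmark used in this entry is generated from a delay differential equation; a minimal sketch, assuming simple Euler integration. The parameter values (beta = 0.2, gamma = 0.1, n = 10, tau = 17) are the commonly used chaotic setting, not necessarily the paper's exact configuration.

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Generate a Mackey-Glass time series by Euler integration of the delay
    differential equation dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)."""
    delay = int(tau / dt)
    x = np.full(n_steps + delay, x0)  # constant history as initial condition
    for t in range(delay, n_steps + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[delay:]

series = mackey_glass(2000)  # tau = 17 puts the system in its chaotic regime
```

A multistep predictor such as the FTLRNN would then be trained to map a window of past samples of `series` to a value k steps ahead.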
APA, Harvard, Vancouver, ISO and other citation styles
46

Wang, Xiao Sheng, Ying Li and Yan Hui Guo. "Embedded Differential Evolution Algorithm for Recurrent Fuzzy Neural Network Controller Optimization". Applied Mechanics and Materials 321-324 (June 2013): 2141–45. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.2141.

Full text of the source
Annotation:
A chaos concise differential evolution algorithm (CcDE) is proposed for embedded controllers with limited memory; it introduces a chaotic local search on top of the basic differential evolution algorithm to improve exploration and prevent premature convergence. By using a virtual population and Gaussian sampling, the CcDE stays simple and reduces memory requirements at run time. Experimental simulation on optimizing the parameters of a recurrent fuzzy neural network shows that the proposed CcDE obtains better performance than other concise algorithms.
APA, Harvard, Vancouver, ISO and other citation styles
47

Han, M., J. Xi, S. Xu and F. L. Yin. "Prediction of Chaotic Time Series Based on the Recurrent Predictor Neural Network". IEEE Transactions on Signal Processing 52, No. 12 (December 2004): 3409–16. http://dx.doi.org/10.1109/tsp.2004.837418.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
48

DOYON, B., B. CESSAC, M. QUOY and M. SAMUELIDES. "CONTROL OF THE TRANSITION TO CHAOS IN NEURAL NETWORKS WITH RANDOM CONNECTIVITY". International Journal of Bifurcation and Chaos 03, No. 02 (April 1993): 279–91. http://dx.doi.org/10.1142/s0218127493000222.

Full text of the source
Annotation:
The occurrence of chaos in recurrent neural networks is supposed to depend on the architecture and on the synaptic coupling strength. It is studied here for a randomly diluted architecture. We produce a bifurcation parameter independent of the connectivity that allows a sustained activity and the occurrence of chaos when reaching a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected infinite sized networks. Moreover the route towards chaos is numerically checked to be a quasiperiodic one, whatever the type of the first bifurcation is. In the discussion, we connect these results to some recent theoretical results about highly diluted networks. Hints are provided for further investigations to elicit the role of chaotic dynamics in the cognitive processes of the brain.
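The transition to chaos as a coupling parameter crosses a critical value can be probed numerically by estimating the largest Lyapunov exponent from the growth of a small perturbation. The sketch below uses a discrete-time random network with a tanh update; the network size, gain values, and update rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

def largest_lyapunov(g, n=200, steps=500, eps=1e-8, seed=0):
    """Estimate the largest Lyapunov exponent of the random recurrent map
    x(t+1) = tanh(g * J x(t)) by tracking the growth of a tiny perturbation,
    renormalised to size eps after every step (Benettin's method)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # random couplings, variance 1/n
    x = rng.normal(0.0, 1.0, n)
    d = rng.normal(0.0, 1.0, n)
    d *= eps / np.linalg.norm(d)
    log_growth = 0.0
    for _ in range(steps):
        x_next = np.tanh(g * (J @ x))
        y_next = np.tanh(g * (J @ (x + d)))
        delta = y_next - x_next
        norm = np.linalg.norm(delta)
        log_growth += np.log(norm / eps)
        d = delta * (eps / norm)  # renormalise the perturbation
        x = x_next
    return log_growth / steps

lyap_ordered = largest_lyapunov(0.5)  # weak coupling: perturbations shrink
lyap_chaotic = largest_lyapunov(1.5)  # strong coupling: perturbations grow
```

A negative exponent below the critical gain and a positive one above it is the numerical signature of the transition the paper analyses.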
APA, Harvard, Vancouver, ISO and other citation styles
49

Faranda, Davide, Mathieu Vrac, Pascal Yiou, Flavio Maria Emanuele Pons, Adnane Hamid, Giulia Carella, Cedric Ngoungue Langue, Soulivanh Thao and Valerie Gautard. "Enhancing geophysical flow machine learning performance via scale separation". Nonlinear Processes in Geophysics 28, No. 3 (September 10, 2021): 423–43. http://dx.doi.org/10.5194/npg-28-423-2021.

Full text of the source
Annotation:
Abstract. Recent advances in statistical and machine learning have opened the possibility of forecasting the behaviour of chaotic systems using recurrent neural networks. In this article we investigate the applicability of such a framework to geophysical flows, known to involve multiple scales in length, time and energy and to feature intermittency. We show that both multiscale dynamics and intermittency introduce severe limitations to the applicability of recurrent neural networks, both for short-term forecasts as well as for the reconstruction of the underlying attractor. We suggest that possible strategies to overcome such limitations should be based on separating the smooth large-scale dynamics from the intermittent/small-scale features. We test these ideas on global sea-level pressure data for the past 40 years, a proxy of the atmospheric circulation dynamics. Better short- and long-term forecasts of sea-level pressure data can be obtained with an optimal choice of spatial coarse graining and time filtering.
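The scale-separation strategy suggested here, splitting a signal into a smooth large-scale part and a small-scale residual, can be illustrated with a simple moving-average filter. The window length and test signal are illustrative assumptions; the paper applies spatial coarse graining and time filtering to sea-level pressure fields.

```python
import numpy as np

def scale_split(x, window=30):
    """Split a 1-D signal into a smooth large-scale part (centred moving
    average) and the small-scale residual; the two parts sum back to x."""
    kernel = np.ones(window) / window
    large = np.convolve(x, kernel, mode="same")
    return large, x - large

t = np.linspace(0.0, 20.0, 2000)
signal = np.sin(t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
large, small = scale_split(signal)  # slow wave vs. noisy fast fluctuations
```

A recurrent network would then be trained on `large`, where the dynamics are smooth, while the intermittent `small` component is modelled or parameterised separately.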
APA, Harvard, Vancouver, ISO and other citation styles
50

Velichko, Andrei. "Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map". Electronics 9, No. 9 (September 2, 2020): 1432. http://dx.doi.org/10.3390/electronics9091432.

Full text of the source
Annotation:
This study presents a neural network which uses filters based on logistic mapping (LogNNet). LogNNet has a feedforward network structure, but possesses the properties of reservoir neural networks. The input weight matrix, set by a recurrent logistic mapping, forms the kernels that transform the input space to the higher-dimensional feature space. The most effective recognition of a handwritten digit from MNIST-10 occurs under chaotic behavior of the logistic map. The correlation of classification accuracy with the value of the Lyapunov exponent was obtained. An advantage of LogNNet implementation on IoT devices is the significant savings in memory used. At the same time, LogNNet has a simple algorithm and performance indicators comparable to those of the best resource-efficient algorithms available at the moment. The presented network architecture uses an array of weights with a total memory size from 1 to 29 kB and achieves a classification accuracy of 80.3–96.3%. Memory is saved due to the processor, which sequentially calculates the required weight coefficients during the network operation using the analytical equation of the logistic mapping. The proposed neural network can be used in implementations of artificial intelligence based on constrained devices with limited memory, which are integral blocks for creating ambient intelligence in modern IoT environments. From a research perspective, LogNNet can contribute to the understanding of the fundamental issues of the influence of chaos on the behavior of reservoir-type neural networks.
APA, Harvard, Vancouver, ISO and other citation styles