Selection of scientific literature on the topic "Chaotic Recurrent Neural Networks"

Create a citation in APA, MLA, Chicago, Harvard, and other styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Chaotic Recurrent Neural Networks".

Next to every entry in the bibliography you can use the "Add to bibliography" option. If you use it, the bibliographic reference for the chosen work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Journal articles on the topic "Chaotic Recurrent Neural Networks"

1

Wang, Jeff, and Raymond Lee. "Chaotic Recurrent Neural Networks for Financial Forecast". American Journal of Neural Networks and Applications 7, no. 1 (2021): 7. http://dx.doi.org/10.11648/j.ajnna.20210701.12.

2

Marković, Dimitrije, and Claudius Gros. "Intrinsic Adaptation in Autonomous Recurrent Neural Networks". Neural Computation 24, no. 2 (February 2012): 523–40. http://dx.doi.org/10.1162/neco_a_00232.

Abstract:
A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaption considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
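The ordered-versus-chaotic default regimes described above can be probed numerically. Below is a minimal NumPy sketch, not Marković and Gros's adaptive model: the gain is a hand-set parameter rather than being adapted by entropy maximization, and the regime of a random rate network is diagnosed by whether two nearby trajectories converge or diverge.

```python
import numpy as np

def simulate(gain, steps=500, n=100, seed=0, eps=1e-6):
    """Run two nearby trajectories of a random tanh rate network and
    return their final separation, a crude chaos diagnostic."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))  # random recurrent weights
    x = rng.normal(0, 1, n)
    y = x + eps * rng.normal(0, 1, n)            # perturbed copy
    for _ in range(steps):
        x = np.tanh(gain * W @ x)
        y = np.tanh(gain * W @ y)
    return np.linalg.norm(x - y)

# Below the critical gain the perturbation dies out (ordered regime);
# above it the two trajectories decorrelate (chaotic regime).
ordered = simulate(gain=0.5)
chaotic = simulate(gain=3.0)
```

The gain here plays the role of the intrinsic parameters that the paper's nonsynaptic plasticity would tune on its own.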
3

Wang, Xing-Yuan, and Yi Zhang. "Chaotic diagonal recurrent neural network". Chinese Physics B 21, no. 3 (March 2012): 038703. http://dx.doi.org/10.1088/1674-1056/21/3/038703.

4

Dong, En Zeng, Yang Du, Cheng Cheng Li, and Zai Ping Chen. "Image Encryption Scheme Based on Dual Hyper-Chaotic Recurrent Neural Networks". Key Engineering Materials 474-476 (April 2011): 599–604. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.599.

Abstract:
Based on two hyper-chaotic recurrent neural networks, a new image encryption scheme is presented in this paper. In the encryption scheme, the shuffling matrix is generated by a Hopfield neural network and is used to shuffle the pixel locations; the diffusing matrix is generated by a cellular neural network and is used to diffuse the pixel grey values by an XOR operation. Finally, the effectiveness of the encryption scheme is verified through numerical simulation and security analysis. Due to the complex dynamical behavior of the hyper-chaotic systems, the encryption scheme has the advantages of a large secret key space and high security, and can effectively resist brute-force and statistical attacks.
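The shuffle-then-diffuse structure described above is easy to sketch. In this hedged stand-in, a seeded PRNG replaces the paper's hyper-chaotic Hopfield and cellular networks as the source of the shuffling permutation and the XOR keystream; only the two-stage structure is the same.

```python
import numpy as np

def encrypt(img, key):
    """Shuffle pixel positions, then XOR with a keystream (both derived
    here from a seeded PRNG standing in for the chaotic generators)."""
    rng = np.random.default_rng(key)
    flat = img.ravel()
    perm = rng.permutation(flat.size)                         # shuffling stage
    stream = rng.integers(0, 256, flat.size, dtype=np.uint8)  # diffusion stage
    return (flat[perm] ^ stream).reshape(img.shape), perm, stream

def decrypt(cipher, perm, stream):
    flat = cipher.ravel() ^ stream   # undo the XOR diffusion
    out = np.empty_like(flat)
    out[perm] = flat                 # invert the position shuffle
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm, stream = encrypt(img, key=42)
restored = decrypt(cipher, perm, stream)
```

In the paper the security rests on the chaotic generators; with a plain PRNG the sketch only demonstrates the reversible pipeline, not a secure cipher.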
5

Kandıran, Engin, and Avadis Hacınlıyan. "Comparison of Feedforward and Recurrent Neural Network in Forecasting Chaotic Dynamical System". AJIT-e Online Academic Journal of Information Technology 10, no. 37 (April 1, 2019): 31–44. http://dx.doi.org/10.5824/1309-1581.2019.2.002.x.

Abstract:
Artificial neural networks are commonly accepted as a very successful tool for global function approximation. For this reason, many studies consider them a good approach to forecasting chaotic time series. For a given time series, the Lyapunov exponent is a good parameter for characterizing the series as chaotic or not. In this study, we use three different neural network architectures to test the capabilities of neural networks in forecasting time series generated from different dynamical systems. In addition to forecasting the time series, the Lyapunov exponents of the studied systems are forecasted using a feedforward neural network with a single hidden layer.
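The Lyapunov exponent mentioned above has a standard one-dimensional illustration. The sketch below (generic textbook material, not the paper's neural estimator) computes it for the logistic map as the orbit average of log|f'(x)|; a positive value marks chaos, a negative one a stable cycle.

```python
import numpy as np

def lyapunov_logistic(r, n=50000, x0=0.1234, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

chaotic = lyapunov_logistic(4.0)   # theory predicts ln 2 for r = 4
periodic = lyapunov_logistic(3.2)  # stable 2-cycle: negative exponent
```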
6

Bertschinger, Nils, and Thomas Natschläger. "Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks". Neural Computation 16, no. 7 (July 1, 2004): 1413–36. http://dx.doi.org/10.1162/089976604323057443.

Abstract:
Depending on the connectivity, recurrent networks of simple computational units can show very different types of dynamics, ranging from totally ordered to chaotic. We analyze how the type of dynamics (ordered or chaotic) exhibited by randomly connected networks of threshold gates driven by a time-varying input signal depends on the parameters describing the distribution of the connectivity matrix. In particular, we calculate the critical boundary in parameter space where the transition from ordered to chaotic dynamics takes place. Employing a recently developed framework for analyzing real-time computations, we show that only near the critical boundary can such networks perform complex computations on time series. Hence, this result strongly supports conjectures that dynamical systems that are capable of doing complex computational tasks should operate near the edge of chaos, that is, the transition from ordered to chaotic dynamics.
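Bertschinger and Natschläger locate the order-chaos boundary analytically for input-driven threshold-gate networks. The crude damage-spreading experiment below is an illustrative stand-in, not their calculation: it uses an autonomous dense network with the bias (rather than the weight statistics) as the control parameter, and asks whether a one-bit perturbation dies out or spreads.

```python
import numpy as np

def damage_spread(bias, n=200, steps=50, seed=1):
    """Flip one unit of a random threshold-gate network, iterate both
    copies, and return the final Hamming distance: near zero in the
    ordered phase, macroscopic in the chaotic phase."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # random couplings
    s = rng.choice([-1, 1], n).astype(float)
    t = s.copy()
    t[0] = -t[0]                                # one-bit perturbation
    for _ in range(steps):
        s = np.where(W @ s + bias >= 0, 1.0, -1.0)
        t = np.where(W @ t + bias >= 0, 1.0, -1.0)
    return np.mean(s != t)

ordered = damage_spread(bias=4.0)   # strong bias freezes the units
chaotic = damage_spread(bias=0.0)   # zero bias: the damage spreads
```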
7

Wen, Tan, and Wang Yao-Nan. "Synchronization of an uncertain chaotic system via recurrent neural networks". Chinese Physics 14, no. 1 (December 23, 2004): 72–76. http://dx.doi.org/10.1088/1009-1963/14/1/015.

8

Cechin, Adelmo L., Denise R. Pechmann, and Luiz P. L. de Oliveira. "Optimizing Markovian modeling of chaotic systems with recurrent neural networks". Chaos, Solitons & Fractals 37, no. 5 (September 2008): 1317–27. http://dx.doi.org/10.1016/j.chaos.2006.10.018.

9

Ryeu, Jin Kyung, and Ho Sun Chung. "Chaotic recurrent neural networks and their application to speech recognition". Neurocomputing 13, no. 2-4 (October 1996): 281–94. http://dx.doi.org/10.1016/0925-2312(95)00093-3.

10

Wu, Xiaoying, Yuanlong Chen, Jing Tian, and Liangliang Li. "Chaotic Dynamics of Discrete Multiple-Time Delayed Neural Networks of Ring Architecture Evoked by External Inputs". International Journal of Bifurcation and Chaos 26, no. 11 (October 2016): 1650179. http://dx.doi.org/10.1142/s0218127416501790.

Abstract:
In this paper, we consider a general class of discrete multiple-time delayed recurrent neural networks with external inputs. By applying a new transformation, we transform an m-neuron network model into a parameterized map from [Formula: see text] to [Formula: see text]. A chaotic invariant set of the neural networks system is obtained by using a family of projections from [Formula: see text] onto [Formula: see text]. Furthermore, we prove that the dynamics of this neural networks system restricted to the chaotic invariant set is topologically conjugate to the dynamics of the full shift map with two symbols, which indicates that chaos occurs. Numerical simulations are presented to illustrate the theoretical outcomes.
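The "topologically conjugate to the full shift with two symbols" conclusion has a textbook miniature: the doubling map on [0, 1), whose symbolic itineraries are exactly binary expansions, so applying the map shifts the symbol sequence. This is the standard illustration of the notion, not the paper's delayed-network construction.

```python
def itinerary(x, n):
    """Symbolic itinerary of the doubling map x -> 2x (mod 1):
    each symbol records which half of [0, 1) the orbit visits."""
    syms = []
    for _ in range(n):
        syms.append(int(x >= 0.5))
        x = (2 * x) % 1.0
    return syms

# The itinerary of x is its binary expansion, so one application of the
# map shifts the sequence by one place: conjugacy to the full shift.
seq = itinerary(0.625, 3)                 # 0.625 = 0.101 in binary
shifted = itinerary((2 * 0.625) % 1.0, 2)
```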

Dissertations on the topic "Chaotic Recurrent Neural Networks"

1

Molter, Colin. "Storing information through complex dynamics in recurrent neural networks". Doctoral thesis, Universite Libre de Bruxelles, 2005. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211039.

Abstract:
The neural network computer simulations presented here rest on a set of assumptions that have been expressed over the last twenty years in the fields of information processing, neurophysiology, and cognitive science. First, neural networks and their dynamical behavior in terms of attractors are the natural way the brain encodes information. Any information item to be stored in the neural net should be coded in one way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the net so as to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The last assumption is the presence of chaos and the benefit gained from its presence. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously wanders around these unstable regimes, rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to any incoming stimulus.

In this thesis, it is shown experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos appears to be not the cause but the consequence of the learning; however, it is a helpful consequence that widens the net's encoding capacity. To learn the information to be stored, an unsupervised Hebbian learning algorithm is introduced. By leaving unprescribed the semantics of the attractors to be associated with the feeding data, promising results have been obtained in terms of storage capacity.
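Molter stores items in cyclic attractors; the simplest runnable relative of Hebbian attractor storage is a classical Hopfield network, where the same local outer-product rule stores fixed-point attractors. This is a deliberately simplified stand-in, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 64, 3
patterns = rng.choice([-1, 1], (p, n)).astype(float)

# Hebbian outer-product rule: each stored pattern digs its own basin.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    """Iterate the deterministic sign dynamics toward an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

noisy = patterns[0].copy()
noisy[rng.choice(n, 5, replace=False)] *= -1   # corrupt 5 of 64 bits
overlap = np.mean(recall(noisy) == patterns[0])
```

At this low loading (3 patterns in 64 units) the corrupted cue should fall back into the stored pattern's basin.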
Doctorate in applied sciences

2

Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.

Abstract:
Many living organisms can reliably execute complex behaviors and cognitive processes. In many cases, such tasks are generated in the absence of an ongoing external input that could drive the activity of the underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist, so there have to be patterns of activity in the responsible brain areas that are endogenously generated every time an individual performs this action. Whereas how such a neural code is transformed into the target motor sequence is a question of its own, its origin is perhaps even more puzzling. Most models of cortical and sub-cortical circuits suggest that many of their neural populations are chaotic: very small amounts of noise, such as an additional action potential in one neuron of a network, can lead to completely different patterns of activity. Reservoir computing is one of the first frameworks that provided an efficient solution for biologically relevant neural networks to learn complex temporal tasks in the presence of chaos. We showed that although reservoirs (i.e., recurrent neural networks) are robust to noise, they are extremely sensitive to some forms of structural perturbation, such as removing one neuron out of thousands. We proposed an alternative to these models in which the source of autonomous activity no longer originates in the reservoir but in a set of oscillating networks projecting to it. In our simulations, we show that this solution produces rich patterns of activity and leads to networks that are robust to both noise and structural perturbations. The model can learn a wide variety of temporal tasks such as interval timing, motor control, speech production, and spatial navigation.
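The oscillator-driven reservoir idea can be caricatured in a few lines of NumPy: a fixed random rate network (not the thesis's spiking model; a hedged sketch of the general setup) receives two sinusoidal "oscillator" inputs, and only a linear readout is trained, by ridge regression, to reproduce a slower target pattern.

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 300, 2000
W = rng.normal(0, 1, (n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state scaling
w_in = rng.normal(0, 1, (n, 2))

t = np.arange(T)
drive = np.stack([np.sin(2 * np.pi * t / 25),     # "oscillator" inputs
                  np.sin(2 * np.pi * t / 40)], axis=1)
target = np.sin(2 * np.pi * t / 100)              # slower pattern to produce

x = np.zeros(n)
states = np.empty((T, n))
for i in range(T):
    x = np.tanh(W @ x + w_in @ drive[i])          # reservoir stays fixed
    states[i] = x

# Linear readout fitted by ridge regression on early data,
# evaluated on the unseen later half.
A, b = states[100:1000], target[100:1000]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n), A.T @ b)
rmse = np.sqrt(np.mean((states[1000:] @ w_out - target[1000:]) ** 2))
```

Because the drive phases jointly determine the target, the readout can lock onto it even though no recurrent weight is ever trained.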
3

Chen, Cong. "High-Dimensional Generative Models for 3D Perception". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103948.

Abstract:
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure by visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers as well as completing missing data by utilizing a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next section, a novel multi-level generative chaotic Recurrent Neural Network (RNN) has been proposed using a sparse tensor structure for image restoration. In the last part of the dissertation, we discuss the detection followed by localization, where we discuss extracting features from sparse tensors for data retrieval.
Doctor of Philosophy
The development of automation systems and robotics brought the modern world unrivaled affluence and convenience. However, the current automated tasks are mainly simple repetitive motions. Tasks that require more artificial capability with advanced visual cognition are still an unsolved problem for automation. Many of the high-level cognition-based tasks require the accurate visual perception of the environment and dynamic objects from the data received from the optical sensor. The capability to represent, identify and interpret complex visual data for understanding the geometric structure of the world is 3D perception. To better tackle the existing 3D perception challenges, this dissertation proposed a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data is relevant for many applications such as environmental monitoring or geological exploration. The data collected with sonar sensors are however subjected to different types of noise, including holes, noise measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) based sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework has been introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimension tensor feature extraction for underwater vehicle localization has been proposed.
4

Clodong, Sébastien. "Recurrent outbreaks in ecology: chaotic dynamics in complex networks". PhD thesis, Universität Potsdam, 2004. http://opus.kobv.de/ubp/volltexte/2005/171/.

Abstract:
One of the most striking features of ecological systems is their ability to undergo sudden outbreaks in the population numbers of one or a few species. The similarity of outbreak characteristics exhibited in totally different and unrelated (ecological) systems naturally raises the question of whether universal mechanisms underlie outbreak dynamics in ecology. Two case studies (the dynamics of phytoplankton blooms under variable nutrient supply, and the spread of epidemics in networks of cities) show that one explanation for the regular recurrence of outbreaks stems from the interaction of natural systems with periodic variations of their environment. Natural aquatic systems such as lakes offer very good examples of the annual recurrence of outbreaks, and the question of whether chaos is responsible for the irregular heights of outbreaks is central to ecological modeling; it is investigated here in the context of phytoplankton blooms. The dynamics of epidemics in networks of cities offers many ecological and theoretical aspects. The coupling between cities is introduced through their sizes and gives rise to a weighted network whose topology is generated from the distribution of city sizes. We examine the dynamics in this network and classify the different possible regimes; it is shown that the system can settle into a global two-year cycle that is also observed in real data. A single epidemiological model can be reduced to a one-dimensional map, and in this context we analyze the dynamics of networks of weighted maps. The coupling is a saturation function with a parameter that can be interpreted as an effective temperature for the network; this parameter allows the topology to be varied continuously from global coupling to a hierarchical network. We perform a bifurcation analysis of the global dynamics and construct an effective theory that explains the behavior of the system very well.
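The thesis couples many city-level epidemic maps through a weighted saturating function; the essential phenomenon, coupling strength deciding whether units lock together or wander incoherently, already appears for two diffusively coupled logistic maps. A toy stand-in, not the thesis's model:

```python
import numpy as np

def sync_error(eps, r=4.0, steps=1000, seed=5):
    """Two diffusively coupled logistic maps; returns the mean |x - y|
    over the final 100 steps as a synchronization diagnostic."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(2)
    errs = []
    for i in range(steps):
        fx, fy = r * x * (1 - x), r * y * (1 - y)
        x, y = (1 - eps) * fx + eps * fy, (1 - eps) * fy + eps * fx
        if i >= steps - 100:
            errs.append(abs(x - y))
    return float(np.mean(errs))

# For r = 4 the synchronized state is transversally stable when
# ln|1 - 2*eps| + ln 2 < 0, i.e. for eps between 0.25 and 0.75.
weak, strong = sync_error(0.05), sync_error(0.4)
```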
5

Clodong, Sébastien. "Recurrent outbreaks in ecology: chaotic dynamics in complex networks". [S.l.: s.n.], 2004. http://pub.ub.uni-potsdam.de/2004/0062/clodong.pdf.

6

Żbikowski, Rafal Waclaw. "Recurrent neural networks: some control aspects". Connect to electronic version, 1994. http://hdl.handle.net/1905/180.

7

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering". Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of the SWE into the field of neural networks provides a framework which is called the quantum recurrent neural network (QRNN). A filter based on this approach is categorized as an intelligent filter, as the underlying formulation is based on the analogy to a real neuron. In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: the existing filtering approach in the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series along with applications to real-world situations. Suggestions are made for the scope of further developments.
8

Zbikowski, Rafal Waclaw. "Recurrent neural networks: some control aspects". Thesis, University of Glasgow, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390233.

9

Jacobsson, Henrik. "Rule extraction from recurrent neural networks". Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/6081/.

10

Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Abstract:
The main goal of this thesis is to study how machine learning algorithms, and in particular LSTM (Long Short Term Memory) neural networks, can be used to predict future values of a regular time series such as, for example, the sine and cosine functions. A time series is defined as a sequence of observations s_t ordered in time. We also try to apply the same principles to predict the values of a time series built from the sales data of a cosmetic product over a period of three years. Before reaching the practical part of this thesis, it is necessary to introduce some fundamental concepts needed to develop the architecture and the code of our model. Both in the theoretical introduction and in the practical part, the focus is on RNNs (Recurrent Neural Networks), since they are the neural networks best suited to this kind of problem. A particular type of RNN, called Long Short Term Memory (LSTM), is the main subject of this thesis, and one of its variants, called the Gated Recurrent Unit (GRU), is also presented and used. In conclusion, this thesis confirms that LSTM and GRU are the best types of neural network for time series prediction. In the last part, we analyze the differences between using a CPU and a GPU during the training phase of the neural network.
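The sliding-window setup such a model is trained on can be shown in miniature. Since a uniformly sampled sine is perfectly linearly predictable, even an ordinary least-squares readout on the windows succeeds; this is a framework-free stand-in for the thesis's LSTM/GRU, shown only to illustrate the data pipeline.

```python
import numpy as np

# A regular series (sine), turned into (window -> next value) pairs.
series = np.sin(np.linspace(0, 20 * np.pi, 2000))
w = 10
X = np.stack([series[i:i + w] for i in range(len(series) - w)])
y = series[w:]

# Least-squares readout fitted on the first 1500 pairs, tested on the rest.
coef, *_ = np.linalg.lstsq(X[:1500], y[:1500], rcond=None)
rmse = np.sqrt(np.mean((X[1500:] @ coef - y[1500:]) ** 2))
```

An LSTM would replace the linear readout with a recurrent model, which is what makes the harder, irregular series (like the sales data) tractable.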

Books on the topic "Chaotic Recurrent Neural Networks"

1

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. Rijeka, Croatia: InTech, 2008.

2

Hammer, Barbara. Learning with recurrent neural networks. London: Springer London, 2000. http://dx.doi.org/10.1007/bfb0110016.

3

ElHevnawi, Mahmoud, and Mohamed Mysara. Recurrent neural networks and soft computing. Rijeka: InTech, 2012.

4

Tan, K. K., ed. Convergence analysis of recurrent neural networks. Boston: Kluwer Academic Publishers, 2004.

5

Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3.

6

Whittle, Peter. Neural nets and chaotic carriers. Chichester: Wiley, 1998.

7

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

8

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.

9

Neural nets and chaotic carriers. 2nd ed. London: Imperial College Press, 2010.

10

Liu, Derong, ed. Qualitative analysis and synthesis of recurrent neural networks. New York: Marcel Dekker, Inc., 2002.


Book chapters on the topic "Chaotic Recurrent Neural Networks"

1

Assaad, Mohammad, Romuald Boné, and Hubert Cardot. "Predicting Chaotic Time Series by Boosted Recurrent Neural Networks". In Neural Information Processing, 831–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893257_92.

2

Luo, Haigeng, Xiaodong Xu, and Xiaoxin Liao. "Numerical Analysis of a Chaotic Delay Recurrent Neural Network with Four Neurons". In Advances in Neural Networks - ISNN 2006, 328–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_51.

3

Sun, Jiancheng, Taiyi Zhang, and Haiyuan Liu. "Modelling of Chaotic Systems with Novel Weighted Recurrent Least Squares Support Vector Machines". In Advances in Neural Networks – ISNN 2004, 578–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_95.

4

Sun, Jiancheng, Lun Yu, Guang Yang, and Congde Lu. "Modelling of Chaotic Systems with Recurrent Least Squares Support Vector Machines Combined with Stationary Wavelet Transform". In Advances in Neural Networks – ISNN 2005, 424–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_69.

5

Hu, Yun-an, Bin Zuo, and Jing Li. "A Novel Chaotic Annealing Recurrent Neural Network for Multi-parameters Extremum Seeking Algorithm". In Neural Information Processing, 1022–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893257_112.

6

Yoshinaka, Ryosuke, Masato Kawashima, Yuta Takamura, Hitoshi Yamaguchi, Naoya Miyahara, Kei-ichiro Nabeta, Yongtao Li, and Shigetoshi Nara. "Adaptive Control of Robot Systems with Simple Rules Using Chaotic Dynamics in Quasi-layered Recurrent Neural Networks". In Studies in Computational Intelligence, 287–305. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27534-0_19.

7

Li, Yongtao, and Shigetoshi Nara. "Solving Complex Control Tasks via Simple Rule(s): Using Chaotic Dynamics in a Recurrent Neural Network Model". In The Relevance of the Time Domain to Neural Network Models, 159–78. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-0724-9_9.

8

Du, Ke-Lin, and M. N. S. Swamy. "Recurrent Neural Networks". In Neural Networks and Statistical Learning, 351–71. London: Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_12.

9

Yalçın, Orhan Gazi. "Recurrent Neural Networks". In Applied Neural Networks with TensorFlow 2, 161–85. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6513-0_8.

10

Calin, Ovidiu. "Recurrent Neural Networks". In Deep Learning Architectures, 543–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_17.


Conference papers on the topic "Chaotic Recurrent Neural Networks"

1

Liu, Ziqian. "Optimal chaotic synchronization of stochastic delayed recurrent neural networks". In 2013 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE, 2013. http://dx.doi.org/10.1109/spmb.2013.6736775.

2

Azarpour, M., S. A. Seyyedsalehi, and A. Taherkhani. "Robust pattern recognition using chaotic dynamics in Attractor Recurrent Neural Network". In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596375.

3

Li, Zhanying, Kejun Wang und Mo Tang. „Optimization of learning algorithms for Chaotic Diagonal Recurrent Neural Networks“. In 2010 International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2010. http://dx.doi.org/10.1109/icicip.2010.5564282.

4

Ma, Qian-Li, Qi-Lun Zheng, Hong Peng, Tan-Wei Zhong, and Li-Qiang Xu. „Chaotic Time Series Prediction Based on Evolving Recurrent Neural Networks“. In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370752.

5

Liu, Leipo, Xiaona Song, and Xiaoqiang Li. „Adaptive exponential synchronization of chaotic recurrent neural networks with stochastic perturbation“. In 2012 IEEE International Conference on Automation and Logistics (ICAL). IEEE, 2012. http://dx.doi.org/10.1109/ical.2012.6308232.

6

Hussein, Shamina, Rohitash Chandra, and Anuraganand Sharma. „Multi-step-ahead chaotic time series prediction using coevolutionary recurrent neural networks“. In 2016 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2016. http://dx.doi.org/10.1109/cec.2016.7744179.

7

Coca, Andres E., Roseli A. F. Romero, and Liang Zhao. „Generation of composed musical structures through recurrent neural networks based on chaotic inspiration“. In 2011 International Joint Conference on Neural Networks (IJCNN 2011 - San Jose). IEEE, 2011. http://dx.doi.org/10.1109/ijcnn.2011.6033648.

8

Tang, Mo, Kejun Wang, and Yan Zhang. „A Research on Chaotic Recurrent Fuzzy Neural Network and Its Convergence“. In 2007 International Conference on Mechatronics and Automation. IEEE, 2007. http://dx.doi.org/10.1109/icma.2007.4303626.

9

Folgheraiter, Michele, Nazgul Tazhigaliyeva, and Aibek Niyetkaliyev. „Adaptive joint trajectory generator based on a chaotic recurrent neural network“. In 2015 5th Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE, 2015. http://dx.doi.org/10.1109/devlrn.2015.7346158.

10

Li, Yongtao, Shuhei Kurata, Ryosuke Yoshinaka, and Shigetoshi Nara. „Chaotic dynamics in quasi-layered recurrent neural network model and application to complex control via simple rule“. In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178834.


Reports of organizations on the topic "Chaotic Recurrent Neural Networks"

1

Bodruzzaman, M., and M. A. Essawy. Iterative prediction of chaotic time series using a recurrent neural network. Quarterly progress report, January 1, 1995--March 31, 1995. Office of Scientific and Technical Information (OSTI), March 1996. http://dx.doi.org/10.2172/283610.

2

Pearlmutter, Barak A. Learning State Space Trajectories in Recurrent Neural Networks: A preliminary Report. Fort Belvoir, VA: Defense Technical Information Center, July 1988. http://dx.doi.org/10.21236/ada219114.

3

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.

4

Mathia, Karl. Solutions of linear equations and a class of nonlinear equations using recurrent neural networks. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1354.

5

Kozman, Robert, and Walter J. Freeman. The Effect of External and Internal Noise on the Performance of Chaotic Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada413501.

