Selection of scholarly literature on the topic "Chaotic Recurrent Neural Networks"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles.


Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Chaotic Recurrent Neural Networks".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Chaotic Recurrent Neural Networks"

1

Marković, Dimitrije, and Claudius Gros. "Intrinsic Adaptation in Autonomous Recurrent Neural Networks." Neural Computation 24, no. 2 (2012): 523–40. http://dx.doi.org/10.1162/neco_a_00232.

Annotation:
A massively recurrent neural network responds on one side to input stimuli and, on the other side, is autonomously active in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic …
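The ordered-versus-chaotic "default" activity described in this abstract can be reproduced with a generic random rate network (a minimal sketch, not the paper's adaptation model; the network size, gains, and random seed are arbitrary choices):

```python
import numpy as np

# Generic rate network x' = -x + W tanh(x): below a critical coupling
# gain the autonomous "default" state decays to rest; above it, the
# ongoing activity is self-sustained and irregular (chaotic regime).
rng = np.random.default_rng(0)
N, dt, steps = 100, 0.05, 4000

def final_rms(g):
    W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # coupling matrix, gain g
    x = 0.1 * rng.standard_normal(N)                  # small random initial state
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(x))            # Euler integration step
    return float(np.sqrt(np.mean(x ** 2)))

quiet  = final_rms(0.5)   # subcritical gain: activity dies out
active = final_rms(2.0)   # supercritical gain: self-sustained activity
print(quiet, active)
```

The qualitative transition at gain ≈ 1 is the classical result for random rate networks; the exact values printed depend on the seed.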
2

Wang, Jeff, and Raymond Lee. "Chaotic Recurrent Neural Networks for Financial Forecast." American Journal of Neural Networks and Applications 7, no. 1 (2021): 7. http://dx.doi.org/10.11648/j.ajnna.20210701.12.

3

Wang, Xing-Yuan, and Yi Zhang. "Chaotic diagonal recurrent neural network." Chinese Physics B 21, no. 3 (2012): 038703. http://dx.doi.org/10.1088/1674-1056/21/3/038703.

4

Manantsoa, Franci Zara, Hery Zo Randrianandraina, Minoson Sendrahasina Rakotomalala, and Modeste Kameni Nematchoua. "Chaos Control in Recurrent Neural Networks Using a Sinusoidal Activation Function via the Periodic Pulse Method." Research on Intelligent Manufacturing and Assembly 4, no. 1 (2025): 168–79. https://doi.org/10.25082/rima.2025.01.003.

Annotation:
Controlling chaos in recurrent neural networks (RNNs) is a crucial challenge in both computational neuroscience and artificial intelligence. Chaotic behavior in these networks can hinder stability and predictability, particularly in systems requiring structured memory and temporal processing. In this study, we apply the periodic pulse method to stabilize the dynamics of chaotic RNNs using a sinusoidal activation function. Two network configurations (2 and 3 neurons) were analyzed using numerical simulations in MATLAB. Our results show that the periodic pulse method effectively suppresses chaotic …
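For intuition, the periodic-pulse idea can be demonstrated on the logistic map rather than the paper's RNNs (the pulse period `m` and factor `lam` below are illustrative choices, not taken from the paper). With m = 1 and lam = 0.8 the pulsed map equals the logistic map with effective parameter 4 × 0.8 = 3.2, which has a stable period-2 orbit, so the chaos is provably suppressed:

```python
# Toy chaos control by periodic proportional pulses on the logistic map.
def orbit(r=4.0, x0=0.2, n=2000, m=None, lam=0.8):
    x, xs = x0, []
    for i in range(1, n + 1):
        x = r * x * (1 - x)          # chaotic logistic step (r = 4)
        if m is not None and i % m == 0:
            x *= lam                 # periodic proportional pulse
        xs.append(x)
    return xs

def tail_values(xs):
    return {round(v, 6) for v in xs[-200:]}   # distinct late-time values

free   = orbit()        # uncontrolled: many distinct values (chaos)
pulsed = orbit(m=1)     # pulsed: settles onto a 2-cycle
print(len(tail_values(free)), len(tail_values(pulsed)))
```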
5

Bertschinger, Nils, and Thomas Natschläger. "Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks." Neural Computation 16, no. 7 (2004): 1413–36. http://dx.doi.org/10.1162/089976604323057443.

Annotation:
Depending on the connectivity, recurrent networks of simple computational units can show very different types of dynamics, ranging from totally ordered to chaotic. We analyze how the type of dynamics (ordered or chaotic) exhibited by randomly connected networks of threshold gates driven by a time-varying input signal depends on the parameters describing the distribution of the connectivity matrix. In particular, we calculate the critical boundary in parameter space where the transition from ordered to chaotic dynamics takes place. Employing a recently developed framework for analyzing real-time …
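A rough numerical picture of the ordered/chaotic transition can be sketched with an autonomous tanh rate network and the standard two-trajectory Lyapunov estimate (an illustrative stand-in; the paper itself analyzes input-driven threshold gates, and all parameters below are arbitrary):

```python
import numpy as np

# Discrete-time network x(t+1) = tanh(g*W x). The largest Lyapunov
# exponent, estimated by repeatedly renormalizing the separation between
# two nearby trajectories, is negative in the ordered phase (g < 1) and
# positive in the chaotic phase (g > 1): the "edge of chaos" transition.
rng = np.random.default_rng(1)
N = 200
W = rng.standard_normal((N, N)) / np.sqrt(N)

def lyapunov(g, steps=2000, d0=1e-8):
    x = rng.standard_normal(N)
    y = x + d0 * rng.standard_normal(N) / np.sqrt(N)   # nearby trajectory
    acc = 0.0
    for _ in range(steps):
        x = np.tanh(g * W @ x)
        y = np.tanh(g * W @ y)
        d = np.linalg.norm(y - x)
        acc += np.log(d / d0)                # local expansion rate
        y = x + (d0 / d) * (y - x)           # renormalize the separation
    return acc / steps

lam_ordered = lyapunov(0.5)
lam_chaotic = lyapunov(1.5)
print(lam_ordered, lam_chaotic)
```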
6

Echenausía-Monroy, José Luis, Daniel Alejandro Magallón-García, Luis Javier Ontañón-García, Raul Rivera Rodriguez, Jonatan Pena Ramirez, and Joaquín Álvarez. "Does a Fractional-Order Recurrent Neural Network Improve the Identification of Chaotic Dynamics?" Fractal and Fractional 8, no. 11 (2024): 632. http://dx.doi.org/10.3390/fractalfract8110632.

Annotation:
This paper presents a quantitative study of the effects of using arbitrary-order operators in Neural Networks. It is based on a Recurrent Wavelet First-Order Neural Network (RWFONN), which can accurately identify several chaotic systems (measured by the mean square error and the coefficient of determination, also known as R-Squared, r2) under a fixed parameter scheme in the neural algorithm. Using fractional operators, we analyze whether the identification capabilities of the RWFONN are improved, and whether it can identify signals from fractional-order chaotic systems. The results presented in …
7

Fournier, Samantha J., and Pierfrancesco Urbani. "Statistical physics of learning in high-dimensional chaotic systems." Journal of Statistical Mechanics: Theory and Experiment 2023, no. 11 (2023): 113301. http://dx.doi.org/10.1088/1742-5468/ad082d.

Annotation:
In many complex systems, elementary units live in a chaotic environment and need to adapt their strategies to perform a task by extracting information from the environment and controlling the feedback loop on it. One of the main examples of systems of this kind is provided by recurrent neural networks. In this case, recurrent connections between neurons drive chaotic behavior, and when learning takes place, the response of the system to a perturbation should also take into account its feedback on the dynamics of the network itself. In this work, we consider an abstract model of a high-dimensional …
8

Dong, En Zeng, Yang Du, Cheng Cheng Li, and Zai Ping Chen. "Image Encryption Scheme Based on Dual Hyper-Chaotic Recurrent Neural Networks." Key Engineering Materials 474-476 (April 2011): 599–604. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.599.

Annotation:
Based on two hyper-chaotic recurrent neural networks, a new image encryption scheme is presented in this paper. In the encryption scheme, the shuffling matrix is generated by using a Hopfield neural network, which is used to shuffle the pixel locations; the diffusing matrix is generated by using a cellular neural network, which is used to diffuse the pixel grey values by an XOR operation. Finally, through numerical simulation and security analysis, the effectiveness of the encryption scheme is verified. Due to the complex dynamical behavior of the hyper-chaotic systems, the encryption scheme has the …
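The shuffle-then-diffuse construction described in this abstract can be sketched schematically. Purely for illustration, the permutation and the XOR mask below are derived from logistic-map keystreams rather than Hopfield and cellular neural networks, and the key values 0.3/0.7 are arbitrary:

```python
import numpy as np

# Shuffle pixel positions with a chaotic permutation, then diffuse the
# grey values by XOR with a chaotic mask; decryption inverts both steps.
def keystream(x0, n, r=3.99):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return np.array(xs)

def encrypt(img, k1=0.3, k2=0.7):
    flat = img.flatten()
    perm = np.argsort(keystream(k1, flat.size))            # shuffle step
    mask = (keystream(k2, flat.size) * 256).astype(np.uint8)
    return (flat[perm] ^ mask).reshape(img.shape), perm, mask  # XOR diffusion

def decrypt(enc, perm, mask):
    flat = enc.flatten() ^ mask          # undo the diffusion
    out = np.empty_like(flat)
    out[perm] = flat                     # undo the shuffle
    return out.reshape(enc.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)     # dummy 8x8 "image"
enc, perm, mask = encrypt(img)
print(np.array_equal(decrypt(enc, perm, mask), img))  # round trip succeeds
```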
9

Kandıran, Engin, and Avadis Hacınlıyan. "Comparison of Feedforward and Recurrent Neural Network in Forecasting Chaotic Dynamical System." AJIT-e Online Academic Journal of Information Technology 10, no. 37 (2019): 31–44. http://dx.doi.org/10.5824/1309-1581.2019.2.002.x.

Annotation:
Artificial neural networks are commonly accepted as a very successful tool for global function approximation. For this reason, they are considered a good approach to forecasting chaotic time series in many studies. For a given time series, the Lyapunov exponent is a good parameter for characterizing the series as chaotic or not. In this study, we use three different neural network architectures to test the capabilities of neural networks in forecasting time series generated from different dynamical systems. In addition to forecasting time series, using the feedforward neural network with …
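The Lyapunov-exponent criterion mentioned in this abstract can be checked directly on a generated series: for the logistic map at r = 4 the exponent is exactly ln 2, so a clearly positive estimate marks the series as chaotic (the seed 0.3 and the iteration counts below are arbitrary):

```python
import math

# Estimate the Lyapunov exponent of the logistic map x' = r*x*(1-x)
# by averaging the log of the local derivative |r*(1-2x)| along the orbit.
r, x = 4.0, 0.3
for _ in range(1000):                       # discard the transient
    x = r * x * (1 - x)

acc, n = 0.0, 100_000
for _ in range(n):
    x = r * x * (1 - x)
    acc += math.log(abs(r * (1 - 2 * x)))   # log of the local stretching rate
lyap = acc / n
print(lyap)                                 # close to ln 2 ~ 0.693
```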
10

Wu, Xiaoying, Yuanlong Chen, Jing Tian, and Liangliang Li. "Chaotic Dynamics of Discrete Multiple-Time Delayed Neural Networks of Ring Architecture Evoked by External Inputs." International Journal of Bifurcation and Chaos 26, no. 11 (2016): 1650179. http://dx.doi.org/10.1142/s0218127416501790.

Annotation:
In this paper, we consider a general class of discrete multiple-time delayed recurrent neural networks with external inputs. By applying a new transformation, we transform an m-neuron network model into a parameterized map from [Formula: see text] to [Formula: see text]. A chaotic invariant set of the neural network system is obtained by using a family of projections from [Formula: see text] onto [Formula: see text]. Furthermore, we prove that the dynamics of this neural network system restricted to the chaotic invariant set is topologically conjugate to the dynamics of the full shift map with …

Dissertations on the topic "Chaotic Recurrent Neural Networks"

1

Molter, Colin. "Storing information through complex dynamics in recurrent neural networks." Doctoral thesis, Universite Libre de Bruxelles, 2005. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211039.

Annotation:
The neural net computer simulations presented here are based on the acceptance of a set of assumptions that have been expressed over the last twenty years in the fields of information processing, neurophysiology, and cognitive sciences. First of all, neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural net should be coded in some way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the net so as to trap its dynamics in …
2

Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.

Annotation:
Many living organisms have the ability to execute complex behaviors and cognitive processes reliably. In many cases, such tasks are generated in the absence of any ongoing external input that could drive the activity of their underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist. There has to be some pattern of activity in the areas of the brain responsible for this behavior that is endogenously generated every time an individual performs this action. Whereas the question of how such a neural code is …
3

Chen, Cong. "High-Dimensional Generative Models for 3D Perception." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103948.

Annotation:
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure from visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The framework …
4

Clodong, Sébastien. "Recurrent outbreaks in ecology: chaotic dynamics in complex networks." [S.l.: s.n.], 2004. http://pub.ub.uni-potsdam.de/2004/0062/clodong.pdf.

5

Clodong, Sébastien. "Recurrent outbreaks in ecology : chaotic dynamics in complex networks." Phd thesis, Universität Potsdam, 2004. http://opus.kobv.de/ubp/volltexte/2005/171/.

Annotation:
The subject of this dissertation is the investigation of recurrent outbreaks (such as epidemics) in nature. This was achieved using models that describe the dynamics of phytoplankton and the spread of diseases between cities. These two systems are excellent examples of such phenomena. The question of whether outbreaks recurring in time can be an expression of chaotic dynamics is a live one in ecology and fascinates scientists in the discipline. We were able to show that, when periodically forced through the nutrient …
6

Żbikowski, Rafal Waclaw. "Recurrent neural networks: some control aspects." Connect to electronic version, 1994. http://hdl.handle.net/1905/180.

7

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Annotation:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation …
8

Zbikowski, Rafal Waclaw. "Recurrent neural networks : some control aspects." Thesis, University of Glasgow, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390233.

9

Jacobsson, Henrik. "Rule extraction from recurrent neural networks." Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/6081/.

10

Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Annotation:
The main goal of this thesis is to study how machine learning algorithms, and in particular LSTM (Long Short-Term Memory) neural networks, can be used to predict the future values of a regular time series such as, for example, the sine and cosine functions. A time series is defined as a sequence of observations s_t ordered in time. We also attempt to apply the same principles to predict the values of a time series produced from the sales data of a cosmetic product over a period of three years.

Books on the topic "Chaotic Recurrent Neural Networks"

1

Salem, Fathi M. Recurrent Neural Networks. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89929-5.

2

Tyagi, Amit Kumar, and Ajith Abraham. Recurrent Neural Networks. CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822.

3

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. InTech, 2008.

4

Hammer, Barbara. Learning with recurrent neural networks. Springer London, 2000. http://dx.doi.org/10.1007/bfb0110016.

5

Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks. Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3.

6

ElHevnawi, Mahmoud, and Mohamed Mysara. Recurrent neural networks and soft computing. InTech, 2012.

7

Medsker, L. R., and L. C. Jain, eds. Recurrent neural networks: Design and applications. CRC Press, 2000.

8

Tan, K. K., ed. Convergence analysis of recurrent neural networks. Kluwer Academic Publishers, 2004.

9

Whittle, Peter. Neural nets and chaotic carriers. Wiley, 1998.

10

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.


Book chapters on the topic "Chaotic Recurrent Neural Networks"

1

Assaad, Mohammad, Romuald Boné, and Hubert Cardot. "Predicting Chaotic Time Series by Boosted Recurrent Neural Networks." In Neural Information Processing. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893257_92.

2

Myrzakhmetov, Bagdat, Rustem Takhanov, and Zhenisbek Assylbekov. "Initial Explorations on Chaotic Behaviors of Recurrent Neural Networks." In Computational Linguistics and Intelligent Text Processing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-24337-0_26.

3

Luo, Haigeng, Xiaodong Xu, and Xiaoxin Liao. "Numerical Analysis of a Chaotic Delay Recurrent Neural Network with Four Neurons." In Advances in Neural Networks - ISNN 2006. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_51.

4

Sun, Jiancheng, Taiyi Zhang, and Haiyuan Liu. "Modelling of Chaotic Systems with Novel Weighted Recurrent Least Squares Support Vector Machines." In Advances in Neural Networks – ISNN 2004. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_95.

5

Sun, Jiancheng, Lun Yu, Guang Yang, and Congde Lu. "Modelling of Chaotic Systems with Recurrent Least Squares Support Vector Machines Combined with Stationary Wavelet Transform." In Advances in Neural Networks – ISNN 2005. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_69.

6

Hu, Yun-an, Bin Zuo, and Jing Li. "A Novel Chaotic Annealing Recurrent Neural Network for Multi-parameters Extremum Seeking Algorithm." In Neural Information Processing. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893257_112.

7

Yoshinaka, Ryosuke, Masato Kawashima, Yuta Takamura, et al. "Adaptive Control of Robot Systems with Simple Rules Using Chaotic Dynamics in Quasi-layered Recurrent Neural Networks." In Studies in Computational Intelligence. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27534-0_19.

8

Li, Yongtao, and Shigetoshi Nara. "Solving Complex Control Tasks via Simple Rule(s): Using Chaotic Dynamics in a Recurrent Neural Network Model." In The Relevance of the Time Domain to Neural Network Models. Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-0724-9_9.

9

Du, Ke-Lin, and M. N. S. Swamy. "Recurrent Neural Networks." In Neural Networks and Statistical Learning. Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5571-3_11.

10

Du, Ke-Lin, and M. N. S. Swamy. "Recurrent Neural Networks." In Neural Networks and Statistical Learning. Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_12.


Conference papers on the topic "Chaotic Recurrent Neural Networks"

1

Narmada, A., Anuj Jain, and Manoj Kumar Shukla. "Recurrent Neural Network with Backstepping controller based Fractional-Order Chaotic System based Solution for Secure Communication and Image Encryption." In 2024 International Conference on IoT Based Control Networks and Intelligent Systems (ICICNIS). IEEE, 2024. https://doi.org/10.1109/icicnis64247.2024.10823231.

2

Liu, Ziqian. "Optimal chaotic synchronization of stochastic delayed recurrent neural networks." In 2013 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE, 2013. http://dx.doi.org/10.1109/spmb.2013.6736775.

3

Azarpour, M., S. A. Seyyedsalehi, and A. Taherkhani. "Robust pattern recognition using chaotic dynamics in Attractor Recurrent Neural Network." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596375.

4

Ma, Qian-Li, Qi-Lun Zheng, Hong Peng, Tan-Wei Zhong, and Li-Qiang Xu. "Chaotic Time Series Prediction Based on Evolving Recurrent Neural Networks." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370752.

5

Li, Zhanying, Kejun Wang, and Mo Tang. "Optimization of learning algorithms for Chaotic Diagonal Recurrent Neural Networks." In 2010 International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2010. http://dx.doi.org/10.1109/icicip.2010.5564282.

6

Liu, Leipo, Xiaona Song, and Xiaoqiang Li. "Adaptive exponential synchronization of chaotic recurrent neural networks with stochastic perturbation." In 2012 IEEE International Conference on Automation and Logistics (ICAL). IEEE, 2012. http://dx.doi.org/10.1109/ical.2012.6308232.

7

Coca, Andres E., Roseli A. F. Romero, and Liang Zhao. "Generation of composed musical structures through recurrent neural networks based on chaotic inspiration." In 2011 International Joint Conference on Neural Networks (IJCNN 2011 - San Jose). IEEE, 2011. http://dx.doi.org/10.1109/ijcnn.2011.6033648.

8

Hussein, Shamina, Rohitash Chandra, and Anuraganand Sharma. "Multi-step-ahead chaotic time series prediction using coevolutionary recurrent neural networks." In 2016 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2016. http://dx.doi.org/10.1109/cec.2016.7744179.

9

Ahuja, Muskaan, and Sanju Saini. "Recurrent Neural Network for Chaotic Wind Speed Time Series Prediction." In 2023 9th IEEE India International Conference on Power Electronics (IICPE). IEEE, 2023. http://dx.doi.org/10.1109/iicpe60303.2023.10474907.

10

Bingi, Kishore, P. Arun Mozhi Devan, and Fawnizu Azmadi Hussin. "Reconstruction of Chaotic Attractor for Fractional-order Tamaševičius System Using Recurrent Neural Networks." In 2021 Australian & New Zealand Control Conference (ANZCC). IEEE, 2021. http://dx.doi.org/10.1109/anzcc53563.2021.9628225.


Reports by organizations on the topic "Chaotic Recurrent Neural Networks"

1

Bodruzzaman, M., and M. A. Essawy. Iterative prediction of chaotic time series using a recurrent neural network. Quarterly progress report, January 1, 1995--March 31, 1995. Office of Scientific and Technical Information (OSTI), 1996. http://dx.doi.org/10.2172/283610.

2

Pearlmutter, Barak A. Learning State Space Trajectories in Recurrent Neural Networks: A preliminary Report. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada219114.

3

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1366924.

4

Mathia, Karl. Solutions of linear equations and a class of nonlinear equations using recurrent neural networks. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.1354.

5

Lin, Linyu, Joomyung Lee, Bikash Poudel, Timothy McJunkin, Nam Dinh, and Vivek Agarwal. Enhancing the Operational Resilience of Advanced Reactors with Digital Twins by Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1835892.

6

Kozman, Robert, and Walter J. Freeman. The Effect of External and Internal Noise on the Performance of Chaotic Neural Networks. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada413501.

7

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Annotation:
Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capabilities …
8

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Annotation:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, a color sensor, an electronic sniffer for odor detection …
9

Yu, Nanpeng, Koji Yamashita, Brandon Foggo, et al. Final Project Report: Discovery of Signatures, Anomalies, and Precursors in Synchrophasor Data with Matrix Profile and Deep Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1874793.

10

Perdigão, Rui A. P. Neuro-Quantum Cyber-Physical Intelligence (NQCPI). Synergistic Manifolds, 2024. http://dx.doi.org/10.46337/241024.

Annotation:
Neuro-Quantum Cyber-Physical Intelligence (NQCPI) is hereby introduced, entailing a novel framework for nonlinear nature-based neural post-quantum information physics, along with novel advances in far-from-equilibrium thermodynamics and evolutionary cognition in post-quantum neurobiochemistry for next-generation information-physical systems intelligence. NQCPI harnesses and operates with the higher-order nonlinear nature of previously elusive quantum behaviour, including in open chaotic dissipative systems in thermodynamically and magneto-electrodynamically disruptive conditions, such as in …