Journal articles on the topic 'Reservoir computing networks'

To see the other types of publications on this topic, follow the link: Reservoir computing networks.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Reservoir computing networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Van der Sande, Guy, Daniel Brunner, and Miguel C. Soriano. "Advances in photonic reservoir computing." Nanophotonics 6, no. 3 (May 12, 2017): 561–76. http://dx.doi.org/10.1515/nanoph-2016-0132.

Full text
Abstract:
We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
APA, Harvard, Vancouver, ISO, and other styles
2

Ghosh, Sanjib, Kohei Nakajima, Tanjung Krisnanda, Keisuke Fujii, and Timothy C. H. Liew. "Quantum Neuromorphic Computing with Reservoir Computing Networks." Advanced Quantum Technologies 4, no. 9 (July 9, 2021): 2100053. http://dx.doi.org/10.1002/qute.202100053.

3

Röhm, André, Lina Jaurigue, and Kathy Lüdge. "Reservoir Computing Using Laser Networks." IEEE Journal of Selected Topics in Quantum Electronics 26, no. 1 (January 2020): 1–8. http://dx.doi.org/10.1109/jstqe.2019.2927578.

4

Damicelli, Fabrizio, Claus C. Hilgetag, and Alexandros Goulas. "Brain connectivity meets reservoir computing." PLOS Computational Biology 18, no. 11 (November 16, 2022): e1010639. http://dx.doi.org/10.1371/journal.pcbi.1010639.

Abstract:
The connectivity of Artificial Neural Networks (ANNs) is different from the one observed in Biological Neural Networks (BNNs). Can the wiring of actual brains help improve ANN architectures? Can we learn from ANNs about what network features support computation in the brain when solving a task? At a meso/macro-scale level of connectivity, ANN architectures are carefully engineered, and those design decisions have crucial importance in many recent performance improvements. On the other hand, BNNs exhibit complex emergent connectivity patterns at all scales. At the individual level, BNN connectivity results from brain development and plasticity processes, while at the species level, adaptive reconfigurations during evolution also play a major role in shaping connectivity. Ubiquitous features of brain connectivity have been identified in recent years, but their role in the brain’s ability to perform concrete computations remains poorly understood. Computational neuroscience studies reveal the influence of specific brain connectivity features only on abstract dynamical properties, and the implications of real brain network topologies for machine learning or cognitive tasks have barely been explored. Here we present a cross-species study with a hybrid approach integrating real brain connectomes and Bio-Echo State Networks, which we use to solve concrete memory tasks, allowing us to probe the potential computational implications of real brain connectivity patterns for task solving. We find results consistent across species and tasks, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. We also present a framework, bio2art, to map and scale up real connectomes so they can be integrated into recurrent ANNs. This approach also allows us to show the crucial importance of the diversity of interareal connectivity patterns, stressing the importance of the stochastic processes that determine neural network connectivity in general.
5

Antonik, Piotr, Serge Massar, and Guy Van Der Sande. "Photonic reservoir computing using delay dynamical systems." Photoniques, no. 104 (September 2020): 45–48. http://dx.doi.org/10.1051/photon/202010445.

Abstract:
The recent progress in artificial intelligence has spurred renewed interest in hardware implementations of neural networks. Reservoir computing is a powerful, highly versatile machine learning algorithm well suited for experimental implementations. The simplest high-performance architecture is based on delay dynamical systems. We illustrate its power through a series of photonic examples, including the first all-optical reservoir computer and reservoir computers based on lasers with delayed feedback. We also show how reservoirs can be used to emulate dynamical systems. We discuss the prospects of photonic reservoir computing.
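As a rough illustration of the delay-based architecture mentioned in this abstract, the following sketch time-multiplexes a single nonlinear node into N "virtual nodes"; the sine nonlinearity, the random input mask, and all parameter values (`eta`, `gamma`, `N`) are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                         # virtual nodes per delay loop
mask = rng.uniform(-1, 1, N)   # fixed random input mask
eta, gamma = 0.5, 0.05         # feedback and input scaling (assumed values)

def run_delay_reservoir(u):
    """Return the N virtual-node states for each input sample."""
    states = np.zeros((len(u), N))
    node = np.zeros(N)         # node values one delay period ago
    for t in range(len(u)):
        for i in range(N):
            # each virtual node mixes its delayed value with the masked input
            node[i] = np.sin(eta * node[i] + gamma * mask[i] * u[t])
        states[t] = node
    return states

u = rng.standard_normal(200)
X = run_delay_reservoir(u)
print(X.shape)   # (200, 50)
```

A linear readout trained on the rows of `X` would complete the reservoir computer.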
6

Tran, Dat, and Christof Teuscher. "Computational Capacity of Complex Memcapacitive Networks." ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–25. http://dx.doi.org/10.1145/3445795.

Abstract:
Emerging memcapacitive nanoscale devices have the potential to perform computations in new ways. In this article, we systematically study, to the best of our knowledge for the first time, the computational capacity of complex memcapacitive networks, which function as reservoirs in reservoir computing, one of the brain-inspired computing architectures. Memcapacitive networks are composed of memcapacitive devices randomly connected through nanowires. Previous studies have shown that both regular and random reservoirs provide sufficient dynamics to perform simple tasks. How do complex memcapacitive networks demonstrate their computational capability, and what are the topological structures of memcapacitive networks that solve complex tasks with efficiency? Studies show that small-world power-law (SWPL) networks offer an ideal trade-off between the communication properties and the wiring cost of networks. In this study, we illustrate the computing nature of SWPL memcapacitive reservoirs by exploring two essential properties, fading memory and linear separation, through measurements of kernel quality. Compared to ideal reservoirs, nanowire memcapacitive reservoirs had a better dynamic response and improved their performance by 4.67% on three tasks: MNIST, Isolated Spoken Digits, and CIFAR-10. On the same three tasks, compared to memristive reservoirs, nanowire memcapacitive reservoirs achieved comparable performance with much less power: on average, about 99×, 17×, and 277× less, respectively. Simulation results of the topological transformation of memcapacitive networks reveal that the topological structure of the memcapacitive SWPL reservoirs did not affect their performance but contributed significantly to the wiring cost and the power consumption of the systems. The minimum trade-off between the wiring cost and the power consumption occurred at different network settings of α and β: 4.5 and 0.61 for Biolek reservoirs, 2.7 and 1.0 for Mohamed reservoirs, and 3.0 and 1.0 for Najem reservoirs. The results of our research illustrate the computational capacity of complex memcapacitive networks as reservoirs in reservoir computing. Such memcapacitive networks with an SWPL topology are energy-efficient systems that are suitable for low-power applications such as mobile devices and the Internet of Things.
7

Senn, Christoph Walter, and Itsuo Kumazawa. "Abstract Reservoir Computing." AI 3, no. 1 (March 10, 2022): 194–210. http://dx.doi.org/10.3390/ai3010012.

Abstract:
Noise of any kind can be an issue when translating results from simulations to the real world. We suddenly have to deal with building tolerances, faulty sensors, or just noisy sensor readings. This is especially evident in systems with many free parameters, such as the ones used in physical reservoir computing. By abstracting away these kinds of noise sources using intervals, we derive a regularized training regime for reservoir computing using sets of possible reservoir states. Numerical simulations are used to show the effectiveness of our approach against different sources of errors that can appear in real-world scenarios and to compare it with standard approaches. Our results support the application of interval arithmetic to improve the robustness of mass-spring networks trained in simulations.
8

Hart, Joseph D., Laurent Larger, Thomas E. Murphy, and Rajarshi Roy. "Delayed dynamical systems: networks, chimeras and reservoir computing." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377, no. 2153 (July 22, 2019): 20180123. http://dx.doi.org/10.1098/rsta.2018.0123.

Abstract:
We present a systematic approach to reveal the correspondence between time delay dynamics and networks of coupled oscillators. After early demonstrations of the usefulness of spatio-temporal representations of time-delay system dynamics, extensive research on optoelectronic feedback loops has revealed their immense potential for realizing complex system dynamics such as chimeras in rings of coupled oscillators and applications to reservoir computing. Delayed dynamical systems have been enriched in recent years through the application of digital signal processing techniques. Very recently, we have shown that one can significantly extend the capabilities and implement networks with arbitrary topologies through the use of field programmable gate arrays. This architecture allows the design of appropriate filters and multiple time delays, and greatly extends the possibilities for exploring synchronization patterns in arbitrary network topologies. This has enabled us to explore complex dynamics on networks with nodes that can be perfectly identical, introduce parameter heterogeneities and multiple time delays, as well as change network topologies to control the formation and evolution of patterns of synchrony. This article is part of the theme issue ‘Nonlinear dynamics of delay systems’.
9

Pathak, Shantanu S., and D. Rajeswara Rao. "Reservoir Computing for Healthcare Analytics." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 240. http://dx.doi.org/10.14419/ijet.v7i2.32.15576.

Abstract:
In this data age, tools for the sophisticated generation and handling of data see heavy use. Data varying in both space and time pose a particular breed of challenges, and the challenges they pose for forecasting can be well handled by reservoir-computing-based neural networks. Challenges such as class imbalance, missing values, and locality effects are discussed here, along with popular statistical techniques for forecasting such data. Results show how the reservoir-computing-based technique outperforms traditional neural networks.
10

Andrecut, M. "Reservoir computing on the hypersphere." International Journal of Modern Physics C 28, no. 07 (July 2017): 1750095. http://dx.doi.org/10.1142/s0129183117500954.

Abstract:
Reservoir Computing (RC) refers to a Recurrent Neural Network (RNN) framework, frequently used for sequence learning and time series prediction. The RC system consists of a random fixed-weight RNN (the input-hidden reservoir layer) and a classifier (the hidden-output readout layer). Here, we focus on the sequence learning problem, and we explore a different approach to RC. More specifically, we remove the nonlinear neural activation function, and we consider an orthogonal reservoir acting on normalized states on the unit hypersphere. Surprisingly, our numerical results show that the system’s memory capacity exceeds the dimensionality of the reservoir, which is the upper bound for the typical RC approach based on Echo State Networks (ESNs). We also show how the proposed system can be applied to symmetric cryptography problems, and we include a numerical implementation.
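The reservoir described in this abstract can be sketched in a few lines: an orthogonal matrix drives a purely linear update, and the state is renormalized onto the unit hypersphere after every step. The QR-based construction, the sizes, and the input weights `v` below are illustrative assumptions; the trained linear readout is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal reservoir
v = rng.standard_normal(n)                        # fixed input weights

def step(x, u):
    x = Q @ x + v * u             # purely linear update, no activation
    return x / np.linalg.norm(x)  # renormalize onto the unit hypersphere

x = np.ones(n) / np.sqrt(n)
for u in rng.standard_normal(100):
    x = step(x, u)
print(np.linalg.norm(x))  # 1.0 up to rounding
```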
11

Akai-Kasaya, Megumi. "(Invited) Neuromorphic Devices and Systems Using Carbon Nanotubes." ECS Meeting Abstracts MA2022-01, no. 10 (July 7, 2022): 778. http://dx.doi.org/10.1149/ma2022-0110778mtgabs.

Abstract:
Molecular neuromorphic devices composed of single-walled carbon nanotubes (SWNTs) complexed with polyoxometalate (POM) exhibit various properties very similar to the functions of neurons. A random and extremely dense SWNT/POM network is expected to have the rudimentary ability of reservoir computing, which is a recently introduced framework derived from recurrent neural networks. We performed RC using multiple signals collected from a SWNT/POM random network. The results are expected to contribute to the design and fabrication of SWNT/molecular networks with high reservoir functionalities for future development. A physical reservoir consisting of nanomaterials may become a computing system with low cost, low power consumption, and highly integrated hardware devices for increasingly important edge computing.
12

Thivierge, Jean-Philippe, Eloïse Giraud, Michael Lynn, and Annie Théberge Charbonneau. "Key role of neuronal diversity in structured reservoir computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 11 (November 2022): 113130. http://dx.doi.org/10.1063/5.0111131.

Abstract:
Chaotic time series have been captured by reservoir computing models composed of a recurrent neural network whose output weights are trained in a supervised manner. These models, however, are typically limited to randomly connected networks of homogeneous units. Here, we propose a new class of structured reservoir models that incorporates a diversity of cell types and their known connections. In a first version of the model, the reservoir was composed of mean-rate units separated into pyramidal, parvalbumin, and somatostatin cells. Stability analysis of this model revealed two distinct dynamical regimes, namely, (i) an inhibition-stabilized network (ISN) where strong recurrent excitation is balanced by strong inhibition and (ii) a non-ISN network with weak excitation. These results were extended to a leaky integrate-and-fire model that captured different cell types along with their network architecture. ISN and non-ISN reservoir networks were trained to relay and generate a chaotic Lorenz attractor. Despite their increased performance, ISN networks operate in a regime of activity near the limits of stability where external perturbations yield a rapid divergence in output. The proposed framework of structured reservoir computing opens avenues for exploring how neural microcircuits can balance performance and stability when representing time series through distinct dynamical regimes.
13

Yamaguti, Yutaka, and Ichiro Tsuda. "Functional differentiations in evolutionary reservoir computing networks." Chaos: An Interdisciplinary Journal of Nonlinear Science 31, no. 1 (January 2021): 013137. http://dx.doi.org/10.1063/5.0019116.

14

de Vos, N. J. "Reservoir computing as an alternative to traditional artificial neural networks in rainfall-runoff modelling." Hydrology and Earth System Sciences Discussions 9, no. 5 (May 11, 2012): 6101–34. http://dx.doi.org/10.5194/hessd-9-6101-2012.

Abstract:
Despite theoretical benefits of recurrent artificial neural networks over their feedforward counterparts, it is still unclear whether the former offer practical advantages as rainfall-runoff models. The main drawback of recurrent networks is the increased complexity of the training procedure due to their architecture. This work uses recently introduced, conceptually simple reservoir computing models for one-day-ahead forecasts on twelve river basins in the Eastern United States, and compares them to a variety of traditional feedforward and recurrent models. Two modifications on the reservoir computing models are made to increase the hydrologically relevant information content of their internal state. The results show that the reservoir computing networks outperform feedforward networks and are competitive with state-of-the-art recurrent networks, across a range of performance measures. This, along with their simplicity and ease of training, suggests that reservoir computing models can be considered promising alternatives to traditional artificial neural networks in rainfall-runoff modelling.
15

Ferreira, Tiago D., Nuno A. Silva, Duarte Silva, Carla C. Rosa, and Ariel Guerreiro. "Reservoir computing with nonlinear optical media." Journal of Physics: Conference Series 2407, no. 1 (December 1, 2022): 012019. http://dx.doi.org/10.1088/1742-6596/2407/1/012019.

Abstract:
Reservoir computing is a versatile approach for physically implementing recurrent neural networks that take advantage of a reservoir, consisting of a set of interconnected neurons with temporal dynamics, whose weights and biases are fixed and do not need to be optimized. Instead, training takes place only at the output layer, towards a specific task. One important requirement for these systems to work is nonlinearity, which in optical setups is usually obtained via the saturation of the detection device. In this work, we explore a distinct approach using a photorefractive crystal as the source of nonlinearity in the reservoir. Furthermore, by leveraging the time response of the photorefractive medium, one can also obtain the temporal interaction required for such an architecture. If we space out the propagation of different states in time, the temporal interaction is lost, and the system can work as an extreme learning machine. This corresponds to a physical implementation of a feed-forward neural network with a single hidden layer and fixed random weights and biases. Some preliminary results are presented and discussed.
16

Obst, Oliver, Adrian Trinchi, Simon G. Hardin, Matthew Chadwick, Ivan Cole, Tim H. Muster, Nigel Hoschke, et al. "Nano-scale reservoir computing." Nano Communication Networks 4, no. 4 (December 2013): 189–96. http://dx.doi.org/10.1016/j.nancom.2013.08.005.

17

Hanias, Michael P., Spyros P. Georgopoulos, Stavros G. Stavrinides, Panagiotis Tziatzios, and Ioannis P. Antoniades. "Reservoir computing vs. neural networks in financial forecasting." International Journal of Computational Economics and Econometrics 1, no. 1 (2021): 1. http://dx.doi.org/10.1504/ijcee.2021.10041721.

18

Gallicchio, Claudio, and Alessio Micheli. "Echo State Property of Deep Reservoir Computing Networks." Cognitive Computation 9, no. 3 (May 5, 2017): 337–50. http://dx.doi.org/10.1007/s12559-017-9461-9.

19

Georgopoulos, Spyros P., Panagiotis Tziatzios, Stavros G. Stavrinides, Ioannis P. Antoniades, and Michael P. Hanias. "Reservoir computing vs. neural networks in financial forecasting." International Journal of Computational Economics and Econometrics 13, no. 1 (2023): 1. http://dx.doi.org/10.1504/ijcee.2023.127283.

20

Röhm, André, and Kathy Lüdge. "Multiplexed networks: reservoir computing with virtual and real nodes." Journal of Physics Communications 2, no. 8 (August 3, 2018): 085007. http://dx.doi.org/10.1088/2399-6528/aad56d.

21

OZTURK, MUSTAFA C., and JOSE C. PRINCIPE. "FREEMAN'S K MODELS AS RESERVOIR COMPUTING ARCHITECTURES." New Mathematics and Natural Computation 05, no. 01 (March 2009): 265–86. http://dx.doi.org/10.1142/s179300570900126x.

Abstract:
Walter Freeman, in his classic 1975 book "Mass Action in the Nervous System", presented a hierarchy of dynamical computational models based on studies and measurements done in real brains, which has become known as Freeman's K model (FKM). Much more recently, the liquid state machine (LSM) and the echo state network (ESN) have been proposed as universal approximators in the class of functionals with exponentially decaying memory. In this paper, we briefly review these models and show that the restricted K set architecture of KI and KII networks shares the same properties as LSMs/ESNs and is therefore one more member of the reservoir computing family. In the reservoir computing perspective, the states of the FKM are a representation space that stores in its spatio-temporal dynamics a short-term history of the input patterns. Then, at any time, with a simple instantaneous readout made up of a KI, information related to the input history can be accessed and read out. This work provides two important contributions. First, it emphasizes the need for optimal readouts and shows how to adaptively design them. Second, it shows that the Freeman model is able to process continuous signals with temporal structure. We provide theoretical results for the conditions on the system parameters of the FKM satisfying the echo state property. Experimental results are presented to illustrate the validity of the proposed approach.
22

Büsing, Lars, Benjamin Schrauwen, and Robert Legenstein. "Connectivity, Dynamics, and Memory in Reservoir Computing with Binary and Analog Neurons." Neural Computation 22, no. 5 (May 2010): 1272–311. http://dx.doi.org/10.1162/neco.2009.01-09-947.

Abstract:
Reservoir computing (RC) systems are powerful models for online computations on input sequences. They consist of a memoryless readout neuron that is trained on top of a randomly connected recurrent neural network. RC systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuits. Previous work indicated a fundamental difference in the behavior of these two implementations of the RC idea. The performance of an RC system built from binary neurons seems to depend strongly on the network connectivity structure. In networks of analog neurons, such clear dependency has not been observed. In this letter, we address this apparent dichotomy by investigating the influence of the network connectivity (parameterized by the neuron in-degree) on a family of network models that interpolates between analog and binary networks. Our analyses are based on a novel estimation of the Lyapunov exponent of the network dynamics with the help of branching process theory, rank measures that estimate the kernel quality and generalization capabilities of recurrent networks, and a novel mean field predictor for computational performance. These analyses reveal that the phase transition between ordered and chaotic network behavior of binary circuits qualitatively differs from the one in analog circuits, leading to differences in the integration of information over short and long timescales. This explains the decreased computational performance observed in binary circuits that are densely connected. The mean field predictor is also used to bound the memory function of recurrent circuits of binary neurons.
23

Allwood, Dan A., Matthew O. A. Ellis, David Griffin, Thomas J. Hayward, Luca Manneschi, Mohammad F. KH Musameh, Simon O'Keefe, et al. "A perspective on physical reservoir computing with nanomagnetic devices." Applied Physics Letters 122, no. 4 (January 23, 2023): 040501. http://dx.doi.org/10.1063/5.0119040.

Abstract:
Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here, we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices since they can provide the properties of non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
24

Basterrech, Sebastián, and Gerardo Rubino. "ECHO STATE QUEUEING NETWORKS: A COMBINATION OF RESERVOIR COMPUTING AND RANDOM NEURAL NETWORKS." Probability in the Engineering and Informational Sciences 31, no. 4 (May 17, 2017): 457–76. http://dx.doi.org/10.1017/s0269964817000110.

Abstract:
This paper deals with two ideas that appeared during the latest development phase of Artificial Intelligence: Reservoir Computing (RC) and Random Neural Networks. Both have been very successful in many applications. We propose a new model belonging to the first class, taking the structure of the second for its dynamics. The new model is called the Echo State Queueing Network. The paper positions the model in the global Machine Learning area and provides examples of its use and performance. We show on widely used benchmarks that it is a very accurate tool, and we illustrate how it compares with standard RC models.
25

Ren, Bin, and Huanfei Ma. "Global optimization of hyper-parameters in reservoir computing." Electronic Research Archive 30, no. 7 (2022): 2719–29. http://dx.doi.org/10.3934/era.2022139.

Abstract:
Reservoir computing has emerged as a powerful and efficient machine learning tool, especially in the reconstruction of many complex systems, even chaotic ones, based only on observational data. Though fruitful advances have been extensively studied, how to capture the art of hyper-parameter setting to construct an efficient RC is still a long-standing and urgent problem. In contrast to the local manner of many works, which aim to optimize one hyper-parameter while keeping others constant, in this work we propose a global optimization framework using a simulated annealing technique to find the optimal architecture of the randomly generated networks for a successful RC. Based on the optimized results, we further study several important properties of some hyper-parameters. In particular, we find that the globally optimized reservoir network has its largest singular value significantly larger than one, which is contrary to the sufficient condition reported in the literature to guarantee the echo state property. We further reveal the mechanism of this phenomenon with a simplified model and the theory of nonlinear dynamical systems.
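A minimal sketch of the global-search idea in this abstract, assuming a toy task and a stand-in echo state network: simulated annealing proposes joint perturbations of the spectral radius `rho` and input scaling `s` and accepts them with the Metropolis rule. All parameter values, the toy series, and the cooling schedule are illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(2)
u = np.sin(0.2 * np.arange(400))        # toy input series
target = np.roll(u, -1)                 # one-step-ahead target

def esn_error(rho, s, n=50):
    """Test error of a stand-in ESN with spectral radius rho, input scaling s."""
    rg = np.random.default_rng(0)       # fixed seed: deterministic objective
    W = rg.standard_normal((n, n))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    w_in = s * rg.standard_normal(n)
    X = np.zeros((len(u), n)); x = np.zeros(n)
    for t in range(len(u)):
        x = np.tanh(W @ x + w_in * u[t])
        X[t] = x
    w_out = np.linalg.lstsq(X[:300], target[:300], rcond=None)[0]
    return np.mean((X[300:-1] @ w_out - target[300:-1]) ** 2)

# simulated annealing over theta = (rho, s)
theta = np.array([0.9, 1.0])
curr = esn_error(*theta)
T = 1.0
for k in range(30):
    cand = np.clip(theta + 0.1 * rng.standard_normal(2), 0.05, 2.0)
    e = esn_error(*cand)
    if e < curr or rng.random() < np.exp((curr - e) / T):   # Metropolis rule
        theta, curr = cand, e
    T *= 0.9                            # cooling schedule
print(theta, curr)
```

A real search would use more iterations, restarts, and a washout period before fitting the readout.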
26

Gonon, Lukas, and Juan-Pablo Ortega. "Reservoir Computing Universality With Stochastic Inputs." IEEE Transactions on Neural Networks and Learning Systems 31, no. 1 (January 2020): 100–112. http://dx.doi.org/10.1109/tnnls.2019.2899649.

27

Zhong, Xijuan, and Shuai Wang. "Learning Coupled Oscillators System with Reservoir Computing." Symmetry 14, no. 6 (May 25, 2022): 1084. http://dx.doi.org/10.3390/sym14061084.

Abstract:
In this paper, we reconstruct the dynamic behavior of the ring-coupled Lorenz oscillators system by reservoir computing. Although the reconstruction of various complex chaotic attractors has been well studied by using various neural networks, little attention has been paid to whether the spatio-temporal structure of some special attractors can be maintained in long-term prediction. Reservoir computing has been shown to be effective for model-free prediction, so we want to investigate whether reservoir computing can restore the rotational symmetry of the original ring-coupled Lorenz system. We find that although the state prediction of the trained reservoir computer will gradually deviate from the actual trajectory of the original system, the associated spatio-temporal structure is maintained in the process of reconstruction. Specifically, we show that the rotational symmetric structure of periodic rotating waves, quasi-periodic torus, and chaotic rotating waves is well maintained.
28

Carbajal, Juan Pablo, Joni Dambre, Michiel Hermans, and Benjamin Schrauwen. "Memristor Models for Machine Learning." Neural Computation 27, no. 3 (March 2015): 725–47. http://dx.doi.org/10.1162/neco_a_00694.

Abstract:
In the quest for alternatives to traditional complementary metal-oxide-semiconductor, it is being suggested that digital computing efficiency and power can be improved by matching the precision to the application. Many applications do not need the high precision that is being used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations on the dynamics of memristors focus on their nonvolatile behavior. Hence, the volatility that is present in the developed technologies is usually unwanted and is not included in simulation models. In contrast, in reservoir computing, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models. The first is an extension of Strukov’s model, and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, increasingly causing problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models.
29

Andreev, Andrey V., Artem A. Badarin, Vladimir A. Maximenko, and Alexander E. Hramov. "Forecasting macroscopic dynamics in adaptive Kuramoto network using reservoir computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 10 (October 2022): 103126. http://dx.doi.org/10.1063/5.0114127.

Full text
Abstract:
Forecasting a system’s behavior is an essential task in complex systems theory. Machine learning offers supervised algorithms, e.g., recurrent neural networks and reservoir computers, that predict the behavior of model systems whose states consist of multidimensional time series. In real life, we often have limited information about the behavior of complex systems; the clearest example is the brain neural network described by the electroencephalogram. Forecasting the behavior of these systems is a more challenging task but offers potential for real-life application. Here, we trained a reservoir computer to predict the macroscopic signal produced by a network of phase oscillators. Lyapunov analysis revealed the chaotic nature of the signal, and the reservoir computer failed to forecast it. Augmenting the feature space using Takens’ theorem improved the quality of forecasting. The reservoir computer achieved the best prediction score when the number of signals coincided with the embedding dimension estimated via the false nearest neighbors method. We found that short-time prediction required a large number of features, while long-time prediction utilized a limited number of features. These results reflect the bias-variance trade-off, an important concept in machine learning.
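The feature-space augmentation via Takens' theorem described in this abstract amounts to a delay embedding of the scalar observable before it is fed to the reservoir. The function below is an illustrative reconstruction, not the authors' code; the dimension and lag values are arbitrary.

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Takens-style delay embedding: row i is
    [x[i], x[i+lag], ..., x[i+(dim-1)*lag]]."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[k * lag : k * lag + n] for k in range(dim)])

# Augment a scalar series into a 3-dimensional feature space (lag 2),
# matching the embedding dimension that a false-nearest-neighbors
# analysis would suggest for the signal at hand.
series = np.arange(10)
embedded = delay_embed(series, dim=3, lag=2)
```

In the paper's setting, `dim` would be chosen to match the embedding dimension estimated from the data, the regime in which the reservoir computer achieved its best prediction score.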
APA, Harvard, Vancouver, ISO, and other styles
30

Hermans, Michiel, and Benjamin Schrauwen. "Recurrent Kernel Machines: Computing with Infinite Echo State Networks." Neural Computation 24, no. 1 (January 2012): 104–33. http://dx.doi.org/10.1162/neco_a_00200.

Full text
Abstract:
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
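As a hedged illustration of the premise this letter builds on, that an ESN explicitly computes a broad set of nonlinear spatiotemporal features on which linear regression suffices, here is a minimal echo state network with a ridge-regression readout. All hyperparameters (reservoir size, spectral radius, ridge strength, washout) are illustrative, and the class is a sketch rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    """Minimal echo state network: fixed random recurrent weights,
    only the linear readout is trained, via ridge regression."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, ridge=1e-6):
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.normal(size=(n_res, n_res))
        # Rescale so the recurrent weights have the desired spectral radius.
        self.w = w * (spectral_radius / np.max(np.abs(np.linalg.eigvals(w))))
        self.ridge = ridge

    def _states(self, u):
        x = np.zeros(self.w.shape[0])
        states = np.empty((len(u), len(x)))
        for t, u_t in enumerate(u):
            x = np.tanh(self.w @ x + self.w_in @ u_t)
            states[t] = x
        return states

    def fit(self, u, y, washout=50):
        s, y = self._states(u)[washout:], np.asarray(y)[washout:]
        gram = s.T @ s + self.ridge * np.eye(s.shape[1])
        self.w_out = np.linalg.solve(gram, s.T @ y)
        return self

    def predict(self, u):
        return self._states(u) @ self.w_out

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t)[:, None], np.roll(np.sin(t), -1)
esn = ESN(n_in=1).fit(u[:1500], y[:1500])
mse = np.mean((esn.predict(u[1500:])[100:] - y[1500:][100:]) ** 2)
```

The letter's contribution is to take the reservoir size to infinity, replacing the explicit state computation above with a recursive kernel evaluation.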
APA, Harvard, Vancouver, ISO, and other styles
31

Yilmaz, Ozgur. "Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing." Neural Computation 27, no. 12 (December 2015): 2661–92. http://dx.doi.org/10.1162/neco_a_00787.

Full text
Abstract:
This letter introduces a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of automaton cells, and nonlinear computation is performed on the input via application of a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and it is used as the reservoir. The proposed framework is shown to be capable of long-term memory, and it requires orders of magnitude less computation compared to echo state networks. As the focus of the letter, we suggest that binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. To demonstrate the capability of the proposed system, we make analogies directly on image data by asking, What is the automobile of air?
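The reservoir construction described here can be sketched in a few lines: evolve an elementary cellular automaton from the input bits and use the flattened space-time volume as a binary feature vector, which can then be combined with Boolean operations as in hyperdimensional computing. The rule number, sizes, and XOR binding below are illustrative assumptions rather than the letter's exact configuration.

```python
import numpy as np

def ca_reservoir(bits, rule=110, steps=8):
    """Evolve an elementary cellular automaton (periodic boundary) and
    return the flattened space-time volume as a binary feature vector."""
    state = np.asarray(bits, dtype=np.uint8)
    volume = [state]
    for _ in range(steps):
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = (left << 2) | (state << 1) | right     # neighborhood code 0..7
        state = ((rule >> idx) & 1).astype(np.uint8) # look up the rule bit
        volume.append(state)
    return np.concatenate(volume)

rng = np.random.default_rng(1)
a = ca_reservoir(rng.integers(0, 2, 64))
b = ca_reservoir(rng.integers(0, 2, 64))
bound = a ^ b   # XOR binding, as in hyperdimensional computing
```

Because binding is its own inverse here, XOR-ing `bound` with `b` recovers `a`, the kind of symbolic manipulation of reservoir features the letter proposes.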
APA, Harvard, Vancouver, ISO, and other styles
32

Steiner, Peter, Azarakhsh Jalalvand, Simon Stone, and Peter Birkholz. "PyRCN: A toolbox for exploration and application of Reservoir Computing Networks." Engineering Applications of Artificial Intelligence 113 (August 2022): 104964. http://dx.doi.org/10.1016/j.engappai.2022.104964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hamedani, Kian, Lingjia Liu, Rachad Atat, Jinsong Wu, and Yang Yi. "Reservoir Computing Meets Smart Grids: Attack Detection Using Delayed Feedback Networks." IEEE Transactions on Industrial Informatics 14, no. 2 (February 2018): 734–43. http://dx.doi.org/10.1109/tii.2017.2769106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jalalvand, Azarakhsh, Kris Demuynck, Wesley De Neve, and Jean-Pierre Martens. "On the application of reservoir computing networks for noisy image recognition." Neurocomputing 277 (February 2018): 237–48. http://dx.doi.org/10.1016/j.neucom.2016.11.100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Mastoi, Qurat-ul-ain, Teh Wah, and Ram Gopal Raj. "Reservoir Computing Based Echo State Networks for Ventricular Heart Beat Classification." Applied Sciences 9, no. 4 (February 18, 2019): 702. http://dx.doi.org/10.3390/app9040702.

Full text
Abstract:
The abnormal conduction of cardiac activity in the lower chamber of the heart (ventricle) can cause cardiac diseases and sometimes leads to sudden death. In this paper, the authors propose Reservoir Computing (RC) based Echo State Networks (ESNs) for ventricular heartbeat classification based on a single Electrocardiogram (ECG) lead. The Association for the Advancement of Medical Instrumentation (AAMI) standards were used to preprocess the standardized diagnostic tool (ECG signals) based on the inter-patient scheme. Despite the extensive efforts and notable experiments on machine learning techniques for heartbeat classification, ESNs have yet to be considered for heartbeat classification as a fast, scalable, and reliable approach for real-time scenarios. Our proposed method was especially designed for Medical Internet of Things (MIoT) devices, for instance, wearable wireless devices for ECG monitoring or ventricular heartbeat detection systems. The experiments were conducted on two public datasets, namely AHA and MIT-BIH-SVDM. The performance of the proposed model was evaluated using the MIT-BIH-AR dataset and achieved remarkable results: the positive predictive value and sensitivity are 98.98% and 98.98%, respectively, for the modified lead II (MLII), and 98.96% and 97.95%, respectively, for the V1 lead. By comparison, state-of-the-art approaches, namely the patient-adaptable method, improved generalization, and the multiview learning approach, obtained 92.8%, 87.0%, and 98.0% positive predictive values, respectively. These results show that the proposed method achieves higher accuracy. We believe that the improved classification accuracy opens up the possibility of implementing this methodology in MIoT devices to improve e-health systems.
APA, Harvard, Vancouver, ISO, and other styles
36

Le, Phuong Y., Billy J. Murdoch, Anders J. Barlow, Anthony S. Holland, Dougal G. McCulloch, Chris F. McConville, and Jim G. Partridge. "Electroformed, Self‐Connected Tin Oxide Nanoparticle Networks for Electronic Reservoir Computing." Advanced Electronic Materials 6, no. 7 (May 20, 2020): 2000081. http://dx.doi.org/10.1002/aelm.202000081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hosseini, Mahshid, Nikolay Frick, Damien Guilbaud, Ming Gao, and Thomas H. LaBean. "Resistive switching of two-dimensional Ag2S nanowire networks for neuromorphic applications." Journal of Vacuum Science & Technology B 40, no. 4 (July 2022): 043201. http://dx.doi.org/10.1116/6.0001867.

Full text
Abstract:
Randomly assembled networks of nanowires (NWs) can display complex memristive behaviors and are promising candidates for use as memory and computing elements in neuromorphic applications due to device fault tolerance and ease of fabrication. This study investigated resistive switching (RS) in two-dimensional, self-assembled silver sulfide (Ag2S) NW networks first experimentally and then theoretically using a previously reported stochastic RS model. The simulated switching behavior in these networks showed good correlation with experimental results. We also demonstrated fault-tolerance of a small NW network that retained RS property despite being severely damaged. Finally, we investigated information entropy in NW networks and showed unusual dynamics during switching as a result of self-organization of the memristive elements. The results of this work provide insights toward physical implementation of randomly assembled RS NW networks for reservoir and neuromorphic computing research.
APA, Harvard, Vancouver, ISO, and other styles
38

Mujal, Pere, Johannes Nokkala, Rodrigo Martínez-Peña, Gian Luca Giorgi, Miguel C. Soriano, and Roberta Zambrini. "Analytical evidence of nonlinearity in qubits and continuous-variable quantum reservoir computing." Journal of Physics: Complexity 2, no. 4 (November 15, 2021): 045008. http://dx.doi.org/10.1088/2632-072x/ac340e.

Full text
Abstract:
The natural dynamics of complex networks can be harnessed for information processing purposes. A paradigmatic example is the artificial neural networks used for machine learning. In this context, quantum reservoir computing (QRC) constitutes a natural extension of classical recurrent neural networks, using quantum resources for temporal information processing. Here, we explore the fundamental properties of QRC systems based on qubits and continuous variables. We provide analytical results that illustrate how nonlinearity enters the input-output map in these QRC implementations. We find that input encoding through state initialization can serve to control the type of nonlinearity as well as the dependence on the history of the input sequences to be processed.
APA, Harvard, Vancouver, ISO, and other styles
39

Shirai, Shota, Susant Kumar Acharya, Saurabh Kumar Bose, Joshua Brian Mallinson, Edoardo Galli, Matthew D. Pike, Matthew D. Arnold, and Simon Anthony Brown. "Long-range temporal correlations in scale-free neuromorphic networks." Network Neuroscience 4, no. 2 (January 2020): 432–47. http://dx.doi.org/10.1162/netn_a_00128.

Full text
Abstract:
Biological neuronal networks are the computing engines of the mammalian brain. These networks exhibit structural characteristics such as hierarchical architectures, small-world attributes, and scale-free topologies, providing the basis for the emergence of rich temporal characteristics such as scale-free dynamics and long-range temporal correlations. Devices that have both the topological and the temporal features of a neuronal network would be a significant step toward constructing a neuromorphic system that can emulate the computational ability and energy efficiency of the human brain. Here we use numerical simulations to show that percolating networks of nanoparticles exhibit structural properties that are reminiscent of biological neuronal networks, and then show experimentally that stimulation of percolating networks by an external voltage stimulus produces temporal dynamics that are self-similar, follow power-law scaling, and exhibit long-range temporal correlations. These results are expected to have important implications for the development of neuromorphic devices, especially for those based on the concept of reservoir computing.
APA, Harvard, Vancouver, ISO, and other styles
40

Zhou, Zhou, Lingjia Liu, Vikram Chandrasekhar, Jianzhong Zhang, and Yang Yi. "Deep Reservoir Computing Meets 5G MIMO-OFDM Systems in Symbol Detection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 1266–73. http://dx.doi.org/10.1609/aaai.v34i01.5481.

Full text
Abstract:
Conventional reservoir computing (RC) is a shallow recurrent neural network (RNN) with fixed high-dimensional hidden dynamics and one trainable output layer. It has the attractive feature of requiring limited training, which is critical for applications where training data are extremely limited and costly to obtain. In this paper, we consider two ways to extend the shallow architecture to deep RC to improve performance without sacrificing this underlying benefit: (1) extend the output layer to a three-layer structure that promotes joint time-frequency processing of neuron states; (2) sequentially stack RCs to form a deep neural network. Using the new deep RC structure, we redesign the physical-layer receiver for multiple-input multiple-output with orthogonal frequency division multiplexing (MIMO-OFDM) signals, since MIMO-OFDM is a key enabling technology in the 5th generation (5G) cellular network. The combination of RNN dynamics and the time-frequency structure of MIMO-OFDM signals allows deep RC to handle miscellaneous interference in nonlinear MIMO-OFDM channels and achieve improved performance compared to existing techniques. Meanwhile, unlike deep feedforward neural networks, which rely on a massive amount of training, the introduced deep RC framework can provide decent generalization performance using the same number of pilots as conventional model-based methods in 5G systems. Numerical experiments show that the deep-RC-based receiver offers faster learning convergence and effectively mitigates unknown nonlinear radio frequency (RF) distortion, yielding a twenty percent gain in bit error rate (BER) over the shallow RC structure.
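The second extension, sequentially stacking fixed reservoirs into a deep feature extractor whose concatenated states feed a trainable readout, can be sketched as follows. Layer sizes, spectral radii, and the concatenation scheme are illustrative assumptions, not the authors' exact receiver architecture.

```python
import numpy as np

rng = np.random.default_rng(7)

def reservoir_layer(u, n_res=100, rho=0.8):
    """One fixed random reservoir driven by an input sequence u of shape (T, d)."""
    w_in = rng.uniform(-0.5, 0.5, (n_res, u.shape[1]))
    w = rng.normal(size=(n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))  # set spectral radius
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(w @ x + w_in @ u_t)
        states[t] = x
    return states

def deep_reservoir(u, depth=3):
    """Sequentially stacked reservoirs: each layer is driven by the previous
    layer's states; all layer states are concatenated for the trainable readout."""
    layers = [u]
    for _ in range(depth):
        layers.append(reservoir_layer(layers[-1]))
    return np.hstack(layers[1:])

u = np.sin(np.linspace(0, 8 * np.pi, 400))[:, None]
features = deep_reservoir(u)
```

Only the readout applied to `features` would be trained, which preserves the limited-training benefit the paper emphasizes.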
APA, Harvard, Vancouver, ISO, and other styles
41

Elbedwehy, Aya N., Awny M. El-Mohandes, Ahmed Elnakib, and Mohy Eldin Abou-Elsoud. "FPGA-based reservoir computing system for ECG denoising." Microprocessors and Microsystems 91 (June 2022): 104549. http://dx.doi.org/10.1016/j.micpro.2022.104549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Cucchi, Matteo, Christopher Gruener, Lautaro Petrauskas, Peter Steiner, Hsin Tseng, Axel Fischer, Bogdan Penkovsky, et al. "Reservoir computing with biocompatible organic electrochemical networks for brain-inspired biosignal classification." Science Advances 7, no. 34 (August 2021): eabh0693. http://dx.doi.org/10.1126/sciadv.abh0693.

Full text
Abstract:
Early detection of malign patterns in patients’ biological signals can save millions of lives. Despite the steady improvement of artificial intelligence-based techniques, the practical clinical application of these methods is mostly constrained to offline evaluation of the patients’ data. Previous studies have identified organic electrochemical devices as ideal candidates for biosignal monitoring. However, their use for pattern recognition in real time had never been demonstrated. Here, we produce and characterize brain-inspired networks composed of organic electrochemical transistors and use them for time-series prediction and classification tasks using the reservoir computing approach. To show their potential use for biofluid monitoring and biosignal analysis, we classify four classes of arrhythmic heartbeats with an accuracy of 88%. The results of this study introduce a previously unexplored paradigm for biocompatible computational platforms and may enable the development of ultralow-power, hardware-based artificial neural networks capable of interacting with body fluids and biological tissues.
APA, Harvard, Vancouver, ISO, and other styles
43

Athanasiou, Vasileios, and Zoran Konkoli. "On using reservoir computing for sensing applications: exploring environment-sensitive memristor networks." International Journal of Parallel, Emergent and Distributed Systems 33, no. 4 (February 25, 2017): 367–86. http://dx.doi.org/10.1080/17445760.2017.1287264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Viero, Yannick, David Guérin, Anton Vladyka, Fabien Alibart, Stéphane Lenfant, M. Calame, and Dominique Vuillaume. "Light-Stimulatable Molecules/Nanoparticles Networks for Switchable Logical Functions and Reservoir Computing." Advanced Functional Materials 28, no. 39 (August 6, 2018): 1801506. http://dx.doi.org/10.1002/adfm.201801506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Alomar, Miquel L., Vincent Canals, Nicolas Perez-Mora, Víctor Martínez-Moll, and Josep L. Rosselló. "FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting." Computational Intelligence and Neuroscience 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/3917892.

Full text
Abstract:
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
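The core probabilistic computing trick this work exploits, replacing arithmetic circuits with single logic gates operating on random bitstreams, can be illustrated in a few lines: with unipolar coding, one AND gate multiplies two probabilities. The stream length and input values below are arbitrary, and this is a software sketch of the hardware idea, not the paper's FPGA design.

```python
import numpy as np

def to_stream(p, n, rng):
    """Encode a probability p in [0, 1] as a random unipolar bitstream:
    each bit is 1 with probability p."""
    return (rng.random(n) < p).astype(np.uint8)

rng = np.random.default_rng(42)
n = 100_000
stream_a = to_stream(0.8, n, rng)
stream_b = to_stream(0.5, n, rng)
# A single AND gate per bit multiplies the two encoded probabilities,
# replacing a full digital multiplier; the mean decodes the result.
product = np.mean(stream_a & stream_b)   # estimates 0.8 * 0.5 = 0.4
```

The trade-off is the usual one in stochastic computing: the estimate's accuracy grows only with the square root of the stream length, which is why the paper targets applications tolerant of low precision.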
APA, Harvard, Vancouver, ISO, and other styles
46

Frady, E. Paxon, Denis Kleyko, and Friedrich T. Sommer. "A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks." Neural Computation 30, no. 6 (June 2018): 1449–513. http://dx.doi.org/10.1162/neco_a_01084.

Full text
Abstract:
To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
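The indexing scheme described here (randomized item codes, an orthogonal recurrent map, superposition storage, winner-take-all readout) can be sketched using a fixed permutation as the orthogonal recurrent operation. Dimensions, the codebook, and the test sequence are illustrative; this is a toy reconstruction, not the authors' analyzed models.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_symbols = 1024, 10
codebook = rng.choice([-1.0, 1.0], size=(n_symbols, dim))  # random bipolar codes
perm = rng.permutation(dim)      # a permutation matrix is orthogonal
inv_perm = np.argsort(perm)      # its inverse

def store(seq):
    """Superpose a symbol sequence into one trace; permuting the running
    trace before each addition indexes each item's position."""
    trace = np.zeros(dim)
    for s in seq:
        trace = trace[perm] + codebook[s]
    return trace

def recall(trace, steps_back):
    """Winner-take-all readout: unpermute to align the wanted position,
    then pick the best-matching codebook entry (crosstalk permitting)."""
    for _ in range(steps_back):
        trace = trace[inv_perm]
    return int(np.argmax(codebook @ trace))

seq = [3, 1, 4, 1, 5]
trace = store(seq)
decoded = [recall(trace, len(seq) - 1 - i) for i in range(len(seq))]
```

The crosstalk noise the theory quantifies is visible here: each stored item adds interference of order sqrt(dim) to every readout, so capacity is finite and retrieval degrades as the sequence grows.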
APA, Harvard, Vancouver, ISO, and other styles
47

Johnson, Chris, Andrew Philippides, and Philip Husbands. "Active Shape Discrimination with Compliant Bodies as Reservoir Computers." Artificial Life 22, no. 2 (May 2016): 241–68. http://dx.doi.org/10.1162/artl_a_00202.

Full text
Abstract:
Compliant bodies with complex dynamics can be used both to simplify control problems and to lead to adaptive reflexive behavior when engaged with the environment in the sensorimotor loop. By revisiting an experiment introduced by Beer and replacing the continuous-time recurrent neural network therein with reservoir computing networks abstracted from compliant bodies, we demonstrate that adaptive behavior can be produced by an agent in which the body is the main computational locus. We show that bodies with complex dynamics are capable of integrating, storing, and processing information in meaningful and useful ways, and furthermore that with the addition of the simplest of nervous systems such bodies can generate behavior that could equally be described as reflexive or minimally cognitive.
APA, Harvard, Vancouver, ISO, and other styles
48

Dion, Yves, and Saad Bennis. "A global modeling approach to the hydraulic performance evaluation of a sewer network." Canadian Journal of Civil Engineering 37, no. 11 (November 2010): 1432–36. http://dx.doi.org/10.1139/l10-082.

Full text
Abstract:
This paper presents an objective methodology for evaluating the hydraulic performance of sewer networks. Two methods of computing a hydraulic performance index are presented. The first uses the nonlinear reservoir model to generate a runoff hydrograph at the outlet of each drainage basin at the street-section scale; dynamic wave equations are then used to route these hydrographs through the drainage network to obtain flow depth and level within pipes and manholes. The second computes the index using a generalized rational hydrograph model applied to the entire upstream basin, aggregated as a single node; this hydraulic model uses a simple energy balance equation applied to each evaluated sewer. The case study compares these approaches by applying the methodology to small hypothetical and real networks. Slightly better results were obtained with the generalized rational method than with the nonlinear reservoir method. The generalized rational method was able to forecast measured discharges at the outlet of subcatchments with a level of accuracy acceptable for sewer network management and evaluation needs.
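Note that "reservoir" here is hydrological, not computational: the nonlinear reservoir model routes runoff through a conceptual storage with outflow Q = k·S^m. A toy Euler-integrated sketch is given below; all parameter values and units are illustrative assumptions, not the paper's calibration.

```python
def nonlinear_reservoir(inflow, dt=60.0, k=1e-5, m=1.5, s0=0.0):
    """Route an inflow hydrograph (one value per time step) through a
    nonlinear reservoir: storage S, outflow Q = k * S**m, explicit Euler."""
    s, outflow = s0, []
    for q_in in inflow:
        q_out = k * s ** m
        s = max(0.0, s + dt * (q_in - q_out))  # mass balance update
        outflow.append(q_out)
    return outflow

# A 100-step rain pulse routed over 500 one-minute steps: the outflow
# hydrograph is attenuated and delayed relative to the inflow pulse.
hydrograph = nonlinear_reservoir([1.0] * 100 + [0.0] * 400)
```

The attenuation and delay visible in `hydrograph` are what the paper's performance index evaluates against pipe capacity, here at street-section scale.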
APA, Harvard, Vancouver, ISO, and other styles
49

Di Sarli, Daniele, Claudio Gallicchio, and Alessio Micheli. "On the effectiveness of Gated Echo State Networks for data exhibiting long-term dependencies." Computer Science and Information Systems 19, no. 1 (2022): 379–96. http://dx.doi.org/10.2298/csis210218063d.

Full text
Abstract:
In the context of recurrent neural networks, gated architectures such as the GRU have contributed to the development of highly accurate machine learning models that can tackle long-term dependencies in the data. However, such networks are trained with the expensive algorithm of gradient descent with backpropagation through time. On the other hand, reservoir computing approaches such as Echo State Networks (ESNs) can produce models that are trained efficiently thanks to the use of fixed random parameters, but they are not ideal for dealing with data presenting long-term dependencies. We explore the problem of employing gated architectures in ESNs from both theoretical and empirical perspectives. We do so by deriving and evaluating a necessary condition for the non-contractivity of the state transition function, which is important for overcoming the fading-memory characterization of conventional ESNs. We find that pure reservoir computing methodologies are not sufficient for effective gating mechanisms, whereas training even only the gates is highly effective in terms of predictive accuracy.
APA, Harvard, Vancouver, ISO, and other styles
50

Sillin, Henry O., Renato Aguilera, Hsien-Hang Shieh, Audrius V. Avizienis, Masakazu Aono, Adam Z. Stieg, and James K. Gimzewski. "A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing." Nanotechnology 24, no. 38 (September 2, 2013): 384004. http://dx.doi.org/10.1088/0957-4484/24/38/384004.

Full text
APA, Harvard, Vancouver, ISO, and other styles