Journal articles on the topic "Reservoir computing networks"

To see other types of publications on this topic, follow this link: Reservoir computing networks.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.

Consult the top 50 journal articles for your research on the topic "Reservoir computing networks."

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in the citation style of your choice: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Van der Sande, Guy, Daniel Brunner, and Miguel C. Soriano. "Advances in photonic reservoir computing." Nanophotonics 6, no. 3 (May 12, 2017): 561–76. http://dx.doi.org/10.1515/nanoph-2016-0132.

Abstract:
We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
2

Ghosh, Sanjib, Kohei Nakajima, Tanjung Krisnanda, Keisuke Fujii, and Timothy C. H. Liew. "Quantum Neuromorphic Computing with Reservoir Computing Networks." Advanced Quantum Technologies 4, no. 9 (July 9, 2021): 2100053. http://dx.doi.org/10.1002/qute.202100053.
3

Röhm, André, Lina Jaurigue, and Kathy Lüdge. "Reservoir Computing Using Laser Networks." IEEE Journal of Selected Topics in Quantum Electronics 26, no. 1 (January 2020): 1–8. http://dx.doi.org/10.1109/jstqe.2019.2927578.
4

Damicelli, Fabrizio, Claus C. Hilgetag, and Alexandros Goulas. "Brain connectivity meets reservoir computing." PLOS Computational Biology 18, no. 11 (November 16, 2022): e1010639. http://dx.doi.org/10.1371/journal.pcbi.1010639.

Abstract:
The connectivity of Artificial Neural Networks (ANNs) is different from the one observed in Biological Neural Networks (BNNs). Can the wiring of actual brains help improve ANN architectures? Can we learn from ANNs about what network features support computation in the brain when solving a task? At the meso/macro-scale level of connectivity, ANN architectures are carefully engineered, and such design decisions have crucial importance in many recent performance improvements. On the other hand, BNNs exhibit complex emergent connectivity patterns at all scales. At the individual level, BNN connectivity results from brain development and plasticity processes, while at the species level, adaptive reconfigurations during evolution also play a major role in shaping connectivity. Ubiquitous features of brain connectivity have been identified in recent years, but their role in the brain’s ability to perform concrete computations remains poorly understood. Computational neuroscience studies reveal the influence of specific brain connectivity features only on abstract dynamical properties, while the implications of real brain network topologies for machine learning or cognitive tasks have barely been explored. Here we present a cross-species study with a hybrid approach integrating real brain connectomes and Bio-Echo State Networks, which we use to solve concrete memory tasks, allowing us to probe the potential computational implications of real brain connectivity patterns on task solving. We find results consistent across species and tasks, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. We also present a framework, bio2art, to map and scale up real connectomes so that they can be integrated into recurrent ANNs. This approach also allows us to show the crucial importance of the diversity of interareal connectivity patterns, stressing the role of the stochastic processes that determine neural network connectivity in general.
5

Antonik, Piotr, Serge Massar, and Guy Van Der Sande. "Photonic reservoir computing using delay dynamical systems." Photoniques, no. 104 (September 2020): 45–48. http://dx.doi.org/10.1051/photon/202010445.

Abstract:
The recent progress in artificial intelligence has spurred renewed interest in hardware implementations of neural networks. Reservoir computing is a powerful, highly versatile machine learning algorithm well suited for experimental implementations. The simplest high-performance architecture is based on delay dynamical systems. We illustrate its power through a series of photonic examples, including the first all-optical reservoir computer and reservoir computers based on lasers with delayed feedback. We also show how reservoirs can be used to emulate dynamical systems. We discuss the prospects of photonic reservoir computing.
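
To make the delay-based architecture described above concrete, the following is a minimal, highly simplified discrete-time sketch (not taken from the paper, which uses continuous-time photonic hardware): a single nonlinear node with delayed feedback is time-multiplexed into N virtual nodes via a random input mask, and a ridge-regression readout is trained on the collected states. All parameter values here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 50                       # virtual nodes per delay interval (assumed)
eta, gamma = 0.4, 0.05       # feedback strength and input scaling (assumed)

def delay_reservoir(u):
    """Caricature of a single nonlinear node with delayed feedback: each input
    sample is held for one delay period and weighted by a fixed random mask,
    one weight per virtual node."""
    mask = rng.choice([-1.0, 1.0], size=N)
    x = np.zeros(N)          # virtual-node states from the previous delay period
    states = []
    for u_t in u:
        x = np.tanh(gamma * mask * u_t + eta * x)
        states.append(x.copy())
    return np.array(states)  # shape (T, N)

# Train a linear (ridge-regression) readout to predict the next input sample
u = np.sin(0.3 * np.arange(400)) + 0.1 * rng.standard_normal(400)
X, y = delay_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
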
6

Tran, Dat, and Christof Teuscher. "Computational Capacity of Complex Memcapacitive Networks." ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–25. http://dx.doi.org/10.1145/3445795.

Abstract:
Emerging memcapacitive nanoscale devices have the potential to perform computations in new ways. In this article, we systematically study, to the best of our knowledge for the first time, the computational capacity of complex memcapacitive networks, which function as reservoirs in reservoir computing, one of the brain-inspired computing architectures. Memcapacitive networks are composed of memcapacitive devices randomly connected through nanowires. Previous studies have shown that both regular and random reservoirs provide sufficient dynamics to perform simple tasks. How do complex memcapacitive networks illustrate their computational capability, and what are the topological structures of memcapacitive networks that solve complex tasks with efficiency? Studies show that small-world power-law (SWPL) networks offer an ideal trade-off between the communication properties and the wiring cost of networks. In this study, we illustrate the computing nature of SWPL memcapacitive reservoirs by exploring two essential properties: fading memory and linear separation, through measurements of kernel quality. Compared to ideal reservoirs, nanowire memcapacitive reservoirs had a better dynamic response and improved their performance by 4.67% on three tasks: MNIST, Isolated Spoken Digits, and CIFAR-10. On the same three tasks, compared to memristive reservoirs, nanowire memcapacitive reservoirs achieved comparable performance with much less power, on average about 99×, 17×, and 277× less, respectively. Simulation results of the topological transformation of memcapacitive networks reveal that the topological structures of the memcapacitive SWPL reservoirs did not affect their performance but significantly contributed to the wiring cost and the power consumption of the systems. The minimum trade-off between the wiring cost and the power consumption occurred at different network settings of α and β: 4.5 and 0.61 for Biolek reservoirs, 2.7 and 1.0 for Mohamed reservoirs, and 3.0 and 1.0 for Najem reservoirs. The results of our research illustrate the computational capacity of complex memcapacitive networks as reservoirs in reservoir computing. Such memcapacitive networks with an SWPL topology are energy-efficient systems that are suitable for low-power applications such as mobile devices and the Internet of Things.
7

Senn, Christoph Walter, and Itsuo Kumazawa. "Abstract Reservoir Computing." AI 3, no. 1 (March 10, 2022): 194–210. http://dx.doi.org/10.3390/ai3010012.

Abstract:
Noise of any kind can be an issue when translating results from simulations to the real world. We suddenly have to deal with building tolerances, faulty sensors, or just noisy sensor readings. This is especially evident in systems with many free parameters, such as the ones used in physical reservoir computing. By abstracting away these kinds of noise sources using intervals, we derive a regularized training regime for reservoir computing using sets of possible reservoir states. Numerical simulations are used to show the effectiveness of our approach against different sources of errors that can appear in real-world scenarios and to compare it with standard approaches. Our results support the application of interval arithmetic to improve the robustness of mass-spring networks trained in simulations.
8

Hart, Joseph D., Laurent Larger, Thomas E. Murphy, and Rajarshi Roy. "Delayed dynamical systems: networks, chimeras and reservoir computing." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377, no. 2153 (July 22, 2019): 20180123. http://dx.doi.org/10.1098/rsta.2018.0123.

Abstract:
We present a systematic approach to reveal the correspondence between time delay dynamics and networks of coupled oscillators. After early demonstrations of the usefulness of spatio-temporal representations of time-delay system dynamics, extensive research on optoelectronic feedback loops has revealed their immense potential for realizing complex system dynamics such as chimeras in rings of coupled oscillators and applications to reservoir computing. Delayed dynamical systems have been enriched in recent years through the application of digital signal processing techniques. Very recently, we have shown that one can significantly extend the capabilities and implement networks with arbitrary topologies through the use of field programmable gate arrays. This architecture allows the design of appropriate filters and multiple time delays, and greatly extends the possibilities for exploring synchronization patterns in arbitrary network topologies. This has enabled us to explore complex dynamics on networks with nodes that can be perfectly identical, introduce parameter heterogeneities and multiple time delays, as well as change network topologies to control the formation and evolution of patterns of synchrony. This article is part of the theme issue ‘Nonlinear dynamics of delay systems’.
9

S Pathak, Shantanu, and D. Rajeswara Rao. "Reservoir Computing for Healthcare Analytics." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 240. http://dx.doi.org/10.14419/ijet.v7i2.32.15576.

Abstract:
In this data age, tools for sophisticated generation and handling of data are in widespread use. Data that vary in both space and time pose a particular set of challenges. The forecasting challenges they present can be handled well by reservoir-computing-based neural networks. Challenges such as class imbalance, missing values, and locality effects are discussed here, along with popular statistical techniques for forecasting such data. Results show how the reservoir-computing-based technique outperforms traditional neural networks.
10

Andrecut, M. "Reservoir computing on the hypersphere." International Journal of Modern Physics C 28, no. 07 (July 2017): 1750095. http://dx.doi.org/10.1142/s0129183117500954.

Abstract:
Reservoir Computing (RC) refers to a Recurrent Neural Network (RNN) framework, frequently used for sequence learning and time series prediction. The RC system consists of a random fixed-weight RNN (the input-hidden reservoir layer) and a classifier (the hidden-output readout layer). Here, we focus on the sequence learning problem, and we explore a different approach to RC. More specifically, we remove the nonlinear neural activation function, and we consider an orthogonal reservoir acting on normalized states on the unit hypersphere. Surprisingly, our numerical results show that the system’s memory capacity exceeds the dimensionality of the reservoir, which is the upper bound for the typical RC approach based on Echo State Networks (ESNs). We also show how the proposed system can be applied to symmetric cryptography problems, and we include a numerical implementation.
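
The architecture described in this abstract (a linear update with an orthogonal reservoir matrix followed by normalization onto the unit hypersphere) can be illustrated with a short sketch. The sizes, scalings, and the delay-recall task below are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(1)
N = 100                                   # reservoir dimension (assumed)

# Random orthogonal reservoir matrix obtained via QR decomposition
W, _ = np.linalg.qr(rng.standard_normal((N, N)))
w_in = rng.standard_normal(N)             # input weights

def run(u):
    x = np.ones(N) / np.sqrt(N)           # start on the unit hypersphere
    states = []
    for u_t in u:
        x = W @ x + w_in * u_t            # linear update, no activation function
        x = x / np.linalg.norm(x)         # project back onto the unit hypersphere
        states.append(x.copy())
    return np.array(states)

# Delay-recall task: a linear readout reproduces the input from k steps in the past
u = rng.uniform(-1, 1, 1000)
X = run(u)
k = 20
X_k, y_k = X[k:], u[:-k]
W_out = np.linalg.solve(X_k.T @ X_k + 1e-8 * np.eye(N), X_k.T @ y_k)
print("recall correlation at delay", k, ":", np.corrcoef(X_k @ W_out, y_k)[0, 1])
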
11

Akai-Kasaya, Megumi. "(Invited) Neuromorphic Devices and Systems Using Carbon Nanotubes." ECS Meeting Abstracts MA2022-01, no. 10 (July 7, 2022): 778. http://dx.doi.org/10.1149/ma2022-0110778mtgabs.

Abstract:
Molecular neuromorphic devices composed of single-walled carbon nanotubes (SWNTs) complexed with polyoxometalate (POM) exhibit various properties very similar to the functions of neurons. A random and extremely dense SWNT/POM network is expected to have a rudimentary capacity for reservoir computing (RC), a recently introduced framework derived from recurrent neural networks. We performed RC using multiple signals collected from a SWNT/POM random network. The results are expected to contribute to the design and fabrication of SWNT/molecular networks with high reservoir functionalities for future development. A physical reservoir consisting of nanomaterials may become a computing system with low cost, low power consumption, and highly integrated hardware devices for increasingly important edge computing.
12

Thivierge, Jean-Philippe, Eloïse Giraud, Michael Lynn, and Annie Théberge Charbonneau. "Key role of neuronal diversity in structured reservoir computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 11 (November 2022): 113130. http://dx.doi.org/10.1063/5.0111131.

Abstract:
Chaotic time series have been captured by reservoir computing models composed of a recurrent neural network whose output weights are trained in a supervised manner. These models, however, are typically limited to randomly connected networks of homogeneous units. Here, we propose a new class of structured reservoir models that incorporates a diversity of cell types and their known connections. In a first version of the model, the reservoir was composed of mean-rate units separated into pyramidal, parvalbumin, and somatostatin cells. Stability analysis of this model revealed two distinct dynamical regimes, namely, (i) an inhibition-stabilized network (ISN) where strong recurrent excitation is balanced by strong inhibition and (ii) a non-ISN network with weak excitation. These results were extended to a leaky integrate-and-fire model that captured different cell types along with their network architecture. ISN and non-ISN reservoir networks were trained to relay and generate a chaotic Lorenz attractor. Despite their increased performance, ISN networks operate in a regime of activity near the limits of stability where external perturbations yield a rapid divergence in output. The proposed framework of structured reservoir computing opens avenues for exploring how neural microcircuits can balance performance and stability when representing time series through distinct dynamical regimes.
13

Yamaguti, Yutaka, and Ichiro Tsuda. "Functional differentiations in evolutionary reservoir computing networks." Chaos: An Interdisciplinary Journal of Nonlinear Science 31, no. 1 (January 2021): 013137. http://dx.doi.org/10.1063/5.0019116.
14

de Vos, N. J. "Reservoir computing as an alternative to traditional artificial neural networks in rainfall-runoff modelling." Hydrology and Earth System Sciences Discussions 9, no. 5 (May 11, 2012): 6101–34. http://dx.doi.org/10.5194/hessd-9-6101-2012.

Abstract:
Despite theoretical benefits of recurrent artificial neural networks over their feedforward counterparts, it is still unclear whether the former offer practical advantages as rainfall-runoff models. The main drawback of recurrent networks is the increased complexity of the training procedure due to their architecture. This work uses recently introduced, conceptually simple reservoir computing models for one-day-ahead forecasts on twelve river basins in the Eastern United States, and compares them to a variety of traditional feedforward and recurrent models. Two modifications to the reservoir computing models are made to increase the hydrologically relevant information content of their internal state. The results show that the reservoir computing networks outperform feedforward networks and are competitive with state-of-the-art recurrent networks, across a range of performance measures. This, along with their simplicity and ease of training, suggests that reservoir computing models can be considered promising alternatives to traditional artificial neural networks in rainfall-runoff modelling.
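
For readers unfamiliar with the reservoir computing models referred to here, the following is a minimal echo state network sketch with a ridge-regression readout. It is a generic textbook construction; the reservoir size, leak rate, spectral radius, and the toy task are assumptions, not the configuration used in the study.

import numpy as np

rng = np.random.default_rng(42)
N, leak, rho = 200, 0.3, 0.9                # reservoir size, leak rate, spectral radius (assumed)

W = rng.standard_normal((N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))   # rescale to the desired spectral radius
W_in = rng.uniform(-0.5, 0.5, N)

def esn_states(u):
    x, states = np.zeros(N), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_t)   # leaky-integrator update
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction with a ridge-regression readout
u = np.sin(0.2 * np.arange(2000))
X, y = esn_states(u[:-1]), u[1:]
X, y = X[100:], y[100:]                     # discard the initial transient (washout)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("one-step MSE:", np.mean((X @ W_out - y) ** 2))
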
15

Ferreira, Tiago D., Nuno A. Silva, Duarte Silva, Carla C. Rosa, and Ariel Guerreiro. "Reservoir computing with nonlinear optical media." Journal of Physics: Conference Series 2407, no. 1 (December 1, 2022): 012019. http://dx.doi.org/10.1088/1742-6596/2407/1/012019.

Abstract:
Reservoir computing is a versatile approach for physically implementing recurrent neural networks. It takes advantage of a reservoir, consisting of a set of interconnected neurons with temporal dynamics, whose weights and biases are fixed and do not need to be optimized. Instead, the training takes place only at the output layer towards a specific task. One important requirement for these systems to work is nonlinearity, which in optical setups is usually obtained via the saturation of the detection device. In this work, we explore a distinct approach using a photorefractive crystal as the source of the nonlinearity in the reservoir. Furthermore, by leveraging the time response of the photorefractive media, one can also obtain the temporal interaction required by such an architecture. If we space out in time the propagation of different states, the temporal interaction is lost, and the system can work as an extreme learning machine. This corresponds to a physical implementation of a Feed-Forward Neural Network with a single hidden layer and fixed random weights and biases. Some preliminary results are presented and discussed.
16

Obst, Oliver, Adrian Trinchi, Simon G. Hardin, Matthew Chadwick, Ivan Cole, Tim H. Muster, Nigel Hoschke et al. "Nano-scale reservoir computing." Nano Communication Networks 4, no. 4 (December 2013): 189–96. http://dx.doi.org/10.1016/j.nancom.2013.08.005.
17

Hanias, Michael P., Spyros P. Georgopoulos, Stavros G. Stavrinides, Panagiotis Tziatzios, and Ioannis P. Antoniades. "Reservoir computing vs. neural networks in financial forecasting." International Journal of Computational Economics and Econometrics 1, no. 1 (2021): 1. http://dx.doi.org/10.1504/ijcee.2021.10041721.
18

Gallicchio, Claudio, and Alessio Micheli. "Echo State Property of Deep Reservoir Computing Networks." Cognitive Computation 9, no. 3 (May 5, 2017): 337–50. http://dx.doi.org/10.1007/s12559-017-9461-9.
19

Georgopoulos, Spyros P., Panagiotis Tziatzios, Stavros G. Stavrinides, Ioannis P. Antoniades, and Michael P. Hanias. "Reservoir computing vs. neural networks in financial forecasting." International Journal of Computational Economics and Econometrics 13, no. 1 (2023): 1. http://dx.doi.org/10.1504/ijcee.2023.127283.
20

Röhm, André, and Kathy Lüdge. "Multiplexed networks: reservoir computing with virtual and real nodes." Journal of Physics Communications 2, no. 8 (August 3, 2018): 085007. http://dx.doi.org/10.1088/2399-6528/aad56d.
21

OZTURK, MUSTAFA C., and JOSE C. PRINCIPE. "FREEMAN'S K MODELS AS RESERVOIR COMPUTING ARCHITECTURES." New Mathematics and Natural Computation 05, no. 01 (March 2009): 265–86. http://dx.doi.org/10.1142/s179300570900126x.

Abstract:
Walter Freeman, in his classic 1975 book "Mass Action in the Nervous System", presented a hierarchy of dynamical computational models based on studies and measurements done in real brains, which has become known as Freeman's K model (FKM). Much more recently, the liquid state machine (LSM) and the echo state network (ESN) have been proposed as universal approximators in the class of functionals with exponentially decaying memory. In this paper, we briefly review these models and show that the restricted K set architecture of KI and KII networks shares the same properties as LSMs/ESNs and is therefore one more member of the reservoir computing family. In the reservoir computing perspective, the states of the FKM are a representation space that stores in its spatio-temporal dynamics a short-term history of the input patterns. Then at any time, with a simple instantaneous read-out made up of a KI, information related to the input history can be accessed and read out. This work provides two important contributions. First, it emphasizes the need for optimal readouts, and shows how to adaptively design them. Second, it shows that the Freeman model is able to process continuous signals with temporal structure. We provide theoretical results for the conditions on the system parameters of the FKM satisfying the echo state property. Experimental results are presented to illustrate the validity of the proposed approach.
22

Büsing, Lars, Benjamin Schrauwen, and Robert Legenstein. "Connectivity, Dynamics, and Memory in Reservoir Computing with Binary and Analog Neurons." Neural Computation 22, no. 5 (May 2010): 1272–311. http://dx.doi.org/10.1162/neco.2009.01-09-947.

Abstract:
Reservoir computing (RC) systems are powerful models for online computations on input sequences. They consist of a memoryless readout neuron that is trained on top of a randomly connected recurrent neural network. RC systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuits. Previous work indicated a fundamental difference in the behavior of these two implementations of the RC idea. The performance of an RC system built from binary neurons seems to depend strongly on the network connectivity structure. In networks of analog neurons, such clear dependency has not been observed. In this letter, we address this apparent dichotomy by investigating the influence of the network connectivity (parameterized by the neuron in-degree) on a family of network models that interpolates between analog and binary networks. Our analyses are based on a novel estimation of the Lyapunov exponent of the network dynamics with the help of branching process theory, rank measures that estimate the kernel quality and generalization capabilities of recurrent networks, and a novel mean field predictor for computational performance. These analyses reveal that the phase transition between ordered and chaotic network behavior of binary circuits qualitatively differs from the one in analog circuits, leading to differences in the integration of information over short and long timescales. This explains the decreased computational performance observed in binary circuits that are densely connected. The mean field predictor is also used to bound the memory function of recurrent circuits of binary neurons.
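
The rank-based "kernel quality" measure mentioned in this abstract can be illustrated with a generic echo state reservoir (this sketch does not reproduce the binary/analog neuron family analyzed in the paper, and all sizes are assumed): the reservoir is driven by m distinct input streams, and the rank of the matrix of final states indicates how well the reservoir separates different inputs.

import numpy as np

rng = np.random.default_rng(3)
N, m, T = 100, 80, 200                 # reservoir size, number of input streams, stream length

W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale to spectral radius 0.9 (assumed)
W_in = rng.uniform(-1, 1, N)

def final_state(u):
    x = np.zeros(N)
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
    return x

# Kernel quality: rank of the matrix of final states reached from m distinct input streams.
# The closer the rank is to min(N, m), the richer the reservoir's separation of inputs.
U = rng.uniform(-1, 1, (m, T))
S = np.array([final_state(u) for u in U])   # shape (m, N)
print("kernel-quality rank:", np.linalg.matrix_rank(S, tol=1e-6), "of at most", min(m, N))
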
23

Allwood, Dan A., Matthew O. A. Ellis, David Griffin, Thomas J. Hayward, Luca Manneschi, Mohammad F. KH Musameh, Simon O'Keefe et al. "A perspective on physical reservoir computing with nanomagnetic devices." Applied Physics Letters 122, no. 4 (January 23, 2023): 040501. http://dx.doi.org/10.1063/5.0119040.

Abstract:
Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here, we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices since they can provide the properties of non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
24

Basterrech, Sebastián, and Gerardo Rubino. "ECHO STATE QUEUEING NETWORKS: A COMBINATION OF RESERVOIR COMPUTING AND RANDOM NEURAL NETWORKS." Probability in the Engineering and Informational Sciences 31, no. 4 (May 17, 2017): 457–76. http://dx.doi.org/10.1017/s0269964817000110.

Abstract:
This paper deals with two ideas that appeared during recent developments in Artificial Intelligence: Reservoir Computing (RC) and Random Neural Networks. Both have been very successful in many applications. We propose a new model belonging to the first class that takes the structure of the second for its dynamics. The new model is called the Echo State Queueing Network. The paper positions the model in the broader Machine Learning area and provides examples of its use and performance. We show on widely used benchmarks that it is a very accurate tool, and we illustrate how it compares with standard RC models.
25

Ren, Bin, and Huanfei Ma. "Global optimization of hyper-parameters in reservoir computing." Electronic Research Archive 30, no. 7 (2022): 2719–29. http://dx.doi.org/10.3934/era.2022139.

Abstract:
Reservoir computing (RC) has emerged as a powerful and efficient machine learning tool, especially for the reconstruction of many complex systems, even chaotic systems, based only on observational data. Though fruitful advances have been extensively studied, how to capture the art of hyper-parameter settings to construct an efficient RC is still a long-standing and urgent problem. In contrast to the local manner of many works which aim to optimize one hyper-parameter while keeping others constant, in this work, we propose a global optimization framework using the simulated annealing technique to find the optimal architecture of the randomly generated networks for a successful RC. Based on the optimized results, we further study several important properties of some hyper-parameters. Particularly, we find that the globally optimized reservoir network has a largest singular value significantly larger than one, which is contrary to the sufficient condition reported in the literature to guarantee the echo state property. We further reveal the mechanism of this phenomenon with a simplified model and the theory of nonlinear dynamical systems.
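
As a toy illustration of the global-search idea, here is a simulated annealing loop over two reservoir hyper-parameters. The objective below is a synthetic placeholder standing in for "build a reservoir with these hyper-parameters, train the readout, and return validation error"; the parameter ranges, move size, and cooling schedule are assumptions, not those used in the paper.

import numpy as np

rng = np.random.default_rng(7)

def rc_validation_error(spectral_radius, input_scaling):
    """Placeholder objective: in a real run this would build an ESN with the
    given hyper-parameters, train its readout, and return validation error.
    A synthetic surface with a noisy minimum stands in for that evaluation."""
    return ((spectral_radius - 1.2) ** 2
            + 0.5 * (input_scaling - 0.3) ** 2
            + 0.01 * rng.standard_normal())

# Simulated annealing over (spectral radius, input scaling)
params = np.array([0.5, 1.0])
best, best_err = params.copy(), rc_validation_error(*params)
err, temp = best_err, 1.0
for step in range(2000):
    cand = params + 0.1 * rng.standard_normal(2)        # random local move
    cand_err = rc_validation_error(*cand)
    # Metropolis acceptance: always take improvements, sometimes take worse moves
    if cand_err < err or rng.random() < np.exp(-(cand_err - err) / temp):
        params, err = cand, cand_err
        if err < best_err:
            best, best_err = params.copy(), err
    temp *= 0.995                                        # geometric cooling
print("best hyper-parameters found:", best, "error:", best_err)
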
26

Gonon, Lukas, and Juan-Pablo Ortega. "Reservoir Computing Universality With Stochastic Inputs." IEEE Transactions on Neural Networks and Learning Systems 31, no. 1 (January 2020): 100–112. http://dx.doi.org/10.1109/tnnls.2019.2899649.
27

Zhong, Xijuan, and Shuai Wang. "Learning Coupled Oscillators System with Reservoir Computing." Symmetry 14, no. 6 (May 25, 2022): 1084. http://dx.doi.org/10.3390/sym14061084.

Abstract:
In this paper, we reconstruct the dynamic behavior of the ring-coupled Lorenz oscillators system by reservoir computing. Although the reconstruction of various complex chaotic attractors has been well studied by using various neural networks, little attention has been paid to whether the spatio-temporal structure of some special attractors can be maintained in long-term prediction. Reservoir computing has been shown to be effective for model-free prediction, so we want to investigate whether reservoir computing can restore the rotational symmetry of the original ring-coupled Lorenz system. We find that although the state prediction of the trained reservoir computer will gradually deviate from the actual trajectory of the original system, the associated spatio-temporal structure is maintained in the process of reconstruction. Specifically, we show that the rotational symmetric structure of periodic rotating waves, quasi-periodic torus, and chaotic rotating waves is well maintained.
28

Carbajal, Juan Pablo, Joni Dambre, Michiel Hermans, and Benjamin Schrauwen. "Memristor Models for Machine Learning." Neural Computation 27, no. 3 (March 2015): 725–47. http://dx.doi.org/10.1162/neco_a_00694.

Abstract:
In the quest for alternatives to traditional complementary metal-oxide-semiconductor, it is being suggested that digital computing efficiency and power can be improved by matching the precision to the application. Many applications do not need the high precision that is being used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations on the dynamics of memristors focus on their nonvolatile behavior. Hence, the volatility that is present in the developed technologies is usually unwanted and is not included in simulation models. In contrast, in reservoir computing, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models. The first is an extension of Strukov’s model, and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, increasingly causing problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models.
29

Andreev, Andrey V., Artem A. Badarin, Vladimir A. Maximenko, and Alexander E. Hramov. "Forecasting macroscopic dynamics in adaptive Kuramoto network using reservoir computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 10 (October 2022): 103126. http://dx.doi.org/10.1063/5.0114127.

Abstract:
Forecasting a system’s behavior is an essential task in complex systems theory. Machine learning offers supervised algorithms, e.g., recurrent neural networks and reservoir computers, that predict the behavior of model systems whose states consist of multidimensional time series. In real life, we often have limited information about the behavior of complex systems. A striking example is the brain neural network described by the electroencephalogram. Forecasting the behavior of these systems is a more challenging task but provides a potential for real-life application. Here, we trained a reservoir computer to predict the macroscopic signal produced by the network of phase oscillators. The Lyapunov analysis revealed the chaotic nature of the signal, and the reservoir computer failed to forecast it. Augmenting the feature space using Takens’ theorem improved the quality of forecasting. The reservoir computer achieved the best prediction score when the number of signals coincided with the embedding dimension estimated via the false nearest neighbors method. We found that short-time prediction required a large number of features, while long-time prediction utilized a limited number of features. These results relate to the bias-variance trade-off, an important concept in machine learning.
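
The feature-space augmentation via Takens' theorem mentioned above amounts to delay-embedding the scalar signal before feeding it to the reservoir. A minimal sketch follows; the embedding dimension and lag are arbitrary choices, not values from the paper.

import numpy as np

def delay_embed(signal, dim, lag):
    """Return the delay-embedded trajectory: row t is
    [s(t), s(t - lag), ..., s(t - (dim - 1) * lag)]."""
    T = len(signal) - (dim - 1) * lag
    return np.column_stack([signal[(dim - 1 - i) * lag : (dim - 1 - i) * lag + T]
                            for i in range(dim)])

# Example: embed a scalar macroscopic signal before feeding it to a reservoir computer
s = np.sin(0.1 * np.arange(500)) * np.cos(0.013 * np.arange(500))
X = delay_embed(s, dim=5, lag=3)
print(X.shape)        # (488, 5): five delayed copies of the signal per time step
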
30

Hermans, Michiel, and Benjamin Schrauwen. "Recurrent Kernel Machines: Computing with Infinite Echo State Networks." Neural Computation 24, no. 1 (January 2012): 104–33. http://dx.doi.org/10.1162/neco_a_00200.

Abstract:
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
31

Yilmaz, Ozgur. "Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing." Neural Computation 27, no. 12 (December 2015): 2661–92. http://dx.doi.org/10.1162/neco_a_00787.

Abstract:
This letter introduces a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of automaton cells, and nonlinear computation is performed on the input via application of a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and it is used as the reservoir. The proposed framework is shown to be capable of long-term memory, and it requires orders of magnitude less computation compared to echo state networks. As the focus of the letter, we suggest that binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. To demonstrate the capability of the proposed system, we make analogies directly on image data by asking, What is the automobile of air?
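
A toy sketch of the cellular-automaton reservoir idea described above: a binary input is randomly projected onto the initial lattice state, an elementary CA rule is applied for a number of steps, and the flattened space-time volume serves as the feature vector for a linear readout. The lattice size, number of steps, and the choice of rule 110 are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(5)
CELLS, STEPS, RULE = 128, 16, 110                  # lattice size, evolution steps, ECA rule (assumed)

rule_bits = np.array([(RULE >> i) & 1 for i in range(8)], dtype=np.uint8)
proj = rng.permutation(CELLS)                      # random projection of input bits onto cells

def ca_features(input_bits):
    """Project binary input onto the initial CA state, evolve it, and return
    the flattened space-time volume as a feature vector."""
    state = np.zeros(CELLS, dtype=np.uint8)
    state[proj[:len(input_bits)]] = input_bits     # random input mapping
    volume = [state.copy()]
    for _ in range(STEPS):
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = 4 * left + 2 * state + right         # neighbourhood index 0..7
        state = rule_bits[idx]                     # apply the elementary CA rule
        volume.append(state.copy())
    return np.concatenate(volume)                  # binary feature vector

x = rng.integers(0, 2, 32, dtype=np.uint8)
print(ca_features(x).shape)                        # ((STEPS + 1) * CELLS,) = (2176,)
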
32

Steiner, Peter, Azarakhsh Jalalvand, Simon Stone, and Peter Birkholz. "PyRCN: A toolbox for exploration and application of Reservoir Computing Networks." Engineering Applications of Artificial Intelligence 113 (August 2022): 104964. http://dx.doi.org/10.1016/j.engappai.2022.104964.
33

Hamedani, Kian, Lingjia Liu, Rachad Atat, Jinsong Wu, and Yang Yi. "Reservoir Computing Meets Smart Grids: Attack Detection Using Delayed Feedback Networks." IEEE Transactions on Industrial Informatics 14, no. 2 (February 2018): 734–43. http://dx.doi.org/10.1109/tii.2017.2769106.
34

Jalalvand, Azarakhsh, Kris Demuynck, Wesley De Neve, and Jean-Pierre Martens. "On the application of reservoir computing networks for noisy image recognition." Neurocomputing 277 (February 2018): 237–48. http://dx.doi.org/10.1016/j.neucom.2016.11.100.
35

Mastoi, Qurat-ul-ain, Teh Wah, and Ram Gopal Raj. "Reservoir Computing Based Echo State Networks for Ventricular Heart Beat Classification." Applied Sciences 9, no. 4 (February 18, 2019): 702. http://dx.doi.org/10.3390/app9040702.

Abstract:
The abnormal conduction of cardiac activity in the lower chamber of the heart (ventricular) can cause cardiac diseases and sometimes leads to sudden death. In this paper, the authors propose Reservoir Computing (RC) based Echo State Networks (ESNs) for ventricular heartbeat classification based on a single Electrocardiogram (ECG) lead. The Association for the Advancement of Medical Instrumentation (AAMI) standards were used to preprocess the standardized diagnostic tool (ECG signals) based on the interpatient scheme. Despite the extensive efforts and notable experiments that have been done on machine learning techniques for heartbeat classification, ESNs are yet to be considered for heartbeat classification as a fast, scalable, and reliable approach for real-time scenarios. Our proposed method was especially designed for Medical Internet of Things (MIoT) devices, for instance, wearable wireless devices for ECG monitoring or ventricular heartbeat detection systems. The experiments were conducted on two public datasets, namely AHA and MIT-BIH-SVDM. The performance of the proposed model was evaluated using the MIT-BIH-AR dataset and it achieved remarkable results. The positive predictive value and sensitivity are 98.98% and 98.98%, respectively, for the modified lead II (MLII), and 98.96% and 97.95%, respectively, for the V1 lead. By comparison, the state-of-the-art approaches, namely the patient-adaptable method, improved generalization, and the multiview learning approach, obtained 92.8%, 87.0%, and 98.0% positive predictive values, respectively. These results from existing studies show that the proposed method achieves higher accuracy. We believe the improved classification accuracy opens up the possibility of implementing this methodology in MIoT devices in order to bring improvements to e-health systems.
36

Le, Phuong Y., Billy J. Murdoch, Anders J. Barlow, Anthony S. Holland, Dougal G. McCulloch, Chris F. McConville, and Jim G. Partridge. "Electroformed, Self-Connected Tin Oxide Nanoparticle Networks for Electronic Reservoir Computing." Advanced Electronic Materials 6, no. 7 (May 20, 2020): 2000081. http://dx.doi.org/10.1002/aelm.202000081.
37

Hosseini, Mahshid, Nikolay Frick, Damien Guilbaud, Ming Gao, and Thomas H. LaBean. "Resistive switching of two-dimensional Ag2S nanowire networks for neuromorphic applications." Journal of Vacuum Science & Technology B 40, no. 4 (July 2022): 043201. http://dx.doi.org/10.1116/6.0001867.

Abstract:
Randomly assembled networks of nanowires (NWs) can display complex memristive behaviors and are promising candidates for use as memory and computing elements in neuromorphic applications due to device fault tolerance and ease of fabrication. This study investigated resistive switching (RS) in two-dimensional, self-assembled silver sulfide (Ag2S) NW networks first experimentally and then theoretically using a previously reported stochastic RS model. The simulated switching behavior in these networks showed good correlation with experimental results. We also demonstrated fault-tolerance of a small NW network that retained RS property despite being severely damaged. Finally, we investigated information entropy in NW networks and showed unusual dynamics during switching as a result of self-organization of the memristive elements. The results of this work provide insights toward physical implementation of randomly assembled RS NW networks for reservoir and neuromorphic computing research.
38

Mujal, Pere, Johannes Nokkala, Rodrigo Martínez-Peña, Gian Luca Giorgi, Miguel C. Soriano, and Roberta Zambrini. "Analytical evidence of nonlinearity in qubits and continuous-variable quantum reservoir computing." Journal of Physics: Complexity 2, no. 4 (November 15, 2021): 045008. http://dx.doi.org/10.1088/2632-072x/ac340e.

Abstract:
The natural dynamics of complex networks can be harnessed for information processing purposes. A paradigmatic example is provided by artificial neural networks used for machine learning. In this context, quantum reservoir computing (QRC) constitutes a natural extension of the use of classical recurrent neural networks using quantum resources for temporal information processing. Here, we explore the fundamental properties of QRC systems based on qubits and continuous variables. We provide analytical results that illustrate how nonlinearity enters the input–output map in these QRC implementations. We find that the input encoding through state initialization can serve to control the type of nonlinearity as well as the dependence on the history of the input sequences to be processed.
39

Shirai, Shota, Susant Kumar Acharya, Saurabh Kumar Bose, Joshua Brian Mallinson, Edoardo Galli, Matthew D. Pike, Matthew D. Arnold, and Simon Anthony Brown. "Long-range temporal correlations in scale-free neuromorphic networks." Network Neuroscience 4, no. 2 (January 2020): 432–47. http://dx.doi.org/10.1162/netn_a_00128.

Abstract:
Biological neuronal networks are the computing engines of the mammalian brain. These networks exhibit structural characteristics such as hierarchical architectures, small-world attributes, and scale-free topologies, providing the basis for the emergence of rich temporal characteristics such as scale-free dynamics and long-range temporal correlations. Devices that have both the topological and the temporal features of a neuronal network would be a significant step toward constructing a neuromorphic system that can emulate the computational ability and energy efficiency of the human brain. Here we use numerical simulations to show that percolating networks of nanoparticles exhibit structural properties that are reminiscent of biological neuronal networks, and then show experimentally that stimulation of percolating networks by an external voltage stimulus produces temporal dynamics that are self-similar, follow power-law scaling, and exhibit long-range temporal correlations. These results are expected to have important implications for the development of neuromorphic devices, especially for those based on the concept of reservoir computing.
40

Zhou, Zhou, Lingjia Liu, Vikram Chandrasekhar, Jianzhong Zhang, and Yang Yi. "Deep Reservoir Computing Meets 5G MIMO-OFDM Systems in Symbol Detection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 1266–73. http://dx.doi.org/10.1609/aaai.v34i01.5481.

Abstract:
Conventional reservoir computing (RC) is a shallow recurrent neural network (RNN) with fixed high dimensional hidden dynamics and one trainable output layer. It has the nice feature of requiring limited training which is critical for certain applications where training data is extremely limited and costly to obtain. In this paper, we consider two ways to extend the shallow architecture to deep RC to improve the performance without sacrificing the underlying benefit: (1) Extend the output layer to a three layer structure which promotes a joint time-frequency processing to neuron states; (2) Sequentially stack RCs to form a deep neural network. Using the new structure of the deep RC we redesign the physical layer receiver for multiple-input multiple-output with orthogonal frequency division multiplexing (MIMO-OFDM) signals since MIMO-OFDM is a key enabling technology in the 5th generation (5G) cellular network. The combination of RNN dynamics and the time-frequency structure of MIMO-OFDM signals allows deep RC to handle miscellaneous interference in nonlinear MIMO-OFDM channels to achieve improved performance compared to existing techniques. Meanwhile, rather than deep feedforward neural networks which rely on a massive amount of training, our introduced deep RC framework can provide a decent generalization performance using the same amount of pilots as conventional model-based methods in 5G systems. Numerical experiments show that the deep RC based receiver can offer a faster learning convergence and effectively mitigate unknown non-linear radio frequency (RF) distortion yielding twenty percent gain in terms of bit error rate (BER) over the shallow RC structure.
41

Elbedwehy, Aya N., Awny M. El-Mohandes, Ahmed Elnakib, and Mohy Eldin Abou-Elsoud. "FPGA-based reservoir computing system for ECG denoising." Microprocessors and Microsystems 91 (June 2022): 104549. http://dx.doi.org/10.1016/j.micpro.2022.104549.
42

Cucchi, Matteo, Christopher Gruener, Lautaro Petrauskas, Peter Steiner, Hsin Tseng, Axel Fischer, Bogdan Penkovsky et al. "Reservoir computing with biocompatible organic electrochemical networks for brain-inspired biosignal classification." Science Advances 7, no. 34 (August 2021): eabh0693. http://dx.doi.org/10.1126/sciadv.abh0693.

Abstract:
Early detection of malign patterns in patients’ biological signals can save millions of lives. Despite the steady improvement of artificial intelligence–based techniques, the practical clinical application of these methods is mostly constrained to an offline evaluation of the patients’ data. Previous studies have identified organic electrochemical devices as ideal candidates for biosignal monitoring. However, their use for pattern recognition in real time was never demonstrated. Here, we produce and characterize brain-inspired networks composed of organic electrochemical transistors and use them for time-series predictions and classification tasks using the reservoir computing approach. To show their potential use for biofluid monitoring and biosignal analysis, we classify four classes of arrhythmic heartbeats with an accuracy of 88%. The results of this study introduce a previously unexplored paradigm for biocompatible computational platforms and may enable development of ultralow–power consumption hardware-based artificial neural networks capable of interacting with body fluids and biological tissues.
43

Athanasiou, Vasileios, and Zoran Konkoli. "On using reservoir computing for sensing applications: exploring environment-sensitive memristor networks." International Journal of Parallel, Emergent and Distributed Systems 33, no. 4 (February 25, 2017): 367–86. http://dx.doi.org/10.1080/17445760.2017.1287264.
44

Viero, Yannick, David Guérin, Anton Vladyka, Fabien Alibart, Stéphane Lenfant, M. Calame, and Dominique Vuillaume. "Light-Stimulatable Molecules/Nanoparticles Networks for Switchable Logical Functions and Reservoir Computing." Advanced Functional Materials 28, no. 39 (August 6, 2018): 1801506. http://dx.doi.org/10.1002/adfm.201801506.
45

Alomar, Miquel L., Vincent Canals, Nicolas Perez-Mora, Víctor Martínez-Moll, and Josep L. Rosselló. "FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting." Computational Intelligence and Neuroscience 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/3917892.

Abstract:
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
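
The probabilistic-computing trick referred to in this abstract can be illustrated in a few lines: values are encoded as random bitstreams whose density of ones equals the value, so that a single AND gate multiplies two independent streams. This is a generic stochastic-computing sketch, not the paper's FPGA design; the stream length is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(9)
L = 100_000                                   # bitstream length (longer streams give higher precision)

def to_stream(p):
    """Encode a value in [0, 1] as a Bernoulli bitstream with that probability of ones."""
    return rng.random(L) < p

a, b = 0.7, 0.4
# In stochastic computing, a single AND gate multiplies two independent streams:
product_stream = to_stream(a) & to_stream(b)
print("estimated product:", product_stream.mean(), "exact:", a * b)
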
46

Frady, E. Paxon, Denis Kleyko, and Friedrich T. Sommer. "A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks." Neural Computation 30, no. 6 (June 2018): 1449–513. http://dx.doi.org/10.1162/neco_a_01084.

Abstract:
To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
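
The sequence-indexing scheme summarized above (randomized input weights, orthogonal recurrent weights, and a linear readout whose accuracy is limited by crosstalk noise) can be sketched with a random permutation playing the role of the orthogonal recurrent matrix. The vector dimension and the recall demo are illustrative assumptions, not the configuration analyzed in the paper.

import numpy as np

rng = np.random.default_rng(11)
N = 2000                              # vector dimension; larger N means less crosstalk noise

w_in = rng.choice([-1.0, 1.0], N)     # randomized bipolar input weights
perm = rng.permutation(N)             # random permutation acts as an orthogonal recurrent matrix

def encode(u):
    """Superpose the input history: x_t = P x_{t-1} + w_in * u_t."""
    x = np.zeros(N)
    for u_t in u:
        x = x[perm] + w_in * u_t      # rotate the old trace, then add the new item
    return x

def recall(x, d):
    """Linear readout of the item stored d steps before the most recent one."""
    key = w_in
    for _ in range(d):
        key = key[perm]               # P^d w_in is the key for that time slot
    return key @ x / N                # inner product; residual error is crosstalk noise

u = rng.uniform(-1, 1, 30)
x = encode(u)
print("stored:", round(u[-4], 3), " recalled (d=3):", round(recall(x, 3), 3))
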
47

Johnson, Chris, Andrew Philippides, and Philip Husbands. "Active Shape Discrimination with Compliant Bodies as Reservoir Computers." Artificial Life 22, no. 2 (May 2016): 241–68. http://dx.doi.org/10.1162/artl_a_00202.

Abstract:
Compliant bodies with complex dynamics can be used both to simplify control problems and to lead to adaptive reflexive behavior when engaged with the environment in the sensorimotor loop. By revisiting an experiment introduced by Beer and replacing the continuous-time recurrent neural network therein with reservoir computing networks abstracted from compliant bodies, we demonstrate that adaptive behavior can be produced by an agent in which the body is the main computational locus. We show that bodies with complex dynamics are capable of integrating, storing, and processing information in meaningful and useful ways, and furthermore that with the addition of the simplest of nervous systems such bodies can generate behavior that could equally be described as reflexive or minimally cognitive.
48

Dion, Yves, and Saad Bennis. "A global modeling approach to the hydraulic performance evaluation of a sewer network." Canadian Journal of Civil Engineering 37, no. 11 (November 2010): 1432–36. http://dx.doi.org/10.1139/l10-082.

Abstract:
This paper presents an objective methodology for evaluating the hydraulic performance of sewer networks. Two methods of computing a hydraulic performance index are presented. One method uses the nonlinear reservoir model to generate runoff hydrographs at the outlets of each of the drainage basins at the street-section scale. Dynamic wave equations are used for routing these hydrographs through the drainage network to obtain flow depths and levels within pipes and manholes. The second method computes this index using a generalized rational hydrograph model applied to the entire upstream basin, aggregated as a single node. The hydraulic model uses a simple energy balance equation applied to each evaluated individual sewer. The case study compares these approaches by applying the methodology to small hypothetical and real networks. Slightly better results were obtained with the generalized rational method than with the nonlinear reservoir method. The generalized rational method was able to forecast measured discharges at the outlets of subcatchments with a level of accuracy acceptable for sewer network management and evaluation needs.
49

Di Sarli, Claudio Gallicchio, and Alessio Micheli. "On the effectiveness of Gated Echo State Networks for data exhibiting long-term dependencies." Computer Science and Information Systems 19, no. 1 (2022): 379–96. http://dx.doi.org/10.2298/csis210218063d.

Abstract:
In the context of recurrent neural networks, gated architectures such as the GRU have contributed to the development of highly accurate machine learning models that can tackle long-term dependencies in the data. However, the training of such networks is performed by the expensive algorithm of gradient descent with backpropagation through time. On the other hand, reservoir computing approaches such as Echo State Networks (ESNs) can produce models that can be trained efficiently thanks to the use of fixed random parameters, but are not ideal for dealing with data presenting long-term dependencies. We explore the problem of employing gated architectures in ESNs from both theoretical and empirical perspectives. We do so by deriving and evaluating a necessary condition for the non-contractivity of the state transition function, which is important to overcome the fading-memory characterization of conventional ESNs. We find that using pure reservoir computing methodologies is not sufficient for effective gating mechanisms, while instead training even only the gates is highly effective in terms of predictive accuracy.
50

Sillin, Henry O., Renato Aguilera, Hsien-Hang Shieh, Audrius V. Avizienis, Masakazu Aono, Adam Z. Stieg, and James K. Gimzewski. "A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing." Nanotechnology 24, no. 38 (September 2, 2013): 384004. http://dx.doi.org/10.1088/0957-4484/24/38/384004.