A selection of scientific literature on the topic "Reservoir computing networks"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Reservoir computing networks".

Next to each work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided these details are available in the metadata.

Journal articles on the topic "Reservoir computing networks"

1

Van der Sande, Guy, Daniel Brunner, and Miguel C. Soriano. "Advances in photonic reservoir computing." Nanophotonics 6, no. 3 (May 12, 2017): 561–76. http://dx.doi.org/10.1515/nanoph-2016-0132.

Abstract:
We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
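As a point of reference for the paradigm this abstract summarizes, in which a fixed, untrained reservoir's transient response is interpreted by a trained linear readout, the following is a minimal software sketch: a generic echo state network in Python with illustrative parameters, not the photonic implementation the authors review.

```python
import numpy as np

# Minimal echo state network: the reservoir weights stay fixed and random;
# only the linear readout is trained (here by ridge regression).
rng = np.random.default_rng(0)

N = 100          # reservoir size (illustrative)
T = 1000         # number of time steps
washout = 100    # initial transient to discard

# Random fixed reservoir, rescaled to spectral radius 0.9 (a common choice
# for the echo-state property), plus random input weights
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

# Toy task: reproduce the input delayed by 5 steps (a short-term-memory benchmark)
u = rng.uniform(-1, 1, size=T)
target = np.roll(u, 5)

# Drive the reservoir and collect its transient states
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Ridge-regression readout on the post-washout states
X, y = states[washout:], target[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
nmse = np.mean((X @ W_out - y) ** 2) / np.var(y)
print(f"NMSE on 5-step memory task: {nmse:.4f}")
```

The point of the sketch is the division of labor: everything inside the time loop is fixed at initialization, and all learning happens in the single linear solve at the end.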
2

Ghosh, Sanjib, Kohei Nakajima, Tanjung Krisnanda, Keisuke Fujii, and Timothy C. H. Liew. "Quantum Neuromorphic Computing with Reservoir Computing Networks." Advanced Quantum Technologies 4, no. 9 (July 9, 2021): 2100053. http://dx.doi.org/10.1002/qute.202100053.

3

Röhm, André, Lina Jaurigue, and Kathy Lüdge. "Reservoir Computing Using Laser Networks." IEEE Journal of Selected Topics in Quantum Electronics 26, no. 1 (January 2020): 1–8. http://dx.doi.org/10.1109/jstqe.2019.2927578.

4

Damicelli, Fabrizio, Claus C. Hilgetag, and Alexandros Goulas. "Brain connectivity meets reservoir computing." PLOS Computational Biology 18, no. 11 (November 16, 2022): e1010639. http://dx.doi.org/10.1371/journal.pcbi.1010639.

Abstract:
The connectivity of Artificial Neural Networks (ANNs) differs from that observed in Biological Neural Networks (BNNs). Can the wiring of actual brains help improve ANN architectures? Can ANNs teach us which network features support computation in the brain when solving a task? At the meso/macro-scale level of connectivity, ANN architectures are carefully engineered, and such design decisions have been of crucial importance in many recent performance improvements. BNNs, on the other hand, exhibit complex emergent connectivity patterns at all scales. At the individual level, BNN connectivity results from brain development and plasticity processes, while at the species level, adaptive reconfigurations during evolution also play a major role in shaping connectivity. Ubiquitous features of brain connectivity have been identified in recent years, but their role in the brain’s ability to perform concrete computations remains poorly understood. Computational neuroscience studies reveal the influence of specific brain connectivity features only on abstract dynamical properties, while the implications of real brain network topologies for machine learning or cognitive tasks have barely been explored. Here we present a cross-species study with a hybrid approach integrating real brain connectomes and Bio-Echo State Networks, which we use to solve concrete memory tasks, allowing us to probe the potential computational implications of real brain connectivity patterns for task solving. We find results consistent across species and tasks, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. We also present a framework, bio2art, to map and scale up real connectomes so that they can be integrated into recurrent ANNs. This approach also allows us to show the crucial importance of the diversity of interareal connectivity patterns, stressing the importance of the stochastic processes that determine neural network connectivity in general.
5

Antonik, Piotr, Serge Massar, and Guy Van Der Sande. "Photonic reservoir computing using delay dynamical systems." Photoniques, no. 104 (September 2020): 45–48. http://dx.doi.org/10.1051/photon/202010445.

Abstract:
The recent progress in artificial intelligence has spurred renewed interest in hardware implementations of neural networks. Reservoir computing is a powerful, highly versatile machine learning algorithm well suited to experimental implementations. The simplest high-performance architecture is based on delay dynamical systems. We illustrate its power through a series of photonic examples, including the first all-optical reservoir computer and reservoir computers based on lasers with delayed feedback. We also show how reservoirs can be used to emulate dynamical systems. Finally, we discuss the prospects of photonic reservoir computing.
6

Tran, Dat, and Christof Teuscher. "Computational Capacity of Complex Memcapacitive Networks." ACM Journal on Emerging Technologies in Computing Systems 17, no. 2 (April 2021): 1–25. http://dx.doi.org/10.1145/3445795.

Abstract:
Emerging memcapacitive nanoscale devices have the potential to perform computations in new ways. In this article, we systematically study, to the best of our knowledge for the first time, the computational capacity of complex memcapacitive networks, which function as reservoirs in reservoir computing, one of the brain-inspired computing architectures. Memcapacitive networks are composed of memcapacitive devices randomly connected through nanowires. Previous studies have shown that both regular and random reservoirs provide sufficient dynamics to perform simple tasks. How do complex memcapacitive networks demonstrate their computational capability, and which topological structures of memcapacitive networks solve complex tasks efficiently? Studies show that small-world power-law (SWPL) networks offer an ideal trade-off between the communication properties and the wiring cost of networks. In this study, we illustrate the computing nature of SWPL memcapacitive reservoirs by exploring two essential properties, fading memory and linear separation, through measurements of kernel quality. Compared to ideal reservoirs, nanowire memcapacitive reservoirs had a better dynamic response and improved their performance by 4.67% on three tasks: MNIST, Isolated Spoken Digits, and CIFAR-10. On the same three tasks, compared to memristive reservoirs, nanowire memcapacitive reservoirs achieved comparable performance with far less power: on average, about 99×, 17×, and 277× less, respectively. Simulation results on the topological transformation of memcapacitive networks reveal that the topological structures of the memcapacitive SWPL reservoirs did not affect their performance but contributed significantly to the wiring cost and the power consumption of the systems. The minimum trade-off between the wiring cost and the power consumption occurred at different network settings of α and β: 4.5 and 0.61 for Biolek reservoirs, 2.7 and 1.0 for Mohamed reservoirs, and 3.0 and 1.0 for Najem reservoirs. The results of our research illustrate the computational capacity of complex memcapacitive networks as reservoirs in reservoir computing. Such memcapacitive networks with an SWPL topology are energy-efficient systems suitable for low-power applications such as mobile devices and the Internet of Things.
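Kernel quality, one of the two properties measured in the abstract above, is commonly estimated as the rank of a matrix of reservoir states produced by distinct input streams. The sketch below illustrates that measurement on a generic tanh reservoir, an assumption made for illustration; the paper itself uses memcapacitive device models.

```python
import numpy as np

# Kernel-quality sketch: drive the same reservoir with many distinct input
# streams and take the rank of the matrix of final states. A higher rank
# means the reservoir maps different inputs to more linearly separable states.
rng = np.random.default_rng(2)

N = 40            # reservoir size
M = 60            # number of distinct input streams
L = 30            # length of each stream

W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9
W_in = rng.uniform(-1, 1, size=N)

final_states = np.empty((M, N))
for m in range(M):
    u = rng.uniform(-1, 1, size=L)
    x = np.zeros(N)
    for t in range(L):
        x = np.tanh(W @ x + W_in * u[t])
    final_states[m] = x            # state after the whole stream has been fed in

kernel_quality = np.linalg.matrix_rank(final_states)
print(f"Kernel quality (rank): {kernel_quality} out of a possible {N}")
```

Fading memory can be probed with the same machinery by making the input streams identical except in their distant past and checking whether the final states still differ.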
7

Senn, Christoph Walter, and Itsuo Kumazawa. "Abstract Reservoir Computing." AI 3, no. 1 (March 10, 2022): 194–210. http://dx.doi.org/10.3390/ai3010012.

Abstract:
Noise of any kind can be an issue when translating results from simulations to the real world. We suddenly have to deal with building tolerances, faulty sensors, or just noisy sensor readings. This is especially evident in systems with many free parameters, such as those used in physical reservoir computing. By abstracting away these kinds of noise sources using intervals, we derive a regularized training regime for reservoir computing that uses sets of possible reservoir states. Numerical simulations are used to show the effectiveness of our approach against different sources of error that can appear in real-world scenarios, and to compare it with standard approaches. Our results support the application of interval arithmetic to improve the robustness of mass-spring networks trained in simulations.
8

Hart, Joseph D., Laurent Larger, Thomas E. Murphy, and Rajarshi Roy. "Delayed dynamical systems: networks, chimeras and reservoir computing." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377, no. 2153 (July 22, 2019): 20180123. http://dx.doi.org/10.1098/rsta.2018.0123.

Abstract:
We present a systematic approach to reveal the correspondence between time-delay dynamics and networks of coupled oscillators. After early demonstrations of the usefulness of spatio-temporal representations of time-delay system dynamics, extensive research on optoelectronic feedback loops has revealed their immense potential for realizing complex system dynamics such as chimeras in rings of coupled oscillators and applications to reservoir computing. Delayed dynamical systems have been enriched in recent years through the application of digital signal processing techniques. Very recently, we have shown that one can significantly extend the capabilities and implement networks with arbitrary topologies through the use of field-programmable gate arrays. This architecture allows the design of appropriate filters and multiple time delays, and greatly extends the possibilities for exploring synchronization patterns in arbitrary network topologies. This has enabled us to explore complex dynamics on networks with nodes that can be perfectly identical, introduce parameter heterogeneities and multiple time delays, as well as change network topologies to control the formation and evolution of patterns of synchrony. This article is part of the theme issue ‘Nonlinear dynamics of delay systems’.
9

Pathak, Shantanu S., and D. Rajeswara Rao. "Reservoir Computing for Healthcare Analytics." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 240. http://dx.doi.org/10.14419/ijet.v7i2.32.15576.

Abstract:
In this data age, tools for the sophisticated generation and handling of data are in constant use. Data varying in both space and time pose a particular breed of challenges, and the forecasting challenges they pose can be handled well by reservoir-computing-based neural networks. Challenges such as class imbalance, missing values, and locality effects are discussed here, along with popular statistical techniques for forecasting such data. Results show how the reservoir-computing-based technique outperforms traditional neural networks.
10

Andrecut, M. "Reservoir computing on the hypersphere." International Journal of Modern Physics C 28, no. 07 (July 2017): 1750095. http://dx.doi.org/10.1142/s0129183117500954.

Abstract:
Reservoir Computing (RC) refers to a Recurrent Neural Network (RNN) framework, frequently used for sequence learning and time series prediction. The RC system consists of a random fixed-weight RNN (the input-hidden reservoir layer) and a classifier (the hidden-output readout layer). Here, we focus on the sequence learning problem, and we explore a different approach to RC. More specifically, we remove the nonlinear neural activation function, and we consider an orthogonal reservoir acting on normalized states on the unit hypersphere. Surprisingly, our numerical results show that the system’s memory capacity exceeds the dimensionality of the reservoir, which is the upper bound for the typical RC approach based on Echo State Networks (ESNs). We also show how the proposed system can be applied to symmetric cryptography problems, and we include a numerical implementation.

Dissertations on the topic "Reservoir computing networks"

1

Fu, Kaiwei. "Reservoir Computing with Neuro-memristive Nanowire Networks." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25900.

Abstract:
We present simulation results based on a model of self-assembled nanowire networks with memristive junctions and neural-network-like topology. We analyse the dynamical voltage distribution in response to an applied bias and explain the network conductance fluctuations observed in previous experimental studies. We show I–V curves under AC stimulation and compare these to other bulk memristors. We then study the capacity of these nanowire networks for neuro-inspired reservoir computing by demonstrating higher harmonic generation and short- and long-term memory. Benchmark tasks in a reservoir computing framework are implemented, including nonlinear wave transformation, wave auto-generation, and hand-written digit classification.
2

Kulkarni, Manjari S. "Memristor-based Reservoir Computing." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.

Abstract:
In today's nanoscale era, scaling down to even smaller feature sizes poses a significant challenge in device fabrication, circuit design, and system design and integration. On the other hand, nanoscale technology has also led to novel materials and devices with unique properties. The memristor is one such emergent nanoscale device that exhibits non-linear current-voltage characteristics and has an inherent memory property, i.e., its current state depends on the past. Both the non-linear and the memory property of memristors have the potential to enable solving spatial and temporal pattern recognition tasks in radically different ways from traditional binary transistor-based technology. The goal of this thesis is to explore the use of memristors in a novel computing paradigm called "Reservoir Computing" (RC). RC is a new paradigm that belongs to the class of artificial recurrent neural networks (RNN). However, it architecturally differs from the traditional RNN techniques in that the pre-processor (i.e., the reservoir) is made up of random, recurrently connected non-linear elements. Learning is only implemented at the readout (i.e., the output) layer, which reduces the learning complexity significantly. To the best of our knowledge, memristors have never been used as reservoir components. We use pattern recognition and classification tasks as benchmark problems. Real-world applications associated with these tasks include process control, speech recognition, and signal processing. We have built a software framework, RCspice (Reservoir Computing Simulation Program with Integrated Circuit Emphasis), for this purpose. The framework allows us to create random memristor networks, simulate and evaluate them in Ngspice, and train the readout layer by means of Genetic Algorithms (GA). We have explored reservoir-related parameters, such as the network connectivity and the reservoir size, along with the GA parameters.
Our results show that we are able to efficiently and robustly classify time-series patterns using memristor-based dynamical reservoirs. This presents an important step towards computing with memristor-based nanoscale systems.
3

Canaday, Daniel M. "Modeling and Control of Dynamical Systems with Reservoir Computing." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu157469471458874.

4

Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.

Abstract:
Computational intelligence techniques, such as artificial neural networks (ANNs), have been widely used to improve the performance of power system monitoring and control. Although inspired by the neurons in the brain, ANNs are largely different from living neuron networks (LNNs) in many aspects. Due to the oversimplification, the huge computational potential of LNNs cannot be realized by ANNs. Therefore, a more brain-like artificial neural network is highly desired to bridge the gap between ANNs and LNNs. The focus of this research is to develop a biologically inspired artificial neural network (BIANN), which is not only biologically meaningful, but also computationally powerful. The BIANN can serve as a novel computational intelligence tool in monitoring, modeling and control of the power systems. A comprehensive survey of ANNs applications in power system is presented. It is shown that novel types of reservoir-computing-based ANNs, such as echo state networks (ESNs) and liquid state machines (LSMs), have stronger modeling capability than conventional ANNs. The feasibility of using ESNs as modeling and control tools is further investigated in two specific power system applications, namely, power system nonlinear load modeling for true load harmonic prediction and the closed-loop control of active filters for power quality assessment and enhancement. It is shown that in both applications, ESNs are capable of providing satisfactory performances with low computational requirements. A novel, more brain-like artificial neural network, i.e. biologically inspired artificial neural network (BIANN), is proposed in this dissertation to bridge the gap between ANNs and LNNs and provide a novel tool for monitoring and control in power systems. A comprehensive survey of the spiking models of living neurons as well as the coding approaches is presented to review the state-of-the-art in BIANN research. 
The proposed BIANNs are based on spiking models of living neurons and adopt reservoir-computing approaches. It is shown that the proposed BIANNs have strong modeling capability and low computational requirements, which makes them a perfect candidate for online monitoring and control applications in power systems. BIANN-based modeling and control techniques are also proposed for power system applications. The proposed modeling and control schemes are validated for the modeling and control of a generator in a single-machine infinite-bus system under various operating conditions and disturbances. It is shown that the proposed BIANN-based technique can provide better control of the power system to enhance its reliability and tolerance to disturbances, offering faster and more accurate control for power system applications. The conclusions, the recommendations for future research, and the major contributions of this research are presented at the end.
5

Alomar Barceló, Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.

Abstract:
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
6

Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.

Abstract:
Many living organisms have the ability to execute complex behaviors and cognitive processes reliably. In many cases, such tasks are generated in the absence of an ongoing external input that could drive the activity of their underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist. There have to be patterns of activity in the areas of the brain responsible for this behaviour that are endogenously generated every time an individual performs this action. Whereas the question of how such a neural code is transformed into the target motor sequence is a question of its own, its origin is perhaps even more puzzling. Most models of cortical and sub-cortical circuits suggest that many of their neural populations are chaotic, meaning that very small amounts of noise, such as an additional action potential in one neuron of a network, can lead to completely different patterns of activity. Reservoir computing is one of the first frameworks that provided an efficient solution for biologically relevant neural networks to learn complex temporal tasks in the presence of chaos. We showed that although reservoirs (i.e. recurrent neural networks) are robust to noise, they are extremely sensitive to some forms of structural perturbation, such as removing one neuron out of thousands. We proposed an alternative to these models, where the source of autonomous activity no longer originates from the reservoir, but from a set of oscillating networks projecting to the reservoir. In our simulations, we show that this solution produces rich patterns of activity and leads to networks that are resistant to both noise and structural perturbations. The model can learn a wide variety of temporal tasks such as interval timing, motor control, speech production and spatial navigation.
7

Almassian, Amin. "Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2724.

Abstract:
Real-time processing of space- and time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to the neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm that is inspired by cortical Neural Networks (NN). It is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called a reservoir, the state of which is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. The Echo-State Network (ESN) [1] and the Liquid-State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons – and, more recently, Leaky Integrator (LI) neurons – and a normalized random connectivity matrix in the reservoir. In the latter, the reservoir is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, which are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their neuron model dynamics and their inter-neuron communication mechanism. However, RC systems share a mysterious common property: they exhibit the best performance when reservoir dynamics undergo a criticality [1–6] – governed by the reservoirs’ connectivity parameters, |λmax| ≈ 1 in ESN, λ ≈ 2 and w in LSM – which is referred to as the edge of chaos in [3–5]. In this study, we are interested in exploring the possible reasons for this commonality, despite the differences imposed by the different neuron types in the reservoir dynamics. We address this concern from the perspective of the information representation in both spiking and non-spiking reservoirs.
We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-trains input, as well as that, between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with the temporal parity task performance. We complement our investigation by conducting isolated spoken-digit recognition and spoken-digit sequence-recognition tasks. We hypothesize that a performance analysis of these two tasks will agree with our MI and MCMI results with regard to the impact of stable memory in task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maxi-mized. Likewise, in the chaotic regime (when the network connectivity parameter is greater than a critical value), the absence of stable memory in the reservoir seems to be an evident cause for performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possess a higher stable memory of the input (quantified by input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons in processing the temporal parity and spoken-digit recognition tasks. From an efficiency stand point, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken- digit recognition tasks. The sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMI’s and output-reservoir MCMI’s we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, 3.71, and 2.92, 2.51, and 2.47 respectively. In our isolated spoken-digits recognition experiments, the maximum achieved mean-performance by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons are 97%, 79% and 2% respectively. 
The reservoirs with N = 100 neurons could solve the task with 80%, 68%, and 0.9% accuracy, respectively. Our study sheds light on the impact of the reservoir's information representation and memory on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems that compute on spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
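The ESN variant described in this abstract – a fixed random reservoir of Leaky Integrator neurons with connectivity rescaled toward the critical regime |λmax| ≈ 1, read out by a trained linear layer – can be sketched as follows. All dimensions, the leak rate, the toy delay-recall target, and the ridge regularization are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 1          # reservoir and input dimensions (illustrative)
alpha = 0.3            # leak rate of the Leaky Integrator (LI) neurons

# Random reservoir weights, rescaled so the spectral radius |lambda_max| is
# just below 1 -- the near-critical regime where RC performance is reported
# to peak.
W = rng.normal(size=(N, N))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, size=(N, K))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, K)."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = (1 - alpha) * x + alpha * np.tanh(W @ x + W_in @ u_t)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (here by ridge regression);
# the reservoir itself stays fixed, as the abstract notes.
u = rng.uniform(-0.5, 0.5, size=(200, K))
X = run_reservoir(u)
y = np.roll(u[:, 0], 1)            # toy target: recall the previous input
y[0] = 0.0
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
```

The delay-recall target stands in for the memory-demanding tasks discussed above: it is solvable only if the reservoir state retains a trace of past inputs.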
Стилі APA, Harvard, Vancouver, ISO та ін.
8

Bazzanella, Davide. "Microring Based Neuromorphic Photonics." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/344624.

Повний текст джерела
Анотація:
This manuscript investigates the use of microring resonators to create all-optical reservoir-computing networks implemented in silicon photonics. Artificial neural networks and reservoir computing are promising applications for integrated photonics, as they could exploit the bandwidth and the intrinsic parallelism of optical signals. This work mainly illustrates two aspects: the modelling of photonic integrated circuits and the experimental results obtained with all-optical devices. The modelling of photonic integrated circuits is examined in detail, both concerning the fundamental theory and from the point of view of numerical simulations. In particular, the simulations focus on the nonlinear effects present in integrated optical cavities, which increase the inherent complexity of their optical response. Toward this objective, I developed a new numerical tool, precise, which can simulate arbitrary circuits, taking into account both linear propagation and nonlinear effects. The experimental results concentrate on the use of SCISSORs and a single microring resonator as reservoirs, and on the complex perceptron scheme. The devices have been extensively tested with logical operations, achieving bit error rates of less than 10^−5 at 16 Gbps in the case of the complex perceptron. Additionally, an in-depth explanation of the experimental setup and a description of the fabricated designs are provided. The achievements reported in this work mark an encouraging first step toward the development of novel networks that employ the full potential of all-optical devices.
Стилі APA, Harvard, Vancouver, ISO та ін.
9

Röhm, André [Verfasser], Kathy [Akademischer Betreuer] Lüdge, Kathy [Gutachter] Lügde, and Ingo [Gutachter] Fischer. "Symmetry-Breaking bifurcations and reservoir computing in regular oscillator networks / André Röhm ; Gutachter: Kathy Lügde, Ingo Fischer ; Betreuer: Kathy Lüdge." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1183789491/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
10

Enel, Pierre. "Représentation dynamique dans le cortex préfrontal : comparaison entre reservoir computing et neurophysiologie du primate." Phd thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-01056696.

Повний текст джерела
Анотація:
Primates must be able to recognize new situations in order to adapt to them. The representation of these situations in cortical activity is the subject of this thesis. Complex situations are often explained by the interaction between sensory, internal, and motor information. Single-unit activities termed mixed selectivity, which are highly prevalent in the prefrontal cortex (PFC), are a possible mechanism for representing any interaction between pieces of information. In parallel, Reservoir Computing has demonstrated that recurrent networks have the property of recombining current and past inputs into a higher-dimensional space, thereby providing a potentially universal pre-coding of combinations that can then be selected and used according to their relevance to the current task. Combining these two approaches, we argue that the highly recurrent nature of local PFC connectivity gives rise to a dynamic form of mixed selectivity. Moreover, we attempt to demonstrate that a simple linear regression, implementable by a single neuron, can extract any information/contingency encoded in these complex and dynamic combinations. Finally, the preceding inputs to these PFC networks, whether sensory or motor, must be maintained in order to influence current processing. We argue that the representations of the contexts defined by these preceding inputs must be expressed explicitly and fed back to the local PFC networks to influence the current combinations underlying the representation of contingencies
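The abstract's central claim – that a recurrent, nonlinear expansion into a higher-dimensional space makes otherwise inseparable input combinations linearly decodable by a single readout neuron – can be illustrated with a minimal static sketch. The random tanh feature map, the 50-unit expansion, and the XOR target below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two binary task variables; their XOR is linearly inseparable in the inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# A plain linear readout on the raw inputs fails: least squares gets stuck
# near 0.5 for every pattern.
A = np.hstack([X, np.ones((4, 1))])
w_lin, *_ = np.linalg.lstsq(A, y, rcond=None)
pred_lin = A @ w_lin

# Random nonlinear expansion into a higher-dimensional "mixed-selectivity"
# space, standing in for the recurrent recombination of inputs.
W = rng.normal(size=(2, 50))
b = rng.normal(size=50)
H = np.tanh(X @ W + b)

# Now a single linear readout (one "neuron") separates the XOR contingency.
B = np.hstack([H, np.ones((4, 1))])
w_out, *_ = np.linalg.lstsq(B, y, rcond=None)
pred = B @ w_out
```

The same readout mechanism extracts any other Boolean combination of the two variables from H, which is the sense in which the pre-coding is "potentially universal".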
Стилі APA, Harvard, Vancouver, ISO та ін.

Книги з теми "Reservoir computing networks"

1

Brunner, Daniel, Miguel C. Soriano, and Guy Van der Sande. Photonic Reservoir Computing: Optical Recurrent Neural Networks. de Gruyter GmbH, Walter, 2019.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.

Частини книг з теми "Reservoir computing networks"

1

Sergio, Anderson Tenório, and Teresa B. Ludermir. "PSO for Reservoir Computing Optimization." In Artificial Neural Networks and Machine Learning – ICANN 2012, 685–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_86.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
2

Verstraeten, David, and Benjamin Schrauwen. "On the Quantification of Dynamics in Reservoir Computing." In Artificial Neural Networks – ICANN 2009, 985–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_101.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
3

Atencia, Miguel, Claudio Gallicchio, Gonzalo Joya, and Alessio Micheli. "Time Series Clustering with Deep Reservoir Computing." In Artificial Neural Networks and Machine Learning – ICANN 2020, 482–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61616-8_39.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
4

Colla, Valentina, Ismael Matino, Stefano Dettori, Silvia Cateni, and Ruben Matino. "Reservoir Computing Approaches Applied to Energy Management in Industry." In Engineering Applications of Neural Networks, 66–79. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20257-6_6.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
5

Kobayashi, Taisuke. "Practical Fractional-Order Neuron Dynamics for Reservoir Computing." In Artificial Neural Networks and Machine Learning – ICANN 2018, 116–25. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01424-7_12.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
6

Schliebs, Stefan, Nikola Kasabov, Dave Parry, and Doug Hunt. "Towards a Wearable Coach: Classifying Sports Activities with Reservoir Computing." In Engineering Applications of Neural Networks, 233–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41013-0_24.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
7

Hinaut, Xavier, and Peter F. Dominey. "On-Line Processing of Grammatical Structure Using Reservoir Computing." In Artificial Neural Networks and Machine Learning – ICANN 2012, 596–603. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_75.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
8

Oyebode, Oluwaseun, and Josiah Adeyemo. "Reservoir Inflow Forecasting Using Differential Evolution Trained Neural Networks." In Advances in Intelligent Systems and Computing, 307–19. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07494-8_21.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
9

Tietz, Stephan, Doreen Jirak, and Stefan Wermter. "A Reservoir Computing Framework for Continuous Gesture Recognition." In Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions, 7–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30493-5_1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
10

Locquet, Jean-Pierre. "Overview on the PHRESCO Project: PHotonic REServoir COmputing." In Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions, 149–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30493-5_14.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.

Тези доповідей конференцій з теми "Reservoir computing networks"

1

Bacciu, Davide, Daniele Di Sarli, Pouria Faraji, Claudio Gallicchio, and Alessio Micheli. "Federated Reservoir Computing Neural Networks." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534035.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
2

Heroux, Jean Benoit, Hidetoshi Numata, Naoki Kanazawa, and Daiju Nakano. "Optoelectronic Reservoir Computing with VCSEL." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489757.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
3

Dong, Jonathan, Erik Borve, Mushegh Rafayelyan, and Michael Unser. "Asymptotic Stability in Reservoir Computing." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892302.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
4

Wyffels, Francis, Benjamin Schrauwen, David Verstraeten, and Dirk Stroobandt. "Band-pass Reservoir Computing." In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4634252.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
5

Fu, Kaiwei, Ruomin Zhu, Alon Loeffler, Joel Hochstetter, Adrian Diaz-Alvarez, Adam Stieg, James Gimzewski, Tomonobu Nakayama, and Zdenka Kuncic. "Reservoir Computing with Neuromemristive Nanowire Networks." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207727.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
6

Mujal, Pere, Johannes Nokkala, Rodrigo Martinez-Peña, Jorge Garcia Beni, Gian Luca Giorgi, Miguel C. Cornelles-Soriano, and Roberta Zambrini. "Quantum reservoir computing in bosonic networks." In Emerging Topics in Artificial Intelligence (ETAI) 2021, edited by Giovanni Volpe, Joana B. Pereira, Daniel Brunner, and Aydogan Ozcan. SPIE, 2021. http://dx.doi.org/10.1117/12.2596177.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
7

Gallicchio, Claudio. "Sparsity in Reservoir Computing Neural Networks." In 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2020. http://dx.doi.org/10.1109/inista49547.2020.9194611.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
8

Zhu, Ruomin, Sam Lilak, Alon Loeffler, Joseph Lizier, Adam Stieg, James Gimzewski, and Zdenka Kuncic. "Reservoir Computing with Neuromorphic Nanowire Networks." In Neuromorphic Materials, Devices, Circuits and Systems. València: FUNDACIO DE LA COMUNITAT VALENCIANA SCITO, 2023. http://dx.doi.org/10.29363/nanoge.neumatdecas.2023.055.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
9

Röhm, André, and Kathy Lüdge. "Reservoir computing with delay in structured networks." In Neuro-inspired Photonic Computing, edited by Marc Sciamanna and Peter Bienstman. SPIE, 2018. http://dx.doi.org/10.1117/12.2307159.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
10

Laporte, Floris, Joni Dambre, and Peter Bienstman. "Reservoir computing with signal-mixing cavities." In 2017 19th International Conference on Transparent Optical Networks (ICTON). IEEE, 2017. http://dx.doi.org/10.1109/icton.2017.8024990.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
