Dissertations on the topic "Reservoir computing networks"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 30 dissertations for your research on the topic "Reservoir computing networks".
Next to every entry in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the record's metadata.
Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.
Fu, Kaiwei. "Reservoir Computing with Neuro-memristive Nanowire Networks." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25900.
Kulkarni, Manjari S. "Memristor-based Reservoir Computing." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.
Canaday, Daniel M. "Modeling and Control of Dynamical Systems with Reservoir Computing." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu157469471458874.
Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.
Повний текст джерелаAlomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
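As a concrete illustration of the resource trade-offs discussed in this abstract, the sketch below quantizes a reservoir-style weight matrix to signed fixed point, the usual first step in a limited-resource digital design. This is a generic Python/NumPy sketch, not the thesis's actual method; the word lengths and matrix statistics are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def to_fixed_point(w, frac_bits=8, total_bits=16):
    """Quantize to signed fixed point with frac_bits fractional bits."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(w * scale), lo, hi) / scale

W = rng.normal(0, 0.5, (100, 100))        # toy reservoir weight matrix
Wq = to_fixed_point(W, frac_bits=8)
print("max quantization error:", np.abs(W - Wq).max())   # at most 2**-9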
Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.
Almassian, Amin. "Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2724.
Bazzanella, Davide. "Microring Based Neuromorphic Photonics." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/344624.
Röhm, André [Verfasser], Kathy [Akademischer Betreuer] Lüdge, Kathy [Gutachter] Lügde, and Ingo [Gutachter] Fischer. "Symmetry-Breaking bifurcations and reservoir computing in regular oscillator networks / André Röhm ; Gutachter: Kathy Lügde, Ingo Fischer ; Betreuer: Kathy Lüdge." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1183789491/34.
Enel, Pierre. "Représentation dynamique dans le cortex préfrontal : comparaison entre reservoir computing et neurophysiologie du primate." Phd thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-01056696.
Passey, Jr David Joseph. "Growing Complex Networks for Better Learning of Chaotic Dynamical Systems." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8146.
Baldini, Paolo. "Online adaptation of robots controlled by nanowire networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25396/.
Antonik, Piotr. "Application of FPGA to real-time machine learning: hardware reservoir computers and software image processing." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/257660.
Strock, Anthony. "Mémoire de travail dans les réseaux de neurones récurrents aléatoires." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0195.
Working memory can be defined as the ability to temporarily store and manipulate information of any kind. For example, imagine that you are asked to mentally add a series of numbers. In order to accomplish this task, you need to keep track of the partial sum that must be updated every time a new number is given. Working memory is precisely what makes it possible to maintain (i.e. temporarily store) the partial sum and to update it (i.e. manipulate it). In this thesis, we explore neuronal implementations of this working memory using a limited number of hypotheses. To do this, we place ourselves in the general context of recurrent neural networks and we propose in particular to use the reservoir computing paradigm. This type of very simple model nevertheless makes it possible to produce dynamics that learning can take advantage of to solve a given task. In this work, the task to be performed is a gated working memory task. The model receives as input a signal which controls the update of the memory. When the gate is closed, the model must maintain its current memory state, while when it is open, it must update it based on an input. In our approach, this additional input is present at all times, even when there is no update to make. In other words, we require our model to be an open system, i.e. a system which is always perturbed by its inputs but which must nevertheless learn to keep a stable memory.

In the first part of this work, we present the architecture of the model and its properties, then we show its robustness through a parameter sensitivity study. This shows that the model is extremely robust over a wide range of parameters: more or less any random population of neurons can be used to perform gating. Furthermore, after learning, we highlight an interesting property of the model, namely that information can be maintained in a fully distributed manner, i.e. without being correlated with any single neuron but only with the dynamics of the group. More precisely, working memory is not correlated with the sustained activity of neurons, which has nevertheless long been observed in the literature and recently questioned experimentally. This model confirms these results at the theoretical level.

In the second part of this work, we show how these models obtained by learning can be extended in order to manipulate the information held in the latent space. We therefore propose to consider conceptors, which can be thought of as a set of synaptic weights that constrain the dynamics of the reservoir and direct it towards particular subspaces, for example subspaces corresponding to the maintenance of a particular value. More generally, we show that these conceptors can maintain not only information but also functions. In the case of the mental arithmetic mentioned previously, the conceptors make it possible to remember and apply the operation to be carried out on the various inputs given to the system. Conceptors therefore make it possible to instantiate a procedural working memory in addition to the declarative working memory. We conclude this work by putting this theoretical model into perspective with respect to biology and neuroscience.
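For readers unfamiliar with conceptors: following Jaeger's formulation, a conceptor can be computed from the reservoir state correlation matrix R as C = R(R + α⁻²I)⁻¹, where α is an "aperture" parameter. A minimal sketch, with a made-up state trajectory standing in for a trained reservoir:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake reservoir state trajectory: T time steps of an N-dimensional reservoir.
N, T, aperture = 50, 500, 10.0
states = rng.normal(size=(T, N)) @ rng.normal(size=(N, N)) * 0.1

# Conceptor: C = R (R + aperture^-2 I)^-1, with R the state correlation matrix.
R = states.T @ states / T
C = R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

# Applying C to the running state (x <- C @ tanh(W x + ...)) confines the
# dynamics to the subspace the conceptor has "memorised".
print("singular values lie in [0, 1):", np.linalg.svd(C, compute_uv=False)[:5])
```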
Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.
This work introduces novel neural network algorithms for online spatio-temporal pattern processing by extending the Incremental Gaussian Mixture Network (IGMN). The IGMN algorithm is an online incremental neural network that learns from a single scan through the data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with locally weighted regression (LWR). Four different approaches are used to give temporal processing capabilities to the IGMN algorithm: time-delay lines (Time-Delay IGMN), a reservoir layer (Echo-State IGMN), an exponential moving average of the reconstructed input vector (Merge IGMN) and self-referencing (Recursive IGMN). This results in algorithms that are online, incremental, aggressive and temporally capable, and therefore suitable for tasks with memory or unknown internal states, characterized by continuous non-stopping data flows, and that require life-long learning while operating and giving predictions without separate stages. The proposed algorithms are compared to other spatio-temporal neural networks on 8 time-series prediction tasks. Two of them show satisfactory performance, generally improving upon existing approaches. A general enhancement of the IGMN algorithm is also described, eliminating one of the algorithm's manually tunable parameters and giving better results.
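Among the four temporal extensions above, the time-delay line is the simplest to make concrete: each input sample is concatenated with its q predecessors before reaching the (unmodified) mixture network. A small sketch of that preprocessing step, independent of any actual IGMN implementation:

```python
import numpy as np

def delay_embed(u, q):
    """Stack each sample with its q predecessors: output shape (T-q, q+1)."""
    T = len(u)
    return np.stack([u[i:T - q + i] for i in range(q + 1)], axis=1)

u = np.sin(np.linspace(0, 20, 200))   # toy scalar time series
X = delay_embed(u, q=4)               # row t holds [u(t-4), ..., u(t)]
print(X.shape)                        # (196, 5)
```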
Andersson, Casper. "Reservoir Computing Approach for Network Intrusion Detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54983.
Lawrie, Sofía. "Information representation and processing in neuronal networks: from biological to artificial systems and from first to second-order statistics." Doctoral thesis, Universitat Pompeu Fabra, 2022. http://hdl.handle.net/10803/673989.
Neural networks are today hypothesized to be responsible for the computational capabilities of biological nervous systems. Likewise, artificial neuronal systems are intensively exploited in a variety of industrial and scientific applications. However, how information is represented and processed by these networks is still under debate. That is, it is not clear which properties of neuronal activity are useful for carrying out computations. In this thesis, I present a set of results relating the first-order statistics of neuronal activity to behavior, in the general context of encoding/decoding, by analyzing data collected while non-human primates performed a working memory task. Subsequently, I go beyond first order and show that second-order statistics in reservoir computing, a recurrent artificial neural network model, constitute a robust candidate for the representation and transmission of information for the classification of multidimensional signals.
Martinenghi, Romain. "Démonstration opto-électronique du concept de calculateur neuromorphique par Reservoir Computing." Thesis, Besançon, 2013. http://www.theses.fr/2013BESA2052/document.
Reservoir Computing (RC) is an emerging brain-inspired computational paradigm, which appeared in the early 2000s. It is similar to conventional recurrent neural network (RNN) computing concepts, exhibiting essentially three parts: (i) an input layer to inject the information into the computing system; (ii) a central computational layer called the Reservoir; (iii) and an output layer which extracts the computed result through a so-called Read-Out procedure, the latter being determined after a learning and training step. The main originality compared to RNNs lies in this last part, which is the only one concerned by the training step, the input layer and the Reservoir being randomly determined and then fixed. This specificity brings attractive features to RC compared to RNNs, in terms of simplification, efficiency, rapidity and feasibility of the learning, as well as in terms of dedicated hardware implementation of the RC scheme. This thesis is concerned with one of the first hardware implementations of RC, moreover with an optoelectronic architecture. Our approach to physical RC implementation is based on the use of a special class of complex system for the Reservoir, a nonlinear delay dynamics involving multiple delayed feedback paths. The Reservoir thus appears as a spatio-temporal emulation of a purely temporal dynamics, the delay dynamics. Specific designs of the input and output layers are shown to be possible, e.g. through time-division multiplexing techniques, and amplitude modulation for the realization of an input mask to address the virtual nodes in the delay dynamics. Two optoelectronic setups are explored, one involving a wavelength nonlinear dynamics with a tunable laser, and another involving an intensity nonlinear dynamics with an integrated-optics Mach-Zehnder modulator. Experimental validation of the computational efficiency is performed through two standard benchmark tasks: the NARMA10 test (prediction task) and a spoken digit recognition test (classification task), the latter showing results very close to state-of-the-art performances, even compared with purely numerical simulation approaches.
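The three-part structure described here translates almost line for line into a conventional software echo state network, the baseline against which such hardware implementations are judged. Below is a minimal Python/NumPy sketch evaluated on the NARMA10 benchmark mentioned in the abstract; the reservoir size, scaling constants and ridge parameter are illustrative choices, not the values used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# NARMA10 benchmark: 10th-order nonlinear autoregressive moving average.
def narma10(T):
    u = rng.uniform(0, 0.5, T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t] + 0.1)
    return u, y

N, T, washout = 200, 4000, 200
u, y = narma10(T)

# Input weights and reservoir are random and fixed; they are never trained.
W_in = rng.uniform(-0.1, 0.1, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

# Drive the reservoir and collect its states.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Train only the linear readout, here by ridge regression.
A, b = X[washout:], y[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)
nrmse = np.sqrt(np.mean((A @ W_out - b) ** 2) / np.var(b))
print(f"NARMA10 training NRMSE: {nrmse:.3f}")
```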
Bai, Kang Jun. "Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103711.
Deep learning strategies are the cutting edge of artificial intelligence, in which artificial neural networks are trained to extract key features or find similarities in raw sensory information. This is made possible through multiple processing layers with a colossal amount of neurons, in a way similar to humans. Deep learning strategies running on von Neumann computers are deployed worldwide. However, in today's data-driven society, the use of general-purpose computing systems and cloud infrastructures can no longer offer a timely response, while exposing significant security issues. With the introduction of neuromorphic architectures, application-specific integrated circuit chips have paved the way for machine intelligence applications in recent years. The major contributions in this dissertation include designing and fabricating a new class of hybrid neural computing architecture and applying various deep learning strategies to diverse machine intelligence applications. The resulting hybrid neural computing architecture offers an alternative solution to accelerate the neural computations required for sophisticated machine intelligence applications with a simple system-level design, therefore opening the door to low-power system-on-chip design for future intelligent computing; what is more, it provides prominent design solutions and performance improvements for internet-of-things applications.
Vinckier, Quentin. "Analog bio-inspired photonic processors based on the reservoir computing paradigm." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/237069.
Nowshin, Fabiha. "Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103854.
In recent years, neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and using much less power than traditional von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons, where artificial neurons or neurodes are connected together via synapses, similar to the nervous system in the human body. There are two main types of ANNs: the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications, by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike-interval encoding scheme to convert the incoming input signal to spikes, and use a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition, and demonstrate an accuracy of 87% in software through MNIST simulations.
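The in-memory computation performed by an ideal memristive crossbar reduces to Ohm's and Kirchhoff's laws: each column current is the dot product of the row voltages with that column's conductances. A toy sketch of this operation, with all device values invented and non-idealities (sneak paths, noise, device nonlinearity) ignored:

```python
import numpy as np

rng = np.random.default_rng(2)

# Conductance matrix G (siemens): each crosspoint stores one synaptic weight.
rows, cols = 8, 4
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))

# Input spikes encoded as read voltages on the rows (an inter-spike-interval
# scheme would derive these from spike timing; here they are arbitrary).
V = rng.uniform(0.0, 0.2, size=rows)

# Kirchhoff's current law: column currents are weighted sums, I = G^T V.
I = G.T @ V
print("column currents (A):", I)
```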
Cazin, Nicolas. "A replay driven model of spatial sequence learning in the hippocampus-prefrontal cortex network using reservoir computing." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1133/document.
As rats learn to search for multiple sources of food or water in a complex environment, processes of spatial sequence learning and recall in the hippocampus (HC) and prefrontal cortex (PFC) take place. Recent studies (De Jong et al. 2011; Carr, Jadhav, and Frank 2011) show that spatial navigation in the rat hippocampus involves the replay of place-cell firing during awake and sleep states, generating small contiguous subsequences of spatially related place-cell activations that we will call "snippets". These "snippets" occur primarily during sharp-wave-ripple (SPWR) events. Much attention has been paid to replay during sleep in the context of long-term memory consolidation. Here we focus on the role of replay during the awake state, as the animal is learning across multiple trials. We hypothesize that these "snippets" can be used by the PFC to achieve multi-goal spatial sequence learning. We propose to develop an integrated model of HC and PFC that is able to form place-cell activation sequences based on snippet replay. The proposed collaborative research will extend an existing spatial cognition model for simpler goal-oriented tasks (Barrera and Weitzenfeld 2008; Barrera et al. 2015) with a new replay-driven model for memory formation in the hippocampus and spatial sequence learning and recall in the PFC. In contrast to existing work on sequence learning that relies heavily on sophisticated learning algorithms and synaptic modification rules, we propose to use an alternative computational framework known as reservoir computing (Dominey 1995), in which large pools of prewired neural elements process information dynamically through reverberations. This reservoir computational model will consolidate snippets into larger place-cell activation sequences that may later be recalled by subsets of the original sequences. The proposed work is expected to generate a new understanding of the role of replay in memory acquisition in complex tasks such as sequence learning. That operational understanding will be leveraged and tested on an embodied-cognitive real-time robotic framework, related to the animat paradigm (Wilson 1991) [etc...]
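The "snippets" central to this model are simply short contiguous subsequences of a longer place-cell activation sequence. A hedged sketch of how such replay fragments could be sampled as training material; the snippet lengths and uniform sampling policy are illustrative assumptions, not the model's actual replay statistics:

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_snippets(sequence, n, min_len=3, max_len=6):
    """Draw n contiguous subsequences ('snippets') from a place-cell sequence."""
    snippets = []
    for _ in range(n):
        length = rng.integers(min_len, max_len + 1)
        start = rng.integers(0, len(sequence) - length + 1)
        snippets.append(sequence[start:start + length])
    return snippets

trajectory = list("ABCDEFGH")          # ordered place-cell activations
for s in sample_snippets(trajectory, 4):
    print("".join(s))                   # e.g. CDE, FGH, ...
```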
Gargesa, Padmashri. "Reward-driven Training of Random Boolean Network Reservoirs for Model-Free Environments." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/669.
Baylon, Fuentes Antonio. "Ring topology of an optical phase delayed nonlinear dynamics for neuromorphic photonic computing." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2047/document.
Nowadays most computers are still based on concepts developed more than 60 years ago by Alan Turing and John von Neumann. However, these digital computers have already begun to reach certain physical limits of their implementation via silicon microelectronics technology (dissipation, speed, integration limits, energy consumption). Alternative approaches, more powerful, more efficient and less energy-consuming, have constituted a major scientific issue for several years. Many of these approaches naturally draw inspiration from the human brain, whose operating principles are still far from being understood. In this line of research, a surprising variation of the recurrent neural network (RNN), simpler and sometimes even more efficient for certain features or processing cases, appeared in the early 2000s and is now known as Reservoir Computing (RC), an emerging brain-inspired computational paradigm. Its structure is quite similar to classical RNN computing concepts, exhibiting generally three parts: an input layer to inject the information into a nonlinear dynamical system (Write-In), a second layer where the input information is projected into a high-dimensional space called the dynamical reservoir, and an output layer from which the processed information is extracted through a so-called Read-Out function. In the RC approach the learning procedure is performed in the output layer only, while the input and reservoir layers are randomly fixed, which is the main originality of RC compared to RNN methods. This feature allows greater efficiency, rapidity and learning convergence, as well as providing an experimental implementation solution. This PhD thesis is dedicated to one of the first photonic RC implementations using telecommunication devices. Our experimental implementation is based on a nonlinear delayed dynamical system, which relies on an electro-optic (EO) oscillator with differential phase modulation. This EO oscillator was extensively studied in the context of optical chaos cryptography. Dynamics exhibited by such systems are indeed known to develop complex behaviors in an infinite-dimensional phase space, and analogies with space-time dynamics (of which neural networks are a kind) are also found in the literature. Such peculiarities of delay systems supported the idea of replacing the traditional RNN (usually difficult to design technologically) with a nonlinear EO delay architecture. In order to evaluate the computational power of our RC approach, we implemented two spoken digit recognition tests (classification tests) taken from standard databases in artificial intelligence, TI-46 and AURORA-2, obtaining results very close to state-of-the-art performances and establishing the state of the art in classification speed. Our photonic RC approach allowed us to process around 1 million words per second, improving the information processing speed by a factor of ~3.
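The virtual-node idea mentioned above, a single nonlinear delay loop emulating a whole network by time multiplexing, can be caricatured in discrete time. The sketch below follows the generic masked delay-reservoir recipe rather than the specific electro-optic phase dynamics of this thesis; the mask values, gains and node count are arbitrary. The wrap-around index x[i - 1] closes the chain of virtual nodes into a ring, echoing the ring topology of the title.

```python
import numpy as np

rng = np.random.default_rng(3)

N_virtual, T = 50, 300                        # virtual nodes per delay span, input steps
mask = rng.choice([-0.1, 0.1], N_virtual)     # fixed random input mask
u = rng.uniform(0, 0.5, T)                    # scalar input sequence

# One delay span holds N_virtual samples; each node mixes its own value from
# one delay ago, its neighbour in the ring, and the masked input.
x = np.zeros(N_virtual)
states = np.zeros((T, N_virtual))
for t in range(T):
    for i in range(N_virtual):
        # x[i-1] wraps to the last node at i = 0, closing the ring.
        x[i] = np.tanh(0.6 * x[i] + 0.2 * x[i - 1] + mask[i] * u[t])
    states[t] = x

# 'states' plays the role of the reservoir state matrix; a linear readout
# would be trained on it exactly as in a standard ESN.
print(states.shape)
```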
Srinivasan, Gopalakrishnan. "Training Spiking Neural Networks for Energy-Efficient Neuromorphic Computing." Thesis, 2019.
Spiking Neural Networks (SNNs), widely known as the third generation of artificial neural networks, offer a promising solution to approaching the brain's processing capability for cognitive tasks. With a more biologically realistic perspective on input processing, an SNN performs neural computations using spikes in an event-driven manner. The asynchronous spike-based computing capability can be exploited to achieve improved energy efficiency in neuromorphic hardware. Furthermore, SNNs, on account of spike-based processing, can be trained in an unsupervised manner using Spike Timing Dependent Plasticity (STDP). STDP-based learning rules modulate the strength of a multi-bit synapse based on the correlation between the spike times of the input and output neurons. In order to achieve plasticity with compressed synaptic memory, a stochastic binary synapse is proposed where spike timing information is embedded in the synaptic switching probability. A bio-plausible probabilistic-STDP learning rule consistent with Hebbian learning theory is proposed to train a network of binary as well as quaternary synapses. In addition, a hybrid probabilistic-STDP learning rule incorporating Hebbian and anti-Hebbian mechanisms is proposed to enhance the learnt representations of the stochastic SNN. The efficacy of the presented learning rules is demonstrated for feed-forward fully-connected and residual convolutional SNNs on the MNIST and CIFAR-10 datasets.
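For reference, the deterministic pair-based STDP rule that the probabilistic variants above build upon can be written with exponential spike traces. The sketch below is the textbook rule, not the thesis's stochastic binary-synapse version; the time constants and learning rates are illustrative:

```python
import numpy as np

tau, A_plus, A_minus, dt = 20.0, 0.01, 0.012, 1.0

def stdp_weight_change(pre_spikes, post_spikes, T):
    """Pair-based STDP: potentiate on post-after-pre, depress on pre-after-post."""
    pre_trace, post_trace, dw = 0.0, 0.0, 0.0
    for t in np.arange(0, T, dt):
        pre_trace *= np.exp(-dt / tau)        # traces decay exponentially
        post_trace *= np.exp(-dt / tau)
        if t in pre_spikes:
            pre_trace += 1.0
            dw -= A_minus * post_trace        # pre after post -> depression
        if t in post_spikes:
            post_trace += 1.0
            dw += A_plus * pre_trace          # post after pre -> potentiation
    return dw

print(stdp_weight_change({10.0}, {15.0}, 50))   # post follows pre: dw > 0
print(stdp_weight_change({15.0}, {10.0}, 50))   # pre follows post: dw < 0
```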
STDP-based learning is limited to shallow SNNs (<5 layers), yielding lower than acceptable accuracy on complex datasets. This thesis proposes a block-wise complexity-aware training algorithm, referred to as BlocTrain, for incrementally training deep SNNs with reduced memory requirements using spike-based backpropagation through time. The deep network is divided into blocks, where each block consists of a few convolutional layers followed by an auxiliary classifier. The blocks are trained sequentially using local errors from the respective auxiliary classifiers. Also, the deeper blocks are trained only on the hard classes determined using the class-wise accuracy obtained from the classifiers of previously trained blocks. Thus, BlocTrain improves the training time and computational efficiency with increasing block depth. In addition, higher computational efficiency is obtained during inference by exiting early for easy class instances and activating the deeper blocks only for hard class instances. The ability of BlocTrain to provide improved accuracy as well as higher training and inference efficiency compared to end-to-end approaches is demonstrated for deep SNNs (up to 11 layers) on the CIFAR-10 and CIFAR-100 datasets.
Feed-forward SNNs are typically used for static image recognition, while recurrent Liquid State Machines (LSMs) have been shown to encode time-varying speech data. Liquid-SNN, consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked reservoir of spiking neurons (or liquid), is proposed for unsupervised speech and image recognition. The strengths of the synapses interconnecting the input and the liquid are trained using STDP, which makes it possible to infer the class of a test pattern without the readout layer typical in standard LSMs. The Liquid-SNN suffers from scalability challenges due to the need to primarily increase the number of neurons to enhance accuracy. SpiLinC, composed of an ensemble of multiple liquids, where each liquid is trained on a unique input segment, is proposed as a scalable model to achieve improved accuracy. SpiLinC recognizes a test pattern by combining the spiking activity of the individual liquids, each of which identifies unique input features. As a result, SpiLinC offers accuracy comparable to Liquid-SNN with added synaptic sparsity and faster training convergence, which is validated on the digit subset of the TI46 speech corpus and the MNIST dataset.
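The ensemble principle behind SpiLinC (each liquid sees only one segment of the input, and their activities are combined) can be illustrated with rate-based stand-ins for the spiking liquids. Only the segmentation and concatenation below are meant to mirror the text; the reservoir dynamics and all sizes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(4)

n_liquids, seg, N = 4, 196, 100        # e.g. a 784-pixel input split 4 ways
W_in = [rng.normal(0, 0.1, (N, seg)) for _ in range(n_liquids)]
W_res = [rng.normal(0, 1 / np.sqrt(N), (N, N)) for _ in range(n_liquids)]

def ensemble_state(image):
    """Each liquid processes its own input segment; activities are concatenated."""
    parts = []
    for k in range(n_liquids):
        u = image[k * seg:(k + 1) * seg]          # this liquid's segment
        x = np.tanh(W_in[k] @ u)                  # input drive
        x = np.tanh(W_res[k] @ x + W_in[k] @ u)   # one recurrent step
        parts.append(x)
    return np.concatenate(parts)                  # combined ensemble activity

print(ensemble_state(rng.random(784)).shape)       # (400,)
```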
Ghani, A., Chan H. See, Hassan S. O. Migdadi, Rameez Asif, Raed A. Abd-Alhameed, and James M. Noras. "Reconfigurable neurons - making the most of configurable logic blocks (CLBs)." 2015. http://hdl.handle.net/10454/9152.
An area-efficient hardware architecture used to map fully parallel cortical columns onto Field Programmable Gate Arrays (FPGAs) is presented in this paper. To demonstrate the concept of this work, the proposed architecture is shown at the system level and benchmarked with image and speech recognition applications. The spatio-temporal nature of spiking neurons has allowed such architectures to be mapped onto FPGAs, in which communication can be performed through the use of spikes and signals can be represented in binary form. The process and viability of designing and implementing multiple recurrent neural reservoirs with a novel multiplier-less reconfigurable architecture are described.
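A multiplier-less datapath of the kind referred to above typically constrains synaptic weights to signed powers of two, so that each multiplication becomes a bit shift and, since spikes are binary, accumulation reduces to conditional adds. An illustrative sketch of this idea, not the paper's actual CLB mapping:

```python
# Multiplier-less synaptic accumulation: weights restricted to powers of two
# (stored as shift amounts), spikes binary, so no hardware multiplier is needed.
def accumulate(spikes, shifts, signs):
    membrane = 0
    for spike, shift, sign in zip(spikes, shifts, signs):
        if spike:                              # binary spike: add or skip
            membrane += sign * (1 << shift)    # weight = ±2^shift, via bit shift
    return membrane

spikes = [1, 0, 1, 1]
shifts = [3, 5, 1, 0]        # weight magnitudes 8, 32, 2, 1
signs  = [+1, +1, -1, +1]
print(accumulate(spikes, shifts, signs))       # 8 - 2 + 1 = 7
```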
Vilimelis, Aceituno Pau. "Structure, Dynamics and Self-Organization in Recurrent Neural Networks: From Machine Learning to Theoretical Neuroscience." 2020. https://ul.qucosa.de/id/qucosa%3A71390.
Повний текст джерела"Predicting and Controlling Complex Dynamical Systems." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57150.
Castellano, Marta. "Computational Principles of Neural Processing: modulating neural systems through temporally structured stimuli." Doctoral thesis, 2014. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2014121112959.
Daouda, Tariq. "Génération et reconnaissance de rythmes au moyen de réseaux de neurones à réservoir." Thèse, 2010. http://hdl.handle.net/1866/4931.
Reservoir computing, the combination of a recurrent neural network and one or more memoryless readout units, has seen recent growth in popularity in machine learning, signal processing and computational neuroscience. Reservoir-based methods have been successfully applied to a wide range of time-series problems [11][64][49][45][38], including music [30], and usually come in two flavours: Echo State Networks (ESN) [29], where the reservoir is composed of mean-rate neurons, and Liquid State Machines (LSM) [43], where the reservoir is composed of spiking neurons. In this work, we propose two new models based upon the ESN architecture. The first one is a model for rhythm recognition that uses two levels of learning, with which we obtained satisfying results on both recognition and noise resistance. The second one is a model for learning and generating periodic sequences; with this model we introduce a new architecture for generative models based upon ESNs, where the reservoir receives inputs from a clock, as well as a new learning algorithm that we call "Orbite". By combining these two elements within our model, we obtained good results on generation, over-fitting and data extraction. We also believe that a combination of several instances of our model can serve as a basis for the elaboration of an entirely virtual orchestra, and we propose two architectures that this orchestra may have. In the last part of this work, we briefly present the tools that we developed during our research.
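The clock-driven generative architecture can be caricatured as follows: the reservoir is driven only by a periodic clock signal, and a readout is trained to reproduce a periodic target. Plain ridge regression stands in below for the thesis's "Orbite" algorithm, whose details are not reproduced here; all sizes and the period are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 100, 2000

clock = (np.arange(T) % 50) / 50.0                # sawtooth clock, period 50 steps
target = np.sin(2 * np.pi * np.arange(T) / 50)    # periodic sequence to generate

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the reservoir stable

X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * clock[t])          # the reservoir hears only the clock
    X[t] = x

# Ridge-regression readout (a stand-in for "Orbite", which is not described here).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)
print("max generation error:", np.abs(X @ W_out - target).max())
```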
The sound files accompanying this document are in MIDI format. The program we developed for this work is written in Python.