To view other types of publications on this topic, follow the link: Reservoir computing networks.

Dissertations on the topic "Reservoir computing networks"


Browse the top 30 dissertations for research on the topic "Reservoir computing networks".

Next to every entry in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the record's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Fu, Kaiwei. "Reservoir Computing with Neuro-memristive Nanowire Networks." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25900.

Abstract:
We present simulation results based on a model of self-assembled nanowire networks with memristive junctions and a neural network-like topology. We analyse the dynamical voltage distribution in response to an applied bias and explain the network conductance fluctuations observed in previous experimental studies. We show I–V curves under AC stimulation and compare these to other bulk memristors. We then study the capacity of these nanowire networks for neuro-inspired reservoir computing by demonstrating higher harmonic generation and short- and long-term memory. Benchmark tasks in a reservoir computing framework are implemented, including nonlinear wave transformation, wave auto-generation, and hand-written digit classification.
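As a point of reference for the I–V behaviour the abstract compares against, the sketch below simulates a single linear ion-drift memristor (in the style of the HP model) under sinusoidal bias; all parameter values are illustrative and are not fitted to nanowire junctions.

```python
import numpy as np

# Minimal linear ion-drift memristor model (after Strukov et al., 2008).
# All parameter values are illustrative, not fitted to nanowire junctions.
R_ON, R_OFF = 100.0, 16e3   # ohmic limits of the device
D, MU = 10e-9, 1e-14        # film thickness (m), ion mobility (m^2 s^-1 V^-1)

def simulate(v, dt):
    """Return the current through the memristor for a voltage waveform v."""
    w = 0.1 * D                 # internal state: width of the doped region
    i_out = np.empty_like(v)
    for k, vk in enumerate(v):
        x = w / D
        r = R_ON * x + R_OFF * (1.0 - x)   # series combination of both regions
        i = vk / r
        w += MU * (R_ON / D) * i * dt      # linear drift of the doped boundary
        w = min(max(w, 0.0), D)            # hard clipping at the film edges
        i_out[k] = i
    return i_out

t = np.linspace(0, 2e-3, 20000)
v = 1.0 * np.sin(2 * np.pi * 1e3 * t)      # 1 kHz, 1 V sinusoidal bias
i = simulate(v, t[1] - t[0])
# Plotting i against v traces the pinched hysteresis loop characteristic of
# memristive devices; the loop area shrinks as the drive frequency increases.
```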
2

Kulkarni, Manjari S. "Memristor-based Reservoir Computing." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.

Abstract:
In today's nanoscale era, scaling down to ever smaller feature sizes poses significant challenges for device fabrication and for circuit and system design and integration. On the other hand, nanoscale technology has also led to novel materials and devices with unique properties. The memristor is one such emergent nanoscale device: it exhibits nonlinear current-voltage characteristics and has an inherent memory property, i.e., its current state depends on its past. Both the nonlinearity and the memory property of memristors have the potential to enable solving spatial and temporal pattern recognition tasks in radically different ways from traditional binary transistor-based technology. The goal of this thesis is to explore the use of memristors in a novel computing paradigm called Reservoir Computing (RC). RC belongs to the class of artificial recurrent neural networks (RNN), but architecturally differs from traditional RNN techniques in that the pre-processor (i.e., the reservoir) is made up of random, recurrently connected nonlinear elements. Learning is implemented only at the readout (i.e., output) layer, which reduces the learning complexity significantly. To the best of our knowledge, memristors have never been used as reservoir components. We use pattern recognition and classification tasks as benchmark problems; real-world applications associated with these tasks include process control, speech recognition, and signal processing. We have built a software framework, RCspice (Reservoir Computing Simulation Program with Integrated Circuit Emphasis), for this purpose. The framework allows us to create random memristor networks, to simulate and evaluate them in Ngspice, and to train the readout layer by means of Genetic Algorithms (GA). We have explored reservoir-related parameters, such as network connectivity and reservoir size, along with the GA parameters. Our results show that we are able to efficiently and robustly classify time-series patterns using memristor-based dynamical reservoirs. This presents an important step towards computing with memristor-based nanoscale systems.
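The reservoir-plus-trained-readout structure this abstract describes is easiest to see in a minimal software echo state network. The sketch below is a generic illustration with illustrative sizes and a ridge-regression readout, not the RCspice/Ngspice framework of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000                     # reservoir size, sequence length (illustrative)

# Fixed random input and recurrent weights: only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale the spectral radius below 1

u = rng.uniform(-0.8, 0.8, T)        # input sequence
y = np.roll(u, 5)                    # toy target: recall the input 5 steps back

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # reservoir update; weights stay fixed
    states[t] = x

# Ridge-regression readout on the collected states (washout discarded).
X, Y = states[100:], y[100:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
print("train NRMSE:", np.sqrt(np.mean((X @ W_out - Y) ** 2)) / np.std(Y))
```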
3

Canaday, Daniel M. "Modeling and Control of Dynamical Systems with Reservoir Computing." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu157469471458874.

4

Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.

Abstract:
Computational intelligence techniques, such as artificial neural networks (ANNs), have been widely used to improve the performance of power system monitoring and control. Although inspired by the neurons in the brain, ANNs differ from living neuron networks (LNNs) in many aspects. Because of this oversimplification, the huge computational potential of LNNs cannot be realized by ANNs. A more brain-like artificial neural network is therefore highly desirable to bridge the gap between ANNs and LNNs. The focus of this research is to develop a biologically inspired artificial neural network (BIANN) that is not only biologically meaningful but also computationally powerful, and that can serve as a novel computational intelligence tool for monitoring, modeling, and control of power systems. A comprehensive survey of ANN applications in power systems is presented. It is shown that novel types of reservoir-computing-based ANNs, such as echo state networks (ESNs) and liquid state machines (LSMs), have stronger modeling capability than conventional ANNs. The feasibility of using ESNs as modeling and control tools is further investigated in two specific power system applications, namely power system nonlinear load modeling for true load harmonic prediction, and the closed-loop control of active filters for power quality assessment and enhancement. In both applications, ESNs provide satisfactory performance with low computational requirements. A comprehensive survey of the spiking models of living neurons, as well as of coding approaches, reviews the state of the art in BIANN research. The proposed BIANNs are based on spiking models of living neurons and adopt reservoir-computing approaches. They combine strong modeling capability with low computational requirements, which makes them well suited for online monitoring and control applications in power systems. BIANN-based modeling and control techniques are proposed and validated for the modeling and control of a generator in a single-machine infinite-bus system under various operating conditions and disturbances. It is shown that the proposed BIANN-based techniques provide faster and more accurate control of the power system, enhancing its reliability and tolerance to disturbances. The conclusions, recommendations for future research, and the major contributions of this work are presented at the end.
5

Alomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.

Abstract:
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism, and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. In addition, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics, and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed, and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
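One recurring lever in resource-constrained digital RC designs of this kind is fixed-point arithmetic. As a hedged illustration (my own sketch, not a method taken from the thesis), the code below quantizes reservoir weights and states to a signed fixed-point format and measures the drift from a floating-point reference:

```python
import numpy as np

def to_fixed(a, frac_bits=8, word_bits=16):
    """Quantize to signed fixed point with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    lo, hi = -2 ** (word_bits - 1), 2 ** (word_bits - 1) - 1
    return np.clip(np.round(a * scale), lo, hi) / scale

rng = np.random.default_rng(1)
N = 100
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # stable reservoir gain
W_in = rng.uniform(-0.5, 0.5, N)
Wq, W_inq = to_fixed(W), to_fixed(W_in)     # quantized copies of the weights

x_f = np.zeros(N)            # floating-point reference state
x_q = np.zeros(N)            # fixed-point state
for t in range(200):
    u = np.sin(0.1 * t)
    x_f = np.tanh(W @ x_f + W_in * u)
    x_q = to_fixed(np.tanh(Wq @ x_q + W_inq * u))   # re-quantize every step
print("state divergence:", np.linalg.norm(x_f - x_q) / np.linalg.norm(x_f))
```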
6

Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.

Abstract:
Many living organisms can reliably execute complex behaviors and cognitive processes. In many cases, such tasks are generated in the absence of any ongoing external input that could drive the activity of the underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist. Patterns of activity in the brain areas responsible for this behaviour must therefore be endogenously generated every time an individual performs the action. How such a neural code is transformed into the target motor sequence is a question of its own; the origin of the code is perhaps even more puzzling, since most models of cortical and sub-cortical circuits suggest that many of their neural populations are chaotic: very small amounts of noise, such as a single additional action potential in one neuron of a network, can lead to completely different patterns of activity. Reservoir computing is one of the first frameworks that provided an efficient solution for biologically relevant neural networks to learn complex temporal tasks in the presence of chaos. We showed that although reservoirs (i.e. recurrent neural networks) are robust to noise, they are extremely sensitive to some forms of structural perturbation, such as removing one neuron out of thousands. We propose an alternative to these models, in which the source of autonomous activity no longer originates in the reservoir but in a set of oscillating networks projecting to it. In our simulations, we show that this solution produces rich patterns of activity and leads to networks that are resistant to both noise and structural perturbations. The model can learn a wide variety of temporal tasks such as interval timing, motor control, speech production, and spatial navigation.
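The core idea, autonomous activity supplied by oscillators that project into an otherwise stable reservoir, can be caricatured in a few lines. The sketch below uses rate units and illustrative sizes and frequencies rather than the spiking networks of the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, T, dt = 300, 5, 4000, 1e-3

W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))      # sub-chaotic recurrent gain
W_osc = rng.normal(0, 1, (N, K))               # projection from the oscillators

freqs = np.array([1.0, 2.3, 3.7, 5.1, 7.9])    # oscillator frequencies (Hz)
x = np.zeros(N)
traj = np.empty((T, N))
for t in range(T):
    osc = np.sin(2 * np.pi * freqs * t * dt)   # the oscillator bank
    x = np.tanh(W @ x + W_osc @ osc)           # reservoir driven by oscillations
    traj[t] = x
# Restarting from x = 0 reproduces `traj` exactly, because the autonomous
# drive lives in the oscillators rather than in chaotic recurrence; a readout
# trained on traj therefore sees a rich but reproducible trajectory.
```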
7

Almassian, Amin. "Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2724.

Abstract:
Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to the neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm that is inspired by cortical Neural Networks (NN) and is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called the reservoir, whose state is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. The Echo-State Network (ESN) [1] and the Liquid-State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons – and, more recently, Leaky Integrator (LI) neurons – and a normalized random connectivity matrix in the reservoir. The reservoir in the latter, by contrast, is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, which are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their neuron model dynamics and their inter-neuron communication mechanism. However, RC systems share a mysterious common property: they exhibit the best performance when the reservoir dynamics undergo a criticality [1–6] – governed by the reservoir's connectivity parameters, |λmax| ≈ 1 in the ESN, and λ ≈ 2 and w in the LSM – which is referred to as the edge of chaos in [3–5]. In this study, we explore the possible reasons for this commonality, despite the differences imposed by the different neuron types on the reservoir dynamics. We address this question from the perspective of information representation in both spiking and non-spiking reservoirs. We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-train input, as well as that between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive a Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with performance on the temporal parity task. We complement our investigation with isolated spoken-digit recognition and spoken-digit sequence-recognition tasks, hypothesizing that a performance analysis of these two tasks will agree with our MI and MCMI results regarding the impact of stable memory on task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter is greater than a critical value), the absence of stable memory in the reservoir is an evident cause of the performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a higher stable memory of the input (quantified by input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons on the temporal parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons on spoken-digit recognition, while the sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and the corresponding output-reservoir MCMIs are 2.92, 2.51, and 2.47. In our isolated spoken-digit recognition experiments, the maximum mean performances achieved by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons are 97%, 79%, and 2%, respectively; the reservoirs with N = 100 neurons solve the task with 80%, 68%, and 0.9%, respectively. Our study sheds light on the impact of the reservoir's information representation and memory on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems for computing spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
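The criticality described above is, in ESN practice, imposed by rescaling the recurrent weight matrix so that its spectral radius |λmax| sits near 1. The sketch below (generic ESN practice, not the measurement pipeline of the thesis) shows how perturbations fade below the critical point and grow beyond it:

```python
import numpy as np

def scaled_reservoir(n, spectral_radius, seed=0):
    """Random reservoir matrix rescaled to a target spectral radius."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1, (n, n))
    return W * (spectral_radius / max(abs(np.linalg.eigvals(W))))

# Sweep across the ordered / critical / chaotic regimes.
for rho in (0.5, 1.0, 1.5):
    W = scaled_reservoir(500, rho)
    x, x2 = np.zeros(500), 1e-6 * np.ones(500)   # two nearby initial states
    for _ in range(300):
        x, x2 = np.tanh(W @ x), np.tanh(W @ x2)
    print(f"rho={rho}: separation after 300 steps = {np.linalg.norm(x - x2):.2e}")
# Below the critical value perturbations die out (fading memory); above it
# they grow, which matches the performance collapse reported in the abstract.
```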
8

Bazzanella, Davide. "Microring Based Neuromorphic Photonics." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/344624.

Abstract:
This manuscript investigates the use of microring resonators to create all-optical reservoir-computing networks implemented in silicon photonics. Artificial neural networks and reservoir computing are promising applications for integrated photonics, as they could make use of the bandwidth and the intrinsic parallelism of optical signals. This work illustrates two main aspects: the modelling of photonic integrated circuits and the experimental results obtained with all-optical devices. The modelling of photonic integrated circuits is examined in detail, both concerning fundamental theory and from the point of view of numerical simulations. In particular, the simulations focus on the nonlinear effects present in integrated optical cavities, which increase the inherent complexity of their optical response. Toward this objective, I developed a new numerical tool, precise, which can simulate arbitrary circuits, taking into account both linear propagation and nonlinear effects. The experimental results concentrate on the use of SCISSORs and of a single microring resonator as reservoirs, and on the complex perceptron scheme. The devices have been extensively tested with logical operations, achieving bit error rates of less than 10^-5 at 16 Gbps in the case of the complex perceptron. Additionally, an in-depth explanation of the experimental setup and a description of the manufactured designs are provided. The achievements reported in this work mark an encouraging first step toward the development of novel networks that employ the full potential of all-optical devices.
9

Röhm, André [Verfasser], Kathy [Akademischer Betreuer] Lüdge, Kathy [Gutachter] Lüdge, and Ingo [Gutachter] Fischer. "Symmetry-Breaking bifurcations and reservoir computing in regular oscillator networks / André Röhm ; Gutachter: Kathy Lüdge, Ingo Fischer ; Betreuer: Kathy Lüdge." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1183789491/34.

10

Enel, Pierre. "Représentation dynamique dans le cortex préfrontal : comparaison entre reservoir computing et neurophysiologie du primate." PhD thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-01056696.

Abstract:
Primates must be able to recognize new situations in order to adapt to them. How such situations are represented in cortical activity is the subject of this thesis. Complex situations are often explained by the interaction between sensory, internal, and motor information. Single-unit activities known as mixed selectivity, which are very common in the prefrontal cortex (PFC), are a possible mechanism for representing arbitrary interactions between pieces of information. In parallel, reservoir computing has shown that recurrent networks have the property of recombining current and past inputs in a higher-dimensional space, providing a potentially universal pre-coding of combinations that can then be selected and used according to their relevance to the current task. Combining these two approaches, we argue that the highly recurrent nature of local PFC connectivity gives rise to a dynamic form of mixed selectivity. Moreover, we attempt to show that a simple linear regression, implementable by a single neuron, can extract any information or contingency encoded in these complex, dynamic combinations. Finally, the previous inputs to these PFC networks, whether sensory or motor, must be maintained in order to influence ongoing processing. We argue that representations of the contexts defined by these previous inputs must be expressed explicitly and fed back to the local PFC networks to influence the current combinations underlying the representation of contingencies.
11

Passey, Jr David Joseph. "Growing Complex Networks for Better Learning of Chaotic Dynamical Systems." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8146.

Abstract:
This thesis advances the theory of network specialization by characterizing the effect of network specialization on the eigenvectors of a network. We prove and provide explicit formulas for the eigenvectors of specialized graphs based on the eigenvectors of their parent graphs. The second portion of this thesis applies network specialization to learning problems. Our work focuses on training reservoir computers to mimic the Lorenz equations. We experiment with random-graph, preferential-attachment, and small-world topologies and demonstrate that the random removal of directed edges increases the predictive capability of a reservoir topology. We then create a new network model by growing networks via targeted application of the specialization model. This is accomplished iteratively by selecting top-performing nodes within the reservoir computer and specializing them. Our generated topology outperforms all other topologies on average.
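The edge-removal experiment mentioned in the abstract is simple to reproduce in outline. The sketch below (my own illustration, not the author's specialization code) prunes a random fraction of directed edges from a dense random reservoir and rescales the result:

```python
import numpy as np

def prune_edges(W, fraction, rng):
    """Zero out a random fraction of the directed edges of W, then rescale."""
    mask = rng.random(W.shape) >= fraction   # keep each edge with prob 1-fraction
    Wp = W * mask
    radius = max(abs(np.linalg.eigvals(Wp)))
    return Wp * (0.9 / radius) if radius > 0 else Wp

rng = np.random.default_rng(3)
W = rng.normal(0, 1, (300, 300))
for f in (0.0, 0.25, 0.5, 0.75):
    Wp = prune_edges(W, f, rng)
    print(f"pruned {f:.0%}: nonzero edges = {np.count_nonzero(Wp)}")
# Each pruned matrix would then be plugged into an ESN training loop and
# scored on a forecasting task (the thesis uses the Lorenz system).
```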
12

Baldini, Paolo. "Online adaptation of robots controlled by nanowire networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25396/.

Abstract:
The use of current computational systems is sometimes problematic in robotics, as their cost and power consumption can be too high for low-cost robots. Additionally, their computational capabilities are considerable, but not always suited to real-time operation. These limits drastically reduce the range of applications that can be envisioned; a basic example is micro- and nanobots, which add the further issue of size. The need for different technologies is therefore strong. Here, one alternative computational system is proposed and used: nanowire networks. These are a novel type of electrical circuit that exhibits rich dynamical properties. Their low cost and power consumption, in addition to their high computational capabilities, make them a strong candidate for robotic applications. Here, this possibility is assessed by evaluating their use as robot controllers. The research begins with preliminary studies of their behaviour and considerations about their use. This initial analysis is then used to define an online, adaptive learning approach, allowing the robot to exploit the network to adapt to different tasks and environments. The tested capabilities are: simple collision avoidance, with fault-tolerance considerations; a fast, reactive behaviour to avoid illegal areas; and a memory-aware behaviour that can navigate a maze according to an initial stimulus. The results support the promising capabilities of the robotic controller, and the power of the online adaptation is clearly shown. This thesis thereby paves the way for a new type of computation in robotics, allowing plastic, fault-tolerant, cheap, and efficient systems to be developed.
13

Antonik, Piotr. "Application of FPGA to real-time machine learning: hardware reservoir computers and software image processing." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/257660.

Abstract:
Reservoir computing is a set of techniques that simplifies the use of artificial neural networks. Experimental realizations of this concept, notably optical ones, have shown performance close to the state of the art in recent years. The high speed of the optical experiments does not allow real-time intervention with a standard computer. In this work, we use a very fast Field-Programmable Gate Array (FPGA) to interact with the experiment in real time, which makes it possible to develop new functionalities. Four experiments were carried out in this framework. The first aimed to implement an online training algorithm, optimizing the parameters of the neural network in real time. We showed that such a system was able to accomplish realistic tasks whose requirements varied over time. The goal of the second experiment was to create an optical reservoir computer whose input weights could be optimized with the backpropagation-through-time algorithm. The experiment showed that this idea is entirely feasible, despite a few technical difficulties encountered. We tested the resulting system on complex tasks (beyond the capabilities of classical reservoir computers) and obtained results close to the state of the art. In the third experiment, we fed our optical reservoir computer back onto itself so that it could generate time series autonomously. The system was tested successfully on periodic series and chaotic attractors. The experiment also allowed us to highlight the effects of experimental noise in closed-loop systems. The fourth experiment, although numerical, aimed at the development of an analog output layer. We verified that the online training method developed previously was robust against all the experimental problems studied; we therefore have all the information needed to realize this idea experimentally. Finally, during the last months of my thesis, I carried out an internship whose goal was to apply my knowledge of FPGA programming and artificial neural networks to a concrete problem in cardiovascular imaging. We developed a program capable of analyzing images in real time, suitable for clinical applications.
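The online training of the first experiment can be illustrated, in software, by a least-mean-squares update of the readout weights while the reservoir runs. The sketch below is a generic online learner on simulated states, standing in for the FPGA loop; the hardware details are in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, lr = 100, 5000, 0.05

W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)
w_out = np.zeros(N)                    # readout weights, adapted on the fly

x = np.zeros(N)
u_hist = np.zeros(3)
for t in range(T):
    u = rng.uniform(-0.8, 0.8)
    u_hist = np.roll(u_hist, 1); u_hist[0] = u
    x = np.tanh(W @ x + W_in * u)
    target = u_hist[2]                 # toy task: recall the input two steps back
    y = w_out @ x
    w_out += lr * (target - y) * x     # LMS update, one step per sample
# Because only w_out changes, the rule also tracks targets that drift over
# time, which is the point of online training in the thesis.
```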
14

Strock, Anthony. "Mémoire de travail dans les réseaux de neurones récurrents aléatoires." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0195.

Abstract:
Working memory can be defined as the ability to temporarily store and manipulate information of any kind. For example, imagine that you are asked to mentally add a series of numbers. In order to accomplish this task, you need to keep track of a partial sum that must be updated every time a new number is given. Working memory is precisely what makes it possible to maintain (i.e. temporarily store) the partial sum and to update it (i.e. manipulate it). In this thesis, we propose to explore neuronal implementations of this working memory using a limited number of hypotheses. To do so, we place ourselves in the general context of recurrent neural networks and, in particular, the reservoir computing paradigm. This very simple type of model nevertheless produces dynamics that learning can take advantage of to solve a given task. In this work, the task to be performed is a gated working memory task. The model receives as input a signal that controls the update of the memory. When the gate is closed, the model must maintain its current memory state, while when it is open, it must update that state based on an input. In our approach, this additional input is present at all times, even when there is no update to perform. In other words, we require our model to be an open system, i.e. a system that is always perturbed by its inputs but that must nevertheless learn to keep a stable memory. In the first part of this work, we present the architecture of the model and its properties, then show its robustness through a parameter sensitivity study. This shows that the model is extremely robust over a wide range of parameters: more or less any random population of neurons can be used to perform gating. Furthermore, after learning, we highlight an interesting property of the model, namely that information can be maintained in a fully distributed manner, i.e. without being correlated with any single neuron but only with the dynamics of the group. More precisely, working memory is not correlated with the sustained activity of neurons, something that has long been reported in the literature and has recently been questioned experimentally; this model confirms those findings at the theoretical level. In the second part of this work, we show how these models obtained by learning can be extended in order to manipulate the information in the latent space. To this end, we consider conceptors, which can be thought of as a set of synaptic weights that constrain the dynamics of the reservoir and direct it towards particular subspaces, for example subspaces corresponding to the maintenance of a particular value. More generally, we show that conceptors can maintain not only information but also functions. In the case of the mental arithmetic mentioned above, conceptors make it possible to recall and apply the operation to be carried out on the various inputs given to the system. Conceptors therefore make it possible to instantiate a procedural memory in addition to the declarative working memory. We conclude this work by putting this theoretical model into perspective with respect to biology and the neurosciences.
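The gated working-memory task itself is easy to state as data: at every step the model sees a value channel and a gate channel, and must output the value seen at the last open gate. A sketch of the task generator (illustrative shapes; the thesis pairs it with a reservoir):

```python
import numpy as np

def gated_memory_task(T, p_open=0.05, seed=0):
    """Input: (value, gate) at each step. Target: value at the last open gate."""
    rng = np.random.default_rng(seed)
    value = rng.uniform(-1, 1, T)          # always-present, always-perturbing input
    gate = (rng.random(T) < p_open).astype(float)
    target = np.empty(T)
    held = 0.0
    for t in range(T):
        if gate[t] == 1.0:                 # gate open: update the memory
            held = value[t]
        target[t] = held                   # gate closed: maintain it
    return np.stack([value, gate], axis=1), target

X, y = gated_memory_task(1000)
# The reservoir must ignore the continuously varying value channel while the
# gate is closed -- an "open system" that still holds a stable memory.
```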
15

Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.

Abstract:
This work introduces novel neural network algorithms for online spatio-temporal pattern processing by extending the Incremental Gaussian Mixture Network (IGMN). The IGMN algorithm is an online incremental neural network that learns from a single scan through the data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with Locally Weighted Regression (LWR). Four different approaches are used to give temporal processing capabilities to the IGMN algorithm: time-delay lines (Time-Delay IGMN), a reservoir layer (Echo-State IGMN), an exponential moving average of the reconstructed input vector (Merge IGMN), and self-referencing (Recursive IGMN). This results in algorithms that are online, incremental, aggressive, and temporally capable, and therefore suitable for tasks with memory or unknown internal states, characterized by continuous non-stopping data flows, that require life-long learning while operating and giving predictions without separate training and execution stages. The proposed algorithms are compared to other spatio-temporal neural networks on 8 time-series prediction tasks. Two of them show satisfactory performance, generally improving upon existing approaches. A general enhancement of the IGMN algorithm is also described, eliminating one of its manually tunable parameters and giving better results.
16

Andersson, Casper. "Reservoir Computing Approach for Network Intrusion Detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54983.

Abstract:
Identifying intrusions in computer networks is important for protecting the network, which is the entry point attackers use to gain access to valuable information from a company or organization, or simply to destroy digital property. Many good methods already exist, but there is always room for improvement. This thesis proposes using reservoir computing as a feature extractor on network traffic data, treated as a time series, to train machine learning models for anomaly detection. The models used in this thesis are a neural network, a support vector machine, and linear discriminant analysis. Performance is measured in terms of detection rate, false alarm rate, and overall accuracy in identifying attacks in the test data. The results show that the neural network generally improved with the use of a reservoir network; the support vector machine was not strongly affected by the reservoir; and linear discriminant analysis always performed worse. Overall, the time aspect of the reservoir did not have a large effect. The performance in my experiments is inferior to that of previous works, but the approach might perform better if separate feature selection or extraction is done first. Condensing a whole sequence into a single vector and determining whether it contained any attacks worked very well when the sequences contained several attacks, but less well otherwise.
17

Lawrie, Sofía. "Information representation and processing in neuronal networks: from biological to artificial systems and from first to second-order statistics." Doctoral thesis, Universitat Pompeu Fabra, 2022. http://hdl.handle.net/10803/673989.

Abstract:
Neuronal networks are today hypothesized to be the basis for the computing capabilities of biological nervous systems. In the same manner, artificial neuronal systems are intensively exploited for a diversity of industrial and scientific applications. However, how information is represented and processed by these networks remains under debate, meaning that it is not clear which sets of neuronal activity features are useful for computation. In this thesis, I present a set of results that link the first-order statistics of neuronal activity with behavior, in the general context of encoding/decoding, to analyse experimental data collected while non-human primates performed a working memory task. Subsequently, I go beyond the first order and show that the second-order statistics of neuronal activity in reservoir computing, a recurrent artificial network model, make up a robust candidate for information representation and transmission in the classification of multivariate inputs.
18

Martinenghi, Romain. "Démonstration opto-électronique du concept de calculateur neuromorphique par Reservoir Computing." Thesis, Besançon, 2013. http://www.theses.fr/2013BESA2052/document.

Abstract:
Reservoir Computing (RC) is a recently emerged, brain-inspired computational paradigm, which appeared in the early 2000s. It is similar to conventional recurrent neural network (RNN) computing concepts, exhibiting essentially three parts: (i) an input layer to inject the information into the computing system; (ii) a central computational layer called the Reservoir; and (iii) an output layer which extracts the computed result through a so-called Read-Out procedure, the latter being determined after a learning and training step. The main originality compared to RNNs lies in this last part, which is the only one concerned by the training step, the input layer and the Reservoir being randomly determined and then fixed. This specificity brings attractive features to RC compared to RNNs, in terms of simplification, efficiency, rapidity, and feasibility of the learning, as well as in terms of dedicated hardware implementation of the RC scheme. This thesis is indeed concerned with one of the first hardware implementations of RC, moreover with an optoelectronic architecture. Our approach to physical RC implementation is based on the use of a special class of complex system for the Reservoir, a nonlinear delay dynamics involving multiple delayed feedback paths. The Reservoir thus appears as a spatio-temporal emulation of a purely temporal dynamics, the delay dynamics. Specific designs of the input and output layers are shown to be possible, e.g. through time-division multiplexing techniques, and through amplitude modulation for the realization of an input mask to address the virtual nodes in the delay dynamics. Two optoelectronic setups are explored, one involving a wavelength nonlinear dynamics with a tunable laser, and another one involving an intensity nonlinear dynamics with an integrated-optics Mach-Zehnder modulator. Experimental validation of the computational efficiency is performed through two standard benchmark tasks: the NARMA10 test (a prediction task) and a spoken digit recognition test (a classification task), the latter showing results very close to state-of-the-art performance, even compared with pure numerical simulation approaches.
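Both benchmarks are standard in the RC community; the NARMA10 series used for the prediction test follows a fixed recurrence, reproduced below from its usual definition (the standard benchmark formula, not code from the thesis):

```python
import numpy as np

def narma10(T, seed=0):
    """Standard NARMA10 benchmark: u is i.i.d. uniform on [0, 0.5]."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0, 0.5, T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])   # 10-step history
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(4000)
# A reservoir computer is scored on predicting y[t] from the input history
# u[0..t]; the NRMSE against this series is the figure usually reported.
```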
19

Bai, Kang Jun. "Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103711.

Abstract:
Rapid advances in machine learning have made information analysis more efficient than ever before. However, to extract valuable information from trillions of bytes of data for learning and decision-making, general-purpose computing systems or cloud infrastructures are often deployed to train large-scale neural networks, consuming a colossal amount of resources while exposing significant security issues. Among potential approaches, the neuromorphic architecture, which is not only amenable to low-cost implementation but can also be deployed with an in-memory computing strategy, has been recognized as an important way to accelerate machine intelligence applications. In this dissertation, theoretical and practical properties of a hybrid neural computing architecture are introduced, which utilizes a dynamic reservoir with short-term memory to enable historical learning capability and the potential to classify non-separable functions. The hybrid neural computing architecture integrates both spatial and temporal processing structures, sidestepping the limitations introduced by the vanishing gradient. Specifically, this is made possible through four critical features: (i) a feature extractor built upon the in-memory computing strategy, (ii) a high-dimensional mapping with the Mackey-Glass neural activation, (iii) a delay-dynamic system with historical learning capability, and (iv) a unique learning mechanism that updates only the readout weights. To support the integration of neuromorphic architecture and deep learning strategies, the first generation of the delay-feedback reservoir network was successfully fabricated in 2017, and the spatial-temporal hybrid neural network with an improved delay-feedback reservoir network was successfully fabricated in 2020. To demonstrate the effectiveness and performance across diverse machine intelligence applications, the introduced network structures are evaluated through (i) time series prediction, (ii) image classification, (iii) speech recognition, (iv) modulation symbol detection, (v) radio fingerprint identification, and (vi) clinical disease identification.
Deep learning strategies are the cutting edge of artificial intelligence, in which artificial neural networks are trained to extract key features or find similarities in raw sensory information. This is made possible through multiple processing layers with a colossal number of neurons, in a way similar to humans. Deep learning strategies running on von Neumann computers are deployed worldwide. However, in today's data-driven society, the use of general-purpose computing systems and cloud infrastructures can no longer offer a timely response while also exposing significant security issues. With the introduction of the neuromorphic architecture, application-specific integrated circuit chips have paved the way for machine intelligence applications in recent years. The major contributions of this dissertation include designing and fabricating a new class of hybrid neural computing architecture and applying various deep learning strategies to diverse machine intelligence applications. The resulting hybrid neural computing architecture offers an alternative solution to accelerate the neural computations required for sophisticated machine intelligence applications with a simple system-level design, thereby opening the door to low-power system-on-chip design for future intelligent computing, and providing design solutions and performance improvements for internet-of-things applications.
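A delay-feedback reservoir of the kind fabricated here can be caricatured in software: one nonlinear node, a delay line of virtual nodes addressed through an input mask, and a Mackey-Glass-style activation. The sketch uses the commonly cited saturating form f(x) = x / (1 + |x|^p) with illustrative constants, not the chip's measured transfer function, and it omits the coupling between neighbouring virtual nodes that a real node's finite response time introduces:

```python
import numpy as np

rng = np.random.default_rng(5)
N_V, T = 50, 1000             # virtual nodes per delay span, input samples
GAIN, ETA, P = 0.8, 0.5, 7    # feedback gain, input scaling, MG exponent

mask = rng.uniform(-1, 1, N_V)        # fixed input mask over the virtual nodes

def mg(x):
    """Mackey-Glass-style saturating nonlinearity of the physical node."""
    return x / (1.0 + np.abs(x) ** P)

delay = np.zeros(N_V)                 # state of the delay line (one full loop)
states = np.empty((T, N_V))
u = rng.uniform(-0.5, 0.5, T)
for t in range(T):
    for k in range(N_V):              # sweep the virtual nodes sequentially
        drive = GAIN * delay[k] + ETA * mask[k] * u[t]
        delay[k] = mg(drive)
    states[t] = delay                 # the N_V node values act as the reservoir
# A linear readout trained on `states` completes the reservoir computer.
```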
20

Vinckier, Quentin. "Analog bio-inspired photonic processors based on the reservoir computing paradigm." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/237069.

Abstract:
For many challenging problems where the mathematical description is not explicitly defined, artificial intelligence methods appear to be much more robust than traditional algorithms. Such methods share the common property of learning from examples in order to "explore" the problem to solve; they then generalize these examples to new and unseen input signals. The reservoir computing paradigm is a bio-inspired approach drawn from the theory of artificial Recurrent Neural Networks (RNNs) to process time-dependent data. This machine learning method was proposed independently by several research groups in the early 2000s. It has enabled a breakthrough in analog information processing, with several experiments demonstrating state-of-the-art performance for a wide range of hard nonlinear tasks. These tasks include, for instance, dynamic pattern classification, grammar modeling, speech recognition, nonlinear channel equalization, detection of epileptic seizures, robot control, time series prediction, brain-machine interfacing, power system monitoring, financial forecasting, and handwriting recognition. A Reservoir Computer (RC) is composed of three different layers. First there is the neural network itself, called the "reservoir", which consists of a large number of internal variables (i.e. reservoir states) all interconnected to exchange information. The internal dynamics of such a system, driven by a function of the inputs and the former reservoir states, is thus extremely rich. Through an input layer, a time-dependent input signal is applied to all the internal variables to disturb the neural network dynamics. Then, in the output layer, all these reservoir states are processed, often by taking a linear combination thereof at each time step, to compute the output signal. Note that the presence of a nonlinearity somewhere in the system is essential to reach high-performance computing on nonlinear tasks. The principal novelty of the reservoir computing paradigm was to propose an RNN where most of the connection weights are generated randomly, except for the weights adjusted to compute the output signal from a linear combination of the reservoir states. In addition, some global parameters can be tuned to get the best performance, depending on the reservoir architecture and on the task. This simple and easy process considerably decreases the training complexity compared to traditional RNNs, for which all the weights needed to be optimized. RC algorithms can be programmed using modern traditional processors. But these electronic processors are better suited to digital processing, for which a lot of transistors continuously need to be switched on and off, leading to higher power consumption. As one can intuitively understand, processors with hardware directly dedicated to RC operations – in other words, analog bio-inspired processors – could be much more efficient regarding both speed and power consumption. Based on the same idea of high speed and low power consumption, the last few decades have seen an increasing use of coherent optics in the transport of information thanks to its high bandwidth and high power efficiency. In order to address the future challenge of high-performance, high-speed, and power-efficient nontrivial computing, it is thus natural to turn towards optical implementations of RCs using coherent light. Over the last few years, several physical implementations of RCs using optics and (opto)electronics have been successfully demonstrated.
In the present PhD thesis, the reservoirs are based on a large coherently driven linear passive fiber cavity. The internal states are encoded by time-multiplexing in the cavity, so each reservoir state is processed sequentially. This reservoir architecture exhibits many qualities that were either absent or not simultaneously present in previous works: we can perform analog optical signal processing; the easy tunability of each key parameter allows the best operating point to be reached for each task; the system reaches a strikingly low noise floor thanks to the absence of active elements in the reservoir itself; a richer dynamics is provided by operating in coherent light, as the reservoir states are encoded in both the amplitude and the phase of the electromagnetic field; and high power efficiency is obtained as a result of the passive nature and simplicity of the setup. However, it is important to note that at this stage we have only obtained low optical power consumption for the reservoir itself. We have not tried to minimize the overall power consumption, including all control electronics. The first experiment reported in chapter 4 uses a quadratic non-linearity on each reservoir state in the output layer. This non-linearity is provided by a readout photodiode, since it produces a current proportional to the intensity of the light. On a number of benchmark tasks widely used in the reservoir computing community, the error rates demonstrated with this RC architecture – both in simulation and experimentally – are, to our knowledge, the lowest obtained so far. Furthermore, the analytic model describing our experiment is also of interest, as it constitutes a very simple high-performance RC algorithm. The setup reported in chapter 4 requires offline digital post-processing to compute its output signal by summing the weighted reservoir states at each time-step. In chapter 5, we numerically study a realistic model of an optoelectronic “analog readout layer” adapted to the setup presented in chapter 4. This readout layer is based on an RLC low-pass filter acting as an integrator over the weighted reservoir states to autonomously generate the RC output signal. On three benchmark tasks, we obtained very good simulation results that need to be confirmed experimentally in the future. These promising simulation results pave the way for standalone high-performance physical reservoir computers. The RC architecture presented in chapter 5 is an autonomous optoelectronic implementation able to electrically generate its output signal. In order to contribute to the challenge of all-optical computing, chapter 6 highlights the possibility of processing information autonomously and optically using an RC based on two coherently driven passive linear cavities. The first one constitutes the reservoir itself and pumps the second one, which acts as an optical integrator on the weighted reservoir states to optically generate the RC output signal after sampling. A sine non-linearity is implemented on the input signal, whereas both the reservoir and the readout layer are kept linear. Let us note that, because the non-linearity in this system is provided by a Mach-Zehnder modulator on the input signal, the input signal of this RC configuration needs to be an electrical signal. By contrast, the RC implementation presented in chapter 5 processes optical input signals, but its output is electrical. We obtained very good simulation results on a single task and promising experimental results on two tasks.
At the end of this chapter, promising directions for improving the performance of this challenging experiment are pointed out. This system constitutes the first autonomous photonic RC able to optically generate its output signal.
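In schematic form, and in our own notation rather than the thesis's exact equations, the two readout schemes can be summarized as follows: the chapter 4 photodiode squares the complex reservoir state, while the chapter 5 analog readout replaces the digital sum with a first-order low-pass integration.

```latex
% Quadratic readout (chapter 4, schematic): the photodiode converts the
% complex state x_k(n) into an intensity; the output is a trained linear
% combination of intensities:
\[
  y(n) = \sum_{k=1}^{N} w_k \,\lvert x_k(n) \rvert^{2}
\]
% Analog integrating readout (chapter 5, schematic): a low-pass (RLC)
% filter with time constant \tau accumulates the weighted states:
\[
  \tau \,\frac{dy}{dt} + y(t) = \sum_{k} w_k \,\lvert x_k(t) \rvert^{2}
\]
```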
Doctorate in Engineering Sciences and Technology
info:eu-repo/semantics/nonPublished
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Nowshin, Fabiha. "Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103854.

Full text of the source
Abstract:
In recent years neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and with much less power than traditional Von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This offers significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes and use a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. We demonstrate an accuracy of 100% with our design through a small-scale hardware simulation for digit recognition and an accuracy of 87% in software through MNIST simulations.
M.S.
In recent years neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and with much less power than traditional Von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons, where artificial neurons or neurodes are connected together via synapses, similar to the nervous system in the human body. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This offers significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes and use a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition and demonstrate an accuracy of 87% in software through MNIST simulations.
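To illustrate the in-memory operation the abstract refers to, the sketch below uses the idealized Ohm's-law/Kirchhoff model of a memristive crossbar: weights are stored as conductances, and a matrix-vector product falls out as summed column currents. The 2x2 size and all values are hypothetical, not taken from this thesis.

```python
# Idealized memristive crossbar: one analog step computes G @ V.
import numpy as np

G = np.array([[1.0e-6, 5.0e-6],     # synaptic weights stored as memristor
              [2.0e-6, 1.0e-6]])    # conductances (siemens)
V = np.array([0.2, 0.1])            # spike inputs encoded as voltages (V)

# Each output line sums currents I = G * V (Kirchhoff's current law),
# so the whole matrix-vector product happens in a single analog step.
I = G @ V
print("output line currents (A):", I)
```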
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Cazin, Nicolas. "A replay driven model of spatial sequence learning in the hippocampus-prefrontal cortex network using reservoir computing." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1133/document.

Full text of the source
Abstract:
As rats learn to search for multiple sources of food or water in a complex environment, processes of spatial sequence learning and recall in the hippocampus (HC) and prefrontal cortex (PFC) take place. Recent studies (De Jong et al. 2011; Carr, Jadhav, and Frank 2011) show that spatial navigation in the rat hippocampus involves the replay of place-cell firing during awake and sleep states, generating small contiguous subsequences of spatially related place-cell activations that we will call "snippets". These "snippets" occur primarily during sharp-wave-ripple (SPWR) events. Much attention has been paid to replay during sleep in the context of long-term memory consolidation. Here we focus on the role of replay during the awake state, as the animal is learning across multiple trials. We hypothesize that these "snippets" can be used by the PFC to achieve multi-goal spatial sequence learning. We propose to develop an integrated model of HC and PFC that is able to form place-cell activation sequences based on snippet replay. The proposed collaborative research will extend an existing spatial cognition model for simpler goal-oriented tasks (Barrera and Weitzenfeld 2008; Barrera et al. 2015) with a new replay-driven model for memory formation in the hippocampus and spatial sequence learning and recall in the PFC. In contrast to existing work on sequence learning that relies heavily on sophisticated learning algorithms and synaptic modification rules, we propose to use an alternative computational framework known as reservoir computing (Dominey 1995), in which large pools of prewired neural elements process information dynamically through reverberations. This reservoir computational model will consolidate snippets into larger place-cell activation sequences that may be later recalled by subsets of the original sequences. The proposed work is expected to generate a new understanding of the role of replay in memory acquisition in complex tasks such as sequence learning. That operational understanding will be leveraged and tested on an embodied cognitive real-time robotic framework, related to the animat paradigm (Wilson 1991) [etc...]
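A toy software sketch of the snippet-consolidation idea, under our own drastic simplifications (one-hot place cells, a generic tanh reservoir, ridge regression), not the thesis's model: short three-cell fragments drive a fixed random reservoir, and a readout trained only on those fragments can then predict the next place cell along the full sequence.

```python
# Consolidating short replayed "snippets" into next-step predictions.
import numpy as np

rng = np.random.default_rng(1)
n_cells, N = 8, 200
seq = np.eye(n_cells)                       # one-hot place-cell sequence

W_in = rng.uniform(-1, 1, (N, n_cells))
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N)) * 0.9

states, targets = [], []
for _ in range(300):                        # replay random 3-cell snippets
    s = rng.integers(0, n_cells - 3)
    x = np.zeros(N)
    for t in range(s, s + 3):
        x = np.tanh(W @ x + W_in @ seq[t])
        states.append(x.copy())
        targets.append(seq[t + 1])          # learn: predict the next cell

X, Y = np.array(states), np.array(targets)
W_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(N), X.T @ Y)

x = np.zeros(N)                             # recall along the full sequence
for t in range(n_cells - 1):
    x = np.tanh(W @ x + W_in @ seq[t])
    print(t, "->", int(np.argmax(x @ W_out)))   # ideally t + 1
```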
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Gargesa, Padmashri. "Reward-driven Training of Random Boolean Network Reservoirs for Model-Free Environments." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/669.

Full text of the source
Abstract:
Reservoir Computing (RC) is an emerging machine learning paradigm where a fixed kernel, built from a randomly connected "reservoir" with sufficiently rich dynamics, is capable of expanding the problem space in a non-linear fashion to a higher dimensional feature space. These features can then be interpreted by a linear readout layer that is trained by a gradient descent method. In comparison to traditional neural networks, only the output layer needs to be trained, which leads to a significant computational advantage. In addition, the short-term memory of the reservoir dynamics has the ability to transform a complex temporal input state space into a simple non-temporal representation. Adaptive real-time tasks are multi-stage decision problems in which an agent is trained to achieve a preset goal by performing an optimal action at each timestep. In such problems, the agent learns through continuous interactions with its environment. Conventional techniques for solving such problems become computationally expensive or may not converge if the state space being considered is large, partially observable, or if short-term memory is required for optimal decision making. The objective of this thesis is to use reservoir computers to solve such goal-driven tasks, where no error signal can be readily calculated to apply gradient descent methodologies. To address this challenge, we propose a novel reinforcement learning approach in combination with reservoir computers built from simple Boolean components. Such reservoirs are of interest because they have the potential to be fabricated by self-assembly techniques. We evaluate the performance of our approach in both Markovian and non-Markovian environments, and compare it with that of an agent trained through traditional Q-Learning. We find that the reservoir-based agent performs successfully in these problem contexts and even performs marginally better than Q-Learning agents in certain cases. Our proposed approach allows us to retain the advantage of traditional parameterized dynamic systems in successfully modeling embedded state-space representations while eliminating the complexity involved in training traditional neural networks. To the best of our knowledge, our method of training a reservoir readout layer through an on-policy bootstrapping approach is unique in the field of random Boolean network reservoirs.
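For illustration only, a minimal random Boolean network reservoir can be simulated as below; the connectivity K=2, the random truth tables and the single forced input node are assumptions of this sketch, not the thesis's exact construction.

```python
# Minimal random Boolean network (RBN) reservoir step.
import numpy as np

rng = np.random.default_rng(2)
N, K = 64, 2
sources = rng.integers(0, N, (N, K))        # each node reads K random nodes
rules = rng.integers(0, 2, (N, 2 ** K))     # a random Boolean table per node

def rbn_step(state, u_bit):
    """One synchronous update; the input bit is forced onto node 0."""
    idx = (state[sources] * (2 ** np.arange(K))).sum(axis=1)
    new = rules[np.arange(N), idx]
    new[0] = u_bit                          # external input injection
    return new

state = rng.integers(0, 2, N)
for u_bit in [1, 0, 1, 1, 0]:
    state = rbn_step(state, u_bit)
print(state)
# A linear readout over `state` would then be adjusted from a reward
# signal (e.g. a SARSA-style bootstrapped target) rather than from
# supervised targets, in the spirit of the approach described above.
```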
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Baylon, Fuentes Antonio. "Ring topology of an optical phase delayed nonlinear dynamics for neuromorphic photonic computing." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2047/document.

Full text of the source
Abstract:
Nowadays most computers are still based on concepts developed more than 60 years ago by Alan Turing and John von Neumann. However, these digital computers have already begun to reach certain physical limits of their implementation via silicon microelectronics technology (dissipation, speed, integration limits, energy consumption). Alternative approaches, more powerful, more efficient, and less energy-consuming, have constituted a major scientific issue for several years. Many of these approaches naturally draw inspiration from the human brain, whose operating principles are still far from being understood. In this line of research, a surprisingly simple variation of the recurrent neural network (RNN), sometimes even more efficient for certain processing cases, appeared in the early 2000s and is now known as Reservoir Computing (RC), an emerging brain-inspired computational paradigm. Its structure is quite similar to classical RNN computing concepts, generally exhibiting three parts: an input layer to inject the information into a nonlinear dynamical system (Write-In), a second layer where the input information is projected into a high-dimensional space called the dynamical reservoir, and an output layer from which the processed information is extracted through a so-called Read-Out function. In the RC approach, the learning procedure is performed in the output layer only, while the input and reservoir layers are randomly fixed, which is the main originality of RC compared to RNN methods. This feature allows greater efficiency, rapidity and learning convergence, and also makes experimental implementation practical. This PhD thesis is dedicated to one of the first photonic RC implementations using telecommunication devices. Our experimental implementation is based on a nonlinear delayed dynamical system, which relies on an electro-optic (EO) oscillator with differential phase modulation. This EO oscillator was extensively studied in the context of optical chaos cryptography. Dynamics exhibited by such systems are indeed known to develop complex behaviors in an infinite-dimensional phase space, and analogies with space-time dynamics (of which neural networks are a kind) are also found in the literature. Such peculiarities of delay systems supported the idea of replacing the traditional RNN (usually difficult to design technologically) by a nonlinear EO delay architecture. In order to evaluate the computational power of our RC approach, we implemented two spoken digit recognition tests (classification tests) taken from standard databases in artificial intelligence (TI-46 and AURORA-2), obtaining results very close to state-of-the-art performance and establishing a new state of the art in classification speed. Our photonic RC approach allowed us to process around 1 million words per second, improving the information processing speed by a factor of ~3.
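A crude caricature of such a delay-based reservoir, for illustration: a single nonlinear node with delayed feedback is time-multiplexed into N virtual nodes through a fixed random input mask. The sin^2 nonlinearity and all constants here are stand-in assumptions, not the thesis's electro-optic phase dynamics.

```python
# Time-multiplexed delay reservoir with a sine-type nonlinearity.
import numpy as np

rng = np.random.default_rng(3)
N = 50                                # virtual nodes per delay loop
mask = rng.choice([-0.1, 0.1], N)     # fixed random input mask
alpha, beta, phi = 0.8, 0.5, 0.2      # feedback gain, input gain, offset

def delay_reservoir(u):
    """Expand each scalar input u(n) into N virtual node states."""
    x = np.zeros(N)
    states = []
    for un in u:
        # each virtual node is fed by its own state one delay earlier
        x = np.sin(alpha * x + beta * mask * un + phi) ** 2
        states.append(x.copy())
    return np.array(states)

X = delay_reservoir(rng.uniform(-1, 1, 200))
print(X.shape)                        # (200, 50): features for a linear readout
```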
Styles: APA, Harvard, Vancouver, ISO, etc.
25

(8088431), Gopalakrishnan Srinivasan. "Training Spiking Neural Networks for Energy-Efficient Neuromorphic Computing." Thesis, 2019.

Find the full text of the source
Abstract:

Spiking Neural Networks (SNNs), widely known as the third generation of artificial neural networks, offer a promising solution to approaching the brain's processing capability for cognitive tasks. With a more biologically realistic perspective on input processing, SNNs perform neural computations using spikes in an event-driven manner. This asynchronous spike-based computing capability can be exploited to achieve improved energy efficiency in neuromorphic hardware. Furthermore, SNNs, on account of spike-based processing, can be trained in an unsupervised manner using Spike-Timing-Dependent Plasticity (STDP). STDP-based learning rules modulate the strength of a multi-bit synapse based on the correlation between the spike times of the input and output neurons. In order to achieve plasticity with compressed synaptic memory, a stochastic binary synapse is proposed where spike-timing information is embedded in the synaptic switching probability. A bio-plausible probabilistic-STDP learning rule consistent with Hebbian learning theory is proposed to train a network of binary as well as quaternary synapses. In addition, a hybrid probabilistic-STDP learning rule incorporating Hebbian and anti-Hebbian mechanisms is proposed to enhance the learnt representations of the stochastic SNN. The efficacy of the presented learning rules is demonstrated for feed-forward fully-connected and residual convolutional SNNs on the MNIST and the CIFAR-10 datasets.
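As a hedged illustration of the stochastic-synapse idea, the sketch below switches a binary weight with a probability that decays exponentially with the pre/post spike-time difference; the exponential window and its constants are our assumptions, not the exact rule of the thesis.

```python
# Probabilistic STDP for a binary synapse: timing sets switch probability.
import numpy as np

rng = np.random.default_rng(4)

def prob_stdp_update(w, dt, p_max=0.1, tau=20.0):
    """w in {0, 1}; dt = t_post - t_pre in milliseconds."""
    if dt >= 0:                            # pre before post: potentiate
        if rng.random() < p_max * np.exp(-dt / tau):
            w = 1
    else:                                  # post before pre: depress
        if rng.random() < p_max * np.exp(dt / tau):
            w = 0
    return w

w = 0
for dt in [2.0, 5.0, -3.0, 1.0]:           # sample spike-time differences
    w = prob_stdp_update(w, dt)
print("final binary weight:", w)
```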

STDP-based learning is limited to shallow SNNs (<5 layers), yielding lower-than-acceptable accuracy on complex datasets. This thesis proposes a block-wise complexity-aware training algorithm, referred to as BlocTrain, for incrementally training deep SNNs with reduced memory requirements using spike-based backpropagation through time. The deep network is divided into blocks, where each block consists of a few convolutional layers followed by an auxiliary classifier. The blocks are trained sequentially using local errors from the respective auxiliary classifiers. In addition, the deeper blocks are trained only on the hard classes, determined using the class-wise accuracy obtained from the classifiers of previously trained blocks. Thus, BlocTrain improves the training time and computational efficiency with increasing block depth. Higher computational efficiency is also obtained during inference by exiting early for easy class instances and activating the deeper blocks only for hard class instances. The ability of BlocTrain to provide improved accuracy as well as higher training and inference efficiency compared to end-to-end approaches is demonstrated for deep SNNs (up to 11 layers) on the CIFAR-10 and the CIFAR-100 datasets.

Feed-forward SNNs are typically used for static image recognition, while recurrent Liquid State Machines (LSMs) have been shown to encode time-varying speech data. Liquid-SNN, consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked reservoir of spiking neurons (or liquid), is proposed for unsupervised speech and image recognition. The strength of the synapses interconnecting the input and the liquid is trained using STDP, which makes it possible to infer the class of a test pattern without the readout layer typical of standard LSMs. The Liquid-SNN suffers from scalability challenges due to the need to primarily increase the number of neurons to enhance accuracy. SpiLinC, composed of an ensemble of multiple liquids, where each liquid is trained on a unique input segment, is proposed as a scalable model to achieve improved accuracy. SpiLinC recognizes a test pattern by combining the spiking activity of the individual liquids, each of which identifies unique input features. As a result, SpiLinC offers accuracy comparable to Liquid-SNN with added synaptic sparsity and faster training convergence, which is validated on the digit subset of the TI46 speech corpus and the MNIST dataset.

Styles: APA, Harvard, Vancouver, ISO, etc.
26

Ghani, A., Chan H. See, Hassan S. O. Migdadi, Rameez Asif, Raed A. Abd-Alhameed, and James M. Noras. "Reconfigurable neurons - making the most of configurable logic blocks (CLBs)." 2015. http://hdl.handle.net/10454/9152.

Full text of the source
Abstract:
An area-efficient hardware architecture used to map fully parallel cortical columns onto Field Programmable Gate Arrays (FPGAs) is presented in this paper. To demonstrate the concept of this work, the proposed architecture is shown at the system level and benchmarked with image and speech recognition applications. Due to the spatio-temporal nature of spiking neurons, such architectures can be mapped onto FPGAs, in which communication can be performed through the use of spikes and signals can be represented in binary form. The process and viability of designing and implementing multiple recurrent neural reservoirs with a novel multiplier-less reconfigurable architecture are described.
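To make "multiplier-less" concrete, the toy update below runs a leaky integrate-and-fire neuron with only shifts and adds, the kind of arithmetic that maps directly onto CLBs; the fixed-point format and all constants are illustrative assumptions, not values from the paper.

```python
# Multiplier-less leaky integrate-and-fire update using shifts and adds.
v = 0                                 # membrane potential (integer fixed-point)
threshold = 1 << 10
weight = 1 << 8                       # power-of-two weight: an add, no multiply
spikes_in = [1, 1, 0, 1, 0, 1, 1, 1]

for s in spikes_in:
    v -= v >> 4                       # leak: v *= (1 - 1/16) via a shift
    if s:
        v += weight                   # synaptic input via addition only
    if v >= threshold:
        print("spike")
        v = 0                         # reset after firing
```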
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Vilimelis, Aceituno Pau. "Structure, Dynamics and Self-Organization in Recurrent Neural Networks: From Machine Learning to Theoretical Neuroscience." 2020. https://ul.qucosa.de/id/qucosa%3A71390.

Full text of the source
Abstract:
At first glance, artificial neural networks, with engineered learning algorithms and carefully chosen nonlinearities, are nothing like the complicated self-organized spiking neural networks studied by theoretical neuroscientists. Yet both adapt to their inputs, keep information from the past in their state space and are capable of learning, implying that some information processing principles should be common to both. In this thesis we study those principles by incorporating notions of systems theory, statistical physics and graph theory into artificial neural networks and theoretical neuroscience models. The starting point for this thesis is Reservoir Computing (RC), a learning paradigm used both in machine learning (Jaeger and Haas 2004) and in theoretical neuroscience (Maass et al. 2002). A neural network in RC consists of two parts, a reservoir – a directed and weighted network of neurons that projects the input time series onto a high dimensional space – and a readout which is trained to read the state of the neurons in the reservoir and combine them linearly to give the desired output. In classical RC, the reservoir is randomly initialized and left untrained, which alleviates the training costs in comparison to other recurrent neural networks. However, this lack of training implies that reservoirs are not adapted to specific tasks and thus their performance is often lower than that of other neural networks. Our contribution has been to show how knowledge about a task can be integrated into the reservoir architecture, so that reservoirs can be tailored to specific problems without training. We do this by identifying two features that are useful for machine learning: the memory of the reservoir and its power spectrum. First we show that the correlations between neurons limit the capacity of the reservoir to retain traces of previous inputs, and demonstrate that those correlations are controlled by the moduli of the eigenvalues of the adjacency matrix of the reservoir. Second, we prove that when the reservoir resonates at the frequencies that are present in the desired output signal, the performance of the readout increases. Knowing the features of the reservoir dynamics that we need, the next question is how to impose them. The simplest way to design a network that resonates at a certain frequency is by adding cycles, which act as feedback loops, but this also induces correlations and hence memory modifications. To disentangle the frequency and memory design, we studied how the addition of cycles modifies the eigenvalues of the adjacency matrix of the network. Surprisingly, the shape of the eigenvalues is quite beautiful (Aceituno et al. 2019) and can be characterized using random matrix theory tools. Combining this knowledge with our result relating eigenvalues and correlations, we designed a heuristic that tailors reservoirs to specific tasks and showed that it improves upon state-of-the-art RC in three different machine learning tasks. Although this idea works in the machine learning version of RC, there is one fundamental problem when we try to translate it to the world of theoretical neuroscience: the proposed frequency adaptation requires prior knowledge of the task, which might not be plausible in a biological neural network.
Therefore the following questions are whether those resonances can emerge through unsupervised learning, and what kind of learning rules would be required. Remarkably, these resonances can be induced by the well-known Spike-Timing-Dependent Plasticity (STDP) combined with homeostatic mechanisms. We show this by deriving two self-consistent equations: one where the activity of every neuron can be calculated from its synaptic weights and its external inputs, and a second one where the synaptic weights can be obtained from the neural activity. By considering spatio-temporal symmetries in our inputs we obtained two families of solutions to those equations where a periodic input is enhanced by the neural network after STDP. This approach shows that periodic and quasiperiodic inputs can induce resonances that agree with the aforementioned RC theory. Those results, although rigorous, are expressed in the language of statistical physics and cannot be easily tested or verified in real, scarce data. To make them more accessible to the neuroscience community we showed that latency reduction, a well-known effect of STDP (Song et al. 2000) which has been experimentally observed (Mehta et al. 2000), generates neural codes that agree with the self-consistency equations and their solutions. In particular, this analysis shows that metabolic efficiency, synchronization and prediction can emerge from that same phenomenon of latency reduction, thus closing the loop with our original machine learning problem. To summarize, this thesis exposes principles of learning in recurrent neural networks that are consistent with adaptation in the nervous system and also improve current machine learning methods. This is done by leveraging features of the dynamics of recurrent neural networks, such as resonances and correlations, in machine learning problems, then imposing the required dynamics on reservoir computing through control theory notions such as feedback loops and spectral analysis. We then assessed the plausibility of such adaptation in biological networks, deriving solutions from self-organizing processes that are biologically plausible and align with the machine learning prescriptions. Finally, we relate those processes to learning rules in biological neurons, showing how small local adaptations of the spike times can lead to neural codes that are efficient and can be interpreted in machine learning terms.
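The spectral-design idea above lends itself to a small numerical illustration (a toy of ours, not the thesis's experiments): adding a directed cycle to a random reservoir's adjacency matrix reshapes its eigenvalue spectrum, and with it the frequencies at which the network can resonate.

```python
# Effect of an added feedback cycle on a reservoir's eigenvalue spectrum.
import numpy as np

rng = np.random.default_rng(5)
N, L = 200, 8
A = rng.normal(0, 1.0 / np.sqrt(N), (N, N))    # random reservoir matrix

C = np.zeros((N, N))
ring = rng.choice(N, L, replace=False)
for i in range(L):                             # directed cycle of length L
    C[ring[(i + 1) % L], ring[i]] = 1.0

for name, M in [("random", A), ("random + cycle", A + 0.8 * C)]:
    ev = np.linalg.eigvals(M)
    print(name, "spectral radius:", abs(ev).max().round(3))
# An isolated weight-w cycle of length L has eigenvalues on the L-th
# roots of unity scaled by w, which is what tunes the resonances.
```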
Styles: APA, Harvard, Vancouver, ISO, etc.
28

"Predicting and Controlling Complex Dynamical Systems." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57150.

Full text of the source
Abstract:
Complex dynamical systems are systems with many interacting components that usually have nonlinear dynamics. Such systems exist in a wide range of disciplines, including the physical, biological, and social sciences. Due to the large number of interacting components, they tend to possess very high dimensionality, and due to their intrinsic nonlinear dynamics they exhibit tremendously rich behavior, such as bifurcation, synchronization, chaos, and solitons. Developing methods to predict and control those systems has always been a challenge and an active research area. My research mainly concentrates on predicting and controlling tipping points (saddle-node bifurcations) in complex ecological systems and on comparing linear and nonlinear control methods in complex dynamical systems. Moreover, I use advanced artificial neural networks to predict chaotic spatiotemporal dynamical systems. Complex networked systems can exhibit a tipping point (a “point of no return”) at which a total collapse occurs. Using complex mutualistic networks in ecology as a prototype class of systems, I carry out a dimension reduction process to arrive at an effective two-dimensional (2D) system with the two dynamical variables corresponding to the average pollinator and plant abundances, respectively. I demonstrate that, using 59 empirical mutualistic networks extracted from real data, our 2D model can accurately predict the occurrence of a tipping point even in the presence of stochastic disturbances. I also develop an ecologically feasible strategy to manage/control the tipping point by maintaining the abundance of a particular pollinator species at a constant level, which essentially removes the hysteresis associated with tipping points. In addition, I find that the nodal importance ranking for nonlinear and linear control exhibits opposite trends: for the former, large-degree nodes are more important, but for the latter the importance scale is tilted towards the small-degree nodes, strongly suggesting the irrelevance of linear controllability to these systems. Focusing on a class of recurrent neural networks – reservoir computing systems, which have recently been exploited for model-free prediction of nonlinear dynamical systems – I uncover a surprising phenomenon: the emergence of an interval in the spectral radius of the neural network in which the prediction error is minimized.
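The final observation about reservoir computing can be illustrated with a toy sweep (hypothetical task and values, not the dissertation's setup): training the same echo state network at several spectral radii and comparing errors exposes the kind of error-minimizing interval described.

```python
# Sweep of the reservoir spectral radius against one-step prediction error.
import numpy as np

rng = np.random.default_rng(6)
N, T = 100, 800
u = np.sin(np.arange(T) * 0.2) + 0.1 * rng.standard_normal(T)
W_in = rng.uniform(-0.5, 0.5, N)
W0 = rng.normal(0, 1, (N, N))
W0 /= abs(np.linalg.eigvals(W0)).max()     # normalize to unit spectral radius

for rho in [0.3, 0.6, 0.9, 1.2]:
    W = rho * W0
    x, X = np.zeros(N), np.zeros((T - 1, N))
    for t in range(T - 1):
        x = np.tanh(W @ x + W_in * u[t])
        X[t] = x
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ u[1:])
    rmse = np.sqrt(np.mean((X @ w_out - u[1:]) ** 2))
    print(f"rho = {rho:.1f}   train RMSE = {rmse:.4f}")
```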
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2020
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Castellano, Marta. "Computational Principles of Neural Processing: modulating neural systems through temporally structured stimuli." Doctoral thesis, 2014. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2014121112959.

Full text of the source
Abstract:
In order to understand how the neural system encodes and processes information, research has focused on the study of neural representations of simple stimuli, paying no particular attention to their temporal structure, with the assumption that a deeper understanding of how the neural system processes simplified stimuli will lead to an understanding of how the brain functions as a whole [1]. However, time is intrinsically bound to neural processing, as all sensory, motor, and cognitive processes are inherently dynamic. Despite the importance of neural and stimulus dynamics, little is known about how the neural system represents rich spatio-temporal stimuli, which ultimately link the neural system to a continuously changing environment. The purpose of this thesis is to understand whether and how temporally structured neural activity modulates the processing of information within the brain, proposing in turn that the precise interaction between the spatio-temporal structure of the stimulus and the neural system is particularly relevant, especially when considering the ongoing plasticity mechanisms which allow the neural system to learn from experience. In order to answer these questions, three studies were conducted. First, we studied the impact of spiking temporal structure on a single neuron's spiking response, and explored in which way the functional connections to pre-synaptic neurons are modulated through adaptation. Our results suggest that, in a generic spiking neuron, the temporal structure of pre-synaptic excitatory and inhibitory neurons modulates both the spiking response of that same neuron and, most importantly, the speed and strength of learning. Second, we present a generic model of a spiking neural network that processes rich spatio-temporal stimuli, and explore whether the processing of stimuli within the network is modulated by the interaction with an external dynamical system (i.e. the extracellular medium), as well as by several plasticity mechanisms. Our results indicate that the memory capacity, which reflects a dynamic short-term memory of incoming stimuli, can be extended in the presence of plasticity and through the interaction with an external dynamical system, while maintaining the network dynamics in a regime suitable for information processing. Finally, we characterized cortical signals of human subjects (electroencephalography, EEG) associated with a visual categorization task. Among other aspects, we studied whether changes in the dynamics of the stimulus lead to changes in neural processing at the cortical level, and introduced the relevance of large-scale integration for cognitive processing. Our results suggest that the dynamic synchronization across distributed cortical areas is stimulus specific and specifically linked to perceptual grouping. Taken together, the results presented here suggest that the temporal structure of the stimulus modulates how the neural system encodes and processes information within single neurons, networks of neurons and cortical areas. In particular, the results indicate that timing modulates single-neuron connectivity structures, the memory capability of networks of neurons, and the cortical representation of visual stimuli. While the learning of invariant representations remains the best framework to account for a number of neural processes (e.g.
long-term memory [2]), the reported studies seem to support the idea that, at least to some extent, the neural system functions in a non-stationary fashion, where the processing of information is modulated by the stimulus dynamics itself. Altogether, this thesis highlights the relevance of understanding adaptive processes and their interaction with the temporal structure of the stimulus, arguing that a further understanding of how the neural system processes dynamic stimuli is crucial for the further understanding of neural processing itself, and that any theory that aims to understand neural processing should consider the processing of dynamic signals. 1. Frankish, K., and Ramsey, W. The Cambridge Handbook of Cognitive Science. Cambridge University Press, 2012. // 2. McGaugh, J. L. Memory: a Century of Consolidation. Science 287, 5451 (Jan. 2000), 248-251.
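For reference, the linear memory capacity invoked in the abstract is commonly computed as the sum over delays of the squared correlation between a delayed input and its best linear reconstruction from the network state. The sketch below uses that common definition with arbitrary parameters; it is not the thesis's exact measure.

```python
# Linear memory capacity: MC = sum_k r^2( u(n-k), reconstruction_k(n) ).
import numpy as np

rng = np.random.default_rng(7)
N, T, K = 80, 2000, 20
u = rng.uniform(-1, 1, T)
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / abs(np.linalg.eigvals(W)).max()

x, X = np.zeros(N), np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

mc = 0.0
for k in range(1, K + 1):
    Xk, yk = X[k:], u[:-k]                 # reconstruct u(n - k) from x(n)
    w = np.linalg.solve(Xk.T @ Xk + 1e-6 * np.eye(N), Xk.T @ yk)
    r = np.corrcoef(Xk @ w, yk)[0, 1]
    mc += r ** 2
print("memory capacity over the first", K, "delays:", round(mc, 2))
```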
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Daouda, Tariq. "Génération et reconnaissance de rythmes au moyen de réseaux de neurones à réservoir." Thèse, 2010. http://hdl.handle.net/1866/4931.

Full text of the source
Abstract:
Reservoir computing, the combination of a recurrent neural network and one or more memoryless readout units, has seen recent growth in popularity in machine learning, signal processing and computational neuroscience. Reservoir-based methods have been successfully applied to a wide range of time series problems [11][64][49][45][38] including music [30], and usually come in two flavours: Echo State Networks (ESN) [29], where the reservoir is composed of mean-rate neurons, and Liquid State Machines (LSM) [43], where the reservoir is composed of spiking neurons. In this work, we propose two new models based upon the ESN architecture. The first one is a model for rhythm recognition that uses two levels of learning, and with which we have been able to get satisfying results on both recognition and noise resistance. The second one is a model for learning and generating periodic sequences; with this model we introduce a new architecture for generative models based upon ESNs, where the reservoir receives inputs from a clock, as well as a new learning algorithm that we called "Orbite". By combining these two elements within our model, we were able to get good results on generation, over-fitting and data extraction. We also believe that a combination of several instances of our model can serve as a basis for the elaboration of an entirely virtual orchestra, and we propose two architectures that this orchestra may have. In the last part of this work, we briefly present the tools that we have developed during our research.
The sound files accompanying this document are in MIDI format. The program we developed for this work is written in the Python language.
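A toy rendition of the clock-driven generative architecture: the reservoir's only input is a periodic clock pulse, and a plain ridge-regression readout (standing in for "Orbite", which is not reproduced here) learns to output a periodic rhythm.

```python
# Clock-driven ESN for periodic sequence generation (toy version).
import numpy as np

rng = np.random.default_rng(8)
N, P, T = 150, 16, 1600
clock = (np.arange(T) % P == 0).astype(float)   # one pulse every P steps
target = np.sin(2 * np.pi * np.arange(T) / P)   # periodic "rhythm" to learn

W_in = rng.uniform(-1, 1, N)
W = rng.normal(0, 1, (N, N))
W *= 0.95 / abs(np.linalg.eigvals(W)).max()

x, X = np.zeros(N), np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * clock[t])
    X[t] = x

W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)
print("fit RMSE:", float(np.sqrt(np.mean((X @ W_out - target) ** 2))))
```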
Styles: APA, Harvard, Vancouver, ISO, etc.