Scientific literature on the topic "Spiking neural networks"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Spiking neural networks."

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Spiking neural networks"

1

Ponghiran, Wachirawit, and Kaushik Roy. "Spiking Neural Networks with Improved Inherent Recurrence Dynamics for Sequential Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 8001–8. http://dx.doi.org/10.1609/aaai.v36i7.20771.

Abstract:
Spiking neural networks (SNNs) with leaky integrate-and-fire (LIF) neurons can be operated in an event-driven manner and have internal states that retain information over time, providing opportunities for energy-efficient neuromorphic computing, especially on edge devices. However, many representative works on SNNs do not fully demonstrate the usefulness of their inherent recurrence (the membrane potential retaining information about the past) for sequential learning. Most of these works train SNNs to recognize static images from input representations artificially expanded in time through rate coding. We show that SNNs can be trained for practical sequential tasks by proposing modifications to a network of LIF neurons that enable the internal states to learn long sequences and make the inherent recurrence resilient to the vanishing-gradient problem. We then develop a training scheme for the proposed SNNs with improved inherent recurrence dynamics. Our training scheme allows spiking neurons to produce multi-bit outputs (as opposed to binary spikes), which helps mitigate the mismatch between the derivative of the spiking neurons' activation function and the surrogate derivative used to overcome their non-differentiability. Our experimental results indicate that the proposed SNN architecture yields accuracy comparable to that of LSTMs on the TIMIT and LibriSpeech 100h speech recognition datasets (within 1.10% and 0.36%, respectively), but with 2x fewer parameters. The sparse SNN outputs also lead to 10.13x and 11.14x savings in multiplication operations compared to GRUs, which are generally considered a lightweight alternative to LSTMs, on the TIMIT and LibriSpeech 100h datasets, respectively.
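
To make the mechanism concrete, here is a minimal sketch (an illustration, not the authors' code) of a LIF neuron whose membrane potential carries information across timesteps and whose spike function is trained through a surrogate derivative, as the abstract describes; the decay constant, threshold, and rectangular surrogate window are illustrative assumptions.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate derivative."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Surrogate: pass gradients only near the threshold.
        return grad_out * (x.abs() < 0.5).float()

def lif_forward(inputs, tau=0.9, v_th=1.0):
    """inputs: (timesteps, batch, features) presynaptic currents."""
    v = torch.zeros_like(inputs[0])
    spikes = []
    for x_t in inputs:            # the membrane potential v carries
        v = tau * v + x_t         # information across timesteps
        s = SpikeFn.apply(v - v_th)
        v = v - s * v_th          # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)

out = lif_forward(torch.randn(20, 4, 8))  # 20 timesteps, batch 4, 8 units
```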
2

Chunduri, Raghavendra K., and Darshika G. Perera. "Neuromorphic Sentiment Analysis Using Spiking Neural Networks." Sensors 23, no. 18 (September 6, 2023): 7701. http://dx.doi.org/10.3390/s23187701.

Abstract:
Over the past decade, the artificial neural networks domain has seen considerable adoption of deep neural networks across many applications. However, deep neural networks are typically computationally complex and consume high power, hindering their applicability in resource-constrained applications such as self-driving vehicles, drones, and robotics. Spiking neural networks, often employed to bridge the gap between machine learning and neuroscience, are considered a promising solution for resource-constrained applications. Since deploying spiking neural networks on traditional von Neumann architectures requires significant processing time and high power, neuromorphic hardware is typically created to execute them. The objective of neuromorphic devices is to mimic the distinctive functionalities of the human brain in terms of energy efficiency, computational power, and robust learning. Furthermore, natural language processing has been widely utilized to aid machines in comprehending human language, yet natural language processing techniques cannot be deployed efficiently on traditional computing platforms either. In this research work, we strive to enhance natural language processing abilities by harnessing and integrating the traits of SNNs, and by deploying the integrated solution on neuromorphic hardware efficiently and effectively. To facilitate this endeavor, we propose a novel and efficient sentiment analysis model built as a large-scale SNN on SpiNNaker neuromorphic hardware that responds to user inputs. SpiNNaker can simulate large spiking neural networks in real time while consuming low power. We initially create an artificial neural network model and train it on the Internet Movie Database (IMDB) dataset. Next, the pre-trained artificial neural network model is converted into our proposed spiking neural network model, called the spiking sentiment analysis (SSA) model. Our SSA model on SpiNNaker, called SSA-SpiNNaker, is built to respond to user inputs with a positive or negative response. Our proposed SSA-SpiNNaker model achieves 100% accuracy and consumes only 3970 joules of energy while processing around 10,000 words and predicting a positive/negative review. Our experimental results and analysis demonstrate that, by leveraging the parallel and distributed capabilities of SpiNNaker, the proposed SSA-SpiNNaker model achieves better performance than artificial neural network models. Our investigation into existing works revealed no similar models in the published literature, demonstrating the uniqueness of our proposed model. This work offers a synergy between SNNs and NLP within the neuromorphic computing domain, addressing challenges such as computational complexity and power consumption. It not only enhances the capabilities of sentiment analysis but also contributes to the advancement of brain-inspired computing, and could be utilized in other resource-constrained and low-power applications, such as robotics and autonomous and smart systems.
3

Szczęsny, Szymon, Damian Huderek, and Łukasz Przyborowski. "Spiking Neural Network with Linear Computational Complexity for Waveform Analysis in Amperometry." Sensors 21, no. 9 (May 10, 2021): 3276. http://dx.doi.org/10.3390/s21093276.

Abstract:
The paper describes the architecture of a Spiking Neural Network (SNN) for time-waveform analysis using edge computing. The network model is based on the principles of signal preprocessing in the diencephalon and uses the tonic spiking and inhibition-induced spiking models typical of the thalamus area. The research focused on significantly reducing the complexity of the SNN algorithm by eliminating most synaptic connections and ensuring zero dispersion of weight values for connections between neuron layers. The paper describes a network mapping and learning algorithm in which the number of variables in the learning process depends linearly on the size of the patterns. The work included testing the stability of the accuracy parameter for various network sizes. The described approach exploits the ability of spiking neurons to process currents of less than 100 pA, typical of amperometric techniques. An example practical application is the analysis of vesicle fusion signals using an amperometric system based on Carbon NanoTube (CNT) sensors. The paper concludes with a discussion of the costs of implementing the network as a semiconductor structure.
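
The tonic-spiking regime named in the abstract can be reproduced with the standard Izhikevich neuron model; the sketch below uses Izhikevich's published tonic-spiking parameters, not values from this paper (the inhibition-induced regime uses a different parameter set, omitted here).

```python
def izhikevich(a, b, c, d, I, T=1000.0, dt=0.25):
    """Euler integration of the Izhikevich model; returns spike times (ms)."""
    v, u, spike_times = -65.0, b * -65.0, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike cutoff
            spike_times.append(step * dt)
            v, u = c, u + d           # reset
    return spike_times

# Standard tonic-spiking parameters from Izhikevich's published examples.
print(len(izhikevich(a=0.02, b=0.2, c=-65.0, d=6.0, I=14.0)), "spikes")
```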
4

Ngu, Huynh Cong Viet, and Keon Myung Lee. "Effective Conversion of a Convolutional Neural Network into a Spiking Neural Network for Image Recognition Tasks." Applied Sciences 12, no. 11 (June 6, 2022): 5749. http://dx.doi.org/10.3390/app12115749.

Abstract:
Due to their energy efficiency, spiking neural networks (SNNs) have gradually been considered an alternative to convolutional neural networks (CNNs) in various machine learning tasks. In image recognition tasks, CNN–SNN conversion, which leverages the superior capability of CNNs, is considered one of the most successful approaches to training SNNs. However, previous works assume that a rather long inference time period, called inference latency, is allowed, trading off inference latency against accuracy. One of the main reasons for this phenomenon is the difficulty of determining a proper firing threshold for spiking neurons; the threshold determination procedure is called a threshold balancing technique in the CNN–SNN conversion approach. This paper proposes a CNN–SNN conversion method with a new threshold balancing technique that obtains converted SNN models with good accuracy even at low latency. The proposed method organizes the SNN models with soft-reset IF spiking neurons. The threshold balancing technique estimates the thresholds for spiking neurons based on the maximum input current in a layerwise and channelwise manner. The experimental results show that our converted SNN models attain even higher accuracy than the corresponding trained CNN model on the MNIST dataset at low latency. In addition, on the Fashion-MNIST and CIFAR-10 datasets, our converted SNNs show less conversion loss than other methods at low latencies. The proposed method can be beneficial for deploying efficient SNN models for recognition tasks on resource-limited systems, because inference latency is strongly associated with energy consumption.
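
A minimal sketch (assumptions, not the authors' code) of the two ingredients the abstract names: a soft-reset IF neuron, and setting a layer's firing threshold from the maximum input current observed on calibration data.

```python
import torch

@torch.no_grad()
def balance_threshold(layer, calibration_batches):
    """Set the threshold to the maximum input current seen on real data."""
    v_th = 0.0
    for x in calibration_batches:
        v_th = max(v_th, layer(x).max().item())
    return v_th

def soft_reset_if(currents, v_th):
    """currents: (timesteps, ...). Soft reset subtracts v_th, so residual
    charge above threshold is kept instead of discarded, which reduces
    conversion error at low latency."""
    v = torch.zeros_like(currents[0])
    out = []
    for c_t in currents:
        v = v + c_t
        s = (v >= v_th).float()
        v = v - s * v_th              # soft (subtractive) reset
        out.append(s)
    return torch.stack(out)

layer = torch.nn.Linear(8, 4)
v_th = balance_threshold(layer, [torch.rand(32, 8) for _ in range(4)])
spikes = soft_reset_if(torch.rand(16, 32, 4), v_th)
```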
5

Yan, Zhanglu, Jun Zhou, and Weng-Fai Wong. "Near Lossless Transfer Learning for Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10577–84. http://dx.doi.org/10.1609/aaai.v35i12.17265.

Abstract:
Spiking neural networks (SNNs) significantly reduce energy consumption by replacing weight multiplications with additions. This makes SNNs suitable for energy-constrained platforms. However, due to their discrete activations, training SNNs remains a challenge. A popular approach is to first train an equivalent CNN using traditional backpropagation and then transfer the weights to the intended SNN. Unfortunately, this often results in significant accuracy loss, especially in deeper networks. In this paper, we propose CQ training (Clamped and Quantized training), an SNN-compatible CNN training algorithm with clamping and quantization that achieves near-zero conversion accuracy loss. Essentially, CNN training in CQ training accounts for certain SNN characteristics. Using a 7-layer VGG-* and a 21-layer VGG-19 on the CIFAR-10 dataset, we achieved 94.16% and 93.44% accuracy in the respective equivalent SNNs, outperforming other comparable works that we know of. We also demonstrate low-precision weight compatibility for the VGG-19 structure: without retraining, accuracies of 93.43% and 92.82% were achieved using quantized 9-bit and 8-bit weights, respectively. The framework was developed in PyTorch and is publicly available.
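
A minimal sketch of the clamp-and-quantize idea that CQ training's name describes: bound the CNN activations and snap them to the discrete levels an SNN can realize with a finite number of timesteps. The level count and the straight-through gradient are illustrative assumptions, not the paper's exact formulation.

```python
import torch

class ClampQuantize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, levels=32):
        ctx.save_for_backward(x)
        x = x.clamp(0.0, 1.0)                     # clamp
        return torch.round(x * levels) / levels   # quantize

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator inside the clamped range.
        return grad_out * ((x >= 0) & (x <= 1)).float(), None

act = ClampQuantize.apply(torch.randn(8))
```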
6

Kim, Youngeun, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, and Priyadarshini Panda. "Exploring Temporal Information Dynamics in Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8308–16. http://dx.doi.org/10.1609/aaai.v37i7.26002.

Abstract:
Most existing Spiking Neural Network (SNN) works state that SNNs may utilize the temporal information dynamics of spikes. However, an explicit analysis of temporal information dynamics is still missing. In this paper, we ask several important questions to provide a fundamental understanding of SNNs: What are the temporal information dynamics inside SNNs? How can we measure them? How do they affect overall learning performance? To answer these questions, we empirically estimate the Fisher Information of the weights to measure the distribution of temporal information during training. Surprisingly, as training goes on, the Fisher information starts to concentrate in the early timesteps. After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration. We observe that this phenomenon is a common learning feature of SNNs by conducting extensive experiments on various configurations of architecture, dataset, optimization strategy, time constant, and number of timesteps. Furthermore, to reveal how temporal information concentration affects the performance of SNNs, we design a loss function to change the trend of temporal information. We find that temporal information concentration is crucial to building a robust SNN but has little effect on classification accuracy. Finally, we propose an efficient iterative pruning method based on our observations of temporal information concentration. Code is available at https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Temporal-Information-Dynamics-in-Spiking-Neural-Networks.
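
A minimal sketch of measuring an empirical Fisher information of the weights per timestep, in the spirit of the abstract; the diagonal squared-gradient approximation and the per-timestep loss interface are assumptions, not the authors' exact procedure.

```python
import torch

def fisher_per_timestep(model, loss_per_t, batch):
    """loss_per_t(model, batch, t) -> scalar loss using only timestep t.
    Returns one Fisher score per timestep; concentration = how the
    scores distribute over t."""
    scores = []
    for t in range(batch["timesteps"]):
        model.zero_grad()
        loss_per_t(model, batch, t).backward()
        f = sum((p.grad ** 2).sum().item()
                for p in model.parameters() if p.grad is not None)
        scores.append(f)
    return scores

model = torch.nn.Linear(8, 2)
spikes = torch.rand(5, 16, 8)          # (timesteps, batch, features)
labels = torch.randint(0, 2, (16,))
loss_t = lambda m, b, t: torch.nn.functional.cross_entropy(m(b["x"][t]), b["y"])
print(fisher_per_timestep(model, loss_t, {"timesteps": 5, "x": spikes, "y": labels}))
```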
7

Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction." International Journal of Robotics and Control Systems 1, no. 4 (November 25, 2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.

Abstract:
Artificial neural networks (ANNs) can approximate signals and give interesting results in pattern recognition, and some works use them for control applications. However, biological neurons do not generate signals similar to those obtained by ANNs. Spiking neurons are an interesting topic since they simulate the real behavior depicted by biological neurons. This paper employs a spiking neuron to compute a PID control, which is then applied to the Van de Vusse reaction. This reaction, like the inverted pendulum, is a benchmark for working with systems that have an inverse response, which causes the output to undershoot. One problem is how to encode information that the neuron can interpret, and how to decode the spike generated by the neuron in order to interpret its behavior. In this work, a spiking neuron computes a PID control by coding in time the spikes it generates. The neuron has the PID gains as its synaptic weights, and the spike observed at the axon is the coded control signal. The neuron adaptation tries to obtain the weights necessary to generate the spike at the instant required to control the chemical reaction. The simulation results show the possibility of using this kind of neuron for control problems, and of using a spiking neural network to overcome the undershoot caused by the inverse response of the chemical reaction.
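
For reference, a minimal sketch of the discrete PID law that the spiking neuron is trained to reproduce in this paper; here the control signal is computed directly, with the PID gains playing the role the abstract assigns to the synaptic weights. Gains and sampling time are illustrative assumptions.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # I term accumulates
        deriv = (err - self.prev_err) / self.dt        # D term differentiates
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

controller = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.01)
u = controller.step(setpoint=1.0, measured=0.3)
```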
8

Wu, Yujie, Lei Deng, Guoqi Li, Jun Zhu, Yuan Xie, and Luping Shi. "Direct Training for Spiking Neural Networks: Faster, Larger, Better." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1311–18. http://dx.doi.org/10.1609/aaai.v33i01.33011311.

Abstract:
Spiking neural networks (SNNs), which enable energy-efficient implementation on emerging neuromorphic hardware, are gaining increasing attention. Yet SNNs have not shown performance competitive with artificial neural networks (ANNs), due to the lack of effective learning algorithms and efficient programming frameworks. We address this issue from two aspects: (1) we propose a neuron normalization technique to adjust neural selectivity and develop a direct learning algorithm for deep SNNs; (2) by narrowing the rate-coding window and converting the leaky integrate-and-fire (LIF) model into an explicitly iterative version, we present a PyTorch-based implementation method for training large-scale SNNs. In this way, we are able to train deep SNNs with tens of times speedup. As a result, we achieve significantly better accuracy than reported works on neuromorphic datasets (N-MNIST and DVS-CIFAR10), and accuracy comparable to existing ANNs and pre-trained SNNs on non-spiking datasets (CIFAR10). To the best of our knowledge, this is the first work that demonstrates direct training of deep SNNs with high performance on CIFAR10, and the efficient implementation provides a new way to explore the potential of SNNs.
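
A minimal sketch of an "explicitly iterative" LIF update of the kind the abstract describes, where the reset is folded into the update so the whole loop is expressible with standard autograd-friendly tensor operations; decay and threshold values are illustrative assumptions.

```python
import torch

def iterative_lif(x, w, tau=0.5, v_th=0.5):
    """x: (T, batch, in_features); w: (in_features, out_features)."""
    v = torch.zeros(x.shape[1], w.shape[1])
    o = torch.zeros_like(v)
    outs = []
    for t in range(x.shape[0]):
        v = tau * v * (1.0 - o) + x[t] @ w   # decay; hard reset via (1 - o)
        o = (v >= v_th).float()
        outs.append(o)
    return torch.stack(outs)

spikes = iterative_lif(torch.rand(10, 4, 8), torch.randn(8, 6))
```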
9

Lourenço, J., Q. R. Al-Taai, A. Al-Khalidi, E. Wasige, and J. Figueiredo. "Resonant Tunnelling Diode – Photodetectors for Spiking Neural Networks." Journal of Physics: Conference Series 2407, no. 1 (December 1, 2022): 012047. http://dx.doi.org/10.1088/1742-6596/2407/1/012047.

Abstract:
Spike-based neuromorphic devices promise to alleviate the energy appetite of artificial intelligence hardware by using spiking neural networks (SNNs), which employ neuron-like units to process information through the timing of spikes. These neuron-like devices only consume energy when active. Recent works have shown that resonant tunnelling diodes (RTDs) incorporating optoelectronic functionalities such as photodetection and light emission can play a major role in photonic SNNs. RTDs are devices that display an N-shaped current-voltage characteristic capable of providing negative differential conductance (NDC) over a range of operating voltages. Specifically, RTD photodetectors (RTD-PDs) show promise due to their unique combination of structural simplicity and highly complex non-linear behavior. The goal of this work is to present a systematic study of how the thickness of the RTD-PD light absorption layers (100, 250, 500 nm) and the device size impact the performance of InGaAs RTD-PDs, namely their responsivity and time response when operating in the third (1550 nm) optical transmission window. Our focus is on the overall characterization of the device's optoelectronic response, including the impact of light absorption on the static current-voltage characteristic, the responsivity, and the photodetection time response. For the static characterization, the devices' I-V curves were measured under dark conditions and under illumination, giving insights into the light-induced I-V tunability effect. The RTD-PD responsivity was compared to the response of a commercial photodetector. The characterization of the temporal response included the capacity to generate optically induced neuron-like electrical spikes, that is, to work as an optical-to-electrical spike converter. The experimental data obtained at each characterization phase are being used for the evaluation and refinement of a behavioral model of RTD-PDs that is under construction.

Theses on the topic "Spiking neural networks"

1

Ali, Elsayed Sarah. "Fault Tolerance in Hardware Spiking Neural Networks." Electronic thesis or dissertation, Sorbonne Université, 2021. http://www.theses.fr/2021SORUS310.

Abstract:
Artificial Intelligence (AI) and machine learning algorithms are taking up the lion's share of the technology market nowadays, and hardware AI accelerators are foreseen to play an increasing role in numerous applications, many of which are mission- and safety-critical. This requires assessing their reliability and developing cost-effective fault tolerance techniques, an issue that remains largely unexplored for neuromorphic chips and Spiking Neural Networks (SNNs). A tacit assumption is often made that reliability and error-resiliency in Artificial Neural Networks (ANNs) are inherently achieved thanks to their high parallelism, structural redundancy, and resemblance to their biological counterparts. However, prior work in the literature unraveled the falsity of this assumption and exposed the vulnerability of ANNs to faults. In this thesis, we tackle the subject of testing and fault tolerance in hardware SNNs. We start by addressing the issue of post-manufacturing test and behavior-oriented self-test of hardware neurons. Then we move on towards a global solution for the acceleration of testing and resiliency analysis of SNNs against hardware-level faults. We also propose a neuron fault tolerance strategy for SNNs, optimized for low area and power overhead. Finally, we present a hardware case study used as a platform for demonstrating fault-injection experiments and fault-tolerance capabilities.
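
A minimal sketch of weight-level fault injection of the kind used to assess SNN resilience; the stuck-at fault model and fault fraction below are assumptions for illustration (the thesis also considers other hardware-level fault models).

```python
import random
import torch

def inject_stuck_at(weights, fraction=0.01, stuck_value=0.0):
    """Force a random fraction of synaptic weights to a stuck value,
    then return the faulty copy for resiliency evaluation."""
    w = weights.clone()
    flat = w.view(-1)
    n_faults = max(1, int(fraction * flat.numel()))
    for i in random.sample(range(flat.numel()), n_faults):
        flat[i] = stuck_value
    return w

faulty = inject_stuck_at(torch.randn(4, 4), fraction=0.1)
```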

Book chapters on the topic "Spiking neural networks"

1

Antonietti, Alberto, Claudia Casellato, Egidio D'Angelo, and Alessandra Pedrocchi. "Computational Modelling of Cerebellar Magnetic Stimulation: The Effect of Washout." In Lecture Notes in Computer Science, 35–46. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82427-3_3.

Abstract:
Nowadays, clinicians have multiple tools that they can use to stimulate the brain by means of electric or magnetic fields that interfere with the bio-electrical behaviour of neurons. However, it is still unclear which neural mechanisms are involved and how external stimulation changes neural responses at the network level. In this paper, we exploited simulations carried out with a spiking neural network model reconstructing the cerebellar system to shed light on the mechanisms by which cerebellar Transcranial Magnetic Stimulation (TMS) affects specific task behaviour. Namely, two computational studies were merged and compared. The two studies employed very similar experimental protocols: a first session of Pavlovian associative conditioning, the administration of TMS (effective or sham), a washout period, and a second session of Pavlovian associative conditioning. In one study the washout period between the two sessions was long (1 week), while the other foresaw a very short washout (15 min). Computational models suggested a mechanistic explanation for the TMS effect on the cerebellum. In this work, we found that the duration of the washout strongly changes the modification of plasticity mechanisms in the cerebellar network, which is then reflected in the learning behaviour.
2

van Albada, Sacha J., Jari Pronold, Alexander van Meegen, and Markus Diesmann. "Usage and Scaling of an Open-Source Spiking Multi-Area Model of Monkey Cortex." In Lecture Notes in Computer Science, 47–59. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82427-3_4.

Abstract:
We are entering an age of 'big' computational neuroscience, in which neural network models are increasing in size and in the number of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other's work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey to their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as its simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of an ICT infrastructure for neuroscience.
3

Zheng, Honghao, and Yang Cindy Yi. "Spiking Neural Encoding and Hardware Implementations for Neuromorphic Computing." In Neuromorphic Computing [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.113050.

Abstract:
Due to the high computational demands of modern data-intensive applications, the traditional von Neumann architecture and neuromorphic computing architectures have started to play complementary roles in computing. Neuromorphic architectures have attracted much attention for their high data capacity and power efficiency. In this chapter, the basic concepts of neuromorphic computing are discussed, including spiking codes and neurons. A spiking encoder converts analog signals into spike signals, thus avoiding power-consuming analog-to-digital converters. Comparisons of the training accuracy and robustness of neural codes are carried out, and circuit implementations of spiking temporal encoders are briefly introduced. The encoding schemes are evaluated on the PyTorch platform with the most common datasets, such as the Modified National Institute of Standards and Technology (MNIST), the Canadian Institute for Advanced Research 10-class (CIFAR-10), and The Street View House Numbers (SVHN) datasets. The results show that the multiplexing temporal code offers high data capacity, robustness, and low training error, achieving at least 6.4% higher accuracy than other state-of-the-art works using other encoding schemes.
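
For orientation, a minimal sketch of two common spiking encoders of the kind the chapter compares (a rate code and a latency/temporal code); the chapter's exact schemes, including the multiplexing temporal code, differ, so treat this as illustrative only.

```python
import numpy as np

def rate_encode(x, timesteps, rng=np.random.default_rng(0)):
    """x in [0, 1] -> Bernoulli spike train; the spike count carries the value."""
    return (rng.random((timesteps,) + x.shape) < x).astype(np.uint8)

def latency_encode(x, timesteps):
    """x in [0, 1] -> a single spike; larger values fire earlier."""
    train = np.zeros((timesteps,) + x.shape, dtype=np.uint8)
    t_fire = np.round((1.0 - x) * (timesteps - 1)).astype(int)
    for idx, t in np.ndenumerate(t_fire):
        train[(t,) + idx] = 1
    return train

pixels = np.array([0.1, 0.5, 0.9])
print(rate_encode(pixels, 8).sum(axis=0))       # spike counts per pixel
print(latency_encode(pixels, 8).argmax(axis=0)) # first-spike times per pixel
```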
4

Frick, Nikolay. "Neuromorphic Computing with Resistive Memory and Bayesian Machines." In Memristors – the Fourth Fundamental Circuit Element – Theory, Device, and Applications [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.1003254.

Abstract:
Bio-inspired computing with memristors and neuristors offers promising pathways to energy-efficient intelligence. This work reviews toolkits for implementing spiking neural networks and Bayesian machine learning directly in hardware using these emerging devices. We first demonstrate that normally passive memristors can exhibit neuristor-like oscillatory behavior when heating and cooling are taken into account. Such oscillations enable spike-based neural computing. We then summarize recent works on leveraging the intrinsic switching stochasticity of memristive devices to physically embed Bayesian models and perform in-situ probabilistic inference. While still facing challenges in endurance, variation tolerance, and peripheral circuitry, this co-design approach combining tailored algorithms and nanodevices could enable a new class of ultra-low-power brain-inspired intelligence, tolerant to uncertainty and capable of learning from small datasets. Longer term, hybrid CMOS-memristor systems with sensing and actuation may provide fully adaptive Bayesian edge intelligence. Overall, the confluence of probabilistic algorithms and memristive hardware holds promise for future electronics combining efficiency, adaptability, and human-like reasoning. Academic innovations exploring this algorithm-hardware co-design can lay the foundation for the emerging paradigm of probabilistic cognitive computing.
5

Gamez, David. "The Simulation of Spiking Neural Networks." In Handbook of Research on Discrete Event Simulation Environments, 337–58. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-774-4.ch015.

Abstract:
This chapter is an overview of the simulation of spiking neural networks that relates discrete event simulation to other approaches and includes a case study of recent work. The chapter starts with an introduction to the key components of the brain and sets out three neuron models that are commonly used in simulation work. After explaining discrete event, continuous, and hybrid simulation, the performance of each method is evaluated and recent research is discussed. To illustrate the issues surrounding this work, the second half of the chapter presents a case study of the SpikeStream neural simulator, covering the architecture, performance, and typical applications of this software along with some recent experiments. The last part of the chapter suggests some future trends for work in this area.
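
A minimal sketch of the discrete event approach the chapter surveys: a priority queue of spike-delivery events, with neurons updated only when an event arrives. The non-leaky IF neuron, network format, and parameters here are illustrative assumptions.

```python
import heapq

def simulate(initial_spikes, out_syn, v_th=1.0, t_end=100.0):
    """initial_spikes: list of (time, neuron); out_syn: dict mapping a
    neuron to a list of (target, weight, delay) synapses."""
    queue = [(t, n, v_th) for t, n in initial_spikes]  # force initial firing
    heapq.heapify(queue)
    v, fired = {}, []
    while queue:
        t, n, w = heapq.heappop(queue)
        if t > t_end:
            break
        v[n] = v.get(n, 0.0) + w          # update only on event arrival
        if v[n] >= v_th:                  # fire: schedule delayed deliveries
            v[n] = 0.0
            fired.append((t, n))
            for target, weight, delay in out_syn.get(n, []):
                heapq.heappush(queue, (t + delay, target, weight))
    return fired

print(simulate([(0.0, "in")], {"in": [("out", 1.0, 2.0)]}))
```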
6

Dumesnil, Etienne, Philippe-Olivier Beaulieu, and Mounir Boukadoum. "Single SNN Architecture for Classical and Operant Conditioning Using Reinforcement Learning." In Robotic Systems, 786–810. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1754-3.ch041.

Abstract:
A bio-inspired robotic brain is presented in which the same spiking neural network (SNN) can implement five variations of learning by conditioning (LC): classical conditioning (CC), and operant conditioning (OC) with positive/negative reinforcement/punishment. In all cases, the links between input stimuli, output actions, reinforcements, and punishments are strengthened depending on the stability of the delays between them. To account for the parallel processing nature of neural networks, the SNN is implemented on a field-programmable gate array (FPGA), and the neural delays are extracted via an adaptation of the synapto-dendritic kernel adapting neuron (SKAN) model, yielding a low-resource FPGA implementation of the SNN. A custom robotic platform successfully tested the ability of the proposed architecture to implement the five LC behaviors. Hence, this work contributes to the engineering field by proposing a scalable, low-resource architecture for adaptive systems, and to the cognitive field by suggesting that both CC and OC can be modeled by a single cognitive architecture.
7

Cabarle, F., H. Adorna, and M. A. Martínez-del-Amor. "Simulating Spiking Neural P Systems Without Delays Using GPUs." In Natural Computing for Simulation and Knowledge Discovery, 109–21. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4253-9.ch006.

Abstract:
In this paper, the authors discuss the simulation of a P system variant known as Spiking Neural P systems (SNP systems) using Graphics Processing Units (GPUs). GPUs are well suited to highly parallel computations because of their intentionally massively parallel architecture. General-purpose GPU computing has seen GPUs used for computationally intensive applications beyond graphics and video processing. P systems, including SNP systems, are maximally parallel computing models taking inspiration from the functioning and dynamics of a living cell. In particular, SNP systems take inspiration from a type of cell known as a neuron. The nature of SNP systems allows them to be represented as matrices, which is an elegant step toward their simulation on GPUs. In this paper, the simulation algorithms, design considerations, and implementation are presented. Finally, simulation results, observations, and analyses using a simple but non-trivial SNP system as an example are discussed, including recommendations for future work.
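
A minimal sketch of the matrix representation mentioned in the abstract: the next configuration equals the current one plus the chosen spiking vector times a transition matrix (the toy rules and values below are illustrative, not the paper's example system).

```python
import numpy as np

# Each row encodes one rule: spikes consumed (negative) and produced (positive).
M = np.array([[-1,  1,  0],   # rule 1: neuron 1 consumes 1 spike, sends 1 to neuron 2
              [ 0, -1,  1],   # rule 2: neuron 2 consumes 1, sends 1 to neuron 3
              [ 0,  0, -1]])  # rule 3: neuron 3 forgets 1 spike

C = np.array([2, 0, 0])       # configuration: spikes per neuron

s = np.array([1, 0, 0])       # spiking vector: rule 1 applies this step
C = C + s @ M                 # -> [1, 1, 0]
s = np.array([1, 1, 0])       # rules 1 and 2 apply in parallel
C = C + s @ M                 # -> [0, 1, 1]
print(C)
```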
8

Tang, Tiong Yew, Simon Egerton, and János Botzheim. "Spiking Reflective Processing Model for Stress-Inspired Adaptive Robot Partner Applications." In Rapid Automation, 1047–66. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8060-7.ch049.

Abstract:
In a real-world environment, a social robot is constantly required to make many critical decisions in an ambiguous and demanding (stressful) setting. Hence, a biological stress response system model is a good gauge for judging when the robot should react to such an environment and adapt itself to environmental changes. This work implements Smerek's reflective processing model in a human-robot communication application, where reflective processing is triggered in situations where the best action is not known. The authors investigate how to better address human-robot communication problems, focusing on the reflective processing model from the perspectives of working memory, Spiking Neural Networks (SNNs), and the stress response system. The authors applied their proposed Spiking Reflective Processing model to a human-robot communication application in a university population. The initial experimental results showed positive attitude changes before and after the human-robot interaction experiment.
9

Ahmed, L. Jubair, S. Dhanasekar, K. Martin Sagayam, Surbhi Vijh, Vipin Tyagi, Mayank Singh, and Alex Norta. "Introduction to Neuromorphic Computing Systems." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 1–29. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6596-7.ch001.

Abstract:
The process of using electronic circuits to replicate the neurobiological architectures present in the nervous system is known as neuromorphic engineering, also referred to as neuromorphic computing. These technologies are essential for the future of computing, although most of the work in neuromorphic computing has been focused on hardware development. Execution speed, energy efficiency, accessibility, and robustness against local failures are vital advantages of neuromorphic computing over conventional methods. Spiking neural networks are generated using neuromorphic computing. This chapter covers the basic ideas of neuromorphic engineering and neuromorphic computing, along with their motivating factors and challenges. Deep learning models are frequently referred to as deep neural networks because deep learning techniques use neural network topologies; deep learning techniques and their different architectures are also covered. Furthermore, emerging memory devices for neuromorphic systems and neuromorphic circuits are illustrated.

Conference papers on the topic "Spiking neural networks"

1

Zhang, Duzhen, Tielin Zhang, Shuncheng Jia, Qingyu Wang, and Bo Xu. "Recent Advances and New Frontiers in Spiking Neural Networks." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/790.

Abstract:
In recent years, spiking neural networks (SNNs) have received extensive attention in brain-inspired intelligence due to their rich spatio-temporal dynamics, various encoding methods, and event-driven characteristics that naturally fit neuromorphic hardware. With the development of SNNs, brain-inspired intelligence, an emerging research field inspired by brain science achievements and aiming at artificial general intelligence, is becoming a hot topic. This paper reviews recent advances and discusses new frontiers in SNNs across five major research topics: essential elements (i.e., spiking neuron models, encoding methods, and topology structures), neuromorphic datasets, optimization algorithms, software frameworks, and hardware frameworks. We hope our survey can help researchers understand SNNs better and inspire new works to advance this field.
2

Wang, Yuchen, Kexin Shi, Chengzhuo Lu, Yuguo Liu, Malu Zhang, and Hong Qu. "Spatial-Temporal Self-Attention for Asynchronous Spiking Neural Networks." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/344.

Abstract:
Brain-inspired spiking neural networks (SNNs) are receiving increasing attention due to their asynchronous, event-driven characteristics and low power consumption. As attention mechanisms have recently become an indispensable part of sequence-dependence modeling, the combination of SNNs and attention mechanisms holds great potential for energy-efficient and high-performance computing paradigms. However, existing works cannot benefit from both temporal-wise attention and the asynchronous characteristic of SNNs. To fully leverage the advantages of both SNNs and attention mechanisms, we propose an SNN-based spatial-temporal self-attention (STSA) mechanism, which calculates feature dependence across the time and space domains without destroying the asynchronous transmission properties of SNNs. To further improve performance, we also propose a spatial-temporal relative position bias (STRPB) for STSA to take the spatiotemporal position of spikes into account. Based on the STSA and STRPB, we construct a spatial-temporal spiking Transformer framework, named STS-Transformer, which is powerful and enables SNNs to work in an asynchronous, event-driven manner. Extensive experiments are conducted on popular neuromorphic and speech datasets, including DVS128 Gesture, CIFAR10-DVS, and Google Speech Commands, and our experimental results outperform other state-of-the-art models.
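
A minimal sketch of the core idea named in the abstract: self-attention computed jointly over time and space by flattening both axes into one token dimension. The spiking-specific details of STSA and the STRPB bias are omitted; the shapes and random weights below are illustrative assumptions.

```python
import math
import torch

def spatial_temporal_attention(x, wq, wk, wv):
    """x: (batch, T, N, d) features over T timesteps and N spatial positions."""
    b, T, N, d = x.shape
    tokens = x.reshape(b, T * N, d)          # joint space-time token axis
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    attn = torch.softmax(q @ k.transpose(1, 2) / math.sqrt(d), dim=-1)
    return (attn @ v).reshape(b, T, N, d)    # dependence across time AND space

x = torch.rand(2, 4, 9, 16)                  # batch 2, 4 timesteps, 9 positions
w = [torch.randn(16, 16) for _ in range(3)]
y = spatial_temporal_attention(x, *w)
```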
3

Liu, Qianhui, Dong Xing, Huajin Tang, De Ma, and Gang Pan. "Event-based Action Recognition Using Motion Information and Spiking Neural Networks." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/240.

Abstract:
Event-based cameras have attracted increasing attention due to their biologically inspired paradigm and low power consumption. Since event-based cameras record visual input as asynchronous discrete events, they are inherently suited to cooperating with spiking neural networks (SNNs). Existing works on SNNs for processing events mainly focus on object recognition. However, events from an event-based camera are triggered by dynamic changes, which makes the camera an ideal choice for capturing actions in a visual scene. Inspired by the dorsal stream in the visual cortex, we propose a hierarchical SNN architecture for event-based action recognition using motion information. Motion features are extracted from events and carried from local up to global perception for action recognition. To the best of the authors' knowledge, this is the first attempt to apply motion information in an SNN to event-based action recognition. We evaluate our proposed SNN on three event-based action recognition datasets, including our newly published DailyAction-DVS dataset comprising 12 actions collected under diverse recording conditions. Extensive experimental results show the effectiveness of motion information and of our proposed SNN architecture for event-based action recognition.
4

Wang, Yuchen, Malu Zhang, Yi Chen, and Hong Qu. "Signed Neuron with Memory: Towards Simple, Accurate and High-Efficient ANN-SNN Conversion." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/347.

Abstract:
Spiking Neural Networks (SNNs) are receiving increasing attention due to their biological plausibility and their potential for ultra-low-power, event-driven neuromorphic hardware implementation. Due to the complex temporal dynamics and the discontinuity of spikes, training SNNs directly usually requires substantial computing resources and a long training time. As an alternative, an SNN can be converted from a pre-trained artificial neural network (ANN) to bypass the difficulty of SNN learning. However, existing ANN-to-SNN methods neglect the inconsistency of information transmission between synchronous ANNs and asynchronous SNNs. In this work, we first analyze how asynchronous spikes in SNNs may cause conversion errors between ANNs and SNNs. To address this problem, we propose a signed neuron with a memory function, which enables almost no accuracy loss during the conversion process and maintains the properties of asynchronous transmission in the converted SNNs. We further propose a new normalization method, named neuron-wise normalization, to significantly shorten the inference latency of the converted SNNs. We conduct experiments on challenging datasets including CIFAR10 (95.44% top-1), CIFAR100 (78.3% top-1), and ImageNet (73.16% top-1). Experimental results demonstrate that the proposed method outperforms state-of-the-art works in terms of accuracy and inference time. The code is available at https://github.com/ppppps/ANN2SNNConversion_SNM_NeuronNorm.
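
A minimal sketch in the spirit of the abstract's "signed neuron with memory": the membrane keeps (remembers) residual charge across timesteps and may emit negative spikes to cancel earlier over-firing. The firing conditions here are simplified assumptions, not the paper's exact neuron model.

```python
import torch

def signed_neuron(currents, v_th=1.0):
    """currents: (T, ...) input currents from the converted ANN layer."""
    v = torch.zeros_like(currents[0])
    out = []
    for c_t in currents:
        v = v + c_t                    # memory: residual charge persists
        pos = (v >= v_th).float()
        neg = (v <= -v_th).float()     # signed spike corrects overshoot
        s = pos - neg
        v = v - s * v_th
        out.append(s)
    return torch.stack(out)

spikes = signed_neuron(torch.randn(16, 4, 8))
```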
5

Cheng, Xiang, Yunzhe Hao, Jiaming Xu, and Bo Xu. "LISNN: Improving Spiking Neural Networks with Lateral Interactions for Robust Object Recognition." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/211.

Abstract:
Spiking Neural Networks (SNNs) are considered more biologically plausible and energy-efficient on emerging neuromorphic hardware. Recently, the backpropagation algorithm has been utilized for training SNNs, which allows SNNs to go deeper and achieve higher performance. However, most existing SNN models for object recognition are convolutional or fully connected structures, which only have inter-layer connections, not intra-layer connections. Inspired by lateral interactions in neuroscience, we propose a high-performance and noise-robust spiking neural network (dubbed LISNN). Based on the convolutional SNN, we model the lateral interactions between spatially adjacent neurons, integrate them into the spiking neuron membrane potential formula, and then build a multi-layer SNN on a popular deep learning framework, i.e., PyTorch. We use the pseudo-derivative method to solve the non-differentiability problem when applying backpropagation to train LISNN, and we test LISNN on multiple standard datasets. Experimental results demonstrate that the proposed model achieves competitive or better performance compared to current state-of-the-art spiking neural networks on the MNIST, Fashion-MNIST, and N-MNIST datasets. Besides, thanks to lateral interactions, our model possesses stronger noise-robustness than other SNNs. Our work brings a biologically plausible mechanism into SNNs, hoping it can help us understand visual information processing in the brain.
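
A minimal sketch of adding a lateral-interaction term to the LIF membrane update, the mechanism the abstract describes; the 3x3 coupling kernel and constants are illustrative assumptions, not LISNN's exact formulation.

```python
import torch
import torch.nn.functional as F

def lif_lateral(x, lateral_kernel, tau=0.8, v_th=1.0):
    """x: (T, batch, 1, H, W) feedforward currents; lateral_kernel:
    (1, 1, 3, 3) weights coupling spatially adjacent neurons."""
    v = torch.zeros_like(x[0])
    s = torch.zeros_like(x[0])
    outs = []
    for t in range(x.shape[0]):
        lateral = F.conv2d(s, lateral_kernel, padding=1)  # neighbors' spikes
        v = tau * v * (1 - s) + x[t] + lateral            # membrane + lateral term
        s = (v >= v_th).float()
        outs.append(s)
    return torch.stack(outs)

y = lif_lateral(torch.rand(5, 2, 1, 8, 8), torch.full((1, 1, 3, 3), 0.1))
```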
6

Zhu, Zulun, Jiaying Peng, Jintang Li, Liang Chen, Qi Yu, and Siqiang Luo. "Spiking Graph Convolutional Networks." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/338.

Abstract:
Graph Convolutional Networks (GCNs) achieve impressive performance due to their remarkable representation ability in learning graph information. However, GCNs, when implemented as deep networks, require expensive computation power, making them difficult to deploy on battery-powered devices. In contrast, Spiking Neural Networks (SNNs), which perform a bio-fidelity inference process, offer an energy-efficient neural architecture. In this work, we propose SpikingGCN, an end-to-end framework that aims to integrate the embedding of GCNs with the bio-fidelity characteristics of SNNs. The original graph data are encoded into spike trains based on the incorporation of graph convolution. We further model biological information processing by utilizing a fully connected layer combined with neuron nodes. In a wide range of scenarios (e.g., citation networks, image graph classification, and recommender systems), our experimental results show that the proposed method gains competitive performance against state-of-the-art approaches. Furthermore, we show that SpikingGCN on a neuromorphic chip brings a clear advantage in energy efficiency to graph data analysis, which demonstrates its great potential for building environment-friendly machine learning models.
7

Morozov, Alexander, Karine Abgaryan, and Dmitry Reviznikov. "Simulation of a Neuromorphic Network on Memristive Elements with 1T1R Crossbar Architecture." In International Forum "Microelectronics – 2020"; Young Scientists Scholarship "Microelectronics – 2020"; XIII International Conference "Silicon – 2020"; XII Young Scientists Scholarship for Silicon Nanostructures and Devices Physics, Material Science, Process and Analysis. LLC MAKS Press, 2020. http://dx.doi.org/10.29003/m1638.silicon-2020/322-325.

8

Liu, Xiyu, and Hongyan Zhang. "Spiking DNA Neural Trees with Applications to Conceptual Design." In 2011 15th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE, 2011. http://dx.doi.org/10.1109/cscwd.2011.5960085.

9

Hong, Shen, Liu Ning, Li Xiaoping, and Wang Qian. "A Cooperative Method for Supervised Learning in Spiking Neural Networks." In 2010 14th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE, 2010. http://dx.doi.org/10.1109/cscwd.2010.5472007.

10

Jimeno Yepes, Antonio, Jianbin Tang, and Benjamin Scott Mashford. "Improving Classification Accuracy of Feedforward Neural Networks for Spiking Neuromorphic Chips." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/274.

Abstract:
Deep Neural Networks (DNNs) achieve human-level performance in many image analytics tasks, but DNNs are mostly deployed on GPU platforms that consume a considerable amount of power. New hardware platforms using lower-precision arithmetic achieve drastic reductions in power consumption. More recently, brain-inspired spiking neuromorphic chips have achieved even lower power consumption, on the order of milliwatts, while still offering real-time processing. However, for deploying DNNs to energy-efficient neuromorphic chips, the incompatibility between the continuous neurons and synaptic weights of traditional DNNs and the discrete spiking neurons and synapses of neuromorphic chips needs to be overcome. Previous work achieved this by training a network to learn continuous probabilities before deploying it to a neuromorphic architecture, such as the IBM TrueNorth Neurosynaptic System, by randomly sampling these probabilities. The main contribution of this paper is a new learning algorithm that learns a TrueNorth configuration ready for deployment. We achieve this by directly training a binary hardware crossbar that accommodates the TrueNorth axon configuration constraints, and we propose a different neuron model. Results of our approach trained on electroencephalogram (EEG) data show a significant improvement over previous work (76% vs. 86% accuracy) while maintaining state-of-the-art performance on the MNIST handwritten digit data set.