Journal articles on the topic "Spiking neural works"

Follow this link to see other types of publications on the topic: Spiking neural works.

Create a precise reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the top 50 journal articles for research on the topic "Spiking neural works".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract online, whenever it is available in the metadata.

Browse journal articles from a wide variety of scientific fields and compile an accurate bibliography.

1

Ponghiran, Wachirawit, and Kaushik Roy. "Spiking Neural Networks with Improved Inherent Recurrence Dynamics for Sequential Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 8001–8. http://dx.doi.org/10.1609/aaai.v36i7.20771.

Full text of the source
Abstract:
Spiking neural networks (SNNs) with leaky integrate-and-fire (LIF) neurons can be operated in an event-driven manner and have internal states to retain information over time, providing opportunities for energy-efficient neuromorphic computing, especially on edge devices. Note, however, that many representative works on SNNs do not fully demonstrate the usefulness of their inherent recurrence (membrane potential retaining information about the past) for sequential learning. Most of these works train SNNs to recognize static images by artificially expanding the input representation in time through rate coding. We show that SNNs can be trained for practical sequential tasks by proposing modifications to a network of LIF neurons that enable internal states to learn long sequences and make their inherent recurrence resilient to the vanishing gradient problem. We then develop a training scheme to train the proposed SNNs with improved inherent recurrence dynamics. Our training scheme allows spiking neurons to produce multi-bit outputs (as opposed to binary spikes), which helps mitigate the mismatch between the derivative of the spiking neurons' activation function and the surrogate derivative used to overcome their non-differentiability. Our experimental results indicate that the proposed SNN architecture yields accuracy comparable to that of LSTMs on the TIMIT and LibriSpeech 100h speech recognition datasets (within 1.10% and 0.36%, respectively), but with 2x fewer parameters. The sparse SNN outputs also lead to 10.13x and 11.14x savings in multiplication operations compared to GRUs, which are generally considered a lightweight alternative to LSTMs, on the TIMIT and LibriSpeech 100h datasets, respectively.
Styles: ABNT, Harvard, Vancouver, APA, etc.
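The LIF dynamics and surrogate gradient mentioned in the abstract above can be sketched in a few lines. This is a generic illustration, not the paper's implementation: the decay factor, threshold, hard reset, and sigmoid-shaped surrogate are common conventions assumed here for clarity.

```python
import numpy as np

def lif_step(v, i_in, decay=0.9, threshold=1.0):
    """One discrete LIF update: leak, integrate, fire, hard reset."""
    v = decay * v + i_in
    spike = 1.0 if v >= threshold else 0.0
    if spike:
        v = 0.0  # hard reset after firing
    return v, spike

def surrogate_grad(v, threshold=1.0, beta=1.0):
    """Sigmoid-shaped surrogate for the non-differentiable spike function."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return s * (1.0 - s) * beta

# Drive the neuron with a constant current; it fires once the
# membrane potential crosses the threshold, then resets.
v, spikes = 0.0, []
for i_in in [0.4, 0.4, 0.4, 0.4]:
    v, s = lif_step(v, i_in)
    spikes.append(s)
```

The surrogate derivative is what makes backpropagation through the spike possible: it replaces the ill-defined gradient of the step function with a smooth bump centered at the threshold.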
2

Chunduri, Raghavendra K., and Darshika G. Perera. "Neuromorphic Sentiment Analysis Using Spiking Neural Networks". Sensors 23, no. 18 (September 6, 2023): 7701. http://dx.doi.org/10.3390/s23187701.

Full text of the source
Abstract:
Over the past decade, the artificial neural networks domain has seen a considerable embracement of deep neural networks among many applications. However, deep neural networks are typically computationally complex and consume high power, hindering their applicability to resource-constrained applications, such as self-driving vehicles, drones, and robotics. Spiking neural networks, often employed to bridge the gap between machine learning and neuroscience, are considered a promising solution for resource-constrained applications. Since deploying spiking neural networks on traditional von Neumann architectures requires significant processing time and high power, neuromorphic hardware is typically created to execute spiking neural networks. The objective of neuromorphic devices is to mimic the distinctive functionalities of the human brain in terms of energy efficiency, computational power, and robust learning. Furthermore, natural language processing, a machine learning technique, has been widely utilized to aid machines in comprehending human language. However, natural language processing techniques cannot be deployed efficiently on traditional computing platforms either. In this research work, we strive to enhance natural language processing abilities by harnessing and integrating SNN traits, and by deploying the integrated solution on neuromorphic hardware, efficiently and effectively. To facilitate this endeavor, we propose a novel and efficient sentiment analysis model created using a large-scale SNN model on SpiNNaker neuromorphic hardware that responds to user inputs. SpiNNaker neuromorphic hardware can typically simulate large spiking neural networks in real time and consumes low power. We initially create an artificial neural network model and then train the model using the Internet Movie Database (IMDB) dataset. Next, the pre-trained artificial neural network model is converted into our proposed spiking neural network model, called the spiking sentiment analysis (SSA) model. Our SSA model using SpiNNaker, called SSA-SpiNNaker, is created so as to respond to user inputs with a positive or negative response. Our proposed SSA-SpiNNaker model achieves 100% accuracy and consumes only 3970 Joules of energy while processing around 10,000 words and predicting a positive/negative review. Our experimental results and analysis demonstrate that, by leveraging the parallel and distributed capabilities of SpiNNaker, our proposed SSA-SpiNNaker model achieves better performance than artificial neural network models. Our investigation into existing works revealed that no similar models exist in the published literature, demonstrating the uniqueness of our proposed model. Our proposed work offers a synergy between SNNs and NLP within the neuromorphic computing domain, in order to address many challenges in this domain, including computational complexity and power consumption. Our proposed model not only enhances the capabilities of sentiment analysis but also contributes to the advancement of brain-inspired computing. It could be utilized in other resource-constrained and low-power applications, such as robotics and autonomous and smart systems.
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Szczęsny, Szymon, Damian Huderek, and Łukasz Przyborowski. "Spiking Neural Network with Linear Computational Complexity for Waveform Analysis in Amperometry". Sensors 21, no. 9 (May 10, 2021): 3276. http://dx.doi.org/10.3390/s21093276.

Full text of the source
Abstract:
The paper describes the architecture of a Spiking Neural Network (SNN) for time-waveform analysis using edge computing. The network model was based on the principles of signal preprocessing in the diencephalon, using the tonic spiking and inhibition-induced spiking models typical of the thalamus area. The research focused on significantly reducing the complexity of the SNN algorithm by eliminating most synaptic connections and ensuring zero dispersion of weight values for connections between neuron layers. The paper describes a network mapping and learning algorithm in which the number of variables in the learning process depends linearly on the size of the patterns. The work included testing the stability of the accuracy parameter for various network sizes. The described approach exploits the ability of spiking neurons to process currents of less than 100 pA, typical of amperometric techniques. An example of a practical application is the analysis of vesicle fusion signals using an amperometric system based on Carbon NanoTube (CNT) sensors. The paper concludes with a discussion of the costs of implementing the network as a semiconductor structure.
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Ngu, Huynh Cong Viet, and Keon Myung Lee. "Effective Conversion of a Convolutional Neural Network into a Spiking Neural Network for Image Recognition Tasks". Applied Sciences 12, no. 11 (June 6, 2022): 5749. http://dx.doi.org/10.3390/app12115749.

Full text of the source
Abstract:
Due to their energy efficiency, spiking neural networks (SNNs) have gradually been considered an alternative to convolutional neural networks (CNNs) in various machine learning tasks. In image recognition tasks, leveraging the superior capability of CNNs, CNN–SNN conversion is considered one of the most successful approaches to training SNNs. However, previous works assume that a rather long inference time period, called the inference latency, is allowed, with a trade-off between inference latency and accuracy. One of the main reasons for this phenomenon is the difficulty of determining a proper firing threshold for spiking neurons. The threshold determination procedure is called a threshold balancing technique in the CNN–SNN conversion approach. This paper proposes a CNN–SNN conversion method with a new threshold balancing technique that obtains converted SNN models with good accuracy even at low latency. The proposed method organizes the SNN models with soft-reset IF spiking neurons. The threshold balancing technique estimates the thresholds for spiking neurons based on the maximum input current in a layerwise and channelwise manner. The experimental results show that our converted SNN models attain even higher accuracy than the corresponding trained CNN model on the MNIST dataset at low latency. In addition, on the Fashion-MNIST and CIFAR-10 datasets, our converted SNNs show less conversion loss than other methods at low latencies. The proposed method can be beneficial for deploying efficient SNN models for recognition tasks on resource-limited systems, because inference latency is strongly associated with energy consumption.
Styles: ABNT, Harvard, Vancouver, APA, etc.
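The two ingredients named in the abstract above, soft-reset IF neurons and a maximum-input-current threshold, can be sketched as follows. This is a minimal illustration of the general idea, not the paper's layerwise/channelwise procedure; function names and the calibration step are assumptions.

```python
import numpy as np

def balance_threshold(input_currents):
    """Estimate a layer's firing threshold as its maximum observed input current."""
    return float(np.max(input_currents))

def if_soft_reset(currents, threshold, timesteps):
    """Soft-reset integrate-and-fire: accumulate, fire, subtract the threshold.

    Subtracting (rather than zeroing) preserves the residual charge, so the
    firing rate over many timesteps approximates current / threshold.
    """
    v = np.zeros_like(currents, dtype=float)
    counts = np.zeros_like(currents, dtype=float)
    for _ in range(timesteps):
        v += currents
        fired = v >= threshold
        counts += fired
        v[fired] -= threshold  # soft reset
    return counts / timesteps

currents = np.array([0.2, 0.5, 1.0])   # stand-in for a calibration batch
th = balance_threshold(currents)
rates = if_soft_reset(currents, th, timesteps=10)
```

Because the threshold equals the largest input current, no neuron saturates, and the firing rates recover the analog activations exactly in this toy case.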
6

Yan, Zhanglu, Jun Zhou, and Weng-Fai Wong. "Near Lossless Transfer Learning for Spiking Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10577–84. http://dx.doi.org/10.1609/aaai.v35i12.17265.

Full text of the source
Abstract:
Spiking neural networks (SNNs) significantly reduce energy consumption by replacing weight multiplications with additions. This makes SNNs suitable for energy-constrained platforms. However, due to their discrete activation, training SNNs remains a challenge. A popular approach is to first train an equivalent CNN using traditional backpropagation and then transfer the weights to the intended SNN. Unfortunately, this often results in significant accuracy loss, especially in deeper networks. In this paper, we propose CQ training (Clamped and Quantized training), an SNN-compatible CNN training algorithm with clamping and quantization that achieves near-zero conversion accuracy loss. Essentially, CNN training in CQ training accounts for certain SNN characteristics. Using a 7-layer VGG-* and a 21-layer VGG-19 on the CIFAR-10 dataset, we achieved 94.16% and 93.44% accuracy in the respective equivalent SNNs, outperforming other existing comparable works that we know of. We also demonstrate low-precision weight compatibility for the VGG-19 structure. Without retraining, accuracies of 93.43% and 92.82% were achieved using quantized 9-bit and 8-bit weights, respectively. The framework was developed in PyTorch and is publicly available.
Styles: ABNT, Harvard, Vancouver, APA, etc.
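The clamp-and-quantize idea behind CQ training can be illustrated with a single activation transform: bound the activations to a fixed range and snap them to a discrete grid during CNN training, so that the transferred SNN, whose firing rates are naturally bounded and discrete, matches the CNN's behavior. The [0, 1] range and the level count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def clamp_quantize(x, levels=8):
    """Clamp activations to [0, 1], then round to the nearest of `levels` steps."""
    x = np.clip(x, 0.0, 1.0)          # clamp: mimics a bounded firing rate
    return np.round(x * levels) / levels  # quantize: mimics discrete spike counts

acts = np.array([-0.3, 0.26, 0.5, 1.7])
cq = clamp_quantize(acts)
```

Applying this transform in the forward pass (with a straight-through gradient in the backward pass) is the usual way such a non-differentiable step is trained through.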
7

Kim, Youngeun, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, and Priyadarshini Panda. "Exploring Temporal Information Dynamics in Spiking Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8308–16. http://dx.doi.org/10.1609/aaai.v37i7.26002.

Full text of the source
Abstract:
Most existing Spiking Neural Network (SNN) works state that SNNs may utilize the temporal information dynamics of spikes. However, an explicit analysis of these temporal information dynamics is still missing. In this paper, we ask several important questions to provide a fundamental understanding of SNNs: What are the temporal information dynamics inside SNNs? How can we measure them? How do they affect overall learning performance? To answer these questions, we empirically estimate the Fisher Information of the weights to measure the distribution of temporal information during training. Surprisingly, as training progresses, the Fisher information starts to concentrate in the early timesteps. After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration. By conducting extensive experiments on various configurations, such as architecture, dataset, optimization strategy, time constant, and timesteps, we observe that this phenomenon is a common learning feature of SNNs. Furthermore, to reveal how temporal information concentration affects the performance of SNNs, we design a loss function to change the trend of temporal information. We find that temporal information concentration is crucial to building a robust SNN but has little effect on classification accuracy. Finally, we propose an efficient iterative pruning method based on our observation of temporal information concentration. Code is available at https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Temporal-Information-Dynamics-in-Spiking-Neural-Networks.
Styles: ABNT, Harvard, Vancouver, APA, etc.
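The per-timestep measurement described above can be sketched with the common empirical Fisher-information proxy: the mean squared gradient of the loss with respect to the weights, computed separately for each timestep. The random gradients below are stand-ins for backpropagated SNN gradients; the function name and the concentration pattern are illustrative assumptions.

```python
import numpy as np

def empirical_fisher(grads_per_timestep):
    """Mean squared gradient per timestep, a common empirical Fisher proxy."""
    return [float(np.mean(g ** 2)) for g in grads_per_timestep]

rng = np.random.default_rng(0)
# Simulate larger gradients at early timesteps, i.e. the "temporal
# information concentration" pattern the paper reports after training.
grads = [rng.normal(0.0, scale, size=100) for scale in (1.0, 0.5, 0.1)]
fi = empirical_fisher(grads)
```

A monotonically decreasing `fi` across timesteps is exactly the signature the paper calls temporal information concentration.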
8

Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction". International Journal of Robotics and Control Systems 1, no. 4 (November 25, 2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.

Full text of the source
Abstract:
Artificial neural networks (ANNs) can approximate signals and give interesting results in pattern recognition, and some works use neural networks for control applications. However, biological neurons do not generate signals similar to those obtained by ANNs. Spiking neurons are an interesting topic since they simulate the real behavior of biological neurons. This paper employs a spiking neuron to compute a PID control, which is then applied to the Van de Vusse reaction. This reaction, like the inverted pendulum, is a benchmark for systems with an inverse response, which causes the output to undershoot. One problem is how to encode information so that the neuron can interpret it, and how to decode the spikes the neuron generates in order to interpret its behavior. In this work, a spiking neuron is used to compute a PID control by coding the neuron's spike times. The PID gains serve as the neuron's synaptic weights, and the spike observed at the axon is the coded control signal. The neuron adaptation tries to obtain the weights necessary to generate the spike instant needed to control the chemical reaction. The simulation results show the possibility of using this kind of neuron for control tasks and of using a spiking neural network to overcome the undershoot caused by the inverse response of the chemical reaction.
Styles: ABNT, Harvard, Vancouver, APA, etc.
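For reference, the control law that the spiking neuron above encodes through spike timing is the standard discrete PID. A minimal direct computation is sketched below; the gains and sampling period are illustrative, not tuned for the Van de Vusse reaction, and the factory-function style is a choice of this sketch.

```python
def make_pid(kp, ki, kd, dt):
    """Return a discrete PID controller step function with internal state."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt                    # integral term
        deriv = (err - state["prev_err"]) / dt           # derivative term
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
u1 = pid(1.0, 0.0)   # first control action for a unit setpoint
```

In the paper's scheme, the three gains become synaptic weights and the output `u` is read off from the timing of the neuron's spike rather than computed arithmetically as here.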
9

Wu, Yujie, Lei Deng, Guoqi Li, Jun Zhu, Yuan Xie, and Luping Shi. "Direct Training for Spiking Neural Networks: Faster, Larger, Better". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1311–18. http://dx.doi.org/10.1609/aaai.v33i01.33011311.

Full text of the source
Abstract:
Spiking neural networks (SNNs), which enable energy-efficient implementation on emerging neuromorphic hardware, are gaining more attention. So far, however, SNNs have not shown performance competitive with artificial neural networks (ANNs), due to the lack of effective learning algorithms and efficient programming frameworks. We address this issue from two aspects: (1) we propose a neuron normalization technique to adjust neural selectivity and develop a direct learning algorithm for deep SNNs; (2) by narrowing the rate-coding window and converting the leaky integrate-and-fire (LIF) model into an explicitly iterative version, we present a PyTorch-based implementation method for training large-scale SNNs. In this way, we are able to train deep SNNs with a speedup of tens of times. As a result, we achieve significantly better accuracy than reported works on neuromorphic datasets (N-MNIST and DVS-CIFAR10), and accuracy comparable to existing ANNs and pre-trained SNNs on non-spiking datasets (CIFAR10). To the best of our knowledge, this is the first work to demonstrate direct training of deep SNNs with high performance on CIFAR10, and the efficient implementation provides a new way to explore the potential of SNNs.
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

Lourenço, J., Q. R. Al-Taai, A. Al-Khalidi, E. Wasige, and J. Figueiredo. "Resonant Tunnelling Diode – Photodetectors for spiking neural networks". Journal of Physics: Conference Series 2407, no. 1 (December 1, 2022): 012047. http://dx.doi.org/10.1088/1742-6596/2407/1/012047.

Full text of the source
Abstract:
Spike-based neuromorphic devices promise to alleviate the energy hunger of artificial-intelligence hardware by using spiking neural networks (SNNs), which employ neuron-like units that process information through the timing of spikes. These neuron-like devices consume energy only when active. Recent works have shown that resonant tunnelling diodes (RTDs) incorporating optoelectronic functionalities such as photodetection and light emission can play a major role in photonic SNNs. RTDs are devices that display an N-shaped current-voltage characteristic capable of providing negative differential conductance (NDC) over a range of operating voltages. Specifically, RTD photodetectors (RTD-PDs) show promise due to their unique combination of structural simplicity and highly complex nonlinear behavior. The goal of this work is to present a systematic study of how the thickness of the RTD-PD light absorption layers (100, 250, 500 nm) and the device size impact the performance of InGaAs RTD-PDs, namely their responsivity and time response when operating in the third (1550 nm) optical transmission window. Our focus is on the overall characterization of the device's optoelectronic response, including the impact of light absorption on the device's static current-voltage characteristic, the responsivity, and the photodetection time response. For the static characterization, the devices' I-V curves were measured under dark conditions and under illumination, giving insight into the light-induced I-V tunability effect. The RTD-PD responsivity was compared to the response of a commercial photodetector. The characterization of the temporal response included the devices' capacity to generate optically induced neuron-like electrical spikes, that is, to work as opto-to-electrical spike converters. The experimental data obtained at each characterization phase are being used to evaluate and refine a behavioral model of RTD-PD devices that is under development.
Styles: ABNT, Harvard, Vancouver, APA, etc.
11

Fu, Si-Yao, Guo-Sheng Yang, and Xin-Kai Kuai. "A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition". Computational Intelligence and Neuroscience 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/946589.

Full text of the source
Abstract:
In this paper, we present a quantitative, highly structured, cortex-like model: a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering work on the detailed functional decomposition of the feedforward pathway of the ventral stream and from developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortex-like feedforward hierarchy can handle complicated pattern recognition problems, suggesting that combining cognitive models with modern neurocomputational approaches has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual-cortex-like mechanisms.
Styles: ABNT, Harvard, Vancouver, APA, etc.
12

Xiao, Chao, Jihua Chen, and Lei Wang. "Optimal Mapping of Spiking Neural Network to Neuromorphic Hardware for Edge-AI". Sensors 22, no. 19 (September 24, 2022): 7248. http://dx.doi.org/10.3390/s22197248.

Full text of the source
Abstract:
Neuromorphic hardware, the new generation of non-von Neumann computing systems, implements spiking neurons and synapses for spiking neural network (SNN)-based applications. Its energy-efficient operation makes neuromorphic hardware suitable for power-constrained environments where sensors and edge nodes of the Internet of Things (IoT) operate. Mapping SNNs onto neuromorphic hardware is challenging because a non-optimized mapping may result in high network-on-chip (NoC) latency and energy consumption. In this paper, we propose NeuMap, a simple and fast toolchain for mapping SNNs onto multicore neuromorphic hardware. NeuMap first obtains the communication patterns of an SNN by calculation, which simplifies the mapping process. Then, NeuMap exploits localized connections, divides adjacent layers into sub-networks, and partitions each sub-network into multiple clusters while meeting the hardware resource constraints. Finally, we employ a meta-heuristic algorithm to search for the best cluster-to-core mapping scheme in the reduced search space. We conduct experiments using six realistic SNN-based applications to evaluate NeuMap against two prior works (SpiNeMap and SNEAP). The experimental results show that, compared to SpiNeMap and SNEAP, NeuMap reduces average energy consumption by 84% and 17% and has 55% and 12% lower spike latency, respectively.
Styles: ABNT, Harvard, Vancouver, APA, etc.
13

‘Atyka Nor Rashid, Fadilla, and Nor Surayahani Suriani. "Spiking neural network classification for spike train analysis of physiotherapy movements". Bulletin of Electrical Engineering and Informatics 9, no. 1 (February 1, 2020): 319–25. http://dx.doi.org/10.11591/eei.v9i1.1868.

Full text of the source
Abstract:
Classifying gestures or movements has become increasingly demanding as sensor technologies have advanced, attracting many researchers to investigate the problem actively within the area of computer vision. Rehabilitation exercises are among the most widely studied movements. Rehabilitation sessions usually involve experts who monitor the patients, but a shortage of such experts makes sessions longer and unproductive. This work adopted a dataset from UI-PRMD assembled from 10 rehabilitation movements. The data were encoded into spike trains for spike-pattern analysis. We then trained Spiking Neural Networks on the spike trains, obtaining promising results. In future work, this method will be tested on other data to validate its performance and further improve its accuracy.
Styles: ABNT, Harvard, Vancouver, APA, etc.
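The abstract above does not specify its spike-train encoder, so as an illustration of the general step, one common choice is Poisson rate coding: each numeric feature in [0, 1] becomes a binary spike train whose mean firing rate approximates the value. The function name and the encoder choice are assumptions of this sketch.

```python
import numpy as np

def poisson_encode(values, timesteps, rng):
    """Encode values in [0, 1] as Bernoulli spike trains (Poisson rate coding).

    Returns an array of shape (timesteps, n_features) of 0/1 spikes whose
    column-wise mean approximates the input values.
    """
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return (rng.random((timesteps, values.size)) < values).astype(int)

rng = np.random.default_rng(42)
trains = poisson_encode([0.1, 0.9], timesteps=1000, rng=rng)
rates = trains.mean(axis=0)   # recovered firing rates, close to [0.1, 0.9]
```

Longer windows reduce the variance of the recovered rate, which is the usual accuracy/latency trade-off of rate coding.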
14

Kheradpisheh, Saeed Reza, and Timothée Masquelier. "Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron". International Journal of Neural Systems 30, no. 06 (May 28, 2020): 2050027. http://dx.doi.org/10.1142/s0129065720500276.

Full text of the source
Abstract:
We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation yet based on latencies. We show how approximated error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance for supervised, fully connected multilayer SNNs: test accuracy of 97.4% on the MNIST dataset and 99.2% on the Caltech Face/Motorbike dataset. Yet the neuron model we use, non-leaky integrate-and-fire, is much simpler than those used in all previous works. The source code of the proposed S4NN is publicly available at https://github.com/SRKH/S4NN.
Styles: ABNT, Harvard, Vancouver, APA, etc.
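The latency-based coding above, where every neuron fires exactly once and stronger inputs fire earlier, can be sketched with the common linear time-to-first-spike mapping. The linear form and the `t_max` horizon are conventional assumptions of this sketch, not necessarily S4NN's exact encoder.

```python
import numpy as np

def ttfs_encode(values, t_max):
    """Map intensities in [0, 1] to first-spike times: 1 -> 0, 0 -> t_max.

    The strongest input fires immediately; the weakest fires last, so the
    firing ORDER (earliest first) carries the information.
    """
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return (1.0 - values) * t_max

times = ttfs_encode([0.0, 0.5, 1.0], t_max=100)
```

At the readout, classification reduces to an argmin over spike times: the first output neuron to fire wins.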
15

Al-Hamid, Ali A., and HyungWon Kim. "Optimization of Spiking Neural Networks Based on Binary Streamed Rate Coding". Electronics 9, no. 10 (September 29, 2020): 1599. http://dx.doi.org/10.3390/electronics9101599.

Full text of the source
Abstract:
Spiking neural networks (SNNs) increasingly attract attention for their similarity to the biological neural system. Hardware implementation of spiking neural networks, however, remains a great challenge due to their excessive complexity and circuit size. This work introduces a novel optimization method for a hardware-friendly SNN architecture based on a modified rate coding scheme called Binary Streamed Rate Coding (BSRC). BSRC combines the features of both rate and temporal coding. In addition, by employing a built-in randomizer, the BSRC SNN model provides higher accuracy and faster training. We also present SNN optimization methods, including structure optimization and weight quantization. Extensive evaluations with MNIST SNNs demonstrate that the structure optimization of SNN (81-30-20-10) provides a 183.19-fold reduction in hardware compared with SNN (784-800-10), while providing an accuracy of 95.25%, a small loss compared with the 98.89% and 98.93% reported in previous works. Our weight quantization reduces 32-bit weights to 4-bit integers, leading to a further fourfold hardware reduction with only 0.56% accuracy loss. Overall, the SNN model (81-30-20-10) optimized by our method shrinks the SNN's circuit area from 3089.49 mm² for SNN (784-800-10) to 4.04 mm², a reduction of 765 times.
Styles: ABNT, Harvard, Vancouver, APA, etc.
16

Qin, Xing, Chaojie Li, Haitao He, Zejun Pan, and Chenxiao Lai. "Python-Based Circuit Design for Fundamental Building Blocks of Spiking Neural Network". Electronics 12, no. 11 (May 23, 2023): 2351. http://dx.doi.org/10.3390/electronics12112351.

Full text of the source
Abstract:
Spiking neural networks (SNNs) are considered a crucial research direction for addressing the "storage wall" and "power wall" challenges faced by traditional artificial intelligence computing. However, developing SNN chips based on CMOS (complementary metal oxide semiconductor) circuits remains a challenge. Although memristor process technology is the best alternative for synapses, it is still undergoing refinement. In this study, a novel approach is proposed that employs tools to automatically generate HDL (hardware description language) code for constructing neuron and memristor circuits, after using Python to describe the neuron and memristor models. Based on this approach, HR (Hindmarsh–Rose), LIF (leaky integrate-and-fire), and IZ (Izhikevich) neuron circuits, as well as HP, EG (enhanced generalized), and TB (behavioral threshold bipolar) memristor circuits, are designed to construct the most basic connection of an SNN: the neuron-memristor-neuron circuit that satisfies the STDP (spike-timing-dependent plasticity) learning rule. Through simulation experiments and FPGA (field programmable gate array) prototype verification, it is confirmed that the IZ and LIF circuits are suitable as neurons in SNNs, while the X variables of the EG memristor model serve as characteristic synaptic weights. The EG memristor circuits best satisfy the STDP learning rule and are suitable as synapses in SNNs. In comparison to previous works on hardware spiking neurons, the proposed method requires fewer area resources for creating spiking neuron models on an FPGA. The proposed SNN basic-component design method, and the resulting circuits, are beneficial for architectural exploration and hardware-software co-design of SNN chips.
Styles: ABNT, Harvard, Vancouver, APA, etc.
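The STDP learning rule that the neuron-memristor-neuron circuit above must satisfy can be stated compactly in its standard pair-based form: if the presynaptic spike precedes the postsynaptic spike the weight potentiates, otherwise it depresses, with exponential dependence on the timing difference. The amplitudes and time constant below are conventional illustrative values, not the paper's circuit parameters.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight update from one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: causal pairing, potentiation
        return a_plus * np.exp(-dt / tau)
    else:        # post before (or with) pre: anti-causal pairing, depression
        return -a_minus * np.exp(dt / tau)

dw_pot = stdp_dw(t_pre=10.0, t_post=15.0)   # positive update
dw_dep = stdp_dw(t_pre=15.0, t_post=10.0)   # negative update
```

Verifying that a memristor circuit reproduces this asymmetric, exponentially decaying curve is precisely the test the paper applies to its EG synapse candidate.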
17

Korsakov, Anton, Lyubov Astapova e Aleksandr Bakhshiev. "Application of a Compartmental Spiking Neuron Model with Structural Adaptation for Solving Classification Problems". Informatics and Automation 21, n.º 3 (13 de maio de 2022): 493–520. http://dx.doi.org/10.15622/ia.21.3.2.

Texto completo da fonte
Resumo:
The problem of classification using a compartmental spiking neuron model is considered. The state of the art of spiking neural networks analysis is carried out. It is concluded that there are very few works on the study of compartmental neuron models. The choice of a compartmental spiking model is justified as a neuron model for this work. A brief description of such a model is given, and its main features are noted in terms of the possibility of its structural reconfiguration. The method of structural adaptation of the model to the input spike pattern is described. The general scheme of the compartmental spiking neurons’ organization into a network for solving the classification problem is given. The time-to-first-spike method is chosen for encoding numerical information into spike patterns, and a formula is given for calculating the delays of individual signals in the spike pattern when encoding information. Brief results of experiments on solving the classification problem on publicly available data sets (Iris, MNIST) are presented. The conclusion is made about the comparability of the obtained results with the existing classical methods. In addition, a detailed step-by-step description of experiments to determine the state of an autonomous uninhabited underwater vehicle is provided. Estimates of computational costs for solving the classification problem using a compartmental spiking neuron model are given. The conclusion is made about the prospects of using spiking compartmental models of a neuron to increase the bio-plausibility of the implementation of behavioral functions in neuromorphic control systems. Further promising directions for the development of neuromorphic systems based on the compartmental spiking neuron model are considered.
18

Qiu, Xuerui, Rui-Jie Zhu, Yuhong Chou, Zhaorui Wang, Liang-Jian Deng, and Guoqi Li. "Gated Attention Coding for Training High-Performance and Efficient Spiking Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 601–10. http://dx.doi.org/10.1609/aaai.v38i1.27816.

Full text of the source
Abstract:
Spiking neural networks (SNNs) are emerging as an energy-efficient alternative to traditional artificial neural networks (ANNs) due to their unique spike-based event-driven nature. Coding is crucial in SNNs as it converts external input stimuli into spatio-temporal feature sequences. However, most existing deep SNNs rely on direct coding that generates powerless spike representation and lacks the temporal dynamics inherent in human vision. Hence, we introduce Gated Attention Coding (GAC), a plug-and-play module that leverages the multi-dimensional gated attention unit to efficiently encode inputs into powerful representations before feeding them into the SNN architecture. GAC functions as a preprocessing layer that does not disrupt the spike-driven nature of the SNN, making it amenable to efficient neuromorphic hardware implementation with minimal modifications. Through an observer model theoretical analysis, we demonstrate GAC's attention mechanism improves temporal dynamics and coding efficiency. Experiments on CIFAR10/100 and ImageNet datasets demonstrate that GAC achieves state-of-the-art accuracy with remarkable efficiency. Notably, we improve top-1 accuracy by 3.10% on CIFAR100 with only 6-time steps and 1.07% on ImageNet while reducing energy usage to 66.9% of the previous works. To our best knowledge, it is the first time to explore the attention-based dynamic coding scheme in deep SNNs, with exceptional effectiveness and efficiency on large-scale datasets. Code is available at https://github.com/bollossom/GAC.
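The abstract does not specify the gated attention unit itself; the sketch below only illustrates the generic idea of gated coding (a sigmoid gate modulating input features before spike conversion). All names, shapes, and the crude mean-thresholding are assumptions, not the actual GAC module:

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_encode(x, w_gate):
    """Hypothetical gated coding sketch: a sigmoid gate attenuates or
    passes each input feature, and the gated analog values are then
    binarized into spikes. Illustration only, not GAC itself."""
    gate = 1.0 / (1.0 + np.exp(-x @ w_gate))         # gate values in (0, 1)
    driven = x * gate                                 # gated representation
    return (driven > driven.mean()).astype(np.int8)   # crude spike conversion

x = rng.normal(size=(4, 8))      # toy input feature map
w = rng.normal(size=(8, 8))      # toy gate weights
spikes = gated_encode(x, w)
```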
19

Sakthivadivel, Dalton A. R. "Formalizing the Use of the Activation Function in Neural Inference". Complex Systems 31, no. 4 (December 15, 2022): 433–49. http://dx.doi.org/10.25088/complexsystems.31.4.433.

Full text of the source
Abstract:
We investigate how the activation function can be used to describe neural firing in an abstract way, and in turn, why it works well in artificial neural networks. We discuss how a spike in a biological neuron belongs to a particular universality class of phase transitions in statistical physics. We then show that the artificial neuron is, mathematically, a mean-field model of biological neural membrane dynamics, which arises from modeling spiking as a phase transition. This allows us to treat selective neural firing in an abstract way and formalize the role of the activation function in perceptron learning. The resultant statistical physical model allows us to recover the expressions for some known activation functions as various special cases. Along with deriving this model and specifying the analogous neural case, we analyze the phase transition to understand the physics of neural network learning. Together, it is shown that there is not only a biological meaning but a physical justification for the emergence and performance of typical activation functions; implications for neural learning and inference are also discussed.
20

Liu, Jing, Xu Yang, Yimeng Zhu, Yunlin Lei, Jian Cai, Miao Wang, Ziyi Huan, and Xialv Lin. "How Neuronal Noises Influence the Spiking Neural Networks’s Cognitive Learning Process: A Preliminary Study". Brain Sciences 11, no. 2 (January 25, 2021): 153. http://dx.doi.org/10.3390/brainsci11020153.

Full text of the source
Abstract:
In neuroscience, the Default Mode Network (DMN), also known as the default network or the default-state network, is a large-scale brain network known to have highly correlated activities that are distinct from other networks in the brain. Many studies have revealed that DMNs can influence other cognitive functions to some extent. This paper is motivated by this idea and intends to further explore on how DMNs could help Spiking Neural Networks (SNNs) on image classification problems through an experimental study. The approach emphasizes the bionic meaning on model selection and parameters settings. For modeling, we select Leaky Integrate-and-Fire (LIF) as the neuron model, Additive White Gaussian Noise (AWGN) as the input DMN, and design the learning algorithm based on Spike-Timing-Dependent Plasticity (STDP). Then, we experiment on a two-layer SNN to evaluate the influence of DMN on classification accuracy, and on a three-layer SNN to examine the influence of DMN on structure evolution, where the results both appear positive. Finally, we discuss possible directions for future works.
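The modeling choices named above (LIF neurons with additive white Gaussian noise standing in for the DMN input) can be sketched as follows; the parameter values and the constant drive are illustrative choices, not taken from the paper:

```python
import numpy as np

def lif_with_noise(i_input, noise_std=0.3, tau=20.0, v_th=1.0, dt=1.0, seed=0):
    """Leaky integrate-and-fire neuron driven by an input current plus
    additive white Gaussian noise (the DMN stand-in). Illustrative
    parameter values, not the paper's."""
    rng = np.random.default_rng(seed)
    v, spike_times = 0.0, []
    for t in range(len(i_input)):
        noise = rng.normal(0.0, noise_std)
        v += dt / tau * (-v + i_input[t] + noise)  # leaky integration
        if v >= v_th:                              # threshold crossing
            spike_times.append(t)
            v = 0.0                                # reset to rest
    return spike_times

spike_times = lif_with_noise(np.full(200, 1.2))  # suprathreshold drive
```

With subthreshold drive the noise merely jitters the membrane potential; with suprathreshold drive it jitters the spike times, which is the channel through which an AWGN "default network" can shape STDP-based learning.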
21

Kanazawa, Yusuke, Tetsuya Asai, and Yoshihito Amemiya. "Basic Circuit Design of a Neural Processor: Analog CMOS Implementation of Spiking Neurons and Dynamic Synapses". Journal of Robotics and Mechatronics 15, no. 2 (April 20, 2003): 208–18. http://dx.doi.org/10.20965/jrm.2003.p0208.

Full text of the source
Abstract:
We discuss the integration architecture of spiking neurons, predicted to be next-generation basic circuits of neural processor and dynamic synapse circuits. A key to development of a brain-like processor is to learn from the brain. Learning from the brain, we try to develop circuits implementing neuron and synapse functions while enabling large-scale integration, so large-scale integrated circuits (LSIs) realize functional behavior of neural networks. With such VLSI, we try to construct a large-scale neural network on a single semiconductor chip. With circuit integration now reaching micron levels, however, problems have arisen in dispersion of device performance in analog IC and in the influence of electromagnetic noise. A genuine brain computer should solve such problems on the network level rather than the element level. To achieve such a target, we must develop an architecture that learns brain functions sufficiently and works correctly even in a noisy environment. As the first step, we propose an analog circuit architecture of spiking neurons and dynamic synapses representing the model of artificial neurons and synapses in a form closer to that of the brain. With the proposed circuit, the model of neurons and synapses can be integrated on a silicon chip with metal-oxide-semiconductor (MOS) devices. In the sections that follow, we discuss the dynamic performance of the proposed circuit by using a circuit simulator, HSPICE. As examples of networks using these circuits, we introduce a competitive neural network and an active pattern recognition network by extracting firing frequency information from input information. We also show simulation results of the operation of networks constructed with the proposed circuits.
22

Handy, Gregory, and Alla Borisyuk. "Investigating the ability of astrocytes to drive neural network synchrony". PLOS Computational Biology 19, no. 8 (August 9, 2023): e1011290. http://dx.doi.org/10.1371/journal.pcbi.1011290.

Full text of the source
Abstract:
Recent experimental works have implicated astrocytes as a significant cell type underlying several neuronal processes in the mammalian brain, from encoding sensory information to neurological disorders. Despite this progress, it is still unclear how astrocytes are communicating with and driving their neuronal neighbors. While previous computational modeling works have helped propose mechanisms responsible for driving these interactions, they have primarily focused on interactions at the synaptic level, with microscale models of calcium dynamics and neurotransmitter diffusion. Since it is computationally infeasible to include the intricate microscale details in a network-scale model, little computational work has been done to understand how astrocytes may be influencing spiking patterns and synchronization of large networks. We overcome this issue by first developing an “effective” astrocyte that can be easily implemented to already established network frameworks. We do this by showing that the astrocyte proximity to a synapse makes synaptic transmission faster, weaker, and less reliable. Thus, our “effective” astrocytes can be incorporated by considering heterogeneous synaptic time constants, which are parametrized only by the degree of astrocytic proximity at that synapse. We then apply our framework to large networks of exponential integrate-and-fire neurons with various spatial structures. Depending on key parameters, such as the number of synapses ensheathed and the strength of this ensheathment, we show that astrocytes can push the network to a synchronous state and exhibit spatially correlated patterns.
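The direction of the three effects reported above (transmission becomes faster, weaker, and less reliable with stronger astrocytic ensheathment) can be captured in a minimal parametrization; the linear forms and the constants below are assumptions for illustration, not the paper's fitted relations:

```python
def effective_synapse(ensheathment, tau_base=5.0, w_base=1.0, p_base=0.9):
    """Map an astrocytic ensheathment degree in [0, 1] to effective
    synapse parameters. Only the direction of each effect follows the
    abstract; the linear forms and constants are illustrative.
    """
    tau = tau_base * (1.0 - 0.5 * ensheathment)       # faster decay
    w = w_base * (1.0 - 0.4 * ensheathment)           # weaker transmission
    p_release = p_base * (1.0 - 0.3 * ensheathment)   # less reliable
    return tau, w, p_release

bare = effective_synapse(0.0)     # no astrocyte nearby
wrapped = effective_synapse(1.0)  # fully ensheathed synapse
```

Because the whole effect is folded into per-synapse parameters, such "effective" astrocytes drop directly into standard integrate-and-fire network code with no microscale calcium model.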
23

Saemaldahr, Raghdah, and Mohammad Ilyas. "Patient-Specific Preictal Pattern-Aware Epileptic Seizure Prediction with Federated Learning". Sensors 23, no. 14 (July 21, 2023): 6578. http://dx.doi.org/10.3390/s23146578.

Full text of the source
Abstract:
Electroencephalography (EEG) signals are the primary source for discriminating the preictal from the interictal stage, enabling early warnings before the seizure onset. Epileptic seizure prediction systems face significant challenges due to data scarcity, diversity, and privacy. This paper proposes a three-tier architecture for epileptic seizure prediction associated with the Federated Learning (FL) model, which is able to achieve enhanced capability by utilizing a significant number of seizure patterns from globally distributed patients while maintaining data privacy. The determination of the preictal state is influenced by global and local model-assisted decision making by modeling the two-level edge layer. The Spiking Encoder (SE), integrated with the Graph Convolutional Neural Network (Spiking-GCNN), works as the local model trained using a bi-timescale approach. Each local model utilizes the aggregated seizure knowledge obtained from the different medical centers through FL and determines the preictal probability in the coarse-grained personalization. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is utilized in fine-grained personalization to recognize epileptic seizure patients by examining the outcomes of the FL model, heart rate variability features, and patient-specific clinical features. Thus, the proposed approach achieved 96.33% sensitivity and 96.14% specificity when tested on the CHB-MIT EEG dataset when modeling was performed using the bi-timescale approach and Spiking-GCNN-based epileptic pattern learning. Moreover, the adoption of federated learning greatly assists the proposed system, yielding a 96.28% higher accuracy as a result of addressing data scarcity.
24

O’Donnell, Cian, J. Tiago Gonçalves, Nick Whiteley, Carlos Portera-Cailliau, and Terrence J. Sejnowski. "The Population Tracking Model: A Simple, Scalable Statistical Model for Neural Population Data". Neural Computation 29, no. 1 (January 2017): 50–93. http://dx.doi.org/10.1162/neco_a_00910.

Full text of the source
Abstract:
Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded ([Formula: see text]). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, requiring drastically smaller data sets and minimal computation time. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, and then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca[Formula: see text] and voltage imaging tools.
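The two quantities the model matches (the distribution of the population rate and each neuron's firing probability given that rate) can be estimated from binary spike data as follows; this sketch computes only the sufficient statistics, without the priors and regularization of the full model:

```python
import numpy as np

def fit_population_tracking(spikes):
    """Estimate the population tracking model's two ingredients from a
    binary (time x neurons) spike matrix: P(K = k), the distribution of
    the synchronous population count, and P(x_i = 1 | K = k) for each
    neuron i. Sufficient statistics only; no priors or smoothing."""
    T, N = spikes.shape
    k = spikes.sum(axis=1).astype(int)            # population count per bin
    p_k = np.bincount(k, minlength=N + 1) / T     # P(K = k)
    p_i_given_k = np.zeros((N + 1, N))
    for kk in range(N + 1):
        mask = k == kk
        if mask.any():
            p_i_given_k[kk] = spikes[mask].mean(axis=0)  # P(x_i = 1 | K = kk)
    return p_k, p_i_given_k

rng = np.random.default_rng(1)
data = (rng.random((500, 10)) < 0.2).astype(int)  # toy spike raster
p_k, p_cond = fit_population_tracking(data)
```

This is why the parameter count scales only with the square of the number of neurons: an (N+1) x N table of conditional probabilities plus one rate distribution, rather than the exponentially many joint patterns.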
25

Wang, Jiang, Ruixue Han, Xilei Wei, Yingmei Qin, Haitao Yu, and Bin Deng. "Weak signal detection and propagation in diluted feed-forward neural network with recurrent excitation and inhibition". International Journal of Modern Physics B 30, no. 02 (January 20, 2016): 1550253. http://dx.doi.org/10.1142/s0217979215502537.

Full text of the source
Abstract:
Reliable signal propagation across distributed brain areas provides the basis for neural circuit function. Modeling studies on cortical circuits have shown that multilayered feed-forward networks (FFNs), if strongly and/or densely connected, can enable robust signal propagation. However, cortical networks are typically neither densely connected nor have strong synapses. This paper investigates under which conditions spiking activity can be propagated reliably across diluted FFNs. Extending previous works, we model each layer as a recurrent sub-network constituting both excitatory (E) and inhibitory (I) neurons and consider the effect of interactions between local excitation and inhibition on signal propagation. It is shown that elevation of cellular excitation–inhibition (EI) balance in the local sub-networks (layers) softens the requirement for dense/strong anatomical connections and thereby promotes weak signal propagation in weakly connected networks. By means of iterated maps, we show how elevated local excitability state compensates for the decreased gain of synchrony transfer function that is due to sparse long-range connectivity. Finally, we report that modulations of EI balance and background activity provide a mechanism for selectively gating and routing neural signal. Our results highlight the essential role of intrinsic network states in neural computation.
26

Li, Duowei, Jianping Wu, and Depin Peng. "Online Traffic Accident Spatial-Temporal Post-Impact Prediction Model on Highways Based on Spiking Neural Networks". Journal of Advanced Transportation 2021 (December 2, 2021): 1–20. http://dx.doi.org/10.1155/2021/9290921.

Full text of the source
Abstract:
Traffic accident management as an approach to improve public security and reduce economic losses has received public attention for a long time, among which traffic accidents post-impact prediction (TAPIP) is one of the most important procedures. However, existing systems and methodologies for TAPIP are insufficient for addressing the problem. The drawbacks include ignoring the recovery process after clearance and failing to make comprehensive prediction in both time and space domain. To this end, we build a 3-stage TAPIP model on highways, using the technology of spiking neural networks (SNNs) and convolutional neural networks (CNNs). By dividing the accident lifetime into two phases, i.e., clean-up phase and recovery phase, the model extracts characteristics in each phase and achieves prediction of spatial-temporal post-impact variables (e.g., clean-up time, recovery time, and accumulative queue length). The framework takes advantage of SNNs to efficiently capture accident spatial-temporal features and CNNs to precisely represent the traffic environment. Integrated with an adaptation and updating mechanism, the whole system works autonomously in an online manner that continues to self-improve during usage. By testing with a new dataset CASTA pertaining to California statewide traffic accidents on highways collected in four years, we prove that the proposed model achieves higher prediction accuracy than other methods (e.g., KNN, shockwave theory, and ANNs). This work is the introduction of SNNs in the traffic accident prediction domain and also a complete description of post-impact in the whole accident lifetime.
27

Harel, Yuval, and Ron Meir. "Optimal Multivariate Tuning with Neuron-Level and Population-Level Energy Constraints". Neural Computation 32, no. 4 (April 2020): 794–828. http://dx.doi.org/10.1162/neco_a_01267.

Full text of the source
Abstract:
Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting as neural tuning which minimizes mean decoding error. Many works optimize Fisher information, which approximates the minimum mean square error (MMSE) of the optimal decoder for long encoding time but may be misleading for short encoding times. We study MMSE-optimal neural encoding of a multivariate stimulus by uniform populations of spiking neurons, under firing rate constraints for each neuron as well as for the entire population. We show that the population-level constraint is essential for the formulation of a well-posed problem having finite optimal tuning widths and optimal tuning aligns with the principal components of the prior distribution. Numerical evaluation of the two-dimensional case shows that encoding only the dimension with higher variance is optimal for short encoding times. We also compare direct MMSE optimization to optimization of several proxies to MMSE: Fisher information, maximum likelihood estimation error, and the Bayesian Cramér-Rao bound. We find that optimization of these measures yields qualitatively misleading results regarding MMSE-optimal tuning and its dependence on encoding time and energy constraints.
28

Kleijnen, Robert, Markus Robens, Michael Schiek, and Stefan van Waasen. "A Network Simulator for the Estimation of Bandwidth Load and Latency Created by Heterogeneous Spiking Neural Networks on Neuromorphic Computing Communication Networks". Journal of Low Power Electronics and Applications 12, no. 2 (April 21, 2022): 23. http://dx.doi.org/10.3390/jlpea12020023.

Full text of the source
Abstract:
Accelerated simulations of biological neural networks are in demand to discover the principals of biological learning. Novel many-core simulation platforms, e.g., SpiNNaker, BrainScaleS and Neurogrid, allow one to study neuron behavior in the brain at an accelerated rate, with a high level of detail. However, they do not come anywhere near simulating the human brain. The massive amount of spike communication has turned out to be a bottleneck. We specifically developed a network simulator to analyze in high detail the network loads and latencies caused by different network topologies and communication protocols in neuromorphic computing communication networks. This simulator allows simulating the impacts of heterogeneous neural networks and evaluating neuron mapping algorithms, which is a unique feature among state-of-the-art network models and simulators. The simulator was cross-checked by comparing the results of a homogeneous neural network-based run with corresponding bandwidth load results from comparable works. Additionally, the increased level of detail achieved by the new simulator is presented. Then, we show the impact heterogeneous connectivity can have on the network load, first for a small-scale test case, and later for a large-scale test case, and how different neuron mapping algorithms can influence this effect. Finally, we look at the latency estimations performed by the simulator for different mapping algorithms, and the impact of the node size.
29

Wang, Yihao, Danqing Wu, Yu Wang, Xianwu Hu, Zizhao Ma, Jiayun Feng, and Yufeng Xie. "A Low-Cost Hardware-Friendly Spiking Neural Network Based on Binary MRAM Synapses, Accelerated Using In-Memory Computing". Electronics 10, no. 19 (October 8, 2021): 2441. http://dx.doi.org/10.3390/electronics10192441.

Full text of the source
Abstract:
In recent years, the scaling down that Moore’s Law relies on has been gradually slowing down, and the traditional von Neumann architecture has been limiting the improvement of computing power. Thus, neuromorphic in-memory computing hardware has been proposed and is becoming a promising alternative. However, there is still a long way to make it possible, and one of the problems is to provide an efficient, reliable, and achievable neural network for hardware implementation. In this paper, we proposed a two-layer fully connected spiking neural network based on binary MRAM (Magneto-resistive Random Access Memory) synapses with low hardware cost. First, the network used an array of multiple binary MRAM cells to store multi-bit fixed-point weight values. This helps to simplify the read/write circuit. Second, we used different kinds of spike encoders that ensure the sparsity of input spikes, to reduce the complexity of peripheral circuits, such as sense amplifiers. Third, we designed a single-step learning rule, which fit well with the fixed-point binary weights. Fourth, we replaced the traditional exponential Leaky Integrate-and-Fire (LIF) neuron model to avoid the massive cost of exponential circuits. The simulation results showed that, compared to other similar works, our SNN with 1184 neurons and 313,600 synapses achieved an accuracy of up to 90.6% in the MNIST recognition task with full-resolution (28 × 28) and full-bit-depth (8-bit) images. In the case of low-resolution (16 × 16) and black-white (1-bit) images, the smaller version of our network with 384 neurons and 32,768 synapses still maintained an accuracy of about 77%, extending its application to ultra-low-cost situations. Both versions need less than 30,000 samples to reach convergence, which is a >50% reduction compared to other similar networks. As for robustness, it is immune to the fluctuation of MRAM cell resistance.
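Storing a multi-bit fixed-point weight across an array of one-bit MRAM cells, as described above, amounts to bit-slicing; the following is a conceptual sketch that ignores the array and read/write circuit details:

```python
def weight_to_cells(w, n_bits=8):
    """Split a fixed-point weight in [0, 2**n_bits) into the binary
    states of n_bits one-bit MRAM cells, most significant bit first.
    Conceptual bit-slicing sketch only; circuit details not modeled."""
    assert 0 <= w < 2 ** n_bits
    return [(w >> b) & 1 for b in reversed(range(n_bits))]

def cells_to_weight(cells):
    """Recombine binary cell states into the fixed-point weight, as a
    binary-weighted sum of the per-cell reads."""
    w = 0
    for bit in cells:
        w = (w << 1) | bit
    return w

cells = weight_to_cells(173)  # 173 = 0b10101101
```

Each cell only ever holds 0 or 1, which is what keeps the per-cell sense amplifier simple; multi-bit precision comes entirely from the binary-weighted recombination.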
30

Chen, Ruizhi, and Ling Li. "Analyzing and Accelerating the Bottlenecks of Training Deep SNNs With Backpropagation". Neural Computation 32, no. 12 (December 2020): 2557–600. http://dx.doi.org/10.1162/neco_a_01319.

Full text of the source
Abstract:
Spiking neural networks (SNNs) with the event-driven manner of transmitting spikes consume ultra-low power on neuromorphic chips. However, training deep SNNs is still challenging compared to convolutional neural networks (CNNs). The SNN training algorithms have not achieved the same performance as CNNs. In this letter, we aim to understand the intrinsic limitations of SNN training to design better algorithms. First, the pros and cons of typical SNN training algorithms are analyzed. Then it is found that the spatiotemporal backpropagation algorithm (STBP) has potential in training deep SNNs due to its simplicity and fast convergence. Later, the main bottlenecks of the STBP algorithm are analyzed, and three conditions for training deep SNNs with the STBP algorithm are derived. By analyzing the connection between CNNs and SNNs, we propose a weight initialization algorithm to satisfy the three conditions. Moreover, we propose an error minimization method and a modified loss function to further improve the training performance. Experimental results show that the proposed method achieves 91.53% accuracy on the CIFAR10 data set with 1% accuracy increase over the STBP algorithm and decreases the training epochs on the MNIST data set to 15 epochs (over 13 times speed-up compared to the STBP algorithm). The proposed method also decreases classification latency by over 25 times compared to the CNN-SNN conversion algorithms. In addition, the proposed method works robustly for very deep SNNs, while the STBP algorithm fails in a 19-layer SNN.
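STBP-style backpropagation, analyzed above, relies on a surrogate derivative in place of the non-differentiable spike activation; the rectangular window below is a typical choice shown for illustration, not necessarily the exact function this letter analyzes:

```python
import numpy as np

def spike_fn(v, v_th=1.0):
    """Heaviside spike activation: non-differentiable at threshold."""
    return (v >= v_th).astype(float)

def surrogate_grad(v, v_th=1.0, alpha=1.0):
    """Rectangular surrogate derivative commonly paired with STBP:
    gradient flows only for membrane potentials within alpha/2 of the
    threshold. Window shape and width are typical assumptions."""
    return (np.abs(v - v_th) < alpha / 2).astype(float) / alpha

v = np.array([0.2, 0.9, 1.0, 1.6])  # toy membrane potentials
s = spike_fn(v)
g = surrogate_grad(v)
```

The conditions the letter derives for trainability concern exactly this mismatch: how far the membrane potential distribution sits from the surrogate's support determines whether gradients vanish.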
31

Qiu, Xiaorong, Ye Xu, Yingzhong Shi, S. Kannadhasan Deepa, and S. Balakumar. "Maximum Entropy Principle Based on Bank Customer Account Validation Using the Spark Method". Journal of Computer Networks and Communications 2023 (December 31, 2023): 1–13. http://dx.doi.org/10.1155/2023/8840168.

Full text of the source
Abstract:
Bank customer validation is carried out with the aim of providing a series of services to users of a bank and financial institutions. It is necessary to perform various analytical methods for user’s accounts due to the high volume of banking data. This research works in the field of money laundering detection from real bank data. Banking data analysis is a complex process that involves information gathered from various sources, mainly in terms of personality, such as bills or bank account transactions which have qualitative characteristics such as the testimony of eyewitnesses. Operational or research activities can be greatly improved if supported by proprietary techniques and tools, due to the vast nature of this information. The application of data mining operations with the aim of discovering new knowledge of banking data with an intelligent approach is considered in this research. The approach of this research is to use the spiking neural network (SNN) with a group of sparks to detect money laundering, but due to the weakness in accurately identifying the characteristics of money laundering, the maximum entropy principle (MEP) method is also used. This approach will have a mapping from clustering and feature extraction to classification for accurate detection. Based on the analysis and simulation, it is observed that the proposed approach SNN-MFP has 87% accuracy and is 84.71% more functional than the classical method of using only the SNN. In this analysis, it is observed that in real banking data from Mellat Bank, Iran, in its third and fourth data, with a comprehensive analysis and reaching different outputs, there have been two money laundering cases.
32

Morita, Kenta, Haruhiko Takase, Naoki Morita, Hiroharu Kawanak, and Hidehiko Kita. "Spiking Neural Network to Extract Frequent Words from Japanese Speech Data". Procedia Computer Science 159 (2019): 363–71. http://dx.doi.org/10.1016/j.procs.2019.09.191.

Full text of the source
33

Fadhil, Muthna Jasim, Maitham Ali Naji, and Ghalib Ahmed Salman. "Transceiver error reduction by design prototype system based on neural network analysis method". Indonesian Journal of Electrical Engineering and Computer Science 18, no. 3 (June 1, 2020): 1244. http://dx.doi.org/10.11591/ijeecs.v18.i3.pp1244-1251.

Full text of the source
Abstract:
Traditional code words can be decoded by artificial neural networks, but neural-network encoding has rarely been explored. A feed-forward neural network encoder is therefore proposed, with its main structure built from a Self-Organizing Feature Map (SOFM). The dimensions of the forward network are first set according to the number of source bits and codeword bits; a weight-distribution proposal is then chosen, an appropriate algorithm initializes the weight sets, and finally the codeword set is checked for uniqueness against existing ones. A spiking neural network (SNN) serves as the neural decoder: its structure is likewise built from the source-bit and codeword-bit dimensions, it is trained on codeword sets generated by the forward network, training stops once the overall error reaches a minimum, and the decoded codeword set is then accepted. Simulation tests show that neural-network encoding and decoding are feasible, with better performance for the forward network structure when a proper condition on the output node degree γ is achieved. Since traditional mathematical methods cannot decode the codeword sets generated by the neural encoder, the approach has good prospects for communication security.
34

Naudin, Loïs. "Biological emergent properties in non-spiking neural networks". AIMS Mathematics 7, no. 10 (2022): 19415–39. http://dx.doi.org/10.3934/math.20221066.

Full text of the source
Abstract:
A central goal of neuroscience is to understand the way nervous systems work to produce behavior. Experimental measurements in freely moving animals (e.g. in the C. elegans worm) suggest that ON- and OFF-states in non-spiking nervous tissues underlie many physiological behaviors. Such states are defined by the collective activity of non-spiking neurons with correlated up- and down-states of their membrane potentials. How these network states emerge from the intrinsic neuron dynamics and their couplings remains unclear. In this paper, we develop a rigorous mathematical framework for better understanding their emergence. To that end, we use a recent simple phenomenological model capable of reproducing the experimental behavior of non-spiking neurons. The analysis of the stationary points and the bifurcation dynamics of this model are performed. Then, we give mathematical conditions to monitor the impact of network activity on intrinsic neuron properties. From then on, we highlight that ON- and OFF-states in non-spiking coupled neurons could be a consequence of bistable synaptic inputs, and not of intrinsic neuron dynamics. In other words, the apparent up- and down-states in the neuron's bimodal voltage distribution do not necessarily result from an intrinsic bistability of the cell. Rather, these states could be driven by bistable presynaptic neurons, ubiquitous in non-spiking nervous tissues, which dictate their behaviors to their postsynaptic ones.
35

Grimaldi, Antoine, Amélie Gruel, Camille Besnainou, Jean-Nicolas Jérémie, Jean Martinet, and Laurent U. Perrinet. "Precise Spiking Motifs in Neurobiological and Neuromorphic Data". Brain Sciences 13, no. 1 (December 29, 2022): 68. http://dx.doi.org/10.3390/brainsci13010068.

Full text of the source
Abstract:
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are on one side binary, existing or not without further details, and on the other, can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the base of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology in order to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm to enable the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could enable significant gains in computation time and energy consumption—a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
36

Long, Yun. "Design and Evaluation of English Vocabulary Learning Aids Based on Word Vector Modelling". Journal of Electrical Systems 20, no. 6s (April 29, 2024): 1763–74. http://dx.doi.org/10.52783/jes.3094.

Full text of the source
Abstract:
English vocabulary learning aids based on word vector modeling involve creating tools that leverage advanced techniques to enhance vocabulary acquisition. Word vector modeling, often using methods like Word2Vec or GloVe, represents words as high-dimensional vectors capturing semantic relationships. These models can power vocabulary learning aids by offering context-based word suggestions, personalized word quizzes, or interactive visualizations of word associations. The paper introduces the Hierarchical Spiking Vocabulary Deep Learning (HSV-DL) framework, a novel approach aimed at enhancing vocabulary learning and classification tasks. By combining word vector modeling with spiking neural networks, HSV-DL offers a sophisticated methodology for accurately categorizing vocabulary words into their respective semantic categories. Experimental results demonstrate high accuracy (95%), precision (96%), recall (94%), and F1-score (95%) in categorizing vocabulary words into their semantic categories. Moreover, HSV-DL exhibits robustness to noise and efficient resource utilization, showcasing its potential for real-world applications in natural language processing.
37

Hayat, Hanna, Amit Marmelshtein, Aaron J. Krom, Yaniv Sela, Ariel Tankus, Ido Strauss, Firas Fahoum, Itzhak Fried and Yuval Nir. "Reduced neural feedback signaling despite robust neuron and gamma auditory responses during human sleep". Nature Neuroscience 25, no. 7 (July 2022): 935–43. http://dx.doi.org/10.1038/s41593-022-01107-4.

Full text of the source
Abstract:
During sleep, sensory stimuli rarely trigger a behavioral response or conscious perception. However, it remains unclear whether sleep inhibits specific aspects of sensory processing, such as feedforward or feedback signaling. Here, we presented auditory stimuli (for example, click-trains, words, music) during wakefulness and sleep in patients with epilepsy, while recording neuronal spiking, microwire local field potentials, intracranial electroencephalogram and polysomnography. Auditory stimuli induced robust and selective spiking and high-gamma (80–200 Hz) power responses across the lateral temporal lobe during both non-rapid eye movement (NREM) and rapid eye movement (REM) sleep. Sleep only moderately attenuated response magnitudes, mainly affecting late responses beyond early auditory cortex and entrainment to rapid click-trains in NREM sleep. By contrast, auditory-induced alpha–beta (10–30 Hz) desynchronization (that is, decreased power), prevalent in wakefulness, was strongly reduced in sleep. Thus, extensive auditory responses persist during sleep whereas alpha–beta power decrease, likely reflecting neural feedback processes, is deficient. More broadly, our findings suggest that feedback signaling is key to conscious sensory processing.
38

Pattusamy, Murugan, and Lakshmi Kanth. "Classification of Tweets Into Facts and Opinions Using Recurrent Neural Networks". International Journal of Technology and Human Interaction 19, no. 1 (10 March 2023): 1–14. http://dx.doi.org/10.4018/ijthi.319358.

Full text of the source
Abstract:
In the last few years, the number of people active on Twitter has grown rapidly and consistently. In India, even government agencies have started using Twitter accounts, as they feel they can connect with a greater number of people in a short span of time. Apart from the social media platforms, an enormous number of blogging applications have popped up, providing another platform for people to share their views. With all this, the authenticity of the content being generated is increasingly in question. On that note, the authors take on the task of assessing the genuineness of the content. In this process, they have worked on various techniques that would maximize the authenticity of the content and propose a long short-term memory (LSTM) model that distinguishes between the tweets posted on the Twitter platform. The model, in combination with manually engineered features and a bag-of-words model, is able to classify the tweets efficiently.
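The feature side of such a pipeline can be sketched in a few lines. The paper's actual LSTM consumes word sequences; this hypothetical sketch only illustrates combining a bag-of-words vector with hand-engineered features, and the vocabulary and markers below are invented:

```python
from collections import Counter

# Invented mini-vocabulary; a real system would build this from the corpus.
VOCAB = ["official", "reported", "believe", "think", "confirmed"]

def bag_of_words(tweet):
    """Count occurrences of each vocabulary word in the tweet."""
    counts = Counter(tweet.lower().split())
    return [counts[w] for w in VOCAB]

def engineered(tweet):
    """Toy hand-engineered features: exclamation marks and a first-person cue,
    both loose markers of opinionated rather than factual text."""
    return [tweet.count("!"), int("i " in tweet.lower())]

def features(tweet):
    # concatenate both feature groups, as the paper combines its
    # manually engineered features with the bag-of-words model
    return bag_of_words(tweet) + engineered(tweet)

print(features("I think this is wrong!"))  # -> [0, 0, 0, 1, 0, 1, 1]
```

The resulting vector would then be fed, alongside the word sequence, to the classifier.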
39

Hatsopoulos, N. G., M. Burrows and G. Laurent. "Hysteresis reduction in proprioception using presynaptic shunting inhibition". Journal of Neurophysiology 73, no. 3 (1 March 1995): 1031–42. http://dx.doi.org/10.1152/jn.1995.73.3.1031.

Full text of the source
Abstract:
1. The tonic responses of angular-position-sensitive afferents in the metathoracic chordotonal organ of the locust leg exhibit much hysteresis. For a given joint angle, the ratio of an afferent's tonic firing rate after extension to its firing rate after flexion (or vice versa) is typically between 1.2:1 and 3:1 but can be as large as 10:1. Spiking local interneurons, that receive direct inputs from these afferents, can, by contrast, exhibit much less hysteresis (between 1.1:1 and 1.2:1). We tested the hypothesis that presynaptic inhibitory interactions between afferent axons reduces the hysteresis of postsynaptic interneurons by acting as an automatic gain control mechanism. 2. We used two kinds of neural models to test this hypothesis: 1) an abstract nonspiking neural model in which a multiplicative, shunting term reduced the "firing rate" of the afferent and 2) a more realistic compartmental model in which shunting inhibition presynaptically attenuated the amplitude of the action potentials reaching the afferent terminals. 3. The abstract neural model demonstrated the automatic gain control capability of a network of laterally inhibited afferent units. A postsynaptic unit, which was connected to the competitive network of afferents, coded for joint angle without saturating as the strength of the afferent input increased by two orders of magnitude. This was possible because shunting inhibition exactly balanced the increase in the excitatory input. This compensatory mechanism required the sum of the excitatory and inhibitory conductances to be much larger than the leak conductance. This requirement suggested a graded weighting scheme in which the afferent recruited first (i.e., at a small joint angle) received the largest inhibition from each of the other afferents because of the lack of active neighbors, and the afferent recruited last (i.e., at a large joint angle) received the least inhibition because all the other afferents were active. 4. 
The compartmental model demonstrated that presynaptic shunting inhibition between afferents could decrease the average synaptic conductance caused by the afferents onto the spiking interneuron, thereby counterbalancing the afferents' large average firing rates after movements in the preferred direction. Therefore the total postsynaptic input per unit time did not differ much between the preferred and nonpreferred directions.(ABSTRACT TRUNCATED AT 400 WORDS)
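The gain-control idea in the abstract model can be sketched numerically. The formulation below is a simplified illustration of shunting (divisive) inhibition, not the authors' exact equations, and all parameter values are invented:

```python
# A postsynaptic unit driven by excitatory conductance g_e and shunting
# inhibitory conductance g_i settles near the steady-state voltage
#   V = g_e * E / (g_leak + g_e + g_i)   (E = excitatory reversal potential).
# If inhibition scales in step with excitation, V stays put even when the
# afferent drive grows by two orders of magnitude, instead of saturating.

def steady_state_v(g_e, g_i, g_leak=1.0, e_rev=1.0):
    return g_e * e_rev / (g_leak + g_e + g_i)

for scale in (1, 10, 100):
    g_e = 5.0 * scale   # excitatory drive scales with afferent firing rate
    g_i = 5.0 * scale   # shunting inhibition from competing afferents scales too
    print(scale, round(steady_state_v(g_e, g_i), 3))
# As scale grows, V approaches g_e / (g_e + g_i) = 0.5 rather than saturating,
# which requires g_e + g_i >> g_leak, as noted in the abstract.
```

This is the "automatic gain control" behavior: the output codes the *ratio* of excitation to total conductance, not the absolute input strength.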
40

Larsson, J. P., Fátima Vera Constán, Núria Sebastián-Gallés and Gustavo Deco. "Lexical Plasticity in Early Bilinguals Does Not Alter Phoneme Categories: I. Neurodynamical Modeling". Journal of Cognitive Neuroscience 20, no. 1 (January 2008): 76–94. http://dx.doi.org/10.1162/jocn.2008.20004.

Full text of the source
Abstract:
Sebastián-Gallés et al. [The influence of initial exposure on lexical representation: Comparing early and simultaneous bilinguals. Journal of Memory and Language, 52, 240–255, 2005] contrasted highly proficient early Spanish-Catalan and Catalan-Spanish bilinguals, using Catalan materials in a lexical decision task (LDT). They constructed two types of experimental pseudowords, substituting Catalan phoneme /e/ for Catalan /ɛ/, or vice versa. Catalan-dominant bilinguals showed a performance asymmetry across experimental conditions, making more mistakes for /ɛ/→/e/ changes, than for /e/→/ɛ/ ones. This was considered evidence of a developed acceptance of mispronounced Catalan /ɛ/-words, caused by exposure to a bilingual environment where mispronunciations by Spanish-dominant bilinguals using their /e/-category abound. Although this indicated modified or added lexical representations, an open issue is whether such lexical information also modifies phoneme categories. We address this using a biophysically realistic neurodynamic model, describing neural activity at the synaptic and spiking levels. We construct a network of pools of neurons, representing phonemic and lexical processing. Carefully analyzing the dependency of network dynamics on connection strengths, by first exploring parameter space under steady-state assumptions (mean-field scans), then running spiking simulations, we investigate the neural substrate role in a representative LDT. We also simulate a phoneme discrimination task to address whether lexical changes affect the phonemic level. We find that the same network configuration which displays asymmetry in the LDT shows equal performance discriminating the two modeled phonemes. Thus, we predicted that the Catalan-dominant bilinguals do not alter their phoneme categories, although showing signs of having stored a new word variation in the lexicon. 
To explore this prediction, a syllable discrimination task involving the /e/-/ɛ/ contrast was set up, using Catalan-dominants displaying performance asymmetry in a repetition of the original LDT. Discrimination task results support the prediction, showing that these subjects discriminate both categories equally well. We conclude that subjects often exposed to dialectal word variations can store these in their lexicons, without altering their phoneme representations.
41

Skinner, T. L., and B. Peretz. "Age sensitivity of osmoregulation and of its neural correlates in Aplysia". American Journal of Physiology-Regulatory, Integrative and Comparative Physiology 256, no. 4 (1 April 1989): R989—R996. http://dx.doi.org/10.1152/ajpregu.1989.256.4.r989.

Full text of the source
Abstract:
Osmoregulation was studied in the marine mollusc Aplysia californica in young, mature, and old adults. To monitor volume and osmoregulation, we measured body weight, hemolymph osmolality, and chloride concentration. These parameters were measured at regular intervals with animals in 90% artificial seawater (90% ASW) for up to 36 h. They showed that the rates at which Aplysia osmo- and volume regulate were significantly slowed with increased age. However, no age effect was found in osmoregulation when the hemolymph was diluted to 90% of control in animals without an external stress, i.e., by injection of distilled H2O and keeping animals in 100% ASW. Because the dilution bypassed the sensory receptors that detect external changes of osmolality, this finding suggested that the slowed osmoregulation involved age-impaired functioning of the neural pathway mediating osmoregulation. Other evidence was from mature adults whose osmoreceptive organ, the osphradium, was lesioned; they mimicked osmoregulation measured in old adults. In preparations containing a portion of the osmoregulatory pathway, the osphradium was stimulated by 90% ASW, and the responsiveness of neuron R15, which putatively regulates antidiuresis, was tested. The stimulus inhibited spiking in R15 from mature adults but not in R15 from old adults or from osphradial-lesioned mature ones. In old Aplysia the refractoriness of R15 to osphradial stimulation demonstrated that the efficacy of the pathway was impaired with increased age; it helped explain the slower rate of osmoregulation. Possible changes of osmoregulatory mechanisms and behavior compensating for the age sensitivity of osmoregulation are discussed. (ABSTRACT TRUNCATED AT 250 WORDS)
42

Woo, Dae-Seong, Hyun-Do Choi, Hong-Uk Jin, Jae-Kyeong Kim, Tae-Hun Shim and Jea-Gun Park. "Multi-Bit Self-Rectifying Synaptic Memristor Having Tri-Layer Structure for Quantization Aware Training of Quantized Neural Network". ECS Meeting Abstracts MA2023-02, no. 30 (22 December 2023): 1560. http://dx.doi.org/10.1149/ma2023-02301560mtgabs.

Full text of the source
Abstract:
Recently, cross-point synaptic memristor arrays have been employed to implement synaptic cores of various ANNs, i.e., deep neural networks (DNNs), spiking neural networks (SNNs), and convolutional neural networks (CNNs). Most of the reported studies conducted training and inference of ANNs using analog synaptic weight modulation of memristors such as resistive random access memory (ReRAM), phase change random access memory (PCRAM), and ferroelectric random access memory (FeRAM). In other words, no case has been reported so far in which a synaptic memristor with quantized multi-bit levels is utilized in a synaptic core of a quantized neural network (QNN). In this study, for the first time, we introduce a multi-bit self-rectifying synaptic memristor having a tri-layer structure composed of an oxygen-rich AlOx rectifying layer, an oxygen-deficient HfOx top switching layer, and an oxygen-rich HfOy bottom switching layer for quantization-aware training of a QNN. The resistive switching and self-rectifying mechanism of the multi-bit self-rectifying synaptic memristor was clearly demonstrated by precisely investigating the migration of oxygen ions and vacancies in the resistive switching layers via ToF-SIMS and XPS depth profiles depending on the resistance states (i.e., pristine, set, and reset). The designed multi-bit self-rectifying synaptic memristor presented linear, discrete, and quantized 4-bit (i.e., 16-level) conductance levels depending on the incremental write pulse number, the first 4-bit self-rectifying synaptic memristor reported. In addition, quantization-aware training (QAT) was conducted using the 4-bit quantized conductance levels of the designed multi-bit self-rectifying synaptic memristor via a straight-through estimator (STE). Finally, three different iris datasets were successfully classified using a quantized neural network designed via SPICE circuit simulation.
The conductance mechanism of the self-rectifying synaptic memristor and its application to QNNs will be presented in detail. Acknowledgement: This research was supported by the National R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2021M3F3A2A01037733).
43

Lee, Albert K., and Matthew A. Wilson. "A Combinatorial Method for Analyzing Sequential Firing Patterns Involving an Arbitrary Number of Neurons Based on Relative Time Order". Journal of Neurophysiology 92, no. 4 (October 2004): 2555–73. http://dx.doi.org/10.1152/jn.01030.2003.

Full text of the source
Abstract:
Information processing in the brain is believed to require coordinated activity across many neurons. With the recent development of techniques for simultaneously recording the spiking activity of large numbers of individual neurons, the search for complex multicell firing patterns that could help reveal this neural code has become possible. Here we develop a new approach for analyzing sequential firing patterns involving an arbitrary number of neurons based on relative firing order. Specifically, we develop a combinatorial method for quantifying the degree of matching between a “reference sequence” of N distinct “letters” (representing a particular target order of firing by N cells) and an arbitrarily long “word” composed of any subset of those letters including repeats (representing the relative time order of spikes in an arbitrary firing pattern). The method involves computing the probability that a random permutation of the word's letters would by chance alone match the reference sequence as well as or better than the actual word does, assuming all permutations were equally likely. Lower probabilities thus indicate better matching. The overall degree and statistical significance of sequence matching across a heterogeneous set of words (such as those produced during the course of an experiment) can be computed from the corresponding set of probabilities. This approach can reduce the sample size problem associated with analyzing complex firing patterns. The approach is general and thus applicable to other types of neural data beyond multiple spike trains, such as EEG events or imaging signals from multiple locations. We have recently applied this method to quantify memory traces of sequential experience in the rodent hippocampus during slow wave sleep.
44

Uusitalo, R. O., M. Juusola and M. Weckstrom. "Graded responses and spiking properties of identified first-order visual interneurons of the fly compound eye". Journal of Neurophysiology 73, no. 5 (1 May 1995): 1782–92. http://dx.doi.org/10.1152/jn.1995.73.5.1782.

Full text of the source
Abstract:
1. We studied the graded and spiking properties of the "non-spiking" first-order visual interneurons of the fly compound eye in situ with the use of intracellular recordings. Iontophoretical QX-314 injections, Lucifer yellow marking, and (discontinuous) current-clamp method together with transfer function analysis were used to characterize the neural signal processing mechanisms in these neurons. 2. A light-OFF spike was seen in one identified anatomic subtype (L3, n = 6) of the three first-order visual interneurons (L1, L2, and L3, or LMCs) when recorded from synaptic region (i.e., in the 1st visual ganglion, lamina ganglionaris) in dark-adapted conditions. Hyperpolarization of the membrane potential by current caused the identified L1 (n = 4), as well as L3 (n = 6), to produce an OFF spike, a number of action potentials, and some subthreshold depolarizations after the light-ON response. In L2 the OFF spike or action potentials could not be elicited. 3. To produce action potentials in L1 and L3, it was found to be necessary to hyperpolarize the cells approximately 35-45 mV (n = 43) below the resting potential (RP) in the synaptic zone. Recordings from the axons of these cells revealed that near the second neuropil (chiasma) the threshold of these spikes was near to (approximately 10 mV below, n = 16) or even at the RP when an ON spike was also produced (n = 4). 4. The recorded spikes were up to 54 mV in amplitude, appeared with a maximum frequency of up to 120 impulses/s, and had a duration of approximately 8 ms. In L1 and L3 the spikes were elicited either after a light pulse (L3) or after a negative current step that was superimposed on a hyperpolarizing steady-state current (L3 and L1). A positive current step (similarly superimposed on a hyperpolarizing steady-state current) also triggered the spikes during the step. 5. 
Iontophoretic injection of a potent intracellularly effective blocker of voltage-gated sodium channels, QX-314, irreversibly eradicated the spikes and subthreshold depolarizations (n = 5). In addition, further injections elongated the light-ON responses and decreased or even abolished the light-OFF response. 6. Negative prepulses followed by positive current steps were applied from the RP, to test the activation-inactivation properties of the channels responsible for the OFF spike.(ABSTRACT TRUNCATED AT 400 WORDS)
45

Barrio, L. C., A. Araque and W. Buno. "Participation of voltage-gated conductances on the response succeeding inhibitory synaptic potentials in the crayfish slowly adapting stretch receptor neuron". Journal of Neurophysiology 72, no. 3 (1 September 1994): 1140–51. http://dx.doi.org/10.1152/jn.1994.72.3.1140.

Full text of the source
Abstract:
1. We examined the contribution of voltage-gated conductances to inhibitory postsynaptic potential (IPSP) effects under current clamp in silent and spiking slowly adapting stretch receptor neurons (SN1s) in the slow receptor muscle of the crayfish Procambarus. The receptor exemplifies the simplest inhibitory neural circuit, with one presynaptic and one postsynaptic neuron. The effects of synaptic inhibition were compared with the outcome of hyperpolarizing current pulses. Because pulse effects were exclusively due to postsynaptic mechanisms, an estimation of the synaptic or extrasynaptic origin of the results of IPSP was possible. 2. Inhibition by single IPSPs increased gradually with the time elapsed from the preceding spike in 60% of the spiking SN1s. However, early IPSP arrivals were exclusively excitatory in the rest of the cases. Inhibition was restricted to a single expanded SN1 interspike interval, but the early excitation and the postinhibitory rebound lasted several intervals. Rebound was invariably present; it was the only consequence of IPSPs in silent receptors and could be extremely long lasting (> 25 s). 3. The membrane potential of the SN1 neuron was clamped at hyperpolarized values (greater than -65 mV) by prolonged IPSP barrages at high rate (> 20/s). A prominent depolarizing sag and a gradual reduction of the IPSP amplitude were observed with prolonged presynaptic stimulation. There were subthreshold IPSP amplitude oscillations consisting of gradual increases and decreases of the post-IPSP peak depolarization at lower presynaptic rates. IPSP amplitude variations (< or = 10 mV) were primarily due to larger local responses. 4. Essentially all IPSP effects were mimicked by hyperpolarizing pulses. Sag was also evoked by pulses and was accompanied by a gradual conductance increase preceded by a brief initial drop. 
Sag and rebound were markedly reduced by Cs+ (2 mM) and tetrodotoxin (1 microM) and less by Ba2+ (5 mM) or tetraethylammonium (25 mM) superfusion. Both were somewhat decreased by acetylcholine (30 microM), which also markedly depolarized and accelerated firings, results which were usually reduced by atropine (10 microM). 5. In conclusion, IPSP and hyperpolarizing pulse effects were essentially identical, implying that extrasynaptic membrane properties were decisive. Interestingly, net excitatory consequences were usual, effectively increasing sensitivity and reducing the sensory threshold. Pharmacological evidence is provided suggesting that the hyperpolarization-activated current, IQ, and also probably the K+ M-current, the A-current, and the low-threshold, persistent Na+ conductances participate in sag and rebound genesis.(ABSTRACT TRUNCATED AT 400 WORDS)
46

Yan, Yulong, Haoming Chu, Yi Jin, Yuxiang Huan, Zhuo Zou and Lirong Zheng. "Backpropagation With Sparsity Regularization for Spiking Neural Network Learning". Frontiers in Neuroscience 16 (14 April 2022). http://dx.doi.org/10.3389/fnins.2022.760298.

Full text of the source
Abstract:
The spiking neural network (SNN) is a possible pathway for low-power and energy-efficient processing and computing exploiting spiking-driven and sparsity features of biological systems. This article proposes a sparsity-driven SNN learning algorithm, namely backpropagation with sparsity regularization (BPSR), aiming to achieve improved spiking and synaptic sparsity. Backpropagation incorporating spiking regularization is utilized to minimize the spiking firing rate with guaranteed accuracy. Backpropagation realizes the temporal information capture and extends to the spiking recurrent layer to support brain-like structure learning. The rewiring mechanism with synaptic regularization is suggested to further mitigate the redundancy of the network structure. Rewiring based on weight and gradient regulates the pruning and growth of synapses. Experimental results demonstrate that the network learned by BPSR has synaptic sparsity and is highly similar to the biological system. It not only balances the accuracy and firing rate, but also facilitates SNN learning by suppressing the information redundancy. We evaluate the proposed BPSR on the visual dataset MNIST, N-MNIST, and CIFAR10, and further test it on the sensor dataset MIT-BIH and gas sensor. Results bespeak that our algorithm achieves comparable or superior accuracy compared to related works, with sparse spikes and synapses.
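The core of spiking regularization can be sketched as a penalty on the firing rate added to the task loss. This is my own minimal construction to illustrate the idea, not the BPSR code; the rate definition and the weight `lam` are illustrative assumptions:

```python
def firing_rate(spike_trains):
    """Mean firing rate over all neurons and time steps
    (spike_trains: list of binary spike trains, one per neuron)."""
    total = sum(sum(train) for train in spike_trains)
    bins = sum(len(train) for train in spike_trains)
    return total / bins

def regularized_loss(task_loss, spike_trains, lam=0.1):
    # Trade a little task loss for sparser spiking: minimizing this
    # pushes the network toward fewer spikes at comparable accuracy.
    return task_loss + lam * firing_rate(spike_trains)

trains = [[0, 1, 0, 0], [1, 0, 0, 0]]   # 2 neurons, 4 time steps, 2 spikes
print(firing_rate(trains))              # 0.25
print(regularized_loss(0.5, trains))    # 0.5 + 0.1 * 0.25 = 0.525
```

In the actual algorithm the gradient of this penalty flows back through a surrogate derivative of the spiking nonlinearity, and an analogous penalty on synapse counts drives the rewiring mechanism.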
47

Guo, Yufei, Xuhui Huang and Zhe Ma. "Direct learning-based deep spiking neural networks: a review". Frontiers in Neuroscience 17 (16 June 2023). http://dx.doi.org/10.3389/fnins.2023.1209795.

Full text of the source
Abstract:
The spiking neural network (SNN), as a promising brain-inspired computational model with a binary spike information transmission mechanism, rich spatio-temporal dynamics, and event-driven characteristics, has received extensive attention. However, its intricately discontinuous spike mechanism makes optimizing deep SNNs difficult. Since the surrogate gradient method can greatly mitigate this optimization difficulty and shows great potential in directly training deep SNNs, a variety of direct learning-based deep SNN works have been proposed and have achieved satisfying progress in recent years. In this paper, we present a comprehensive survey of these direct learning-based deep SNN works, mainly categorized into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. In addition, we further divide these categories into finer granularities to better organize and introduce them. Finally, the challenges and trends that may be faced in future research are outlined.
48

Politi, Antonio, and Alessandro Torcini. "A robust balancing mechanism for spiking neural networks". Chaos: An Interdisciplinary Journal of Nonlinear Science 34, no. 4 (1 April 2024). http://dx.doi.org/10.1063/5.0199298.

Full text of the source
Abstract:
Dynamical balance of excitation and inhibition is usually invoked to explain the irregular low firing activity observed in the cortex. We propose a robust nonlinear balancing mechanism for a random network of spiking neurons, which works also in the absence of strong external currents. Biologically, the mechanism exploits the plasticity of excitatory–excitatory synapses induced by short-term depression. Mathematically, the nonlinear response of the synaptic activity is the key ingredient responsible for the emergence of a stable balanced regime. Our claim is supported by a simple self-consistent analysis accompanied by extensive simulations performed for increasing network sizes. The observed regime is essentially fluctuation driven and characterized by highly irregular spiking dynamics of all neurons.
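The nonlinear synaptic response at the heart of such a mechanism can be sketched with a standard Tsodyks-Markram-like steady-state depression law. This is an illustration of the general short-term-depression nonlinearity, not the paper's specific model, and all parameter values are invented:

```python
# Steady-state short-term depression: the fraction of available synaptic
# resources at presynaptic rate r is  x(r) = 1 / (1 + U * tau_d * r),
# so the effective excitatory drive  J * x(r) * r  grows sublinearly and
# saturates at J / (U * tau_d), letting inhibition catch up and a balanced,
# fluctuation-driven regime stabilize.

def depressed_drive(rate, J=1.0, U=0.5, tau_d=0.2):
    x = 1.0 / (1.0 + U * tau_d * rate)
    return J * x * rate

for r in (1.0, 10.0, 100.0):
    print(r, round(depressed_drive(r), 2))
# Drive approaches J / (U * tau_d) = 10 as the rate grows, instead of
# growing linearly with it.
```

It is this saturation of excitation, rather than strong external currents, that plays the stabilizing role in the balancing mechanism described above.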
49

Wang, Jing. "Training multi-layer spiking neural networks with plastic synaptic weights and delays". Frontiers in Neuroscience 17 (24 January 2024). http://dx.doi.org/10.3389/fnins.2023.1253830.

Full text of the source
Abstract:
Spiking neural networks are usually considered as the third generation of neural networks, which hold the potential of ultra-low power consumption on corresponding hardware platforms and are very suitable for temporal information processing. However, how to efficiently train the spiking neural networks remains an open question, and most existing learning methods only consider the plasticity of synaptic weights. In this paper, we proposed a new supervised learning algorithm for multiple-layer spiking neural networks based on the typical SpikeProp method. In the proposed method, both the synaptic weights and delays are considered as adjustable parameters to improve both the biological plausibility and the learning performance. In addition, the proposed method inherits the advantages of SpikeProp, which can make full use of the temporal information of spikes. Various experiments are conducted to verify the performance of the proposed method, and the results demonstrate that the proposed method achieves a competitive learning performance compared with the existing related works. Finally, the differences between the proposed method and the existing mainstream multi-layer training algorithms are discussed.
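The role of a trainable delay can be sketched with the classic SpikeProp setup: each connection carries a weight w and a delay d, and a presynaptic spike at time t contributes a spike-response kernel evaluated at t_now - t - d. The kernel form and all numbers below are the standard textbook choice, used here for illustration only, not the paper's exact implementation:

```python
import math

def eps(s, tau=2.0):
    """Standard spike-response kernel, zero before the (delayed) arrival."""
    return (s / tau) * math.exp(1.0 - s / tau) if s > 0 else 0.0

def potential(t, inputs):
    """Membrane potential at time t.
    inputs: list of (t_spike, weight, delay) per incoming connection."""
    return sum(w * eps(t - t_spike - d) for t_spike, w, d in inputs)

# Two connections from neurons that both spike at t = 0, with different
# weights and delays; adjusting d shifts when each contribution peaks,
# which is what making delays plastic exploits.
inputs = [(0.0, 1.0, 1.0), (0.0, 0.5, 3.0)]
print(round(potential(3.0, inputs), 3))
```

In SpikeProp-style learning, the output spike fires when this potential crosses threshold, and gradients of the output spike time with respect to both w and d can then be derived from the same expression.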
50

Pandey, Shagun. "Advancements in Gas Recognition Techniques for Electronic Nose Systems: A Comparative Review of Classical Methods and Spiking Neural Networks". International Journal of Scientific Research in Engineering and Management 07, no. 07 (22 July 2023). http://dx.doi.org/10.55041/ijsrem24791.

Full text of the source
Abstract:
Gas recognition is essential for an electronic nose (E-nose) system to interpret the multivariate responses of its gas sensors across a variety of applications. Principal component analysis (PCA) and other traditional gas recognition methods have been widely used in E-nose systems for decades. In recent years, artificial neural networks (ANNs), particularly spiking neural networks (SNNs), have significantly transformed the E-nose field. In this paper, we compare and contrast recent E-nose gas recognition techniques in terms of algorithms and hardware implementations. Each classical gas recognition method has a relatively fixed framework and few parameters, making it easy to design. It works well with few gas samples but poorly for multiple-gas recognition when noise is present. Keywords: gas detection, electronic nose, artificial neural network, spiking neural network.
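The classical PCA step in an E-nose pipeline can be sketched directly: sensor responses are mean-centered and projected onto the top principal component, found here by simple power iteration on the covariance matrix. The data and sizes are invented for illustration:

```python
def top_component(samples, iters=200):
    """First principal component of a list of d-dimensional samples,
    via power iteration on the sample covariance matrix."""
    n, d = len(samples), len(samples[0])
    means = [sum(row[j] for row in samples) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in samples]   # center
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Two toy "gas sensors" whose responses are almost perfectly correlated:
samples = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.0]]
pc1 = top_component(samples)
print([round(c, 2) for c in pc1])  # roughly [0.45, 0.89]: the correlated axis
```

Projecting each sample onto this axis compresses the correlated sensor channels into one coordinate, which is the dimensionality reduction classical E-nose classifiers start from.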