Journal articles on the topic "NEURONS NEURAL NETWORK"

To see other types of publications on this topic, follow the link: NEURONS NEURAL NETWORK.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "NEURONS NEURAL NETWORK."

Next to every work in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if the corresponding details are available in the publication's metadata.

Browse journal articles in a wide variety of disciplines and organize your bibliography correctly.

1

Ribar, Srdjan, Vojislav V. Mitic, and Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations." Biophysical Reviews and Letters 16, no. 01 (February 6, 2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.

Abstract:
Artificial neural networks (ANNs) are essentially structures that perform input–output mapping. This mapping mimics signal processing in biological neural networks. The basic element of a biological neural network is the neuron. Neurons receive input signals from other neurons or from the environment, process them, and generate an output, which in turn serves as input to other neurons of the network. Neurons can change their sensitivity to input signals. Each neuron applies a simple rule to process an input signal. Biological neural networks have the property that signals are processed through many parallel connections (massively parallel processing). The activity of all neurons in these parallel connections is summed and represents the output of the whole network. The main feature of biological neural networks is that changes in the sensitivity of the neurons lead to changes in the operation of the entire network. This is called adaptation and is correlated with the learning process of living organisms. In this paper, a set of artificial neural networks is used to classify human skin biophysical impedance data.
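As a minimal illustration of the neuron model sketched in this abstract (a toy Python example; the weights, bias, and sigmoid rule below are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs followed by a simple
    processing rule (here, a sigmoid)."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Massively parallel processing: a layer of neurons sharing the same inputs.
x = np.array([0.5, -1.2, 0.3])            # signals from other neurons / environment
W = np.array([[0.4, -0.6, 0.1],           # each row: one neuron's sensitivities
              [0.2,  0.9, -0.3]])
b = np.array([0.0, -0.1])
layer_output = 1.0 / (1.0 + np.exp(-(W @ x + b)))
print(layer_output)                       # these outputs feed the next neurons
```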
2

Dalhoum, Abdel Latif Abu, and Mohammed Al-Rawi. "High-Order Neural Networks are Equivalent to Ordinary Neural Networks." Modern Applied Science 13, no. 2 (January 27, 2019): 228. http://dx.doi.org/10.5539/mas.v13n2p228.

Abstract:
Equivalence of computational systems can assist in obtaining abstract systems, and thus enable a better understanding of issues related to their design and performance. For more than four decades, artificial neural networks have been used in many scientific applications to solve classification problems as well as other problems. Since the time of their introduction, the multilayer feedforward neural network referred to as the Ordinary Neural Network (ONN), which contains only summation-activation (Sigma) neurons, and the multilayer feedforward High-order Neural Network (HONN), which contains both Sigma neurons and product-activation (Pi) neurons, have been treated in the literature as different entities. In this work, we studied whether HONNs are mathematically equivalent to ONNs. We have proved that every HONN can be converted to an equivalent ONN. In most cases, one just needs to modify the neuronal transfer function of the Pi neuron to convert it to a Sigma neuron. The theorems that we have derived clearly show that the original HONN and its corresponding equivalent ONN give exactly the same output, which means they can both be used to perform exactly the same functionality. We also derived equivalence theorems for several other non-standard neural networks, for example, recurrent HONNs and HONNs with translated multiplicative neurons. This work rejects the hypothesis that HONNs and ONNs are different entities, a conclusion that might initiate a new research frontier in artificial neural network research.
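The flavor of the conversion can be seen from the standard identity prod_i x_i^{w_i} = exp(sum_i w_i ln x_i), which turns a product unit into a summation unit with an exponential transfer function. The sketch below illustrates this identity for positive inputs only; the paper's theorems cover the general case:

```python
import numpy as np

def pi_neuron(x, w):
    """Product-activation (Pi) neuron: multiplies weighted inputs."""
    return np.prod(x ** w)

def sigma_neuron(x, w):
    """The same mapping as a summation (Sigma) neuron whose transfer function
    is exp, via prod(x_i ** w_i) == exp(sum(w_i * log(x_i)))."""
    return np.exp(np.sum(w * np.log(x)))

x = np.array([2.0, 0.5, 3.0])
w = np.array([1.0, 2.0, 0.5])
print(pi_neuron(x, w), sigma_neuron(x, w))   # identical outputs
```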
3

Geva, Shlomo, and Joaquin Sitte. "An Exponential Response Neural Net." Neural Computation 3, no. 4 (December 1991): 623–32. http://dx.doi.org/10.1162/neco.1991.3.4.623.

Abstract:
By using artificial neurons with exponential transfer functions one can design perfect autoassociative and heteroassociative memory networks, with virtually unlimited storage capacity, for real- or binary-valued input and output. The autoassociative network has two layers, input and memory, with feedback between the two. The exponential response neurons are in the memory layer. By adding an encoding layer of conventional neurons, the network becomes a heteroassociator and classifier. Because for real-valued input vectors the dot product with the weight vector is no longer a measure of similarity, we also consider a Euclidean-distance-based neuron excitation and present Lyapunov functions for both cases. The network has energy minima corresponding only to stored prototype vectors. The exponential neurons make it simpler to build fast adaptive learning directly into classification networks that map real-valued input to any class structure at their output.
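A sketch of the two excitation rules discussed in this abstract; the temperature-like parameter T and the prototype values are assumptions for illustration:

```python
import numpy as np

def exp_response_dot(x, w, T=1.0):
    """Exponential transfer function on the dot-product excitation."""
    return np.exp(np.dot(w, x) / T)

def exp_response_euclidean(x, w, T=1.0):
    """Euclidean-distance-based excitation for real-valued input, used when
    the dot product is no longer a similarity measure."""
    return np.exp(-np.linalg.norm(x - w) ** 2 / T)

# Memory-layer neurons, one per stored prototype: the neuron whose weight
# vector is closest to the input dominates the response.
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
x = np.array([0.9, 0.1])
print([exp_response_euclidean(x, p, T=0.1) for p in prototypes])
```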
4

Sharma, Subhash Kumar. "An Overview on Neural Network and Its Application." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1242–48. http://dx.doi.org/10.22214/ijraset.2021.37597.

Abstract:
In this paper, an overview of neural networks and their applications is presented. Real-world business applications for neural networks are booming. In some cases, NNs have already become the method of choice for businesses that use hedge fund analytics, marketing segmentation, and fraud detection, and some neural network innovators are changing the business landscape. It is shown how the biological model of a neural network functions: all mammalian brains consist of interconnected neurons that transmit electrochemical signals. Neurons have several components: the body, which includes a nucleus and dendrites; axons, which connect to other cells; and axon terminals or synapses, which transmit information or stimuli from one neuron to another. Combined, this unit carries out communication and integration functions in the nervous system. Keywords: neurons, neural network, biological model of neural network functions
5

YAMAZAKI, TADASHI, and SHIGERU TANAKA. "A NEURAL NETWORK MODEL FOR TRACE CONDITIONING." International Journal of Neural Systems 15, no. 01n02 (February 2005): 23–30. http://dx.doi.org/10.1142/s0129065705000037.

Abstract:
We studied the dynamics of a neural network that has both recurrent excitatory and random inhibitory connections. Neurons started to become active when a relatively weak transient excitatory signal was presented and the activity was sustained due to the recurrent excitatory connections. The sustained activity stopped when a strong transient signal was presented or when neurons were disinhibited. The random inhibitory connections modulated the activity patterns of neurons so that the patterns evolved without recurrence with time. Hence, a time passage between the onsets of the two transient signals was represented by the sequence of activity patterns. We then applied this model to represent the trace eyeblink conditioning, which is mediated by the hippocampus. We assumed this model as CA3 of the hippocampus and considered an output neuron corresponding to a neuron in CA1. The activity pattern of the output neuron was similar to that of CA1 neurons during trace eyeblink conditioning, which was experimentally observed.
6

Kim, Choongmin, Jacob A. Abraham, Woochul Kang, and Jaeyong Chung. "A Neural Network Decomposition Algorithm for Mapping on Crossbar-Based Computing Systems." Electronics 9, no. 9 (September 18, 2020): 1526. http://dx.doi.org/10.3390/electronics9091526.

Abstract:
Crossbar-based neuromorphic computing to accelerate neural networks is a popular alternative to conventional von Neumann computing systems. It is also referred to as processing-in-memory and in-situ analog computing. The crossbars have a fixed number of synapses per neuron, so it is necessary to decompose neurons to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm, which can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons. These neurons are used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve the accuracy substantially with only about 10% extra neurons.
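To see why decomposition is needed at all, the sketch below splits a neuron whose fan-in exceeds the crossbar size into partial dot products that an extra combining neuron sums. This illustrates plain decomposition, not the paper's k-spare algorithm, and the crossbar size is an assumed value:

```python
import numpy as np

CROSSBAR_INPUTS = 128   # assumed fixed number of synapses per crossbar neuron

def decomposed_dot(w, x):
    """Split a wide neuron across crossbar columns and let an extra neuron
    sum the partial dot products."""
    partials = [np.dot(w[i:i + CROSSBAR_INPUTS], x[i:i + CROSSBAR_INPUTS])
                for i in range(0, len(w), CROSSBAR_INPUTS)]
    return sum(partials)

w = np.random.randn(300)    # 300 inputs exceed the 128-input crossbar fan-in
x = np.random.randn(300)
assert np.isclose(decomposed_dot(w, x), np.dot(w, x))
print("decomposed into", -(-len(w) // CROSSBAR_INPUTS), "crossbar neurons")
```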
7

Atanassov, Krassimir, Sotir Sotirov, and Tania Pencheva. "Intuitionistic Fuzzy Deep Neural Network." Mathematics 11, no. 3 (January 31, 2023): 716. http://dx.doi.org/10.3390/math11030716.

Abstract:
The concept of an intuitionistic fuzzy deep neural network (IFDNN) is introduced here as a demonstration of the combined use of artificial neural networks and intuitionistic fuzzy sets, aiming to benefit from the advantages of both methods. The investigation presents in a methodological way the whole process of IFDNN development, starting with the simplest form, an intuitionistic fuzzy neural network (IFNN) with one layer with a single-input neuron, passing through an IFNN with one layer with one multi-input neuron, then a further complication, an IFNN with one layer with many multi-input neurons, and finally the true IFDNN with many layers with many multi-input neurons. The strongly optimistic, optimistic, average, pessimistic, and strongly pessimistic formulas for NN parameter estimation, represented in the form of intuitionistic fuzzy pairs, are given here for the first time for each of the presented IFNNs. To demonstrate its workability, an example of an IFDNN application to biomedical data is presented.
8

Bensimon, Moshe, Shlomo Greenberg, and Moshe Haiut. "Using a Low-Power Spiking Continuous Time Neuron (SCTN) for Sound Signal Processing." Sensors 21, no. 4 (February 4, 2021): 1065. http://dx.doi.org/10.3390/s21041065.

Abstract:
This work presents a new approach to sound preprocessing and classification based on a spiking neural network. The proposed approach is biologically inspired by the characteristics of biological neurons, using spiking neurons and a Spike-Timing-Dependent Plasticity (STDP)-based learning rule. We propose a biologically plausible sound classification framework that uses a Spiking Neural Network (SNN) to detect the embedded frequencies contained within an acoustic signal. This work also demonstrates an efficient hardware implementation of the SNN based on the low-power Spiking Continuous Time Neuron (SCTN). The proposed sound classification framework suggests direct Pulse Density Modulation (PDM) interfacing of the acoustic sensor with the SCTN-based network, avoiding costly digital-to-analog conversions. This paper presents a new connectivity approach applied to Spiking Neuron (SN)-based neural networks. We suggest considering the SCTN as a basic building block in the design of programmable analog electronic circuits. Usually, a neuron is used as a repeated modular element in any neural network structure, and the connectivity between neurons located at different layers is well defined, generating a modular neural network structure composed of several layers with full or partial connectivity. The proposed approach suggests controlling the behavior of the spiking neurons and applying smart connectivity to enable the design of simple analog circuits based on SNNs. Unlike existing NN-based solutions, for which the preprocessing phase is carried out using analog circuits and analog-to-digital conversion, we suggest integrating the preprocessing phase into the network. This approach allows treating the basic SCTN as an analog module, enabling the design of simple analog circuits based on SNNs with unique interconnections between the neurons. The efficiency of the proposed approach is demonstrated by implementing SCTN-based resonators for sound feature extraction and classification. The proposed SCTN-based sound classification approach achieves a classification accuracy of 98.73% on the Real-World Computing Partnership (RWCP) database.
9

Weaver, Adam L., and Scott L. Hooper. "Follower Neurons in Lobster (Panulirus interruptus) Pyloric Network Regulate Pacemaker Period in Complementary Ways." Journal of Neurophysiology 89, no. 3 (March 1, 2003): 1327–38. http://dx.doi.org/10.1152/jn.00704.2002.

Abstract:
Distributed neural networks (ones characterized by high levels of interconnectivity among network neurons) are not well understood. Increased insight into these systems can be obtained by perturbing network activity so as to study the functions of specific neurons not only in the network's “baseline” activity but across a range of network activities. We applied this technique to study cycle period control in the rhythmic pyloric network of the lobster, Panulirus interruptus. Pyloric rhythmicity is driven by an endogenous oscillator, the Anterior Burster (AB) neuron. Two network neurons feed back onto the pacemaker, the Lateral Pyloric (LP) neuron by inhibition and the Ventricular Dilator (VD) neuron by electrical coupling. LP and VD neuron effects on pyloric cycle period can be studied across a range of periods by altering period by injecting current into the AB neuron and functionally removing (by hyperpolarization) the LP and VD neurons from the network at each period. Within a range of pacemaker periods, the LP and VD neurons regulate period in complementary ways. LP neuron removal speeds the network and VD neuron removal slows it. Outside this range, network activity is disrupted because the LP neuron cannot follow slow periods, and the VD neuron cannot follow fast periods. These neurons thus also limit, in complementary ways, normal pyloric activity to a certain period range. These data show that follower neurons in pacemaker networks can play central roles in controlling pacemaker period and suggest that in some cases specific functions can be assigned to individual network neurons.
10

Sajedinia, Zahra, and Sébastien Hélie. "A New Computational Model for Astrocytes and Their Role in Biologically Realistic Neural Networks." Computational Intelligence and Neuroscience 2018 (July 5, 2018): 1–10. http://dx.doi.org/10.1155/2018/3689487.

Abstract:
Recent studies in neuroscience show that astrocytes, alongside neurons, participate in modulating synapses. This led to the new concept of the “tripartite synapse”, which means that a synapse consists of three parts: the presynaptic neuron, the postsynaptic neuron, and the neighboring astrocytes. However, it is still unclear what role the astrocytes play in the tripartite synapse. Detailed biocomputational modeling may help generate testable hypotheses. In this article, we aim to study the role of astrocytes in synaptic plasticity by exploring whether tripartite synapses are capable of improving the performance of a neural network. To achieve this goal, we developed a computational model of astrocytes based on the Izhikevich simple model of neurons. Next, two neural networks were implemented. The first network was composed only of neurons and had standard bipartite synapses. The second network included both neurons and astrocytes and had tripartite synapses. We used reinforcement learning and tested the networks on categorizing random stimuli. The results show that tripartite synapses are able to improve the performance of a neural network and lead to higher accuracy in a classification task. However, the bipartite network was more robust to noise. This research provides computational evidence to begin elucidating the possible beneficial role of astrocytes in synaptic plasticity and in the performance of a neural network.
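The neuron side of such a model is the Izhikevich simple model, whose published update rules are reproduced below (regular-spiking parameters, two half-steps per millisecond as in Izhikevich's reference code); the astrocyte extension itself is not sketched here:

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, steps=1000):
    """Izhikevich simple neuron model: v' = 0.04v^2 + 5v + 140 - u + I,
    u' = a(bv - u); on a spike (v >= 30 mV), reset v = c and u += d."""
    v, u, spikes = -65.0, b * -65.0, []
    for t in range(steps):                     # 1 ms per step
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:                          # spike: reset membrane/recovery
            spikes.append(t)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)), "spikes in 1 s")
```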
11

Ruzek, Martin. "ARTIFICIAL NEURAL NETWORK FOR MODELS OF HUMAN OPERATOR." Acta Polytechnica CTU Proceedings 12 (December 15, 2017): 99. http://dx.doi.org/10.14311/app.2017.12.0099.

Abstract:
This paper presents a new approach to modeling mental functions with the use of artificial neural networks. Artificial neural networks seem to be a promising method for modeling a human operator because the architecture of the ANN is directly inspired by the biological neuron. On the other hand, the classical paradigms of artificial neural networks are not suitable because they oversimplify the real processes in biological neural networks. The search for a compromise between the complexity of biological neural networks and the practical feasibility of the artificial network led to a new learning algorithm. This algorithm is based on the classical multilayered neural network; however, the learning rule is different. The neurons update their parameters in a way that is similar to real biological processes. The basic idea is that the neurons compete for resources, and the criterion deciding which neuron will survive is the usefulness of the neuron to the whole neural network. The neuron does not use a "teacher" or any kind of supervisory system; it receives only the information that is present in the biological system. The learning process can be seen as a search for an equilibrium point corresponding to a state of maximal importance of the neuron for the neural network. This position can change if the environment changes. The name of this type of learning, the homeostatic artificial neural network, originates from this idea, as it is similar to the process of homeostasis known in any living cell. The simulation results suggest that this type of learning can also be useful in other tasks of artificial learning and recognition.
12

Kubo, Masao, Akihiro Yamaguchi, Sadayoshi Mikami, and Mitsuo Wada. "Logistic Chaos Protects Evolution against Environment Noise." Journal of Robotics and Mechatronics 10, no. 4 (August 20, 1998): 350–57. http://dx.doi.org/10.20965/jrm.1998.p0350.

Abstract:
We propose a neuron with a noise generator, designed specifically for the evolutionary robotics approach to incremental knowledge acquisition. Genetically evolving neural networks are modified continuously by genetic operations. When a neural network acts as a robotic controller, difficulty in incrementing knowledge arises because the network may operate unlike it did in the past, owing to disturbance from neurons added by genetic operators. To evolve a network robust against such internal noise, we propose adding noise generators to neurons. We show the effectiveness of applying a logistic chaos noise generator to neurons by comparing several noise generators.
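A logistic-map noise generator of the kind proposed is easy to sketch; the noise amplitude and the sigmoid neuron below are illustrative assumptions, not the paper's design:

```python
import math

def logistic_noise(x=0.3, r=4.0):
    """Logistic-map chaos generator: x_{n+1} = r * x_n * (1 - x_n);
    at r = 4 the sequence is chaotic on (0, 1)."""
    while True:
        x = r * x * (1.0 - x)
        yield x

gen = logistic_noise()

def noisy_neuron(z, amplitude=0.05):
    """Sigmoid neuron output perturbed by internal chaotic noise, so that
    evolution can select networks robust to such disturbances."""
    return 1.0 / (1.0 + math.exp(-z)) + amplitude * (next(gen) - 0.5)

print([round(noisy_neuron(0.7), 3) for _ in range(5)])
```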
13

Wang, Guanzheng, Rubin Wang, Wanzeng Kong, and Jianhai Zhang. "The Relationship between Sparseness and Energy Consumption of Neural Networks." Neural Plasticity 2020 (November 25, 2020): 1–13. http://dx.doi.org/10.1155/2020/8848901.

Abstract:
About 50-80% of total energy is consumed by signaling in neural networks. A neural network consumes much energy if there are many active neurons in the network; if there are few active neurons, it consumes very little energy. The ratio of active neurons to all neurons of a neural network, that is, the sparseness, therefore affects the energy consumption of the network. Laughlin’s studies show that the sparseness of an energy-efficient code depends on the balance between signaling and fixed costs. Laughlin did not give an exact ratio of signaling to fixed costs, nor the ratio of active neurons to all neurons in the most energy-efficient neural networks. In this paper, we calculated the ratio of signaling costs to fixed costs from physiological experimental data; it lies between 1.3 and 2.1. We also calculated the ratio of active neurons to all neurons in the most energy-efficient neural networks; it lies between 0.3 and 0.4. Our results are consistent with data from many relevant physiological experiments, indicating that the model used in this paper may reflect neural coding under real conditions. The calculation results of this paper may be helpful to the study of neural coding.
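A back-of-the-envelope version of this trade-off maximizes bits per unit energy, H(p)/(1 + rp), over the activity ratio p for a signaling-to-fixed cost ratio r. This is a sketch of the Laughlin-style argument, not the paper's exact model; reassuringly, the optima it returns for r = 1.3 and r = 2.1 fall in the 0.3-0.4 range quoted above:

```python
import numpy as np

def efficiency(p, r):
    """Bits per unit energy at activity ratio p: entropy H(p) divided by
    total cost (fixed cost normalized to 1, signaling cost r per active unit)."""
    H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return H / (1.0 + r * p)

p = np.linspace(0.01, 0.99, 981)
for r in (1.3, 2.1):   # signaling-to-fixed cost ratios reported above
    print(f"r = {r}: optimal activity ratio ~ {p[np.argmax(efficiency(p, r))]:.2f}")
```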
14

Gelenbe, Erol. "Stability of the Random Neural Network Model." Neural Computation 2, no. 2 (June 1990): 239–47. http://dx.doi.org/10.1162/neco.1990.2.2.239.

Abstract:
In a recent paper (Gelenbe 1989) we introduced a new neural network model, called the Random Network, in which “negative” or “positive” signals circulate, modeling inhibitory and excitatory signals. These signals can arrive either from other neurons or from the outside world: they are summed at the input of each neuron and constitute its signal potential. The state of each neuron in this model is its signal potential, while the network state is the vector of signal potentials at each neuron. If its potential is positive, a neuron fires, and sends out signals to the other neurons of the network or to the outside world. As it does so its signal potential is depleted. We have shown (Gelenbe 1989) that in the Markovian case, this model has product form, that is, the steady-state probability distribution of its potential vector is the product of the marginal probabilities of the potential at each neuron. The signal flow equations of the network, which describe the rate at which positive or negative signals arrive at each neuron, are nonlinear, so that their existence and uniqueness are not easily established except for the case of feedforward (or backpropagation) networks (Gelenbe 1989). In this paper we show that whenever the solution to these signal flow equations exists, it is unique. We then examine two subclasses of networks — balanced and damped networks — and obtain stability conditions in each case. In practical terms, these stability conditions guarantee that the unique solution can be found to the signal flow equations and therefore that the network has a well-defined steady-state behavior.
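For reference, the signal flow equations referred to above take the following fixed-point form (the model's usual presentation; notation may differ slightly from Gelenbe's papers):

```latex
q_i = \frac{\lambda^{+}_i}{r_i + \lambda^{-}_i}, \qquad
\lambda^{+}_i = \Lambda_i + \sum_j q_j \, r_j \, p^{+}_{ji}, \qquad
\lambda^{-}_i = \lambda_i + \sum_j q_j \, r_j \, p^{-}_{ji}
```

Here q_i is the steady-state probability that neuron i is excited, r_i is its firing rate, Λ_i and λ_i are the external positive and negative signal arrival rates, and p±_{ji} are the probabilities that a signal fired by neuron j reaches neuron i as excitatory or inhibitory. The system is nonlinear because the q_j appear on both sides, which is why existence and uniqueness of the solution are at issue.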
15

Ma, Minglin, Yaping Lu, Zhijun Li, Yichuang Sun, and Chunhua Wang. "Multistability and Phase Synchronization of Rulkov Neurons Coupled with a Locally Active Discrete Memristor." Fractal and Fractional 7, no. 1 (January 11, 2023): 82. http://dx.doi.org/10.3390/fractalfract7010082.

Abstract:
In order to enrich the dynamic behaviors of discrete neuron models and more effectively mimic biological neural networks, this paper proposes a bistable locally active discrete memristor (LADM) model to mimic synapses. We explored the dynamic behaviors of neural networks by introducing the LADM into two identical Rulkov neurons. Based on numerical simulation, the neural network manifested multistability and new firing behaviors under different system parameters and initial values. In addition, the phase synchronization between the neurons was explored. Notably, the Rulkov neurons showed synchronization transition behavior; that is, anti-phase synchronization changed to in-phase synchronization with the change in the coupling strength. In particular, the anti-phase synchronization of different firing patterns in the neural network was investigated. This can characterize the different firing behaviors of coupled homogeneous neurons in different functional areas of the brain, which is helpful for understanding the formation of functional areas. This paper has potential research value and lays a foundation for biological neuron experiments and neuron-based engineering applications.
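A sketch of two coupled chaotic Rulkov maps in Python; the constant diffusive coupling below is a stand-in for the locally active discrete memristor studied in the paper, and all parameter and initial values are illustrative:

```python
def rulkov_step(x, y, alpha=4.5, mu=0.001, sigma=0.14, coupling=0.0):
    """One iteration of the chaotic Rulkov map: fast variable x, slow y."""
    x_new = alpha / (1.0 + x ** 2) + y + coupling
    y_new = y - mu * (x + 1.0) + mu * sigma
    return x_new, y_new

x1, x2 = -1.0, -1.2          # distinct initial values: a multistability probe
y1, y2 = -3.0, -3.2
for n in range(1000):
    c = 0.05 * (x2 - x1)     # simple diffusive stand-in for the memristive synapse
    x1, y1 = rulkov_step(x1, y1, coupling=c)
    x2, y2 = rulkov_step(x2, y2, coupling=-c)
print(round(x1, 3), round(x2, 3))
```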
16

Yildirim, Sahin, Asli Durmusoglu, Caglar Sevim, Mehmet Safa Bingol, and Menderes Kalkat. "Design of neural predictors for predicting and analysing COVID-19 cases in different regions." Neural Network World 32, no. 5 (2022): 233–51. http://dx.doi.org/10.14311/nnw.2022.32.014.

Abstract:
Nowadays, unexpected viruses are causing people many troubles, and the COVID-19 virus spread around the world very rapidly. However, predicting cases and death fatalities is not easy. Artificial neural networks are employed in many areas for predicting system parameters in simulation or real-time approaches. This paper presents the design of neural predictors for analysing COVID-19 cases in three countries, selected because they lie in different regions; the cases of these major countries were chosen for predicting future effects. Three types of neural network predictors were employed to analyse the COVID-19 cases. The NAR-NN, one of the proposed networks, has three layers: an input layer, a hidden layer, and an output layer with fifteen neurons; each neuron uses the tan-sigmoid activation function. The other proposed network, ANFIS, consists of five layers with two inputs and one output, while ARIMA uses four iterative steps to predict. The proposed network types were selected from many others; they are feed-forward rather than recurrent structures, so their learning time is shorter than that of other network types. Finally, the three types of neural predictors were used to predict the cases. The R² and MSE results showed that all three types of neural networks perform well in predicting and analysing the cases in the three countries.
17

Humphries, Mark D. "Dynamical networks: Finding, measuring, and tracking neural population activity using network science." Network Neuroscience 1, no. 4 (December 2017): 324–38. http://dx.doi.org/10.1162/netn_a_00020.

Abstract:
Systems neuroscience is in a headlong rush to record from as many neurons at the same time as possible. As the brain computes and codes using neuron populations, it is hoped these data will uncover the fundamentals of neural computation. But with hundreds, thousands, or more simultaneously recorded neurons come the inescapable problems of visualizing, describing, and quantifying their interactions. Here I argue that network science provides a set of scalable, analytical tools that already solve these problems. By treating neurons as nodes and their interactions as links, a single network can visualize and describe an arbitrarily large recording. I show that with this description we can quantify the effects of manipulating a neural circuit, track changes in population dynamics over time, and quantitatively define theoretical concepts of neural populations such as cell assemblies. Using network science as a core part of analyzing population recordings will thus provide both qualitative and quantitative advances to our understanding of neural computation.
18

Anghel, Daniel Constantin, and Nadia Belu. "Contributions to Ranking an Ergonomic Workstation, Considering the Human Effort and the Microclimate Parameters, Using Neural Networks." Applied Mechanics and Materials 371 (August 2013): 812–16. http://dx.doi.org/10.4028/www.scientific.net/amm.371.812.

Abstract:
The paper presents a method of using a feed-forward neural network to rank a workplace in the manufacturing industry. Neural networks excel at capturing difficult non-linear relationships between the inputs and outputs of a system. The neural network is simulated with a simple simulator: SSNN. In this paper, we considered 6 input parameters as relevant for ranking a workplace: temperature, humidity, noise, luminosity, load, and frequency. The neural network designed for this study has 6 input neurons, 13 neurons in the hidden layer, and 1 neuron in the output layer. We also present some experimental results obtained through simulations.
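A sketch of the 6-13-1 network in Python using scikit-learn; the training data are random placeholders for the measured workstation parameters, and the solver settings are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(100, 6)   # temperature, humidity, noise, luminosity, load, frequency
y = np.random.rand(100)      # workplace rank (normalized), placeholder targets

net = MLPRegressor(hidden_layer_sizes=(13,),    # 6 inputs -> 13 hidden -> 1 output
                   activation='logistic', solver='lbfgs',
                   max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X[:3]))    # ranking scores for the first three workplaces
```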
19

Shibata, Atsushi, Fangyan Dong, and Kaoru Hirota. "Neural Network Structure Analysis Based on Hierarchical Force-Directed Graph Drawing for Multi-Task Learning." Journal of Advanced Computational Intelligence and Intelligent Informatics 19, no. 2 (March 20, 2015): 225–31. http://dx.doi.org/10.20965/jaciii.2015.p0225.

Abstract:
A hierarchical force-directed graph drawing is proposed for the analysis of a neural network structure; it expresses the relationship between multiple tasks and the processes in neural networks represented as neuron clusters. The analysis reveals which neurons are related to each task and how many neurons or learning epochs are sufficient. Our proposal is evaluated by visualizing neural networks trained on the Mixed National Institute of Standards and Technology (MNIST) database of handwritten digits. The results show that inactive neurons, namely those that do not have a close relationship with any task, are located on the periphery of the visualized network, and that cutting half of the training data for one specific task (out of ten) causes a 15% increase in the variance of neurons in clusters that react to that specific task compared to the reaction to all tasks. The proposal is intended to be developed further to support the design of neural networks that handle multitasking across different categories, for example, one neural network for both the vision and motion systems of a robot.
20

Park, Jihoon, Koki Ichinose, Yuji Kawai, Junichi Suzuki, Minoru Asada, and Hiroki Mori. "Macroscopic Cluster Organizations Change the Complexity of Neural Activity." Entropy 21, no. 2 (February 23, 2019): 214. http://dx.doi.org/10.3390/e21020214.

Abstract:
In this study, simulations are conducted using a network model to examine how the macroscopic network in the brain is related to the complexity of activity for each region. The network model is composed of multiple neuron groups, each of which consists of spiking neurons with different topological properties of a macroscopic network based on the Watts and Strogatz model. The complexity of spontaneous activity is analyzed using multiscale entropy, and the structural properties of the network are analyzed using complex network theory. Experimental results show that a macroscopic structure with high clustering and high degree centrality increases the firing rates of neurons in a neuron group and enhances intraconnections from the excitatory neurons to inhibitory neurons in a neuron group. As a result, the intensity of the specific frequency components of neural activity increases. This decreases the complexity of neural activity. Finally, we discuss the research relevance of the complexity of the brain activity.
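The macroscopic topology used in such studies can be generated directly with networkx; the parameter values below are illustrative, not those of the paper:

```python
import networkx as nx

# Neuron groups as nodes of a Watts-Strogatz small-world network.
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1)   # 100 groups, 6 neighbors, 10% rewiring

print("average clustering:", round(nx.average_clustering(G), 3))
print("max degree centrality:", round(max(nx.degree_centrality(G).values()), 3))
# Sweeping p from 0 (lattice: high clustering) toward 1 (random: low clustering)
# varies the macroscopic properties whose effect on activity complexity is studied.
```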
21

Belyakin, Sergey, and Sergey Shuteev. "Classical Soliton Theory for Studying the Dynamics and Evolution of in Network." Journal of Clinical Research and Reports 08, no. 04 (July 31, 2021): 01–06. http://dx.doi.org/10.31579/2690-1919/186.

Abstract:
This paper presents a dynamic model of the soliton. Based on this model, it is proposed to study the state of the network. The term neural network refers to the networks of neurons in the mammalian brain. Neurons are its main units of computation. In the brain, they are connected together in a network to process data. This can be a very complex task, and so the dynamics of neural networks in the mammalian brain in response to external stimuli can be quite complex. The inputs and outputs of each neuron change as a function of time, in the form of so-called spike trains, but the network itself also changes. We learn and improve our data-processing capabilities by rewiring the connections between neurons.
22

YANG, XIAO-SONG, QUAN YUAN, and LIN WANG. "WHAT CONNECTION TOPOLOGY PROHIBIT CHAOS IN CONTINUOUS TIME NETWORKS?" Advances in Complex Systems 10, no. 04 (December 2007): 449–61. http://dx.doi.org/10.1142/s021952590700129x.

Abstract:
In this paper, we are concerned with two interesting problems in the dynamics of neural networks. What connection topology will prohibit chaotic behavior in a continuous-time neural network (NN)? To what extent can a continuous-time neural network described by continuous ordinary differential equations be simple and still exhibit chaos? We study these problems in the context of classical neural networks with three neurons, which can be described by three-dimensional autonomous ordinary differential equations. We first consider the case where there is no direct interconnection between the first neuron and the third neuron. We then discuss the case where each pair of neurons has a direct connection. We show that the existence of a directed loop in the connection topology is necessary for chaos to occur.
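A generic form of such three-neuron continuous-time networks is the Hopfield-type system below (a standard formulation assumed here for illustration; the paper's precise equations may differ):

```latex
\dot{x}_i = -x_i + \sum_{j=1}^{3} w_{ij} \, f(x_j), \qquad i = 1, 2, 3
```

with f a sigmoid such as tanh. "No direct interconnection between the first and third neuron" then corresponds to w13 = w31 = 0, and the directed loop whose existence is shown to be necessary for chaos is a cycle in the directed graph defined by the nonzero w_ij.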
23

Kim, Eunhee, Sungwoong Jeon, Hyun-Kyu An, Mehrnoosh Kianpour, Seong-Woon Yu, Jin-young Kim, Jong-Cheol Rah, and Hongsoo Choi. "A magnetically actuated microrobot for targeted neural cell delivery and selective connection of neural networks." Science Advances 6, no. 39 (September 2020): eabb5696. http://dx.doi.org/10.1126/sciadv.abb5696.

Abstract:
There has been a great deal of interest in the development of technologies for actively manipulating neural networks in vitro, providing natural but simplified environments in a highly reproducible manner in which to study brain function and related diseases. Platforms for these in vitro neural networks require precise and selective neural connections at the target location, with minimal external influences, and measurement of neural activity to determine how neurons communicate. Here, we report a neuron-loaded microrobot for selective connection of neural networks via precise delivery to a gap between two neural clusters by an external magnetic field. In addition, the extracellular action potential was propagated from one cluster to the other through the neurons on the microrobot. The proposed technique shows the potential for use in experiments to understand how neurons communicate in the neural network by actively connecting neural clusters.
24

Lavrenkov, Yury N. "Synthesis of an optical neuromorphic structure with differentiated artificial neurons for information flow distribution." Journal Of Applied Informatics 17, no. 3 (May 31, 2022): 55–72. http://dx.doi.org/10.37791/2687-0649-2022-17-3-55-72.

Abstract:
This work analyzes the possible application, in multi-threaded information communication systems, of self-generating neural networks that can independently generate a topological map of neuron connections while modelling biological neurogenesis. A basic optical neural network cell is designed on the basis of an applied layered composition performing data processing. The map of neuron connections is not an ordered structure providing a regular graph for the exchange of information between neurons, but a cognitive reserve represented as an unconnected set of neuromorphic cells. Modelling neuron death (apoptosis) and the creation of dendrite-axon connections makes it possible to implement a stepwise neural network growth algorithm. Despite the challenges of implementing this process, creating a growing network in an optical neural network framework solves the problem of initially forming the neural network architecture, which greatly simplifies the learning process. Using these cells with the network growth algorithm resulted in neural network structures that use internal self-sustaining rhythmic activity to process information. This activity is a result of spontaneously formed closed neural circuits with common neurons among the neuronal cells. Such an organization of recirculation memory leads to solutions that depend on this intra-network activity. As a result, the response of the network is determined not only by stimuli but also by the internal state of the network and its rhythmic activity. Network functioning is affected by internal rhythms, which depend on the information passing through the neuron clusters, resulting in the formation of a specific rhythmic memory. This can be used for tasks that require solutions to be worked out based on certain parameters but to be unreproducible when the network is repeatedly stimulated by the same influences. Such tasks include ensuring the security of information transmission over a given set of carriers. The task of determining the number of frequencies and their frequency plan depends on external factors. To prevent the repeated generation of the same carrier allocation, it is necessary to use networks of the considered configuration, which can influence the generation of solutions through the gathered experience.
25

Shang, Xiao Jing. "The Identification of Neurons Research." Advanced Materials Research 756-759 (September 2013): 2813–18. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2813.

Abstract:
In view of the limitations of current medical approaches to recognizing the characteristics of neurons and mapping them in the human brain, this paper puts forward a neuron identification method. First, the L-Measure software is used to extract geometric features of the neurons, and the extracted high-dimensional features are then reduced by principal component analysis. A combined classifier is used to classify seven kinds of neurons: pyramidal neurons, general Ken wild neurons, motor neurons, sensory neurons, double neurons, level-3 neurons, and multistage neurons. Experimental results prove that the recognition performance of the combined classifier, composed of a probabilistic neural network, a BP neural network, and a fuzzy classifier, is superior to that of any single classifier.
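A sketch of the described pipeline (geometric features, PCA dimension reduction, combined classifier) in Python; since scikit-learn has no probabilistic neural network or fuzzy classifier, two MLPs stand in to show the voting structure, and the data are random placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X = np.random.rand(70, 40)        # placeholder: 40 geometric features per neuron
y = np.random.randint(0, 7, 70)   # placeholder: 7 neuron classes

combined = VotingClassifier(
    estimators=[('mlp1', MLPClassifier(max_iter=1000, random_state=0)),
                ('mlp2', MLPClassifier(hidden_layer_sizes=(30,),
                                       max_iter=1000, random_state=1))],
    voting='soft')                # combined classifier: soft-voting ensemble
model = make_pipeline(PCA(n_components=10), combined)   # dimension reduction first
model.fit(X, y)
print(model.predict(X[:5]))
```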
26

Iyoda, Eduardo Masato, Kaoru Hirota, and Fernando J. Von Zuben. "Sigma-Pi Cascade Extended Hybrid Neural Network." Journal of Advanced Computational Intelligence and Intelligent Informatics 6, no. 3 (October 20, 2002): 126–34. http://dx.doi.org/10.20965/jaciii.2002.p0126.

Abstract:
A nonparametric neural architecture called the Sigma-Pi Cascade extended Hybrid Neural Network (σπ-CHNN) is proposed to extend the approximation capabilities of neural architectures such as Projection Pursuit Learning (PPL) and Hybrid Neural Networks (HNN). Like PPL and HNN, σπ-CHNN uses distinct activation functions in its neurons but, unlike these previous architectures, it may employ multiplicative operators in its hidden neurons, enabling it to extract higher-order information from the given data. σπ-CHNN uses arbitrary connectivity patterns among neurons. An evolutionary learning algorithm combined with a conjugate gradient algorithm is proposed to automatically design the topology and weights of σπ-CHNN. σπ-CHNN performance is evaluated on five benchmark regression problems. Results show that σπ-CHNN provides competitive performance compared to PPL and HNN in most problems, both in the computational requirements of the architecture and in approximation accuracy. In some problems, σπ-CHNN reduces the approximation error by one order of magnitude (10^-1) relative to PPL and HNN, whereas in other cases it achieves the same approximation error with a smaller number of hidden neurons (usually one hidden neuron fewer than PPL and HNN).
27

Naudin, Loïs. "Biological emergent properties in non-spiking neural networks." AIMS Mathematics 7, no. 10 (2022): 19415–39. http://dx.doi.org/10.3934/math.20221066.

Abstract:
A central goal of neuroscience is to understand the way nervous systems work to produce behavior. Experimental measurements in freely moving animals (e.g. in the C. elegans worm) suggest that ON- and OFF-states in non-spiking nervous tissues underlie many physiological behaviors. Such states are defined by the collective activity of non-spiking neurons with correlated up- and down-states of their membrane potentials. How these network states emerge from the intrinsic neuron dynamics and their couplings remains unclear. In this paper, we develop a rigorous mathematical framework for better understanding their emergence. To that end, we use a recent simple phenomenological model capable of reproducing the experimental behavior of non-spiking neurons. The analysis of the stationary points and the bifurcation dynamics of this model are performed. Then, we give mathematical conditions to monitor the impact of network activity on intrinsic neuron properties. From then on, we highlight that ON- and OFF-states in non-spiking coupled neurons could be a consequence of bistable synaptic inputs, and not of intrinsic neuron dynamics. In other words, the apparent up- and down-states in the neuron's bimodal voltage distribution do not necessarily result from an intrinsic bistability of the cell. Rather, these states could be driven by bistable presynaptic neurons, ubiquitous in non-spiking nervous tissues, which dictate their behaviors to their postsynaptic ones.
28

Men, Hong, Hai Yan Liu, Lei Wang, and Yun Peng Pan. "An Optimizing Method of Competitive Neural Network." Key Engineering Materials 467-469 (February 2011): 894–99. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.894.

Abstract:
This paper presents an optimizing method for a competitive neural network (CNN). During clustering analysis, the optimal number of output neurons is fixed according to the change of the Davies-Bouldin (DB) value, and the connection weights are then adjusted by increasing, dividing, and deleting neurons. Each neuron has a different trend of learning rate, varying according to the change in the probability of the neurons. The optimizing method makes classification more accurate. Simulation results showed that the optimized network structure has a strong ability to adjust the number of clusters dynamically and gives good classification results.
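The DB criterion for fixing the number of output neurons can be sketched as follows; k-means stands in here for the competitive layer, and the data are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

X = np.random.rand(200, 4)        # placeholder data
scores = {}
for k in range(2, 10):            # candidate numbers of output neurons
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)   # lower DB = better separation
best_k = min(scores, key=scores.get)
print("optimal number of output neurons:", best_k)
```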
29

Qiu, Rong, Yujiao Dong, Xin Jiang, and Guangyi Wang. "Two-Neuron Based Memristive Hopfield Neural Network with Synaptic Crosstalk." Electronics 11, no. 19 (September 23, 2022): 3034. http://dx.doi.org/10.3390/electronics11193034.

Abstract:
Synaptic crosstalk is an important biological phenomenon that widely exists in neural networks. The crosstalk can influence the ability of neurons to control the synaptic weights, thereby causing rich dynamics of neural networks. Based on the crosstalk between synapses, this paper presents a novel two-neuron based memristive Hopfield neural network with a hyperbolic memristor emulating synaptic crosstalk. The dynamics of the neural networks with varying memristive parameters and crosstalk weights are analyzed via the phase portraits, time-domain waveforms, bifurcation diagrams, and basin of attraction. Complex phenomena, especially coexisting dynamics, chaos and transient chaos emerge in the neural network. Finally, the circuit simulation results verify the effectiveness of theoretical analyses and mathematical simulation and further illustrate the feasibility of the two-neuron based memristive Hopfield neural network hardware.
30

Shen, Xuefei, Yi Yang, Shanshan Tian, Yu Zhao, and Tao Chen. "Microfluidic array chip based on excimer laser processing technology for the construction of in vitro graphical neuronal network." Journal of Bioactive and Compatible Polymers 35, no. 3 (May 2020): 228–39. http://dx.doi.org/10.1177/0883911520918395.

Abstract:
To construct a graphical neural network in vitro and explore the morphological effects of neural network structural changes on neurons, this study aimed to introduce a method for fabricating microfluidic array chips with different graphical structures based on 248-nm excimer laser one-step etching. Through the comparative analysis of the graphical neural network cultured on our microfluidic array chip with the one on the glass slide, the morphological effects of the neural network on the morphology of the neurons were studied. First, the design of the chip was completed according to the specific structure of the neurons and the simulation of the flow field. The chips were fabricated by excimer laser processing combined with the casting technology. Neurons were cultured on the chip, and a graphical neural network was formed. The growth status of the neural network was analyzed by microscopy and immunofluorescence technology, and compared with the random neural network cultured on glass slides. The results showed that the neurons on the array chips grew in microchannels, and neurites grew along the direction of the channel, interlacing to form a neural network. Furthermore, when the structure of the neural network was graphically changed, the internal neuron morphology changed: on the same culture days, the maximum length of the neurites of the graphical neural network was higher than the average length of the neurites of the random neural network. This research can provide the foundation for the exploration of the neural network mechanism of neurological diseases.
31

OBANA, ICHIRO, and YASUHIRO FUKUI. "ROLE OF CHAOS IN TRIAL-AND-ERROR PROBLEM SOLVING BY AN ARTIFICIAL NEURAL NETWORK." International Journal of Neural Systems 07, no. 01 (March 1996): 101–8. http://dx.doi.org/10.1142/s0129065796000099.

Abstract:
One role of chaotic neural activity is illustrated by means of computer simulations of an imaginary agent’s goal-oriented behavior. The agent has a simplified neural network with seven neurons and three legs. The neural network consists of one photosensory neuron and three pairs of inter- and motor neurons. The three legs whose movements are governed by the three motor neurons allow the agent to walk in six concentric radial directions on a plane. It is intended that the neural network causes the agent to walk in a direction of greater brightness, to reach finally the most brightly lit place on the plane. The presence of only one sensory neuron has an important meaning. That is, no immediate information on directions of greater brightness is sensed by the agent. In other words, random walking in the manner of trial-and-error problem solving must be involved in the agent’s walking. Chaotic firing of the motor neurons is intended to play a crucial role in generating the random walking. Brief random walking and rapid straight walking in a direction of greater brightness were observed to occur alternately in the computer simulation. Controlled chaos in naturally occurring neural networks may play a similar role.
32

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network." Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Abstract:
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch – Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
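A functional sketch of minimum-Hamming-distance recall (not the paper's McCulloch-Pitts network scheme); with 40% distortion the correct pattern is still usually recovered, consistent with the figure quoted above:

```python
import numpy as np

def recall(probe, prototypes):
    """Return the stored pattern with the minimum Hamming distance to the
    (possibly distorted) probe; patterns are +/-1 vectors."""
    distances = [(int(np.sum(p != probe)), i) for i, p in enumerate(prototypes)]
    return prototypes[min(distances)[1]]

rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((5, 100)))      # five stored patterns
probe = patterns[2].copy()
flipped = rng.choice(100, size=40, replace=False)      # 40% distortion
probe[flipped] *= -1
print(np.array_equal(recall(probe, patterns), patterns[2]))
```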
33

ISOKAWA, TEIJIRO, HARUHIKO NISHIMURA, NAOTAKE KAMIURA, and NOBUYUKI MATSUI. "ASSOCIATIVE MEMORY IN QUATERNIONIC HOPFIELD NEURAL NETWORK." International Journal of Neural Systems 18, no. 02 (April 2008): 135–45. http://dx.doi.org/10.1142/s0129065708001440.

Abstract:
Associative memory networks based on quaternionic Hopfield neural network are investigated in this paper. These networks are composed of quaternionic neurons, and input, output, threshold, and connection weights are represented in quaternions, which is a class of hypercomplex number systems. The energy function of the network and the Hebbian rule for embedding patterns are introduced. The stable states and their basins are explored for the networks with three neurons and four neurons. It is clarified that there exist at most 16 stable states, called multiplet components, as the degenerated stored patterns, and each of these states has its basin in the quaternionic networks.
34

Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction." International Journal of Robotics and Control Systems 1, no. 4 (November 25, 2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.

Abstract:
Artificial neural networks (ANN) can approximate signals and give interesting results in pattern recognition, and some works use neural networks for control applications. However, biological neurons do not generate signals similar to those produced by ANNs. Spiking neurons are an interesting topic since they simulate the real behavior depicted by biological neurons. This paper employs a spiking neuron to compute a PID control, which is then applied to the Van de Vusse reaction. This reaction, like the inverted pendulum, is a benchmark for systems with inverse response, which causes the output to undershoot. One problem is how to encode information that the neuron can interpret, and how to decode the spike generated by the neuron in order to interpret its behavior. In this work, a spiking neuron is used to compute a PID control by time-coding the spikes generated by the neuron. The neuron has the PID gains as its synaptic weights, and the spike observed in the axon is the coded control signal. The neuron adaptation tries to obtain the weights needed to generate the spike at the instant necessary to control the chemical reaction. The simulation results show the possibility of using this kind of neuron for control issues, and of using a spiking neural network to overcome the undershoot caused by the inverse response of the chemical reaction.
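For reference, a conventional discrete PID controller looks as follows; in the paper the same three terms are carried by the spiking neuron's synaptic weights and spike timing, which this sketch does not model, and the gains below are illustrative:

```python
class PID:
    """Conventional discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # I term accumulates
        derivative = (error - self.prev_error) / self.dt   # D term anticipates
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
print(pid.update(setpoint=1.0, measurement=0.2))
```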
35

Aziz, Mustafa Nizamul. "A Review on Artificial Neural Networks and its’ Applicability." Bangladesh Journal of Multidisciplinary Scientific Research 2, no. 1 (June 11, 2020): 48–51. http://dx.doi.org/10.46281/bjmsr.v2i1.609.

Abstract:
The field of artificial neural networks (ANN) started from humble beginnings in the 1950s but gained attention in the 1980s. An ANN tries to emulate the neural structure of the brain, which consists of many thousands of cells, neurons, interconnected in a large network. This is done through artificial neurons that handle the input and output and connect to other neurons, creating a large network. The potential of artificial neural networks is considered huge; today there are several different uses for ANN, ranging from academic research in fields such as mathematics and medicine to business purposes and sports prediction. The purpose of this paper is to describe artificial neural networks and to show their applicability. Document analysis was used as the data collection method. The paper sets out network structures, steps for constructing an ANN, architectures, and learning algorithms.
36

Barbu, Adrian, and Hongyu Mou. "The Compact Support Neural Network." Sensors 21, no. 24 (December 20, 2021): 8494. http://dx.doi.org/10.3390/s21248494.

Abstract:
Neural networks are popular and useful in many fields, but they have the problem of giving high-confidence responses for examples that are far from the training data. This makes neural networks very confident in their predictions even when making gross mistakes, thus limiting their reliability for safety-critical applications such as autonomous driving and space exploration. This paper introduces a novel neuron generalization that has the standard dot-product-based neuron and the radial basis function (RBF) neuron as two extreme cases of a shape parameter. Using a rectified linear unit (ReLU) as the activation function results in a novel neuron that has compact support, which means its output is zero outside a bounded domain. To address the difficulties in training the proposed neural network, the paper introduces a novel training method that takes a pretrained standard neural network and fine-tunes it while gradually increasing the shape parameter to the desired value. The theoretical findings of the paper are a bound on the gradient of the proposed neuron and a proof that a neural network with such neurons has the universal approximation property. This means that the network can approximate any continuous and integrable function with an arbitrary degree of accuracy. The experimental findings on standard benchmark datasets show that the proposed approach has smaller test errors than the state-of-the-art competing methods and outperforms them in detecting out-of-distribution samples on two out of three datasets.
37

Alvarellos-González, Alberto, Alejandro Pazos, and Ana B. Porto-Pazos. "Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/476324.

Abstract:
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.
38

Grabusts, Peter, and Aleksejs Zorins. "The Influence of Hidden Neurons Factor on Neural Nework Training Quality Assurance." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 3 (June 16, 2015): 76. http://dx.doi.org/10.17770/etr2015vol3.213.

Abstract:
The work shows the role of hidden neurons in multilayer feed-forward neural networks. The number of hidden neurons is usually determined in each case empirically. Methodologies for determining the number of hidden neurons are described. The neural-network-based approach is analyzed using a multilayer feed-forward network with the backpropagation learning algorithm. We present a possible neural network implementation for bankruptcy prediction (the experiments have been performed in the Matlab environment). On the basis of bankruptcy data analysis, the effect of the number of hidden neurons on the training quality of a specific neural network is shown. The conformity of theoretically derived hidden neuron counts with practical solutions is examined.
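For context, a few widely quoted rules of thumb for sizing the hidden layer, sketched below; these are standard heuristics from the literature and not necessarily the formulas this particular paper evaluates:

```python
import math

def hidden_neuron_heuristics(n_in, n_out, n_samples):
    # A few widely quoted rules of thumb; the right count is still
    # usually settled empirically, as the paper notes.
    return {
        "geometric_mean": math.sqrt(n_in * n_out),
        "two_thirds_rule": (2 * n_in) / 3 + n_out,
        "upper_bound_2x": 2 * n_in,
        "sample_based": n_samples / (5 * (n_in + n_out)),
    }

print(hidden_neuron_heuristics(n_in=10, n_out=1, n_samples=500))
```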
39

Zakaria, Mamang, Luther Pagiling, and Wa Ode Siti Nur Alam. "Sistem Penyiraman Otomatis Tanaman Semusim Berbasis Jaringan Saraf Tiruan Multilayer Perceptron." Jurnal Fokus Elektroda : Energi Listrik, Telekomunikasi, Komputer, Elektronika dan Kendali) 7, no. 1 (February 27, 2022): 35. http://dx.doi.org/10.33772/jfe.v7i1.24050.

Abstract:
In general, farmers water plants when certain conditions are met, such as dry soil, no rain, and cold temperatures. One efficient way to control this is an automatic plant watering system based on an artificial neural network. The purpose of this study was to determine the success of artificial neural networks as decision-makers for watering plants automatically. The stages of designing the system were: building the software, including artificial neural network modeling and Arduino microcontroller programming; building the automatic watering hardware; evaluating the tool's performance; and testing the tool in real time. The test results show that the artificial neural network-based automatic plant watering system can water plants according to the given input pattern. The artificial neural network structure obtained is three neurons in the input layer, eight neurons in the hidden layer, and one neuron in the output layer. The system succeeded in automatically watering two areas of land with a success rate of 100%. Keywords: Automatic Watering, Microcontroller, ANN, Annual Crops.
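The reported 3–8–1 topology could be sketched as follows; the input semantics (soil moisture, rain, temperature) and the random weights are assumptions for illustration, not the trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-8-1 multilayer perceptron, as reported in the abstract.
# Random weights stand in for the trained ones.
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def water_decision(inputs):
    h = sigmoid(W1 @ inputs + b1)   # 8 hidden neurons
    y = sigmoid(W2 @ h + b2)        # 1 output neuron
    return y[0] > 0.5               # True -> switch the pump on

print(water_decision(np.array([0.1, 0.0, 0.4])))  # e.g., dry soil, no rain
```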
40

Martínez-Cañada, Pablo, Torbjørn V. Ness, Gaute T. Einevoll, Tommaso Fellin, and Stefano Panzeri. "Computation of the electroencephalogram (EEG) from network models of point neurons." PLOS Computational Biology 17, no. 4 (April 2, 2021): e1008893. http://dx.doi.org/10.1371/journal.pcbi.1008893.

Abstract:
The electroencephalogram (EEG) is a major tool for non-invasively studying brain function and dysfunction. Comparing experimentally recorded EEGs with neural network models is important to better interpret EEGs in terms of neural mechanisms. Most current neural network models use networks of simple point neurons. They capture important properties of cortical dynamics, and are numerically or analytically tractable. However, point neurons cannot generate an EEG, as EEG generation requires spatially separated transmembrane currents. Here, we explored how to compute an accurate approximation of a rodent’s EEG with quantities defined in point-neuron network models. We constructed different approximations (or proxies) of the EEG signal that can be computed from networks of leaky integrate-and-fire (LIF) point neurons, such as firing rates, membrane potentials, and combinations of synaptic currents. We then evaluated how well each proxy reconstructed a ground-truth EEG obtained when the synaptic currents of the LIF model network were fed into a three-dimensional network model of multicompartmental neurons with realistic morphologies. Proxies based on linear combinations of AMPA and GABA currents performed better than proxies based on firing rates or membrane potentials. A new class of proxies, based on an optimized linear combination of time-shifted AMPA and GABA currents, provided the most accurate estimate of the EEG over a wide range of network states. The new linear proxies explained 85–95% of the variance of the ground-truth EEG for a wide range of network configurations including different cell morphologies, distributions of presynaptic inputs, positions of the recording electrode, and spatial extensions of the network. Non-linear EEG proxies using a convolutional neural network (CNN) on synaptic currents increased proxy performance by a further 2–8%. Our proxies can be used to easily calculate a biologically realistic EEG signal directly from point-neuron simulations thus facilitating a quantitative comparison between computational models and experimental EEG recordings.
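A hedged sketch of the time-shifted linear proxy described above; the coefficients and delays below are placeholders, whereas the paper obtains them by optimization against the ground-truth EEG:

```python
import numpy as np

def eeg_proxy(ampa, gaba, alpha, beta, tau_a, tau_g):
    """Linear EEG proxy from the population synaptic currents of a
    point-neuron (LIF) network: a weighted combination of the AMPA
    and GABA currents, each shifted in time.  alpha, beta, tau_a and
    tau_g are fitted to a ground-truth EEG in the paper; the values
    below are illustrative only.  np.roll is a crude periodic
    stand-in for a time shift."""
    return alpha * np.roll(ampa, tau_a) + beta * np.roll(gaba, tau_g)

t = np.arange(1000)                  # time steps (e.g., ms)
ampa = np.abs(np.sin(0.02 * t))      # stand-in AMPA current trace
gaba = np.abs(np.cos(0.02 * t))      # stand-in GABA current trace
proxy = eeg_proxy(ampa, gaba, alpha=1.0, beta=-1.6, tau_a=6, tau_g=0)
print(proxy[:5])
```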
41

D’Alché-Buc, Florence, Vincent Andrès, and Jean-Pierre Nadal. "Rule Extraction with Fuzzy Neural Network." International Journal of Neural Systems 05, no. 01 (March 1994): 1–11. http://dx.doi.org/10.1142/s0129065794000025.

Abstract:
This paper deals with the learning of understandable decision rules with connectionist systems. Our approach consists of extracting fuzzy control rules with a new fuzzy neural network. Whereas many other works in this area propose to use combinations of nonlinear neurons to approximate fuzzy operations, we use a fuzzy neuron that computes max-min operations. Thus, this neuron can be interpreted as a possibility estimator, just as sigma-pi neurons can support a probabilistic interpretation. Within this context, possibilistic inferences can be drawn through the multi-layered network, using a distributed representation of the information. A new learning procedure has been developed so that each part of the network can be learned sequentially while the other parts are frozen. Each step of the procedure is based on the same kind of learning scheme: the backpropagation of a well-chosen cost function with appropriate derivatives of the max-min function. An appealing result of the learning phase is the ability of the network to automatically reduce the number of condition parts of the rules, if needed. The network has been successfully tested on the learning of a control rule base for an inverted pendulum.
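The max-min fuzzy neuron is simple to state; a minimal sketch:

```python
import numpy as np

def max_min_neuron(x, w):
    """Fuzzy neuron: output = max_j min(w_j, x_j).
    With x and w read as membership degrees in [0, 1], this is a
    possibility estimate, as opposed to the probabilistic reading
    of sigma-pi (sum-product) neurons."""
    return np.max(np.minimum(w, x))

x = np.array([0.2, 0.9, 0.4])   # fuzzified inputs (membership degrees)
w = np.array([0.7, 0.5, 1.0])   # rule weights
print(max_min_neuron(x, w))     # -> 0.5
```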
42

Penn, Yaron, Menahem Segal, and Elisha Moses. "Network synchronization in hippocampal neurons." Proceedings of the National Academy of Sciences 113, no. 12 (March 9, 2016): 3341–46. http://dx.doi.org/10.1073/pnas.1515105113.

Abstract:
Oscillatory activity is widespread in dynamic neuronal networks. The main paradigm for the origin of periodicity consists of specialized pacemaking elements that synchronize and drive the rest of the network; however, other models exist. Here, we studied the spontaneous emergence of synchronized periodic bursting in a network of cultured dissociated neurons from rat hippocampus and cortex. Surprisingly, about 60% of all active neurons were self-sustained oscillators when disconnected, each with its own natural frequency. The individual neuron’s tendency to oscillate and the corresponding oscillation frequency are controlled by its excitability. The single neuron intrinsic oscillations were blocked by riluzole, and are thus dependent on persistent sodium leak currents. Upon a gradual retrieval of connectivity, the synchrony evolves: Loose synchrony appears already at weak connectivity, with the oscillators converging to one common oscillation frequency, yet shifted in phase across the population. Further strengthening of the connectivity causes a reduction in the mean phase shifts until zero-lag is achieved, manifested by synchronous periodic network bursts. Interestingly, the frequency of network bursting matches the average of the intrinsic frequencies. Overall, the network behaves like other universal systems, where order emerges spontaneously by entrainment of independent rhythmic units. Although simplified with respect to circuitry in the brain, our results attribute a basic functional role for intrinsic single neuron excitability mechanisms in driving the network’s activity and dynamics, contributing to our understanding of developing neural circuits.
43

Semenova, N., and D. Brunner. "Noise-mitigation strategies in physical feedforward neural networks." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 6 (June 2022): 061106. http://dx.doi.org/10.1063/5.0096637.

Abstract:
Physical neural networks are promising candidates for next generation artificial intelligence hardware. In such architectures, neurons and connections are physically realized and do not leverage digital concepts with their practically infinite signal-to-noise ratio to encode, transduce, and transform information. They, therefore, are prone to noise with a variety of statistical and architectural properties, and effective strategies leveraging network-inherent assets to mitigate noise in a hardware-efficient manner are important in the pursuit of next generation neural network hardware. Based on analytical derivations, we here introduce and analyze a variety of different noise-mitigation approaches. We analytically show that intra-layer connections in which the connection matrix’s squared mean exceeds the mean of its square fully suppress uncorrelated noise. We go beyond and develop two synergistic strategies for noise that is uncorrelated and correlated across populations of neurons. First, we introduce the concept of ghost neurons, where each group of neurons perturbed by correlated noise has a negative connection to a single neuron, yet without receiving any input information. Second, we show that pooling of neuron populations is an efficient approach to suppress uncorrelated noise. As such, we developed a general noise-mitigation strategy leveraging the statistical properties of the different noise terms most relevant in analog hardware. Finally, we demonstrate the effectiveness of this combined approach for a trained neural network classifying the modified National Institute of Standards and Technology handwritten digits, for which we achieve a fourfold improvement of the output signal-to-noise ratio. Our noise mitigation lifts the 92.07% classification accuracy of the noisy neural network to 97.49%, which is essentially identical to the 97.54% of the noise-free network.
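The pooling strategy can be illustrated directly: averaging N copies of a signal carrying independent noise improves the signal-to-noise ratio by roughly a factor of sqrt(N). A minimal sketch (a generic demonstration, not the paper's hardware setup):

```python
import numpy as np

rng = np.random.default_rng(1)

signal = np.sin(np.linspace(0, 2 * np.pi, 200))

def snr_db(clean, noisy):
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

# N neurons carry the same signal with independent (uncorrelated) noise;
# pooling (averaging) suppresses the noise variance by a factor of N.
N = 16
noisy_copies = signal + 0.3 * rng.normal(size=(N, signal.size))
pooled = noisy_copies.mean(axis=0)

print(f"single neuron SNR: {snr_db(signal, noisy_copies[0]):.1f} dB")
print(f"pooled over {N}:   {snr_db(signal, pooled):.1f} dB")  # ~ +12 dB = 10*log10(16)
```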
44

Nagai, T., H. Katayama, K. Aihara, and T. Yamamoto. "Pruning of rat cortical taste neurons by an artificial neural network model." Journal of Neurophysiology 74, no. 3 (September 1, 1995): 1010–19. http://dx.doi.org/10.1152/jn.1995.74.3.1010.

Abstract:
1. Taste qualities are believed to be coded in the activity of ensembles of taste neurons. However, it is not clear whether all neurons are equally responsible for coding. To clarify the point, the relative contribution of each taste neuron to coding needs to be assessed. 2. We constructed simple three-layer neural networks with input units representing cortical taste neurons of the rat. The networks were trained by the back-propagation learning algorithm to classify the neural response patterns to the basic taste stimuli (sucrose, HCl, quinine hydrochloride, and NaCl). The networks had four output units representing the basic taste qualities, the values of which provide a measure for similarity of test stimuli (salts, tartaric acid, and umami substances) to the basic taste stimuli. 3. Trained networks discriminated the response patterns to the test stimuli in a plausible manner in light of previous physiological and psychological experiments. Profiles of output values of the networks paralleled those of across-neuron correlations with respect to the highest or second-highest values in the profiles. 4. We evaluated relative contributions of input units to the taste discrimination of the network by examining their significance Sj, which is defined as the sum of the absolute values of the connection weights from the jth input unit to the hidden layer. When the input units with weaker connection weights (e.g., 15 of 39 input units) were "pruned" from the trained network, the ability of the network to discriminate the basic taste qualities as well as other test stimuli was not greatly affected. On the other hand, the taste discrimination of the network progressively deteriorated much more rapidly with pruning of input units with stronger connection weights. 5. These results suggest that cortical taste neurons differentially contribute to the coding of taste qualities. The pruning technique may enable the evaluation of a given taste neuron in terms of its relative contribution to the coding, with Sj providing a quantitative measure for such evaluation.
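The significance measure S_j — the sum of the absolute connection weights from input unit j to the hidden layer — is straightforward to compute; a sketch with random weights standing in for a trained network:

```python
import numpy as np

def input_significance(W1):
    """S_j = sum over hidden units of |w_ij|: the total absolute
    connection weight leaving input unit j.  Inputs with small S_j
    are pruned first, as in the paper's analysis."""
    return np.sum(np.abs(W1), axis=0)   # W1 has shape (hidden, inputs)

rng = np.random.default_rng(2)
W1 = rng.normal(size=(10, 39))          # 39 input units (taste neurons)
S = input_significance(W1)
prune_order = np.argsort(S)             # weakest inputs first
print("least significant input units:", prune_order[:5])
```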
45

Muresan, Raul C., and Cristina Savin. "Resonance or Integration? Self-Sustained Dynamics and Excitability of Neural Microcircuits." Journal of Neurophysiology 97, no. 3 (March 2007): 1911–30. http://dx.doi.org/10.1152/jn.01043.2006.

Abstract:
We investigated spontaneous activity and excitability in large networks of artificial spiking neurons. We compared three different spiking neuron models: integrate-and-fire (IF), regular-spiking (RS), and resonator (RES). First, we show that different models have different frequency-dependent response properties, yielding large differences in excitability. Then, we investigate the responsiveness of these models to a single afferent inhibitory/excitatory spike and calibrate the total synaptic drive such that they would exhibit similar peaks of the postsynaptic potentials (PSP). Based on the synaptic calibration, we build large microcircuits of IF, RS, and RES neurons and show that the resonance property favors homeostasis and self-sustainability of the network activity. On the other hand, integration produces instability while it endows the network with other useful properties, such as responsiveness to external inputs. We also investigate other potential sources of stable self-sustained activity and their relation to the membrane properties of neurons. We conclude that resonance and integration at the neuron level might interact in the brain to promote stability as well as flexibility and responsiveness to external input and that membrane properties, in general, are essential for determining the behavior of large networks of neurons.
46

Stiefel, Klaus M., and G. Bard Ermentrout. "Neurons as oscillators." Journal of Neurophysiology 116, no. 6 (December 1, 2016): 2950–60. http://dx.doi.org/10.1152/jn.00525.2015.

Abstract:
Regularly spiking neurons can be described as oscillators. In this article we review some of the insights gained from this conceptualization and their relevance for systems neuroscience. First, we explain how a regularly spiking neuron can be viewed as an oscillator and how the phase-response curve (PRC) describes the response of the neuron's spike times to small perturbations. We then discuss the meaning of the PRC for a single neuron's spiking behavior and review the PRCs measured from a variety of neurons in a range of spiking regimes. Next, we show how the PRC can be related to a number of common measures used to quantify neuronal firing, such as the spike-triggered average and the peristimulus histogram. We further show that the response of a neuron to correlated inputs depends on the shape of the PRC. We then explain how the PRC of single neurons can be used to predict neural network behavior. Given the PRC, conduction delays, and the waveform and time course of the synaptic potentials, it is possible to predict neural population behavior such as synchronization. The PRC also allows us to quantify the robustness of the synchronization to heterogeneity and noise. We finally ask how to combine the measured PRCs and the predictions based on PRC to further the understanding of systems neuroscience. As an example, we discuss how the change of the PRC by the neuromodulator acetylcholine could lead to a destabilization of cortical network dynamics. Although all of these studies are grounded in mathematical abstractions that do not strictly hold in biology, they provide good estimates for the emergence of the brain's network activity from the properties of individual neurons. The study of neurons as oscillators can provide testable hypotheses and mechanistic explanations for systems neuroscience.
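The oscillator abstraction underlying the PRC can be sketched as the standard phase model dθ/dt = ω + Z(θ)·I(t); the PRC shape and parameters below are illustrative, not taken from the review:

```python
import numpy as np

def simulate_phase_neuron(omega, prc, pulses, T=1000, dt=0.001):
    """Phase model of a regularly spiking neuron:
       dtheta/dt = omega + PRC(theta) * I(t).
    `prc` maps phase in [0, 1) to the phase shift caused by a unit
    input; `pulses` is a list of input spike times.  This is the
    standard abstraction reviewed in the paper, not a biophysical
    model."""
    theta, spikes = 0.0, []
    pulse_steps = set(np.round(np.asarray(pulses) / dt).astype(int))
    for step in range(T):
        theta += omega * dt
        if step in pulse_steps:
            theta += prc(theta % 1.0)   # input perturbs the phase
        if theta >= 1.0:                # phase crosses 1 -> spike
            spikes.append(step * dt)
            theta -= 1.0
    return spikes

# Type-I-like PRC: inputs can only advance the phase.
prc = lambda th: 0.05 * (1 - np.cos(2 * np.pi * th))
print(simulate_phase_neuron(omega=5.0, prc=prc, pulses=[0.11, 0.34]))
```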
47

Jung, R., T. Kiemel, and A. H. Cohen. "Dynamic behavior of a neural network model of locomotor control in the lamprey." Journal of Neurophysiology 75, no. 3 (March 1, 1996): 1074–86. http://dx.doi.org/10.1152/jn.1996.75.3.1074.

Abstract:
1. Experimental studies have shown that a central pattern generator in the spinal cord of the lamprey can produce the basic rhythm for locomotion. This pattern generator interacts with the reticular neurons forming a spinoreticulospinal loop. To better understand and investigate the mechanisms for locomotor pattern generation in the lamprey, we examine the dynamic behavior of a simplified neural network model representing a unit spinal pattern generator (uPG) and its interaction with the reticular system. We use the techniques of bifurcation analysis and specifically examine the effects on the dynamic behavior of the system of 1) changing tonic drives to the different neurons of the uPG; 2) altering inhibitory and excitatory interconnection strengths among the uPG neurons; and 3) feedforward-feedback interactions between the uPG and the reticular neurons. 2. The model analyzed is a qualitative left-right symmetric network based on proposed functional architecture with one class of phasic reticular neurons and three classes of uPG neurons: excitatory (E), lateral (L), and crossed (C) interneurons. In the model each class is represented by one left and one right neuron. Each neuron has basic passive properties akin to biophysical neurons and receives tonic synaptic drive and weighted synaptic input from other connecting neurons. The neuron's output as a function of voltage is given by a nonlinear function with a strict threshold and saturation. 3. With an appropriate set of parameter values, the voltage of each neuron can oscillate periodically with phase relationships among the different neurons that are qualitatively similar to those observed experimentally. The uPG alone can also oscillate, as observed experimentally in isolated lamprey spinal cords. Varying the parameters can, however, profoundly change the state of the system via different kinds of bifurcations. Change in a single parameter can move the system from nonoscillatory to oscillatory states via different kinds of bifurcations. For some parameter values the system can also exhibit multistable behavior (e.g., an oscillatory state and a nonoscillatory state). The analysis also shows us how the amplitudes of the oscillations vary and the periods of limit cycles change as different bifurcation points are approached. 4. Altering tonic drive to just one class of uPG neurons (without altering the interconnections) can change the state of the system by altering the stability of fixed points, converting fixed points to oscillations, single oscillations to two stable oscillations, etc. Two-parameter bifurcation diagrams show the critical regions in which a balance between the tonic drives is necessary to maintain stable oscillations. A minimum tonic drive is necessary to obtain stable oscillatory output. With appropriate changes in the tonic drives to the L and C neurons, stable oscillatory output can be obtained even after eliminating the E neurons. Indeed, the presence of active E neurons in the biological system does not prove they play a functional role in the system, because tonic drive from other sources can substitute for them. On the other hand, very high excitation of any one class of neurons can terminate oscillations. Appropriate balance of tonic drives to different neuron classes can help sustain stable oscillations for larger tonic drives. Published experimental results concerning changes in amplitude and swimming frequency with increased tonic drives are mimicked by the model's responses to increased tonic drive. 5. Interconnectivity among the neurons plays a crucial role. The analysis indicates that the C and L classes of neurons are essential components of the model network. Sufficient inhibition from the L to C neurons as well as mutual inhibition between the left and right halves is necessary to obtain stable oscillatory output. When the E neurons are present in the model network, they must receive appropriate tonic drive and provide appropriate excitation
48

Prasetyo, Simeon Yuda. "Prediksi Gagal Jantung Menggunakan Artificial Neural Network." Jurnal SAINTEKOM 13, no. 1 (March 31, 2023): 79–88. http://dx.doi.org/10.33020/saintekom.v13i1.379.

Abstract:
Cardiovascular disease, or heart problems, is the leading cause of death worldwide: according to the WHO (World Health Organization), it causes more than 17.9 million deaths every year. Previous studies on applying machine learning to predict heart failure have obtained quite good results, ranging from 85 to 90 percent, with sophisticated models optimized using neural networks. In this research, experiments were carried out using similar architectures based on the state of the art from previous research, namely Artificial Neural Networks, testing several hyperparameters: the number of hidden layers and the number of neuron units in each hidden layer. Based on the test results, the Artificial Neural Network model achieved the best results with two hidden layers, with 15 neurons in the first hidden layer and 10 neurons in the second. This model achieved a test accuracy of 92.032% and an AUC of 93%.
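A sketch of the reported architecture — two hidden layers of 15 and 10 neurons with a sigmoid output for binary prediction; the framework (Keras), input width, activations, and optimizer are assumptions, since the abstract does not specify them:

```python
import tensorflow as tf

# Architecture reported in the abstract: two hidden layers
# (15 and 10 neurons) and a sigmoid output for binary
# heart-failure prediction.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12,)),          # e.g., 12 clinical features (assumed)
    tf.keras.layers.Dense(15, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```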
49

Kong, Jing, and Su Su Bao. "Improved CPN Network Applying in the Failure Diagnosis of Electronic Gasoline." Advanced Materials Research 179-180 (January 2011): 60–63. http://dx.doi.org/10.4028/www.scientific.net/amr.179-180.60.

Abstract:
We diagnose electronic gasoline engine failures using an improved CPN neural network. A standard CPN allows only one neuron to win in the competition layer. Building on the developed CPN, we improve it by allowing two neurons to win; the two winning neurons update the weights simultaneously. This improved method adjusts the weight values in the CPN model more accurately and gives more accurate outputs on test data.
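A hedged sketch of the two-winner competition step described above; the 0.7/0.3 split of the learning rate between the two winners is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def two_winner_update(W, x, lr=0.1):
    """Improved-CPN competition step: instead of a single winner,
    the two neurons closest to the input both move toward it."""
    d = np.linalg.norm(W - x, axis=1)   # distance of each neuron to input
    first, second = np.argsort(d)[:2]   # two winning neurons
    W[first] += lr * 0.7 * (x - W[first])
    W[second] += lr * 0.3 * (x - W[second])
    return first, second

rng = np.random.default_rng(3)
W = rng.normal(size=(6, 4))             # 6 competitive neurons, 4-dim input
print(two_winner_update(W, rng.normal(size=4)))
```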
50

Vreeswijk, C. van, and D. Hansel. "Patterns of Synchrony in Neural Networks with Spike Adaptation." Neural Computation 13, no. 5 (May 1, 2001): 959–92. http://dx.doi.org/10.1162/08997660151134280.

Abstract:
We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.
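A minimal sketch of the building block analyzed above — an integrate-and-fire neuron with spike-triggered adaptation; the parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def lif_with_adaptation(I=1.5, T=2000, dt=0.5,
                        tau_m=10.0, tau_a=200.0, g_a=0.15):
    """Leaky integrate-and-fire neuron with a spike-triggered
    adaptation variable a: each spike increments a, which decays
    slowly and subtracts from the drive.  In a coupled excitatory
    network this slow negative feedback is what terminates the
    population burst and sets the interburst interval."""
    v, a, spikes = 0.0, 0.0, []
    for step in range(T):
        v += dt / tau_m * (-v + I - a)
        a += dt / tau_a * (-a)
        if v >= 1.0:                 # threshold crossing -> spike
            spikes.append(step * dt)
            v = 0.0                  # reset
            a += g_a                 # spike adaptation
    return spikes

s = lif_with_adaptation()
print(f"{len(s)} spikes; first interspike intervals:", np.diff(s[:5]).tolist())
```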