Journal articles on the topic "Potts Attractor Neural Network"

To view other types of publications on this topic, follow the link: Potts Attractor Neural Network.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Explore the top 50 journal articles for research on the topic "Potts Attractor Neural Network".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the corresponding data are present in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Abdukhamidov, Eldor, Firuz Juraev, Mohammed Abuhamad, Shaker El-Sappagh, and Tamer AbuHmed. "Sentiment Analysis of Users’ Reactions on Social Media During the Pandemic." Electronics 11, no. 10 (May 22, 2022): 1648. http://dx.doi.org/10.3390/electronics11101648.

Abstract:
During the outbreak of the COVID-19 pandemic, social networks became the preeminent medium for communication, social discussion, and entertainment. Social network users are regularly expressing their opinions about the impacts of the coronavirus pandemic. Therefore, social networks serve as a reliable source for studying the topics, emotions, and attitudes of users that have been discussed during the pandemic. In this paper, we investigate the reactions and attitudes of people towards topics raised on social media platforms. We collected two large-scale COVID-19 datasets from Twitter and Instagram, covering six and three months, respectively. This paper analyzes the reaction of social network users in terms of different aspects including sentiment analysis, topic detection, emotions, and the geo-temporal characteristics of our dataset. We show that the dominant sentiment reactions on social media are neutral, while the most discussed topics by social network users are about health issues. This paper examines the countries that attracted a higher number of posts and reactions from people, as well as the distribution of health-related topics discussed in the most mentioned countries. We shed light on the temporal shift of topics over countries. Our results show that posts from the top-mentioned countries influence and attract more reactions worldwide than posts from other parts of the world.
2

O'Kane, D., and D. Sherrington. "A feature retrieving attractor neural network." Journal of Physics A: Mathematical and General 26, no. 10 (May 21, 1993): 2333–42. http://dx.doi.org/10.1088/0305-4470/26/10/008.

3

Deng, Hanming, Yang Hua, Tao Song, Zhengui Xue, Ruhui Ma, Neil Robertson, and Haibing Guan. "Reinforcing Neural Network Stability with Attractor Dynamics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3765–72. http://dx.doi.org/10.1609/aaai.v34i04.5787.

Abstract:
Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing the connection between stability in forward propagation and generalization of DNNs. In this paper, we take a step further and are the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various aspects. More specifically, we reinforce stability by modeling attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a lightweight module that can readily be deployed on state-of-the-art ResNet-like networks. RMAN is only needed during training so as to modify a ResNet's attractor dynamics by minimizing an energy function together with the loss of the original learning task. Through intensive experiments, we show that RMAN-modified attractor dynamics bring a more structured representation space to ResNet and its variants and, more importantly, improve the generalization ability of ResNet-like networks in supervised tasks owing to the reinforced stability.
4

TAN, Z., and L. SCHÜLKE. "THE ATTRACTOR BASIN OF NEURAL NETWORK WITH CORRELATED INTERACTIONS." International Journal of Modern Physics B 10, no. 26 (November 30, 1996): 3549–60. http://dx.doi.org/10.1142/s0217979296001902.

Abstract:
The attractor basin is studied when correlations (symmetries) in the interaction are imposed for a diluted Gardner-Derrida neural network. A clear relation between the attractor basin and the symmetry is obtained in a small range of the symmetry parameter. The size of the attractor basin is smaller than in the case without imposed correlations. For special values of the symmetry parameter the model reproduces the results obtained in the model with no correlations.
5

Badoni, Davide, Roberto Riccardi, and Gaetano Salina. "LEARNING ATTRACTOR NEURAL NETWORK: THE ELECTRONIC IMPLEMENTATION." International Journal of Neural Systems 03, supp01 (January 1992): 13–24. http://dx.doi.org/10.1142/s0129065792000334.

Abstract:
In this article we describe the electronic implementation of an attractor neural network with plastic analog synapses. The design of a fully connected 27-neuron network is presented, together with the most important electronic tests we have carried out on a smaller network.
6

Frolov, A. A., D. Husek, I. P. Muraviev, and P. Yu Polyakov. "Boolean Factor Analysis by Attractor Neural Network." IEEE Transactions on Neural Networks 18, no. 3 (May 2007): 698–707. http://dx.doi.org/10.1109/tnn.2007.891664.

7

ZOU, FAN, and JOSEF A. NOSSEK. "AN AUTONOMOUS CHAOTIC CELLULAR NEURAL NETWORK AND CHUA'S CIRCUIT." Journal of Circuits, Systems and Computers 03, no. 02 (June 1993): 591–601. http://dx.doi.org/10.1142/s0218126693000368.

Abstract:
Cellular neural networks (CNN) are time-continuous nonlinear dynamical systems. As in Chua's circuit, the nonlinearity of these networks is defined as a piecewise-linear function. For CNNs with at least three cells, chaotic behavior may be possible in certain regions of parameter values. In this paper we report such a chaotic attractor in an autonomous three-cell CNN. It can be shown that the attractor has a structure very similar to the double-scroll Chua's attractor. Through some equivalent transformations this circuit, in three major subspaces of its state space, is shown to belong to Chua's circuit family, although it originates from a completely different field.
8

Dominguez, D. R. C., and D. Bollé. "Categorization by a three-state attractor neural network." Physical Review E 56, no. 6 (December 1, 1997): 7306–9. http://dx.doi.org/10.1103/physreve.56.7306.

9

SERULNIK, SERGIO D., and MOSHE GUR. "AN ATTRACTOR NEURAL NETWORK MODEL OF CLASSICAL CONDITIONING." International Journal of Neural Systems 07, no. 01 (March 1996): 1–18. http://dx.doi.org/10.1142/s0129065796000026.

Abstract:
Living beings learn to associate known stimuli that exhibit specific temporal correlations. This kind of learning is called associative learning, and the process by which animals change their responses according to the schedule of arriving stimuli is called “classical conditioning”. In this paper, a conditionable neural network which exhibits features like forward conditioning, dependency on the interstimulus interval, and absence of backward and reverse conditioning is presented. An asymmetric neural network was used and its ability to retrieve a sequence of embedded patterns using a single recalling input was exploited. The main assumption was that synapses that respond with different time constants coexist in the system. These synapses induce transitions between different embedded patterns. The appearance of a correct transition when only the first stimulus is applied, is interpreted as a realization of the conditioning process. The model also allows the analytical description of the conditioning process in terms of internal and external or researcher-controlled variables.
10

Wong, K. Y. M., and C. Ho. "Attractor properties of dynamical systems: neural network models." Journal of Physics A: Mathematical and General 27, no. 15 (August 7, 1994): 5167–85. http://dx.doi.org/10.1088/0305-4470/27/15/017.

11

González, Mario, David Dominguez, Ángel Sánchez, and Francisco B. Rodríguez. "Increase attractor capacity using an ensembled neural network." Expert Systems with Applications 71 (April 2017): 206–15. http://dx.doi.org/10.1016/j.eswa.2016.11.035.

12

Steffan, Helmut, and Reimer Kühn. "Replica symmetry breaking in attractor neural network models." Zeitschrift für Physik B Condensed Matter 95, no. 2 (June 1994): 249–60. http://dx.doi.org/10.1007/bf01312198.

13

Funabashi, Masatoshi. "Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network." International Journal of Bifurcation and Chaos 25, no. 04 (April 2015): 1550054. http://dx.doi.org/10.1142/s0218127415500546.

Abstract:
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with unsupervised learning rules that reinforce and weaken the neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the Chaotic Neural Network model (CNN) according to the structure of invariant subspaces. Irregular transition between two attractor ruins with positive maximum Lyapunov exponent was triggered by the blowout bifurcation of the attractor spaces and was associated with a riddled basin structure. We then modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effect on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins and produced novel attractors in the minimum higher-dimensional subspace. It also augmented the neuronal synchrony and established uniform modularity in chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins and brought a wide range of periodicity in the emerged attractors, possibly including strange attractors. Both learning rules selectively destroyed and preserved specific invariant subspaces, depending on the neuron synchrony of the subspace where the orbits are situated. The computational rationale of the autonomous learning is discussed from a connectionist perspective.
14

Dominguez, D., K. Koroutchev, E. Serrano, and F. B. Rodríguez. "Information and Topology in Attractor Neural Networks." Neural Computation 19, no. 4 (April 2007): 956–73. http://dx.doi.org/10.1162/neco.2007.19.4.956.

Abstract:
A wide range of networks, including those with small-world topology, can be modeled by the connectivity ratio and randomness of the links. Both learning and attractor abilities of a neural network can be measured by the mutual information (MI) as a function of the load and the overlap between patterns and retrieval states. In this letter, we use MI to search for the optimal topology with regard to the storage and attractor properties of the network in an Amari-Hopfield model. We find that while an optimal storage implies an extremely diluted topology, a large basin of attraction leads to moderate levels of connectivity. This optimal topology is related to the clustering and path length of the network. We also build a diagram for the dynamical phases with random or local initial overlap and show that very diluted networks lose their attractor ability.
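As a rough illustration of how the mutual information relates to the overlap between a stored pattern and the retrieval state in models of this kind, the short sketch below computes the per-neuron MI for unbiased binary patterns; the formula and notation are ours and are not taken from the letter itself.

```python
# Sketch (our notation, not the paper's): mutual information per neuron between a stored
# +/-1 pattern and the retrieval state, expressed through the overlap m. For unbiased
# patterns a retrieved bit agrees with the pattern bit with probability (1 + m) / 2,
# so I = 1 - H2((1 + m) / 2) bits per neuron.
import numpy as np

def binary_entropy(p: float) -> float:
    """H2(p) in bits, with the 0*log(0) = 0 convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def mi_per_neuron(m: float) -> float:
    """Mutual information (bits per neuron) at retrieval overlap m."""
    return 1.0 - binary_entropy((1.0 + m) / 2.0)

if __name__ == "__main__":
    for m in (0.0, 0.5, 0.9, 1.0):
        print(f"overlap m = {m:.1f}  ->  MI = {mi_per_neuron(m):.3f} bits")
```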
15

Gislén, Lars, Carsten Peterson, and Bo Söderberg. "Complex Scheduling with Potts Neural Networks." Neural Computation 4, no. 6 (November 1992): 805–31. http://dx.doi.org/10.1162/neco.1992.4.6.805.

Abstract:
In a recent paper (Gislén et al. 1989) a convenient encoding and an efficient mean field algorithm for solving scheduling problems using a Potts neural network were developed and numerically explored on simplified and synthetic problems. In this work the approach is extended to realistic applications, both with respect to problem complexity and size. This extension requires, among other things, the interaction of Potts neurons with different numbers of components. We analyze the corresponding linearized mean field equations with respect to estimating the phase transition temperature. A brief comparison with the linear programming approach is also given. Testbeds consisting of generated problems from the Swedish high school system are solved efficiently, with high-quality solutions as results.
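For readers unfamiliar with Potts mean-field annealing, the generic softmax update used throughout this line of work can be written as follows; the notation is ours, and the paper's actual encoding of the scheduling constraints is not reproduced here.

```latex
% Generic Potts mean-field annealing update (notation ours): each Potts neuron i carries
% q_i mean-field components v_i^a, updated as a softmax over local fields u_i^a derived
% from the energy function E, while the temperature T is lowered through the estimated
% phase-transition point.
\[
  u_i^{a} = -\frac{\partial E}{\partial v_i^{a}}, \qquad
  v_i^{a} = \frac{e^{u_i^{a}/T}}{\sum_{b=1}^{q_i} e^{u_i^{b}/T}}, \qquad
  \sum_{a=1}^{q_i} v_i^{a} = 1 .
\]
```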
16

Bollé, D., P. Dupont, and J. Huyghebaert. "Thermodynamic properties of the Q-state Potts-glass neural network." Physical Review A 45, no. 6 (March 1, 1992): 4194–97. http://dx.doi.org/10.1103/physreva.45.4194.

17

IRWIN, J., H. BOHR, K. MOCHIZUKI, and P. G. WOLYNES. "CLASSIFICATION AND PREDICTION OF PROTEIN SIDE-CHAINS BY NEURAL NETWORK TECHNIQUES." International Journal of Neural Systems 03, supp01 (January 1992): 177–82. http://dx.doi.org/10.1142/s0129065792000504.

Abstract:
Neural Network methodology is used to classify and predict side-chain configurations in proteins on the basis of their sequence and in some cases also Cα-atomic distance information. In some of these methods, where Potts Associative Memories are employed, a mixed set of Potts systems each describe the various orientational states of a particular side-chain. The methods can find the correct side-chain orientations in proteins reasonably well after being trained on a data set of other proteins of known 3-dimensional structure.
18

Wu, Si, Kosuke Hamaguchi, and Shun-ichi Amari. "Dynamics and Computation of Continuous Attractors." Neural Computation 20, no. 4 (April 2008): 994–1025. http://dx.doi.org/10.1162/neco.2008.10-06-378.

Abstract:
The continuous attractor is a promising model for describing the encoding of continuous stimuli in neural systems. In a continuous attractor, the stationary states of the neural system form a continuous parameter space, on which the system is neutrally stable. This property enables the neural system to track time-varying stimuli smoothly, but it also degrades the accuracy of information retrieval, since these stationary states are easily disturbed by external noise. In this work, based on a simple model, we systematically investigate the dynamics and the computational properties of continuous attractors. In order to analyze the dynamics of a large-size network, which is otherwise extremely complicated, we develop a strategy to reduce its dimensionality by utilizing the fact that a continuous attractor can eliminate the noise components perpendicular to the attractor space very quickly. We therefore project the network dynamics onto the tangent of the attractor space and simplify it successfully as a one-dimensional Ornstein-Uhlenbeck process. Based on this simplified model, we investigate (1) the decoding error of a continuous attractor under the driving of external noisy inputs, (2) the tracking speed of a continuous attractor when external stimulus experiences abrupt changes, (3) the neural correlation structure associated with the specific dynamics of a continuous attractor, and (4) the consequence of asymmetric neural correlation on statistical population decoding. The potential implications of these results on our understanding of neural information processing are also discussed.
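The reduced one-dimensional description mentioned above can be illustrated with a generic Ornstein-Uhlenbeck simulation of the projected bump-position error; the parameter names and values below are illustrative assumptions, not the paper's.

```python
# Toy Euler-Maruyama integration of ds = -(s / tau) dt + sigma dW, the kind of
# one-dimensional Ornstein-Uhlenbeck process the abstract reduces the network
# dynamics to. Parameters are arbitrary illustrative choices.
import numpy as np

def simulate_ou(tau=10.0, sigma=0.1, dt=0.01, steps=20_000, seed=0):
    rng = np.random.default_rng(seed)
    s = np.zeros(steps)
    for t in range(1, steps):
        s[t] = s[t - 1] - (s[t - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return s

if __name__ == "__main__":
    s = simulate_ou()
    # The stationary variance of this process is sigma^2 * tau / 2.
    print("sample variance:", round(float(s[5000:].var()), 4), " theory:", 0.1**2 * 10.0 / 2)
```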
19

Lattanzi, G., G. Nardulli, and S. Stramaglia. "A Neural Network with Permanent and Volatile Memory." Modern Physics Letters B 11, no. 24 (October 20, 1997): 1037–45. http://dx.doi.org/10.1142/s0217984997001250.

Abstract:
We discuss an attractor neural network where both neurons and synapses obey dynamical equations; it models the compresence of permanent and short-term memories. By signal-noise analysis and a discussion in terms of an energy-like function, we outline the behavior of the model.
20

Ruppin, E., and M. Usher. "An attractor neural network model of semantic fact retrieval." Network: Computation in Neural Systems 1, no. 3 (January 1990): 325–44. http://dx.doi.org/10.1088/0954-898x_1_3_003.

21

Tsodyks, Misha. "Attractor neural network models of spatial maps in hippocampus." Hippocampus 9, no. 4 (1999): 481–89. http://dx.doi.org/10.1002/(sici)1098-1063(1999)9:4<481::aid-hipo14>3.0.co;2-s.

22

Solovyeva, K. P. "Self-Organized Maps on Continuous Bump Attractors." Mathematical Biology and Bioinformatics 8, no. 1 (May 27, 2013): 234–47. http://dx.doi.org/10.17537/2013.8.234.

Abstract:
In this article, we describe a simple binary neuron system which implements a self-organized map. The system consists of R input neurons (R receptors) and N output neurons of a recurrent neural network. The neural network has a quasi-continuous set of attractor states (a one-dimensional "bump attractor"). Due to the dynamics of the network, each external signal (i.e., the activity state of the receptors) induces a transition of the recurrent network into one of its stable states (points of its attractor). That makes our system different from the "winner takes all" construction of T. Kohonen. In the case when there is a one-dimensional cyclic manifold of external signals in the R-dimensional input space, and the recurrent neural network is a complete ring of neurons with local excitatory connections, there exists a process of learning the connections between the receptors and the neurons of the recurrent network which enables a topologically correct mapping of input signals onto the stable states of the neural network. The convergence rate of learning and the roles of noise and other factors affecting the described phenomenon have been evaluated in computational simulations.
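A minimal rate-based ring network with local excitation and global inhibition, of the kind the abstract builds on, can be sketched as follows; the connection strengths and external drive are toy values chosen by us, and whether a stable bump forms depends on them.

```python
# Toy ring ("bump") attractor sketch, not the authors' model: N rate neurons on a ring
# with cosine local excitation, uniform inhibition and threshold-linear dynamics.
import numpy as np

N = 100
theta = 2 * np.pi * np.arange(N) / N
J0, J1 = -1.0, 3.0                                   # assumed inhibition / excitation strengths
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

rng = np.random.default_rng(1)
r = 0.1 * rng.random(N)                              # small random initial rates
dt, tau, drive = 0.1, 1.0, 0.5
for _ in range(3000):
    r += dt / tau * (-r + np.maximum(W @ r + drive, 0.0))

# With strong enough local excitation the uniform state destabilizes and a localized
# bump of activity settles at an arbitrary position on the ring (one point of the
# quasi-continuous attractor family).
print("bump peak at (deg):", round(float(np.degrees(theta[np.argmax(r)])), 1))
print("fraction of active neurons:", float(np.mean(r > 1e-3)))
```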
23

Hoffman, Ralph E. "Additional tests of Amit's attractor neural networks." Behavioral and Brain Sciences 18, no. 4 (December 1995): 634–35. http://dx.doi.org/10.1017/s0140525x00040255.

Abstract:
Further tests of Amit's model are indicated. One strategy is to use the apparent coding sparseness of the model to make predictions about coding sparseness in Miyashita's network. A second approach is to use memory overload to induce false positive responses in modules and biological systems. In closing, the importance of temporal coding and timing requirements in developing biologically plausible attractor networks is mentioned.
24

Tang, Buzhou, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. "Recognizing Continuous and Discontinuous Adverse Drug Reaction Mentions from Social Media Using LSTM-CRF." Wireless Communications and Mobile Computing 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/2379208.

Abstract:
Social media in medicine, where patients can express their personal treatment experiences via personal computers and mobile devices, usually contains plenty of useful medical information, such as adverse drug reactions (ADRs); mining this useful medical information from social media has attracted more and more attention from researchers. In this study, we propose a deep neural network (called LSTM-CRF) combining long short-term memory (LSTM) neural networks (a type of recurrent neural network) and conditional random fields (CRFs) to recognize ADR mentions from social media in medicine, and we investigate the effects of three factors on ADR mention recognition. The three factors are as follows: (1) representation of continuous and discontinuous ADR mentions: two novel representations, "BIOHD" and "Multilabel," are compared; (2) subject of posts: each post has a subject (i.e., a drug here); and (3) external knowledge bases. Experiments conducted on a benchmark corpus, CADEC, show that LSTM-CRF achieves a better F-score than CRF; "Multilabel" is better at representing continuous and discontinuous ADR mentions than "BIOHD"; and both the subjects of comments and external knowledge bases are individually beneficial to ADR mention recognition. To the best of our knowledge, this is the first study to investigate deep neural networks for mining continuous and discontinuous ADRs from social media.
25

Akιn, H. "Phase diagrams of lattice models on Cayley tree and chandelier network: a review." Condensed Matter Physics 25, no. 3 (2022): 32501. http://dx.doi.org/10.5488/cmp.25.32501.

Abstract:
The main purpose of this review paper is to give systematically all the known results on phase diagrams corresponding to lattice models (Ising and Potts) on Cayley tree (or Bethe lattice) and chandelier networks. A detailed survey of various modelling applications of lattice models is reported. By using Vannimenus's approach, the recursive equations of Ising and Potts models associated to a given Hamiltonian on the Cayley tree are presented and analyzed. The corresponding phase diagrams with programming codes in different programming languages are plotted. To detect the phase transitions in the modulated phase, we investigate in detail the actual variation of the wave-vector q with temperature and the Lyapunov exponent associated with the trajectory of our current recursive system. We determine the transition between commensurate (C) and incommensurate (I) phases by means of the Lyapunov exponents, wave-vector, and strange attractor for a comprehensive comparison. We survey the dynamical behavior of the Ising model on the chandelier network. We examine the phase diagrams of the Ising model corresponding to a given Hamiltonian on a new type of "Cayley-tree-like lattice", such as triangular, rectangular, pentagonal chandelier networks (lattices). Moreover, several open problems are discussed.
26

BROUWER, ROELOF K. "AN INTEGER RECURRENT ARTIFICIAL NEURAL NETWORK FOR CLASSIFYING FEATURE VECTORS." International Journal of Pattern Recognition and Artificial Intelligence 14, no. 03 (May 2000): 339–55. http://dx.doi.org/10.1142/s0218001400000222.

Abstract:
The main contribution of this paper is the development of an Integer Recurrent Artificial Neural Network (IRANN) for classification of feature vectors. The network consists of both threshold units, or perceptrons, and counters, which are non-threshold units with binary input and integer output. The input and output of the network consist of vectors of natural numbers that may be used to represent feature vectors. For classification purposes, representatives of sets are stored by calculating a connection matrix such that all the elements in a training set are attracted to members of the same training set. An arbitrary element is then classified by the class of its attractor, provided the attractor is a member of one of the original training sets. The network is successfully applied to the classification of sugar diabetes data, credit application data, and the iris data set.
27

Ahissar, Ehud. "Are single-cell data sufficient for testing neural network models?" Behavioral and Brain Sciences 18, no. 4 (December 1995): 626–27. http://dx.doi.org/10.1017/s0140525x00040176.

Abstract:
Persistent activity can be the product of mechanisms other than attractor reverberations. The single-unit data presented by Amit cannot discriminate between the different mechanisms. In fact, single-unit data do not appear to be adequate for testing neural network models.
28

Horn, D., and E. Ruppin. "Compensatory Mechanisms in an Attractor Neural Network Model of Schizophrenia." Neural Computation 7, no. 1 (January 1995): 182–205. http://dx.doi.org/10.1162/neco.1995.7.1.182.

Abstract:
We investigate the effect of synaptic compensation on the dynamic behavior of an attractor neural network receiving its input stimuli as external fields projecting on the network. It is shown how, in the face of weakened inputs, memory performance may be preserved by strengthening internal synaptic connections and increasing the noise level. Yet, these compensatory changes necessarily have adverse side effects, leading to spontaneous, stimulus-independent retrieval of stored patterns. These results can support Stevens' recent hypothesis that the onset of schizophrenia is associated with frontal synaptic regeneration, occurring subsequent to the degeneration of temporal neurons projecting on these areas.
29

Fink, Wolfgang. "Neural attractor network for application in visual field data classification." Physics in Medicine and Biology 49, no. 13 (June 12, 2004): 2799–809. http://dx.doi.org/10.1088/0031-9155/49/13/003.

30

Yu, Jiali, Huajin Tang, Haizhou Li, and Luping Shi. "Dynamical properties of continuous attractor neural network with background tuning." Neurocomputing 99 (January 2013): 439–47. http://dx.doi.org/10.1016/j.neucom.2012.06.029.

31

Igarashi, Yasuhiko, Masafumi Oizumi, Yosuke Otsubo, Kenji Nagata, and Masato Okada. "Statistical mechanics of attractor neural network models with synaptic depression." Journal of Physics: Conference Series 197 (December 1, 2009): 012018. http://dx.doi.org/10.1088/1742-6596/197/1/012018.

32

Lakshmi, C., K. Thenmozhi, John Bosco Balaguru Rayappan, and Rengarajan Amirtharajan. "Hopfield attractor-trusted neural network: an attack-resistant image encryption." Neural Computing and Applications 32, no. 15 (November 29, 2019): 11477–89. http://dx.doi.org/10.1007/s00521-019-04637-4.

33

Seow, M. J., and V. K. Asari. "Recurrent Neural Network as a Linear Attractor for Pattern Association." IEEE Transactions on Neural Networks 17, no. 1 (January 2006): 246–50. http://dx.doi.org/10.1109/tnn.2005.860869.

34

Deco, Gustavo, and Edmund T. Rolls. "Sequential Memory: A Putative Neural and Synaptic Dynamical Mechanism." Journal of Cognitive Neuroscience 17, no. 2 (February 2005): 294–307. http://dx.doi.org/10.1162/0898929053124875.

Abstract:
A key issue in the neurophysiology of cognition is the problem of sequential learning. Sequential learning refers to the ability to encode and represent the temporal order of discrete elements occurring in a sequence. We show that the short-term memory for a sequence of items can be implemented in an autoassociation neural network. Each item is one of the attractor states of the network. The autoassociation network is implemented at the level of integrate-and-fire neurons so that the contributions of different biophysical mechanisms to sequence learning can be investigated. It is shown that if the synapses or neurons that support each attractor state adapt, then every time the network is made quiescent (e.g., by inhibition), the attractor state that emerges next is the next item in the sequence. We show with numerical simulations implementations of the mechanisms using (1) a sodium inactivation-based spike-frequency-adaptation mechanism, (2) a Ca2+-activated K+ current, and (3) short-term synaptic depression, with sequences of up to three items. The network does not need repeated training on a particular sequence and will repeat the items in the order that they were last presented. The time between the items in a sequence is not fixed, allowing the items to be read out as required over a period of up to many seconds. The network thus uses adaptation rather than associative synaptic modification to recall the order of the items in a recently presented sequence.
35

ABDI, H. "A NEURAL NETWORK PRIMER." Journal of Biological Systems 02, no. 03 (September 1994): 247–81. http://dx.doi.org/10.1142/s0218339094000179.

Abstract:
Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., like those used to model spin glasses). One of the main notions used with nonlinear unit networks is the notion of attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network and the radial basis function network.
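Because the primer treats the Hopfield network as the canonical attractor model, a self-contained toy example may help; the code below is our illustration, not the article's, and uses standard Hebbian storage with asynchronous sign updates.

```python
# Minimal Hopfield-style attractor sketch: Hebbian storage of a few +/-1 patterns
# and asynchronous recall from a corrupted cue. Sizes and the corruption level are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N          # Hebbian weights
np.fill_diagonal(W, 0.0)                 # no self-connections

def recall(cue, sweeps=10):
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):     # asynchronous updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1                          # corrupt 20% of the bits
out = recall(cue)
print("overlap with the stored pattern:", (out @ patterns[0]) / N)
```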
36

Fiorelli, Eliana, Igor Lesanovsky, and Markus Müller. "Phase diagram of quantum generalized Potts-Hopfield neural networks." New Journal of Physics 24, no. 3 (March 1, 2022): 033012. http://dx.doi.org/10.1088/1367-2630/ac5490.

Abstract:
We introduce and analyze an open quantum generalization of the q-state Potts-Hopfield neural network (NN), which is an associative memory model based on multi-level classical spins. The dynamics of this many-body system is formulated in terms of a Markovian master equation of Lindblad type, which allows one to incorporate both probabilistic classical and coherent quantum processes on an equal footing. By employing a mean field description we investigate how classical fluctuations due to temperature and quantum fluctuations effectuated by coherent spin rotations affect the ability of the network to retrieve stored memory patterns. We construct the corresponding phase diagram, which in the low temperature regime displays pattern retrieval in analogy to the classical Potts-Hopfield NN. When increasing quantum fluctuations, however, a limit cycle phase emerges, which has no classical counterpart. This shows that quantum effects can qualitatively alter the structure of the stationary state manifold with respect to the classical model, and potentially allow one to encode and retrieve novel types of patterns.
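For reference, the Lindblad-type master equation the abstract refers to has the standard general form below; the symbols are the generic ones and not necessarily those used in the paper.

```latex
% Generic Lindblad master equation: H generates the coherent (quantum) part of the
% dynamics, the jump operators L_k with rates gamma_k the dissipative / probabilistic part.
\[
  \frac{d\rho}{dt} \;=\; -\,\frac{i}{\hbar}\,[H,\rho]
  \;+\; \sum_k \gamma_k \Big( L_k \rho L_k^{\dagger}
  \;-\; \tfrac{1}{2}\,\big\{ L_k^{\dagger} L_k ,\, \rho \big\} \Big).
\]
```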
37

Xiong, Daxing, and Hong Zhao. "Estimates of storage capacity in the q-state Potts-glass neural network." Journal of Physics A: Mathematical and Theoretical 43, no. 44 (October 13, 2010): 445001. http://dx.doi.org/10.1088/1751-8113/43/44/445001.

38

Kang, Chol, Michelangelo Naim, Vezha Boboeva, and Alessandro Treves. "Life on the Edge: Latching Dynamics in a Potts Neural Network." Entropy 19, no. 9 (September 3, 2017): 468. http://dx.doi.org/10.3390/e19090468.

39

Marconi, Carlo, Pau Colomer Saus, María García Díaz, and Anna Sanpera. "The role of coherence theory in attractor quantum neural networks." Quantum 6 (September 8, 2022): 794. http://dx.doi.org/10.22331/q-2022-09-08-794.

Abstract:
We investigate attractor quantum neural networks (aQNNs) within the framework of coherence theory. We show that: i) aQNNs are associated to non-coherence-generating quantum channels; ii) the depth of the network is given by the decohering power of the corresponding quantum map; and iii) the attractor associated to an arbitrary input state is the one minimizing their relative entropy. Further, we examine faulty aQNNs described by noisy quantum channels, derive their physical implementation and analyze under which conditions their performance can be enhanced by using entanglement or coherence as external resources.
40

Amit, Daniel, and Nicolas Brunel. "Learning internal representations in an attractor neural network with analogue neurons." Network: Computation in Neural Systems 6, no. 3 (August 1, 1995): 359–88. http://dx.doi.org/10.1088/0954-898x/6/3/004.

41

Fassnacht, C., and A. Zippelius. "Recognition and categorization in a structured neural network with attractor dynamics." Network: Computation in Neural Systems 2, no. 1 (January 1991): 63–84. http://dx.doi.org/10.1088/0954-898x_2_1_004.

42

Brunel, Nicolas. "Dynamics of an attractor neural network converting temporal into spatial correlations." Network: Computation in Neural Systems 5, no. 4 (January 1994): 449–70. http://dx.doi.org/10.1088/0954-898x_5_4_003.

43

Badoni, Davide, Stefano Bertazzoni, Stefano Buglioni, Gaetano Salina, Daniel J. Amit, and Stefano Fusi. "Electronic implementation of an analogue attractor neural network with stochastic learning." Network: Computation in Neural Systems 6, no. 2 (January 1995): 125–57. http://dx.doi.org/10.1088/0954-898x_6_2_002.

44

Beňušková, Lubica. "Modelling transpositional invariancy of melody recognition with an attractor neural network." Network: Computation in Neural Systems 6, no. 3 (January 1995): 313–31. http://dx.doi.org/10.1088/0954-898x_6_3_001.

45

Amit, Daniel J., and Nicolas Brunel. "Learning internal representations in an attractor neural network with analogue neurons." Network: Computation in Neural Systems 6, no. 3 (January 1995): 359–88. http://dx.doi.org/10.1088/0954-898x_6_3_004.

46

Frolov, Alexander A., Dusan Husek, Pavel Y. Polyakov, and Vaclav Snasel. "New BFA method based on attractor neural network and likelihood maximization." Neurocomputing 132 (May 2014): 14–29. http://dx.doi.org/10.1016/j.neucom.2013.07.047.

47

Torres, J. J., J. M. Cortes, J. Marro, and H. J. Kappen. "Competition Between Synaptic Depression and Facilitation in Attractor Neural Networks." Neural Computation 19, no. 10 (October 2007): 2739–55. http://dx.doi.org/10.1162/neco.2007.19.10.2739.

Abstract:
We study the effect of competition between short-term synaptic depression and facilitation on the dynamic properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance of depression, facilitation, and the underlying noise, the network displays different behaviors, including associative memory and switching of activity between different attractors. We conclude that synaptic facilitation enhances the attractor instability in a way that (1) intensifies the system adaptability to external stimuli, which is in agreement with experiments, and (2) favors the retrieval of information with less error during short time intervals.
48

Brunel, Nicolas. "Hebbian Learning of Context in Recurrent Neural Networks." Neural Computation 8, no. 8 (November 1996): 1677–710. http://dx.doi.org/10.1162/neco.1996.8.8.1677.

Abstract:
Single electrode recordings in the inferotemporal cortex of monkeys during delayed visual memory tasks provide evidence for attractor dynamics in the observed region. The persistent elevated delay activities could be internal representations of features of the learned visual stimuli shown to the monkey during training. When uncorrelated stimuli are presented during training in a fixed sequence, these experiments display significant correlations between the internal representations. Recently a simple model of attractor neural network has reproduced quantitatively the measured correlations. An underlying assumption of the model is that the synaptic matrix formed during the training phase contains in its efficacies information about the contiguity of persistent stimuli in the training sequence. We present here a simple unsupervised learning dynamics that produces such a synaptic matrix if sequences of stimuli are repeatedly presented to the network at fixed order. The resulting matrix is then shown to convert temporal correlations during training into spatial correlations between attractors. The scenario is that, in the presence of selective delay activity, at the presentation of each stimulus, the activity distribution in the neural assembly contains information of both the current stimulus and the previous one (carried by the attractor). Thus the recurrent synaptic matrix can code not only for each of the stimuli presented to the network but also for their context. We combine the idea that for learning to be effective, synaptic modification should be stochastic, with the fact that attractors provide learnable information about two consecutive stimuli. We calculate explicitly the probability distribution of synaptic efficacies as a function of training protocol, that is, the order in which stimuli are presented to the network. We then solve for the dynamics of a network composed of integrate-and-fire excitatory and inhibitory neurons with a matrix of synaptic collaterals resulting from the learning dynamics. The network has a stable spontaneous activity, and stable delay activity develops after a critical learning stage. The availability of a learning dynamics makes possible a number of experimental predictions for the dependence of the delay activity distributions and the correlations between them, on the learning stage and the learning protocol. In particular it makes specific predictions for pair-associates delay experiments.
49

KRAWIECKI, A., and R. A. KOSIŃSKI. "ON–OFF INTERMITTENCY IN SMALL NEURAL NETWORKS WITH TIME-DEPENDENT SYNAPTIC NOISE." International Journal of Bifurcation and Chaos 09, no. 01 (January 1999): 97–105. http://dx.doi.org/10.1142/s0218127499000055.

Abstract:
Numerical evidence is presented for the occurrence of on–off intermittency and attractor bubbling in the time series of synaptic potentials of analog neurons with time-dependent synaptic noise. The cases of continuous noise with a uniform distribution of values and (physiologically motivated) quantal noise with a binomial distribution are considered. The results were obtained for a single neuron with a synaptic self-connection and a network of two neurons with various weights of synaptic connections. In the latter case, coexistence of a neuron showing on–off intermittency with another one showing attractor bubbling is possible.
50

AKCAN, BURCU, and YİĞIT GÜNDÜÇ. "A MONTE CARLO STUDY OF THE STORAGE CAPACITY AND EFFECTS OF THE CORRELATIONS IN q-STATE POTTS NEURON SYSTEM." International Journal of Modern Physics C 13, no. 02 (February 2002): 199–206. http://dx.doi.org/10.1142/s012918310200305x.

Abstract:
The storage capacity of the Potts neural network is studied by using Monte Carlo techniques. It is observed that the critical storage capacity formula of Kanter fits our data well. Increasing the correlation between the initial patterns reduces the storage capacity. This reduction in capacity is proportional to the percentage of correlations and inversely proportional to the number of orientations that the Potts neurons can possess.
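To make the setting concrete, here is a minimal zero-temperature retrieval experiment for a Kanter-style q-state Potts attractor network; the code, parameter values, and overlap normalization are our own illustrative choices, not the authors' simulation.

```python
# Toy q-state Potts attractor network: Hebbian-type couplings built from stored patterns,
# greedy (zero-temperature) single-site updates, and a normalized retrieval overlap.
import numpy as np

def run(N=100, P=5, q=3, sweeps=10, seed=0):
    rng = np.random.default_rng(seed)
    xi = rng.integers(0, q, size=(P, N))             # stored Potts patterns
    u = np.eye(q)[xi] - 1.0 / q                      # (P, N, q) pattern projectors
    J = np.einsum('pia,pjb->ijab', u, u) / N         # couplings J[i, j, a, b]
    for i in range(N):
        J[i, i] = 0.0                                # no self-coupling
    s = xi[0].copy()                                 # cue: pattern 0 with 20% of sites randomized
    noisy = rng.choice(N, size=N // 5, replace=False)
    s[noisy] = rng.integers(0, q, size=noisy.size)
    onehot = np.eye(q)
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = np.einsum('jab,jb->a', J[i], onehot[s])   # local field for each Potts state
            s[i] = int(np.argmax(h))
    return (q * np.mean(s == xi[0]) - 1.0) / (q - 1.0)    # 1 = perfect retrieval, ~0 = chance

if __name__ == "__main__":
    print("retrieval overlap:", round(run(), 3))
```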