A selection of scholarly literature on the topic "Potts Attractor Neural Network"

Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Potts Attractor Neural Network".

Next to every work in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online whenever these are available in the source metadata.

Journal articles on the topic "Potts Attractor Neural Network"

1

Abdukhamidov, Eldor, Firuz Juraev, Mohammed Abuhamad, Shaker El-Sappagh, and Tamer AbuHmed. "Sentiment Analysis of Users’ Reactions on Social Media During the Pandemic." Electronics 11, no. 10 (May 22, 2022): 1648. http://dx.doi.org/10.3390/electronics11101648.

Abstract:
During the outbreak of the COVID-19 pandemic, social networks became the preeminent medium for communication, social discussion, and entertainment. Social network users regularly express their opinions about the impacts of the coronavirus pandemic. Therefore, social networks serve as a reliable source for studying the topics, emotions, and attitudes of users that have been discussed during the pandemic. In this paper, we investigate the reactions and attitudes of people towards topics raised on social media platforms. We collected two large-scale COVID-19 datasets from Twitter and Instagram, covering six and three months, respectively. This paper analyzes the reactions of social network users across several aspects, including sentiment analysis, topic detection, emotions, and the geo-temporal characteristics of our dataset. We show that the dominant sentiment reactions on social media are neutral, while the most discussed topics by social network users concern health issues. This paper examines the countries that attracted a higher number of posts and reactions from people, as well as the distribution of health-related topics discussed in the most mentioned countries. We shed light on the temporal shift of topics over countries. Our results show that posts from the top-mentioned countries influence and attract more reactions worldwide than posts from other parts of the world.
2

O'Kane, D., and D. Sherrington. "A feature retrieving attractor neural network." Journal of Physics A: Mathematical and General 26, no. 10 (May 21, 1993): 2333–42. http://dx.doi.org/10.1088/0305-4470/26/10/008.

3

Deng, Hanming, Yang Hua, Tao Song, Zhengui Xue, Ruhui Ma, Neil Robertson, and Haibing Guan. "Reinforcing Neural Network Stability with Attractor Dynamics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3765–72. http://dx.doi.org/10.1609/aaai.v34i04.5787.

Abstract:
Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing a connection between stability in forward propagation and the generalization of DNNs. In this paper, we take a step further and are the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various aspects. More specifically, we reinforce stability by modeling the attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a lightweight module readily deployable on state-of-the-art ResNet-like networks. RMAN is needed only during training, so as to modify a ResNet's attractor dynamics by minimizing an energy function together with the loss of the original learning task. Through intensive experiments, we show that RMAN-modified attractor dynamics bring a more structured representation space to ResNet and its variants and, more importantly, improve the generalization ability of ResNet-like networks in supervised tasks due to the reinforced stability.
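
For illustration, the training scheme this abstract describes, a task loss minimized jointly with an energy that rewards fixed-point (attractor) behaviour, can be sketched in a few lines of PyTorch. This is a minimal sketch only: the toy residual block, the generic energy ||F(h) - h||^2 and the 0.1 weighting are placeholder assumptions, not the paper's relu-max energy or RMAN module.

```python
import torch
import torch.nn as nn

class ToyResidualBlock(nn.Module):
    """Toy stand-in for a ResNet stage: h -> h + f(h)."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                               nn.Linear(dim, dim))

    def forward(self, h):
        return h + self.f(h)

def fixed_point_energy(block, h):
    # Small when h barely moves under one more application of the block,
    # i.e. when the forward dynamics have (nearly) settled on an attractor.
    return ((block(h) - h) ** 2).mean()

block, head = ToyResidualBlock(32), nn.Linear(32, 10)
opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()),
                       lr=1e-3)

x = torch.randn(64, 32)                 # dummy batch
y = torch.randint(0, 10, (64,))
for _ in range(100):
    h = block(x)
    loss = (nn.functional.cross_entropy(head(h), y)
            + 0.1 * fixed_point_energy(block, h))   # task loss + energy
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final joint loss:", float(loss))
```

As with RMAN, the auxiliary term is used only during training; at test time the block is applied as usual.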
4

TAN, Z., and L. SCHÜLKE. "THE ATTRACTOR BASIN OF NEURAL NETWORK WITH CORRELATED INTERACTIONS." International Journal of Modern Physics B 10, no. 26 (November 30, 1996): 3549–60. http://dx.doi.org/10.1142/s0217979296001902.

Abstract:
The attractor basin is studied when correlations (symmetries) in the interactions are imposed on the diluted Gardner-Derrida neural network. A clear relation between the attractor basin and the symmetry is obtained in a small range of the symmetry parameter. The size of the attractor basin is smaller than in the case without imposed correlations. For special values of the symmetry parameter, the model reproduces the results obtained in the model with no correlations.
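
How basin size can be measured is easy to illustrate on a standard Hopfield network (a sketch only; the paper's diluted Gardner-Derrida model with correlated interactions is not implemented here): store patterns with a Hebbian rule, corrupt one of them by flipping a growing fraction of bits, and record whether relaxation still recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                          # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))
W = (xi.T @ xi) / N                    # Hebbian couplings
np.fill_diagonal(W, 0.0)

def relax(s, steps=50):
    """Iterate the deterministic dynamics until a fixed point is reached."""
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Probe the basin of pattern 0: flip a fraction of bits and test retrieval
# via the overlap m = (1/N) * s . xi_0 (m = 1 means perfect recall).
for flip_frac in (0.1, 0.2, 0.3, 0.4):
    probe = xi[0].copy()
    idx = rng.choice(N, int(flip_frac * N), replace=False)
    probe[idx] *= -1
    m = relax(probe) @ xi[0] / N
    print(f"flipped {flip_frac:.0%} of bits: final overlap m = {m:+.2f}")
```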
5

Badoni, Davide, Roberto Riccardi, and Gaetano Salina. "LEARNING ATTRACTOR NEURAL NETWORK: THE ELECTRONIC IMPLEMENTATION." International Journal of Neural Systems 03, supp01 (January 1992): 13–24. http://dx.doi.org/10.1142/s0129065792000334.

Abstract:
In this article we describe the electronic implementation of an attractor neural network with plastic analog synapses. The design of a 27-neuron fully connected network is presented, together with the most important electronic tests we have carried out on a smaller network.
6

Frolov, A. A., D. Husek, I. P. Muraviev, and P. Yu Polyakov. "Boolean Factor Analysis by Attractor Neural Network." IEEE Transactions on Neural Networks 18, no. 3 (May 2007): 698–707. http://dx.doi.org/10.1109/tnn.2007.891664.

7

ZOU, FAN, and JOSEF A. NOSSEK. "AN AUTONOMOUS CHAOTIC CELLULAR NEURAL NETWORK AND CHUA'S CIRCUIT." Journal of Circuits, Systems and Computers 03, no. 02 (June 1993): 591–601. http://dx.doi.org/10.1142/s0218126693000368.

Abstract:
Cellular neural networks (CNNs) are time-continuous nonlinear dynamical systems. As in Chua's circuit, the nonlinearity of these networks is defined as a piecewise-linear function. For CNNs with at least three cells, chaotic behavior can occur in certain regions of parameter space. In this paper we report such a chaotic attractor in an autonomous three-cell CNN. It can be shown that the attractor has a structure very similar to the double-scroll Chua's attractor. Through some equivalent transformations, this circuit, in three major subspaces of its state space, is shown to belong to Chua's circuit family, although it originates from a completely different field.
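
The model class is compact enough to write out: each cell obeys dx/dt = -x + A f(x) with the standard piecewise-linear output f(x) = (|x+1| - |x-1|)/2. Below is a sketch with forward-Euler integration; the feedback template A is an arbitrary placeholder, not the chaotic template reported in the paper, so this particular system need not be chaotic.

```python
import numpy as np

def f(x):
    # Standard piecewise-linear CNN output function.
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

# Placeholder 3x3 feedback template (NOT the paper's chaotic one).
A = np.array([[ 2.0,  1.2,  0.0],
              [-1.2,  2.0,  1.2],
              [ 0.0, -1.2,  2.0]])

def simulate(x0, dt=1e-3, steps=20_000):
    """Integrate dx/dt = -x + A f(x) with forward Euler."""
    x = np.asarray(x0, dtype=float).copy()
    traj = np.empty((steps, 3))
    for k in range(steps):
        x += dt * (-x + A @ f(x))
        traj[k] = x
    return traj

traj = simulate([0.1, 0.1, 0.1])
print("state after 20 time units:", traj[-1])
```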
8

Dominguez, D. R. C., and D. Bollé. "Categorization by a three-state attractor neural network." Physical Review E 56, no. 6 (December 1, 1997): 7306–9. http://dx.doi.org/10.1103/physreve.56.7306.

9

SERULNIK, SERGIO D., and MOSHE GUR. "AN ATTRACTOR NEURAL NETWORK MODEL OF CLASSICAL CONDITIONING." International Journal of Neural Systems 07, no. 01 (March 1996): 1–18. http://dx.doi.org/10.1142/s0129065796000026.

Abstract:
Living beings learn to associate known stimuli that exhibit specific temporal correlations. This kind of learning is called associative learning, and the process by which animals change their responses according to the schedule of arriving stimuli is called “classical conditioning”. In this paper, a conditionable neural network which exhibits features like forward conditioning, dependency on the interstimulus interval, and absence of backward and reverse conditioning is presented. An asymmetric neural network was used and its ability to retrieve a sequence of embedded patterns using a single recalling input was exploited. The main assumption was that synapses that respond with different time constants coexist in the system. These synapses induce transitions between different embedded patterns. The appearance of a correct transition when only the first stimulus is applied, is interpreted as a realization of the conditioning process. The model also allows the analytical description of the conditioning process in terms of internal and external or researcher-controlled variables.
10

Wong, K. Y. M., and C. Ho. "Attractor properties of dynamical systems: neural network models." Journal of Physics A: Mathematical and General 27, no. 15 (August 7, 1994): 5167–85. http://dx.doi.org/10.1088/0305-4470/27/15/017.


Dissertations on the topic "Potts Attractor Neural Network"

1

Seybold, John. "An attractor neural network model of spoken word recognition." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335839.

2

Pereira, Patrícia. "Attractor Neural Network modelling of the Lifespan Retrieval Curve." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280732.

Abstract:
Human capability to recall episodic memories depends on how much time has passed since the memory was encoded. This dependency is described by a memory retrieval curve that reflects an interesting phenomenon referred to as a reminiscence bump, a tendency for older people to recall more memories formed during their young adulthood than in other periods of life. This phenomenon can be modelled with an attractor neural network, for example, the firing-rate Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms underlying the reminiscence bump in the neural network model are systematically studied. The effects of synaptic plasticity, network architecture and other relevant parameters on the characteristics of the reminiscence bump are investigated. The most influential factors turn out to be the magnitude of dopamine-linked plasticity at birth and the time constant of exponential plasticity decay with age, which together set the position of the bump. The other parameters mainly influence the general amplitude of the lifespan retrieval curve. Furthermore, the recency phenomenon, i.e. the tendency to remember the most recent memories, can also be parameterized by adding a constant to the exponentially decaying plasticity function representing the decrease in the level of dopamine neurotransmitters.
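
The parameterization mentioned at the end of the abstract, plasticity that is high at birth, decays exponentially with age, and keeps a constant floor that yields the recency effect, amounts to a simple learning-rate schedule. A sketch with illustrative values (g0, tau and c are placeholders, not the thesis' fitted parameters):

```python
import numpy as np

def plasticity(age_years, g0=1.0, tau=15.0, c=0.08):
    """Dopamine-linked encoding strength: exponential decay with age plus a
    constant floor c that produces the recency part of the retrieval curve."""
    return g0 * np.exp(-age_years / tau) + c

# Memories encoded with higher plasticity form stronger attractors and win
# the later recall competition, pushing the bump toward early adulthood.
for age in range(0, 81, 10):
    print(f"age {age:2d}: encoding strength {plasticity(age):.3f}")
```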
3

Ericson, Julia. "Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291553.

Abstract:
In the last decades, computational models have become useful tools for studying biological neural networks. These models are typically constrained either by behavioural data from neuropsychological studies or by biological data from neuroscience. One model of the latter kind is the Bayesian Confidence Propagating Neural Network (BCPNN), an attractor network with a Bayesian learning rule which has been proposed as a model for various types of memory. In this thesis, I have further studied the potential of the BCPNN in short-term sequential memory. More specifically, I have investigated whether the network can be used to qualitatively replicate behaviours of immediate verbal serial recall, and thereby offer insight into the network-level mechanisms which give rise to these behaviours. The simulations showed that the model was able to reproduce various benchmark effects such as the word length and irrelevant speech effects. It could also simulate the bow-shaped positional accuracy curve as well as some backward recall if the to-be-recalled sequence was short enough. Finally, the model showed some ability to handle sequences with repeated patterns. However, the current model architecture was not sufficient for simulating the effects of rhythm, such as temporally grouping the inputs or stressing a specific element in the sequence. Overall, even though the model is not complete, it showed promising results as a tool for investigating biological memory, and it could explain various benchmark behaviours in immediate serial recall through neuroscientifically inspired learning rules and architecture.
4

Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University, Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.

Abstract:
Feedforward neural networks have been widely studied and used in many applications in science and engineering. The training of this type of network is mainly undertaken using the well-known backpropagation-based learning algorithms. One major problem with these algorithms is their slow training convergence speed, which hinders their applications. In order to improve the training convergence speed, many researchers have developed different improvements and enhancements. However, the slow convergence problem has not been fully addressed. This thesis makes several contributions by proposing new backpropagation learning algorithms based on the terminal attractor concept to improve existing backpropagation learning algorithms such as the gradient descent and Levenberg-Marquardt algorithms. These new algorithms enable fast convergence both far from and close to the ideal weights. In particular, a new fast convergence mechanism is proposed, based on the fast terminal attractor concept. Comprehensive simulation studies are undertaken to demonstrate the effectiveness of the proposed backpropagation algorithms with terminal attractors. Finally, three practical application cases (time series forecasting, character recognition and image interpolation) are chosen to show the practicality and usefulness of the proposed learning algorithms, with comprehensive comparative studies against existing algorithms.
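
The terminal attractor idea referenced here replaces the asymptotic convergence of plain gradient descent with finite-time convergence: the loss E is driven down at a rate proportional to E^beta with beta < 1, so the continuous-time dynamics reach E = 0 in finite time. A minimal sketch on a least-squares problem (illustrative only; the thesis embeds the idea in backpropagation and Levenberg-Marquardt training, which this toy code does not reproduce):

```python
import numpy as np

def terminal_attractor_descent(X, y, beta=0.5, lr=0.2, epochs=500):
    """Gradient descent whose per-step loss decrease is ~ lr * E**beta, the
    discrete analogue of the terminal-attractor dynamics dE/dt = -k * E**beta."""
    rng = np.random.default_rng(1)
    w = rng.normal(size=X.shape[1])
    for step in range(epochs):
        err = X @ w - y
        E = 0.5 * float(err @ err) / len(y)
        if E < 1e-12:
            break
        grad = X.T @ err / len(y)
        # First-order decrease of lr * E**beta, capped to avoid overshoot.
        dE = min(lr * E ** beta, 0.5 * E)
        w -= dE * grad / (grad @ grad + 1e-12)
    return w, E, step

# Toy regression: recover planted weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
w, E, steps = terminal_attractor_descent(X, X @ w_true)
print(f"loss {E:.2e} after {steps} steps")
```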
5

Villani, Gianluca. "Analysis of an Attractor Neural Network Model for Working Memory: A Control Theory Approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260079.

Abstract:
Working Memory (WM) is a general-purpose cognitive system responsible for temporarily holding information in service of higher-order cognition systems, e.g. decision making. In this thesis we focus on a non-spiking model belonging to a special family of biologically inspired recurrent Artificial Neural Networks aiming to account for human experimental data on free recall. Considering its modular structure, this thesis gives a networked-system representation of WM in order to analyze its stability and synchronization properties. Furthermore, with the tools provided by bifurcation analysis we investigate the role of the different parameters on the generated synchronized patterns. To the best of our knowledge, the proposed dynamical recurrent neural network has not been studied before from a control theory perspective.
6

Ferland, Guy J. M. G. "A new paradigm for the classification of patterns: The 'race to the attractor' neural network model." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9298.

Abstract:
The human brain is arguably the best known classifier around. It can learn complex classification tasks with little apparent effort. It can learn to classify new patterns without forgetting old ones. It can learn a seemingly unlimited number of pattern classes. And it displays amazing resilience through its ability to persevere with reliable classifications despite damage to itself (e.g., dying neurons). These advantages have motivated researchers from many fields in the quest to understand the brain in order to duplicate its ability in an artificial system. And yet, little is known about the way the brain really works. But one fact which is apparent from available data is that 'TIME' is a critical component of its computational process. The brain is a dynamical system whose state evolves with time. Outside stimulus is processed and transformed repeatedly within it, with a multitude of signals interacting with each other in a complex, time-dependent manner. As a result, the process of pattern recognition inside the brain is also a time-dependent evolution of states where the initial image of the unknown pattern is progressively transformed into a form which represents the class of that pattern. In this thesis, we seek to achieve some of the advantages of the brain as a classifier by defining a model which captures the importance of time in the recognition process. The 'race to the attractor' neural network model involves the use of dynamical systems which transform initially unknown patterns into simpler prototypes which each represent a pattern class. The time required for this transformation to occur increases as the resemblance between the unknown pattern and the class prototype decreases. This results in a race where dynamical systems compete to transform the unknown pattern as quickly as possible. The winner of this race identifies the unknown pattern as a member of the class which the prototype of that dynamical system represents.
7

Rosay, Sophie. "A statistical mechanics approach to the modelling and analysis of place-cell activity." Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0010/document.

Abstract:
Place cells in the hippocampus are neurons with interesting properties such as the correlation between their activity and the animal's position in space. It is believed that these properties can be for the most part understood through the collective behaviours of models of interacting simplified neurons. Statistical mechanics provides tools that permit the study of these collective behaviours, both analytically and numerically. Here, we address how these tools can be used to understand place-cell activity within the attractor neural network paradigm, a theory for memory. We first propose a model for place cells in which the formation of a localized bump of activity is accounted for by attractor dynamics. Several aspects of the collective properties of this model are studied. Thanks to the simplicity of the model, they can be understood in great detail. The phase diagram of the model is computed and discussed in relation to previous works on attractor neural networks. The dynamical evolution of the system displays particularly rich patterns. The second part of this thesis deals with decoding place-cell activity, and the implications of the attractor hypothesis on this problem. We compare several decoding methods and their results on the processing of experimental recordings of place cells in a freely behaving rat.
8

Strandqvist, Jonas. "Attractors of autoencoders : Memorization in neural networks." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97746.

Abstract:
It is an important question in machine learning to understand how neural networks learn. This thesis sheds further light onto this by studying autoencoder neural networks which can memorize data by storing it as attractors. What this means is that an autoencoder can learn a training set and later produce parts or all of this training set even when given other inputs not belonging to this set. We seek to illuminate how ReLU networks handle memorization when trained with different setups: with and without bias, for different widths and depths, and using two different types of training images, from the CIFAR10 dataset and randomly generated. For this, we created controlled experiments in which we train autoencoders and compute the eigenvalues of their Jacobian matrices to discern the number of data points stored as attractors. We also manually verify and analyze these results for patterns and behavior. With this thesis we broaden the understanding of ReLU autoencoders: we find that the structure of the data has an impact on the number of attractors. For instance, we produced autoencoders in which every training image became an attractor when we trained with random pictures but not with CIFAR10. Changes to depth and width on these two types of data also show different behaviour. Moreover, we observe that loss has less of an impact than expected on the attractors of trained autoencoders.
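
The attractor test described in this abstract can be sketched directly: iterate the autoencoder from an input to a candidate fixed point, then check that all eigenvalues of the Jacobian there lie inside the unit circle. A minimal sketch on a small untrained network (the thesis trains ReLU autoencoders on CIFAR10 and random images first; the architecture below is a placeholder):

```python
import torch
import torch.nn as nn

# Tiny stand-in autoencoder; a fixed point x* with F(x*) = x* is an
# attractor when the spectral radius of dF/dx at x* is below 1.
ae = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 16))

def attractor_check(x, iters=200, tol=1e-4):
    with torch.no_grad():              # settle onto a candidate fixed point
        z = x.clone()
        for _ in range(iters):
            z = ae(z)
        if (ae(z) - z).norm() > tol:
            return False, None         # iteration did not converge
    J = torch.autograd.functional.jacobian(ae, z)
    rho = torch.linalg.eigvals(J).abs().max().item()
    return rho < 1.0, rho

is_attr, rho = attractor_check(torch.randn(16))
print(f"attracting fixed point: {is_attr}, spectral radius: {rho}")
```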
9

Martí, Ortega Daniel. "Neural stochastic dynamics of perceptual decision making." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7552.

Abstract:
Computational models based on large-scale, neurobiologically-inspired networks describe the decision-related activity observed in some cortical areas as a transition between attractors of the cortical network. Stimulation induces a change in the attractor configuration and drives the system out from its initial resting attractor to one of the existing attractors associated with the categorical choices. The noise present in the system renders transitions random. We show that there exist two qualitatively different mechanisms for decision, each with distinctive psychophysical signatures. The decision mechanism arising at low inputs, entirely driven by noise, leads to skewed distributions of decision times, with a mean governed by the amplitude of the noise. Moreover, both decision times and performances are monotonically decreasing functions of the overall external stimulation. We also propose two methods, one based on the macroscopic approximation and one based on center manifold theory, to simplify the description of multistable stochastic neural systems.
10

Posani, Lorenzo. "Inference and modeling of biological networks : a statistical-physics approach to neural attractors and protein fitness landscapes." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE043/document.

Abstract:
The recent advent of high-throughput experimental procedures has opened a new era for the quantitative study of biological systems. Today, electrophysiology recordings and calcium imaging allow for the in vivo simultaneous recording of hundreds to thousands of neurons. In parallel, thanks to automated sequencing procedures, the libraries of known functional proteins expanded from thousands to millions in just a few years. This current abundance of biological data opens a new series of challenges for theoreticians. Accurate and transparent analysis methods are needed to process this massive amount of raw data into meaningful observables. Concurrently, the simultaneous observation of a large number of interacting units enables the development and validation of theoretical models aimed at the mechanistic understanding of the collective behavior of biological systems. In this manuscript, we propose an approach to both these challenges based on methods and models from statistical physics. We present an application of these methods to problems from neuroscience and bioinformatics, focusing on (1) the spatial memory and navigation task in the hippocampal loop and (2) the reconstruction of the fitness landscape of proteins from homologous sequence data.

Book chapters on the topic "Potts Attractor Neural Network"

1

Lansner, Anders, Anders Sandberg, Karl Magnus Petersson, and Martin Ingvar. "On Forgetful Attractor Network Memories." In Artificial Neural Networks in Medicine and Biology, 54–62. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0513-8_7.

2

Del Giudice, Paolo, and Stefano Fusi. "Attractor dynamics in an electronic neural network." In Lecture Notes in Computer Science, 1265–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0020325.

3

Zou, Xiaolong, Zilong Ji, Xiao Liu, Yuanyuan Mi, K. Y. Michael Wong, and Si Wu. "Learning a Continuous Attractor Neural Network from Real Images." In Neural Information Processing, 622–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70093-9_66.

4

Okamoto, Hiroshi. "Local Detection of Communities by Attractor Neural-Network Dynamics." In Springer Series in Bio-/Neuroinformatics, 115–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09903-3_6.

5

Seow, Ming-Jung, and Vijayan K. Asari. "Recurrent Network as a Nonlinear Line Attractor for Skin Color Association." In Advances in Neural Networks – ISNN 2004, 870–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_143.

6

Koroutchev, Kostadin, and Elka Korutcheva. "Improved Storage Capacity of Hebbian Learning Attractor Neural Network with Bump Formations." In Artificial Neural Networks – ICANN 2006, 234–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_25.

7

Okamoto, Hiroshi. "Community Detection as Pattern Restoration by Attractor Neural-Network Dynamics." In Information Processing in Cells and Tissues, 197–207. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23108-2_17.

8

Hamid, Oussama H., and Jochen Braun. "Reinforcement Learning and Attractor Neural Network Models of Associative Learning." In Studies in Computational Intelligence, 327–49. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16469-0_17.

9

Carrasco, Marco P., and Margarida V. Pato. "A Potts Neural Network Heuristic for the Class/Teacher Timetabling Problem." In Applied Optimization, 173–86. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-4137-7_8.

10

Frolov, Alexander A., Dušan Húsek, and Pavel Yu Polyakov. "Attractor Neural Network Combined with Likelihood Maximization Algorithm for Boolean Factor Analysis." In Advances in Neural Networks – ISNN 2012, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_1.


Conference papers on the topic "Potts Attractor Neural Network"

1

PIRMORADIAN, SAHAR, and ALESSANDRO TREVES. "ENCODING WORDS INTO A POTTS ATTRACTOR NETWORK." In Proceedings of the 13th Neural Computation and Psychology Workshop. WORLD SCIENTIFIC, 2013. http://dx.doi.org/10.1142/9789814458849_0003.

2

Doboli, S., and A. A. Minai. "Network capacity for latent attractor computation." In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.857840.

3

Cai, Kuijie, and Jihong Shen. "Continuous attractor neural network model of multisensory integration." In 2011 International Conference on System Science, Engineering Design and Manufacturing Informatization (ICSEM). IEEE, 2011. http://dx.doi.org/10.1109/icssem.2011.6081317.

4

Formanek, Lukas, and Ondrej Karpis. "Learning Lorenz attractor differential equations using neural network." In 2020 5th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM). IEEE, 2020. http://dx.doi.org/10.1109/seeda-cecnsm49515.2020.9221785.

5

Usher, M., and E. Ruppin. "An attractor neural network model of semantic fact retrieval." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137917.

6

Koroutchev, Kostadin. "Spatial asymmetric retrieval states in binary attractor neural network." In NOISE AND FLUCTUATIONS: 18th International Conference on Noise and Fluctuations - ICNF 2005. AIP, 2005. http://dx.doi.org/10.1063/1.2036825.

7

Pereira, Patricia, Anders Lansner, and Pawel Herman. "Incremental Attractor Neural Network Modelling of the Lifespan Retrieval Curve." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9891922.

8

Azarpour, M., S. A. Seyyedsalehi, and A. Taherkhani. "Robust pattern recognition using chaotic dynamics in Attractor Recurrent Neural Network." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596375.

9

Zandi Mehran, Y., and A. M. Nasrabadi. "Neural network application in strange attractor investigation to detect a FGD." In 2008 4th International IEEE Conference "Intelligent Systems" (IS). IEEE, 2008. http://dx.doi.org/10.1109/is.2008.4670470.

10

Rathore, S., D. Bush, P. Latham, and N. Burgess. "Oscillatory dynamics in an attractor neural network with firing rate adaptation." In PHYSICS, COMPUTATION, AND THE MIND - ADVANCES AND CHALLENGES AT INTERFACES: Proceedings of the 12th Granada Seminar on Computational and Statistical Physics. AIP, 2013. http://dx.doi.org/10.1063/1.4776524.
