Dissertations / Theses on the topic 'Potts Attractor Neural Network'

Consult the top 17 dissertations / theses for your research on the topic 'Potts Attractor Neural Network.'

1

Seybold, John. "An attractor neural network model of spoken word recognition." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335839.

2

Pereira, Patrícia. "Attractor Neural Network modelling of the Lifespan Retrieval Curve." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280732.

Abstract:
Human capability to recall episodic memories depends on how much time has passed since the memory was encoded. This dependency is described by a memory retrieval curve that reflects an interesting phenomenon referred to as the reminiscence bump: a tendency for older people to recall more memories formed during their young adulthood than in other periods of life. This phenomenon can be modelled with an attractor neural network, for example, the firing-rate Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms underlying the reminiscence bump in the neural network model are systematically studied, investigating the effects of synaptic plasticity, network architecture and other relevant parameters on the characteristics of the bump. The most influential factors turn out to be the magnitude of dopamine-linked plasticity at birth and the time constant of the exponential plasticity decay with age, which together set the position of the bump. The other parameters mainly influence the overall amplitude of the lifespan retrieval curve. Furthermore, the recency phenomenon, i.e. the tendency to remember the most recent memories, can also be parameterized by adding a constant to the exponentially decaying plasticity function representing the decrease in the level of dopamine neurotransmitters.
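A minimal sketch of the plasticity schedule described in this abstract, with illustrative (not fitted) parameter values: the dopamine-linked plasticity decays exponentially from its value at birth, and the added constant is what produces the recency effect.

```python
import numpy as np

# Sketch of the plasticity schedule from the abstract above.  kappa_0, tau
# and c are illustrative assumptions, not values fitted in the thesis.
kappa_0 = 1.0   # magnitude of dopamine-linked plasticity at birth
tau = 15.0      # time constant of the exponential decay (years)
c = 0.05        # constant floor producing the recency effect

def plasticity(age):
    """Encoding strength at a given age: exponential decay plus a floor."""
    return kappa_0 * np.exp(-age / tau) + c

for age in range(0, 90, 10):
    print(f"age {age:2d}: plasticity {plasticity(age):.3f}")
```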
3

Ericson, Julia. "Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291553.

Abstract:
In recent decades, computational models have become useful tools for studying biological neural networks. These models are typically constrained either by behavioural data from neuropsychological studies or by biological data from neuroscience. One model of the latter kind is the Bayesian Confidence Propagating Neural Network (BCPNN) - an attractor network with a Bayesian learning rule which has been proposed as a model for various types of memory. In this thesis, I have further studied the potential of the BCPNN in short-term sequential memory. More specifically, I have investigated whether the network can be used to qualitatively replicate behaviours of immediate verbal serial recall, and thereby offer insight into the network-level mechanisms which give rise to these behaviours. The simulations showed that the model was able to reproduce various benchmark effects such as the word length and irrelevant speech effects. It could also simulate the bow-shaped positional accuracy curve as well as some backward recall if the to-be-recalled sequence was short enough. Finally, the model showed some ability to handle sequences with repeated patterns. However, the current model architecture was not sufficient for simulating the effects of rhythm, such as temporally grouping the inputs or stressing a specific element in the sequence. Overall, even though the model is not complete, it showed promising results as a tool for investigating biological memory, and it could explain various benchmark behaviours in immediate serial recall through neuroscientifically inspired learning rules and architecture.
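As a toy illustration of attractor-to-attractor stepping in serial recall, the classical asymmetric-Hebbian sketch below hops through a stored list when cued with its first item. It is only a stand-in: the thesis uses the BCPNN, not this Hopfield-style rule, and all sizes are arbitrary.

```python
import numpy as np

# Classical sequence-recall toy with asymmetric Hebbian couplings: an
# illustrative stand-in, not the BCPNN used in the thesis.
rng = np.random.default_rng(0)
N, P = 200, 6                                # neurons, list length
xi = rng.choice([-1.0, 1.0], size=(P, N))    # list items as random patterns
W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

s = xi[0].copy()                             # cue with the first item
for step in range(1, P):
    s = np.sign(W @ s)                       # each update hops to the next item
    overlap = float(s @ xi[step]) / N        # ~1.0 means the item was recalled
    print(f"overlap with item {step}: {overlap:.2f}")
```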
4

Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.

Abstract:
Feedforward neural networks have been widely studied and used in many applications in science and engineering. The training of this type of network is mainly undertaken using the well-known backpropagation-based learning algorithms. One major problem with these algorithms is their slow training convergence speed, which hinders their application. Many researchers have developed improvements and enhancements to speed up convergence, but the problem has not been fully addressed. This thesis contributes new backpropagation learning algorithms based on the terminal attractor concept that improve existing algorithms such as gradient descent and Levenberg-Marquardt. The new algorithms enable fast convergence both far from and close to the ideal weights. In particular, a new fast convergence mechanism is proposed based on the fast terminal attractor concept. Comprehensive simulation studies demonstrate the effectiveness of the proposed backpropagation algorithms with terminal attractors. Finally, three practical applications - time series forecasting, character recognition and image interpolation - show the practicality and usefulness of the proposed learning algorithms, with comprehensive comparative studies against existing algorithms.
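The terminal attractor concept invoked here can be illustrated in a few lines: fractional-power dynamics reach their equilibrium in finite time, whereas ordinary linear (gradient-descent-like) dynamics only converge asymptotically. This is a conceptual sketch with arbitrary constants, not the thesis's training algorithms.

```python
import numpy as np

# Terminal attractor vs. ordinary linear dynamics: a conceptual sketch of
# the finite-time convergence that the proposed algorithms exploit.
lam, dt = 1.0, 1e-3
x_term, x_lin, t = 1.0, 1.0, 0.0

while x_term > 0.0 and t < 10.0:
    x_term += dt * (-lam * np.sign(x_term) * abs(x_term) ** (1.0 / 3.0))
    x_lin += dt * (-lam * x_lin)
    t += dt

# For xdot = -x**(1/3) with x(0) = 1 the equilibrium is reached at t = 1.5
# exactly; the linear system still has a residual there.
print(f"terminal dynamics hit 0 at t = {t:.2f}; linear residual = {x_lin:.3f}")
```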
5

Villani, Gianluca. "Analysis of an Attractor Neural Network Model for Working Memory: A Control Theory Approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260079.

Abstract:
Working Memory (WM) is a general-purpose cognitive system responsible for temporarily holding information in the service of higher-order cognition, e.g. decision making. In this thesis we focus on a non-spiking model belonging to a special family of biologically inspired recurrent artificial neural networks aiming to account for human experimental data on free recall. Considering its modular structure, this thesis gives a networked-system representation of WM in order to analyze its stability and synchronization properties. Furthermore, with the tools provided by bifurcation analysis we investigate the role of the different parameters on the generated synchronized patterns. To the best of our knowledge, the proposed dynamical recurrent neural network has not been studied before from a control theory perspective.
6

Ferland, Guy J. M. G. "A new paradigm for the classification of patterns: The 'race to the attractor' neural network model." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9298.

Abstract:
The human brain is arguably the best known classifier around. It can learn complex classification tasks with little apparent effort. It can learn to classify new patterns without forgetting old ones. It can learn a seemingly unlimited number of pattern classes. And it displays amazing resilience through its ability to persevere with reliable classifications despite damage to itself (e.g., dying neurons). These advantages have motivated researchers from many fields in the quest to understand the brain in order to duplicate its ability in an artificial system. And yet, little is known about the way the brain really works. But one fact which is apparent from available data is that 'TIME' is a critical component of its computational process. The brain is a dynamical system whose state evolves with time. Outside stimulus is processed and transformed repeatedly within it, with a multitude of signals interacting with each other in a complex, time-dependent manner. As a result, the process of pattern recognition inside the brain is also a time-dependent evolution of states where the initial image of the unknown pattern is progressively transformed into a form which represents the class of that pattern. In this thesis, we seek to achieve some of the advantages of the brain as a classifier by defining a model which captures the importance of time in the recognition process. The 'race to the attractor' neural network model involves the use of dynamical systems which transform initially unknown patterns into simpler prototypes which each represent a pattern class. The time required for this transformation to occur increases as the resemblance between the unknown pattern and the class prototype decreases. This results in a race where dynamical systems compete to transform the unknown pattern as quickly as possible. The winner of this race identifies the unknown pattern as a member of the class which the prototype of that dynamical system represents.
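A minimal sketch of the race idea, with a simple linear contraction standing in for the model's actual dynamical systems (all sizes and rates illustrative): the system whose prototype is closest to the unknown pattern converges first and thereby labels it.

```python
import numpy as np

# 'Race to the attractor' sketch: one contracting dynamical system per class
# prototype, all started from the unknown pattern; the first system to reach
# its prototype classifies the pattern.  The linear contraction is an
# illustrative stand-in for the model's actual dynamics.
rng = np.random.default_rng(0)
prototypes = {"A": rng.choice([-1.0, 1.0], 64), "B": rng.choice([-1.0, 1.0], 64)}

def classify(pattern, rate=0.05, tol=0.5, max_steps=10_000):
    states = {c: pattern.astype(float).copy() for c in prototypes}
    for step in range(max_steps):
        for c, proto in prototypes.items():
            states[c] += rate * (proto - states[c])        # flow toward prototype c
            if np.linalg.norm(states[c] - proto) < tol:    # first arrival wins
                return c, step
    return None, max_steps

noisy_a = prototypes["A"] * np.where(rng.random(64) < 0.1, -1.0, 1.0)  # ~10% flipped bits
label, steps = classify(noisy_a)
print(f"classified as {label} after {steps} steps")
```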
7

Rosay, Sophie. "A statistical mechanics approach to the modelling and analysis of place-cell activity." Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0010/document.

Abstract:
Place cells in the hippocampus are neurons with interesting properties such as the correlation between their activity and the animal's position in space. It is believed that these properties can be for the most part understood through the collective behaviours of models of interacting simplified neurons. Statistical mechanics provides tools for studying these collective behaviours, both analytically and numerically. Here, we address how these tools can be used to understand place-cell activity within the attractor neural network paradigm, a theory for memory. We first propose a model for place cells in which the formation of a localized bump of activity is accounted for by attractor dynamics. Several aspects of the collective properties of this model are studied. Thanks to the simplicity of the model, they can be understood in great detail. The phase diagram of the model is computed and discussed in relation with previous works on attractor neural networks. The dynamical evolution of the system displays particularly rich patterns. The second part of this thesis deals with decoding place-cell activity, and the implications of the attractor hypothesis on this problem. We compare several decoding methods and their results on the processing of experimental recordings of place cells in a freely behaving rat.
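As a concrete illustration of the decoding problem treated in the second part, the sketch below decodes position from synthetic place-cell spike counts with a standard Poisson maximum-likelihood decoder. The tuning curves and sizes are assumptions for the example, not the thesis's data or estimators.

```python
import numpy as np

# Poisson maximum-likelihood position decoding from synthetic place-cell
# activity: an illustrative sketch, not the thesis's specific estimators.
rng = np.random.default_rng(1)
positions = np.linspace(0.0, 1.0, 100)      # discretized linear track
centers = np.linspace(0.0, 1.0, 20)         # one place-field center per cell
tuning = 10.0 * np.exp(-(positions[None, :] - centers[:, None]) ** 2
                       / (2 * 0.05 ** 2))   # mean spike counts per time bin

true_idx = 37
counts = rng.poisson(tuning[:, true_idx])   # observed spike counts in one bin

# log P(counts | x) for independent Poisson cells, dropping x-independent terms
log_like = counts @ np.log(tuning + 1e-12) - tuning.sum(axis=0)
print("decoded position:", positions[np.argmax(log_like)],
      "true position:", positions[true_idx])
```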
8

Strandqvist, Jonas. "Attractors of autoencoders : Memorization in neural networks." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97746.

Abstract:
It is an important question in machine learning to understand how neural networks learn. This thesis sheds further light onto this by studying autoencoder neural networks which can memorize data by storing it as attractors. What this means is that an autoencoder can learn a training set and later reproduce parts or all of this training set even when given inputs not belonging to this set. We seek to illuminate how ReLU networks handle memorization when trained with different setups: with and without bias, for different widths and depths, and using two different types of training images, taken from the CIFAR10 dataset or randomly generated. For this, we created controlled experiments in which we train autoencoders and compute the eigenvalues of their Jacobian matrices to discern the number of data points stored as attractors. We also manually verify and analyze these results for patterns and behavior. With this thesis we broaden the understanding of ReLU autoencoders: we find that the structure of the data has an impact on the number of attractors. For instance, we produced autoencoders where every training image became an attractor when we trained with random pictures but not with CIFAR10. Changes to depth and width on these two types of data also show different behaviour. Moreover, we observe that loss has less of an impact than expected on the attractors of trained autoencoders.
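The attractor test described here reduces to a linear-algebra check: a stored point x must (approximately) satisfy f(x) = x, and the Jacobian of f at x must have all eigenvalues inside the unit circle. A minimal sketch for a one-hidden-layer ReLU autoencoder with assumed, untrained weights:

```python
import numpy as np

# Attractor check for a ReLU autoencoder: compute the Jacobian at a point
# and test whether its spectral radius is below one.  The (untrained) weights
# and shapes here are assumptions for illustration; a trained network would
# additionally need f(x) ~ x at the stored point.
rng = np.random.default_rng(2)
W1, b1 = 0.1 * rng.standard_normal((32, 64)), np.zeros(32)
W2, b2 = 0.1 * rng.standard_normal((64, 32)), np.zeros(64)

def f(x):
    h = np.maximum(0.0, W1 @ x + b1)     # ReLU encoder
    return W2 @ h + b2                   # linear decoder

def jacobian(x):
    # For ReLU networks the Jacobian is piecewise constant: W2 diag(mask) W1
    mask = (W1 @ x + b1 > 0).astype(float)
    return W2 @ (W1 * mask[:, None])

x = rng.standard_normal(64)
spectral_radius = np.max(np.abs(np.linalg.eigvals(jacobian(x))))
print(f"f(x) - x norm: {np.linalg.norm(f(x) - x):.2f}, "
      f"spectral radius: {spectral_radius:.3f} (attractor needs < 1)")
```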
9

Martí, Ortega Daniel. "Neural stochastic dynamics of perceptual decision making." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7552.

Abstract:
Computational models based on large-scale, neurobiologically inspired networks describe the decision-related activity observed in some cortical areas as a transition between attractors of the cortical network. Stimulation induces a change in the attractor configuration and drives the system out of its initial resting attractor to one of the existing attractors associated with the categorical choices. The noise present in the system renders transitions random. We show that there exist two qualitatively different mechanisms for decision, each with distinctive psychophysical signatures. The decision mechanism arising at low inputs, entirely driven by noise, leads to skewed distributions of decision times, with a mean governed by the amplitude of the noise. Moreover, both decision times and performance are monotonically decreasing functions of the overall external stimulation. We also propose two methods, one based on the macroscopic approximation and one based on center manifold theory, to simplify the description of multistable stochastic neural systems.
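A one-dimensional double-well caricature of the noise-driven regime (illustrative constants, not the large-scale network model): started at the neutral state, the system is pushed by noise to one of the choice attractors, and the resulting decision times are right-skewed.

```python
import numpy as np

# Double-well caricature of noise-driven decisions: starting from the neutral
# state, noise pushes the system to one of the choice attractors at +-1.
# This toy is illustrative only; it is not the spiking network model.
rng = np.random.default_rng(3)

def decision_time(stim=0.0, noise=0.3, dt=1e-3, max_t=50.0):
    x, t = 0.0, 0.0
    while abs(x) < 1.0 and t < max_t:
        drift = x - x ** 3 + stim        # symmetric double-well force + stimulus
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

times = [decision_time() for _ in range(500)]
print(f"mean = {np.mean(times):.2f}, median = {np.median(times):.2f} (right-skewed)")
```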
10

Posani, Lorenzo. "Inference and modeling of biological networks : a statistical-physics approach to neural attractors and protein fitness landscapes." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE043/document.

Abstract:
The recent advent of high-throughput experimental procedures has opened a new era for the quantitative study of biological systems. Today, electrophysiology recordings and calcium imaging allow for the in vivo simultaneous recording of hundreds to thousands of neurons. In parallel, thanks to automated sequencing procedures, the libraries of known functional proteins expanded from thousands to millions in just a few years. This current abundance of biological data opens a new series of challenges for theoreticians. Accurate and transparent analysis methods are needed to process this massive amount of raw data into meaningful observables. Concurrently, the simultaneous observation of a large number of interacting units enables the development and validation of theoretical models aimed at the mechanistic understanding of the collective behavior of biological systems. In this manuscript, we propose an approach to both these challenges based on methods and models from statistical physics. We present an application of these methods to problems from neuroscience and bioinformatics, focusing on (1) the spatial memory and navigation task in the hippocampal loop and (2) the reconstruction of the fitness landscape of proteins from homologous sequence data.
11

Battista, Aldo. "Low-dimensional continuous attractors in recurrent neural networks : from statistical physics to computational neuroscience." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE012.

Abstract:
How sensory information is encoded and processed by neuronal circuits is a central question in computational neuroscience. In many brain areas, the activity of neurons is found to depend strongly on some continuous sensory correlate; examples include simple cells in the V1 area of the visual cortex coding for the orientation of a bar presented to the retina, and head direction cells in the subiculum or place cells in the hippocampus, whose activities depend, respectively, on the orientation of the head and the position of an animal in the physical space. Over the past decades, continuous attractor neural networks were introduced as an abstract model for the representation of a few continuous variables in a large population of noisy neurons. Through an appropriate set of pairwise interactions between the neurons, the dynamics of the neural network is constrained to span a low-dimensional manifold in the high-dimensional space of activity configurations, and thus codes for a few continuous coordinates on the manifold, corresponding to spatial or sensory information. While the original model was based on how to build a single continuous manifold in a high-dimensional space, it was soon realized that the same neural network should code for many distinct attractors, i.e., corresponding to different spatial environments or contextual situations. An approximate solution to this harder problem was proposed twenty years ago, and relied on an ad hoc prescription for the pairwise interactions between neurons, summing up the different contributions corresponding to each single attractor taken independently of the others. This solution, however, suffers from two major issues: the interference between maps strongly limits the storage capacity, and the spatial resolution within a map is not controlled. In the present manuscript, we address these two issues. We show how to achieve optimal storage of continuous attractors and study the optimal trade-off between capacity and spatial resolution, that is, how the requirement of higher spatial resolution affects the maximal number of attractors that can be stored, proving that recurrent neural networks are very efficient memory devices capable of storing many continuous attractors at high resolution. In order to tackle these problems we used a combination of techniques from statistical physics of disordered systems and random matrix theory. On the one hand, we extended Gardner's theory of learning to the case of patterns with strong spatial correlations. On the other hand, we introduced and studied the spectral properties of a new ensemble of random matrices, i.e., the additive superimposition of an extensive number of independent Euclidean random matrices in the high-density regime. In addition, this approach defines a concrete framework to address many questions, in close connection with ongoing experiments, related in particular to the discussion of the random remapping hypothesis, the coding of spatial information, and the development of brain circuits in young animals. Finally, we discuss a possible mechanism for the learning of continuous attractors from real images.
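The ad hoc prescription mentioned in this abstract has a simple form: the coupling matrix is a sum, over the stored maps, of a distance-dependent kernel evaluated in each map's own arrangement of the neurons. A sketch on a ring, with an assumed cosine kernel and illustrative sizes (each random permutation playing the role of a remapping):

```python
import numpy as np

# 'Summed maps' coupling prescription: one kernel contribution per stored
# spatial map, each map a random permutation of the neurons on a ring.
# Sizes and the cosine kernel are illustrative assumptions.
rng = np.random.default_rng(4)
N, n_maps = 200, 3
theta = 2 * np.pi * np.arange(N) / N        # positions on a ring

def kernel(d):
    return np.cos(d)                         # excitation between nearby cells

J = np.zeros((N, N))
for _ in range(n_maps):
    perm = rng.permutation(N)                # remapping: a new place-field layout
    pos = theta[perm]
    J += kernel(pos[:, None] - pos[None, :]) / N
np.fill_diagonal(J, 0.0)
print("coupling matrix", J.shape, "built from", n_maps, "maps")
```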
12

Doria, Felipe França. "Padrões estruturados e campo aleatório em redes complexas." Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/144076.

Abstract:
This work focuses on the study of two complex networks. The first is a random-field Ising model, with the random field following either a Gaussian or a bimodal distribution. A finite-connectivity technique was used to solve it, and a Monte Carlo method was applied to verify our results. Our results indicate that for the Gaussian distribution the phase transition is always second-order. For the bimodal distribution there is a tricritical point that depends on the value of the connectivity; below a certain minimum connectivity, only a second-order transition exists. The second is a metric attractor neural network. More precisely, we study the ability of this model to learn structured patterns. In particular, the chosen patterns were taken from fingerprints, which present some local features. Our results show that the lower the activity of the fingerprint patterns, the higher the load ratio and retrieval quality. A theoretical framework was also developed as a function of five parameters: the load ratio, the connectivity, the density degree of the network, the randomness ratio and the spatial pattern correlation.
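For the first model, a bare-bones Metropolis simulation of a random-field Ising model on a sparse random graph might look like the sketch below (illustrative size, connectivity and temperature; the finite-connectivity treatment in the thesis is a separate analytic technique):

```python
import numpy as np

# Metropolis sketch of a random-field Ising model with Gaussian fields on a
# sparse Erdos-Renyi graph.  All parameter values are illustrative.
rng = np.random.default_rng(5)
N, c, T = 500, 4, 1.5                      # spins, mean connectivity, temperature
A = (rng.random((N, N)) < c / N).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # symmetric adjacency matrix
h = rng.standard_normal(N)                 # Gaussian random fields
s = rng.choice([-1.0, 1.0], N)

for sweep in range(200):
    for i in rng.integers(0, N, N):        # N random single-spin updates
        dE = 2 * s[i] * (A[i] @ s + h[i])  # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] *= -1
print("magnetization:", s.mean())
```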
13

Tully, Philip. "Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits." Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205568.

Abstract:
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena exist to act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning and intrinsic graded persistent firing levels. The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
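The core of the BCPNN learning rule studied here can be caricatured in rate-based form: probability estimates maintained as exponentially decaying traces are mapped to a weight w_ij = log(p_ij / (p_i p_j)) and a bias log p_j. The spike-based version developed in the thesis adds further trace dynamics and the molecular-cascade mappings; the snippet below is only the skeleton.

```python
import numpy as np

# Minimal rate-based caricature of the BCPNN learning rule: exponentially
# decaying probability traces mapped to Bayesian weights and biases.
def bcpnn_step(p_i, p_j, p_ij, x_i, x_j, tau=100.0):
    p_i += (x_i - p_i) / tau                 # presynaptic activation trace
    p_j += (x_j - p_j) / tau                 # postsynaptic activation trace
    p_ij += (x_i * x_j - p_ij) / tau         # co-activation trace
    return p_i, p_j, p_ij

p_i = p_j = p_ij = 0.5
for x_i, x_j in [(1.0, 1.0), (0.0, 0.0)] * 300:  # two units that always co-activate
    p_i, p_j, p_ij = bcpnn_step(p_i, p_j, p_ij, x_i, x_j)

eps = 1e-4
w = np.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))
bias = np.log(p_j + eps)
print(f"w = {w:.2f} (~log 2 for perfectly correlated units), bias = {bias:.2f}")
```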

14

Martínez-García, Marina. "Statistical analysis of neural correlates in decision-making." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/283111.

Abstract:
We investigated the neuronal processes which occur during a decision-making task based on a perceptual classification judgment. For this purpose we analysed three different experimental paradigms (somatosensory, visual, and auditory) in two different species (monkey and rat), with the common goal of shedding light on the information carried by neurons. In particular, we focused on how the information content is preserved in the underlying neuronal activity over time. Furthermore we considered how the decision, the stimuli, and the confidence are encoded in memory and, when the experimental paradigm allowed it, how attention modulates these features. Finally, we went one step further and investigated the interactions between brain areas that arise during the process of decision-making.
15

Insabato, Andrea. "Neurodynamical theory of decision confidence." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/129463.

Abstract:
Decision confidence offers a window on introspection and onto the evaluation mechanisms associated with decision-making. Nonetheless, we do not yet have a thorough understanding of its neurophysiological and computational substrate. There are mainly two experimental paradigms to measure decision confidence in animals: post-decision wagering and uncertain option. In this thesis we explore and try to shed light on the computational mechanisms underlying confidence-based decision-making in both experimental paradigms. We propose that a double-layer attractor neural network can account for neural recordings and behavior of rats in a post-decision wagering experiment. In this model a decision-making layer takes the perceptual decision and a separate confidence layer monitors the activity of the decision-making layer and makes a judgment about the confidence in the decision. Moreover, we test the predictions of the model by analyzing neuronal data from monkeys performing a decision-making task. We show the existence of neurons in ventral premotor cortex that encode decision confidence, and find that both a continuous and a discrete encoding of decision confidence are present in the primate brain. In particular, we show that different neurons encode confidence through three different mechanisms: 1. switch time coding, 2. rate coding and 3. binary coding. Furthermore, we propose a multiple-choice attractor network model in order to account for uncertain option tasks. In this model the confidence emerges from the stochastic dynamics of decision neurons, making a separate monitoring network (as in the model of the post-decision wagering task) unnecessary. The model explains the behavioral and neural data recorded in the monkey lateral intraparietal area as a result of the multistable dynamics of the attractor network, and yields several testable predictions. The rich neurophysiological representation and computational mechanisms of decision confidence highlight the different functional aspects of confidence in the making of a decision.
16

Larsson, Johan P. "Modelling neuronal mechanisms of the processing of tones and phonemes in the higher auditory system." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/97293.

Abstract:
Though much experimental research exists on both the basic neural mechanisms of hearing and the psychological organization of language perception, there is a relative paucity of modelling work on these subjects. Here we describe two modelling efforts. One proposes a novel mechanism of frequency-selectivity improvement that accounts for results of neurophysiological experiments investigating manifestations of forward masking and, above all, auditory streaming in the primary auditory cortex (A1). The mechanism works in a feed-forward network with depressing thalamocortical synapses, but is further shown to be robust to a realistic organization of the neural circuitry in A1, which accounts for a wealth of neurophysiological data. The other effort describes a candidate mechanism for explaining differences in word/non-word perception between early and simultaneous bilinguals found in psychophysical studies. By simulating lexical decision and phoneme discrimination tasks in an attractor neural network model, we strengthen the hypothesis that people often exposed to dialectal word variations can store these in their lexicons without altering their phoneme representations.
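The depressing-synapse ingredient of the first model can be sketched with the standard Tsodyks-Markram resource equations (an assumed stand-in; the thesis's exact formulation and parameters may differ): each presynaptic spike consumes a fraction of the available synaptic resources, which then recover between spikes, so response amplitudes decline over a rapid tone sequence.

```python
import numpy as np

# Tsodyks-Markram-style depressing synapse: an assumed stand-in for the
# depressing thalamocortical synapses described above, with illustrative
# parameters.  Each spike transmits U * x and depletes the resources x.
def depressing_synapse(spike_times, U=0.4, tau_rec=0.5):
    x, last_t, amplitudes = 1.0, spike_times[0], []
    for t in spike_times:
        x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)  # recovery since last spike
        amplitudes.append(U * x)                               # transmitted amplitude
        x -= U * x                                             # resource depletion
        last_t = t
    return amplitudes

train = np.arange(0.0, 1.0, 0.1)   # a 10 Hz tone-driven spike train (seconds)
print([f"{a:.3f}" for a in depressing_synapse(train)])
```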
17

Engelken, Rainer. "Chaotic Neural Circuit Dynamics." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-002E-E349-9.
