Dissertations / Theses on the topic 'Potts Attractor Neural Network'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 17 dissertations / theses for your research on the topic 'Potts Attractor Neural Network.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Seybold, John. "An attractor neural network model of spoken word recognition." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335839.
Full text
Pereira, Patrícia. "Attractor Neural Network modelling of the Lifespan Retrieval Curve." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280732.
Full text
The human ability to recall episodic memories depends on how much time has passed since the memories were encoded. This dependence is described by a so-called forgetting curve, which exhibits an interesting phenomenon known as the reminiscence bump: a tendency among older people to recall more memories from adolescence and early adulthood than from other periods of life. The phenomenon can be modelled with an attractor neural network, for example a non-spiking Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms behind the reminiscence bump are studied systematically with this network model, illuminating for instance the importance of synaptic plasticity, network architecture and other relevant parameters for the emergence and character of the phenomenon. The most influential factors for the position of the bump were found to be the initial dopamine-dependent plasticity at birth and the time constant of the decay of plasticity with age. The other parameters mainly affected the overall amplitude of the lifespan retrieval curve. In addition, the recency effect, the tendency to remember recent events best, can also be parameterized by a constant added to the otherwise exponentially decaying plasticity, which may represent the density of dopamine receptors.
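The plasticity schedule this abstract describes lends itself to a brief illustration. The sketch below is a caricature, not the BCPNN model itself, and all parameter values are invented: encoding strength follows an exponential decay with age plus a constant floor (the hypothesized dopamine-receptor term), and retention decays with the time elapsed since encoding. The product reproduces the bump-plus-recency shape of the lifespan retrieval curve.

```python
import numpy as np

def lifespan_retrieval_curve(current_age=70.0, a=1.0, tau_p=15.0,
                             c=0.05, tau_f=25.0, n_ages=71):
    """Toy caricature of the lifespan retrieval curve.

    Encoding strength at age t: a * exp(-t / tau_p) + c  (plasticity schedule).
    Retention of a memory encoded at t: exp(-(current_age - t) / tau_f).
    All parameter values are illustrative guesses, not fitted quantities.
    """
    ages = np.linspace(0.0, current_age, n_ages)
    plasticity = a * np.exp(-ages / tau_p) + c          # strength at encoding
    retention = np.exp(-(current_age - ages) / tau_f)   # forgetting since then
    return ages, plasticity * retention

ages, recall = lifespan_retrieval_curve()
# With these toy parameters, recall is high for memories encoded early in
# life (high initial plasticity) and rises again near the current age
# (recency), echoing the bump-plus-recency shape discussed in the abstract.
```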
Ericson, Julia. "Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291553.
Full text
In recent decades, computer simulations have become an increasingly popular tool for investigating biological neural networks. These models are typically inspired either by behavioural data from neuropsychological studies or by biological data from neuroscience. A model of the latter kind is the Bayesian Confidence Propagation Neural Network (BCPNN), an autoassociative network with a Bayesian learning rule, which has previously been used to model several types of memory. In this thesis I investigated whether the network can also serve as a model of sequential short-term memory, by examining its ability to replicate behavioural findings in verbal immediate serial recall. The experiments showed that the model could simulate several important key effects, such as the word length effect and the irrelevant speech effect. It could also reproduce the bow-shaped curve describing the proportion of successful recalls as a function of serial position, and it could recall short sequences backwards. The model also showed some ability to handle sequences in which an item recurred later in the sequence. The current model was, however, not sufficient to simulate the effects of rhythm, such as temporal grouping or stress on specific items in the sequence. On the whole, though incomplete in its present form, the model looks promising, since it could simulate several important key effects and explain them in terms of neuroscientifically inspired learning rules.
Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University, Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.
Full text
Villani, Gianluca. "Analysis of an Attractor Neural Network Model for Working Memory: A Control Theory Approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260079.
Full text
Working memory is a broad, overarching cognitive system responsible for the temporary storage of information in higher-order cognition, such as decision making. This master's thesis studies non-spiking models from a particular family of biologically inspired recurrent neural networks, with the aim of accounting for human experimental data on the phenomenon of free recall. Exploiting the modular structure of these models, the thesis presents a networked-systems representation of working memory in which stability and synchronization properties can be examined. The influence of different system parameters on the generated synchronization patterns was investigated using bifurcation analysis. To our knowledge, the proposed dynamical recurrent neural network had not previously been studied from a control-theoretic perspective.
Ferland, Guy J. M. G. "A new paradigm for the classification of patterns: The 'race to the attractor' neural network model." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9298.
Full text
Rosay, Sophie. "A statistical mechanics approach to the modelling and analysis of place-cell activity." Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0010/document.
Full text
Place cells in the hippocampus are neurons with interesting properties such as the correlation between their activity and the animal's position in space. It is believed that these properties can be for the most part understood by collective behaviours of models of interacting simplified neurons. Statistical mechanics provides tools permitting to study these collective behaviours, both analytically and numerically. Here, we address how these tools can be used to understand place-cell activity within the attractor neural network paradigm, a theory for memory. We first propose a model for place cells in which the formation of a localized bump of activity is accounted for by attractor dynamics. Several aspects of the collective properties of this model are studied. Thanks to the simplicity of the model, they can be understood in great detail. The phase diagram of the model is computed and discussed in relation with previous works on attractor neural networks. The dynamical evolution of the system displays particularly rich patterns. The second part of this thesis deals with decoding place-cell activity, and the implications of the attractor hypothesis on this problem. We compare several decoding methods and their results on the processing of experimental recordings of place cells in a freely behaving rat.
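The formation of a localized bump of activity by attractor dynamics can be illustrated with a minimal rate model. The sketch below is not Rosay's model; it uses assumed, illustrative parameters and a plain sigmoid rate unit. Translation-invariant cosine couplings on a ring (local excitation plus global inhibition) make a bump-shaped firing profile an attractor, so a bump emerges from a random initial condition.

```python
import numpy as np

def simulate_bump(n=120, j0=-2.0, j1=8.0, gain=4.0, steps=400, dt=0.1, seed=1):
    """Minimal ring rate model: a bump of activity forms spontaneously.

    j0 < 0 is uniform (global) inhibition, j1 > 0 the amplitude of the
    cosine (local-excitation) kernel; all values are illustrative.
    """
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Translation-invariant couplings: w_ij depends only on theta_i - theta_j.
    w = (j0 + j1 * np.cos(theta[:, None] - theta[None, :])) / n
    r = rng.uniform(0.0, 1.0, n)             # random initial rates in [0, 1]
    for _ in range(steps):
        h = w @ r                            # recurrent input
        r += dt * (-r + 1.0 / (1.0 + np.exp(-gain * h)))  # sigmoid rate dynamics
    return theta, r

theta, r = simulate_bump()
# The final rate profile is strongly non-uniform: a single localized bump
# whose position on the ring depends on the random initial condition.
```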
Strandqvist, Jonas. "Attractors of autoencoders : Memorization in neural networks." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97746.
Full text
Martí Ortega, Daniel. "Neural stochastic dynamics of perceptual decision making." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7552.
Full text
Computational models based on large-scale, neurobiologically inspired networks describe the decision-related activity observed in some cortical areas as a transition between attractors of the cortical network. Stimulation induces a change in the attractor configuration and drives the system out of its initial resting attractor to one of the existing attractors associated with the categorical choices. The noise present in the system renders the transitions random. We show that there exist two qualitatively different mechanisms for decision, each with distinctive psychophysical signatures. The decision mechanism arising at low inputs, entirely driven by noise, leads to skewed distributions of decision times, with a mean governed by the amplitude of the noise. Moreover, both decision times and performance are monotonically decreasing functions of the overall external stimulation. We also propose two methods, one based on the macroscopic approximation and one based on center manifold theory, to simplify the description of multistable stochastic neural systems.
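The noise-driven transition from a resting state to one of two choice attractors can be caricatured by a one-dimensional Langevin equation. The sketch below is not the thesis's network model; the drift, noise level, threshold, and all other parameters are illustrative assumptions. It reproduces the skewed decision-time distribution the abstract mentions for noise-driven decisions.

```python
import numpy as np

def decision_times(n_trials=2000, sigma=0.25, dt=0.01, x_th=0.8,
                   t_max=20.0, seed=42):
    """Langevin caricature of noise-driven choice between two attractors.

    dx = (x - x**3) dt + sigma dW: x = 0 is the unstable rest state and
    x = +/-1 are the two choice attractors.  The 'decision time' is the
    first passage to |x| >= x_th.  All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = np.zeros(n_trials)
    hit = np.zeros(n_trials)                 # 0.0 means "no decision yet"
    for k in range(1, n_steps + 1):
        x += (x - x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
        crossed = (hit == 0.0) & (np.abs(x) >= x_th)
        hit[crossed] = k * dt                # record first-passage time
    return hit[hit > 0.0]                    # decided trials only

times = decision_times()
# The first-passage distribution is right-skewed (mean exceeds median),
# one of the psychophysical signatures discussed in the abstract.
```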
Posani, Lorenzo. "Inference and modeling of biological networks : a statistical-physics approach to neural attractors and protein fitness landscapes." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE043/document.
Full text
The recent advent of high-throughput experimental procedures has opened a new era for the quantitative study of biological systems. Today, electrophysiology recordings and calcium imaging allow for the in vivo simultaneous recording of hundreds to thousands of neurons. In parallel, thanks to automated sequencing procedures, the libraries of known functional proteins have expanded from thousands to millions in just a few years. This abundance of biological data opens a new series of challenges for theoreticians. Accurate and transparent analysis methods are needed to process this massive amount of raw data into meaningful observables. Concurrently, the simultaneous observation of a large number of interacting units enables the development and validation of theoretical models aimed at a mechanistic understanding of the collective behavior of biological systems. In this manuscript, we propose an approach to both of these challenges based on methods and models from statistical physics. We present applications of these methods to problems from neuroscience and bioinformatics, focusing on (1) the spatial memory and navigation task in the hippocampal loop and (2) the reconstruction of the fitness landscape of proteins from homologous sequence data.
Battista, Aldo. "Low-dimensional continuous attractors in recurrent neural networks : from statistical physics to computational neuroscience." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE012.
Full text
How sensory information is encoded and processed by neuronal circuits is a central question in computational neuroscience. In many brain areas, the activity of neurons is found to depend strongly on some continuous sensory correlate; examples include simple cells in the V1 area of the visual cortex coding for the orientation of a bar presented to the retina, and head direction cells in the subiculum or place cells in the hippocampus, whose activities depend, respectively, on the orientation of the head and the position of an animal in physical space. Over the past decades, continuous attractor neural networks were introduced as an abstract model for the representation of a few continuous variables in a large population of noisy neurons. Through an appropriate set of pairwise interactions between the neurons, the dynamics of the neural network is constrained to span a low-dimensional manifold in the high-dimensional space of activity configurations, and thus codes for a few continuous coordinates on the manifold, corresponding to spatial or sensory information. While the original model was based on how to build a single continuous manifold in a high-dimensional space, it was soon realized that the same neural network should code for many distinct attractors, i.e., corresponding to different spatial environments or contextual situations. An approximate solution to this harder problem was proposed twenty years ago, and relied on an ad hoc prescription for the pairwise interactions between neurons, summing up the different contributions corresponding to each single attractor taken independently of the others. This solution, however, suffers from two major issues: the interference between maps strongly limits the storage capacity, and the spatial resolution within a map is not controlled. In the present manuscript, we address these two issues.
We show how to achieve optimal storage of continuous attractors and study the optimal trade-off between capacity and spatial resolution, that is, how the requirement of higher spatial resolution affects the maximal number of attractors that can be stored, proving that recurrent neural networks are very efficient memory devices capable of storing many continuous attractors at high resolution. In order to tackle these problems we used a combination of techniques from the statistical physics of disordered systems and random matrix theory. On the one hand, we extended Gardner's theory of learning to the case of patterns with strong spatial correlations. On the other hand, we introduced and studied the spectral properties of a new ensemble of random matrices, i.e., the additive superimposition of an extensive number of independent Euclidean random matrices in the high-density regime. In addition, this approach defines a concrete framework to address many questions, in close connection with ongoing experiments, related in particular to the discussion of the random remapping hypothesis, the coding of spatial information, and the development of brain circuits in young animals. Finally, we discuss a possible mechanism for the learning of continuous attractors from real images.
Doria, Felipe França. "Padrões estruturados e campo aleatório em redes complexas." Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/144076.
Full text
This work focuses on the study of two complex networks. The first is a random-field Ising model in which the random field follows either a Gaussian or a bimodal distribution. A finite-connectivity technique was used to solve it, and a Monte Carlo method was applied to verify our results. Our results indicate that for the Gaussian distribution the phase transition is always second order. For the bimodal distribution there is a tricritical point that depends on the value of the connectivity; below a certain minimum connectivity, only a second-order transition remains. The second network is a metric attractor neural network. More precisely, we study the ability of this model to learn structured patterns; in particular, the chosen patterns were taken from fingerprints, which present some local features. Our results show that the higher the load ratio and the retrieval quality, the lower the activity of the fingerprint patterns. A theoretical framework was also developed as a function of five parameters: the load ratio, the connectivity, the density degree of the network, the randomness ratio and the spatial pattern correlation.
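The pattern storage and retrieval at stake here can be illustrated with the classic fully connected Hopfield model, a simplified cousin of the metric attractor network studied in the thesis (which additionally has metric dilution and structured fingerprint patterns). The sketch below uses illustrative parameters: Hebbian storage of random binary patterns, then cleanup of a corrupted cue by asynchronous updates.

```python
import numpy as np

def hopfield_recall(n=200, n_patterns=5, flip_frac=0.2, sweeps=10, seed=7):
    """Classic Hopfield sketch: Hebbian storage, retrieval from a noisy cue.

    Illustrative parameters; load alpha = n_patterns / n = 0.025 is well
    below the Hopfield capacity of roughly 0.14 * n patterns.
    """
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1, 1], size=(n_patterns, n))   # patterns to store
    w = (xi.T @ xi).astype(float) / n                # Hebbian couplings
    np.fill_diagonal(w, 0.0)                         # no self-interaction
    s = xi[0].copy()
    flips = rng.choice(n, size=int(flip_frac * n), replace=False)
    s[flips] *= -1                                   # corrupt 20% of the cue
    for _ in range(sweeps):                          # asynchronous dynamics
        for i in rng.permutation(n):
            s[i] = 1 if w[i] @ s >= 0.0 else -1
    return int(np.sum(s == xi[0]))                   # bits matching pattern 0

recovered = hopfield_recall()
# At this low load the corrupted cue is typically cleaned up to (close to)
# the stored pattern, i.e. nearly all of the 200 bits are recovered.
```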
Tully, Philip. "Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits." Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205568.
Full text
Martínez-García, Marina. "Statistical analysis of neural correlates in decision-making." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/283111.
Full text
During this thesis we investigated the neural processes that take place during decision-making tasks based on a perceptual classification judgment. For this purpose we analysed three different experimental paradigms (somatosensory, visual and auditory) in two different species (monkeys and rats), with the aim of illustrating how neurons encode task-related information. In particular, we focused on how certain kinds of information are encoded in neural activity over time: information about the behavioural decision, about external factors, and about response confidence, and how it is encoded in memory. Furthermore, when the experimental paradigm allowed it, we studied how attention modulates these aspects. Finally, we went one step further and analysed the communication between different cortical areas while the subjects solved a decision-making task.
Insabato, Andrea. "Neurodynamical theory of decision confidence." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/129463.
Full text
The study of decision confidence offers a vantage point on introspective processes and on the evaluation of decision making. Nevertheless, we do not yet have a thorough understanding of the neurophysiological and computational substrate of decision confidence. There are two main experimental paradigms for measuring decision confidence in non-human subjects: post-decision wagering and the uncertain option. In this thesis we seek to clarify the computational mechanisms underlying decision making and confidence judgments in both experimental paradigms. The model we propose to explain post-decision wagering experiments is a two-layer attractor neural network. In this model the first layer carries out the decision, while the second layer monitors the activity of the first and makes a judgment about confidence in the decision. We then tested the predictions of this model by analysing the activity of neurons recorded in the brains of two monkeys while they performed a decision-making task. With this analysis we show the existence of neurons in the ventral premotor cortex that encode decision confidence. Our results also show that the primate brain contains neurons that encode confidence as well as neurons that encode it in a continuous fashion. More specifically, we show that there are three coding mechanisms: 1. coding by switching time, 2. coding by firing rate, 3. binary coding. For uncertain-option tasks we propose an attractor network model for multiple choices.
In this model confidence emerges from the stochastic dynamics of the decision neurons, making supervision of the decision process by a second network (as in the post-decision wagering model) unnecessary. The model explains the monkeys' behavioural data and the recordings of neural activity in the lateral intraparietal area as effects of the multistable dynamics of the attractor network. Moreover, the model yields interesting novel predictions that could be tested in future experiments. The complex neurophysiological representation and the different computational mechanisms that emerge from this work suggest distinct functional aspects of decision confidence.
Larsson, Johan P. "Modelling neuronal mechanisms of the processing of tones and phonemes in the higher auditory system." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/97293.
Full text
Though much experimental research exists on both basic neural mechanisms of hearing and the psychological organization of language perception, there is a relative paucity of modelling work on these subjects. Here we describe two modelling efforts. One proposes a novel mechanism of frequency-selectivity improvement that accounts for results of neurophysiological experiments investigating manifestations of forward masking and, above all, auditory streaming in the primary auditory cortex (A1). The mechanism works in a feed-forward network with depressing thalamocortical synapses, but is further shown to be robust to a realistic organization of the neural circuitry in A1, which accounts for a wealth of neurophysiological data. The other effort describes a candidate mechanism for explaining differences in word/non-word perception between early and simultaneous bilinguals found in psychophysical studies. By simulating lexical decision and phoneme discrimination tasks in an attractor neural network model, we strengthen the hypothesis that people often exposed to dialectal word variations can store these in their lexicons without altering their phoneme representations.
Engelken, Rainer. "Chaotic Neural Circuit Dynamics." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-002E-E349-9.
Full text