Academic literature on the topic 'Potts Attractor Neural Network'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Potts Attractor Neural Network.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Potts Attractor Neural Network"
Abdukhamidov, Eldor, Firuz Juraev, Mohammed Abuhamad, Shaker El-Sappagh, and Tamer AbuHmed. "Sentiment Analysis of Users’ Reactions on Social Media During the Pandemic." Electronics 11, no. 10 (May 22, 2022): 1648. http://dx.doi.org/10.3390/electronics11101648.
O'Kane, D., and D. Sherrington. "A feature retrieving attractor neural network." Journal of Physics A: Mathematical and General 26, no. 10 (May 21, 1993): 2333–42. http://dx.doi.org/10.1088/0305-4470/26/10/008.
Deng, Hanming, Yang Hua, Tao Song, Zhengui Xue, Ruhui Ma, Neil Robertson, and Haibing Guan. "Reinforcing Neural Network Stability with Attractor Dynamics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3765–72. http://dx.doi.org/10.1609/aaai.v34i04.5787.
Tan, Z., and L. Schülke. "The Attractor Basin of Neural Network with Correlated Interactions." International Journal of Modern Physics B 10, no. 26 (November 30, 1996): 3549–60. http://dx.doi.org/10.1142/s0217979296001902.
Badoni, Davide, Roberto Riccardi, and Gaetano Salina. "Learning Attractor Neural Network: The Electronic Implementation." International Journal of Neural Systems 03, supp01 (January 1992): 13–24. http://dx.doi.org/10.1142/s0129065792000334.
Frolov, A. A., D. Husek, I. P. Muraviev, and P. Yu Polyakov. "Boolean Factor Analysis by Attractor Neural Network." IEEE Transactions on Neural Networks 18, no. 3 (May 2007): 698–707. http://dx.doi.org/10.1109/tnn.2007.891664.
Zou, Fan, and Josef A. Nossek. "An Autonomous Chaotic Cellular Neural Network and Chua's Circuit." Journal of Circuits, Systems and Computers 03, no. 02 (June 1993): 591–601. http://dx.doi.org/10.1142/s0218126693000368.
Dominguez, D. R. C., and D. Bollé. "Categorization by a three-state attractor neural network." Physical Review E 56, no. 6 (December 1, 1997): 7306–9. http://dx.doi.org/10.1103/physreve.56.7306.
Serulnik, Sergio D., and Moshe Gur. "An Attractor Neural Network Model of Classical Conditioning." International Journal of Neural Systems 07, no. 01 (March 1996): 1–18. http://dx.doi.org/10.1142/s0129065796000026.
Wong, K. Y. M., and C. Ho. "Attractor properties of dynamical systems: neural network models." Journal of Physics A: Mathematical and General 27, no. 15 (August 7, 1994): 5167–85. http://dx.doi.org/10.1088/0305-4470/27/15/017.
Full textDissertations / Theses on the topic "Potts Attractor Neural Network"
Seybold, John. "An attractor neural network model of spoken word recognition." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335839.
Pereira, Patrícia. "Attractor Neural Network modelling of the Lifespan Retrieval Curve." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280732.
Full textMänniskans förmåga att återkalla episodiska minnen beror på hur lång tid som gått sedan minnena inkodades. Detta beroende beskrivs av en sk glömskekurva vilken uppvisar ett intressant fenomen som kallas ”reminiscence bump”. Detta är en tendens hos äldre att återkalla fler minnen från ungdoms- och tidiga vuxenår än från andra perioder i livet. Detta fenomen kan modelleras med ett neuralt nätverk, sk attraktornät, t ex ett icke spikande Bayesian Confidence Propagation Neural Network (BCPNN) med inkrementell inlärning. I detta arbete studeras systematiskt mekanismerna bakom ”reminiscence bump” med hjälp av denna neuronnätsmodell. Exempelvis belyses betydelsen av synaptisk plasticitet, nätverksarkitektur och andra relavanta parameterar för uppkomsten av och karaktären hos detta fenomen. De mest inflytelserika faktorerna för bumpens position befanns var initial dopaminberoende plasticitet vid födseln samt tidskonstanten för plasticitetens avtagande med åldern. De andra parametrarna påverkade huvudsakligen den generella amplituden hos kurvan för ihågkomst under livet. Dessutom kan den s k nysseffekten (”recency effect”), dvs tendensen att bäst komma ihåg saker som hänt nyligen, också parametriseras av en konstant adderad till den annars exponentiellt avtagande plasticiteten, som kan representera densiteten av dopaminreceptorer.
Ericson, Julia. "Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291553.
In recent decades, computer simulations have become an increasingly popular tool for investigating biological neural networks. These models are usually inspired either by behavioural data from neuropsychological studies or by biological data from neuroscience. A model of the latter kind is the Bayesian Confidence Propagation Neural Network (BCPNN), an autoassociative network with a Bayesian learning rule, which has previously been used to model several types of memory. In this thesis, I investigate whether the network can also serve as a model of serial short-term memory by examining its ability to replicate benchmark behaviours in verbal immediate serial recall. The experiments showed that the model could simulate several important benchmark effects, such as the word length effect and the irrelevant speech effect. Moreover, it could reproduce the bow-shaped curve describing the proportion of successful recalls as a function of serial position, and it could recall short sequences backwards. The model also showed some ability to handle sequences in which an item recurs later in the sequence. The current model was, however, not sufficient to simulate the effects of rhythm, such as temporal grouping or stress on specific items in a sequence. Overall, although incomplete in its current form, the model looks promising, as it could simulate several important benchmark effects and explain them in terms of neuroscientifically inspired learning rules.
Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.
Villani, Gianluca. "Analysis of an Attractor Neural Network Model for Working Memory: A Control Theory Approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260079.
Full textArbetsminne är ett brett, övergripande kognitivt system som ansvarar för temporär informationslagringhos högre ordningens tänkande, såsom beslutsfattning. Denna masteravhandlingämnar sig åt att studera icke-spikande modeller tillhörande en speciell gren avbiologiskt inspirerade återkommande neuronnät, för att redogöra mänsklig experimentelldata för fenomenet free recall. Med avseende på dess modulära struktur, framför denna avhandlingen nätverkssystemsrepresentation av arbetsminne sådant att dess stabilitets- samtsynkroniseringsegenskaper kan granskas. Innebörden av olika systemparametrar av de genereradesynkroniseringsmönstren undersöktes genom användandet av bifurkationsanalys.Som vi förstår, har den föreslagna dynamiska återkommande neuronätet inte studerats frånett reglertekniskt perspektiv tidigare.
Ferland, Guy J. M. G. "A new paradigm for the classification of patterns: The 'race to the attractor' neural network model." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9298.
Full textRosay, Sophie. "A statistical mechanics approach to the modelling and analysis of place-cell activity." Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0010/document.
Place cells in the hippocampus are neurons with interesting properties, such as the correlation between their activity and the animal's position in space. It is believed that these properties can for the most part be understood through the collective behaviour of models of interacting simplified neurons. Statistical mechanics provides tools for studying these collective behaviours, both analytically and numerically. Here, we address how these tools can be used to understand place-cell activity within the attractor neural network paradigm, a theory of memory. We first propose a model for place cells in which the formation of a localized bump of activity is accounted for by attractor dynamics. Several aspects of the collective properties of this model are studied. Thanks to the simplicity of the model, they can be understood in great detail. The phase diagram of the model is computed and discussed in relation to previous work on attractor neural networks. The dynamical evolution of the system displays particularly rich patterns. The second part of this thesis deals with decoding place-cell activity, and the implications of the attractor hypothesis for this problem. We compare several decoding methods and their results on the processing of experimental recordings of place cells in a freely behaving rat.
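To make the decoding problem concrete, here is a minimal sketch of one standard decoder of the kind such comparisons typically include: maximum-likelihood decoding of position from place-cell spike counts under a Poisson spiking assumption. The Gaussian tuning curves, peak rate, and 100 ms bin width are illustrative assumptions made here, not details from the thesis.

```python
# Minimal sketch: Poisson maximum-likelihood decoding of 1D position from
# place-cell spike counts. All tuning parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_pos = 50, 100
track = np.linspace(0.0, 1.0, n_pos)              # 1D environment
centres = rng.random(n_cells)                     # place-field centres
# Gaussian tuning curves with 20 Hz peak rate and 0.05 field width
tuning = 20.0 * np.exp(-((track[None, :] - centres[:, None]) ** 2) / (2 * 0.05 ** 2))

true_pos = 37                                     # index of the rat's true position
counts = rng.poisson(tuning[:, true_pos] * 0.1)   # spike counts in a 100 ms bin

# Poisson log-likelihood of the observed counts for every candidate position
lam = tuning * 0.1 + 1e-12
loglik = (counts[:, None] * np.log(lam) - lam).sum(axis=0)
print("true position:", track[true_pos], "decoded:", track[np.argmax(loglik)])
```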
Strandqvist, Jonas. "Attractors of autoencoders : Memorization in neural networks." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97746.
Full textMartí, Ortega Daniel. "Neural stochastic dynamics of perceptual decision making." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7552.
Computational models based on large-scale, neurobiologically inspired networks describe the decision-related activity observed in some cortical areas as a transition between attractors of the cortical network. Stimulation induces a change in the attractor configuration and drives the system out of its initial resting attractor to one of the existing attractors associated with the categorical choices. The noise present in the system renders transitions random. We show that there exist two qualitatively different mechanisms for decision, each with distinctive psychophysical signatures. The decision mechanism arising at low inputs, entirely driven by noise, leads to skewed distributions of decision times, with a mean governed by the amplitude of the noise. Moreover, both decision times and performance are monotonically decreasing functions of the overall external stimulation. We also propose two methods, one based on the macroscopic approximation and one based on center manifold theory, to simplify the description of multistable stochastic neural systems.
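The noise-driven mechanism has a classic one-dimensional caricature: escape of an overdamped particle from the "resting" well of a tilted double-well potential. The sketch below is an illustration under that caricature, not the thesis's network model, and its potential, noise level, and threshold are assumptions; it reproduces the qualitative signature described above, a right-skewed distribution of decision times.

```python
# Minimal sketch: Kramers escape in a tilted double well as a stand-in for a
# noise-driven transition from a resting attractor to a choice attractor.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def decision_time(drift=0.1, sigma=0.4, dt=5e-3, t_max=200.0):
    """Integrate dx = (x - x**3 + drift) dt + sigma dW, starting in the
    resting well at x = -1, until the state reaches the choice well (x > 0.8)."""
    x, t = -1.0, 0.0
    while x < 0.8 and t < t_max:
        x += (x - x**3 + drift) * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t

times = np.array([decision_time() for _ in range(100)])
print(f"mean {times.mean():.1f} s, median {np.median(times):.1f} s "
      f"(mean > median indicates a right-skewed distribution)")
```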
Posani, Lorenzo. "Inference and modeling of biological networks : a statistical-physics approach to neural attractors and protein fitness landscapes." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE043/document.
The recent advent of high-throughput experimental procedures has opened a new era for the quantitative study of biological systems. Today, electrophysiology recordings and calcium imaging allow for the simultaneous in vivo recording of hundreds to thousands of neurons. In parallel, thanks to automated sequencing procedures, the libraries of known functional proteins have expanded from thousands to millions in just a few years. This abundance of biological data opens a new series of challenges for theoreticians. Accurate and transparent analysis methods are needed to process this massive amount of raw data into meaningful observables. Concurrently, the simultaneous observation of a large number of interacting units enables the development and validation of theoretical models aimed at the mechanistic understanding of the collective behavior of biological systems. In this manuscript, we propose an approach to both these challenges based on methods and models from statistical physics. We present an application of these methods to problems from neuroscience and bioinformatics, focusing on (1) the spatial memory and navigation task in the hippocampal loop and (2) the reconstruction of the fitness landscape of proteins from homologous sequence data.
Book chapters on the topic "Potts Attractor Neural Network"
Lansner, Anders, Anders Sandberg, Karl Magnus Petersson, and Martin Ingvar. "On Forgetful Attractor Network Memories." In Artificial Neural Networks in Medicine and Biology, 54–62. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0513-8_7.
Del Giudice, Paolo, and Stefano Fusi. "Attractor dynamics in an electronic neural network." In Lecture Notes in Computer Science, 1265–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0020325.
Zou, Xiaolong, Zilong Ji, Xiao Liu, Yuanyuan Mi, K. Y. Michael Wong, and Si Wu. "Learning a Continuous Attractor Neural Network from Real Images." In Neural Information Processing, 622–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70093-9_66.
Okamoto, Hiroshi. "Local Detection of Communities by Attractor Neural-Network Dynamics." In Springer Series in Bio-/Neuroinformatics, 115–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09903-3_6.
Seow, Ming-Jung, and Vijayan K. Asari. "Recurrent Network as a Nonlinear Line Attractor for Skin Color Association." In Advances in Neural Networks – ISNN 2004, 870–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_143.
Koroutchev, Kostadin, and Elka Korutcheva. "Improved Storage Capacity of Hebbian Learning Attractor Neural Network with Bump Formations." In Artificial Neural Networks – ICANN 2006, 234–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_25.
Okamoto, Hiroshi. "Community Detection as Pattern Restoration by Attractor Neural-Network Dynamics." In Information Processing in Cells and Tissues, 197–207. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23108-2_17.
Hamid, Oussama H., and Jochen Braun. "Reinforcement Learning and Attractor Neural Network Models of Associative Learning." In Studies in Computational Intelligence, 327–49. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16469-0_17.
Carrasco, Marco P., and Margarida V. Pato. "A Potts Neural Network Heuristic for the Class/Teacher Timetabling Problem." In Applied Optimization, 173–86. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-4137-7_8.
Frolov, Alexander A., Dušan Húsek, and Pavel Yu Polyakov. "Attractor Neural Network Combined with Likelihood Maximization Algorithm for Boolean Factor Analysis." In Advances in Neural Networks – ISNN 2012, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_1.
Full textConference papers on the topic "Potts Attractor Neural Network"
Pirmoradian, Sahar, and Alessandro Treves. "Encoding Words into a Potts Attractor Network." In Proceedings of the 13th Neural Computation and Psychology Workshop. World Scientific, 2013. http://dx.doi.org/10.1142/9789814458849_0003.
Doboli, S., and A. A. Minai. "Network capacity for latent attractor computation." In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.857840.
Cai, Kuijie, and Jihong Shen. "Continuous attractor neural network model of multisensory integration." In 2011 International Conference on System Science, Engineering Design and Manufacturing Informatization (ICSEM). IEEE, 2011. http://dx.doi.org/10.1109/icssem.2011.6081317.
Formanek, Lukas, and Ondrej Karpis. "Learning Lorenz attractor differential equations using neural network." In 2020 5th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM). IEEE, 2020. http://dx.doi.org/10.1109/seeda-cecnsm49515.2020.9221785.
Usher, M., and E. Ruppin. "An attractor neural network model of semantic fact retrieval." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137917.
Koroutchev, Kostadin. "Spatial asymmetric retrieval states in binary attractor neural network." In Noise and Fluctuations: 18th International Conference on Noise and Fluctuations - ICNF 2005. AIP, 2005. http://dx.doi.org/10.1063/1.2036825.
Pereira, Patricia, Anders Lansner, and Pawel Herman. "Incremental Attractor Neural Network Modelling of the Lifespan Retrieval Curve." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9891922.
Azarpour, M., S. A. Seyyedsalehi, and A. Taherkhani. "Robust pattern recognition using chaotic dynamics in Attractor Recurrent Neural Network." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596375.
Zandi Mehran, Y., and A. M. Nasrabadi. "Neural network application in strange attractor investigation to detect a FGD." In 2008 4th International IEEE Conference "Intelligent Systems" (IS). IEEE, 2008. http://dx.doi.org/10.1109/is.2008.4670470.
Rathore, S., D. Bush, P. Latham, and N. Burgess. "Oscillatory dynamics in an attractor neural network with firing rate adaptation." In Physics, Computation, and the Mind - Advances and Challenges at Interfaces: Proceedings of the 12th Granada Seminar on Computational and Statistical Physics. AIP, 2013. http://dx.doi.org/10.1063/1.4776524.