Academic literature on the topic "Potts Attractor Neural Network"
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Potts Attractor Neural Network".
You can also download the full text of a publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Potts Attractor Neural Network"
Abdukhamidov, Eldor, Firuz Juraev, Mohammed Abuhamad, Shaker El-Sappagh, and Tamer AbuHmed. "Sentiment Analysis of Users’ Reactions on Social Media During the Pandemic". Electronics 11, no. 10 (May 22, 2022): 1648. http://dx.doi.org/10.3390/electronics11101648.
O'Kane, D., and D. Sherrington. "A feature retrieving attractor neural network". Journal of Physics A: Mathematical and General 26, no. 10 (May 21, 1993): 2333–42. http://dx.doi.org/10.1088/0305-4470/26/10/008.
Deng, Hanming, Yang Hua, Tao Song, Zhengui Xue, Ruhui Ma, Neil Robertson, and Haibing Guan. "Reinforcing Neural Network Stability with Attractor Dynamics". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3765–72. http://dx.doi.org/10.1609/aaai.v34i04.5787.
TAN, Z., and L. SCHÜLKE. "THE ATTRACTOR BASIN OF NEURAL NETWORK WITH CORRELATED INTERACTIONS". International Journal of Modern Physics B 10, no. 26 (November 30, 1996): 3549–60. http://dx.doi.org/10.1142/s0217979296001902.
Badoni, Davide, Roberto Riccardi, and Gaetano Salina. "LEARNING ATTRACTOR NEURAL NETWORK: THE ELECTRONIC IMPLEMENTATION". International Journal of Neural Systems 03, supp01 (January 1992): 13–24. http://dx.doi.org/10.1142/s0129065792000334.
Frolov, A. A., D. Husek, I. P. Muraviev, and P. Yu Polyakov. "Boolean Factor Analysis by Attractor Neural Network". IEEE Transactions on Neural Networks 18, no. 3 (May 2007): 698–707. http://dx.doi.org/10.1109/tnn.2007.891664.
ZOU, FAN, and JOSEF A. NOSSEK. "AN AUTONOMOUS CHAOTIC CELLULAR NEURAL NETWORK AND CHUA'S CIRCUIT". Journal of Circuits, Systems and Computers 03, no. 02 (June 1993): 591–601. http://dx.doi.org/10.1142/s0218126693000368.
Dominguez, D. R. C., and D. Bollé. "Categorization by a three-state attractor neural network". Physical Review E 56, no. 6 (December 1, 1997): 7306–9. http://dx.doi.org/10.1103/physreve.56.7306.
SERULNIK, SERGIO D., and MOSHE GUR. "AN ATTRACTOR NEURAL NETWORK MODEL OF CLASSICAL CONDITIONING". International Journal of Neural Systems 07, no. 01 (March 1996): 1–18. http://dx.doi.org/10.1142/s0129065796000026.
Wong, K. Y. M., and C. Ho. "Attractor properties of dynamical systems: neural network models". Journal of Physics A: Mathematical and General 27, no. 15 (August 7, 1994): 5167–85. http://dx.doi.org/10.1088/0305-4470/27/15/017.
Texto completoTesis sobre el tema "Potts Attractor Neural Network"
Seybold, John. "An attractor neural network model of spoken word recognition". Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335839.
Pereira, Patrícia. "Attractor Neural Network modelling of the Lifespan Retrieval Curve". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280732.
Texto completoMänniskans förmåga att återkalla episodiska minnen beror på hur lång tid som gått sedan minnena inkodades. Detta beroende beskrivs av en sk glömskekurva vilken uppvisar ett intressant fenomen som kallas ”reminiscence bump”. Detta är en tendens hos äldre att återkalla fler minnen från ungdoms- och tidiga vuxenår än från andra perioder i livet. Detta fenomen kan modelleras med ett neuralt nätverk, sk attraktornät, t ex ett icke spikande Bayesian Confidence Propagation Neural Network (BCPNN) med inkrementell inlärning. I detta arbete studeras systematiskt mekanismerna bakom ”reminiscence bump” med hjälp av denna neuronnätsmodell. Exempelvis belyses betydelsen av synaptisk plasticitet, nätverksarkitektur och andra relavanta parameterar för uppkomsten av och karaktären hos detta fenomen. De mest inflytelserika faktorerna för bumpens position befanns var initial dopaminberoende plasticitet vid födseln samt tidskonstanten för plasticitetens avtagande med åldern. De andra parametrarna påverkade huvudsakligen den generella amplituden hos kurvan för ihågkomst under livet. Dessutom kan den s k nysseffekten (”recency effect”), dvs tendensen att bäst komma ihåg saker som hänt nyligen, också parametriseras av en konstant adderad till den annars exponentiellt avtagande plasticiteten, som kan representera densiteten av dopaminreceptorer.
Ericson, Julia. "Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291553.
Texto completoUnder de senaste årtionden har datorsimulationer blivit ett allt mer populärt verktyg för att undersöka biologiska neurala nätverk. Dessa modeller är vanligtvis inspirerade av antingen beteendedata från neuropsykologiska studier eller av biologisk data från neurovetenskapen. En modell av den senare typen är ett Bayesian Confidence Propagating Neural Network (BCPNN) - ett autoassociativt nätverk med en Bayesiansk inlärningsregel, vilket tidigare har använts för att modellera flera typer av minne. I det här examensarbetet har jag vidare undersökt om nätverket kan användas som en modell för sekventiellt korttidsminne genom att undersöka dess förmåga att replikera beteenden inom verbalt sekventiellt korttidsminne. Experimenten visade att modellen kunde simulera ett flertal viktiga nyckeleffekter såsom the word length effect och the irrelevant speech effect. Däröver kunde modellen även simulera den bågformade kurvan som beskriver andelen lyckade repetitioner som en funktion av position, och den kunde dessutom repetera korta sekvenser baklänges. Modellen visade också på viss förmåga att hantera sekvenser där ett element återkom senare i sekvensen. Den nuvarande modellen var däremot inte tillräcklig för att simulera effekterna som tillkommer av rytm, såsom temporär gruppering eller en betoning på specifika element i sekvensen. I sin helhet ser modellen däremot lovande ut, även om den inte är fullständig i sin nuvarande form, då den kunde simulera ett flertal viktiga nyckeleffekter och förklara dessa med hjälp av neurovetenskapligt inspirerade inlärningsregler.
Batbayar, Batsukh y S3099885@student rmit edu au. "Improving Time Efficiency of Feedforward Neural Network Learning". RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.
Villani, Gianluca. "Analysis of an Attractor Neural Network Model for Working Memory: A Control Theory Approach". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260079.
Texto completoArbetsminne är ett brett, övergripande kognitivt system som ansvarar för temporär informationslagringhos högre ordningens tänkande, såsom beslutsfattning. Denna masteravhandlingämnar sig åt att studera icke-spikande modeller tillhörande en speciell gren avbiologiskt inspirerade återkommande neuronnät, för att redogöra mänsklig experimentelldata för fenomenet free recall. Med avseende på dess modulära struktur, framför denna avhandlingen nätverkssystemsrepresentation av arbetsminne sådant att dess stabilitets- samtsynkroniseringsegenskaper kan granskas. Innebörden av olika systemparametrar av de genereradesynkroniseringsmönstren undersöktes genom användandet av bifurkationsanalys.Som vi förstår, har den föreslagna dynamiska återkommande neuronätet inte studerats frånett reglertekniskt perspektiv tidigare.
Ferland, Guy J. M. G. "A new paradigm for the classification of patterns: The 'race to the attractor' neural network model". Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9298.
Rosay, Sophie. "A statistical mechanics approach to the modelling and analysis of place-cell activity". Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0010/document.
Texto completoPlace cells in the hippocampus are neurons with interesting properties such as the corre-lation between their activity and the animal’s position in space. It is believed that theseproperties can be for the most part understood by collective behaviours of models of inter-acting simplified neurons. Statistical mechanics provides tools permitting to study thesecollective behaviours, both analytically and numerically.Here, we address how these tools can be used to understand place-cell activity withinthe attractor neural network paradigm, a theory for memory. We first propose a modelfor place cells in which the formation of a localized bump of activity is accounted for byattractor dynamics. Several aspects of the collective properties of this model are studied.Thanks to the simplicity of the model, they can be understood in great detail. The phasediagram of the model is computed and discussed in relation with previous works on at-tractor neural networks. The dynamical evolution of the system displays particularly richpatterns. The second part of this thesis deals with decoding place-cell activity, and theimplications of the attractor hypothesis on this problem. We compare several decodingmethods and their results on the processing of experimental recordings of place cells in afreely behaving rat
Strandqvist, Jonas. "Attractors of autoencoders : Memorization in neural networks". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97746.
Martí Ortega, Daniel. "Neural stochastic dynamics of perceptual decision making". Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7552.
Texto completoComputational models based on large-scale, neurobiologically-inspired networks describe the decision-related activity observed in some cortical areas as a transition between attractors of the cortical network. Stimulation induces a change in the attractor configuration and drives the system out from its initial resting attractor to one of the existing attractors associated with the categorical choices. The noise present in the system renders transitions random. We show that there exist two qualitatively different mechanisms for decision, each with distinctive psychophysical signatures. The decision mechanism arising at low inputs, entirely driven by noise, leads to skewed distributions of decision times, with a mean governed by the amplitude of the noise. Moreover, both decision times and performances are monotonically decreasing functions of the overall external stimulation. We also propose two methods, one based on the macroscopic approximation and one based on center manifold theory, to simplify the description of multistable stochastic neural systems.
Posani, Lorenzo. "Inference and modeling of biological networks : a statistical-physics approach to neural attractors and protein fitness landscapes". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE043/document.
Texto completoThe recent advent of high-throughput experimental procedures has opened a new era for the quantitative study of biological systems. Today, electrophysiology recordings and calcium imaging allow for the in vivo simultaneous recording of hundreds to thousands of neurons. In parallel, thanks to automated sequencing procedures, the libraries of known functional proteins expanded from thousands to millions in just a few years. This current abundance of biological data opens a new series of challenges for theoreticians. Accurate and transparent analysis methods are needed to process this massive amount of raw data into meaningful observables. Concurrently, the simultaneous observation of a large number of interacting units enables the development and validation of theoretical models aimed at the mechanistic understanding of the collective behavior of biological systems. In this manuscript, we propose an approach to both these challenges based on methods and models from statistical physics. We present an application of these methods to problems from neuroscience and bioinformatics, focusing on (1) the spatial memory and navigation task in the hippocampal loop and (2) the reconstruction of the fitness landscape of proteins from homologous sequence data
Book chapters on the topic "Potts Attractor Neural Network"
Lansner, Anders, Anders Sandberg, Karl Magnus Petersson, and Martin Ingvar. "On Forgetful Attractor Network Memories". In Artificial Neural Networks in Medicine and Biology, 54–62. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0513-8_7.
Del Giudice, Paolo, and Stefano Fusi. "Attractor dynamics in an electronic neural network". In Lecture Notes in Computer Science, 1265–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0020325.
Zou, Xiaolong, Zilong Ji, Xiao Liu, Yuanyuan Mi, K. Y. Michael Wong, and Si Wu. "Learning a Continuous Attractor Neural Network from Real Images". In Neural Information Processing, 622–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70093-9_66.
Okamoto, Hiroshi. "Local Detection of Communities by Attractor Neural-Network Dynamics". In Springer Series in Bio-/Neuroinformatics, 115–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09903-3_6.
Seow, Ming-Jung, and Vijayan K. Asari. "Recurrent Network as a Nonlinear Line Attractor for Skin Color Association". In Advances in Neural Networks – ISNN 2004, 870–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_143.
Koroutchev, Kostadin, and Elka Korutcheva. "Improved Storage Capacity of Hebbian Learning Attractor Neural Network with Bump Formations". In Artificial Neural Networks – ICANN 2006, 234–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_25.
Okamoto, Hiroshi. "Community Detection as Pattern Restoration by Attractor Neural-Network Dynamics". In Information Processing in Cells and Tissues, 197–207. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23108-2_17.
Hamid, Oussama H., and Jochen Braun. "Reinforcement Learning and Attractor Neural Network Models of Associative Learning". In Studies in Computational Intelligence, 327–49. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16469-0_17.
Carrasco, Marco P., and Margarida V. Pato. "A Potts Neural Network Heuristic for the Class/Teacher Timetabling Problem". In Applied Optimization, 173–86. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-4137-7_8.
Frolov, Alexander A., Dušan Húsek, and Pavel Yu Polyakov. "Attractor Neural Network Combined with Likelihood Maximization Algorithm for Boolean Factor Analysis". In Advances in Neural Networks – ISNN 2012, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_1.
Texto completoActas de conferencias sobre el tema "Potts Attractor Neural Network"
PIRMORADIAN, SAHAR, and ALESSANDRO TREVES. "ENCODING WORDS INTO A POTTS ATTRACTOR NETWORK". In Proceedings of the 13th Neural Computation and Psychology Workshop. WORLD SCIENTIFIC, 2013. http://dx.doi.org/10.1142/9789814458849_0003.
Doboli, S., and A. A. Minai. "Network capacity for latent attractor computation". In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.857840.
Cai, Kuijie, and Jihong Shen. "Continuous attractor neural network model of multisensory integration". In 2011 International Conference on System Science, Engineering Design and Manufacturing Informatization (ICSEM). IEEE, 2011. http://dx.doi.org/10.1109/icssem.2011.6081317.
Formanek, Lukas, and Ondrej Karpis. "Learning Lorenz attractor differential equations using neural network". In 2020 5th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM). IEEE, 2020. http://dx.doi.org/10.1109/seeda-cecnsm49515.2020.9221785.
Usher, M., and E. Ruppin. "An attractor neural network model of semantic fact retrieval". In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137917.
Koroutchev, Kostadin. "Spatial asymmetric retrieval states in binary attractor neural network". In NOISE AND FLUCTUATIONS: 18th International Conference on Noise and Fluctuations - ICNF 2005. AIP, 2005. http://dx.doi.org/10.1063/1.2036825.
Pereira, Patricia, Anders Lansner, and Pawel Herman. "Incremental Attractor Neural Network Modelling of the Lifespan Retrieval Curve". In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9891922.
Azarpour, M., S. A. Seyyedsalehi, and A. Taherkhani. "Robust pattern recognition using chaotic dynamics in Attractor Recurrent Neural Network". In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596375.
Zandi Mehran, Y., and A. M. Nasrabadi. "Neural network application in strange attractor investigation to detect a FGD". In 2008 4th International IEEE Conference "Intelligent Systems" (IS). IEEE, 2008. http://dx.doi.org/10.1109/is.2008.4670470.
Rathore, S., D. Bush, P. Latham, and N. Burgess. "Oscillatory dynamics in an attractor neural network with firing rate adaptation". In PHYSICS, COMPUTATION, AND THE MIND - ADVANCES AND CHALLENGES AT INTERFACES: Proceedings of the 12th Granada Seminar on Computational and Statistical Physics. AIP, 2013. http://dx.doi.org/10.1063/1.4776524.