Scientific literature on the topic "Potts Attractor Neural Network"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles
Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Potts Attractor Neural Network".
Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Potts Attractor Neural Network"
Abdukhamidov, Eldor, Firuz Juraev, Mohammed Abuhamad, Shaker El-Sappagh, and Tamer AbuHmed. "Sentiment Analysis of Users' Reactions on Social Media During the Pandemic". Electronics 11, no. 10 (May 22, 2022): 1648. http://dx.doi.org/10.3390/electronics11101648.
O'Kane, D., and D. Sherrington. "A feature retrieving attractor neural network". Journal of Physics A: Mathematical and General 26, no. 10 (May 21, 1993): 2333–42. http://dx.doi.org/10.1088/0305-4470/26/10/008.
Deng, Hanming, Yang Hua, Tao Song, Zhengui Xue, Ruhui Ma, Neil Robertson, and Haibing Guan. "Reinforcing Neural Network Stability with Attractor Dynamics". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3765–72. http://dx.doi.org/10.1609/aaai.v34i04.5787.
TAN, Z., and L. SCHÜLKE. "THE ATTRACTOR BASIN OF NEURAL NETWORK WITH CORRELATED INTERACTIONS". International Journal of Modern Physics B 10, no. 26 (November 30, 1996): 3549–60. http://dx.doi.org/10.1142/s0217979296001902.
Badoni, Davide, Roberto Riccardi, and Gaetano Salina. "LEARNING ATTRACTOR NEURAL NETWORK: THE ELECTRONIC IMPLEMENTATION". International Journal of Neural Systems 03, supp01 (January 1992): 13–24. http://dx.doi.org/10.1142/s0129065792000334.
Frolov, A. A., D. Husek, I. P. Muraviev, and P. Yu Polyakov. "Boolean Factor Analysis by Attractor Neural Network". IEEE Transactions on Neural Networks 18, no. 3 (May 2007): 698–707. http://dx.doi.org/10.1109/tnn.2007.891664.
ZOU, FAN, and JOSEF A. NOSSEK. "AN AUTONOMOUS CHAOTIC CELLULAR NEURAL NETWORK AND CHUA'S CIRCUIT". Journal of Circuits, Systems and Computers 03, no. 02 (June 1993): 591–601. http://dx.doi.org/10.1142/s0218126693000368.
Dominguez, D. R. C., and D. Bollé. "Categorization by a three-state attractor neural network". Physical Review E 56, no. 6 (December 1, 1997): 7306–9. http://dx.doi.org/10.1103/physreve.56.7306.
SERULNIK, SERGIO D., and MOSHE GUR. "AN ATTRACTOR NEURAL NETWORK MODEL OF CLASSICAL CONDITIONING". International Journal of Neural Systems 07, no. 01 (March 1996): 1–18. http://dx.doi.org/10.1142/s0129065796000026.
Wong, K. Y. M., and C. Ho. "Attractor properties of dynamical systems: neural network models". Journal of Physics A: Mathematical and General 27, no. 15 (August 7, 1994): 5167–85. http://dx.doi.org/10.1088/0305-4470/27/15/017.
Texte intégralThèses sur le sujet "Potts Attractor Neural Network"
Seybold, John. « An attractor neural network model of spoken word recognition ». Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335839.
Texte intégralPereira, Patrícia. « Attractor Neural Network modelling of the Lifespan Retrieval Curve ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280732.
A person's ability to recall episodic memories depends on the time that has passed since the memories were encoded. This dependence is described by a so-called forgetting curve, which exhibits an interesting phenomenon known as the "reminiscence bump": a tendency among older people to recall more memories from adolescence and early adulthood than from other periods of life. This phenomenon can be modelled with an attractor neural network, for example a non-spiking Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms behind the reminiscence bump are studied systematically using this neural network model, examining for instance the role of synaptic plasticity, network architecture, and other relevant parameters in the emergence and character of the phenomenon. The most influential factors for the position of the bump were found to be the initial dopamine-dependent plasticity at birth and the time constant of the decay of plasticity with age. The other parameters mainly affected the overall amplitude of the lifespan retrieval curve. In addition, the recency effect, i.e. the tendency to best remember things that happened recently, can also be parameterized by a constant added to the otherwise exponentially decaying plasticity, which may represent the density of dopamine receptors.
Ericson, Julia. "Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291553.
Over the past decades, computer simulations have become an increasingly popular tool for investigating biological neural networks. These models are usually inspired either by behavioural data from neuropsychological studies or by biological data from neuroscience. A model of the latter kind is the Bayesian Confidence Propagation Neural Network (BCPNN), an autoassociative network with a Bayesian learning rule, which has previously been used to model several types of memory. In this thesis, I further investigated whether the network can serve as a model of sequential short-term memory by examining its ability to replicate behaviours observed in verbal immediate serial recall. The experiments showed that the model could simulate several important benchmark effects, such as the word length effect and the irrelevant speech effect. It could also reproduce the bow-shaped curve describing the proportion of successful recalls as a function of serial position, and it could recall short sequences backwards. The model also showed some ability to handle sequences in which an item recurred later in the sequence. The current model was, however, not sufficient to simulate the effects of rhythm, such as temporal grouping or emphasis on specific items in the sequence. Overall, even though it is not complete in its current form, the model looks promising, since it could simulate several important benchmark effects and explain them in terms of neuroscientifically inspired learning rules.
Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning". RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.
Villani, Gianluca. "Analysis of an Attractor Neural Network Model for Working Memory: A Control Theory Approach". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260079.
Working memory is a broad, overarching cognitive system responsible for the temporary storage of information in higher-order cognition, such as decision making. This master's thesis studies non-spiking models belonging to a particular class of biologically inspired recurrent neural networks, with the aim of accounting for human experimental data on the phenomenon of free recall. Exploiting its modular structure, the thesis presents a networked-systems representation of working memory so that its stability and synchronization properties can be examined. The influence of different system parameters on the generated synchronization patterns was investigated using bifurcation analysis. To our knowledge, the proposed dynamical recurrent neural network has not previously been studied from a control-theoretic perspective.
Ferland, Guy J. M. G. "A new paradigm for the classification of patterns: The 'race to the attractor' neural network model". Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9298.
Texte intégralRosay, Sophie. « A statistical mechanics approach to the modelling and analysis of place-cell activity ». Thesis, Paris, Ecole normale supérieure, 2014. http://www.theses.fr/2014ENSU0010/document.
Place cells in the hippocampus are neurons with interesting properties such as the correlation between their activity and the animal's position in space. It is believed that these properties can be for the most part understood by collective behaviours of models of interacting simplified neurons. Statistical mechanics provides tools permitting to study these collective behaviours, both analytically and numerically. Here, we address how these tools can be used to understand place-cell activity within the attractor neural network paradigm, a theory for memory. We first propose a model for place cells in which the formation of a localized bump of activity is accounted for by attractor dynamics. Several aspects of the collective properties of this model are studied. Thanks to the simplicity of the model, they can be understood in great detail. The phase diagram of the model is computed and discussed in relation with previous works on attractor neural networks. The dynamical evolution of the system displays particularly rich patterns. The second part of this thesis deals with decoding place-cell activity, and the implications of the attractor hypothesis on this problem. We compare several decoding methods and their results on the processing of experimental recordings of place cells in a freely behaving rat.
Strandqvist, Jonas. « Attractors of autoencoders : Memorization in neural networks ». Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97746.
Martí Ortega, Daniel. "Neural stochastic dynamics of perceptual decision making". Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7552.
Computational models based on large-scale, neurobiologically-inspired networks describe the decision-related activity observed in some cortical areas as a transition between attractors of the cortical network. Stimulation induces a change in the attractor configuration and drives the system out from its initial resting attractor to one of the existing attractors associated with the categorical choices. The noise present in the system renders transitions random. We show that there exist two qualitatively different mechanisms for decision, each with distinctive psychophysical signatures. The decision mechanism arising at low inputs, entirely driven by noise, leads to skewed distributions of decision times, with a mean governed by the amplitude of the noise. Moreover, both decision times and performances are monotonically decreasing functions of the overall external stimulation. We also propose two methods, one based on the macroscopic approximation and one based on center manifold theory, to simplify the description of multistable stochastic neural systems.
Posani, Lorenzo. « Inference and modeling of biological networks : a statistical-physics approach to neural attractors and protein fitness landscapes ». Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE043/document.
The recent advent of high-throughput experimental procedures has opened a new era for the quantitative study of biological systems. Today, electrophysiology recordings and calcium imaging allow for the in vivo simultaneous recording of hundreds to thousands of neurons. In parallel, thanks to automated sequencing procedures, the libraries of known functional proteins expanded from thousands to millions in just a few years. This current abundance of biological data opens a new series of challenges for theoreticians. Accurate and transparent analysis methods are needed to process this massive amount of raw data into meaningful observables. Concurrently, the simultaneous observation of a large number of interacting units enables the development and validation of theoretical models aimed at the mechanistic understanding of the collective behavior of biological systems. In this manuscript, we propose an approach to both these challenges based on methods and models from statistical physics. We present an application of these methods to problems from neuroscience and bioinformatics, focusing on (1) the spatial memory and navigation task in the hippocampal loop and (2) the reconstruction of the fitness landscape of proteins from homologous sequence data.
Book chapters on the topic "Potts Attractor Neural Network"
Lansner, Anders, Anders Sandberg, Karl Magnus Petersson, and Martin Ingvar. "On Forgetful Attractor Network Memories". In Artificial Neural Networks in Medicine and Biology, 54–62. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0513-8_7.
Del Giudice, Paolo, and Stefano Fusi. "Attractor dynamics in an electronic neural network". In Lecture Notes in Computer Science, 1265–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0020325.
Zou, Xiaolong, Zilong Ji, Xiao Liu, Yuanyuan Mi, K. Y. Michael Wong, and Si Wu. "Learning a Continuous Attractor Neural Network from Real Images". In Neural Information Processing, 622–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70093-9_66.
Okamoto, Hiroshi. "Local Detection of Communities by Attractor Neural-Network Dynamics". In Springer Series in Bio-/Neuroinformatics, 115–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09903-3_6.
Seow, Ming-Jung, and Vijayan K. Asari. "Recurrent Network as a Nonlinear Line Attractor for Skin Color Association". In Advances in Neural Networks – ISNN 2004, 870–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_143.
Koroutchev, Kostadin, and Elka Korutcheva. "Improved Storage Capacity of Hebbian Learning Attractor Neural Network with Bump Formations". In Artificial Neural Networks – ICANN 2006, 234–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840817_25.
Okamoto, Hiroshi. "Community Detection as Pattern Restoration by Attractor Neural-Network Dynamics". In Information Processing in Cells and Tissues, 197–207. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23108-2_17.
Hamid, Oussama H., and Jochen Braun. "Reinforcement Learning and Attractor Neural Network Models of Associative Learning". In Studies in Computational Intelligence, 327–49. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16469-0_17.
Carrasco, Marco P., and Margarida V. Pato. "A Potts Neural Network Heuristic for the Class/Teacher Timetabling Problem". In Applied Optimization, 173–86. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-4137-7_8.
Frolov, Alexander A., Dušan Húsek, and Pavel Yu Polyakov. "Attractor Neural Network Combined with Likelihood Maximization Algorithm for Boolean Factor Analysis". In Advances in Neural Networks – ISNN 2012, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_1.
Texte intégralActes de conférences sur le sujet "Potts Attractor Neural Network"
PIRMORADIAN, SAHAR, and ALESSANDRO TREVES. "ENCODING WORDS INTO A POTTS ATTRACTOR NETWORK". In Proceedings of the 13th Neural Computation and Psychology Workshop. WORLD SCIENTIFIC, 2013. http://dx.doi.org/10.1142/9789814458849_0003.
Doboli, S., and A. A. Minai. "Network capacity for latent attractor computation". In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.857840.
Cai, Kuijie, and Jihong Shen. "Continuous attractor neural network model of multisensory integration". In 2011 International Conference on System Science, Engineering Design and Manufacturing Informatization (ICSEM). IEEE, 2011. http://dx.doi.org/10.1109/icssem.2011.6081317.
Formanek, Lukas, and Ondrej Karpis. "Learning Lorenz attractor differential equations using neural network". In 2020 5th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM). IEEE, 2020. http://dx.doi.org/10.1109/seeda-cecnsm49515.2020.9221785.
Usher, M., and E. Ruppin. "An attractor neural network model of semantic fact retrieval". In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137917.
Koroutchev, Kostadin. "Spatial asymmetric retrieval states in binary attractor neural network". In NOISE AND FLUCTUATIONS: 18th International Conference on Noise and Fluctuations - ICNF 2005. AIP, 2005. http://dx.doi.org/10.1063/1.2036825.
Pereira, Patricia, Anders Lansner, and Pawel Herman. "Incremental Attractor Neural Network Modelling of the Lifespan Retrieval Curve". In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9891922.
Azarpour, M., S. A. Seyyedsalehi, and A. Taherkhani. "Robust pattern recognition using chaotic dynamics in Attractor Recurrent Neural Network". In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596375.
Zandi Mehran, Y., and A. M. Nasrabadi. "Neural network application in strange attractor investigation to detect a FGD". In 2008 4th International IEEE Conference "Intelligent Systems" (IS). IEEE, 2008. http://dx.doi.org/10.1109/is.2008.4670470.
Rathore, S., D. Bush, P. Latham, and N. Burgess. "Oscillatory dynamics in an attractor neural network with firing rate adaptation". In PHYSICS, COMPUTATION, AND THE MIND - ADVANCES AND CHALLENGES AT INTERFACES: Proceedings of the 12th Granada Seminar on Computational and Statistical Physics. AIP, 2013. http://dx.doi.org/10.1063/1.4776524.