Scientific literature on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "COMPUTATIONAL NEUROSCIENCE MODELS".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Krasovskaya, Sofia, and W. Joseph MacInnes. "Salience Models: A Computational Cognitive Neuroscience Review." Vision 3, no. 4 (October 25, 2019): 56. http://dx.doi.org/10.3390/vision3040056.

Full text
Abstract:
The seminal model by Laurent Itti and Christof Koch demonstrated that we can compute the entire flow of visual processing from input to resulting fixations. Despite many replications and follow-ups, few have matched the impact of the original model—so what made this model so groundbreaking? We have selected five key contributions that distinguish the original salience model by Itti and Koch; namely, its contribution to our theoretical, neural, and computational understanding of visual processing, as well as the spatial and temporal predictions for fixation distributions. During the last 20 years, advances in the field have brought up various techniques and approaches to salience modelling, many of which tried to improve or add to the initial Itti and Koch model. One of the most recent trends has been to adopt the computational power of deep learning neural networks; however, this has also shifted their primary focus to spatial classification. We present a review of recent approaches to modelling salience, starting from direct variations of the Itti and Koch salience model to sophisticated deep-learning architectures, and discuss the models from the point of view of their contribution to computational cognitive neuroscience.
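The center-surround contrast at the core of the Itti and Koch model can be illustrated with a toy sketch. This is an illustrative simplification, not the published model: crude mean filters stand in for its Gaussian pyramids, a single feature channel replaces its color/intensity/orientation maps, and all helper names are hypothetical.

```python
import numpy as np

def mean_blur(img, k):
    """Mean filter with half-width k (edge-clamped windows)."""
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1].mean()
    return out

def saliency(img):
    """Center-surround contrast: |fine blur - coarse blur|, normalized to [0, 1]."""
    s = np.abs(mean_blur(img, 1) - mean_blur(img, 4))
    return s / s.max() if s.max() > 0 else s

# A single bright pixel on a dark background is the most salient location.
img = np.zeros((16, 16))
img[8, 8] = 1.0
smap = saliency(img)
```

In the full model, such conspicuity maps from several feature channels are normalized and summed into a single salience map whose peaks predict fixation locations.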
2

Bisht, Raj Kishor. "Design and Development of Mathematical Models for Computational Neuroscience." Mathematical Statistician and Engineering Applications 70, no. 1 (January 31, 2021): 612–20. http://dx.doi.org/10.17762/msea.v70i1.2515.

Full text
Abstract:
Through the use of mathematical and computer models, computational neuroscience is an interdisciplinary study that seeks to comprehend the principles and mechanisms underlying the functioning of the brain. These models are essential for understanding intricate neurological processes, giving information about the dynamics of the brain, and directing experimental research. The design and creation of mathematical models for computational neuroscience are presented in depth in this study. The sections that follow explore several mathematical modelling strategies used in computational neuroscience. These include statistical models that analyse and infer associations from experimental data, biophysical models that explain the electrical properties of individual neurons and their connections, and network models that capture the connectivity and dynamics of brain circuits. Each modelling strategy is examined in terms of its mathematical foundation, underlying assumptions, and potential limits, as well as instances of how it has been applied to certain areas of neuroscience study. The research also examines the parameterization, validation, and refinement stages of the model creation process, emphasising how the integration of experimental evidence, theoretical understanding, and computational simulations leads to iterative model refinement. The difficulties and open questions associated with modelling complex neurological systems are also covered, emphasising the necessity of multi-scale and multi-modal methods to fully capture the complex dynamics of the brain. The paper ends with a prognosis for the future of mathematical modelling in computational neuroscience. Emerging trends highlighted in the article include the incorporation of machine learning techniques and of anatomical and physiological constraints into models, and the development of virtual brain simulations for understanding brain illnesses and planning therapeutic approaches.
3

Martin, Andrea E. "A Compositional Neural Architecture for Language." Journal of Cognitive Neuroscience 32, no. 8 (August 2020): 1407–27. http://dx.doi.org/10.1162/jocn_a_01552.

Full text
Abstract:
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
4

Chirimuuta, M. "Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience." Synthese 191, no. 2 (November 27, 2013): 127–53. http://dx.doi.org/10.1007/s11229-013-0369-y.

Full text
5

Fellous, Jean-Marc, and Christiane Linster. "Computational Models of Neuromodulation." Neural Computation 10, no. 4 (May 1, 1998): 771–805. http://dx.doi.org/10.1162/089976698300017476.

Full text
Abstract:
Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.
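One of the simplest single-cell implementations surveyed in this literature treats a neuromodulator as a multiplicative gain change on a neuron's input-output (f-I) function. The sketch below is a generic illustration, not a specific model from the review; the threshold-linear rate neuron and all names are hypothetical.

```python
def firing_rate(current, gain=1.0, threshold=0.5):
    """Threshold-linear f-I curve; neuromodulation modeled as scaling the gain."""
    return gain * max(0.0, current - threshold)

# The same input drives a steeper response under a higher modulator level.
baseline = [firing_rate(i / 10, gain=1.0) for i in range(11)]
modulated = [firing_rate(i / 10, gain=2.0) for i in range(11)]
```

Other implementations reviewed act on different parameters instead, e.g. shifting the threshold or scaling specific synaptic or ionic conductances, which changes network dynamics rather than just response slope.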
6

Migliore, Michele, Thomas M. Morse, Andrew P. Davison, Luis Marenco, Gordon M. Shepherd, and Michael L. Hines. "ModelDB: Making Models Publicly Accessible to Support Computational Neuroscience." Neuroinformatics 1, no. 1 (2003): 135–40. http://dx.doi.org/10.1385/ni:1:1:135.

Full text
7

Jiang, Weihang. "Applications of machine learning in neuroscience and inspiration of reinforcement learning for computational neuroscience." Applied and Computational Engineering 4, no. 1 (June 14, 2023): 473–78. http://dx.doi.org/10.54254/2755-2721/4/2023308.

Full text
Abstract:
High-performance machine learning algorithms have always been a central concern for many researchers. Since its birth, machine learning has been a product of multidisciplinary integration. Especially in the field of neuroscience, models from related fields continue to inspire the development of neural networks and deepen our understanding of them. The mathematical and quantitative modeling approach brought about by machine learning is in turn feeding into the development of neuroscience; one of the emerging products of this exchange is computational neuroscience. Computational neuroscience has been pushing the boundaries of models of brain function in recent years, and just as early studies of the visual hierarchy influenced neural networks, computational neuroscience has great potential to lead to higher-performance machine learning algorithms, particularly deep learning algorithms with strong links to neuroscience. This paper first reviews the contributions of machine learning to neuroscience in recent years, especially in fMRI image recognition, and then considers possibilities for the future development of neural networks opened up by recent work in computational psychiatry on temporal difference models of dopamine and serotonin.
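The temporal difference model mentioned at the end can be sketched in a few lines: the TD error delta = r + gamma*V(s') - V(s) is the quantity classically compared with phasic dopamine responses. A minimal tabular illustration, assuming a hypothetical task in which a cue is followed by a fixed reward several time steps later (names and parameters are illustrative):

```python
def train_td(n_trials=200, n_steps=5, alpha=0.2, gamma=1.0, reward=1.0):
    """TD(0) value learning across repeated cue->reward trials.
    V[t] is the predicted future reward at time step t within a trial."""
    V = [0.0] * (n_steps + 1)           # V[n_steps] is the terminal state
    for _ in range(n_trials):
        for t in range(n_steps):
            r = reward if t == n_steps - 1 else 0.0
            delta = r + gamma * V[t + 1] - V[t]   # TD error, the "dopamine-like" signal
            V[t] += alpha * delta
    return V

V = train_td()
```

After learning, the reward is predicted from the cue onward, so the TD error at reward delivery shrinks toward zero, mirroring the classic shift of phasic dopamine responses from reward time to cue time.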
8

Gardner, Justin L., and Elisha P. Merriam. "Population Models, Not Analyses, of Human Neuroscience Measurements." Annual Review of Vision Science 7, no. 1 (September 15, 2021): 225–55. http://dx.doi.org/10.1146/annurev-vision-093019-111124.

Full text
Abstract:
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience measurement. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representation encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques. This potentially opens up the possibility to study a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity can be confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. 
Building upon the modeling tradition in vision science using considerations of whether population models meet a set of core criteria is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements.
9

Grindrod, Peter, and Desmond J. Higham. "Evolving graphs: dynamical models, inverse problems and propagation." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 466, no. 2115 (November 11, 2009): 753–70. http://dx.doi.org/10.1098/rspa.2009.0456.

Full text
Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
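The scenario of an evolutionary process unfolding on an evolving network can be sketched as follows. This is a deliberate simplification: plain Erdős-Rényi snapshots stand in for the paper's range-dependent random graphs, the process is a bare susceptible-infected spread, and all names and parameters are hypothetical.

```python
import random

def si_on_evolving_graph(n=60, p=0.05, steps=25, seed=1):
    """Susceptible-infected spread where the contact graph is redrawn each step,
    so both the network and the epidemic evolve in time."""
    rng = random.Random(seed)
    infected = {0}                      # a single seed node
    history = [len(infected)]
    for _ in range(steps):
        newly = set()
        for i in range(n):              # fresh random snapshot of links
            for j in range(i + 1, n):
                if rng.random() < p and (i in infected) != (j in infected):
                    newly.add(i)
                    newly.add(j)
        infected |= newly
        history.append(len(infected))
    return history

history = si_on_evolving_graph()
```

Because links are redrawn every step, nodes that were unreachable in one snapshot can be infected in the next, which is exactly the interplay of two dynamics the paper studies.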
10

Gisiger, T. "Computational models of association cortex." Current Opinion in Neurobiology 10, no. 2 (April 1, 2000): 250–59. http://dx.doi.org/10.1016/s0959-4388(00)00075-1.

Full text

Theses on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Marsh, Steven Joseph Thomas. "Efficient programming models for neurocomputation." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709268.

Full text
2

Zhu, Mengchen. "Sparse coding models of neural response in the primary visual cortex." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53868.

Full text
Abstract:
Sparse coding is an influential unsupervised learning approach proposed as a theoretical model of the encoding process in the primary visual cortex (V1). While sparse coding has been successful in explaining classical receptive field properties of simple cells, it was unclear whether it can account for more complex response properties in a variety of cell types. In this dissertation, we demonstrate that sparse coding and its variants are consistent with key aspects of neural response in V1, including many contextual and nonlinear effects, a number of inhibitory interneuron properties, as well as the variance and correlation distributions in the population response. The results suggest that important response properties in V1 can be interpreted as emergent effects of a neural population efficiently representing the statistical structures of natural scenes under resource constraints. Based on the models, we make predictions of the circuit structure and response properties in V1 that can be verified by future experiments.
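Sparse coding inference itself can be sketched with the standard iterative soft-thresholding algorithm (ISTA) for the L1-penalized reconstruction objective. This is a generic sketch of the technique, not the dissertation's V1 models, shown on a trivially small orthonormal dictionary.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=100):
    """Iterative soft-thresholding for the sparse coding problem
       min_a  0.5 * ||x - D @ a||^2 + lam * ||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - step * D.T @ (D @ a - x)          # gradient step on the quadratic
        a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # L1 shrinkage
    return a

# With an orthonormal dictionary the solution is just soft-thresholding of x:
# strong inputs survive (shrunk by lam), weak ones are silenced.
code = ista(np.eye(4), np.array([1.0, 0.05, 0.0, 0.0]), lam=0.1)
```

Trained on natural-image patches with an overcomplete dictionary, this objective famously yields localized, oriented, Gabor-like basis functions resembling V1 simple-cell receptive fields.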
3

Földiak, Peter. "Models of sensory coding." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239097.

Full text
4

Woldman, Wessel. "Emergent phenomena from dynamic network models: mathematical analysis of EEG from people with IGE." Thesis, University of Exeter, 2016. http://hdl.handle.net/10871/23297.

Full text
Abstract:
In this thesis mathematical techniques and models are applied to electroencephalographic (EEG) recordings to study mechanisms of idiopathic generalised epilepsy (IGE). First, we compare network structures derived from resting-state EEG from people with IGE, their unaffected relatives, and healthy controls. Next, these static networks are combined with a dynamical model describing the activity of a cortical region as a population of phase-oscillators. We then examine the potential of the differences found in the static networks and the emergent properties of the dynamic network as individual biomarkers of IGE. The emphasis of this approach is on discerning the potential of these markers at the level of an individual subject rather than their ability to identify differences at a group level. Finally, we extend a dynamic model of seizure onset to investigate how epileptiform discharges vary over the course of the day in ambulatory EEG recordings from people with IGE. By perturbing the dynamics describing the excitability of the system, we demonstrate the model can reproduce discharge distributions on an individual level which are shown to express a circadian tone. The emphasis of the model approach is on understanding how changes in excitability within brain regions, modulated by sleep, metabolism, endocrine axes, or anti-epileptic drugs (AEDs), can drive the emergence of epileptiform activity in large-scale brain networks. Our results demonstrate that studying EEG recordings from people with IGE can lead to new mechanistic insight on the idiopathic nature of IGE, and may eventually lead to clinical applications. We show that biomarkers derived from dynamic network models perform significantly better as classifiers than biomarkers based on static network properties. Hence, our results provide additional evidence that the interplay between the dynamics of specific brain regions, and the network topology governing the interactions between these regions, is crucial in the generation of emergent epileptiform activity. Pathological activity may emerge due to abnormalities in either of those factors, or a combination of both, and hence it is essential to develop new techniques to characterise this interplay theoretically and to validate predictions experimentally.
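The phase-oscillator description of cortical regions used in this kind of dynamic network model can be illustrated with a Kuramoto system evolving on a given connectivity matrix. This is a generic sketch, not the thesis model: the network, frequencies, and parameter values below are arbitrary placeholders for EEG-derived quantities.

```python
import math
import random

def kuramoto_sync(adj, omega, K=2.0, dt=0.05, steps=2000, seed=0):
    """Euler integration of Kuramoto phase oscillators on a weighted network.
    Returns the final order parameter r in [0, 1] (1 = full phase synchrony)."""
    rng = random.Random(seed)
    n = len(omega)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            coupling = sum(adj[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
            new.append(theta[i] + dt * (omega[i] + K * coupling / n))
        theta = new
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

# Identical oscillators on an all-to-all network lock together under strong coupling.
n = 8
full = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
r = kuramoto_sync(full, omega=[0.0] * n)
```

In the biomarker setting, the same simulation is run on each subject's EEG-derived network, and emergent quantities such as r serve as candidate individual-level classifiers.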
5

Mender, Bedeho M. W. "Models of primate supraretinal visual representations." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:ce1fff8e-db5c-46e4-b5aa-7439465c2a77.

Full text
Abstract:
This thesis investigates a set of non-classical visual receptive field properties observed in the primate brain. Two main phenomena were explored. The first phenomenon was neurons with head-centered visual receptive fields, in which a neuron responds maximally to a visual stimulus in the same head-centered location across all eye positions. The second phenomenon was perisaccadic receptive field dynamics, which involves a range of experimentally observed response behaviours of an eye-centered neuron associated with the advent of a saccade that relocates the neuron's receptive field. For each of these two phenomena, a hypothesis was proposed for how a neural circuit with a suitable initial architecture and synaptic learning rules could, when subjected to visually-guided training, develop the receptive field properties in question. Corresponding neural network models were first trained as hypothesized, and subsequently tested in conditions similar to experimental tasks used to interrogate the physiology of the relevant primate neural circuits. The behaviour of the models was compared to neurophysiological observations as a metric for their explanatory power. In both cases the neural network models were in broad agreement with experimental observations, and the operation of these models was studied to shed light on the neural processing behind these neural phenomena in the brain.
6

Shepardson, Dylan. "Algorithms for inverting Hodgkin-Huxley type neuron models." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31686.

Full text
Abstract:
Thesis (Ph.D)--Algorithms, Combinatorics, and Optimization, Georgia Institute of Technology, 2010.
Committee Chair: Tovey, Craig; Committee Member: Butera, Rob; Committee Member: Nemirovski, Arkadi; Committee Member: Prinz, Astrid; Committee Member: Sokol, Joel. Part of the SMARTech Electronic Thesis and Dissertation Collection.
7

Boatin, William. "Characterization of neuron models." Thesis, Georgia Institute of Technology, 2005. http://etd.gatech.edu/theses/available/etd-04182005-181732/.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2006.
Dr. Robert H. Lee, Committee Member; Dr. Kurt Wiesenfeld, Committee Member; Dr. Robert J. Butera, Committee Member.
8

Biddell, Kevin Michael. "Creation of a Biophysical Model of a Striatal Dorsal Lateral Medium Spiny Neuron Incorporating Dendritic Excitation by NMDA and AMPA Receptor Models." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1196211076.

Full text
9

Vellmer, Sebastian. "Applications of the Fokker-Planck Equation in Computational and Cognitive Neuroscience." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21597.

Full text
Abstract:
This thesis is concerned with the calculation of statistics, in particular the power spectra, of point processes generated by stochastic multidimensional integrate-and-fire (IF) neurons, networks of IF neurons and decision-making models from the corresponding Fokker-Planck equations. In the brain, information is encoded by sequences of action potentials. In studies that focus on spike timing, IF neurons that drastically simplify the spike generation have become the standard model. One-dimensional IF neurons do not suffice to accurately model neural dynamics, however, the extension towards multiple dimensions yields realistic behavior at the price of growing complexity. The first part of this work develops a theory of spike-train power spectra for stochastic, multidimensional IF neurons. From the Fokker-Planck equation, a set of partial differential equations is derived that describes the stationary probability density, the firing rate and the spike-train power spectrum. In the second part of this work, a mean-field theory of large and sparsely connected homogeneous networks of spiking neurons is developed that takes into account the self-consistent temporal correlations of spike trains. Neural input is approximated by colored Gaussian noise generated by a multidimensional Ornstein-Uhlenbeck process of which the coefficients are initially unknown but determined by the self-consistency condition and define the solution of the theory. To explore heterogeneous networks, an iterative scheme is extended to determine the distribution of spectra. In the third part, the Fokker-Planck equation is applied to calculate the statistics of sequences of binary decisions from diffusion-decision models (DDM). For the analytically tractable DDM, the statistics are calculated from the corresponding Fokker-Planck equation. To determine the statistics for nonlinear models, the threshold-integration method is generalized.
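The basic object analyzed here can be sketched directly: a leaky integrate-and-fire neuron driven by Gaussian white noise, whose stationary firing rate and spike-train power spectrum the Fokker-Planck approach yields analytically. Below the rate is simply estimated by simulation; the parameter values are arbitrary illustrations, not the thesis's.

```python
import math
import random

def lif_rate(mu=1.2, sigma=0.3, tau=1.0, v_th=1.0, v_reset=0.0,
             dt=0.001, t_max=200.0, seed=42):
    """Euler-Maruyama simulation of tau dv/dt = mu - v + sigma*sqrt(2*tau)*xi(t)
    with threshold-and-reset spiking; returns the empirical firing rate."""
    rng = random.Random(seed)
    v, n_spikes = v_reset, 0
    for _ in range(int(t_max / dt)):
        v += (dt / tau) * (mu - v) + sigma * math.sqrt(2.0 * dt / tau) * rng.gauss(0.0, 1.0)
        if v >= v_th:                   # fire and reset
            n_spikes += 1
            v = v_reset
    return n_spikes / t_max

rate = lif_rate()
```

For comparison, in the noiseless suprathreshold case the rate is 1 / (tau * ln((mu - v_reset) / (mu - v_th))), about 0.56 spikes per time unit for these parameters; the noise perturbs this moderately, and the Fokker-Planck machinery gives the exact correction in closed form.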
10

Hendrickson, Eric B. "Morphologically simplified conductance based neuron models: principles of construction and use in parameter optimization." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33905.

Full text
Abstract:
The dynamics of biological neural networks are of great interest to neuroscientists and are frequently studied using conductance-based compartmental neuron models. For speed and ease of use, neuron models are often reduced in morphological complexity. This reduction may affect input processing and prevent the accurate reproduction of neural dynamics. However, such effects are not yet well understood. Therefore, for my first aim I analyzed the processing capabilities of 'branched' or 'unbranched' reduced models by collapsing the dendritic tree of a morphologically realistic 'full' globus pallidus neuron model while maintaining all other model parameters. Branched models maintained the original detailed branching structure of the full model while the unbranched models did not. I found that full model responses to somatic inputs were generally preserved by both types of reduced model but that branched reduced models were better able to maintain responses to dendritic inputs. However, inputs that caused dendritic sodium spikes, for instance, could not be accurately reproduced by any reduced model. Based on my analyses, I provide recommendations on how to construct reduced models and indicate suitable applications for different levels of reduction. In particular, I recommend that unbranched reduced models be used for fast searches of parameter space given somatic input output data. The intrinsic electrical properties of neurons depend on the modifiable behavior of their ion channels. Obtaining a quality match between recorded voltage traces and the output of a conductance based compartmental neuron model depends on accurate estimates of the kinetic parameters of the channels in the biological neuron. Indeed, mismatches in channel kinetics may be detectable as failures to match somatic neural recordings when tuning model conductance densities. In my first aim, I showed that this is a task for which unbranched reduced models are ideally suited. 
Therefore, for my second aim I optimized unbranched reduced model parameters to match three experimentally characterized globus pallidus neurons by performing two stages of automated searches. In the first stage, I set conductance densities free and found that even the best matches to experimental data exhibited unavoidable problems. I hypothesized that these mismatches were due to limitations in channel model kinetics. To test this hypothesis, I performed a second stage of searches with free channel kinetics and observed decreases in the mismatches from the first stage. Additionally, some kinetic parameters consistently shifted to new values in multiple cells, suggesting the possibility for tailored improvements to channel models. Given my results and the potential for cell specific modulation of channel kinetics, I recommend that experimental kinetic data be considered as a starting point rather than as a gold standard for the development of neuron models.
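The "fast search of parameter space given somatic input-output data" for which unbranched reduced models are recommended can be caricatured with a brute-force fit of a toy passive one-compartment model. All names are hypothetical, and the model is deliberately far simpler than the conductance-based models in the thesis.

```python
def simulate(g_leak, i_inj, steps=100, dt=0.1):
    """Toy passive one-compartment response: dv/dt = -g_leak * v + i_inj."""
    v, trace = 0.0, []
    for _ in range(steps):
        v += dt * (-g_leak * v + i_inj)
        trace.append(v)
    return trace

def fit(target, candidates):
    """Pick the parameter pair whose simulated trace best matches the target."""
    def sq_error(params):
        return sum((a - b) ** 2 for a, b in zip(simulate(*params), target))
    return min(candidates, key=sq_error)

# Recover known parameters from a "recorded" trace via grid search.
target = simulate(0.5, 1.0)
grid = [(g / 10, i / 10) for g in range(1, 11) for i in range(1, 11)]
best = fit(target, grid)
```

Real fitting pipelines replace the grid with evolutionary or gradient-based searches and the toy model with a reduced compartmental model, but the structure, simulate, compare to recordings, and minimize mismatch, is the same.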

Books on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Hecht-Nielsen, Robert, and Thomas McKenna, eds. Computational Models for Neuroscience. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0085-0.

Full text
2

Schwartz, Eric L., ed. Computational neuroscience. Cambridge, Mass.: MIT Press, 1990.

Find full text
3

Computational neuroscience. Cambridge, Mass.: MIT Press, 1990.

Find full text
4

Pardalos, P. M. Computational neuroscience. New York: Springer, 2010.

Find full text
5

Ascoli, Giorgio A., ed. Computational neuroanatomy: Principles and methods. Totowa, N.J.: Humana Press, 2002.

Find full text
6

Chakravarthy, V. Srinivasa, and Ahmed A. Moustafa. Computational Neuroscience Models of the Basal Ganglia. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8494-2.

Full text
7

De Schutter, Erik, ed. Computational modeling methods for neuroscientists. Cambridge, Mass.: MIT Press, 2010.

Find full text
8

Fundamentals of computational neuroscience. 2nd ed. Oxford: Oxford University Press, 2010.

Find full text
9

Trappenberg, Thomas P. Fundamentals of computational neuroscience. 2nd ed. Oxford: Oxford University Press, 2010.

Find full text

Book chapters on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

van Gils, Stephan, and Wim van Drongelen. "Epilepsy: Computational Models." In Encyclopedia of Computational Neuroscience, 1121–34. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_504.

Full text
2

van Gils, Stephan, and Wim van Drongelen. "Epilepsy: Computational Models." In Encyclopedia of Computational Neuroscience, 1–17. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7320-6_504-1.

Full text
3

Skinner, Frances K., Nancy Kopell, and Brian Mulloney. "Mathematical Models of the Crayfish Swimmeret System." In Computational Neuroscience, 839–43. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-9800-5_130.

Full text
4

Lee, D. D., B. Y. Reis, H. S. Seung, and D. W. Tank. "Nonlinear Network Models of the Oculomotor Integrator." In Computational Neuroscience, 371–77. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-9800-5_60.

Full text
5

Lansner, Anders, Örjan Ekeberg, Erik Fransén, Per Hammarlund, and Tomas Wilhelmsson. "Detailed Simulation of Large Scale Neural Network Models." In Computational Neuroscience, 931–35. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-9800-5_144.

6

Bednar, James A., and Risto Miikkulainen. "Pattern-Generator-Driven Development in Self-Organizing Models." In Computational Neuroscience, 317–23. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-4831-7_53.

7

Cheong, Jin Hyun, Eshin Jolly, Sunhae Sul, and Luke J. Chang. "Computational Models in Social Neuroscience." In Computational Models of Brain and Behavior, 229–44. Chichester, UK: John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781119159193.ch17.

8

Yu, Angela J. "Computational Models of Neuromodulation." In Encyclopedia of Computational Neuroscience, 761–66. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_625.

9

Yu, Angela J. "Computational Models of Neuromodulation." In Encyclopedia of Computational Neuroscience, 1–6. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7320-6_625-1.

10

Jardri, Renaud, and Sophie Denève. "Computational Models of Hallucinations." In The Neuroscience of Hallucinations, 289–313. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4121-2_16.


Conference papers on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Tirupattur, Naveen, Christopher C. Lapish, Snehasis Mukhopadhyay, Tuan D. Pham, Xiaobo Zhou, Hiroshi Tanaka, Mayumi Oyama-Higa et al. "Text Mining for Neuroscience." In 2011 INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL MODELS FOR LIFE SCIENCES (CMLS-11). AIP, 2011. http://dx.doi.org/10.1063/1.3596634.

2

Mohnert, Florian, Mateo Tošić, and Falk Lieder. "Testing Computational Models of Goal Pursuit." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1350-0.

3

Uchiyama, Ryutaro, Claudio Tennie, and Charley Wu. "Model-Based Assimilation Transmits and Recombines World Models." In 2023 Conference on Cognitive Computational Neuroscience. Oxford, United Kingdom: Cognitive Computational Neuroscience, 2023. http://dx.doi.org/10.32470/ccn.2023.1722-0.

4

Muzellec, Sabine, Mathieu Chalvidal, Thomas Serre, and Rufin VanRullen. "Accurate implementation of computational neuroscience models through neural ODEs." In 2022 Conference on Cognitive Computational Neuroscience. San Francisco, California, USA: Cognitive Computational Neuroscience, 2022. http://dx.doi.org/10.32470/ccn.2022.1165-0.

5

Belledonne, Mario, Chloë Geller, and Ilker Yildirim. "Goal-conditioned world models: Adaptive computation over multi-granular generative models explains human scene perception." In 2023 Conference on Cognitive Computational Neuroscience. Oxford, United Kingdom: Cognitive Computational Neuroscience, 2023. http://dx.doi.org/10.32470/ccn.2023.1663-0.

6

Speekenbrink, Maarten. "Identifiability of Gaussian Bayesian bandit models." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1335-0.

7

Baltieri, Manuel, and Christopher L. Buckley. "Active Inference: Computational Models of Motor Control without Efference Copy." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1144-0.

8

Yao, Yuanwei, Tatia Buidze, and Jan Gläscher. "Effectiveness of different computational models during the Tacit Communication Game." In 2023 Conference on Cognitive Computational Neuroscience. Oxford, United Kingdom: Cognitive Computational Neuroscience, 2023. http://dx.doi.org/10.32470/ccn.2023.1268-0.

9

Weidinger, Laura, Andrea Gradassi, Lucas Molleman, and Wouter van den Bos. "Test-retest reliability of canonical reinforcement learning models." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1053-0.

10

Ceja, Vanessa, Yussuf Ezzeldine, and Megan A. K. Peters. "Models of confidence to facilitate engaging task designs." In 2022 Conference on Cognitive Computational Neuroscience. San Francisco, California, USA: Cognitive Computational Neuroscience, 2022. http://dx.doi.org/10.32470/ccn.2022.1150-0.


Organizational reports on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [n.p.], November 2018. http://dx.doi.org/10.31812/123456789/2648.

Abstract:
The article substantiates the necessity to develop training methods of computer simulation of neural networks in the spreadsheet environment. The systematic review of their application to simulating artificial neural networks is performed. The authors distinguish basic approaches to solving the problem of network computer simulation training in the spreadsheet environment: joint application of spreadsheets and tools of neural network simulation; application of third-party add-ins to spreadsheets; development of macros using the embedded languages of spreadsheets; use of standard spreadsheet add-ins for non-linear optimization; creation of neural networks in the spreadsheet environment without add-ins and macros. After analyzing a collection of writings of 1890-1950, the research determines the role of the scientific journal “Bulletin of Mathematical Biophysics”, its founder Nicolas Rashevsky and the scientific community around the journal in creating and developing models and methods of computational neuroscience. There are identified psychophysical basics of creating neural networks, mathematical foundations of neural computing and methods of neuroengineering (image recognition, in particular). The role of Walter Pitts in combining the descriptive and quantitative theories of training is discussed. It is shown that to acquire neural simulation competences in the spreadsheet environment, one should master the models based on the historical and genetic approach. It is indicated that there are three groups of models, which are promising in terms of developing corresponding methods – the continuous two-factor model of Rashevsky, the discrete model of McCulloch and Pitts, and the discrete-continuous models of Householder and Landahl.
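The discrete McCulloch–Pitts model named in the abstract reduces a neuron to a binary threshold unit; a minimal sketch (an illustration only, not code from the cited report — the function name and parameters are hypothetical):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: fire (1) iff the weighted sum of the
    binary inputs reaches the threshold, else stay silent (0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2, the unit computes a two-input AND gate,
# one of the classic logic functions realizable by a single such neuron.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # 1 (fires)
print(mcculloch_pitts([1, 0], [1, 1], 2))  # 0 (silent)
```

Because the update rule is just a weighted sum and a comparison, it is also straightforward to reproduce in a spreadsheet cell formula, which is the point the report develops.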
