A selection of scholarly literature on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "COMPUTATIONAL NEUROSCIENCE MODELS".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided these details are available in the work's metadata.

Journal articles on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Krasovskaya, Sofia, and W. Joseph MacInnes. "Salience Models: A Computational Cognitive Neuroscience Review." Vision 3, no. 4 (October 25, 2019): 56. http://dx.doi.org/10.3390/vision3040056.

Abstract:
The seminal model by Laurent Itti and Christoph Koch demonstrated that we can compute the entire flow of visual processing from input to resulting fixations. Despite many replications and follow-ups, few have matched the impact of the original model—so what made this model so groundbreaking? We have selected five key contributions that distinguish the original salience model by Itti and Koch; namely, its contribution to our theoretical, neural, and computational understanding of visual processing, as well as the spatial and temporal predictions for fixation distributions. During the last 20 years, advances in the field have produced various techniques and approaches to salience modelling, many of which have tried to improve on or add to the initial Itti and Koch model. One of the most recent trends has been to adopt the computational power of deep learning neural networks; however, this has also shifted their primary focus to spatial classification. We present a review of recent approaches to modelling salience, starting from direct variations of the Itti and Koch salience model to sophisticated deep-learning architectures, and discuss the models from the point of view of their contribution to computational cognitive neuroscience.
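As a concrete illustration of the center-surround principle at the heart of the Itti and Koch model, here is a minimal, intensity-only sketch in Python. Two Gaussian blurs stand in for the original multi-scale pyramid; the scales and the random test image are illustrative assumptions, not the published implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def salience_map(image, center_sigma=2.0, surround_sigma=8.0):
    """Crude salience: center-surround contrast of a 2D grayscale image."""
    center = gaussian_filter(image.astype(float), center_sigma)
    surround = gaussian_filter(image.astype(float), surround_sigma)
    contrast = np.abs(center - surround)   # regions that differ from their neighborhood pop out
    span = contrast.max() - contrast.min()
    # Normalize to [0, 1] so maps from several feature channels could be summed.
    return (contrast - contrast.min()) / span if span > 0 else contrast

img = np.random.rand(64, 64)               # stand-in input image
smap = salience_map(img)
# The maximum of the map would be the model's next predicted fixation.
fixation = np.unravel_index(np.argmax(smap), smap.shape)
```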
2

Bisht, Raj Kishor. "Design and Development of Mathematical Models for Computational Neuroscience." Mathematical Statistician and Engineering Applications 70, no. 1 (January 31, 2021): 612–20. http://dx.doi.org/10.17762/msea.v70i1.2515.

Abstract:
Computational neuroscience is an interdisciplinary field that seeks, through mathematical and computer models, to understand the principles and mechanisms underlying brain function. These models are essential for understanding intricate neurological processes, giving insight into brain dynamics, and guiding experimental research. This study presents in depth the design and development of mathematical models for computational neuroscience. The sections that follow explore several mathematical modelling strategies used in computational neuroscience: statistical models that analyse and infer associations from experimental data, biophysical models that describe the electrical properties of individual neurons and their connections, and network models that capture the connectivity and dynamics of brain circuits. Each modelling strategy is examined in terms of its mathematical foundation, underlying assumptions, and potential limits, along with instances of how it has been applied to particular areas of neuroscience research. The study also examines the parameterization, validation, and refinement stages of the model creation process, emphasising how the integration of experimental evidence, theoretical understanding, and computational simulations leads to iterative model refinement. The difficulties and open questions associated with modelling complex neurological systems are also covered, underlining the need for multi-scale and multi-modal methods to fully capture the complex dynamics of the brain. The paper ends with a prognosis for the future of mathematical modelling in computational neuroscience. Emerging trends highlighted in the article include the development of virtual brain simulations for understanding brain disorders and planning therapeutic approaches, the incorporation of machine learning techniques, and the integration of anatomical and physiological constraints into models.
3

Martin, Andrea E. "A Compositional Neural Architecture for Language." Journal of Cognitive Neuroscience 32, no. 8 (August 2020): 1407–27. http://dx.doi.org/10.1162/jocn_a_01552.

Abstract:
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and move toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
4

Chirimuuta, M. "Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience." Synthese 191, no. 2 (November 27, 2013): 127–53. http://dx.doi.org/10.1007/s11229-013-0369-y.

5

Fellous, Jean-Marc, and Christiane Linster. "Computational Models of Neuromodulation." Neural Computation 10, no. 4 (May 1, 1998): 771–805. http://dx.doi.org/10.1162/089976698300017476.

Abstract:
Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulators on their computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single-cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.
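One recurring theme of this review, gain control, can be sketched in a few lines. The sigmoid rate function and the idea of a neuromodulator multiplicatively scaling its gain are generic textbook assumptions used here for illustration, not the specific equations surveyed in the paper.

```python
import numpy as np

def rate(current, gain):
    """Firing rate as a sigmoid of input current, scaled by a neuromodulatory gain."""
    return 1.0 / (1.0 + np.exp(-gain * (current - 1.0)))

currents = np.linspace(-2.0, 4.0, 7)
low_mod = rate(currents, gain=0.5)    # weak neuromodulation: shallow response curve
high_mod = rate(currents, gain=2.0)   # strong neuromodulation: steep response curve
# The same circuit computes a different input-output function at each
# neuromodulator level, one sense in which neuromodulation "increases and
# controls computational complexity".
```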
6

Migliore, Michele, Thomas M. Morse, Andrew P. Davison, Luis Marenco, Gordon M. Shepherd, and Michael L. Hines. "ModelDB: Making Models Publicly Accessible to Support Computational Neuroscience." Neuroinformatics 1, no. 1 (2003): 135–40. http://dx.doi.org/10.1385/ni:1:1:135.

7

Jiang, Weihang. "Applications of machine learning in neuroscience and inspiration of reinforcement learning for computational neuroscience." Applied and Computational Engineering 4, no. 1 (June 14, 2023): 473–78. http://dx.doi.org/10.54254/2755-2721/4/2023308.

Abstract:
High-performance machine learning algorithms have always been a concern of many researchers. Since its birth, machine learning has been a product of multidisciplinary integration. In the field of neuroscience especially, models from related fields continue to inspire the development of neural networks and to deepen our understanding of them. The mathematical and quantitative modelling approach brought about by machine learning is, in turn, feeding back into the development of neuroscience; one of the emerging products of this exchange is computational neuroscience. Computational neuroscience has been pushing the boundaries of models of brain function in recent years, and just as early studies of the visual hierarchy influenced neural networks, computational neuroscience has great potential to lead to higher-performance machine learning algorithms, particularly deep learning algorithms with strong links to neuroscience. This paper first reviews the contributions of machine learning to neuroscience in recent years, especially in fMRI image recognition, and then considers possibilities for the future development of neural networks opened up by recent work in computational psychiatry on temporal-difference models of dopamine and serotonin.
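The temporal-difference model mentioned at the end of the abstract can be stated very compactly: the TD error acts as a phasic, dopamine-like teaching signal. The sketch below assumes a toy four-state episode and arbitrary learning parameters purely for illustration.

```python
import numpy as np

alpha, gamma = 0.1, 0.9          # learning rate and discount factor (assumed values)
V = np.zeros(4)                  # learned state values
states = [0, 1, 2, 3]            # toy episode: a chain of states
rewards = [0.0, 0.0, 1.0]        # reward delivered on each transition

for _ in range(100):             # replay the episode; V converges to discounted returns
    for s, s_next, r in zip(states[:-1], states[1:], rewards):
        delta = r + gamma * V[s_next] - V[s]   # TD error: the "dopamine-like" signal
        V[s] += alpha * delta
```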
8

Gardner, Justin L., and Elisha P. Merriam. "Population Models, Not Analyses, of Human Neuroscience Measurements." Annual Review of Vision Science 7, no. 1 (September 15, 2021): 225–55. http://dx.doi.org/10.1146/annurev-vision-093019-111124.

Abstract:
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience measurement. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representation encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques, opening up the possibility of studying a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity is confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. Building on the modeling tradition in vision science, and considering whether population models meet a set of core criteria, is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements.
9

Grindrod, Peter, and Desmond J. Higham. "Evolving graphs: dynamical models, inverse problems and propagation." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 466, no. 2115 (November 11, 2009): 753–70. http://dx.doi.org/10.1098/rspa.2009.0456.

Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
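A range-dependent random graph of the kind introduced here can be sketched directly: nodes sit on a line, and an edge between nodes i and j appears with probability decaying in the range |i - j|, resampled at each time step. The geometric decay law and independent resampling are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, lam = 50, 10, 0.8
ranges = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
p = lam ** ranges                                # edge probability decays with range

snapshots = []
for _ in range(steps):
    A = (rng.random((N, N)) < p).astype(int)
    A = np.triu(A, 1)                            # undirected, no self-loops
    snapshots.append(A + A.T)
# `snapshots` is the kind of network sequence a spectral calibration
# procedure like the paper's would take as input.
```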
10

Gisiger, T. "Computational models of association cortex." Current Opinion in Neurobiology 10, no. 2 (April 1, 2000): 250–59. http://dx.doi.org/10.1016/s0959-4388(00)00075-1.


Dissertations on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Marsh, Steven Joseph Thomas. "Efficient programming models for neurocomputation." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709268.

2

Zhu, Mengchen. "Sparse coding models of neural response in the primary visual cortex." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53868.

Abstract:
Sparse coding is an influential unsupervised learning approach proposed as a theoretical model of the encoding process in the primary visual cortex (V1). While sparse coding has been successful in explaining classical receptive field properties of simple cells, it was unclear whether it can account for more complex response properties in a variety of cell types. In this dissertation, we demonstrate that sparse coding and its variants are consistent with key aspects of neural response in V1, including many contextual and nonlinear effects, a number of inhibitory interneuron properties, as well as the variance and correlation distributions in the population response. The results suggest that important response properties in V1 can be interpreted as emergent effects of a neural population efficiently representing the statistical structures of natural scenes under resource constraints. Based on the models, we make predictions of the circuit structure and response properties in V1 that can be verified by future experiments.
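The encoding step of sparse coding has a standard formulation: find coefficients that reconstruct a patch from a dictionary while paying an L1 sparsity penalty. Below is a minimal sketch using iterative shrinkage-thresholding (ISTA) with a fixed random dictionary; the dictionary, penalty, and step size are assumed for illustration, whereas models like those in this dissertation also learn the dictionary from natural images.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))            # dictionary: 64-dim patches, 128 basis functions
D /= np.linalg.norm(D, axis=0)                # unit-norm columns
x = rng.standard_normal(64)                   # one image patch (stand-in data)

lam = 0.1                                     # sparsity penalty
step = 1.0 / np.linalg.norm(D, 2) ** 2        # safe gradient step size
a = np.zeros(128)                             # sparse coefficients ("neural responses")
for _ in range(200):
    grad = D.T @ (D @ a - x)                  # gradient of 0.5 * ||x - D a||^2
    a -= step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
```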
3

Földiak, Peter. "Models of sensory coding." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239097.

4

Woldman, Wessel. "Emergent phenomena from dynamic network models : mathematical analysis of EEG from people with IGE." Thesis, University of Exeter, 2016. http://hdl.handle.net/10871/23297.

Abstract:
In this thesis mathematical techniques and models are applied to electroencephalographic (EEG) recordings to study mechanisms of idiopathic generalised epilepsy (IGE). First, we compare network structures derived from resting-state EEG from people with IGE, their unaffected relatives, and healthy controls. Next, these static networks are combined with a dynamical model describing the activity of a cortical region as a population of phase-oscillators. We then examine the potential of the differences found in the static networks and the emergent properties of the dynamic network as individual biomarkers of IGE. The emphasis of this approach is on discerning the potential of these markers at the level of an individual subject rather than their ability to identify differences at a group level. Finally, we extend a dynamic model of seizure onset to investigate how epileptiform discharges vary over the course of the day in ambulatory EEG recordings from people with IGE. By perturbing the dynamics describing the excitability of the system, we demonstrate the model can reproduce discharge distributions on an individual level which are shown to express a circadian tone. The emphasis of the model approach is on understanding how changes in excitability within brain regions, modulated by sleep, metabolism, endocrine axes, or anti-epileptic drugs (AEDs), can drive the emergence of epileptiform activity in large-scale brain networks. Our results demonstrate that studying EEG recordings from people with IGE can lead to new mechanistic insight on the idiopathic nature of IGE, and may eventually lead to clinical applications. We show that biomarkers derived from dynamic network models perform significantly better as classifiers than biomarkers based on static network properties. Hence, our results provide additional evidence that the interplay between the dynamics of specific brain regions, and the network topology governing the interactions between these regions, is crucial in the generation of emergent epileptiform activity. Pathological activity may emerge due to abnormalities in either of those factors, or a combination of both, and hence it is essential to develop new techniques to characterise this interplay theoretically and to validate predictions experimentally.
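The phase-oscillator population model referred to above is commonly written as a network of Kuramoto oscillators. The sketch below couples oscillators through a random binary matrix standing in for an EEG-derived network; the coupling strength, frequency distribution, and connection density are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, dt, steps = 20, 0.5, 0.01, 5000
A = (rng.random((N, N)) < 0.2).astype(float)     # stand-in for an EEG-derived network
np.fill_diagonal(A, 0)
omega = rng.normal(10.0, 1.0, N)                 # natural frequencies of cortical regions
theta = rng.uniform(0, 2 * np.pi, N)             # initial phases

for _ in range(steps):                           # Euler integration of the Kuramoto model
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

# Global synchrony (Kuramoto order parameter); sustained high values are a
# crude proxy for the emergent pathological synchronisation studied here.
R = np.abs(np.exp(1j * theta).mean())
```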
5

Mender, Bedeho M. W. "Models of primate supraretinal visual representations." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:ce1fff8e-db5c-46e4-b5aa-7439465c2a77.

Abstract:
This thesis investigates a set of non-classical visual receptive field properties observed in the primate brain. Two main phenomena were explored. The first phenomenon was neurons with head-centered visual receptive fields, in which a neuron responds maximally to a visual stimulus in the same head-centered location across all eye positions. The second phenomenon was perisaccadic receptive field dynamics, which involves a range of experimentally observed response behaviours of an eye-centered neuron associated with the advent of a saccade that relocates the neuron's receptive field. For each of these two phenomena, a hypothesis was proposed for how a neural circuit with a suitable initial architecture and synaptic learning rules could, when subjected to visually-guided training, develop the receptive field properties in question. Corresponding neural network models were first trained as hypothesized, and subsequently tested in conditions similar to experimental tasks used to interrogate the physiology of the relevant primate neural circuits. The behaviour of the models was compared to neurophysiological observations as a metric for their explanatory power. In both cases the neural network models were in broad agreement with experimental observations, and the operation of these models was studied to shed light on the neural processing behind these neural phenomena in the brain.
6

Shepardson, Dylan. "Algorithms for inverting Hodgkin-Huxley type neuron models." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31686.

Abstract:
Thesis (Ph.D)--Algorithms, Combinatorics, and Optimization, Georgia Institute of Technology, 2010.
Committee Chair: Tovey, Craig; Committee Member: Butera, Rob; Committee Member: Nemirovski, Arkadi; Committee Member: Prinz, Astrid; Committee Member: Sokol, Joel. Part of the SMARTech Electronic Thesis and Dissertation Collection.
7

Boatin, William. "Characterization of neuron models." Thesis, Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-04182005-181732/.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2006.
Dr. Robert H. Lee, Committee Member; Dr. Kurt Wiesenfeld, Committee Member; Dr. Robert J. Butera, Committee Member.
8

Biddell, Kevin Michael. "Creation of a Biophysical Model of a Striatal Dorsal Lateral Medium Spiny Neuron Incorporating Dendritic Excitation by NMDA and AMPA Receptor Models." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1196211076.

9

Vellmer, Sebastian. "Applications of the Fokker-Planck Equation in Computational and Cognitive Neuroscience." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21597.

Abstract:
This thesis is concerned with the calculation of statistics, in particular the power spectra, of point processes generated by stochastic multidimensional integrate-and-fire (IF) neurons, networks of IF neurons, and decision-making models, from the corresponding Fokker-Planck equations. In the brain, information is encoded by sequences of action potentials. In studies that focus on spike timing, IF neurons that drastically simplify the spike generation have become the standard model. One-dimensional IF neurons do not suffice to accurately model neural dynamics; the extension towards multiple dimensions yields realistic behavior at the price of growing complexity. The first part of this work develops a theory of spike-train power spectra for stochastic, multidimensional IF neurons. From the Fokker-Planck equation, a set of partial differential equations is derived that describes the stationary probability density, the firing rate, and the spike-train power spectrum. In the second part of this work, a mean-field theory of large and sparsely connected homogeneous networks of spiking neurons is developed that takes into account the self-consistent temporal correlations of spike trains. Neural input is approximated by colored Gaussian noise generated by a multidimensional Ornstein-Uhlenbeck process whose coefficients are initially unknown, are determined by the self-consistency condition, and define the solution of the theory. To explore heterogeneous networks, an iterative scheme is extended to determine the distribution of spectra. In the third part, the Fokker-Planck equation is applied to calculate the statistics of sequences of binary decisions from diffusion-decision models (DDM). For the analytically tractable DDM, the statistics are calculated from the corresponding Fokker-Planck equation. To determine the statistics for nonlinear models, the threshold-integration method is generalized.
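The simplest member of the model class treated in the thesis, a leaky integrate-and-fire neuron driven by white noise, can be simulated directly with the Euler-Maruyama scheme. The thesis instead derives the firing rate and spike-train power spectrum from the Fokker-Planck equation; a stochastic simulation like the one sketched below (with assumed parameter values) is what such theory is typically validated against.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, D = 1.1, 0.1                     # input drift and noise intensity (assumed)
v_th, v_reset = 1.0, 0.0             # threshold and reset voltage
dt, T = 1e-4, 10.0

v, spikes = 0.0, []
for k in range(int(T / dt)):         # Euler-Maruyama: dv = (mu - v) dt + sqrt(2 D dt) * xi
    v += dt * (mu - v) + np.sqrt(2 * D * dt) * rng.standard_normal()
    if v >= v_th:                    # threshold crossing emits a spike, then reset
        spikes.append(k * dt)
        v = v_reset

rate = len(spikes) / T               # the Fokker-Planck approach yields this (and the
                                     # power spectrum) without stochastic simulation
```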
10

Hendrickson, Eric B. "Morphologically simplified conductance based neuron models: principles of construction and use in parameter optimization." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33905.

Abstract:
The dynamics of biological neural networks are of great interest to neuroscientists and are frequently studied using conductance-based compartmental neuron models. For speed and ease of use, neuron models are often reduced in morphological complexity. This reduction may affect input processing and prevent the accurate reproduction of neural dynamics. However, such effects are not yet well understood. Therefore, for my first aim I analyzed the processing capabilities of 'branched' or 'unbranched' reduced models by collapsing the dendritic tree of a morphologically realistic 'full' globus pallidus neuron model while maintaining all other model parameters. Branched models maintained the original detailed branching structure of the full model while the unbranched models did not. I found that full model responses to somatic inputs were generally preserved by both types of reduced model but that branched reduced models were better able to maintain responses to dendritic inputs. However, inputs that caused dendritic sodium spikes, for instance, could not be accurately reproduced by any reduced model. Based on my analyses, I provide recommendations on how to construct reduced models and indicate suitable applications for different levels of reduction. In particular, I recommend that unbranched reduced models be used for fast searches of parameter space given somatic input output data. The intrinsic electrical properties of neurons depend on the modifiable behavior of their ion channels. Obtaining a quality match between recorded voltage traces and the output of a conductance based compartmental neuron model depends on accurate estimates of the kinetic parameters of the channels in the biological neuron. Indeed, mismatches in channel kinetics may be detectable as failures to match somatic neural recordings when tuning model conductance densities. In my first aim, I showed that this is a task for which unbranched reduced models are ideally suited. Therefore, for my second aim I optimized unbranched reduced model parameters to match three experimentally characterized globus pallidus neurons by performing two stages of automated searches. In the first stage, I set conductance densities free and found that even the best matches to experimental data exhibited unavoidable problems. I hypothesized that these mismatches were due to limitations in channel model kinetics. To test this hypothesis, I performed a second stage of searches with free channel kinetics and observed decreases in the mismatches from the first stage. Additionally, some kinetic parameters consistently shifted to new values in multiple cells, suggesting the possibility for tailored improvements to channel models. Given my results and the potential for cell specific modulation of channel kinetics, I recommend that experimental kinetic data be considered as a starting point rather than as a gold standard for the development of neuron models.
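The automated searches described here can be caricatured as optimization of simulation parameters against a target voltage trace. In the sketch below the "neuron model" is a trivial placeholder function and the search is plain random sampling; both are illustrative assumptions, not the dissertation's conductance-based models or its search algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(g_na, g_k, t):
    """Placeholder for a compartmental simulation returning a 'voltage trace'."""
    return g_na * np.sin(t) - g_k * np.cos(t)

t = np.linspace(0.0, 10.0, 200)
target = simulate(1.2, 0.8, t)                 # stand-in for an experimental recording

best_g, best_err = None, np.inf
for _ in range(1000):                          # stage 1: conductance densities free
    g = rng.uniform(0.0, 2.0, size=2)
    err = np.mean((simulate(g[0], g[1], t) - target) ** 2)
    if err < best_err:
        best_g, best_err = g, err
# A second stage could additionally free "kinetic" parameters of simulate(),
# mirroring the two-stage strategy described in the abstract.
```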

Books on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Hecht-Nielsen, Robert, and Thomas McKenna, eds. Computational Models for Neuroscience. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0085-0.

2

Schwartz, Eric L., ed. Computational neuroscience. Cambridge, Mass.: MIT Press, 1990.

3

Computational neuroscience. Cambridge, Mass: MIT Press, 1990.

4

Pardalos, P. M. Computational neuroscience. New York: Springer, 2010.

5

Ascoli, Giorgio A., ed. Computational neuroanatomy: Principles and methods. Totowa, N.J.: Humana Press, 2002.

6

Chakravarthy, V. Srinivasa, and Ahmed A. Moustafa. Computational Neuroscience Models of the Basal Ganglia. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8494-2.

7

De Schutter, Erik, ed. Computational modeling methods for neuroscientists. Cambridge, Mass.: MIT Press, 2010.

8

Fundamentals of computational neuroscience. 2nd ed. Oxford: Oxford University Press, 2010.

9

Trappenberg, Thomas P. Fundamentals of computational neuroscience. 2nd ed. Oxford: Oxford University Press, 2010.


Book chapters on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

van Gils, Stephan, and Wim van Drongelen. "Epilepsy: Computational Models." In Encyclopedia of Computational Neuroscience, 1121–34. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_504.

2

van Gils, Stephan, and Wim van Drongelen. "Epilepsy: Computational Models." In Encyclopedia of Computational Neuroscience, 1–17. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7320-6_504-1.

3

Skinner, Frances K., Nancy Kopell, and Brian Mulloney. "Mathematical Models of the Crayfish Swimmeret System." In Computational Neuroscience, 839–43. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-9800-5_130.

4

Lee, D. D., B. Y. Reis, H. S. Seung, and D. W. Tank. "Nonlinear Network Models of the Oculomotor Integrator." In Computational Neuroscience, 371–77. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-9800-5_60.

5

Lansner, Anders, Örjan Ekeberg, Erik Fransén, Per Hammarlund, and Tomas Wilhelmsson. "Detailed Simulation of Large Scale Neural Network Models." In Computational Neuroscience, 931–35. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-9800-5_144.

6

Bednar, James A., and Risto Miikkulainen. "Pattern-Generator-Driven Development in Self-Organizing Models." In Computational Neuroscience, 317–23. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-4831-7_53.

7

Cheong, Jin Hyun, Eshin Jolly, Sunhae Sul, and Luke J. Chang. "Computational Models in Social Neuroscience." In Computational Models of Brain and Behavior, 229–44. Chichester, UK: John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781119159193.ch17.

8

Yu, Angela J. "Computational Models of Neuromodulation." In Encyclopedia of Computational Neuroscience, 761–66. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_625.

9

Yu, Angela J. "Computational Models of Neuromodulation." In Encyclopedia of Computational Neuroscience, 1–6. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7320-6_625-1.

10

Jardri, Renaud, and Sophie Denève. "Computational Models of Hallucinations." In The Neuroscience of Hallucinations, 289–313. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4121-2_16.


Conference papers on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Tirupattur, Naveen, Christopher C. Lapish, Snehasis Mukhopadhyay, Tuan D. Pham, Xiaobo Zhou, Hiroshi Tanaka, Mayumi Oyama-Higa, et al. "Text Mining for Neuroscience." In 2011 INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL MODELS FOR LIFE SCIENCES (CMLS-11). AIP, 2011. http://dx.doi.org/10.1063/1.3596634.

2

Mohnert, Florian, Mateo Tošić, and Falk Lieder. "Testing Computational Models of Goal Pursuit." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1350-0.

3

Uchiyama, Ryutaro, Claudio Tennie, and Charley Wu. "Model-Based Assimilation Transmits and Recombines World Models." In 2023 Conference on Cognitive Computational Neuroscience. Oxford, United Kingdom: Cognitive Computational Neuroscience, 2023. http://dx.doi.org/10.32470/ccn.2023.1722-0.

4

Muzellec, Sabine, Mathieu Chalvidal, Thomas Serre, and Rufin VanRullen. "Accurate implementation of computational neuroscience models through neural ODEs." In 2022 Conference on Cognitive Computational Neuroscience. San Francisco, California, USA: Cognitive Computational Neuroscience, 2022. http://dx.doi.org/10.32470/ccn.2022.1165-0.

5

Belledonne, Mario, Chloë Geller, and Ilker Yildirim. "Goal-conditioned world models: Adaptive computation over multi-granular generative models explains human scene perception." In 2023 Conference on Cognitive Computational Neuroscience. Oxford, United Kingdom: Cognitive Computational Neuroscience, 2023. http://dx.doi.org/10.32470/ccn.2023.1663-0.

6

Speekenbrink, Maarten. "Identifiability of Gaussian Bayesian bandit models." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1335-0.

7

Baltieri, Manuel, and Christopher L. Buckley. "Active Inference: Computational Models of Motor Control without Efference Copy." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1144-0.

8

Yao, Yuanwei, Tatia Buidze, and Jan Gläscher. "Effectiveness of different computational models during the Tacit Communication Game." In 2023 Conference on Cognitive Computational Neuroscience. Oxford, United Kingdom: Cognitive Computational Neuroscience, 2023. http://dx.doi.org/10.32470/ccn.2023.1268-0.

9

Weidinger, Laura, Andrea Gradassi, Lucas Molleman, and Wouter van den Bos. "Test-retest reliability of canonical reinforcement learning models." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1053-0.

10

Ceja, Vanessa, Yussuf Ezzeldine, and Megan A. K. Peters. "Models of confidence to facilitate engaging task designs." In 2022 Conference on Cognitive Computational Neuroscience. San Francisco, California, USA: Cognitive Computational Neuroscience, 2022. http://dx.doi.org/10.32470/ccn.2022.1150-0.


Reports of organizations on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

1

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [N.p.], November 2018. http://dx.doi.org/10.31812/123456789/2648.

Abstract:
The article substantiates the need to develop methods for teaching computer simulation of neural networks in the spreadsheet environment, and gives a systematic review of spreadsheet applications to simulating artificial neural networks. The authors distinguish basic approaches to teaching network simulation in spreadsheets: joint use of spreadsheets and dedicated neural network simulation tools; third-party spreadsheet add-ins; macros written in the spreadsheets' embedded languages; standard spreadsheet add-ins for non-linear optimization; and construction of neural networks in spreadsheets without add-ins or macros. After analyzing a collection of writings from 1890-1950, the research determines the role of the scientific journal "Bulletin of Mathematical Biophysics", its founder Nicolas Rashevsky, and the scientific community around the journal in creating and developing the models and methods of computational neuroscience. The article identifies the psychophysical basics of creating neural networks, the mathematical foundations of neural computing, and methods of neuroengineering (image recognition in particular), and discusses the role of Walter Pitts in combining the descriptive and quantitative theories of training. It is shown that to acquire neural simulation competences in the spreadsheet environment, one should master the models following a historical-genetic approach. Three groups of models are identified as promising for developing the corresponding methods: the continuous two-factor model of Rashevsky, the discrete model of McCulloch and Pitts, and the discrete-continuous models of Householder and Landahl.
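The discrete McCulloch-Pitts model named in the abstract fits in a few lines; in the article such units are built in spreadsheets, but the same model is shown here in Python for compactness. The weights and threshold implementing an AND gate are an illustrative assumption.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs reaches the threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# With weights (1, 1) and threshold 2 the unit computes logical AND:
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, "->", mcculloch_pitts(pattern, weights=(1, 1), threshold=2))
```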