
Journal articles on the topic "COMPUTATIONAL NEUROSCIENCE MODELS"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles


Browse the top 50 journal articles on the topic "COMPUTATIONAL NEUROSCIENCE MODELS".

An "Add to bibliography" button appears next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the work's metadata.

Browse journal articles from many disciplines and compile your bibliography accordingly.

1

Krasovskaya, Sofia, and W. Joseph MacInnes. "Salience Models: A Computational Cognitive Neuroscience Review". Vision 3, no. 4 (25.10.2019): 56. http://dx.doi.org/10.3390/vision3040056.

Abstract:
The seminal model by Laurent Itti and Christof Koch demonstrated that we can compute the entire flow of visual processing from input to resulting fixations. Despite many replications and follow-ups, few have matched the impact of the original model—so what made this model so groundbreaking? We have selected five key contributions that distinguish the original salience model by Itti and Koch; namely, its contribution to our theoretical, neural, and computational understanding of visual processing, as well as the spatial and temporal predictions for fixation distributions. During the last 20 years, advances in the field have brought up various techniques and approaches to salience modelling, many of which tried to improve or add to the initial Itti and Koch model. One of the most recent trends has been to adopt the computational power of deep learning neural networks; however, this has also shifted their primary focus to spatial classification. We present a review of recent approaches to modelling salience, starting from direct variations of the Itti and Koch salience model to sophisticated deep-learning architectures, and discuss the models from the point of view of their contribution to computational cognitive neuroscience.
2

Bisht, Raj Kishor. "Design and Development of Mathematical Models for Computational Neuroscience". Mathematical Statistician and Engineering Applications 70, no. 1 (31.01.2021): 612–20. http://dx.doi.org/10.17762/msea.v70i1.2515.

Abstract:
Through the use of mathematical and computer models, computational neuroscience is an interdisciplinary field that seeks to understand the principles and mechanisms underlying brain function. These models are essential for understanding intricate neurological processes, giving information about the dynamics of the brain, and directing experimental research. This study presents the design and development of mathematical models for computational neuroscience in depth. The sections that follow explore several mathematical modelling strategies used in computational neuroscience. These include statistical models that analyse and infer associations from experimental data, biophysical models that explain the electrical properties of individual neurons and their connections, and network models that capture the connectivity and dynamics of brain circuits. Each modelling strategy is examined in terms of its mathematical foundation, underlying assumptions, and potential limits, as well as instances of how it has been applied to particular areas of neuroscience research. The study also examines the parameterization, validation, and refinement stages of the model-creation process. It emphasises how the integration of experimental evidence, theoretical understanding, and computational simulations leads to iterative model refinement. The difficulties and open questions associated with modelling complex neurological systems are also covered, emphasising the necessity of multi-scale and multi-modal methods to fully capture the complex dynamics of the brain. The paper ends with a prognosis for the future of mathematical modelling in computational neuroscience. Rising trends highlighted in the article include the development of virtual brain simulations for understanding brain illnesses and planning therapeutic approaches, the incorporation of machine learning techniques and of anatomical and physiological constraints, and the assimilation of these trends into models.
3

Martin, Andrea E. "A Compositional Neural Architecture for Language". Journal of Cognitive Neuroscience 32, no. 8 (August 2020): 1407–27. http://dx.doi.org/10.1162/jocn_a_01552.

Abstract:
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
4

Chirimuuta, M. "Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience". Synthese 191, no. 2 (27.11.2013): 127–53. http://dx.doi.org/10.1007/s11229-013-0369-y.

5

Fellous, Jean-Marc, and Christiane Linster. "Computational Models of Neuromodulation". Neural Computation 10, no. 4 (1.05.1998): 771–805. http://dx.doi.org/10.1162/089976698300017476.

Abstract:
Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.
6

Migliore, Michele, Thomas M. Morse, Andrew P. Davison, Luis Marenco, Gordon M. Shepherd and Michael L. Hines. "ModelDB: Making Models Publicly Accessible to Support Computational Neuroscience". Neuroinformatics 1, no. 1 (2003): 135–40. http://dx.doi.org/10.1385/ni:1:1:135.

7

Jiang, Weihang. "Applications of machine learning in neuroscience and inspiration of reinforcement learning for computational neuroscience". Applied and Computational Engineering 4, no. 1 (14.06.2023): 473–78. http://dx.doi.org/10.54254/2755-2721/4/2023308.

Abstract:
High-performance machine learning algorithms have long been a concern of many researchers. Since its birth, machine learning has been a product of multidisciplinary integration. Especially in the field of neuroscience, models from related fields continue to inspire the development of neural networks and deepen our understanding of them. The mathematical and quantitative modeling approach brought about by machine learning is also feeding into the development of neuroscience; one of its emerging products is computational neuroscience. Computational neuroscience has been pushing the boundaries of models of brain function in recent years, and just as early studies of the visual hierarchy influenced neural networks, computational neuroscience has great potential to lead to higher-performance machine learning algorithms, particularly deep learning algorithms with strong links to neuroscience. This paper first reviews the contributions of machine learning to neuroscience in recent years, especially in fMRI image recognition, and then considers possible future directions for neural networks suggested by recent developments in the computational neuroscience of psychiatry, in particular the temporal-difference model of dopamine and serotonin.
8

Gardner, Justin L., and Elisha P. Merriam. "Population Models, Not Analyses, of Human Neuroscience Measurements". Annual Review of Vision Science 7, no. 1 (15.09.2021): 225–55. http://dx.doi.org/10.1146/annurev-vision-093019-111124.

Abstract:
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience measurement. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representation encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques. This potentially opens up the possibility to study a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity can be confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. Building upon the modeling tradition in vision science using considerations of whether population models meet a set of core criteria is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements.
9

Grindrod, Peter, and Desmond J. Higham. "Evolving graphs: dynamical models, inverse problems and propagation". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 466, no. 2115 (11.11.2009): 753–70. http://dx.doi.org/10.1098/rspa.2009.0456.

Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
10

Gisiger, T. "Computational models of association cortex". Current Opinion in Neurobiology 10, no. 2 (1.04.2000): 250–59. http://dx.doi.org/10.1016/s0959-4388(00)00075-1.

11

O’Reilly, Randall C., Seth A. Herd and Wolfgang M. Pauli. "Computational models of cognitive control". Current Opinion in Neurobiology 20, no. 2 (April 2010): 257–61. http://dx.doi.org/10.1016/j.conb.2010.01.008.

12

Geffner, Hector. "Computational models of planning". Wiley Interdisciplinary Reviews: Cognitive Science 4, no. 4 (18.03.2013): 341–56. http://dx.doi.org/10.1002/wcs.1233.

13

Charpentier, Caroline J., and John P. O’Doherty. "The application of computational models to social neuroscience: promises and pitfalls". Social Neuroscience 13, no. 6 (12.09.2018): 637–47. http://dx.doi.org/10.1080/17470919.2018.1518834.

14

Poirazi, Panayiota, and Athanasia Papoutsi. "Illuminating dendritic function with computational models". Nature Reviews Neuroscience 21, no. 6 (11.05.2020): 303–21. http://dx.doi.org/10.1038/s41583-020-0301-7.

15

Bazhenov, Maxim, Igor Timofeev, Mircea Steriade and Terrence J. Sejnowski. "Computational Models of Thalamocortical Augmenting Responses". Journal of Neuroscience 18, no. 16 (15.08.1998): 6444–65. http://dx.doi.org/10.1523/jneurosci.18-16-06444.1998.

16

Gonzalez, Bryan, and Luke J. Chang. "Arbitrating Computational Models of Observational Learning". Neuron 106, no. 4 (May 2020): 558–60. http://dx.doi.org/10.1016/j.neuron.2020.04.028.

17

Herd, Seth A., Kai A. Krueger, Trenton E. Kriete, Tsung-Ren Huang, Thomas E. Hazy and Randall C. O'Reilly. "Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach". Computational Intelligence and Neuroscience 2013 (2013): 1–18. http://dx.doi.org/10.1155/2013/149329.

Abstract:
We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.
18

Vladusich, Tony. "Towards a computational neuroscience of autism-psychosis spectrum disorders". Behavioral and Brain Sciences 31, no. 3 (June 2008): 282–83. http://dx.doi.org/10.1017/s0140525x08004433.

Abstract:
Crespi & Badcock (C&B) hypothesize that psychosis and autism represent opposite poles of human social cognition. I briefly outline how computational models of cognitive brain function may be used as a resource to further develop and experimentally test hypotheses concerning “autism-psychosis spectrum disorders.”
19

O'Reilly, Randall C. "Generalization in Interactive Networks: The Benefits of Inhibitory Competition and Hebbian Learning". Neural Computation 13, no. 6 (1.06.2001): 1199–241. http://dx.doi.org/10.1162/08997660152002834.

Abstract:
Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data but is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bidirectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly due to attractor dynamics that interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs, and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. Simulations using the Leabra algorithm, which combines the generalized recirculation (GeneRec), biologically plausible, error-driven learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can result in good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.
20

Peyser, Alexander, Sandra Diaz Pier, Wouter Klijn, Abigail Morrison and Jochen Triesch. "Editorial: Linking experimental and computational connectomics". Network Neuroscience 3, no. 4 (January 2019): 902–4. http://dx.doi.org/10.1162/netn_e_00108.

Abstract:
Large-scale in silico experimentation depends on the generation of connectomes beyond available anatomical structure. We suggest that linking research across the fields of experimental connectomics, theoretical neuroscience, and high-performance computing can enable a new generation of models bridging the gap between biophysical detail and global function. This Focus Feature on “Linking Experimental and Computational Connectomics” aims to bring together some examples from these domains as a step toward the development of more comprehensive generative models of multiscale connectomes.
21

Dodig-Crnkovic, G. "Natural morphological computation as foundation of learning to learn in humans, other living organisms, and intelligent machines". Philosophical Problems of Information Technologies and Cyberspace, no. 1 (14.07.2021): 4–34. http://dx.doi.org/10.17726/philit.2021.1.1.

Abstract:
The emerging contemporary natural philosophy provides a common ground for the integrative view of the natural, the artificial, and the human-social knowledge and practices. Learning process is central for acquiring, maintaining, and managing knowledge, both theoretical and practical. This paper explores the relationships between the present advances in understanding of learning in the sciences of the artificial (deep learning, robotics), natural sciences (neuroscience, cognitive science, biology), and philosophy (philosophy of computing, philosophy of mind, natural philosophy). The question is, what at this stage of the development the inspiration from nature, specifically its computational models such as info-computation through morphological computing, can contribute to machine learning and artificial intelligence, and how much on the other hand models and experiments in machine learning and robotics can motivate, justify, and inform research in computational cognitive science, neurosciences, and computing nature. We propose that one contribution can be understanding of the mechanisms of ‘learning to learn’, as a step towards deep learning with symbolic layer of computation/information processing in a framework linking connectionism with symbolism. As all natural systems possessing intelligence are cognitive systems, we describe the evolutionary arguments for the necessity of learning to learn for a system to reach human-level intelligence through evolution and development. The paper thus presents a contribution to the epistemology of the contemporary philosophy of nature.
22

Dodig-Crnkovic, Gordana. "Natural Morphological Computation as Foundation of Learning to Learn in Humans, Other Living Organisms, and Intelligent Machines". Philosophies 5, no. 3 (1.09.2020): 17. http://dx.doi.org/10.3390/philosophies5030017.

Abstract:
The emerging contemporary natural philosophy provides a common ground for the integrative view of the natural, the artificial, and the human-social knowledge and practices. Learning process is central for acquiring, maintaining, and managing knowledge, both theoretical and practical. This paper explores the relationships between the present advances in understanding of learning in the sciences of the artificial (deep learning, robotics), natural sciences (neuroscience, cognitive science, biology), and philosophy (philosophy of computing, philosophy of mind, natural philosophy). The question is, what at this stage of the development the inspiration from nature, specifically its computational models such as info-computation through morphological computing, can contribute to machine learning and artificial intelligence, and how much on the other hand models and experiments in machine learning and robotics can motivate, justify, and inform research in computational cognitive science, neurosciences, and computing nature. We propose that one contribution can be understanding of the mechanisms of ‘learning to learn’, as a step towards deep learning with symbolic layer of computation/information processing in a framework linking connectionism with symbolism. As all natural systems possessing intelligence are cognitive systems, we describe the evolutionary arguments for the necessity of learning to learn for a system to reach human-level intelligence through evolution and development. The paper thus presents a contribution to the epistemology of the contemporary philosophy of nature.
23

Kawato, Mitsuo, and Aurelio Cortese. "From internal models toward metacognitive AI". Biological Cybernetics 115, no. 5 (October 2021): 415–30. http://dx.doi.org/10.1007/s00422-021-00904-7.

Abstract:
In several papers published in Biological Cybernetics in the 1980s and 1990s, Kawato and colleagues proposed computational models explaining how internal models are acquired in the cerebellum. These models were later supported by neurophysiological experiments using monkeys and neuroimaging experiments involving humans. These early studies influenced neuroscience from basic, sensory-motor control to higher cognitive functions. One of the most perplexing enigmas related to internal models is to understand the neural mechanisms that enable animals to learn large-dimensional problems with so few trials. Consciousness and metacognition—the ability to monitor one’s own thoughts—may be part of the solution to this enigma. Based on literature reviews of the past 20 years, here we propose a computational neuroscience model of metacognition. The model comprises a modular hierarchical reinforcement-learning architecture of parallel and layered, generative-inverse model pairs. In the prefrontal cortex, a distributed executive network called the “cognitive reality monitoring network” (CRMN) orchestrates conscious involvement of generative-inverse model pairs in perception and action. Based on mismatches between computations by generative and inverse models, as well as reward prediction errors, CRMN computes a “responsibility signal” that gates selection and learning of pairs in perception, action, and reinforcement learning. A high responsibility signal is given to the pairs that best capture the external world, that are competent in movements (small mismatch), and that are capable of reinforcement learning (small reward-prediction error). CRMN selects pairs with higher responsibility signals as objects of metacognition, and consciousness is determined by the entropy of responsibility signals across all pairs. This model could lead to new-generation AI, which exhibits metacognition, consciousness, dimension reduction, selection of modules and corresponding representations, and learning from small samples. It may also lead to the development of a new scientific paradigm that enables the causal study of consciousness by combining CRMN and decoded neurofeedback.
24

Levenstein, Daniel, Veronica A. Alvarez, Asohan Amarasingham, Habiba Azab, Zhe S. Chen, Richard C. Gerkin, Andrea Hasenstaub et al. "On the Role of Theory and Modeling in Neuroscience". Journal of Neuroscience 43, no. 7 (15.02.2023): 1074–88. http://dx.doi.org/10.1523/jneurosci.1179-22.2022.

Abstract:
In recent years, the field of neuroscience has gone through rapid experimental advances and a significant increase in the use of quantitative and computational methods. This growth has created a need for clearer analyses of the theory and modeling approaches used in the field. This issue is particularly complex in neuroscience because the field studies phenomena that cross a wide range of scales and often require consideration at varying degrees of abstraction, from precise biophysical interactions to the computations they implement. We argue that a pragmatic perspective of science, in which descriptive, mechanistic, and normative models and theories each play a distinct role in defining and bridging levels of abstraction, will facilitate neuroscientific practice. This analysis leads to methodological suggestions, including selecting a level of abstraction that is appropriate for a given problem, identifying transfer functions to connect models and data, and the use of models themselves as a form of experiment.
25

Yang, Charles. "Computational models of syntactic acquisition". Wiley Interdisciplinary Reviews: Cognitive Science 3, no. 2 (5.12.2011): 205–13. http://dx.doi.org/10.1002/wcs.1154.

26

Petzschner, Frederike H., Sarah N. Garfinkel, Martin P. Paulus, Christof Koch and Sahib S. Khalsa. "Computational Models of Interoception and Body Regulation". Trends in Neurosciences 44, no. 1 (January 2021): 63–76. http://dx.doi.org/10.1016/j.tins.2020.09.012.

27

Voytek, B. "Emergent Basal Ganglia Pathology within Computational Models". Journal of Neuroscience 26, no. 28 (12.07.2006): 7317–18. http://dx.doi.org/10.1523/jneurosci.2255-06.2006.

28

Gluck, Mark A. "Computational models of hippocampal function in memory". Hippocampus 6, no. 6 (1996): 565–66. http://dx.doi.org/10.1002/(sici)1098-1063(1996)6:6<565::aid-hipo1>3.0.co;2-g.

29

Raffone, Antonino. "Synthetic computational models of selective attention". Neural Networks 19, no. 9 (November 2006): 1458–60. http://dx.doi.org/10.1016/j.neunet.2006.09.002.

30

Stabler, Edward P. "Computational models of language processing". Behavioral and Brain Sciences 9, no. 3 (September 1986): 550–51. http://dx.doi.org/10.1017/s0140525x0004704x.

31

Gerstner, Wulfram, Henning Sprekeler and Gustavo Deco. "Theory and Simulation in Neuroscience". Science 338, no. 6103 (4.10.2012): 60–65. http://dx.doi.org/10.1126/science.1227356.

Abstract:
Modeling work in neuroscience can be classified using two different criteria. The first one is the complexity of the model, ranging from simplified conceptual models that are amenable to mathematical analysis to detailed models that require simulations in order to understand their properties. The second criterion is that of direction of workflow, which can be from microscopic to macroscopic scales (bottom-up) or from behavioral target functions to properties of components (top-down). We review the interaction of theory and simulation using examples of top-down and bottom-up studies and point to some current developments in the fields of computational and theoretical neuroscience.
32

Borisyuk, Roman. "Encyclopedia of computational neuroscience: The end of the second millennium". Behavioral and Brain Sciences 23, no. 4 (August 2000): 534–35. http://dx.doi.org/10.1017/s0140525x00243367.

Abstract:
Arbib et al. describe mathematical and computational models in neuroscience, as well as the neuroanatomy and neurophysiology of several important brain structures. This is a useful guide to the mathematical and computational modelling of the structure and function of the nervous system. The book highlights the need to develop a theory of brain functioning, and it offers some useful approaches and concepts.
33

Ranken, D. M., and J. S. George. "MRIVIEW: Computational Models for Functional Brain Imaging". NeuroImage 7, no. 4 (May 1998): S801. http://dx.doi.org/10.1016/s1053-8119(18)31634-3.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
34

Katritzky, Alan R., Dimitar A. Dobchev, Iva B. Stoyanova-Slavova, Minati Kuanar, Maxim M. Bespalov, Mati Karelson and Mart Saarma. "Novel computational models for predicting dopamine interactions". Experimental Neurology 211, no. 1 (May 2008): 150–71. http://dx.doi.org/10.1016/j.expneurol.2008.01.018.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
35

Quartz, Steven R. "FROM COGNITIVE SCIENCE TO COGNITIVE NEUROSCIENCE TO NEUROECONOMICS". Economics and Philosophy 24, no. 3 (November 2008): 459–71. http://dx.doi.org/10.1017/s0266267108002083.

Full text source
Abstract:
As an emerging discipline, neuroeconomics faces considerable methodological and practical challenges. In this paper, I suggest that these challenges can be understood by exploring the similarities and dissimilarities between the emergence of neuroeconomics and the emergence of cognitive and computational neuroscience two decades ago. From these parallels, I suggest the major challenge facing theory formation in the neural and behavioural sciences is that of being under-constrained by data, making a detailed understanding of physical implementation necessary for theory construction in neuroeconomics. Rather than following a top-down strategy, neuroeconomists should be pragmatic in the use of available data from animal models, information regarding neural pathways and projections, computational models of neural function, functional imaging and behavioural data. By providing convergent evidence across multiple levels of organization, neuroeconomics will have its most promising prospects of success.
APA, Harvard, Vancouver, ISO, etc. styles
36

Goddard, Nigel H., Michael Hucka, Fred Howell, Hugo Cornelis, Kavita Shankar and David Beeman. "Towards NeuroML: Model Description Methods for Collaborative Modelling in Neuroscience". Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 356, no. 1412 (29.08.2001): 1209–28. http://dx.doi.org/10.1098/rstb.2001.0910.

Full text source
Abstract:
Biological nervous systems and the mechanisms underlying their operation exhibit astonishing complexity. Computational models of these systems have been correspondingly complex. As these models become ever more sophisticated, they become increasingly difficult to define, comprehend, manage and communicate. Consequently, for scientific understanding of biological nervous systems to progress, it is crucial for modellers to have software tools that support discussion, development and exchange of computational models. We describe methodologies that focus on these tasks, improving the ability of neuroscientists to engage in the modelling process. We report our findings on the requirements for these tools and discuss the use of declarative forms of model description—equivalent to object-oriented classes and database schema—which we call templates. We introduce NeuroML, a mark-up language for the neurosciences which is defined syntactically using templates, and its specific component intended as a common format for communication between modelling-related tools. Finally, we propose a template hierarchy for this modelling component of NeuroML, sufficient for describing models ranging in structural levels from neuron cell membranes to neural networks. These templates support both a framework for user-level interaction with models, and a high-performance framework for efficient simulation of the models.
APA, Harvard, Vancouver, ISO, etc. styles
37

Van Pottelbergh, Tomas, Guillaume Drion and Rodolphe Sepulchre. "Robust Modulation of Integrate-and-Fire Models". Neural Computation 30, no. 4 (April 2018): 987–1011. http://dx.doi.org/10.1162/neco_a_01065.

Full text source
Abstract:
By controlling the state of neuronal populations, neuromodulators ultimately affect behavior. A key neuromodulation mechanism is the alteration of neuronal excitability via the modulation of ion channel expression. This type of neuromodulation is normally studied with conductance-based models, but those models are computationally challenging for large-scale network simulations needed in population studies. This article studies the modulation properties of the multiquadratic integrate-and-fire model, a generalization of the classical quadratic integrate-and-fire model. The model is shown to combine the computational economy of integrate-and-fire modeling and the physiological interpretability of conductance-based modeling. It is therefore a good candidate for affordable computational studies of neuromodulation in large networks.
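The quadratic integrate-and-fire dynamics at the base of this model family can be sketched in a few lines. The following is a minimal forward-Euler simulation of the classical QIF model in dimensionless units; the parameter values are illustrative, and the paper's multiquadratic extension adds further variables not shown here.

```python
def simulate_qif(I=1.0, v_reset=-10.0, v_peak=10.0, dt=1e-4, t_max=10.0):
    """Forward-Euler simulation of the classical quadratic
    integrate-and-fire model, dv/dt = v**2 + I (dimensionless units).
    A spike is registered when v crosses v_peak, then v is reset."""
    v = v_reset
    spike_times = []
    steps = int(t_max / dt)
    for step in range(steps):
        v += dt * (v * v + I)        # quadratic membrane dynamics
        if v >= v_peak:
            spike_times.append(step * dt)
            v = v_reset              # hard reset after the spike
    return spike_times

spikes = simulate_qif()
# With constant I > 0 the QIF model fires tonically at regular intervals.
```

This computational economy (one state variable, one multiply-add per step) is what makes integrate-and-fire variants attractive for the large-scale network simulations the abstract mentions.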
APA, Harvard, Vancouver, ISO, etc. styles
38

Sejnowski, Terrence J. "Computational models and the development of topographic projections". Trends in Neurosciences 10, no. 8 (August 1987): 304–5. http://dx.doi.org/10.1016/0166-2236(87)90081-6.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
39

Litwin-Kumar, Ashok, and Srinivas C. Turaga. "Constraining computational models using electron microscopy wiring diagrams". Current Opinion in Neurobiology 58 (October 2019): 94–100. http://dx.doi.org/10.1016/j.conb.2019.07.007.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
40

Choi, C. T. M., W. D. Lai and Y. B. Chen. "Optimization of Cochlear Implant Electrode Array Using Genetic Algorithms and Computational Neuroscience Models". IEEE Transactions on Magnetics 40, no. 2 (March 2004): 639–42. http://dx.doi.org/10.1109/tmag.2004.824912.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
41

Carrillo, José Antonio, Stéphane Cordier and Simona Mancini. "One-dimensional Fokker-Planck reduced dynamics of decision making models in computational neuroscience". Communications in Mathematical Sciences 11, no. 2 (2013): 523–40. http://dx.doi.org/10.4310/cms.2013.v11.n2.a10.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
42

Goldstone, Robert L., and Marco A. Janssen. "Computational models of collective behavior". Trends in Cognitive Sciences 9, no. 9 (September 2005): 424–30. http://dx.doi.org/10.1016/j.tics.2005.07.009.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
43

Yousefi, Bardia, and Chu Kiong Loo. "Biologically-Inspired Computational Neural Mechanism for Human Action/activity Recognition: A Review". Electronics 8, no. 10 (15.10.2019): 1169. http://dx.doi.org/10.3390/electronics8101169.

Full text source
Abstract:
Theoretical neuroscience investigations provide valuable information on the mechanisms for recognizing biological movements in the mammalian visual system. This topic spans many different fields of research, such as psychology, neurophysiology, neuropsychology, computer vision, and artificial intelligence (AI). Research in these areas has produced a wealth of information and plausible computational models. Here, a review of this subject is presented. This paper describes different perspectives on the task, including action perception, computational and knowledge-based modeling, and psychological and neuroscience approaches.
APA, Harvard, Vancouver, ISO, etc. styles
44

D’Mello, Sidney K., Louis Tay and Rosy Southwell. "Psychological Measurement in the Information Age: Machine-Learned Computational Models". Current Directions in Psychological Science 31, no. 1 (February 2022): 76–87. http://dx.doi.org/10.1177/09637214211056906.

Full text source
Abstract:
Psychological science can benefit from and contribute to emerging approaches from the computing and information sciences driven by the availability of real-world data and advances in sensing and computing. We focus on one such approach, machine-learned computational models (MLCMs)—computer programs learned from data, typically with human supervision. We introduce MLCMs and discuss how they contrast with traditional computational models and assessment in the psychological sciences. Examples of MLCMs from cognitive and affective science, neuroscience, education, organizational psychology, and personality and social psychology are provided. We consider the accuracy and generalizability of MLCM-based measures, cautioning researchers to consider the underlying context and intended use when interpreting their performance. We conclude that in addition to known data privacy and security concerns, the use of MLCMs entails a reconceptualization of fairness, bias, interpretability, and responsible use.
APA, Harvard, Vancouver, ISO, etc. styles
45

Brette, Romain. "Exact Simulation of Integrate-and-Fire Models with Synaptic Conductances". Neural Computation 18, no. 8 (August 2006): 2004–27. http://dx.doi.org/10.1162/neco.2006.18.8.2004.

Full text source
Abstract:
Computational neuroscience relies heavily on the simulation of large networks of neuron models. There are essentially two simulation strategies: (1) using an approximation method (e.g., Runge-Kutta) with spike times binned to the time step and (2) calculating spike times exactly in an event-driven fashion. In large networks, the computation time of the best algorithm for either strategy scales linearly with the number of synapses, but each strategy has its own assets and constraints: approximation methods can be applied to any model but are inexact; exact simulation avoids numerical artifacts but is limited to simple models. Previous work has focused on improving the accuracy of approximation methods. In this article, we extend the range of models that can be simulated exactly to a more realistic model: an integrate-and-fire model with exponential synaptic conductances.
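The contrast between the two strategies can be illustrated with the simplest case, a leaky integrate-and-fire neuron under constant input, where the first spike time has a closed form. This is only a sketch with illustrative parameters; the exponential-conductance case handled in the paper requires more machinery than shown here.

```python
import math

def exact_spike_time(v0, I, tau, v_th):
    """Closed-form first spike time of a LIF neuron dv/dt = (I - v)/tau
    under constant input I > v_th, starting from v0 (event-driven style)."""
    return tau * math.log((I - v0) / (I - v_th))

def binned_spike_time(v0, I, tau, v_th, dt):
    """Time-stepped (Euler) approximation: the spike time is binned to dt."""
    v, t = v0, 0.0
    while v < v_th:
        v += dt * (I - v) / tau
        t += dt
    return t

t_exact = exact_spike_time(0.0, 2.0, 10.0, 1.0)           # = 10 * ln(2)
t_binned = binned_spike_time(0.0, 2.0, 10.0, 1.0, dt=0.1)
# The binned estimate carries a discretization error on the order of dt,
# which is the numerical artifact that exact event-driven simulation avoids.
```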
APA, Harvard, Vancouver, ISO, etc. styles
46

Mujica-Parodi, Lilianne R., and Helmut H. Strey. "Making Sense of Computational Psychiatry". International Journal of Neuropsychopharmacology 23, no. 5 (27.03.2020): 339–47. http://dx.doi.org/10.1093/ijnp/pyaa013.

Full text source
Abstract:
In psychiatry we often speak of constructing “models.” Here we try to make sense of what such a claim might mean, starting with the most fundamental question: “What is (and isn’t) a model?” We then discuss, in a concrete measurable sense, what it means for a model to be useful. In so doing, we first identify the added value that a computational model can provide in the context of accuracy and power. We then present limitations of standard statistical methods and provide suggestions for how we can expand the explanatory power of our analyses by reconceptualizing statistical models as dynamical systems. Finally, we address the problem of model building—suggesting ways in which computational psychiatry can escape the potential for cognitive biases imposed by classical hypothesis-driven research, exploiting deep systems-level information contained within neuroimaging data to advance our understanding of psychiatric neuroscience.
APA, Harvard, Vancouver, ISO, etc. styles
47

O’Reilly, Jamie A., Jordan Wehrman and Paul F. Sowman. "A Guided Tutorial on Modelling Human Event-Related Potentials with Recurrent Neural Networks". Sensors 22, no. 23 (28.11.2022): 9243. http://dx.doi.org/10.3390/s22239243.

Full text source
Abstract:
In cognitive neuroscience research, computational models of event-related potentials (ERP) can provide a means of developing explanatory hypotheses for the observed waveforms. However, researchers trained in cognitive neurosciences may face technical challenges in implementing these models. This paper provides a tutorial on developing recurrent neural network (RNN) models of ERP waveforms in order to facilitate broader use of computational models in ERP research. To exemplify the RNN model usage, the P3 component evoked by target and non-target visual events, measured at channel Pz, is examined. Input representations of experimental events and corresponding ERP labels are used to optimize the RNN in a supervised learning paradigm. Linking one input representation with multiple ERP waveform labels, then optimizing the RNN to minimize mean-squared-error loss, causes the RNN output to approximate the grand-average ERP waveform. Behavior of the RNN can then be evaluated as a model of the computational principles underlying ERP generation. Aside from fitting such a model, the current tutorial will also demonstrate how to classify hidden units of the RNN by their temporal responses and characterize them using principal component analysis. Statistical hypothesis testing can also be applied to these data. This paper focuses on presenting the modelling approach and subsequent analysis of model outputs in a how-to format, using publicly available data and shared code. While relatively less emphasis is placed on specific interpretations of P3 response generation, the results initiate some interesting discussion points.
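The tutorial's central observation, that pairing one input with many single-trial ERP labels under mean-squared-error loss drives the network output toward the grand average, follows from the fact that the MSE-minimizing output for a set of targets is their mean. A small NumPy check with synthetic waveforms (the P3-like bump and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.8, 200)                  # one epoch, hypothetical 800 ms
p3 = np.exp(-((t - 0.3) ** 2) / 0.005)          # idealized P3-like waveform
trials = p3 + 0.3 * rng.standard_normal((50, t.size))  # 50 noisy "trials"

grand_average = trials.mean(axis=0)

def mse(candidate):
    """Mean squared error of one candidate output against all trial labels."""
    return float(np.mean((trials - candidate) ** 2))

# Any deviation from the grand average strictly increases the loss, so an
# MSE-trained network's output for this input is pulled toward the grand average.
assert mse(grand_average) < mse(grand_average + 0.05)
assert mse(grand_average) < mse(p3)
```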
APA, Harvard, Vancouver, ISO, etc. styles
48

Pelot, Nicole, Eric Musselman, Daniel Marshall, Christopher Davis, Edgar Pena, Minhaj Hussein, Will Huffman, Andrew Shoffstall and Warren Grill. "Advancing autonomic nerve stimulation through computational models". Brain Stimulation 16, no. 1 (January 2023): 164–65. http://dx.doi.org/10.1016/j.brs.2023.01.151.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
49

Louie, Kenway. "Asymmetric and adaptive reward coding via normalized reinforcement learning". PLOS Computational Biology 18, no. 7 (21.07.2022): e1010350. http://dx.doi.org/10.1371/journal.pcbi.1010350.

Full text source
Abstract:
Learning is widely modeled in psychology, neuroscience, and computer science by prediction error-guided reinforcement learning (RL) algorithms. While standard RL assumes linear reward functions, reward-related neural activity is a saturating, nonlinear function of reward; however, the computational and behavioral implications of nonlinear RL are unknown. Here, we show that nonlinear RL incorporating the canonical divisive normalization computation introduces an intrinsic and tunable asymmetry in prediction error coding. At the behavioral level, this asymmetry explains empirical variability in risk preferences typically attributed to asymmetric learning rates. At the neural level, diversity in asymmetries provides a computational mechanism for recently proposed theories of distributional RL, allowing the brain to learn the full probability distribution of future rewards. This behavioral and computational flexibility argues for an incorporation of biologically valid value functions in computational models of learning and decision-making.
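The asymmetry the abstract describes can be illustrated with a toy version of the idea: if the prediction error compares a saturating, divisively normalized reward utility u(r) = r / (sigma + r) against the learned value, equal-sized reward increases and decreases produce unequal errors. This is a hedged sketch of the mechanism, not the paper's exact equations; sigma, alpha, and the reward values are illustrative.

```python
SIGMA = 1.0  # illustrative normalization constant, not from the paper

def u(r):
    """Saturating, divisively normalized reward utility r / (SIGMA + r)."""
    return r / (SIGMA + r)

def update(v, r, alpha=0.1):
    """TD-style update with a normalized prediction error delta = u(r) - v."""
    delta = u(r) - v
    return v + alpha * delta, delta

# Value adapted to a baseline reward of 2; probe symmetric deviations of +/-1.
v_baseline = u(2.0)
_, delta_gain = update(v_baseline, 3.0)   # reward better than expected
_, delta_loss = update(v_baseline, 1.0)   # reward worse than expected
print(abs(delta_gain) < abs(delta_loss))  # True: concavity compresses gains
```

Because u is concave, positive prediction errors are systematically smaller in magnitude than matched negative ones, which is one way a "tunable asymmetry" in error coding can arise without assuming asymmetric learning rates.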
APA, Harvard, Vancouver, ISO, etc. styles
50

Birgiolas, Justas, Vergil Haynes, Padraig Gleeson, Richard C. Gerkin, Suzanne W. Dietrich and Sharon Crook. "NeuroML-DB: Sharing and characterizing data-driven neuroscience models described in NeuroML". PLOS Computational Biology 19, no. 3 (3.03.2023): e1010941. http://dx.doi.org/10.1371/journal.pcbi.1010941.

Full text source
Abstract:
As researchers develop computational models of neural systems with increasing sophistication and scale, it is often the case that fully de novo model development is impractical and inefficient. Thus arises a critical need to quickly find, evaluate, re-use, and build upon models and model components developed by other researchers. We introduce the NeuroML Database (NeuroML-DB.org), which has been developed to address this need and to complement other model sharing resources. NeuroML-DB stores over 1,500 previously published models of ion channels, cells, and networks that have been translated to the modular NeuroML model description language. The database also provides reciprocal links to other neuroscience model databases (ModelDB, Open Source Brain) as well as access to the original model publications (PubMed). These links along with Neuroscience Information Framework (NIF) search functionality provide deep integration with other neuroscience community modeling resources and greatly facilitate the task of finding suitable models for reuse. Serving as an intermediate language, NeuroML and its tooling ecosystem enable efficient translation of models to other popular simulator formats. The modular nature also enables efficient analysis of a large number of models and inspection of their properties. Search capabilities of the database, together with web-based, programmable online interfaces, allow the community of researchers to rapidly assess stored model electrophysiology, morphology, and computational complexity properties. We use these capabilities to perform a database-scale analysis of neuron and ion channel models and describe a novel tetrahedral structure formed by cell model clusters in the space of model properties and features. This analysis provides further information about model similarity to enrich database search.
APA, Harvard, Vancouver, ISO, etc. styles