Journal articles on the topic 'Neurons Models'

To see the other types of publications on this topic, follow the link: Neurons Models.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic 'Neurons Models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Dasika, Vasant, John A. White, and H. Steven Colburn. "Simple neuron models of ITD sensitive neurons." Journal of the Acoustical Society of America 111, no. 5 (2002): 2355. http://dx.doi.org/10.1121/1.4777912.

2

Holmstrom, Lars, Patrick D. Roberts, and Christine V. Portfors. "Responses to Social Vocalizations in the Inferior Colliculus of the Mustached Bat Are Influenced by Secondary Tuning Curves." Journal of Neurophysiology 98, no. 6 (December 2007): 3461–72. http://dx.doi.org/10.1152/jn.00638.2007.

Abstract:
Neurons in the inferior colliculus (IC) of the mustached bat integrate input from multiple frequency bands in a complex fashion. These neurons are important for encoding the bat's echolocation and social vocalizations. The purpose of this study was to quantify the contribution of complex frequency interactions on the responses of IC neurons to social vocalizations. Neural responses to single tones, two-tone pairs, and social vocalizations were recorded in the IC of the mustached bat. Three types of data driven stimulus-response models were designed for each neuron from single tone and tone pair stimuli to predict the responses of individual neurons to social vocalizations. The first model was generated only using the neuron's primary frequency tuning curve, whereas the second model incorporated the entire hearing range of the animal. The extended model often predicted responses to many social vocalizations more accurately for multiply tuned neurons. One class of multiply tuned neuron that likely encodes echolocation information also responded to many of the social vocalizations, suggesting that some neurons in the mustached bat IC have dual functions. The third model included two-tone frequency tunings of the neurons. The responses to vocalizations were better predicted by the two-tone models when the neuron had inhibitory frequency tuning curves that were not near the neuron's primary tuning curve. Our results suggest that complex frequency interactions in the IC determine neural responses to social vocalizations and some neurons in IC have dual functions that encode both echolocation and social vocalization signals.
3

Duggins, Peter, and Chris Eliasmith. "Constructing functional models from biophysically-detailed neurons." PLOS Computational Biology 18, no. 9 (September 8, 2022): e1010461. http://dx.doi.org/10.1371/journal.pcbi.1010461.

Abstract:
Improving biological plausibility and functional capacity are two important goals for brain models that connect low-level neural details to high-level behavioral phenomena. We develop a method called “oracle-supervised Neural Engineering Framework” (osNEF) to train biologically-detailed spiking neural networks that realize a variety of cognitively-relevant dynamical systems. Specifically, we train networks to perform computations that are commonly found in cognitive systems (communication, multiplication, harmonic oscillation, and gated working memory) using four distinct neuron models (leaky-integrate-and-fire neurons, Izhikevich neurons, 4-dimensional nonlinear point neurons, and 4-compartment, 6-ion-channel layer-V pyramidal cell reconstructions) connected with various synaptic models (current-based synapses, conductance-based synapses, and voltage-gated synapses). We show that osNEF networks exhibit the target dynamics by accounting for nonlinearities present within the neuron models: performance is comparable across all four systems and all four neuron models, with variance proportional to task and neuron model complexity. We also apply osNEF to build a model of working memory that performs a delayed response task using a combination of pyramidal cells and inhibitory interneurons connected with NMDA and GABA synapses. The baseline performance and forgetting rate of the model are consistent with animal data from delayed match-to-sample tasks (DMTST): we observe a baseline performance of 95% and exponential forgetting with time constant τ = 8.5s, while a recent meta-analysis of DMTST performance across species observed baseline performances of 58 − 99% and exponential forgetting with time constants of τ = 2.4 − 71s. These results demonstrate that osNEF can train functional brain models using biologically-detailed components and open new avenues for investigating the relationship between biophysical mechanisms and functional capabilities.
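Of the four neuron models listed in this abstract, the leaky integrate-and-fire (LIF) neuron is the simplest, and a minimal simulation of it may help readers place the others. The sketch below is a generic illustration, not the osNEF implementation; all parameter values are arbitrary assumptions.

```python
def simulate_lif(i_ext=2e-9, tau_m=0.02, r_m=1e7, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, dt=1e-4, t_max=0.5):
    """Leaky integrate-and-fire neuron driven by a constant current.

    Subthreshold dynamics: dV/dt = (-(V - v_rest) + r_m * i_ext) / tau_m,
    with a spike emitted and the potential reset whenever V crosses threshold.
    Parameter values here are illustrative assumptions.
    """
    v, spike_times = v_rest, []
    for step in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:                 # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                   # reset after the spike
    return spike_times

print(f"{len(simulate_lif())} spikes in 0.5 s of simulated time")
```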
4

Prinz, Astrid A., Cyrus P. Billimoria, and Eve Marder. "Alternative to Hand-Tuning Conductance-Based Models: Construction and Analysis of Databases of Model Neurons." Journal of Neurophysiology 90, no. 6 (December 2003): 3998–4015. http://dx.doi.org/10.1152/jn.00641.2003.

Abstract:
Conventionally, the parameters of neuronal models are hand-tuned using trial-and-error searches to produce a desired behavior. Here, we present an alternative approach. We have generated a database of about 1.7 million single-compartment model neurons by independently varying 8 maximal membrane conductances based on measurements from lobster stomatogastric neurons. We classified the spontaneous electrical activity of each model neuron and its responsiveness to inputs during runtime with an adaptive algorithm and saved a reduced version of each neuron's activity pattern. Our analysis of the distribution of different activity types (silent, spiking, bursting, irregular) in the 8-dimensional conductance space indicates that the coarse grid of conductance values we chose is sufficient to capture the salient features of the distribution. The database can be searched for different combinations of neuron properties such as activity type, spike or burst frequency, resting potential, frequency–current relation, and phase-response curve. We demonstrate how the database can be screened for models that reproduce the behavior of a specific biological neuron and show that the contents of the database can give insight into the way a neuron's membrane conductances determine its activity pattern and response properties. Similar databases can be constructed to explore parameter spaces in multicompartmental models or small networks, or to examine the effects of changes in the voltage dependence of currents. In all cases, database searches can provide insight into how neuronal and network properties depend on the values of the parameters in the models.
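The construction described above is essentially a brute-force enumeration: choose a coarse grid of maximal-conductance values, simulate every combination, classify the resulting activity, and store the result for later searches. The sketch below outlines that workflow; the conductance names, grid levels, and classifier stub are placeholders, not the values or algorithm used by the authors.

```python
import itertools

# Placeholder grid: a few levels per maximal conductance (values are made up;
# the published database varied 8 conductances over a coarse grid).
conductance_levels = {
    "g_Na":   [0.0, 100.0, 200.0, 400.0],
    "g_CaT":  [0.0, 2.5, 5.0],
    "g_Kd":   [0.0, 50.0, 100.0],
    "g_leak": [0.01, 0.03, 0.05],
}

def classify_activity(gbar):
    """Stub standing in for: simulate the single-compartment model with these
    maximal conductances, then label the activity 'silent', 'spiking',
    'bursting', or 'irregular' (done at runtime with an adaptive algorithm
    in the paper)."""
    return "unclassified"

def build_database():
    names = sorted(conductance_levels)
    database = []
    for combo in itertools.product(*(conductance_levels[n] for n in names)):
        gbar = dict(zip(names, combo))
        database.append({**gbar, "activity": classify_activity(gbar)})
    return database

db = build_database()
print(f"{len(db)} parameter combinations on this toy grid")
# Searching the database then reduces to filtering, e.g.:
bursters = [entry for entry in db if entry["activity"] == "bursting"]
```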
5

Hong, En, Fatma Gurel Kazanci, and Astrid A. Prinz. "Different Roles of Related Currents in Fast and Slow Spiking of Model Neurons From Two Phyla." Journal of Neurophysiology 100, no. 4 (October 2008): 2048–61. http://dx.doi.org/10.1152/jn.90567.2008.

Abstract:
Neuronal activity arises from the interplay of membrane and synaptic currents. Although many channel proteins conducting these currents are phylogenetically conserved, channels of the same type in different animals can have different voltage dependencies and dynamics. What does this mean for our ability to derive rules about the role of different types of ion channels in neuronal activity? Can results about the role of a particular channel type in a particular type of neuron be generalized to other neuron types? We compare spiking model neurons in two databases constructed by exploring the maximal conductance spaces of two models. The first is a model of crustacean stomatogastric neurons, and the second is a model of rodent thalamocortical neurons, but both models contain similar types of membrane currents. Spiking neurons in both databases show distinct fast and slow subpopulations, but our analysis reveals that related currents play different roles in fast and slow spiking in the stomatogastric versus thalamocortical neurons. This analysis involved conductance-space visualization and comparison of voltage traces, current traces, and frequency-current relationships from all spiker subpopulations. Our results are consistent with previous work indicating that the role a membrane current plays in shaping a neuron's behavior depends on the voltage dependence and dynamics of that current and may be different in different neuron types depending on the properties of other currents it is interacting with. Conclusions about the function of a type of membrane current based on experiments or simulations in one type of neuron may therefore not generalize to other neuron types.
6

Karpe, Yashashree, Zhenyu Chen, and Xue-Jun Li. "Stem Cell Models and Gene Targeting for Human Motor Neuron Diseases." Pharmaceuticals 14, no. 6 (June 12, 2021): 565. http://dx.doi.org/10.3390/ph14060565.

Abstract:
Motor neurons are large projection neurons classified into upper and lower motor neurons responsible for controlling the movement of muscles. Degeneration of motor neurons results in progressive muscle weakness, which underlies several debilitating neurological disorders including amyotrophic lateral sclerosis (ALS), hereditary spastic paraplegias (HSP), and spinal muscular atrophy (SMA). With the development of induced pluripotent stem cell (iPSC) technology, human iPSCs can be derived from patients and further differentiated into motor neurons. Motor neuron disease models can also be generated by genetically modifying human pluripotent stem cells. The efficiency of gene targeting in human cells had long been very low but has been greatly improved by recent gene editing technologies such as zinc-finger nucleases (ZFN), transcription activator-like effector nucleases (TALEN), and CRISPR-Cas9. The combination of human stem cell-based models and gene editing tools provides unique paradigms to dissect pathogenic mechanisms and to explore therapeutics for these devastating diseases. Owing to the critical role of several genes in the etiology of motor neuron diseases, targeted gene therapies have been developed, including antisense oligonucleotides, viral-based gene delivery, and in situ gene editing. This review summarizes recent advancements in these areas and discusses future challenges toward the development of transformative medicines for motor neuron diseases.
7

Rybak, Ilya A., Julian F. R. Paton, and James S. Schwaber. "Modeling Neural Mechanisms for Genesis of Respiratory Rhythm and Pattern. II. Network Models of the Central Respiratory Pattern Generator." Journal of Neurophysiology 77, no. 4 (April 1, 1997): 2007–26. http://dx.doi.org/10.1152/jn.1997.77.4.2007.

Abstract:
Rybak, Ilya A., Julian F. R. Paton, and James S. Schwaber. Modeling neural mechanisms for genesis of respiratory rhythm and pattern. II. Network models of the central respiratory pattern generator. J. Neurophysiol. 77: 2007–2026, 1997. The present paper describes several models of the central respiratory pattern generator (CRPG) developed employing experimental data and current hypotheses for respiratory rhythmogenesis. Each CRPG model includes a network of respiratory neuron types (e.g., early inspiratory; ramp inspiratory; late inspiratory; decrementing expiratory; postinspiratory; stage II expiratory; stage II constant firing expiratory; preinspiratory) and simplified models of lung and pulmonary stretch receptors (PSR), which provide feedback to the respiratory network. The used models of single respiratory neurons were developed in the Hodgkin-Huxley style as described in the previous paper. The mechanism for termination of inspiration (the inspiratory off-switch) in all models operates via late-I neuron, which is considered to be the inspiratory off-switching neuron. Several two- and three-phase CRPG models have been developed using different accepted hypotheses of the mechanism for termination of expiration. The key elements in the two-phase models are the early-I and dec-E neurons. The expiratory off-switch mechanism in these models is based on the mutual inhibitory connections between early-I and dec-E and adaptive properties of the dec-E neuron. The difference between the two-phase models concerns the mechanism for ramp firing patterns of E2 neurons resulting either from the intrinsic neuronal properties of the E2 neuron or from disinhibition from the adapting dec-E neuron. The key element of the three-phase models is the pre-I neuron, which acts as the expiratory off-switching neuron. The three-phase models differ by the mechanisms used for termination of expiration and for the ramp firing patterns of E2 neurons. Additional CRPG models were developed employing a dual switching neuron that generates two bursts per respiratory cycle to terminate both inspiration and expiration. Although distinctly different each model generates a stable respiratory rhythm and shows physiologically plausible firing patterns of respiratory neurons with and without PSR feedback. Using our models, we analyze the roles of different respiratory neuron types and their interconnections for the respiratory rhythm and pattern generation. We also investigate the possible roles of intrinsic biophysical properties of different respiratory neurons in controlling the duration of respiratory phases and timing of switching between them. We show that intrinsic membrane properties of respiratory neurons are integrated with network properties of the CRPG at three hierarchical levels: at the cellular level to provide the specific firing patterns of respiratory neurons (e.g., ramp firing patterns); at the network level to provide switching between the respiratory phases; and at the systems level to control the duration of inspiration and expiration under different conditions (e.g., lack of PSR feedback).
8

Plesser, Hans E., and Markus Diesmann. "Simplicity and Efficiency of Integrate-and-Fire Neuron Models." Neural Computation 21, no. 2 (February 2009): 353–59. http://dx.doi.org/10.1162/neco.2008.03-08-731.

Abstract:
Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
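The "exact integration" the authors appeal to exploits the fact that linear subthreshold dynamics can be propagated over a simulation step in closed form, so the state never has to be rebuilt by summing over the history of input spikes. Below is a hedged sketch for a current-based LIF neuron with input held constant over each step; this is the textbook propagator with illustrative parameters, not any particular simulator's implementation.

```python
import math

def lif_exact_step(v, i_syn, dt=1e-4, tau_m=0.02, r_m=1e7, v_rest=-0.065):
    """Advance the LIF membrane potential by one step using the closed-form
    solution of the linear subthreshold dynamics, assuming i_syn is constant
    over the step. No summation over past input spikes is required.
    Parameter values are illustrative assumptions."""
    decay = math.exp(-dt / tau_m)
    return v_rest + (v - v_rest) * decay + r_m * i_syn * (1.0 - decay)

v = -0.065
for _ in range(1000):                    # 0.1 s at dt = 0.1 ms
    v = lif_exact_step(v, i_syn=2e-9)
print(f"membrane potential after 0.1 s: {v * 1e3:.1f} mV")
```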
9

Harrison, L. M., O. David, and K. J. Friston. "Stochastic models of neuronal dynamics." Philosophical Transactions of the Royal Society B: Biological Sciences 360, no. 1457 (May 29, 2005): 1075–91. http://dx.doi.org/10.1098/rstb.2005.1648.

Abstract:
Cortical activity is the product of interactions among neuronal populations. Macroscopic electrophysiological phenomena are generated by these interactions. In principle, the mechanisms of these interactions afford constraints on biologically plausible models of electrophysiological responses. In other words, the macroscopic features of cortical activity can be modelled in terms of the microscopic behaviour of neurons. An evoked response potential (ERP) is the mean electrical potential measured from an electrode on the scalp, in response to some event. The purpose of this paper is to outline a population density approach to modelling ERPs. We propose a biologically plausible model of neuronal activity that enables the estimation of physiologically meaningful parameters from electrophysiological data. The model encompasses four basic characteristics of neuronal activity and organization: (i) neurons are dynamic units, (ii) driven by stochastic forces, (iii) organized into populations with similar biophysical properties and response characteristics and (iv) multiple populations interact to form functional networks. This leads to a formulation of population dynamics in terms of the Fokker–Planck equation. The solution of this equation is the temporal evolution of a probability density over state-space, representing the distribution of an ensemble of trajectories. Each trajectory corresponds to the changing state of a neuron. Measurements can be modelled by taking expectations over this density, e.g. mean membrane potential, firing rate or energy consumption per neuron. The key motivation behind our approach is that ERPs represent an average response over many neurons. This means it is sufficient to model the probability density over neurons, because this implicitly models their average state. Although the dynamics of each neuron can be highly stochastic, the dynamics of the density is not. This means we can use Bayesian inference and estimation tools that have already been established for deterministic systems. The potential importance of modelling density dynamics (as opposed to more conventional neural mass models) is that they include interactions among the moments of neuronal states (e.g. the mean depolarization may depend on the variance of synaptic currents through nonlinear mechanisms). Here, we formulate a population model, based on biologically informed model-neurons with spike-rate adaptation and synaptic dynamics. Neuronal sub-populations are coupled to form an observation model, with the aim of estimating and making inferences about coupling among sub-populations using real data. We approximate the time-dependent solution of the system using a bi-orthogonal set and first-order perturbation expansion. For didactic purposes, the model is developed first in the context of deterministic input, and then extended to include stochastic effects. The approach is demonstrated using synthetic data, where model parameters are identified using a Bayesian estimation scheme we have described previously.
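For orientation, the population-density formulation described above evolves a probability density p(v, t) over neuronal state according to a Fokker–Planck equation. In its simplest one-dimensional, textbook form (not the full multi-population system estimated in the paper, and with the drift and noise terms left unspecified):

\[
\frac{\partial p(v,t)}{\partial t} \;=\; -\frac{\partial}{\partial v}\Bigl[f(v)\,p(v,t)\Bigr] \;+\; \frac{\sigma^{2}}{2}\,\frac{\partial^{2} p(v,t)}{\partial v^{2}},
\qquad
\langle g \rangle(t) \;=\; \int g(v)\,p(v,t)\,dv,
\]

where f(v) is the deterministic drift of a single neuron's state, σ scales the stochastic forcing, and the expectation ⟨g⟩ illustrates how measurements such as mean membrane potential or firing rate are read off the density.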
10

Sajjad, Hassan, Nadir Durrani, and Fahim Dalvi. "Neuron-level Interpretation of Deep NLP Models: A Survey." Transactions of the Association for Computational Linguistics 10 (2022): 1285–303. http://dx.doi.org/10.1162/tacl_a_00519.

Abstract:
The proliferation of Deep Neural Networks in various domains has seen an increased need for interpretability of these models. Preliminary work done along this line, and papers that surveyed such, are focused on high-level representation analysis. However, a recent branch of work has concentrated on interpretability at a more granular level of analyzing neurons within these models. In this paper, we survey the work done on neuron analysis including: i) methods to discover and understand neurons in a network; ii) evaluation methods; iii) major findings including cross architectural comparisons that neuron analysis has unraveled; iv) applications of neuron probing such as: controlling the model, domain adaptation, and so forth; and v) a discussion on open issues and future research directions.
11

Genc, Baris, Oge Gozutok, Nuran Kocak, and P. Hande Ozdinler. "The Timing and Extent of Motor Neuron Vulnerability in ALS Correlates with Accumulation of Misfolded SOD1 Protein in the Cortex and in the Spinal Cord." Cells 9, no. 2 (February 22, 2020): 502. http://dx.doi.org/10.3390/cells9020502.

Abstract:
Understanding the cellular and molecular basis of selective vulnerability has been challenging, especially for motor neuron diseases. Developing drugs that improve the health of neurons that display selective vulnerability relies on in vivo cell-based models and quantitative readout measures that translate to patient outcome. We initially developed and characterized UCHL1-eGFP mice, in which motor neurons are labeled with eGFP that is stable and long-lasting. By crossing UCHL1-eGFP to amyotrophic lateral sclerosis (ALS) disease models, we generated ALS mouse models with fluorescently labeled motor neurons. Their examination over time began to reveal the cellular basis of selective vulnerability even within the related motor neuron pools. Accumulation of misfolded SOD1 protein both in the corticospinal and spinal motor neurons over time correlated with the timing and extent of degeneration. This further proved simultaneous degeneration of both upper and lower motor neurons, and the requirement to consider both upper and lower motor neuron populations in drug discovery efforts. Demonstration of the direct correlation between misfolded SOD1 accumulation and motor neuron degeneration in both cortex and spinal cord is important for building cell-based assays in vivo. Our report sets the stage for shifting focus from mice to diseased neurons for drug discovery efforts, especially for motor neuron diseases.
12

Zavitz, Elizabeth, and Nicholas S. C. Price. "Weighting neurons by selectivity produces near-optimal population codes." Journal of Neurophysiology 121, no. 5 (May 1, 2019): 1924–37. http://dx.doi.org/10.1152/jn.00504.2018.

Abstract:
Perception is produced by “reading out” the representation of a sensory stimulus contained in the activity of a population of neurons. To examine experimentally how populations code information, a common approach is to decode a linearly weighted sum of the neurons’ spike counts. This approach is popular because of the biological plausibility of weighted, nonlinear integration. For neurons recorded in vivo, weights are highly variable when derived through optimization methods, but it is unclear how the variability affects decoding performance in practice. To address this, we recorded from neurons in the middle temporal area (MT) of anesthetized marmosets ( Callithrix jacchus) viewing stimuli comprising a sheet of dots that moved coherently in 1 of 12 different directions. We found that high peak response and direction selectivity both predicted that a neuron would be weighted more highly in an optimized decoding model. Although learned weights differed markedly from weights chosen according to a priori rules based on a neuron’s tuning profile, decoding performance was only marginally better for the learned weights. In the models with a priori rules, selectivity is the best predictor of weighting, and defining weights according to a neuron’s preferred direction and selectivity improves decoding performance to very near the maximum level possible, as defined by the learned weights. NEW & NOTEWORTHY We examined which aspects of a neuron’s tuning account for its contribution to sensory coding. Strongly direction-selective neurons are weighted most highly by optimal decoders trained to discriminate motion direction. Models with predefined decoding weights demonstrate that this weighting scheme causally improved direction representation by a neuronal population. Optimizing decoders (using a generalized linear model or Fisher’s linear discriminant) led to only marginally better performance than decoders based purely on a neuron’s preferred direction and selectivity.
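The decoding scheme discussed above is a linearly weighted sum of spike counts, with the a priori weights set from each neuron's preferred direction and selectivity. The sketch below illustrates that kind of rule for a toy population of direction-tuned units; the tuning model, the selectivity measure, and all parameter values are assumptions for illustration, not the authors' fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_directions = 50, 12
directions = np.linspace(0, 2 * np.pi, n_directions, endpoint=False)

# Assumed tuning: each unit has a preferred direction, a selectivity
# (tuning sharpness), and a peak rate; counts are Poisson around the curve.
preferred = rng.uniform(0, 2 * np.pi, n_neurons)
selectivity = rng.uniform(0.5, 3.0, n_neurons)      # larger = more sharply tuned
peak_rate = rng.uniform(10, 60, n_neurons)

def spike_counts(stim_dir):
    rates = peak_rate * np.exp(selectivity * (np.cos(stim_dir - preferred) - 1))
    return rng.poisson(rates)

# A priori decoder: weight each unit by its selectivity, with the sign set by
# how close the candidate direction is to the unit's preferred direction.
def decode(counts):
    scores = [np.sum(selectivity * np.cos(d - preferred) * counts) for d in directions]
    return directions[int(np.argmax(scores))]

hits = sum(decode(spike_counts(d)) == d for d in directions for _ in range(20))
print(f"correct: {hits}/{12 * 20}")
```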
13

Oliveri, Hadrien, and Alain Goriely. "Mathematical models of neuronal growth." Biomechanics and Modeling in Mechanobiology 21, no. 1 (January 7, 2022): 89–118. http://dx.doi.org/10.1007/s10237-021-01539-0.

Abstract:
The establishment of a functioning neuronal network is a crucial step in neural development. During this process, neurons extend neurites—axons and dendrites—to meet other neurons and interconnect. Therefore, these neurites need to migrate, grow, branch and find the correct path to their target by processing sensory cues from their environment. These processes rely on many coupled biophysical effects including elasticity, viscosity, growth, active forces, chemical signaling, adhesion and cellular transport. Mathematical models offer a direct way to test hypotheses and understand the underlying mechanisms responsible for neuron development. Here, we critically review the main models of neurite growth and morphogenesis from a mathematical viewpoint. We present different models for growth, guidance and morphogenesis, with a particular emphasis on mechanics and mechanisms, and on simple mathematical models that can be partially treated analytically.
14

Mulloney, Brian, and Wendy M. Hall. "Local and Intersegmental Interactions of Coordinating Neurons and Local Circuits in the Swimmeret System." Journal of Neurophysiology 98, no. 1 (July 2007): 405–13. http://dx.doi.org/10.1152/jn.00345.2007.

Abstract:
During forward swimming, periodic movements of swimmerets on different segments of the crayfish abdomen progress from back to front with the same period. Information encoded as bursts of spikes by coordinating neurons in each segmental ganglion is necessary for this coherent organization. This information is conducted to targets in other ganglia. When an individual coordinating neuron is stimulated at different phases in the system's cycle of activity, the timing of motor output from other ganglia may be altered. In models of this coordinating circuit, we assumed that each coordinating neuron encodes information about the state of the local pattern-generating circuit in its home ganglion but is not part of that local circuit. We tested this assumption by stimulating individual coordinating neurons of two kinds—ASCE and DSC—at different phases under two conditions: with the target ganglion functional, and with the target ganglion silenced. Blocking a DSC neuron's target ganglion did not alter its negligible influence on the output from its home ganglion; the phase-response curves (PRC) remained flat. Blocking an ASCE neuron's target ganglion significantly affected its influence on the output from its home ganglion. We had predicted that ASCE's modest phase-dependent influence would disappear with the target silenced, but instead the amplitude of the PRCs increased significantly. Thus we have two different results: DSC neurons conformed to prediction based on the models’ assumptions, but ASCE neurons showed an unexpected property, one that is partially masked when the bidirectional flow of information between neighboring ganglia is operating normally.
15

Arunachalam, Viswanathan, Raha Akhavan-Tabatabaei, and Cristina Lopez. "Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network." Computational and Mathematical Methods in Medicine 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/374878.

Abstract:
Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. In a refreshing departure, Vidybida (2008) proposed binding neuron models in which the trace of an input is remembered only for a finite, fixed period of time, after which it is forgotten. Binding neurons conform to the behaviour of real neurons and are applicable to constructing fast recurrent networks for computer modeling. This paper explicitly develops several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss how the developed results can be applied to construct a modified hourglass network model in which interconnected neurons receive excitatory as well as inhibitory inputs. Limited simulation results for the hourglass network are presented.
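A binding neuron, as described above, stores each incoming impulse for a fixed time τ and fires when the number of impulses currently held in memory reaches a threshold; after firing, the memory is cleared. The event-driven sketch below is a minimal reading of that definition with arbitrary parameters, not Vidybida's or the authors' exact formulation.

```python
import random

def binding_neuron(input_times, tau=0.010, n0=3):
    """Binding neuron: each input is remembered for tau seconds; the neuron
    fires when n0 inputs are simultaneously held in memory, then clears it.
    (Minimal interpretation of the model; parameters are assumptions.)"""
    memory, output = [], []
    for t in sorted(input_times):
        memory = [s for s in memory if t - s < tau]   # forget expired inputs
        memory.append(t)
        if len(memory) >= n0:
            output.append(t)
            memory = []                               # reset after firing
    return output

random.seed(1)
# Poisson input stream at ~200 impulses/s for 1 s (illustrative).
t, inputs = 0.0, []
while t < 1.0:
    t += random.expovariate(200.0)
    inputs.append(t)
print(f"{len(binding_neuron(inputs))} output spikes from {len(inputs)} inputs")
```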
16

Vazquez, Roberto A., and Beatriz A. Garro. "Training Spiking Neural Models Using Artificial Bee Colony." Computational Intelligence and Neuroscience 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/947098.

Abstract:
Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been shown that this type of neuron can solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models has prevented their use in several pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees as they explore their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is worth noting that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models improves with this kind of learning strategy.
17

Camera, Giancarlo La, Alexander Rauch, Hans-R. Lüscher, Walter Senn, and Stefano Fusi. "Minimal Models of Adapted Neuronal Response to In Vivo–Like Input Currents." Neural Computation 16, no. 10 (October 1, 2004): 2101–24. http://dx.doi.org/10.1162/0899766041732468.

Abstract:
Rate models are often used to study the behavior of large networks of spiking neurons. Here we propose a procedure to derive rate models that take into account the fluctuations of the input current and firing-rate adaptation, two ubiquitous features in the central nervous system that have been previously overlooked in constructing rate models. The procedure is general and applies to any model of firing unit. As examples, we apply it to the leaky integrate-and-fire (IF) neuron, the leaky IF neuron with reversal potentials, and to the quadratic IF neuron. Two mechanisms of adaptation are considered, one due to an after hyperpolarization current and the other to an adapting threshold for spike emission. The parameters of these simple models can be tuned to match experimental data obtained from neocortical pyramidal neurons. Finally, we show how the stationary model can be used to predict the time-varying activity of a large population of adapting neurons.
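Stripped to its core, the construction sketched above defines the adapted rate as the self-consistent solution of the stationary response function when an adaptation current proportional to the output rate is fed back into the input. A schematic form (simplified notation; a paraphrase of the construction, not a quotation from the paper) is:

\[
\nu \;=\; \Phi\bigl(\mu - \alpha\,\nu,\ \sigma\bigr),
\]

where Φ(μ, σ) is the stationary firing rate of the chosen spiking model (e.g., a leaky IF neuron) for mean input μ and fluctuation amplitude σ, and α measures the strength of adaptation, for instance from an afterhyperpolarization current or an adapting threshold.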
18

Mill, Robert, Martin Coath, Thomas Wennekers, and Susan L. Denham. "Abstract Stimulus-Specific Adaptation Models." Neural Computation 23, no. 2 (February 2011): 435–76. http://dx.doi.org/10.1162/neco_a_00077.

Abstract:
Many neurons that initially respond to a stimulus stop responding if the stimulus is presented repeatedly but recover their response if a different stimulus is presented. This phenomenon is referred to as stimulus-specific adaptation (SSA). SSA has been investigated extensively using oddball experiments, which measure the responses of a neuron to sequences of stimuli. Neurons that exhibit SSA respond less vigorously to common stimuli, and the metric typically used to quantify this difference is the SSA index (SI). This article presents the first detailed analysis of the SI metric by examining the question: How should a system (e.g., a neuron) respond to stochastic input if it is to maximize the SI of its output? Questions like this one are particularly relevant to those wishing to construct computational models of SSA. If an artificial neural network receives stimulus information at a particular rate and must respond within a fixed time, what is the highest SI one can reasonably expect? We demonstrate that the optimum, average SI is constrained by the information in the input source, the length and encoding of the memory, and the assumptions concerning how the task is decomposed.
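For readers unfamiliar with the metric, the SSA index is computed from a neuron's responses to a tone when it appears as the rare deviant, d(f), versus as the common standard, s(f), in an oddball sequence. Commonly used definitions from the SSA literature (the abstract itself does not restate them) are:

\[
SI(f_i) \;=\; \frac{d(f_i) - s(f_i)}{d(f_i) + s(f_i)},
\qquad
CSI \;=\; \frac{d(f_1) + d(f_2) - s(f_1) - s(f_2)}{d(f_1) + d(f_2) + s(f_1) + s(f_2)},
\]

so that values near 1 indicate responses dominated by the deviant stimulus and values near 0 indicate no stimulus-specific adaptation.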
19

Rybak, Ilya A., Julian F. R. Paton, and James S. Schwaber. "Modeling Neural Mechanisms for Genesis of Respiratory Rhythm and Pattern. I. Models of Respiratory Neurons." Journal of Neurophysiology 77, no. 4 (April 1, 1997): 1994–2006. http://dx.doi.org/10.1152/jn.1997.77.4.1994.

Abstract:
Rybak, Ilya A., Julian F. R. Paton, and James S. Schwaber. Modeling neural mechanisms for genesis of respiratory rhythm and pattern. I. Models of respiratory neurons. J. Neurophysiol. 77: 1994–2006, 1997. The general objectives of our research, presented in this series of papers, were to develop a computational model of the brain stem respiratory neural network and to explore possible neural mechanisms that provide the genesis of respiratory oscillations and the specific firing patterns of respiratory neurons. The present paper describes models of single respiratory neurons that have been used as the elements in our network models of the central respiratory pattern generator presented in subsequent papers. The models of respiratory neurons were developed in the Hodgkin-Huxley style employing both physiological and biophysical data obtained from brain stem neurons in mammals. Two single respiratory neuron models were developed to match the two distinct firing behaviors of respiratory neurons described in vivo: neuron type I shows an adapting firing pattern in response to synaptic excitation, and neuron type II shows a ramp firing pattern during membrane depolarization after a period of synaptic inhibition. We found that a frequency ramp firing pattern can result from intrinsic membrane properties, specifically from the combined influence of calcium-dependent KAHP(Ca), low-threshold CaT and KA channels. The neuron models with these ionic channels (type II) demonstrated ramp firing patterns similar to those recorded from respiratory neurons in vivo. Our simulations show that KAHP(Ca) channels in combination with high-threshold CaL channels produce spike frequency adaptation during synaptic excitation. However, in combination with low-threshold CaT channels, they cause a frequency ramp firing response after release from inhibition. This promotes a testable hypothesis that the main difference between the respiratory neurons that adapt (for example, early inspiratory, postinspiratory, and decrementing expiratory) and those that show ramp firing patterns (for example, ramp inspiratory and augmenting expiratory) consists of a ratio between the two types of calcium channels: CaL channels predominate in the former and CaT channels in the latter respiratory neuron types. We have analyzed the dependence of adapting and ramp firing patterns on maximal conductances of different ionic channels and values of synaptic drive. The effect of adjusting specific membrane conductances and synaptic interactions revealed plausible neuronal mechanisms that may underlie modulatory effects on respiratory neuron firing patterns and network performances. The results of computer simulation provide useful insight into the functional significance of specific intrinsic membrane properties and their interactions with phasic synaptic inputs for a better understanding of respiratory neuron firing behavior.
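For context, "developed in the Hodgkin-Huxley style" means that each model neuron obeys a current-balance equation in which every ionic current (here KAHP(Ca), CaT, CaL, KA, and so on) has its own voltage- and/or calcium-dependent gating variables. In generic form (the specific channel list, kinetics, and parameters of the paper are not reproduced here):

\[
C_m \frac{dV}{dt} \;=\; -\sum_i \bar g_i\, m_i^{p_i} h_i^{q_i}\,(V - E_i) \;+\; I_{syn},
\qquad
\frac{dx}{dt} \;=\; \frac{x_\infty(V) - x}{\tau_x(V)} \quad \text{for each gate } x \in \{m_i, h_i\},
\]

where \bar g_i and E_i are the maximal conductance and reversal potential of current i, and I_{syn} is the synaptic drive whose interaction with the intrinsic currents produces the adapting or ramp firing patterns described above.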
20

Shin, Grace Ji-eun, Maria Elena Pero, Luke A. Hammond, Anita Burgos, Atul Kumar, Samantha E. Galindo, Tanguy Lucas, Francesca Bartolini, and Wesley B. Grueber. "Integrins protect sensory neurons in models of paclitaxel-induced peripheral sensory neuropathy." Proceedings of the National Academy of Sciences 118, no. 15 (April 5, 2021): e2006050118. http://dx.doi.org/10.1073/pnas.2006050118.

Abstract:
Chemotherapy-induced peripheral neuropathy (CIPN) is a major side effect from cancer treatment with no known method for prevention or cure in clinics. CIPN often affects unmyelinated nociceptive sensory terminals. Despite the high prevalence, molecular and cellular mechanisms that lead to CIPN are still poorly understood. Here, we used a genetically tractable Drosophila model and primary sensory neurons isolated from adult mouse to examine the mechanisms underlying CIPN and identify protective pathways. We found that chronic treatment of Drosophila larvae with paclitaxel caused degeneration and altered the branching pattern of nociceptive neurons, and reduced thermal nociceptive responses. We further found that nociceptive neuron-specific overexpression of integrins, which are known to support neuronal maintenance in several systems, conferred protection from paclitaxel-induced cellular and behavioral phenotypes. Live imaging and superresolution approaches provide evidence that paclitaxel treatment causes cellular changes that are consistent with alterations in endosome-mediated trafficking of integrins. Paclitaxel-induced changes in recycling endosomes precede morphological degeneration of nociceptive neuron arbors, which could be prevented by integrin overexpression. We used primary dorsal root ganglia (DRG) neuron cultures to test conservation of integrin-mediated protection. We show that transduction of a human integrin β-subunit 1 also prevented degeneration following paclitaxel treatment. Furthermore, endogenous levels of surface integrins were decreased in paclitaxel-treated mouse DRG neurons, suggesting that paclitaxel disrupts recycling in vertebrate sensory neurons. Altogether, our study supports conserved mechanisms of paclitaxel-induced perturbation of integrin trafficking and a therapeutic potential of restoring neuronal interactions with the extracellular environment to antagonize paclitaxel-induced toxicity in sensory neurons.
21

COURBAGE, M., and V. I. NEKORKIN. "MAP BASED MODELS IN NEURODYNAMICS." International Journal of Bifurcation and Chaos 20, no. 06 (June 2010): 1631–51. http://dx.doi.org/10.1142/s0218127410026733.

Abstract:
This tutorial reviews an important new class of mathematical phenomenological models of neural activity generated by iterative dynamical systems: the so-called map-based systems. We focus on 1-D and 2-D maps that replicate many features of the neural activity of a single neuron. Such systems have been shown to reproduce the basic activity modes of real biological neurons, such as spiking, bursting, chaotic spiking-bursting, subthreshold oscillations, tonic and phasic spiking, and normal excitability. We emphasize the representation of chaotic spiking-bursting oscillations by chaotic attractors of 2-D models. We also explain the dynamical mechanism of formation of such attractors and the transition from one mode to another. We briefly present some synchronization mechanisms of chaotic spiking-bursting activity for two coupled neurons described by 1-D maps.
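As one concrete and widely cited example of the 2-D map-based class reviewed here, the chaotic Rulkov map couples a fast variable x to a slow variable y and, for suitable parameters, generates spiking-bursting activity. The sketch below simply iterates the map; the parametrization and values are common illustrative choices and are not claimed to be the specific maps analyzed by the authors.

```python
def iterate_rulkov(alpha=4.3, mu=0.001, sigma=0.1, n_steps=5000):
    """Iterate the chaotic Rulkov map, a 2-D map-based neuron model:
        x[n+1] = alpha / (1 + x[n]**2) + y[n]         (fast variable)
        y[n+1] = y[n] - mu * (x[n] + 1) + mu * sigma  (slow variable)
    For suitable parameters the fast variable shows spiking-bursting
    oscillations; the values used here are illustrative."""
    x, y = -1.0, -3.5
    trace = []
    for _ in range(n_steps):
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
        trace.append(x)
    return trace

trace = iterate_rulkov()
print(f"fast-variable range after {len(trace)} iterations: "
      f"[{min(trace):.2f}, {max(trace):.2f}]")
```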
22

Sarwar, Farah, Shaukat Iqbal, and Muhammad Waqar Hussain. "Linear and Nonlinear Electrical Models of Neurons for Hopfield Neural Network." Zeitschrift für Naturforschung A 71, no. 11 (November 1, 2016): 995–1002. http://dx.doi.org/10.1515/zna-2016-0161.

Abstract:
A novel electrical model of a neuron is proposed in this paper. The suggested neural network model has linear/nonlinear input-output characteristics. This new deterministic model has joint biological properties in excellent agreement with the earlier deterministic neuron model of Hopfield and Tank and with the stochastic neuron model of McCulloch and Pitts. It is an accurate portrayal of the differential equation presented by Hopfield and Tank to mimic neurons. Operational amplifiers, resistors, a capacitor, and diodes are used to design the system. The presented biological model of neurons remains advantageous for simulations. The impulse response is studied and discussed to certify the stability and strength of this innovative model. A simple illustration is mapped onto the system to demonstrate its exactness, and the precisely mapped illustration exhibits 100% accurate results.
23

Abbott, L. F., and Gwendal LeMasson. "Analysis of Neuron Models with Dynamically Regulated Conductances." Neural Computation 5, no. 6 (November 1993): 823–42. http://dx.doi.org/10.1162/neco.1993.5.6.823.

Abstract:
We analyze neuron models in which the maximal conductances of membrane currents are slowly varying dynamic variables regulated by the intracellular calcium concentration. These models allow us to study possible activity-dependent effects arising from processes that maintain and modify membrane channels in real neurons. Regulated model neurons maintain a constant average level of activity over a wide range of conditions by appropriately adjusting their conductances. The intracellular calcium concentration acts as a feedback element linking maximal conductances to electrical activity. The resulting plasticity of intrinsic characteristics has important implications for network behavior. We first study a simple two-conductance model, then introduce techniques that allow us to analyze dynamic regulation with an arbitrary number of conductances, and finally illustrate this method by studying a seven-conductance model. We conclude with an analysis of spontaneous differentiation of identical model neurons in a two-cell network.
24

Masuda, Naoki, and Kazuyuki Aihara. "Spatiotemporal Spike Encoding of a Continuous External Signal." Neural Computation 14, no. 7 (July 1, 2002): 1599–628. http://dx.doi.org/10.1162/08997660260028638.

Abstract:
Interspike intervals of spikes emitted from an integrator neuron model of sensory neurons can encode input information represented as a continuous signal from a deterministic system. If a real brain uses spike timing as a means of information processing, other neurons receiving spatiotemporal spikes from such sensory neurons must also be capable of treating information included in deterministic interspike intervals. In this article, we examine functions of neurons modeling cortical neurons receiving spatiotemporal spikes from many sensory neurons. We show that such neuron models can encode stimulus information passed from the sensory model neurons in the form of interspike intervals. Each sensory neuron connected to the cortical neuron contributes equally to the information collection by the cortical neuron. Although the incident spike train to the cortical neuron is a superimposition of spike trains from many sensory neurons, it need not be decomposed into spike trains according to the input neurons. These results are also preserved for generalizations of sensory neurons such as a small amount of leak, noise, inhomogeneity in firing rates, or biases introduced in the phase distributions.
25

Tonnelier, A. "Categorization of Neural Excitability Using Threshold Models." Neural Computation 17, no. 7 (July 1, 2005): 1447–55. http://dx.doi.org/10.1162/0899766053723087.

Abstract:
A classification of spiking neurons according to the transition from quiescence to periodic firing of action potentials is commonly used. Nonbursting neurons are classified into two types, type I and type II excitability. We use simple phenomenological spiking neuron models to derive a criterion for the determination of the neural excitability based on the after potential following a spike. The crucial characteristic is the existence for type II model of a positive overshoot, that is, a delayed after depolarization, during the recovery process of the membrane potential. Our prediction is numerically tested using well-known type I and type II models including the Connor, Walter, & McKown (1977) model and the Hodgkin-Huxley (1952) model.
26

Weber, Cornelius, and Jochen Triesch. "A Sparse Generative Model of V1 Simple Cells with Intrinsic Plasticity." Neural Computation 20, no. 5 (May 2008): 1261–84. http://dx.doi.org/10.1162/neco.2007.02-07-472.

Abstract:
Current models for learning feature detectors work on two timescales: on a fast timescale, the internal neurons' activations adapt to the current stimulus; on a slow timescale, the weights adapt to the statistics of the set of stimuli. Here we explore the adaptation of a neuron's intrinsic excitability, termed intrinsic plasticity, which occurs on a separate timescale. Here, a neuron maintains homeostasis of an exponentially distributed firing rate in a dynamic environment. We exploit this in the context of a generative model to impose sparse coding. With natural image input, localized edge detectors emerge as models of V1 simple cells. An intermediate timescale for the intrinsic plasticity parameters allows modeling aftereffects. In the tilt aftereffect, after a viewer adapts to a grid of a certain orientation, grids of a nearby orientation will be perceived as tilted away from the adapted orientation. Our results show that adapting the neurons' gain-parameter but not the threshold-parameter accounts for this effect. It occurs because neurons coding for the adapting stimulus attenuate their gain, while others increase it. Despite its simplicity and low maintenance, the intrinsic plasticity model accounts for more experimental details than previous models without this mechanism.
27

Mazurek, Kevin A., and Marc H. Schieber. "Mirror neurons precede non-mirror neurons during action execution." Journal of Neurophysiology 122, no. 6 (December 1, 2019): 2630–35. http://dx.doi.org/10.1152/jn.00653.2019.

Abstract:
Mirror neurons are thought to represent an individual’s ability to understand the actions of others by discharging as one individual performs or observes another individual performing an action. Studies typically have focused on mirror neuron activity during action observation, examining activity during action execution primarily to validate mirror neuron involvement in the motor act. As a result, little is known about the precise role of mirror neurons during action execution. In this study, during execution of reach-grasp-manipulate movements, we found activity of mirror neurons generally preceded that of non-mirror neurons. Not only did the onset of task-related modulation occur earlier in mirror neurons, but state transitions detected by hidden Markov models also occurred earlier in mirror neuron populations. Our findings suggest that mirror neurons may be at the forefront of action execution. NEW & NOTEWORTHY Mirror neurons commonly are thought to provide a neural substrate for understanding the actions of others, but mirror neurons also are active during action execution, when additional, non-mirror neurons are active as well. Examining the timing of activity during execution of a naturalistic reach-grasp-manipulate task, we found that mirror neuron activity precedes that of non-mirror neurons at both the unit and the population level. Thus mirror neurons may be at the leading edge of action execution.
28

Roberts, Rachael R., Joel C. Bornstein, Annette J. Bergner, and Heather M. Young. "Disturbances of colonic motility in mouse models of Hirschsprung's disease." American Journal of Physiology-Gastrointestinal and Liver Physiology 294, no. 4 (April 2008): G996—G1008. http://dx.doi.org/10.1152/ajpgi.00558.2007.

Abstract:
Mutations in genes encoding members of the GDNF and endothelin-3 (Et-3) signaling pathways can cause Hirschsprung's disease, a congenital condition associated with an absence of enteric neurons in the distal gut. GDNF signals through Ret, a receptor tyrosine kinase, and Et-3 signals through endothelin receptor B (Ednrb). The effects of Gdnf, Ret, and ET-3 haploinsufficiency and a null mutation in ET-3 on spontaneous motility patterns in adult and developing mice were investigated. Video recordings were used to construct spatiotemporal maps of spontaneous contractile patterns in colon from postnatal and adult mice in vitro. In Ret+/− and ET-3+/− mice, which have normal numbers of enteric neurons, colonic migrating motor complexes (CMMCs) displayed similar properties under control conditions and following inhibition of nitric oxide synthase (NOS) activity to wild-type mice. In the colon of Gdnf+/− mice and in the ganglionic region of ET-3−/− mice, there was a 50–60% reduction in myenteric neuron number. In Gdnf+/− mice, CMMCs were present, but abnormal, and the proportion of myenteric neurons containing NOS was not different from that of wild-type mice. In the ganglionic region of postnatal ET-3−/− mice, CMMCs were absent, and the proportion of myenteric neurons containing NOS was over 100% higher than in wild-type mice. Thus impairments in spontaneous motility patterns in the colon of Gdnf+/− mice and in the ganglionic region of ET-3−/− mice are correlated with a reduction in myenteric neuron density.
29

DENG, BO. "CONCEPTUAL CIRCUIT MODELS OF NEURONS." Journal of Integrative Neuroscience 08, no. 03 (September 2009): 255–97. http://dx.doi.org/10.1142/s0219635209002228.

30

Kuznetsov, Alexey, Leonid Rubchinsky, Nancy Kopell, and Charles Wilson. "Models of midbrain dopaminergic neurons." Scholarpedia 2, no. 10 (2007): 1812. http://dx.doi.org/10.4249/scholarpedia.1812.

31

Kuhn, Alexandre, Ad Aertsen, and Stefan Rotter. "Higher-Order Statistics of Input Ensembles and the Response of Simple Model Neurons." Neural Computation 15, no. 1 (January 1, 2003): 67–101. http://dx.doi.org/10.1162/089976603321043702.

Abstract:
Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on the higher-order statistics of the input ensemble. We constructed two statistical models of populations of spiking neurons that fired with the same rates and had identical pairwise correlations, but differed with regard to the higher-order interactions within the population. The first ensemble was characterized by clusters of spikes synchronized over the whole population. In the second ensemble, the size of spike clusters was, on average, proportional to the pairwise correlation. For both input models, we assessed the role of the size of the population, the firing rate, and the pairwise correlation on the output rate of two simple model neurons: a continuous firing-rate model and a conductance-based leaky integrate-and-fire neuron. An approximation to the mean output rate of the firing-rate neuron could be derived analytically with the help of shot noise theory. Interestingly, the essential features of the mean response of the two neuron models were similar. For both neuron models, the three input parameters played radically different roles with respect to the postsynaptic firing rate, depending on the interaction structure of the input. For instance, in the case of an ensemble with small and distributed spike clusters, the output firing rate was efficiently controlled by the size of the input population. In addition to the interaction structure, the ratio of inhibition to excitation was found to strongly modulate the effect of correlation on the postsynaptic firing rate.
32

Genc, Baris, Oge Gozutok, and P. Hande Ozdinler. "Complexity of Generating Mouse Models to Study the Upper Motor Neurons: Let Us Shift Focus from Mice to Neurons." International Journal of Molecular Sciences 20, no. 16 (August 7, 2019): 3848. http://dx.doi.org/10.3390/ijms20163848.

Abstract:
Motor neuron circuitry is one of the most elaborate circuitries in our body, which ensures voluntary and skilled movement that requires cognitive input. Therefore, both the cortex and the spinal cord are involved. The cortex has special importance for motor neuron diseases, in which initiation and modulation of voluntary movement is affected. Amyotrophic lateral sclerosis (ALS) is defined by the progressive degeneration of both the upper and lower motor neurons, whereas hereditary spastic paraplegia (HSP) and primary lateral sclerosis (PLS) are characterized mainly by the loss of upper motor neurons. In an effort to reveal the cellular and molecular basis of neuronal degeneration, numerous model systems are generated, and mouse models are no exception. However, there are many different levels of complexities that need to be considered when developing mouse models. Here, we focus our attention to the upper motor neurons, which are one of the most challenging neuron populations to study. Since mice and human differ greatly at a species level, but the cells/neurons in mice and human share many common aspects of cell biology, we offer a solution by focusing our attention to the affected neurons to reveal the complexities of diseases at a cellular level and to improve translational efforts.
33

Kumar, Gautam, and Mayuresh V. Kothare. "On the Continuous Differentiability of Inter-Spike Intervals of Synaptically Connected Cortical Spiking Neurons in a Neuronal Network." Neural Computation 25, no. 12 (December 2013): 3183–206. http://dx.doi.org/10.1162/neco_a_00503.

Abstract:
We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.
34

Wyatt, Tanya J., Sharyn L. Rossi, Monica M. Siegenthaler, Jennifer Frame, Rockelle Robles, Gabriel Nistor, and Hans S. Keirstead. "Human Motor Neuron Progenitor Transplantation Leads to Endogenous Neuronal Sparing in 3 Models of Motor Neuron Loss." Stem Cells International 2011 (2011): 1–11. http://dx.doi.org/10.4061/2011/207230.

Abstract:
Motor neuron loss is characteristic of many neurodegenerative disorders and results in rapid loss of muscle control, paralysis, and eventual death in severe cases. In order to investigate the neurotrophic effects of a motor neuron lineage graft, we transplanted human embryonic stem cell-derived motor neuron progenitors (hMNPs) and examined their histopathological effect in three animal models of motor neuron loss. Specifically, we transplanted hMNPs into rodent models of SMA (Δ7SMN), ALS (SOD1 G93A), and spinal cord injury (SCI). The transplanted cells survived and differentiated in all models. In addition, we have also found that hMNPs secrete physiologically active growth factors in vivo, including NGF and NT-3, which significantly enhanced the number of spared endogenous neurons in all three animal models. The ability to maintain dying motor neurons by delivering motor neuron-specific neurotrophic support represents a powerful treatment strategy for diseases characterized by motor neuron loss.
APA, Harvard, Vancouver, ISO, and other styles
35

Guo, Liang, Shuai Zhang, Jiankang Wu, Xinyu Gao, Mingkang Zhao, and Guizhi Xu. "Theoretical Analysis of Coupled Modified Hindmarsh-rose Model Under Transcranial Magnetic-acoustic Electrical Stimulation." International Journal of Circuits, Systems and Signal Processing 16 (January 15, 2022): 610–17. http://dx.doi.org/10.46300/9106.2022.16.76.

Full text
Abstract:
Transcranial magnetic-acoustic electrical stimulation (TMAES) is a new technology that combines ultrasonic waves with a static magnetic field to generate an electric current in nerve tissue and thereby modulate neuronal firing. Existing neuron models simulate only single neurons, and there are few studies of coupled-neuron models under TMAES. Most neurons in the cerebral cortex are not isolated but coupled to one another, so it is necessary to study information transmission between coupled neurons. Neurons are coupled through both electrical and chemical synapses; a neuron model that ignores chemical synapses is therefore not comprehensive. Here, we modified the Hindmarsh-Rose (HR) model to simulate the smallest nervous system: two neurons coupled by electrical and chemical synapses under TMAES. Environmental variables describing the synaptic coupling between the two neurons and the nonlinearity of the nervous system are also taken into account. The firing behavior of this system can be modulated by changing the stimulation intensity or the modulation frequency. The results show that, within a certain range of parameters, altering the modulation frequency and stimulation intensity increases the discharge frequency of the coupled neurons, modulates neuronal excitability, shortens the response time of chemical postsynaptic neurons, and accelerates information transfer. Moreover, the discharge frequency of the neurons is selective for the stimulus parameters. These results demonstrate a possible theoretical mechanism by which TMAES regulates neuronal firing-frequency characteristics. The study lays a foundation for large-scale neural network modeling and can serve as a theoretical basis for experimental and clinical application of TMAES.
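To make the kind of model described above concrete, here is a minimal sketch of two Hindmarsh-Rose neurons coupled by both an electrical (gap-junction) and a chemical (sigmoidal) synapse and driven by a sinusoidally modulated current standing in for the TMAES-induced current. The parameter values, the coupling form, and the drive are generic textbook choices, not the paper's modified model or stimulation waveform.

```python
import numpy as np

# Classic Hindmarsh-Rose parameters (textbook values, not the paper's modified model)
a, b, c, d, r, s, x_rest = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
g_elec, g_chem = 0.05, 0.1               # electrical / chemical coupling strengths (assumed)
V_syn, lam, theta_s = 2.0, 10.0, -0.25   # chemical synapse reversal, slope, threshold (assumed)

def hr_coupled_rhs(state, t, I_drive):
    """Right-hand side for two HR neurons coupled electrically and chemically."""
    x1, y1, z1, x2, y2, z2 = state
    gamma1 = 1.0 / (1.0 + np.exp(-lam * (x1 - theta_s)))  # presynaptic activation of neuron 1
    gamma2 = 1.0 / (1.0 + np.exp(-lam * (x2 - theta_s)))  # presynaptic activation of neuron 2
    I = I_drive(t)
    dx1 = y1 + b*x1**2 - a*x1**3 - z1 + I \
          + g_elec*(x2 - x1) - g_chem*(x1 - V_syn)*gamma2
    dy1 = c - d*x1**2 - y1
    dz1 = r*(s*(x1 - x_rest) - z1)
    dx2 = y2 + b*x2**2 - a*x2**3 - z2 + I \
          + g_elec*(x1 - x2) - g_chem*(x2 - V_syn)*gamma1
    dy2 = c - d*x2**2 - y2
    dz2 = r*(s*(x2 - x_rest) - z2)
    return np.array([dx1, dy1, dz1, dx2, dy2, dz2])

def tmaes_like_drive(t, amplitude=1.5, mod_freq=0.01):
    """Stand-in for the TMAES-induced current: an amplitude-modulated sinusoid (assumption)."""
    return amplitude * (1.0 + np.sin(2*np.pi*mod_freq*t)) / 2.0

# Simple fixed-step RK4 integration
dt, T = 0.01, 2000.0
ts = np.arange(0.0, T, dt)
state = np.array([-1.0, 0.0, 0.0, -1.2, 0.1, 0.0])
trace = np.empty((ts.size, 6))
for i, t in enumerate(ts):
    trace[i] = state
    k1 = hr_coupled_rhs(state, t, tmaes_like_drive)
    k2 = hr_coupled_rhs(state + 0.5*dt*k1, t + 0.5*dt, tmaes_like_drive)
    k3 = hr_coupled_rhs(state + 0.5*dt*k2, t + 0.5*dt, tmaes_like_drive)
    k4 = hr_coupled_rhs(state + dt*k3, t + dt, tmaes_like_drive)
    state = state + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# trace[:, 0] and trace[:, 3] hold the two membrane potentials x1(t), x2(t)
```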
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Zhiping, and Lee J. Martin. "Isolation of Mature Spinal Motor Neurons and Single-cell Analysis Using the Comet Assay of Early Low-level DNA Damage Induced In Vitro and In Vivo." Journal of Histochemistry & Cytochemistry 49, no. 8 (August 2001): 957–72. http://dx.doi.org/10.1177/002215540104900804.

Full text
Abstract:
We developed an isolation technique for motor neurons from adult rat spinal cord. Spinal cord enlargements were discretely microdissected into ventral horn tissue columns that were trypsin-digested and subjected to differential low-speed centrifugation to fractionate ventral horn cell types. A fraction enriched in α-motor neurons was isolated. Motor neuron enrichment was verified by immunofluorescence for choline acetyltransferase and prelabeling axon projections to skeletal muscle. Adult motor neurons were isolated from naïve rats and were exposed to oxidative agents or were isolated from rats with sciatic nerve lesions (avulsions). We tested the hypothesis, using single-cell gel electrophoresis (comet assay), that hydrogen peroxide, nitric oxide, and peroxynitrite exposure in vitro and axotomy in vivo induce DNA damage in adult motor neurons early during their degeneration. This study contributes three important developments in the study of motor neurons. It demonstrates that mature spinal motor neurons can be isolated and used for in vitro models of motor neuron degeneration. It shows that adult motor neurons can be isolated from in vivo models of motor neuron degeneration and evaluated on a single-cell basis. This study also demonstrates that the comet assay is a feasible method for measuring DNA damage in individual motor neurons. Using these methods, we conclude that motor neurons undergoing oxidative stress from reactive oxygen species and axotomy accumulate DNA damage early in their degeneration. (J Histochem Cytochem 49:957–972, 2001)
APA, Harvard, Vancouver, ISO, and other styles
37

Tamai, So-ichi, Keisuke Imaizumi, Nobuhiro Kurabayashi, Minh Dang Nguyen, Takaya Abe, Masatoshi Inoue, Yoshitaka Fukada, and Kamon Sanada. "Neuroprotective Role of the Basic Leucine Zipper Transcription Factor NFIL3 in Models of Amyotrophic Lateral Sclerosis." Journal of Biological Chemistry 289, no. 3 (November 26, 2013): 1629–38. http://dx.doi.org/10.1074/jbc.m113.524389.

Full text
Abstract:
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease characterized by the loss of motor neurons. Here we show that the basic leucine zipper transcription factor NFIL3 (also called E4BP4) confers neuroprotection in models of ALS. NFIL3 is up-regulated in primary neurons challenged with neurotoxic insults and in a mouse model of ALS. Overexpression of NFIL3 attenuates excitotoxic neuronal damage and protects neurons against neurodegeneration in a cell-based ALS model. Conversely, reduction of NFIL3 exacerbates neuronal demise in adverse conditions. Transgenic neuronal expression of NFIL3 in ALS mice delays disease onset and attenuates motor axon and neuron degeneration. These results suggest that NFIL3 plays a neuroprotective role in neurons and constitutes a potential therapeutic target for neurodegeneration.
APA, Harvard, Vancouver, ISO, and other styles
38

Wirths, Oliver, and Silvia Zampar. "Neuron Loss in Alzheimer’s Disease: Translation in Transgenic Mouse Models." International Journal of Molecular Sciences 21, no. 21 (October 30, 2020): 8144. http://dx.doi.org/10.3390/ijms21218144.

Full text
Abstract:
Transgenic mouse models represent an essential tool for the exploration of Alzheimer’s disease (AD) pathological mechanisms and the development of novel treatments, which at present provide only symptomatic and transient effects. While a variety of mouse models successfully reflects the main neuropathological hallmarks of AD, such as extracellular amyloid-β (Aβ) deposits, intracellular accumulation of Tau protein, the development of micro- and astrogliosis, as well as behavioral deficits, substantial neuron loss, as a key feature of the disease, seems to be more difficult to achieve. In this review, we summarize information on classic and more recent transgenic mouse models for AD, focusing in particular on loss of pyramidal, inter-, and cholinergic neurons. Although the cause of neuron loss in AD is still a matter of scientific debate, it seems to be linked to intraneuronal Aβ accumulation in several transgenic mouse models, especially in pyramidal neurons.
APA, Harvard, Vancouver, ISO, and other styles
39

Naudin, Loïs, Juan Luis Jiménez Laredo, Qiang Liu, and Nathalie Corson. "Systematic generation of biophysically detailed models with generalization capability for non-spiking neurons." PLOS ONE 17, no. 5 (May 13, 2022): e0268380. http://dx.doi.org/10.1371/journal.pone.0268380.

Full text
Abstract:
Unlike spiking neurons, which compress continuous inputs into digital signals for transmitting information via action potentials, non-spiking neurons modulate analog signals through graded potential responses. Such neurons have been found in a large variety of nervous tissues in both vertebrate and invertebrate species and have been shown to play a central role in neuronal information processing. While general and sustained efforts have been made for many years to model spiking neurons using conductance-based models (CBMs), very few methods have been developed for non-spiking neurons. When a CBM is built to characterize a neuron's behavior, it should be endowed with generalization capability, i.e. the ability to predict acceptable neuronal responses to novel stimuli not used during model building. Yet, because CBMs contain a large number of parameters, they typically may lack such a capability. In this paper, we propose a new systematic approach based on multi-objective optimization that builds general non-spiking models with generalization capability. The proposed approach requires only macroscopic experimental data, from which all model parameters are determined simultaneously and without compromise. The approach is applied to three non-spiking neurons of the nematode Caenorhabditis elegans (C. elegans), a well-known model organism in neuroscience that predominantly transmits information through non-spiking signals. These three neurons, labeled by convention as RIM, AIY and AFD, represent, to date, the three known forms of non-spiking neuronal response in C. elegans.
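The core of such a multi-objective approach is that every candidate parameter set receives a vector of errors (one per stimulus or recording protocol) and candidates are compared by Pareto dominance rather than by a single weighted sum. The sketch below illustrates only that comparison step with a toy surrogate model; the conductance-based equations, objectives, and selection machinery of the paper are not reproduced.

```python
import numpy as np

# Placeholder model and data: a toy graded response v(t) = A*(1 - exp(-t/tau)),
# standing in for a full conductance-based simulation (an assumption for illustration).
T = np.linspace(0.0, 1.0, 200)

def simulate(params, stim_amplitude):
    A, tau = params
    return stim_amplitude * A * (1.0 - np.exp(-T / tau))

def recorded(stim_amplitude, true_params=(1.2, 0.15), noise=0.01):
    rng = np.random.default_rng(0)
    return simulate(true_params, stim_amplitude) + noise * rng.standard_normal(T.size)

def objective_vector(params, stim_amplitudes):
    """One mean-squared error per stimulus: the multi-objective score of a candidate."""
    return np.array([np.mean((simulate(params, s) - recorded(s)) ** 2)
                     for s in stim_amplitudes])

def dominates(f_a, f_b):
    """Pareto dominance: no objective worse, at least one strictly better."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def pareto_front(candidates, stim_amplitudes):
    """Keep only the non-dominated parameter sets of a candidate population."""
    scores = [objective_vector(c, stim_amplitudes) for c in candidates]
    return [c for i, c in enumerate(candidates)
            if not any(dominates(scores[j], scores[i])
                       for j in range(len(candidates)) if j != i)]

# Example: score a small random population against three stimulus amplitudes
rng = np.random.default_rng(1)
population = [rng.uniform([0.5, 0.05], [2.0, 0.5]) for _ in range(50)]
front = pareto_front(population, stim_amplitudes=[0.5, 1.0, 2.0])
print(f"{len(front)} non-dominated candidates out of {len(population)}")
```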
APA, Harvard, Vancouver, ISO, and other styles
40

Xia, Ji, Tyler D. Marks, Michael J. Goard, and Ralf Wessel. "Diverse coactive neurons encode stimulus-driven and stimulus-independent variables." Journal of Neurophysiology 124, no. 5 (November 1, 2020): 1505–17. http://dx.doi.org/10.1152/jn.00431.2020.

Full text
Abstract:
V1 neurons covary largely independently of each individual neuron's response reliability, and a single neuron's response reliability imposes only a weak constraint on its encoding capabilities. The visual stimulus instructs both a neuron's reliability and its coactivation pattern. Network models with clustered external inputs reproduced these observations.
APA, Harvard, Vancouver, ISO, and other styles
41

Bandyopadhyay, Sharba, Lina A. J. Reiss, and Eric D. Young. "Receptive Field for Dorsal Cochlear Nucleus Neurons at Multiple Sound Levels." Journal of Neurophysiology 98, no. 6 (December 2007): 3505–15. http://dx.doi.org/10.1152/jn.00539.2007.

Full text
Abstract:
Neurons in the dorsal cochlear nucleus (DCN) exhibit nonlinearities in spectral processing, which make it difficult to predict the neurons’ responses to stimuli. Here, we consider two possible sources of nonlinearity: nonmonotonic responses as sound level increases due to inhibition and interactions between frequency components. A spectral weighting function model of rate responses is used; the model approximates the neuron's rate response as a weighted sum of the frequency components of the stimulus plus a second-order sum that captures interactions between frequencies. Such models approximate DCN neurons well at low spectral contrast, i.e., when the SD (contrast) of the stimulus spectrum is limited to 3 dB. This model is compared with a first-order sum with weights that are explicit functions of sound level, so that the low-contrast model is extended to spectral contrasts of 12 dB, the range of natural stimuli. The sound-level–dependent weights improve prediction performance at large spectral contrast. However, the interactions between frequencies, represented as second-order terms, are more important at low spectral contrast. The level-dependent model is shown to predict previously described patterns of responses to spectral edges, showing that small changes in the inhibitory components of the receptive field can produce large changes in the responses of the neuron to features of natural stimuli. These results provide an effective way of characterizing nonlinear auditory neurons incorporating stimulus-dependent sensitivity changes. Such models could be used for neurons in other sensory systems that show similar effects.
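The weighting-function model described above can be written compactly; the notation below is ours (r: firing rate; s_i: level of the i-th frequency component; w_i, w_ij: first- and second-order weights; L: overall sound level):

```latex
% Our notation: r = firing rate, s_i = level of the i-th frequency component
% (in dB re the mean spectrum), w_i = first-order weights, w_{ij} = second-order
% (interaction) weights, L = overall sound level.
\begin{align*}
  \text{low-contrast model:}\quad
    & r(\mathbf{s}) \approx r_0 + \sum_i w_i\, s_i
        + \sum_i \sum_{j \le i} w_{ij}\, s_i s_j, \\
  \text{level-dependent model:}\quad
    & r(\mathbf{s}, L) \approx r_0(L) + \sum_i w_i(L)\, s_i .
\end{align*}
```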
APA, Harvard, Vancouver, ISO, and other styles
42

Weissenberger, Felix, Marcelo Matheus Gauy, Xun Zou, and Angelika Steger. "Mutual Inhibition with Few Inhibitory Cells via Nonlinear Inhibitory Synaptic Interaction." Neural Computation 31, no. 11 (November 2019): 2252–65. http://dx.doi.org/10.1162/neco_a_01230.

Full text
Abstract:
In computational neural network models, neurons are usually allowed to excite some and inhibit other neurons, depending on the weight of their synaptic connections. The traditional way to transform such networks into networks that obey Dale's law (i.e., a neuron can either excite or inhibit) is to accompany each excitatory neuron with an inhibitory one through which inhibitory signals are mediated. However, this requires an equal number of excitatory and inhibitory neurons, whereas a realistic number of inhibitory neurons is much smaller. In this letter, we propose a model of nonlinear interaction of inhibitory synapses on dendritic compartments of excitatory neurons that allows the excitatory neurons to mediate inhibitory signals through a subset of the inhibitory population. With this construction, the number of required inhibitory neurons can be reduced tremendously.
APA, Harvard, Vancouver, ISO, and other styles
43

Ruzek, Martin. "ARTIFICIAL NEURAL NETWORK FOR MODELS OF HUMAN OPERATOR." Acta Polytechnica CTU Proceedings 12 (December 15, 2017): 99. http://dx.doi.org/10.14311/app.2017.12.0099.

Full text
Abstract:
This paper presents a new approach to modeling mental functions with artificial neural networks. Artificial neural networks seem to be a promising method for modeling a human operator because the architecture of the ANN is directly inspired by the biological neuron. On the other hand, the classical paradigms of artificial neural networks are not suitable because they oversimplify the real processes in biological neural networks. The search for a compromise between the complexity of biological neural networks and the practical feasibility of artificial networks led to a new learning algorithm. This algorithm is based on the classical multilayered neural network; however, the learning rule is different. The neurons update their parameters in a way that resembles real biological processes. The basic idea is that the neurons compete for resources, and the criterion deciding which neuron survives is its usefulness to the whole network. A neuron does not use a "teacher" or any kind of superior system; it receives only the information that is present in the biological system. The learning process can be seen as a search for an equilibrium point corresponding to a state of maximal importance of the neuron for the network, and this point can shift if the environment changes. The name of this type of learning, the homeostatic artificial neural network, originates from this idea, since the process resembles the homeostasis found in any living cell. The simulation results suggest that this type of learning can also be useful in other artificial learning and recognition tasks.
APA, Harvard, Vancouver, ISO, and other styles
44

Fiorenzano, Alessandro, Edoardo Sozzi, Malin Parmar, and Petter Storm. "Dopamine Neuron Diversity: Recent Advances and Current Challenges in Human Stem Cell Models and Single Cell Sequencing." Cells 10, no. 6 (June 1, 2021): 1366. http://dx.doi.org/10.3390/cells10061366.

Full text
Abstract:
Human midbrain dopamine (DA) neurons are a heterogeneous group of cells that share a common neurotransmitter phenotype and are in close anatomical proximity but display different functions, sensitivity to degeneration, and axonal innervation targets. The A9 DA neuron subtype controls motor function and is primarily degenerated in Parkinson’s disease (PD), whereas A10 neurons are largely unaffected by the condition, and their dysfunction is associated with neuropsychiatric disorders. Currently, DA neurons can only be reliably classified on the basis of topographical features, including anatomical location in the midbrain and projection targets in the forebrain. No systematic molecular classification at the genome-wide level has been proposed to date. Although many years of scientific efforts in embryonic and adult mouse brain have positioned us to better understand the complexity of DA neuron biology, many biological phenomena specific to humans are not amenable to being reproduced in animal models. The establishment of human cell-based systems combined with advanced computational single-cell transcriptomics holds great promise for decoding the mechanisms underlying maturation and diversification of human DA neurons, and linking their molecular heterogeneity to functions in the midbrain. Human pluripotent stem cells have emerged as a useful tool to recapitulate key molecular features of mature DA neuron subtypes. Here, we review some of the most recent advances and discuss the current challenges in using stem cells to model human DA biology. We also describe how single cell RNA sequencing may provide key insights into the molecular programs driving DA progenitor specification into mature DA neuron subtypes. Exploiting these state-of-the-art approaches will lead to a better understanding of stem cell-derived DA neurons and their use in disease modeling and regenerative medicine.
APA, Harvard, Vancouver, ISO, and other styles
45

Mato, Germán, and Inés Samengo. "Type I and Type II Neuron Models Are Selectively Driven by Differential Stimulus Features." Neural Computation 20, no. 10 (October 2008): 2418–40. http://dx.doi.org/10.1162/neco.2008.10-07-632.

Full text
Abstract:
Neurons in the nervous system exhibit an outstanding variety of morphological and physiological properties. However, close to threshold, this remarkable richness may be grouped succinctly into two basic types of excitability, often referred to as type I and type II. The dynamical traits of these two neuron types have been extensively characterized. It would be interesting, however, to understand the information-processing consequences of their dynamical properties. To that end, here we determine the differences between the stimulus features inducing firing in type I and type II neurons. We work with both realistic conductance-based models and minimal normal forms. We conclude that type I neurons fire in response to scale-free depolarizing stimuli. Type II neurons, instead, are most efficiently driven by input stimuli containing both depolarizing and hyperpolarizing phases, with significant power in the frequency band corresponding to the intrinsic frequencies of the cell.
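For orientation, the two excitability classes are conventionally associated with different bifurcations and canonical minimal models; a common pairing, written in our notation, is the theta neuron (saddle-node on an invariant circle, type I) and the Hopf normal form (type II). The specific conductance-based models and normal forms used in the paper are not reproduced here.

```latex
% A common pairing of canonical models, in our notation; in the Hopf normal
% form the minus sign gives the supercritical case, the plus sign the
% subcritical case usually linked to type II excitability.
\begin{align*}
  \text{Type I (SNIC): theta neuron}\quad
    & \dot{\theta} = (1 - \cos\theta) + (1 + \cos\theta)\, I(t),
      \quad \text{spike at } \theta = \pi, \\
  \text{Type II (Hopf): normal form}\quad
    & \dot{z} = (\mu + i\omega)\, z \pm z\,|z|^{2} + I(t),
      \quad z = x + i y .
\end{align*}
```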
APA, Harvard, Vancouver, ISO, and other styles
46

Antunes, Gabriela, Samuel F. Faria da Silva, and Fabio M. Simoes de Souza. "Mirror Neurons Modeled Through Spike-Timing-Dependent Plasticity are Affected by Channelopathies Associated with Autism Spectrum Disorder." International Journal of Neural Systems 28, no. 05 (April 19, 2018): 1750058. http://dx.doi.org/10.1142/s0129065717500587.

Full text
Abstract:
Mirror neurons fire action potentials both when the agent performs a certain behavior and when it watches someone performing a similar action. Here, we present an original mirror neuron model based on spike-timing-dependent plasticity (STDP) between two morpho-electrical models of neocortical pyramidal neurons. Both neurons fired spontaneously with a basal firing rate that followed a Poisson distribution, and the STDP between them was modeled by the triplet algorithm. Our simulation results demonstrated that STDP is sufficient for the emergence of mirror neuron function between the pairs of neocortical neurons. This is a proof of concept that pairs of neocortical neurons associating sensory inputs to motor outputs could operate like mirror neurons. In addition, we used the mirror neuron model to investigate whether channelopathies associated with autism spectrum disorder could impair the modeled mirror function. Our simulation results showed that impaired hyperpolarization-activated cationic currents (Ih) affected the mirror function between the pairs of neocortical neurons coupled by STDP.
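The triplet STDP rule mentioned above (in the style of Pfister and Gerstner) keeps two exponentially decaying spike traces on each side of the synapse and updates the weight at every pre- and postsynaptic spike. The sketch below is a generic discrete-time implementation with placeholder parameter values; it is not the authors' morpho-electrical model, only the form of plasticity rule that couples the two cells.

```python
import numpy as np

# Triplet-STDP parameters (generic placeholder values, not fitted ones)
tau_plus, tau_x = 16.8, 101.0     # decay (ms) of presynaptic traces r1, r2
tau_minus, tau_y = 33.7, 125.0    # decay (ms) of postsynaptic traces o1, o2
A2p, A3p = 5e-3, 6e-3             # potentiation amplitudes (pair, triplet)
A2m, A3m = 7e-3, 2e-4             # depression amplitudes (pair, triplet)

def run_triplet_stdp(pre_spikes, post_spikes, dt=0.1, T=1000.0, w0=0.5):
    """Evolve one synaptic weight under the triplet rule, given spike-time lists (ms)."""
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int))
    post = set(np.round(np.asarray(post_spikes) / dt).astype(int))
    r1 = r2 = o1 = o2 = 0.0
    w = w0
    for step in range(int(T / dt)):
        # exponential decay of all traces
        r1 -= dt * r1 / tau_plus
        r2 -= dt * r2 / tau_x
        o1 -= dt * o1 / tau_minus
        o2 -= dt * o2 / tau_y
        if step in pre:            # presynaptic spike: depression, uses o1 and the "old" r2
            w -= o1 * (A2m + A3m * r2)
            r1 += 1.0
            r2 += 1.0
        if step in post:           # postsynaptic spike: potentiation, uses r1 and the "old" o2
            w += r1 * (A2p + A3p * o2)
            o1 += 1.0
            o2 += 1.0
        w = min(max(w, 0.0), 1.0)  # keep the weight bounded
    return w

# Example: spontaneous Poisson firing in both cells, as in the abstract
rng = np.random.default_rng(0)
rate, T = 10.0, 1000.0            # Hz, ms
pre = np.sort(rng.uniform(0, T, rng.poisson(rate * T / 1000.0)))
post = np.sort(rng.uniform(0, T, rng.poisson(rate * T / 1000.0)))
print("final weight:", run_triplet_stdp(pre, post, T=T))
```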
APA, Harvard, Vancouver, ISO, and other styles
47

XU, JIAN-XIN, and XIN DENG. "STUDY ON CHEMOTAXIS BEHAVIORS OF C. ELEGANS USING DYNAMIC NEURAL NETWORK MODELS: FROM ARTIFICIAL TO BIOLOGICAL MODEL." Journal of Biological Systems 18, spec01 (October 2010): 3–33. http://dx.doi.org/10.1142/s0218339010003597.

Full text
Abstract:
Building on the anatomical understanding of the neural connections of the nematode Caenorhabditis elegans (C. elegans), this paper investigates its chemotaxis behaviors through their association with the biological nerve connections. The chemotaxis behaviors include food attraction, toxin avoidance, and mixed behaviors (finding food while avoiding toxin). Eight dynamic neural network (DNN) models, two artificial and six biological, are used to learn and implement the chemotaxis behaviors of C. elegans. The eight DNN models fall into two classes, with either a single sensory neuron or dual sensory neurons, and are trained to learn the switching logics corresponding to the different chemotaxis behaviors using the real-time recurrent learning (RTRL) algorithm. First, we show the good performance of the two artificial models in food attraction, toxin avoidance, and the mixed behaviors. Next, six neural wiring diagrams from sensory neurons to motor neurons are extracted from the anatomical nerve connections of C. elegans. The extracted biological wiring diagrams are then trained directly with RTRL, the first time in this field of research that chemotaxis behaviors have been associated with biological neural models. An interesting discovery is the need for a memory neuron when single-sensory models are used, which is consistent with the anatomical identification of a specific neuron that functions as a memory. In the simulations, the chemotaxis behaviors of C. elegans can be captured by several switching logical functions that can be learned by RTRL for both artificial and biological models.
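Real-time recurrent learning (RTRL), the training algorithm named above, carries the sensitivity of the hidden state with respect to every weight forward in time alongside the network itself. Below is a minimal generic RTRL loop for a small vanilla recurrent network on a toy target; the paper's DNN architectures and sensory-to-motor wiring diagrams are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 8, 1, 1                        # hidden, input, output sizes (arbitrary)
W = 0.1 * rng.standard_normal((n, n))    # recurrent weights
U = 0.1 * rng.standard_normal((n, m))    # input weights
V = 0.1 * rng.standard_normal((p, n))    # readout weights
b = np.zeros(n)
lr = 0.01

# RTRL sensitivities: P_W[i,k,l] = dh_i/dW_kl, P_U[i,k,l] = dh_i/dU_kl, P_b[i,k] = dh_i/db_k
P_W = np.zeros((n, n, n))
P_U = np.zeros((n, n, m))
P_b = np.zeros((n, n))
h = np.zeros(n)
I_n = np.eye(n)

for t in range(5000):
    x = np.array([np.cos(0.1 * t)])      # toy input
    d = np.array([np.sin(0.1 * t)])      # toy target (quarter-cycle shifted)

    h_prev = h
    h = np.tanh(W @ h_prev + U @ x + b)
    D = 1.0 - h**2                        # derivative of tanh at the pre-activation

    # Forward-propagate the sensitivities (the RTRL recursions)
    P_W = D[:, None, None] * (np.einsum('ik,l->ikl', I_n, h_prev)
                              + np.einsum('ij,jkl->ikl', W, P_W))
    P_U = D[:, None, None] * (np.einsum('ik,l->ikl', I_n, x)
                              + np.einsum('ij,jkl->ikl', W, P_U))
    P_b = D[:, None] * (I_n + W @ P_b)

    # Instantaneous squared error and its gradient with respect to the hidden state
    y = V @ h
    err = y - d
    dL_dh = V.T @ err

    # Online gradient step on every parameter
    W -= lr * np.einsum('i,ikl->kl', dL_dh, P_W)
    U -= lr * np.einsum('i,ikl->kl', dL_dh, P_U)
    b -= lr * (P_b.T @ dL_dh)
    V -= lr * np.outer(err, h)

    if t % 1000 == 0:
        print(f"t={t:5d}  loss={0.5 * float(err @ err):.5f}")
```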
APA, Harvard, Vancouver, ISO, and other styles
48

Abarbanel, Henry D. I., R. Huerta, M. I. Rabinovich, N. F. Rulkov, P. F. Rowat, and A. I. Selverston. "Synchronized Action of Synaptically Coupled Chaotic Model Neurons." Neural Computation 8, no. 8 (November 1996): 1567–602. http://dx.doi.org/10.1162/neco.1996.8.8.1567.

Full text
Abstract:
Experimental observations of the intracellular recorded electrical activity in individual neurons show that the temporal behavior is often chaotic. We discuss both our own observations on a cell from the stomatogastric central pattern generator of lobster and earlier observations in other cells. In this paper we work with models of chaotic neurons, building on models by Hindmarsh and Rose for bursting, spiking activity in neurons. The key feature of these simplified models of neurons is the presence of coupled slow and fast subsystems. We analyze the model neurons using the same tools employed in the analysis of our experimental data. We couple two model neurons both electrotonically and electrochemically in inhibitory and excitatory fashions. In each of these cases, we demonstrate that the model neurons can synchronize in phase and out of phase depending on the strength of the coupling. For normal synaptic coupling, we have a time delay between the action of one neuron and the response of the other. We also analyze how the synchronization depends on this delay. A rich spectrum of synchronized behaviors is possible for electrically coupled neurons and for inhibitory coupling between neurons. In synchronous neurons one typically sees chaotic motion of the coupled neurons. Excitatory coupling produces essentially periodic voltage trajectories, which are also synchronized. We display and discuss these synchronized behaviors using two “distance” measures of the synchronization.
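One simple way to quantify a "distance" between the membrane-potential trajectories of two coupled model neurons, in the spirit of the synchronization measures discussed above, is a normalized root-mean-square difference, optionally minimized over a time lag to detect out-of-phase synchrony. This is a generic measure chosen for illustration, not necessarily either of the two measures used in the paper.

```python
import numpy as np

def sync_distance(x1, x2):
    """Normalized RMS distance between two voltage traces; 0 means identical trajectories."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    return np.sqrt(np.mean((x1 - x2) ** 2)) / np.sqrt(np.mean(x1**2) + np.mean(x2**2))

def lagged_sync_distance(x1, x2, max_lag):
    """Minimum of the distance over time lags, to detect out-of-phase (shifted) synchrony."""
    best = np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            d = sync_distance(x1[lag:], x2[:len(x2) - lag])
        else:
            d = sync_distance(x1[:len(x1) + lag], x2[-lag:])
        best = min(best, d)
    return best

# Example with two noisy, phase-shifted oscillations standing in for voltage traces
t = np.linspace(0, 100, 5000)
v1 = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
v2 = np.sin(t - 0.5) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
print(sync_distance(v1, v2), lagged_sync_distance(v1, v2, max_lag=100))
```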
APA, Harvard, Vancouver, ISO, and other styles
49

Almog, Mara, and Alon Korngreen. "Is realistic neuronal modeling realistic?" Journal of Neurophysiology 116, no. 5 (November 1, 2016): 2180–209. http://dx.doi.org/10.1152/jn.00360.2016.

Full text
Abstract:
Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, with the 21st century has come a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Thus researchers have asked what the computational abilities of single neurons are and have attempted to answer this question using realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models.
APA, Harvard, Vancouver, ISO, and other styles
50

Alvarellos-González, Alberto, Alejandro Pazos, and Ana B. Porto-Pazos. "Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/476324.

Full text
Abstract:
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes on neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward solving classification problems. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.
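As a rough illustration of astrocyte-induced synaptic potentiation of the kind referred to above, the sketch below attaches a counter-based "astrocyte" to each unit of a small layer: if a unit is strongly active on several consecutive presentations, its incoming weights are transiently boosted. The activation threshold, window length, and boost factor are invented for illustration and do not correspond to the specific neuron-glia algorithms evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuronGliaLayer:
    """Feed-forward layer whose units each have a counter-based 'astrocyte'.

    If a unit's activation exceeds `act_thresh` on `window` consecutive inputs,
    its incoming weights are multiplied by `boost` (all values are illustrative).
    """

    def __init__(self, n_in, n_out, act_thresh=0.8, window=3, boost=1.25):
        self.W = 0.5 * rng.standard_normal((n_out, n_in))
        self.act_thresh, self.window, self.boost = act_thresh, window, boost
        self.counters = np.zeros(n_out, dtype=int)

    def forward(self, x):
        a = 1.0 / (1.0 + np.exp(-self.W @ x))            # sigmoid activations
        active = a > self.act_thresh
        self.counters = np.where(active, self.counters + 1, 0)
        fired = self.counters >= self.window             # astrocyte "activates"
        self.W[fired] *= self.boost                      # potentiate incoming synapses
        self.counters[fired] = 0
        return a

# Example: feed a stream of random binary patterns through the layer
layer = NeuronGliaLayer(n_in=10, n_out=4)
for _ in range(20):
    out = layer.forward(rng.integers(0, 2, size=10).astype(float))
print("final weight norms per unit:", np.linalg.norm(layer.W, axis=1))
```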
APA, Harvard, Vancouver, ISO, and other styles