Dissertations on the topic "Computational Neuroscience"

To see the other types of publications on this topic, follow the link: Computational Neuroscience.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Review the top 50 dissertations for your research on the topic "Computational Neuroscience".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Higgins, Irina. "Computational neuroscience of speech recognition." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:daa8d096-6534-4174-b63e-cc4161291c90.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Physical variability of speech combined with its perceptual constancy makes speech recognition a challenging task. The human auditory brain, however, is able to perform speech recognition effortlessly. This thesis aims to understand the precise computational mechanisms that allow the auditory brain to do so. In particular, we look for the minimal subset of sub-cortical auditory brain areas that allow the primary auditory cortex to learn 'good representations' of speech-like auditory objects through spike-timing dependent plasticity (STDP) learning mechanisms as described by Bi & Poo (1998). A 'good representation' is defined as that which is informative of the stimulus class regardless of the variability in the raw input, while being less redundant and more compressed than the representations within the auditory nerve, which provides the firing inputs to the rest of the auditory brain hierarchy (Barlow 1961). Neurophysiological studies have provided insights into the architecture and response properties of different areas within the auditory brain hierarchy. We use these insights to guide the development of an unsupervised spiking neural network grounded in the neurophysiology of the auditory brain and equipped with spike-timing dependent plasticity (STDP) learning (Bi & Poo 1998). The model was exposed to simple controlled speech-like stimuli (artificially synthesised phonemes and naturally spoken words) to investigate how stable representations that are invariant to the within- and between-speaker differences can emerge in the output area of the model. The output of the model is roughly equivalent to the primary auditory cortex. The aim of the first part of the thesis was to investigate the minimal taxonomy necessary for such representations to emerge through the interactions of the spiking dynamics of the network neurons, their ability to learn through STDP, and the statistics of the auditory input stimuli. It was found that sub-cortical pre-processing within the ventral cochlear nucleus and inferior colliculus was necessary to remove jitter inherent to the auditory nerve spike rasters, which would otherwise disrupt STDP learning in the primary auditory cortex. The second half of the thesis investigated the nature of the neural encoding used within the primary auditory cortex stage of the model to represent the learnt auditory object categories. It was found that single-cell binary encoding (DeWeese & Zador 2003) was sufficient to represent two synthesised vowel classes, whereas more complex population encoding using precisely timed spikes within polychronous chains (Izhikevich 2006) represented more complex naturally spoken words in a speaker-invariant manner.
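The STDP rule cited above (Bi & Poo 1998) is easiest to picture as a pair-based exponential learning window. Below is a minimal, self-contained sketch of that generic rule; the amplitudes, time constants and pairing scheme are illustrative placeholders, not the values or the implementation used in this thesis.

```python
import numpy as np

# Minimal pair-based STDP sketch (exponential windows in the spirit of
# Bi & Poo 1998). All parameter values are illustrative placeholders.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(pre_spikes, post_spikes):
    """Total weight change from all pre/post spike pairings (times in ms)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post -> potentiation
                dw += A_PLUS * np.exp(-dt / TAU_PLUS)
            elif dt < 0:    # post before pre -> depression
                dw -= A_MINUS * np.exp(dt / TAU_MINUS)
    return dw

# Example: a causally ordered pairing strengthens the synapse overall.
print(stdp_dw(pre_spikes=[10.0, 50.0], post_spikes=[15.0, 48.0]))
```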
2

Walters, Daniel Matthew. "The computational neuroscience of head direction cells." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:d4afe06a-d44f-4a24-99a3-d0e0a2911459.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Head direction cells signal the orientation of the head in the horizontal plane. This thesis shows how some of the known head direction cell response properties might develop through learning. The research methodology employed is the computer simulation of neural network models of head direction cells that self-organize through learning. The preferred firing directions of head direction cells will change in response to the manipulation of distal visual cues, but not in response to the manipulation of proximal visual cues. Simulation results are presented of neural network models that learn to form separate representations of distal and proximal visual cues that are presented simultaneously as visual input to the network. These results demonstrate the computation required for a subpopulation of head direction cells to learn to preferentially respond to distal visual cues. Within a population of head direction cells, the angular distance between the preferred firing directions of any two cells is maintained across different environments. It is shown how a neural network model can learn to maintain the angular distance between the learned preferred firing directions of head direction cells across two different visual training environments. A population of head direction cells can update the population representation of the current head direction, in the absence of visual input, using internal idiothetic (self-generated) motion signals alone. This is called the path integration of head direction. It is important that the head direction cell system updates its internal representation of head direction at the same speed as the animal is rotating its head. Neural network models are simulated that learn to perform the path integration of head direction, using solely idiothetic signals, at the same speed as the head is rotating.
3

Chateau-Laurent, Hugo. "Modélisation Computationnelle des Interactions Entre Mémoire Épisodique et Contrôle Cognitif." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0019.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Episodic memory is often illustrated with the madeleine de Proust excerpt as the ability to re-experience a situation from the past following the perception of a stimulus. This simplistic scenario should not lead one to think that memory works in isolation from other cognitive functions. On the contrary, memory operations treat highly processed information and are themselves modulated by executive functions in order to inform decision making. This complex interplay can give rise to higher-level functions such as the ability to imagine potential future sequences of events by combining contextually relevant memories. How the brain implements this construction system is still largely a mystery. The objective of this thesis is to employ cognitive computational modeling methods to better understand the interactions between episodic memory, which is supported by the hippocampus, and cognitive control, which mainly involves the prefrontal cortex. It first provides elements of an answer as to how episodic memory can help an agent to act. It is shown that Neural Episodic Control, a fast and powerful method for reinforcement learning, is in fact mathematically close to the traditional Hopfield Network, a model of associative memory that has greatly influenced the understanding of the hippocampus. Neural Episodic Control indeed fits within the Universal Hopfield Network framework, and it is demonstrated that it can be used to store and recall information, and that other kinds of Hopfield networks can be used for reinforcement learning. The question of how executive functions can control episodic memory operations is also tackled. A hippocampus-inspired network is constructed with as few assumptions as possible and modulated with contextual information. The evaluation of performance according to the level at which contextual information is sent provides design principles for controlled episodic memory. Finally, a new biologically inspired model of one-shot sequence learning in the hippocampus is proposed. The model performs very well on multiple datasets while reproducing biological observations. It ascribes a new role to the recurrent collaterals of area CA3 and the asymmetric expansion of place fields, namely to disambiguate overlapping sequences by making retrospective splitter cells emerge. Implications for theories of the hippocampus are discussed and novel experimental predictions are derived.
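The mathematical link described above between Neural Episodic Control and Hopfield-style associative memory can be pictured as a shared "similarity, separation, projection" retrieval step. The sketch below is an illustrative assumption of that generic structure (softmax separation over random stored keys), not the thesis implementation; setting the values equal to the keys turns the same function into auto-associative pattern completion.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def retrieve(query, keys, values, beta=5.0):
    """Kernel-weighted readout shared by modern Hopfield networks and
    NEC-style episodic memories: similarity -> separation -> projection."""
    sims = keys @ query              # similarity of the query to each stored key
    weights = softmax(beta * sims)   # sharp separation for large beta
    return weights @ values          # weighted combination of stored values

rng = np.random.default_rng(0)
keys = rng.standard_normal((50, 8))  # 50 stored states (e.g. observations)
values = rng.standard_normal(50)     # e.g. stored returns in an NEC-like memory
noisy_query = keys[3] + 0.1 * rng.standard_normal(8)
print(retrieve(noisy_query, keys, values), values[3])  # readout ~ stored value 3
```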
4

Cronin, Beau D. "Quantifying uncertainty in computational neuroscience with Bayesian statistical inference." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45336.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2008.
Includes bibliographical references (p. 101-106).
Two key fields of computational neuroscience involve, respectively, the analysis of experimental recordings to understand the functional properties of neurons, and modeling how neurons and networks process sensory information in order to represent the environment. In both of these endeavors, it is crucial to understand and quantify uncertainty - when describing how the brain itself draws conclusions about the physical world, and when the experimenter interprets neuronal data. Bayesian modeling and inference methods provide many advantages for doing so. Three projects are presented that illustrate the advantages of the Bayesian approach. In the first, Markov chain Monte Carlo (MCMC) sampling methods were used to answer a range of scientific questions that arise in the analysis of physiological data from tuning curve experiments; in addition, a software toolbox is described that makes these methods widely accessible. In the second project, the model developed in the first project was extended to describe the detailed dynamics of orientation tuning in neurons in cat primary visual cortex. Using more sophisticated sampling-based inference methods, this model was applied to answer specific scientific questions about the tuning properties of a recorded population. The final project uses a Bayesian model to provide a normative explanation of sensory adaptation phenomena. The model was able to explain a range of detailed physiological adaptation phenomena.
by Beau D. Cronin.
Ph.D.
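For the first project summarised above, a minimal picture of sampling-based inference for a tuning-curve model is sketched below: a Gaussian tuning curve with Poisson spike counts and a random-walk Metropolis sampler. The model form, priors and proposal scales are hypothetical illustrations, not the toolbox described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(0, np.pi, 12)                 # stimulus orientations
true = dict(amp=20.0, pref=1.2, width=0.3, base=2.0)

def rate(theta, amp, pref, width, base):
    return base + amp * np.exp(-0.5 * ((theta - pref) / width) ** 2)

counts = rng.poisson(rate(thetas, **true))         # simulated spike counts

def log_post(p):
    amp, pref, width, base = p
    if amp <= 0 or width <= 0 or base <= 0 or not (0 <= pref <= np.pi):
        return -np.inf                             # flat priors on a bounded box
    lam = rate(thetas, amp, pref, width, base)
    return np.sum(counts * np.log(lam) - lam)      # Poisson log-likelihood

# Random-walk Metropolis sampler over the four tuning-curve parameters.
p = np.array([10.0, 1.0, 0.5, 1.0])
samples, lp = [], log_post(p)
for _ in range(20000):
    prop = p + rng.normal(0, [0.5, 0.05, 0.02, 0.1])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        p, lp = prop, lp_prop
    samples.append(p.copy())
samples = np.array(samples[5000:])                 # drop burn-in
print(samples.mean(axis=0), samples.std(axis=0))   # posterior mean and spread
```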
5

Lee, Ray A. "Analysis of Spreading Depolarization as a Traveling Wave in a Neuron-Astrocyte Network." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1503308416771087.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Allen, John Michael. "Effects of Abstraction and Assumptions on Modeling Motoneuron Pool Output." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1495538117787703.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Shepardson, Dylan. "Algorithms for inverting Hodgkin-Huxley type neuron models." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31686.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph.D)--Algorithms, Combinatorics, and Optimization, Georgia Institute of Technology, 2010.
Committee Chair: Tovey, Craig; Committee Member: Butera, Rob; Committee Member: Nemirovski, Arkadi; Committee Member: Prinz, Astrid; Committee Member: Sokol, Joel. Part of the SMARTech Electronic Thesis and Dissertation Collection.
8

Stevens, Martin. "Animal camouflage, receiver psychology and the computational neuroscience of avian vision." Thesis, University of Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432958.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Tromans, James Matthew. "Computational neuroscience of natural scene processing in the ventral visual pathway." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:b82e1332-df7b-41db-9612-879c7a7dda39.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Neural responses in the primate ventral visual system become more complex in the later stages of the pathway. For example, not only do neurons in IT cortex respond to complete objects, they also learn to respond invariantly with respect to the viewing angle of an object and also with respect to the location of an object. These types of neural responses have helped guide past research with VisNet, a computational model of the primate ventral visual pathway that self-organises during learning. In particular, previous research has focussed on presenting to the model one object at a time during training, and has placed emphasis on the transform invariant response properties of the output neurons of the model that consequently develop. This doctoral thesis extends previous VisNet research and investigates the performance of the model with a range of more challenging and ecologically valid training paradigms, for example when multiple objects are presented to the network during training, or when objects partially occlude one another during training. The different mechanisms that help output neurons to develop object selective, transform invariant responses during learning are proposed and explored. Such mechanisms include the statistical decoupling of objects through multiple object pairings, and the separation of object representations by independent motion. Consideration is also given to the heterogeneous response properties of neurons that develop during learning. For example, although IT neurons demonstrate a number of differing invariances, they also convey spatial information and view specific information about the objects presented on the retina. An updated, scaled-up version of the VisNet model, with a significantly larger retina, is introduced in order to explore these heterogeneous neural response properties.
10

Vellmer, Sebastian. "Applications of the Fokker-Planck Equation in Computational and Cognitive Neuroscience." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21597.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is concerned with the calculation of statistics, in particular the power spectra, of point processes generated by stochastic multidimensional integrate-and-fire (IF) neurons, networks of IF neurons and decision-making models from the corresponding Fokker-Planck equations. In the brain, information is encoded by sequences of action potentials. In studies that focus on spike timing, IF neurons that drastically simplify the spike generation have become the standard model. One-dimensional IF neurons do not suffice to accurately model neural dynamics; the extension towards multiple dimensions, however, yields realistic behavior at the price of growing complexity. The first part of this work develops a theory of spike-train power spectra for stochastic, multidimensional IF neurons. From the Fokker-Planck equation, a set of partial differential equations is derived that describes the stationary probability density, the firing rate and the spike-train power spectrum. In the second part of this work, a mean-field theory of large and sparsely connected homogeneous networks of spiking neurons is developed that takes into account the self-consistent temporal correlations of spike trains. Neural input is approximated by colored Gaussian noise generated by a multidimensional Ornstein-Uhlenbeck process whose coefficients are initially unknown but determined by the self-consistency condition and define the solution of the theory. To explore heterogeneous networks, an iterative scheme is extended to determine the distribution of spectra. In the third part, the Fokker-Planck equation is applied to calculate the statistics of sequences of binary decisions from diffusion-decision models (DDM). For the analytically tractable DDM, the statistics are calculated from the corresponding Fokker-Planck equation. To determine the statistics for nonlinear models, the threshold-integration method is generalized.
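The thesis obtains spike-train power spectra analytically from the Fokker-Planck equation; as a point of reference, the same quantity can also be estimated by direct simulation. The sketch below simulates a one-dimensional leaky IF neuron driven by white noise and averages periodograms of its spike trains; all parameter values and conventions are arbitrary illustrations, not results from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 5e-4, 10.0                    # time step (s), trial length (s)
n = int(T / dt)
mu, D, tau, vth, vr = 1.2, 0.05, 0.02, 1.0, 0.0   # illustrative LIF parameters

def lif_spike_train():
    """Euler-Maruyama simulation of a white-noise-driven leaky IF neuron."""
    v, x = vr, np.zeros(n)
    for i in range(n):
        v += dt * (mu - v) / tau + np.sqrt(2 * D * dt / tau) * rng.standard_normal()
        if v >= vth:
            v = vr
            x[i] = 1.0 / dt          # delta-like spike of unit area
    return x

# Average periodograms over trials to estimate the spike-train power spectrum.
trials = 20
f = np.fft.rfftfreq(n, dt)
S = np.zeros_like(f)
for _ in range(trials):
    x = lif_spike_train()
    X = np.fft.rfft(x - x.mean()) * dt
    S += np.abs(X) ** 2 / T
S /= trials
# x is the last simulated trial; its spike count gives a rough firing rate.
print("firing rate ~", x.sum() * dt / T, "Hz; spectrum at lowest nonzero f:", S[1])
```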
11

Ellaithy, Amr. "Metabotropic Glutamate Receptor 2 Activation: Computational Predictions and Experimental Validation." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5319.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
G protein-coupled receptors (GPCRs) are the largest family of signaling proteins in animals and represent the largest family of druggable targets in the human genome. Therefore, it is of no surprise that the molecular mechanisms of GPCR activation and signal transduction have attracted close attention for the past few decades. Several stabilizing interactions within the GPCR transmembrane (TM) domain helices regulate receptor activation. An example is a salt bridge between 2 highly conserved amino acids at the bottom of TM3 and TM6 that has been characterized for a large number of GPCRs. Through structural modeling and molecular dynamics (MD) simulations, we predicted several electrostatic interactions to be involved in metabotropic glutamate receptor 2 (mGlu2R) activation. To experimentally test these predictions, we employed a charge reversal mutagenesis approach to disrupt predicted receptor electrostatic intramolecular interactions as well as intermolecular interactions between the receptor and G proteins. Using two electrode voltage clamp in Xenopus laevis oocytes expressing mutant receptors and G-proteins, we revealed novel electrostatic interactions, mostly located around intracellular loops 2 and 3 of mGlu2R, that are critical for both receptor and G-protein activation. These studies contribute to elucidating the molecular determinants of mGluRs activation and conformational coupling to G-proteins, and can likely be extended to include other classes of GPCRs.
12

Nguyen, Harrison Tri Tue. "Computational Neuroscience with Deep Learning for Brain Imaging Analysis and Behaviour Classification." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27313.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent advances in artificial neural networks and deep learning models have produced significant results in problems related to neuroscience. For example, deep learning models have demonstrated superior performance in non-linear, multivariate pattern classification problems such as Alzheimer’s disease classification, brain lesion segmentation, skull stripping and brain age prediction. Deep learning provides unique advantages for high-dimensional data such as MRI data, since it does not require extensive feature engineering. The thesis investigates three problems related to neuroscience and discusses solutions to those scenarios. MRI has been used to analyse the structure of the brain and its pathology. However, heterogeneity across scanners, differences in MRI protocol, and variation in site thermal and power stability can introduce scanning differences and artefacts for the same individual undergoing different scans. Therefore, combining images from different sites or even different days can introduce biases that obscure the signal of interest or can produce results that could be driven by these differences. An algorithm, the CycleGAN, will be presented and analysed which uses generative adversarial networks to transform a set of images from a given MRI site into images with the characteristics of a different MRI site. Secondly, the MRI scans of the brain can come in the form of different modalities, such as T1-weighted and FLAIR, which have been used to investigate a wide range of neurological disorders. The acquisition of all of these modalities is expensive, time-consuming and inconvenient, and the required modalities are often not available. As a result, these datasets contain large amounts of unpaired data, where examples in the dataset do not contain all modalities. On the other hand, there is a smaller fraction of examples that contain all modalities (paired data). This thesis presents a method to address the issue of translating between two neuroimaging modalities with a dataset of unpaired and paired data, in a semi-supervised learning framework. Lastly, behavioural modelling will be considered, where it is associated with an impressive range of decision-making tasks that are designed to index sub-components of psychological and neural computations that are distinct across groups of people, including people with an underlying disease. The thesis proposes a method that learns prototypical behaviours of each population in the form of readily interpretable subsequences of choices, and classifies subjects by finding signatures of these prototypes in their behaviour.
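The harmonisation idea described above rests on CycleGAN's combination of an adversarial loss with a cycle-consistency loss. The PyTorch sketch below shows only that loss structure, with toy stand-in networks and random tensors in place of MRI slices; it is not the architecture, data handling or training schedule used in the thesis.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two generators (site A <-> site B) and one discriminator.
# Real CycleGAN generators are deep residual networks; these placeholders only
# illustrate how the adversarial and cycle-consistency terms are combined.
def tiny_cnn(out_ch=1):
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

G_ab, G_ba = tiny_cnn(), tiny_cnn()
D_b = nn.Sequential(tiny_cnn(), nn.AdaptiveAvgPool2d(1))
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)

a = torch.randn(4, 1, 64, 64)          # unpaired batch of "site A" slices
b = torch.randn(4, 1, 64, 64)          # unpaired batch of "site B" slices (unused here)

fake_b = G_ab(a)                        # translate A -> B
rec_a = G_ba(fake_b)                    # translate back B -> A
adv = bce(D_b(fake_b), torch.ones(4, 1, 1, 1))   # fool the site-B discriminator
cyc = l1(rec_a, a)                      # cycle consistency: A -> B -> A ~ A
loss = adv + 10.0 * cyc                 # lambda = 10 is a commonly used weighting

opt.zero_grad()
loss.backward()
opt.step()
print(float(adv), float(cyc))
```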
13

Yancey, Madison E. "Computational Simulation and Analysis of Neuroplasticity." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1622582138544632.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Chakrabarty, Nilaj. "Computational Study of Axonal Transport Mechanisms of Actin and Neurofilaments." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1584441310326918.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Lai, Yi Ming. "Stochastic population oscillators in ecology and neuroscience." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:f12697fb-23fa-4817-974e-6e188b9ecb38.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis we discuss the synchronization of stochastic population oscillators in ecology and neuroscience. Traditionally, the synchronization of oscillators has been studied in deterministic systems with various modes of synchrony induced by coupling between the oscillators. However, recent developments have shown that an ensemble of uncoupled oscillators can be synchronized by a common noise source alone. By considering the effects of noise-induced synchronization on biological oscillators, we are able to explain various biological phenomena in ecological and neurobiological contexts - most importantly, the long-observed Moran effect. Our formulation of the systems as limit cycle oscillators arising from populations of individuals, each with a random element to its behaviour, also allows us to examine the interaction between an external noise source and this intrinsic stochasticity. This provides possible explanations as to why in ecological systems large-amplitude cycles may not be observed in the wild. In neural population oscillators, we were able to observe not just synchronization, but also clustering in some parameter regimes. Finally, we are also able to extend our methods to include coupling in our models. In particular, we examine the competing effects of dispersal and extrinsic noise on the synchronization of a pair of Rosenzweig-MacArthur predator-prey systems. We discover that common environmental noise will ultimately synchronize the oscillators, but that the approach to synchrony depends on whether or not dispersal in the absence of noise supports any stable asynchronous states. We also show how the combination of correlated (shared) and uncorrelated (unshared) noise with dispersal can lead to a multistable steady-state probability density. Similar analysis on a coupled system of neural oscillators would be an interesting project for future work, which, among other future directions of research, is discussed in the concluding section of this thesis.
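The core mechanism discussed above, noise-induced synchronization of uncoupled oscillators, can be illustrated at the level of phase equations: two identical phase oscillators driven by a shared noise source drift toward a common phase. The phase sensitivity function, noise amplitudes and time step below are arbitrary choices for illustration, not quantities derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 1e-3, 100_000
omega, sigma_common, sigma_private = 2 * np.pi, 0.8, 0.05   # illustrative values

# Two uncoupled phase oscillators with a shared (common) noise source and weak
# independent noise; the phase sensitivity Z(theta) = sin(theta) is a toy choice.
theta = rng.uniform(0, 2 * np.pi, size=2)
for _ in range(steps):
    common = rng.standard_normal() * np.sqrt(dt)
    private = rng.standard_normal(2) * np.sqrt(dt)
    theta += omega * dt + np.sin(theta) * (sigma_common * common + sigma_private * private)
    theta %= 2 * np.pi

# The phase difference typically ends up close to 0 when common noise dominates.
diff = np.angle(np.exp(1j * (theta[0] - theta[1])))
print("final phase difference:", diff)
```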
16

Naze, Sebastien. "Multiscale Computational Modeling of Epileptic Seizures : from macro to microscopic dynamics." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4023/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis consists in the development of a network model of spiking neurons and the systematic investigation of conditions under which the network displays the emergent dynamic behaviors known from the Epileptor, a well-investigated abstract model of epileptic neural activity. We find that exogenous fluctuations from the extracellular environment and electro-tonic couplings between neurons play an essential role in seizure genesis. We demonstrate that spike-wave discharges, including interictal spikes, can be generated primarily by inhibitory neurons only, whereas excitatory neurons are responsible for the fast discharges during the wave part. We draw the conclusion that slow variations of global excitability, due to exogenous fluctuations from the extracellular environment, and gap junction communication push the system into paroxysmal regimes locally, while excitatory synaptic and extracellular couplings participate in seizure spread globally across brain regions.
17

Kazer, J. F. "The hippocampus in memory and anxiety : an exploration within computational neuroscience and robotics." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339963.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Vellmer, Sebastian [author]. "Applications of the Fokker-Planck Equation in Computational and Cognitive Neuroscience / Sebastian Vellmer." Berlin : Humboldt-Universität zu Berlin, 2020. http://d-nb.info/1214240682/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Zhu, Mengchen. "Sparse coding models of neural response in the primary visual cortex." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53868.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Sparse coding is an influential unsupervised learning approach proposed as a theoretical model of the encoding process in the primary visual cortex (V1). While sparse coding has been successful in explaining classical receptive field properties of simple cells, it was unclear whether it can account for more complex response properties in a variety of cell types. In this dissertation, we demonstrate that sparse coding and its variants are consistent with key aspects of neural response in V1, including many contextual and nonlinear effects, a number of inhibitory interneuron properties, as well as the variance and correlation distributions in the population response. The results suggest that important response properties in V1 can be interpreted as emergent effects of a neural population efficiently representing the statistical structures of natural scenes under resource constraints. Based on the models, we make predictions of the circuit structure and response properties in V1 that can be verified by future experiments.
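Inference in a sparse coding model amounts to trading reconstruction error against an L1 penalty on the coefficients. The sketch below performs that inference with a fixed random dictionary and iterative soft-thresholding (ISTA); dictionary learning and the V1 comparisons of the dissertation are not reproduced, and all sizes and penalties are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_basis = 64, 128                     # overcomplete dictionary
D = rng.standard_normal((n_pixels, n_basis))
D /= np.linalg.norm(D, axis=0)                  # unit-norm basis functions

def sparse_code(x, D, lam=0.1, n_iter=300):
    """ISTA: minimize 0.5*||x - D a||^2 + lam*||a||_1 over the coefficients a."""
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

x = D @ (rng.standard_normal(n_basis) * (rng.random(n_basis) < 0.05))  # sparse source
a = sparse_code(x, D)
print("active coefficients:", np.sum(np.abs(a) > 1e-3),
      "reconstruction error:", np.linalg.norm(x - D @ a))
```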
20

Hudson, Amber Elise. "Neuronal mechanisms for the maintenance of consistent behavior in the stomatogastric ganglion of Cancer borealis." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47654.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Each neuron needs to maintain a careful balance between the changes implicit in experience, and the demands of stability required by its function. This balance tips depending on the neuronal system, but in any role, disease or neural disorders can develop when the regulatory mechanisms involved in neuronal stability fail. The objective of this thesis was to characterize mechanisms underlying neuronal stability and activity maintenance, in the hopes that further understanding of these processes might someday lead to novel interventions for neurological disorders. The pyloric circuit of decapod crustaceans controls the rhythmic contractions of the foregut musculature, and has long been recognized as an excellent model system in which to study neuronal network stability. Recent experimental evidence has shown that each neuronal cell type of this circuit exhibits a unique set of positive linear correlations between ionic membrane conductances, which suggests that coordinated expression of ion channels plays a role in constraining neuronal electrical activity. In Aim 1, we hypothesized a causal relationship between expressed conductance correlations and features of cellular identity, namely electrical activity type. We partitioned an existing database of conductance-based model neurons based on various measures of intrinsic activity to approximate distinctions between biological cell types. We then tested individual conductance pairs for linear dependence to identify correlations. Similar to experimental results, each activity type investigated had a unique combination of correlated conductances. Furthermore, we found that populations of models that conform to a specific conductance correlation have a higher likelihood of exhibiting a particular feature of electrical activity. We conclude that regulating conductance ratios can support proper electrical activity of a wide range of cell types, particularly when the identity of the cell is well-defined by one or two features of its activity. The phenomenon of pyloric network recovery after removal of top-down neuromodulatory input--a process termed decentralization--is seen as a classic model of homeostatic change after injury. After decentralization, the pyloric central pattern generator briefly loses its characteristic rhythmic activity, but the same activity profile is recovered 3-5 days later via poorly understood homeostatic changes. This re-emergence of the pyloric rhythm occurs without the full pre-decentralization set of fixed conductance ratios. If conductance ratios stabilize pyloric activity before decentralization as we showed in Aim 1, then other mechanisms must account for the return of the pyloric rhythm after network recovery. Based on vertebrate studies demonstrating a role for the extracellular matrix (ECM) in activity regulation, we hypothesized in Aim 2 that the ECM was participating in activity maintenance in the stomatogastric nervous system. We used the enzyme chondroitinase ABC (chABC) to degrade extracellular chondroitin sulfate (CS) in the stomatogastric ganglion while in organ culture. Our results are the first to demonstrate the presence of CS in the crustacean nervous system via immunochemistry. Furthermore, we show that while ongoing activity is not disrupted by chABC treatment, recovery of pyloric activity after decentralization was significantly delayed without intact extracellular CS. 
Our results are the first to show that CS has a role in neuronal activity maintenance in crustaceans, and suggest that CS may be involved in initiating or directing activity maintenance needed in times of neuronal stress.
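The conductance-correlation analysis of Aim 1 can be pictured as a pairwise linear-dependence screen over a model database partitioned by activity type. The sketch below runs such a screen on synthetic conductance sets with one planted correlation; the channel names, database size and significance thresholds are hypothetical, not those of the studies above.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(4)
names = ["gNa", "gCaT", "gCaS", "gA", "gKCa", "gKd", "gH", "gLeak"]   # hypothetical
# Stand-in for one activity-type partition of a model database:
# rows = models, columns = maximal conductances (arbitrary units).
G = rng.lognormal(mean=0.0, sigma=0.5, size=(500, len(names)))
G[:, 4] = 0.8 * G[:, 5] + 0.1 * rng.standard_normal(500)   # planted gKCa ~ gKd correlation

for i, j in combinations(range(len(names)), 2):
    r, p = stats.pearsonr(G[:, i], G[:, j])
    if p < 0.05 / 28 and r > 0.5:          # Bonferroni correction over the 28 pairs
        print(f"{names[i]} vs {names[j]}: r = {r:.2f}")
```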
21

Banks, Jess M. "Chaos and Learning in Discrete-Time Neural Networks." Oberlin College Honors Theses / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1445945609.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
22

Ging-Jehli, Nadja Rita. "On the implementation of Computational Psychiatry within the framework of Cognitive Psychology and Neuroscience." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555338342285251.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Battista, Aldo. "Low-dimensional continuous attractors in recurrent neural networks : from statistical physics to computational neuroscience." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE012.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
How sensory information is encoded and processed by neuronal circuits is a central question in computational neuroscience. In many brain areas, the activity of neurons is found to depend strongly on some continuous sensory correlate; examples include simple cells in the V1 area of the visual cortex coding for the orientation of a bar presented to the retina, and head direction cells in the subiculum or place cells in the hippocampus, whose activities depend, respectively, on the orientation of the head and the position of an animal in the physical space. Over the past decades, continuous attractor neural networks were introduced as an abstract model for the representation of a few continuous variables in a large population of noisy neurons. Through an appropriate set of pairwise interactions between the neurons, the dynamics of the neural network is constrained to span a low-dimensional manifold in the high-dimensional space of activity configurations, and thus codes for a few continuous coordinates on the manifold, corresponding to spatial or sensory information. While the original model was based on how to build a single continuous manifold in a high-dimensional space, it was soon realized that the same neural network should code for many distinct attractors, i.e., corresponding to different spatial environments or contextual situations. An approximate solution to this harder problem was proposed twenty years ago, and relied on an ad hoc prescription for the pairwise interactions between neurons, summing up the different contributions corresponding to each single attractor taken independently of the others. This solution, however, suffers from two major issues: the interference between maps strongly limits the storage capacity, and the spatial resolution within a map is not controlled. In the present manuscript, we address these two issues. We show how to achieve optimal storage of continuous attractors and study the optimal trade-off between capacity and spatial resolution, that is, how the requirement of higher spatial resolution affects the maximal number of attractors that can be stored, proving that recurrent neural networks are very efficient memory devices capable of storing many continuous attractors at high resolution. In order to tackle these problems we used a combination of techniques from statistical physics of disordered systems and random matrix theory. On the one hand we extended Gardner's theory of learning to the case of patterns with strong spatial correlations. On the other hand we introduced and studied the spectral properties of a new ensemble of random matrices, i.e., the additive superimposition of an extensive number of independent Euclidean random matrices in the high-density regime. In addition, this approach defines a concrete framework to address many questions, in close connection with ongoing experiments, related in particular to the discussion of the random remapping hypothesis and to the coding of spatial information and the development of brain circuits in young animals. Finally, we discuss a possible mechanism for the learning of continuous attractors from real images.
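The "ad hoc prescription" mentioned above sums one Hebbian contribution per stored map, each map assigning the neurons new preferred positions (remapping); interference between maps then appears as noise on top of the distance-dependent couplings of any single map. The sketch below builds such summed couplings from place-cell-like tuning curves on a ring; it only illustrates the construction, not the optimal-storage theory developed in the thesis, and all sizes and widths are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
N, n_pos = 200, 100                              # neurons, sampled positions on a ring
pos = np.linspace(0, 2 * np.pi, n_pos, endpoint=False)

def tuning(centres, width=0.5):
    """Place-cell-like Gaussian tuning curves on a ring (N x n_pos)."""
    d = np.angle(np.exp(1j * (pos[None, :] - centres[:, None])))
    return np.exp(-0.5 * (d / width) ** 2)

# Hebbian ("summed") prescription for the couplings: one term per stored map,
# each map assigning the neurons random preferred positions (remapping).
maps = [rng.uniform(0, 2 * np.pi, N) for _ in range(3)]
J = sum(tuning(c) @ tuning(c).T for c in maps) / n_pos
np.fill_diagonal(J, 0.0)

# Within map 0, couplings still decay with the distance between preferred
# positions, despite the interference ("noise") added by the other maps.
d0 = np.abs(np.angle(np.exp(1j * (maps[0][:, None] - maps[0][None, :]))))
near = J[d0 < 0.5].mean()
far = J[d0 > 2.5].mean()
print("mean coupling, nearby vs distant cells in map 0:", near, far)
```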
24

Wright, Sean Patrick. "Cognitive neuroscience of episodic memory: behavioral, genetic, electrophysiological, and computational approaches to sequence memory." Thesis, Boston University, 2003. https://hdl.handle.net/2144/27805.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Boston University. University Professors Program Senior theses.
PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.
2031-01-02
25

Woldman, Wessel. "Emergent phenomena from dynamic network models : mathematical analysis of EEG from people with IGE." Thesis, University of Exeter, 2016. http://hdl.handle.net/10871/23297.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis mathematical techniques and models are applied to electroencephalographic (EEG) recordings to study mechanisms of idiopathic generalised epilepsy (IGE). First, we compare network structures derived from resting-state EEG from people with IGE, their unaffected relatives, and healthy controls. Next, these static networks are combined with a dynamical model describing the activity of a cortical region as a population of phase-oscillators. We then examine the potential of the differences found in the static networks and the emergent properties of the dynamic network as individual biomarkers of IGE. The emphasis of this approach is on discerning the potential of these markers at the level of an individual subject rather than their ability to identify differences at a group level. Finally, we extend a dynamic model of seizure onset to investigate how epileptiform discharges vary over the course of the day in ambulatory EEG recordings from people with IGE. By perturbing the dynamics describing the excitability of the system, we demonstrate the model can reproduce discharge distributions on an individual level which are shown to express a circadian tone. The emphasis of the model approach is on understanding how changes in excitability within brain regions, modulated by sleep, metabolism, endocrine axes, or anti-epileptic drugs (AEDs), can drive the emergence of epileptiform activity in large-scale brain networks. Our results demonstrate that studying EEG recordings from people with IGE can lead to new mechanistic insight on the idiopathic nature of IGE, and may eventually lead to clinical applications. We show that biomarkers derived from dynamic network models perform significantly better as classifiers than biomarkers based on static network properties. Hence, our results provide additional evidence that the interplay between the dynamics of specific brain regions, and the network topology governing the interactions between these regions, is crucial in the generation of emergent epileptiform activity. Pathological activity may emerge due to abnormalities in either of those factors, or a combination of both, and hence it is essential to develop new techniques to characterise this interplay theoretically and to validate predictions experimentally.
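The dynamic network models referred to above couple a phase-oscillator description of each cortical region through a static connectivity matrix. A generic sketch of that ingredient is a Kuramoto model run on an adjacency matrix, as below; the adjacency matrix, coupling strength and natural frequencies are placeholders rather than the EEG-derived networks or the specific model used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 19                                          # e.g. one oscillator per EEG channel
A = (rng.random((n, n)) < 0.2).astype(float)    # stand-in functional network
A = np.triu(A, 1)
A = A + A.T                                     # undirected, no self-loops

omega = rng.normal(10.0 * 2 * np.pi, 1.0, n)    # natural frequencies (~alpha band)
K, dt, steps = 5.0, 1e-3, 20_000
theta = rng.uniform(0, 2 * np.pi, n)

R = []
for _ in range(steps):
    # Kuramoto coupling: each region is pulled toward its network neighbours.
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)
    R.append(np.abs(np.mean(np.exp(1j * theta))))   # order parameter (synchrony)

print("mean synchrony over the last second:", np.mean(R[-1000:]))
```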
26

Endres, Dominik M. "Bayesian and information-theoretic tools for neuroscience." Thesis, St Andrews, 2006. http://hdl.handle.net/10023/162.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

van, de Ven Gido. "Reactivation and reinstatement of hippocampal assemblies." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:edd60944-381e-4c7d-8029-4d7abb811fc9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
New memories are labile, but over time some of them are stabilized. This thesis investigates the network mechanisms in the brain underlying the gradual consolidation of memory representations. Specifically, I performed a causal test of the long-standing hypothesis that the offline reactivation of new, memory-representing cell assemblies supports memory consolidation by stabilizing those assemblies and increasing the likelihood of their later reinstatement - and therefore presumably of memory recall. I performed multi-unit extracellular recordings in the dorsal CA1 region of behaving mice, from which I detected short-timescale (25 ms) co-activation patterns of principal neurons during exploration of open-field enclosures. These cell assembly patterns appeared to represent space as their expression was spatially tuned and environment specific; and these patterns were preferentially reactivated during sharp wave-ripples (SWRs) in subsequent sleep. Importantly, after exposure to a novel - but not a familiar - enclosure, the strength with which an assembly pattern was reactivated predicted its later reinstatement strength during context re-exposure. Moreover, optogenetic silencing of hippocampal pyramidal neurons during on-the-fly detected SWRs during the sleep following exposure to a novel - but again not a familiar - enclosure impaired subsequent assembly pattern reinstatement. These results are direct evidence for a causal role of SWR-associated reactivation in the stability of new hippocampal cell assemblies. Surprisingly, offline reactivation was only important for the stability of a subset of the assembly patterns expressed in a novel enclosure. Optogenetic SWR silencing only impaired the reinstatement of "gradually strengthened" patterns that had had a significant increasing trend in their expression strength throughout the initial exposure session. Consistent with this result, a positive correlation between reactivation and subsequent reinstatement was only found for these gradually strengthened patterns and not for the other, "early stabilized" patterns. An interesting interpretation is that the properties of the gradually strengthened patterns are all consistent with the Hebbian postulate of "fire together, wire together". To enable investigation of the relation between interneurons and principal cell assembly patterns from extracellular recordings, as a final contribution this thesis describes a statistical framework for the unsupervised classification of interneurons based on their firing properties alone.
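Short-timescale co-activation patterns of the kind described above are commonly extracted by binning spikes (here 25 ms), z-scoring each cell, and keeping principal components whose eigenvalues exceed the Marchenko-Pastur bound. The sketch below applies that generic recipe to synthetic Poisson counts with one planted assembly; it is not necessarily the exact detection pipeline used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(8)
n_cells, n_bins = 60, 4000                        # 25 ms bins over ~100 s
counts = rng.poisson(0.2, size=(n_cells, n_bins)).astype(float)
members = np.arange(8)                            # a planted assembly of 8 cells
events = rng.random(n_bins) < 0.03                # bins in which the assembly fires
counts[np.ix_(members, np.where(events)[0])] += rng.poisson(2.0, (8, events.sum()))

z = (counts - counts.mean(axis=1, keepdims=True)) / counts.std(axis=1, keepdims=True)
C = (z @ z.T) / n_bins                            # correlation matrix of binned activity
eigval, eigvec = np.linalg.eigh(C)

q = n_bins / n_cells
mp_max = (1 + 1 / np.sqrt(q)) ** 2                # Marchenko-Pastur upper bound
signif = eigval > mp_max
print("significant assembly patterns:", signif.sum())
print("top-pattern weights of planted members:", eigvec[members, -1])
```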
28

De, Pisapia Nicola. "A framework for implicit planning : towards a cognitive/computational neuroscience theory of prefrontal cortex function." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/24519.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis we review available experimental findings on rodents, monkeys and humans, and we suggest a unifying interpretation of the role of the Prefrontal Cortex (PFC) in behaviour. We implement computational models to test this interpretation, and propose novel experiments. Our suggestion is, as also other researchers have proposed, that the PFC is involved in Planning, i.e. in evaluating which course of actions to execute in order to reach a goal. Unlike previous researchers, we emphasize and limit ourselves to unconscious aspects of Planning, and describe a view of this process that is quite close to Instrumental Conditioning, and doesn't involve language, external measures of time (clocks), instructions or social interactions of any kind. Nonetheless, unconscious Planning can be a quite complex activity. Under this interpretation, we show reward-based computational models that, while mimicking some of the known neural properties of the PFC, can perform planning. One aspect on which we focus is the capacity of neurons in Dorsolateral PFC to code temporal information, namely when to expect task-related events to occur. This is a core requirement to organize and plan complex behaviour. Another aspect on which we focus is the fundamental role played in the Planning process by the Basal Ganglia. As a plan is executed successfully several times, the Basal Ganglia build chunked representations of the whole course of actions needed to reach a goal. At the same time the Posterior cortex retains the detailed information of how and where to execute these actions. This process allows the PFC to plan about more and more complex goals.
29

O'Leary, Timothy S. "Homeostatic regulation of intrinsic excitability in hippocampal neurons." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3079.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The proper functioning of nervous systems requires electrical activity to be tightly regulated. Perturbations in the intrinsic properties of neurons, and in excitatory input, are imposed throughout nervous system development as cell morphology and network activity evolve. In mature nervous systems these changes continue as a result of synaptic plasticity and external stimuli. It is therefore likely that homeostatic mechanisms exist to regulate membrane conductances that determine the excitability of individual neurons, and several mechanisms have been characterised to date. This thesis characterises a novel in vitro model for homeostatic control of intrinsic excitability. The principal finding is that cultured hippocampal neurons respond to chronic depolarisation over a period of days by attenuating their response to injected current. This effect was found to depend on the level of depolarisation and the length of treatment, and is accompanied by changes in both active and passive membrane conductances. In addition, the effect is reversible and dependent on L-type calcium channel activity. Using experimental data to parameterise a conductance-based computer model suggests that the changes in conductance properties account for the observed differences in excitability.
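The homeostatic attenuation described above can be caricatured as slow negative feedback that adjusts a conductance until activity returns to a set point. The toy rate model below shows that integral-control idea only; the variables, time constants and input-output relation are invented for illustration and do not correspond to the conductance-based model in the thesis.

```python
# Toy single-compartment rate model: "activity" grows with depolarising drive and
# is attenuated by an adaptation/potassium-like conductance g that is slowly
# regulated toward an activity set point (integral negative feedback).
target = 1.0            # desired activity level (arbitrary units)
g = 0.5                 # regulated conductance
tau_g = 200.0           # slow homeostatic time constant
drive = 2.0             # chronic depolarising input

def activity(drive, g):
    return max(drive - g, 0.0)          # crude input-output relation

dt = 0.1
for step in range(20000):
    a = activity(drive, g)
    g += dt / tau_g * (a - target)      # too much activity -> upregulate g
    if step == 0 or step == 19999:
        print(f"step {step}: activity = {a:.2f}, g = {g:.2f}")

# After chronic depolarisation the conductance settles so that activity ~ target,
# mirroring the attenuated response to injected current seen after treatment.
```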
30

Marsh, Steven Joseph Thomas. "Efficient programming models for neurocomputation." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709268.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Philippides, Andrew Owen. "Modelling diffusion of nitric oxide in brains." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250180.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
32

Lundh, Dan. "A computational neuroscientific model for short-term memory." Thesis, University of Exeter, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324742.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
33

Hendrickson, Eric B. "Morphologically simplified conductance based neuron models: principles of construction and use in parameter optimization." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33905.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The dynamics of biological neural networks are of great interest to neuroscientists and are frequently studied using conductance-based compartmental neuron models. For speed and ease of use, neuron models are often reduced in morphological complexity. This reduction may affect input processing and prevent the accurate reproduction of neural dynamics. However, such effects are not yet well understood. Therefore, for my first aim I analyzed the processing capabilities of 'branched' or 'unbranched' reduced models by collapsing the dendritic tree of a morphologically realistic 'full' globus pallidus neuron model while maintaining all other model parameters. Branched models maintained the original detailed branching structure of the full model while the unbranched models did not. I found that full model responses to somatic inputs were generally preserved by both types of reduced model but that branched reduced models were better able to maintain responses to dendritic inputs. However, inputs that caused dendritic sodium spikes, for instance, could not be accurately reproduced by any reduced model. Based on my analyses, I provide recommendations on how to construct reduced models and indicate suitable applications for different levels of reduction. In particular, I recommend that unbranched reduced models be used for fast searches of parameter space given somatic input output data. The intrinsic electrical properties of neurons depend on the modifiable behavior of their ion channels. Obtaining a quality match between recorded voltage traces and the output of a conductance based compartmental neuron model depends on accurate estimates of the kinetic parameters of the channels in the biological neuron. Indeed, mismatches in channel kinetics may be detectable as failures to match somatic neural recordings when tuning model conductance densities. In my first aim, I showed that this is a task for which unbranched reduced models are ideally suited. Therefore, for my second aim I optimized unbranched reduced model parameters to match three experimentally characterized globus pallidus neurons by performing two stages of automated searches. In the first stage, I set conductance densities free and found that even the best matches to experimental data exhibited unavoidable problems. I hypothesized that these mismatches were due to limitations in channel model kinetics. To test this hypothesis, I performed a second stage of searches with free channel kinetics and observed decreases in the mismatches from the first stage. Additionally, some kinetic parameters consistently shifted to new values in multiple cells, suggesting the possibility for tailored improvements to channel models. Given my results and the potential for cell specific modulation of channel kinetics, I recommend that experimental kinetic data be considered as a starting point rather than as a gold standard for the development of neuron models.
34

Guclu, Burak Bolanowski Stanley J. "Computational studies on rapidly-adapting mechanoreceptive fibers." Related Electronic Resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2003. http://wwwlib.umi.com/cr/syr/main.

35

Zysman, Daniel. "The role of neuronal feedback in the detection of transient signals: a computational approach." Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28832.

Abstract:
This study investigates the role of neuronal feedback in the detection of small-amplitude transient signals. We focus specifically on how the weakly electric fish Apteronotus leptorhynchus captures prey such as Daphnia in a noisy environment by means of its electric sense. Using the electrosensory network as a template, we build a computational model that allows us to evaluate detection performance in two different scenarios: without neuronal feedback (open-loop) and with neuronal feedback (closed-loop). For each network scenario, spike count distributions across realizations are computed in the absence and presence of the prey-related signal, and compared using ROC (Receiver Operating Characteristic) curve analysis. The area under the ROC curve (AUC) and the equal error rate (EER) are used to quantify the performance of the different network configurations. For body-object distances < 20 mm, the closed-loop model yields more robust and reliable signal detection than the open-loop configuration. For larger distances, there are no differences between the open- and closed-loop configurations. These results depend on the exact choice of parameters for the model, in particular those controlling feedback inhibitory input strength; increasing inhibition, up to a point, further improves the closed-loop model. This study shows, in a simplified model framework that can be applied to a variety of sensory systems, how feedback can enhance the detection of weak transient signals.
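
As a rough illustration of the ROC-style analysis described above, the sketch below compares spike-count distributions with and without a "prey" signal and summarises detectability with the AUC and equal error rate. The Poisson counts, rates, and trial numbers are hypothetical stand-ins, not the output of the thesis's electrosensory model.

    # ROC, AUC, and EER from two simulated spike-count distributions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 5000
    counts_noise = rng.poisson(lam=20.0, size=n_trials)    # baseline activity
    counts_signal = rng.poisson(lam=24.0, size=n_trials)   # baseline + prey signal

    # Sweep a decision threshold over the observed count range.
    thresholds = np.arange(0, counts_signal.max() + 2)
    hit_rate = np.array([(counts_signal >= th).mean() for th in thresholds])
    false_alarm = np.array([(counts_noise >= th).mean() for th in thresholds])

    # Area under the ROC curve by the trapezoidal rule (rates sorted by FPR).
    fpr, tpr = false_alarm[::-1], hit_rate[::-1]
    auc = np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2.0)

    # Equal error rate: where the miss rate (1 - hits) matches the false alarms.
    eer_idx = np.argmin(np.abs((1.0 - hit_rate) - false_alarm))
    eer = false_alarm[eer_idx]
    print(f"AUC = {auc:.3f}, EER = {eer:.3f}")
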
36

Law, Judith S. "Modeling the development of organization for orientation preference in primary visual cortex." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3935.

Abstract:
The cerebral cortex of mammals comprises a series of topographic maps, forming sensory and motor areas such as those in the visual, auditory, and somatosensory systems. Understanding the rules that govern the development of these maps, and how this topographic organization relates to information processing, is critical for understanding cortical processing and whole-brain function. Previous computational models have shown that topographic maps can develop through a process of self-organization if spatially localized patches of cortical neurons are activated by particular stimuli. This thesis presents a series of computational models, based on this principle of self-organization, that focus on the development of the map of orientation preference in primary visual cortex (V1). This map is the prototypical example of topographic map development in the brain and is the most widely studied; however, the same self-organizing principles can also apply to maps of many other visual features and to maps in many other sensory areas. Experimental evidence indicates that orientation preference maps in V1 develop in a stable way, with the initial layout determined before eye opening. This constraint is at odds with previous self-organizing models, which have used biologically unfounded ad-hoc methods to obtain robust and reliable development. Such mechanisms inherently lead to instability, causing massive reorganization over time. The first model presented in this thesis (ALISSOM) shows how ad-hoc methods can be replaced with biologically realistic homeostatic mechanisms that lead to development that is both robust and stable. This model shows for the first time how orientation maps can remain stable despite the massive circuit reconstruction and change in visual inputs occurring during development. This model also highlights the requirements for homeostasis in the developing visual circuit. A second model shows how this development can occur using circuitry that is consistent with the known wiring in V1, unlike previous models. This new model, LESI, contains Long-range Excitatory and Short-range Inhibitory connections between model neurons. Instead of direct long-range inhibition, it uses di-synaptic inhibition to ensure that when visual stimuli are at high contrast, long-range excitatory connections have an overall inhibitory influence. The results match previous models in the special case of the high-contrast inputs that drive development most strongly, but show how the behavior relates to the underlying circuitry, and also make it possible to explore effects at a wide range of contrasts. The final part of this thesis explores the differences between rodents and higher mammals that lead to the lack of topographic organization in rodent species. A lack of organization for orientation also implies local disorder in retinotopy, and analysis of retinotopy data from two-photon calcium imaging in mouse (provided by Tom Mrsic-Flogel, University College London) confirms this hypothesis. A self-organizing model is used to investigate how this disorder can arise via variation in either feed-forward connections to V1 or lateral connections within V1, and how the effects of disorder may vary between species. These results suggest that species with and without topographic maps implement similar visual algorithms differing only in the values of some key parameters, rather than having fundamental differences in architecture.
Together, these results help us understand how and why neurons develop preferences for visual features such as orientation, and how maps of these neurons are formed. The resulting models represent a synthesis of a large body of experimental evidence about V1 anatomy and function, and offer a platform for developing a more complete explanation of cortical function in future work.
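
The sketch below illustrates the basic ingredients such models share: Hebbian learning driven by oriented inputs, competition standing in for lateral interactions, and a normalization step standing in for a homeostatic mechanism. It is a generic toy, not the ALISSOM or LESI models themselves; the patch size, number of units, learning rate, and winner-take-all competition are all assumptions made for illustration.

    # Toy Hebbian self-organization of orientation preference with normalization.
    import numpy as np

    rng = np.random.default_rng(1)
    size = 9                               # input patches are size x size pixels
    n_units = 20                           # "cortical" units competing for inputs
    yy, xx = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]

    def grating(theta, phase):
        """Oriented sinusoidal grating patch, flattened and unit-norm."""
        patch = np.cos(4.0 * (xx * np.cos(theta) + yy * np.sin(theta)) + phase)
        return (patch / np.linalg.norm(patch)).ravel()

    w = rng.random((n_units, size * size))
    w /= np.linalg.norm(w, axis=1, keepdims=True)

    for step in range(5000):
        x = grating(rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi))
        response = w @ x
        winner = int(np.argmax(response))          # hard competition stands in for lateral inhibition
        activity = max(response[winner], 0.0)
        w[winner] += 0.05 * activity * x           # Hebbian update
        w[winner] /= np.linalg.norm(w[winner])     # homeostatic (divisive) normalization

    # Estimate each unit's preferred orientation by probing with test gratings.
    thetas = np.linspace(0, np.pi, 36, endpoint=False)
    probes = np.stack([grating(t, 0.0) for t in thetas])
    preferred = thetas[np.argmax(w @ probes.T, axis=1)]
    print(np.round(np.degrees(preferred), 1))
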
37

Pendyam, Sandeep Nair Satish S. "Computational neural modeling at the cellular and network levels: two case studies." Diss., Columbia, Mo. : University of Missouri--Columbia, 2007. http://hdl.handle.net/10355/4899.

Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on September 15, 2009). Thesis advisor: Satish S. Nair. Includes bibliographical references.
38

Harkin, Emerson. "A Simplified Serotonin Neuron Model." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38533.

39

Cogliati, Dezza Irene. "“Vanilla, Vanilla .but what about Pistachio?” A Computational Cognitive Clinical Neuroscience Approach to the Exploration-Exploitation Dilemma." Doctoral thesis, Universite Libre de Bruxelles, 2018. https://dipot.ulb.ac.be/dspace/bitstream/2013/278730/3/Document1.pdf.

Abstract:
On 24 November 1859, Charles Darwin published the first edition of The Origin of Species. One hundred fifty-nine years later, our understanding of human and animal adaptation to the surrounding environment remains a major scientific challenge. How do humans and animals generate apt decision strategies in order to achieve this adaptation? How does their brain efficiently carry out complex computations in order to produce such adaptive behaviors? Although an exhaustive answer to these questions continues to feel out of reach, investigating adaptive processing is relevant to understanding the mind/brain relationship and to elucidating scenarios in which mind/brain interactions break down, as in psychiatric disorders. Additionally, understanding how the brain efficiently scales problems when producing complex and adaptive behaviors can inspire and contribute to solving Artificial Intelligence (AI) problems (e.g. scaling, generalization) and, consequently, to the development of intelligent machines. During my PhD, I investigated adaptive behaviors at the behavioral, cognitive, and neural levels. I strongly believe that, as Marr already pointed out, in order to understand how our brain-machine works we need to investigate the phenomenon at three different levels: behavioral, algorithmic, and neural implementation. For this reason, throughout my doctoral work I took advantage of computational modeling methods together with cognitive neuroscience techniques in order to investigate the underlying mechanisms of adaptive behaviors.
Doctorate in Psychological and Educational Sciences
info:eu-repo/semantics/nonPublished
40

Benigni, Barbara. "Exploring the interplay between the human brain and the mind: a complex systems approach." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/346541.

Abstract:
The understanding of human brain mechanisms has captured the imagination of scientists for ages. From a quantitative perspective, there is evidence that damage to brain structure affects brain function and, as a consequence, cognition, just as there is evidence that brain structure might be affected by altered cognition. However, the complex interplay between the human brain and the mind remains poorly understood. This has important clinical consequences, limiting applications devoted to the prevention and treatment of brain diseases. In the present thesis, we aim to enhance our understanding of human brain mechanisms by means of an integrated, data-driven approach, adopting a systemic perspective and leveraging tools from computational and network neuroscience. We successfully enhance the state of the art of computational neuroscience in several ways. First, we inspect human cognition by focusing on the geometric exploration of concepts in the human mind, building new data-driven metrics to complement the neurological assessment and to confirm Alzheimer’s disease diagnosis. We formalize a new stochastic process, the potential-driven random walk, able to model the trade-off between exploitation and exploration of network structure by accounting for local and global information, providing a flexible tool that spans from random-walk to shortest-path-based navigation. Probing the interplay between brain structure and dynamics by means of its Von Neumann entropy, we develop a new framework for the multiscale analysis of the human connectome, which is effective for discerning between healthy conditions and Alzheimer’s disease. Finally, by integrating data from human brain structural connectivity, its functional response errors as measured by Direct Electrical Stimulation, and semantic selectivity, we propose a new procedure for mapping the triadic nature of the human brain, thus providing a model-oriented bridge between the human brain and mind. Besides shedding more light on human brain functioning, our findings offer original and promising clues for developing integrated biomarkers for Alzheimer’s disease detection, with the potential to extend to applications in other neurodegenerative diseases and psychiatric disorders.
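
As a concrete reference point for the entropy-based framework mentioned above, here is a minimal sketch of the Von Neumann entropy of a network, taking the graph Laplacian rescaled to unit trace as the "density matrix" (a common convention). The small example graph is hypothetical, not connectome data, and the thesis's multiscale machinery is not reproduced here.

    # Von Neumann entropy of a small undirected graph.
    import numpy as np

    def von_neumann_entropy(adjacency):
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency
        rho = laplacian / np.trace(laplacian)      # unit-trace "density matrix"
        eigvals = np.linalg.eigvalsh(rho)
        eigvals = eigvals[eigvals > 1e-12]         # convention: 0 log 0 = 0
        return float(-np.sum(eigvals * np.log2(eigvals)))

    # Toy undirected "connectome" of five regions.
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)
    print(f"Von Neumann entropy: {von_neumann_entropy(A):.3f} bits")
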
41

Hjorth, Johannes. "Information processing in the Striatum : a computational study." Licentiate thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3999.

42

Nguyen, Tung Le. "Computational Modeling of Slow Neurofilament Transport along Axons." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1547036394834075.

43

Benichoux, Victor. "Timing cues for azimuthal sound source localization." PhD thesis, Université René Descartes - Paris V, 2013. http://tel.archives-ouvertes.fr/tel-00931645.

Abstract:
Azimuth sound localization in many animals relies on the processing of differences in time-of-arrival of low-frequency sounds at the two ears: the interaural time differences (ITD). It has been observed in some species that this cue depends on the spectrum of the signal emitted by the source. Yet this variation is often discarded, as humans and animals are assumed to be insensitive to it. The purpose of this thesis is to assess this dependency using acoustical techniques and to explore the consequences of this additional complexity for the neurophysiology and psychophysics of sound localization. In the vicinity of rigid spheres, a sound field is diffracted, leading to frequency-dependent wave propagation regimes. Therefore, when the head is modeled as a rigid sphere, the ITD for a given position is a frequency-dependent quantity. I show that this is indeed reflected in human ITDs by studying acoustical recordings for a large number of human and animal subjects. Furthermore, I explain the effect of this variation at two scales. First, locally in frequency, the ITD introduces different envelope and fine-structure delays in the signals reaching the ears. Second, the ITD for low-frequency sounds is generally larger than for high-frequency sounds coming from the same position. In a second part, I introduce and discuss the current views on the binaural ITD-sensitive system in mammals. I show that the heterogeneous responses of such cells are well predicted when it is assumed that they are tuned to frequency-dependent ITDs. Furthermore, I discuss how those cells can be made to be tuned to a particular position in space regardless of the frequency content of the stimulus. Overall, I argue that current data in mammals are consistent with the hypothesis that cells are tuned to a single position in space. Finally, I explore the impact of the frequency-dependence of ITD on human behavior, using psychoacoustical techniques. Subjects are asked to match the lateral position of sounds presented with different frequency content. The results suggest that humans perceive sounds with different frequency contents at the same position provided that they have different ITDs, as predicted from the acoustical data. The extent to which this occurs is well predicted by a spherical model of the head. Combining approaches from different fields, I show that the binaural system is remarkably adapted to the cues available in its environment. This processing strategy used by animals can be a great source of inspiration for the design of robotic systems.
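
The frequency dependence of spherical-head ITDs can be seen from two commonly quoted closed-form approximations: a low-frequency (diffraction) limit and a high-frequency (ray-path) limit that give different delays for the same azimuth. The sketch below uses these textbook approximations with a nominal head radius; it is an illustration of the effect, not the acoustical measurements or the head model fitted in the thesis.

    # Low- vs high-frequency spherical-head ITD approximations.
    import numpy as np

    a = 0.0875        # assumed head radius (m)
    c = 343.0         # speed of sound (m/s)
    azimuths_deg = np.array([15, 30, 45, 60, 75, 90])
    theta = np.radians(azimuths_deg)

    itd_low = (3.0 * a / c) * np.sin(theta)          # low-frequency limit (Kuhn-style)
    itd_high = (a / c) * (theta + np.sin(theta))     # high-frequency limit (Woodworth-style)

    for az, lo, hi in zip(azimuths_deg, itd_low, itd_high):
        print(f"{az:3d} deg  ITD_low = {lo * 1e6:6.0f} us   ITD_high = {hi * 1e6:6.0f} us")

At 90 degrees the low-frequency approximation gives roughly 760 microseconds against roughly 660 for the high-frequency one, consistent with the abstract's observation that low-frequency ITDs are larger for the same source position.
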
44

Milano, Isabel. "The Characterization of Alzheimer’s Disease and the Development of Early Detection Paradigms: Insights from Nosology, Biomarkers and Machine Learning." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/cmc_theses/2192.

Abstract:
Alzheimer’s Disease (AD) is the only condition in the top ten leading causes of death for which we do not have an effective treatment that prevents, slows, or stops its progression. Our ability to design useful interventions relies on (a) increasing our understanding of the pathological process of AD and (b) improving our ability for its early detection. These goals are impeded by our current reliance on the clinical symptoms of AD for its diagnosis. This characterization of AD often falsely assumes a unified, underlying AD-specific pathology for similar presentations of dementia, which leads to inconsistent diagnoses. It also hinges on postmortem verification, and so is not a helpful method for identifying patients and research subjects in the beginning phases of the pathophysiological process. Instead, a new biomarker-based approach provides a more biological understanding of the disease and can detect pathological changes up to 20 years before the clinical symptoms emerge. Subjects are assigned a profile according to their biomarker measures of amyloidosis (A), tauopathy (T) and neurodegeneration (N) that reflects their underlying pathology in vivo. AD is confirmed as the underlying pathology when subjects have abnormal values of both amyloid and tauopathy biomarkers, and so have a biomarker profile of A+T+(N)- or A+T+(N)+. This new biomarker-based characterization of AD can be combined with machine learning techniques in multimodal classification studies to shed light on the elements of the AD pathological process and to develop early detection paradigms. A guiding research framework is proposed for the development of reliable, biologically valid and interpretable multimodal classification models.
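
A small sketch of how an A/T/(N) profile of the kind described above might be assigned is given below. The cut-points are placeholders for illustration only, since real thresholds are assay- and cohort-specific, and the rule that an A+T+ profile indicates AD pathology follows the logic summarised in the abstract rather than any particular implementation from the thesis.

    # Toy A/T/(N) profile assignment from biomarker values (hypothetical cut-points).
    from dataclasses import dataclass

    @dataclass
    class Biomarkers:
        csf_abeta42: float          # pg/mL; low values indicate amyloidosis (A)
        csf_ptau: float             # pg/mL; high values indicate tauopathy (T)
        hippocampal_fraction: float # volume / intracranial volume; low values indicate neurodegeneration (N)

    def atn_profile(b: Biomarkers) -> str:
        a = "+" if b.csf_abeta42 < 977.0 else "-"            # hypothetical cut-point
        t = "+" if b.csf_ptau > 27.0 else "-"                # hypothetical cut-point
        n = "+" if b.hippocampal_fraction < 0.0040 else "-"  # hypothetical cut-point
        return f"A{a}T{t}(N){n}"

    def consistent_with_ad(profile: str) -> bool:
        # AD pathology is indicated when both amyloid and tau are abnormal.
        return profile.startswith("A+T+")

    subject = Biomarkers(csf_abeta42=650.0, csf_ptau=35.0, hippocampal_fraction=0.0043)
    profile = atn_profile(subject)
    print(profile, "-> consistent with AD" if consistent_with_ad(profile) else "-> not an AD profile")
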
45

Merrison-Hort, Robert. "Computational study of the mechanisms underlying oscillation in neuronal locomotor circuits." Thesis, University of Plymouth, 2014. http://hdl.handle.net/10026.1/3107.

Abstract:
In this thesis we model two very different movement-related neuronal circuits, both of which produce oscillatory patterns of activity. In one case we study oscillatory activity in the basal ganglia under both normal and Parkinsonian conditions. First, we used a detailed Hodgkin-Huxley type spiking model to investigate the activity patterns that arise when oscillatory cortical input is transmitted to the globus pallidus via the subthalamic nucleus. Our model reproduced a result from rodent studies which shows that two anti-phase oscillatory groups of pallidal neurons appear under Parkinsonian conditions. Secondly, we used a population model of the basal ganglia to study whether oscillations could be locally generated. The basal ganglia are thought to be organised into multiple parallel channels. In our model, isolated channels could not generate oscillations, but if the lateral inhibition between channels is sufficiently strong then the network can act as a rhythm-generating "pacemaker" circuit. This was particularly true when we used a set of connection strength parameters that represent the basal ganglia under Parkinsonian conditions. Since many things are not known about the anatomy and electrophysiology of the basal ganglia, we also studied oscillatory activity in another, much simpler, movement-related neuronal system: the spinal cord of the Xenopus tadpole. We built a computational model of the spinal cord containing approximately 1,500 biologically realistic Hodgkin-Huxley neurons, with synaptic connectivity derived from a computational model of axon growth. The model produced physiological swimming behaviour and was used to investigate which aspects of axon growth and neuron dynamics are behaviourally important. We found that the oscillatory attractor associated with swimming was remarkably stable, which suggests that, surprisingly, many features of axonal growth and synapse formation are not necessary for swimming to emerge. We also studied how the same spinal cord network can generate a different oscillatory pattern in which neurons on both sides of the body fire synchronously. Our results here suggest that under normal conditions the synchronous state is unstable or weakly stable, but that even small increases in spike transmission delays act to stabilise it. Finally, we found that although the basal ganglia and the tadpole spinal cord are very different systems, the underlying mechanism by which they can produce oscillations may be remarkably similar. Insights from the tadpole model allow us to predict how the basal ganglia model may be capable of producing multiple patterns of oscillatory activity.
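
For readers unfamiliar with the building block used in such spiking models, the sketch below integrates a single Hodgkin-Huxley-type neuron with the standard squid-axon parameters under constant current injection. These are not the globus pallidus or tadpole channel kinetics used in the thesis; it only illustrates the class of model, using simple forward-Euler integration.

    # Single Hodgkin-Huxley neuron, standard parameters, forward-Euler integration.
    import numpy as np

    def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
    def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
    def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3       # uF/cm^2, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387             # mV
    dt, t_max, i_inj = 0.01, 200.0, 10.0              # ms, ms, uA/cm^2

    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spike_times, above = [], False
    for step in range(int(t_max / dt)):
        i_ion = (g_na * m ** 3 * h * (v - e_na)
                 + g_k * n ** 4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / c_m
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        if v > 0.0 and not above:                     # crude upward-crossing spike detection
            spike_times.append(step * dt)
        above = v > 0.0

    print(f"{len(spike_times)} spikes, mean rate ~ {1000.0 * len(spike_times) / t_max:.1f} Hz")
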
46

Hunt, Alexander Jacob. "Neurologically Based Control for Quadruped Walking." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1445947104.

47

Lieuw, Iris. "Time Frequency Analysis of Neural Oscillations in Multi-Attribute Decision-Making." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/scripps_theses/556.

Abstract:
In our daily lives, we often make decisions that require the use of self-control, weighing trade-offs between various attributes: for example, selecting a food based on its health rather than its taste. Previous research suggests that re-weighting attributes may rely on selective attention, associated with decreased neural oscillations over posterior brain regions in the alpha (8-12 Hz) frequency range. Here, we utilized the high temporal resolution and whole-brain coverage of electroencephalography (EEG) to test this hypothesis in data collected from hungry human subjects exercising dietary self-control. Prior analysis of this data has found time-locked neural activity associated with each food’s perceived taste and health properties from approximately 400 to 650 ms after stimulus onset (Harris et al., 2013). We conducted time-frequency analyses to examine the role of alpha-band oscillations in this attribute weighting. Specifically, we predicted that there would be decreased alpha power in posterior electrodes beginning approximately 400 ms after stimulus onset for the presentation of healthy food relative to unhealthy food, reflecting shifts in selective attention. Consistent with this hypothesis, we found a significant decrease in alpha power for presentations of healthy relative to unhealthy foods. As predicted, this effect was most pronounced at posterior occipital and parietal electrodes and was significant from approximately 450 to 700 ms post-stimulus onset. Additionally, we found significant alpha-band decreases in right temporal electrodes during these times. These results extend previous attention research to multi-attribute choice, suggesting that the re-weighting of attributes can be measured neuro-computationally.
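
The sketch below shows the kind of time-frequency step described above: alpha-band (8-12 Hz) power estimated over time with complex Morlet wavelets, here applied to a synthetic signal whose alpha amplitude drops after a nominal "stimulus onset". The sampling rate, wavelet width, and signal are assumptions for illustration, not the EEG data or analysis pipeline of the thesis.

    # Alpha-band power over time via complex Morlet wavelet convolution.
    import numpy as np

    fs = 250.0                                   # Hz, assumed sampling rate
    t = np.arange(-0.5, 1.5, 1.0 / fs)           # s, relative to "stimulus onset"
    rng = np.random.default_rng(2)
    alpha_amp = np.where(t < 0.45, 1.0, 0.5)     # alpha weakens ~450 ms post-onset
    signal = alpha_amp * np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)

    def morlet_power(x, freq, fs, n_cycles=6):
        """Power over time at one frequency via convolution with a complex Morlet wavelet."""
        sigma_t = n_cycles / (2.0 * np.pi * freq)
        wt = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt ** 2 / (2 * sigma_t ** 2))
        wavelet /= np.sum(np.abs(wavelet))
        return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

    alpha_power = np.mean([morlet_power(signal, f, fs) for f in (8, 9, 10, 11, 12)], axis=0)
    early = alpha_power[(t >= 0.0) & (t < 0.4)].mean()
    late = alpha_power[(t >= 0.45) & (t < 0.7)].mean()
    print(f"alpha power 0-400 ms: {early:.3f}   450-700 ms: {late:.3f}")
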
48

Topalidou, Meropi. "Neuroscience of decision making : from goal-directed actions to habits." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0174/document.

Abstract:
Action-outcome and stimulus-response processes are two important components of behavior. The former evaluates the benefit of an action in order to choose the best action among those available (action selection), while the latter is responsible for automatic behavior, eliciting a response as soon as a known stimulus is present. Such habits are generally associated with (and mostly opposed to) goal-directed actions, which require a deliberative process to evaluate the best option to take in order to reach a given goal. Using a computational model, we investigated the classic hypothesis of habit formation and expression in the basal ganglia and proposed a new hypothesis concerning the respective roles of the basal ganglia and the cortex. Inspired by previous theoretical and experimental works (Leblois et al., 2006; Guthrie et al., 2013), we designed a computational model of the basal ganglia-thalamus-cortex system that uses segregated loops (motor, cognitive and associative) and makes the hypothesis that the basal ganglia are necessary only for the acquisition of habits, while the expression of such habits can be mediated through the cortex alone. Furthermore, this model predicts the existence of covert learning within the basal ganglia when their output is inhibited. Using a two-armed bandit task, this hypothesis has been experimentally tested and confirmed in the monkey. Finally, this work suggests revising the classical idea that automatism is a subcortical feature.
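
For orientation, the sketch below implements the behavioural task mentioned above, a two-armed bandit, with a simple value-learning agent and softmax action selection. It is a generic illustration of the task, not the basal ganglia-thalamus-cortex model of the thesis; the reward probabilities, learning rate, and temperature are hypothetical.

    # Two-armed bandit with delta-rule value learning and softmax choice.
    import numpy as np

    rng = np.random.default_rng(3)
    p_reward = np.array([0.75, 0.25])     # each arm's reward probability
    q = np.zeros(2)                       # learned action values
    alpha, beta = 0.1, 5.0                # learning rate, softmax inverse temperature
    choices = []

    for trial in range(500):
        logits = beta * q
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        choice = rng.choice(2, p=probs)               # softmax action selection
        reward = float(rng.random() < p_reward[choice])
        q[choice] += alpha * (reward - q[choice])     # delta-rule value update
        choices.append(choice)

    print(f"best arm chosen on {np.mean(np.array(choices[-100:]) == 0):.0%} of the last 100 trials")
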
49

Mender, Bedeho M. W. "Models of primate supraretinal visual representations." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:ce1fff8e-db5c-46e4-b5aa-7439465c2a77.

Abstract:
This thesis investigates a set of non-classical visual receptive field properties observed in the primate brain. Two main phenomena were explored. The first phenomenon was neurons with head-centered visual receptive fields, in which a neuron responds maximally to a visual stimulus in the same head-centered location across all eye positions. The second phenomenon was perisaccadic receptive field dynamics, which involves a range of experimentally observed response behaviours of an eye-centered neuron associated with the advent of a saccade that relocates the neuron's receptive field. For each of these two phenomena, a hypothesis was proposed for how a neural circuit with a suitable initial architecture and synaptic learning rules could, when subjected to visually-guided training, develop the receptive field properties in question. Corresponding neural network models were first trained as hypothesized, and subsequently tested in conditions similar to experimental tasks used to interrogate the physiology of the relevant primate neural circuits. The behaviour of the models was compared to neurophysiological observations as a metric for their explanatory power. In both cases the neural network models were in broad agreement with experimental observations, and the operation of these models was studied to shed light on the neural processing behind these neural phenomena in the brain.
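
To illustrate the first response property discussed above, the sketch below shows what a head-centered visual receptive field implies: if the response depends on retinal position plus eye position, the preferred retinal location shifts with gaze while the preferred head-centered location stays fixed. This only describes the response property the trained networks develop; it is not the network models themselves, and the tuning width and preferred location are arbitrary.

    # Head-centered receptive field: response depends on retinal + eye position.
    import numpy as np

    preferred_head_centered = 10.0     # deg, the neuron's preferred head-centered location
    sigma = 5.0                        # deg, receptive field width

    def response(retinal_pos, eye_pos):
        head_centered = retinal_pos + eye_pos
        return np.exp(-(head_centered - preferred_head_centered) ** 2 / (2 * sigma ** 2))

    retinal = np.linspace(-30, 30, 121)
    for eye in (-10.0, 0.0, 10.0):
        best_retinal = retinal[np.argmax(response(retinal, eye))]
        print(f"eye at {eye:+5.1f} deg -> peak at retinal {best_retinal:+5.1f} deg "
              f"(head-centered {best_retinal + eye:+5.1f} deg)")
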
50

Goings, Sydney Pia. "Neural Synchrony in the Zebra Finch Brain." Scholarship @ Claremont, 2012. https://scholarship.claremont.edu/scripps_theses/41.

Abstract:
I am interested in discovering the role of field potential oscillations in producing synchrony within the song system of the male zebra finch brain. An important function attributed to neural synchrony is sensorimotor integration. In the production of birdsong, sensorimotor integration is crucial, as auditory feedback is necessary for the maintenance of the song. A cortical-thalamic-cortical feedback loop is thought to play a role in the integration of auditory and motor information for the purpose of producing song. Synchronous activity has been observed between at least two nuclei in this feedback loop, MMAN and HVC. Since low frequency field potential oscillations have been shown to play a role in the synchronization of nuclei within the brain of other model animals, I hypothesized that this may be the case in the zebra finch song system. In order to investigate whether oscillatory activity is a mechanism behind the synchronous activity observed between HVC and MMAN, I performed dual extracellular recordings of neural activity within the zebra finch song system. Results suggest that oscillations are likely not involved in the synchrony observed in these nuclei. Future study may reveal that the structure of the feedback loop is necessary, and possibly even sufficient, for the synchronous activity in the zebra finch song system.
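
One common way to quantify the oscillatory synchrony discussed above is the magnitude-squared coherence between two simultaneously recorded field potentials. The sketch below applies it to synthetic signals sharing a common low-frequency rhythm; the signals and the 20 Hz rhythm are stand-ins for illustration, not the HVC/MMAN recordings or the specific analysis of the thesis.

    # Magnitude-squared coherence between two simulated field potentials.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0                                    # Hz, assumed sampling rate
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(4)
    shared = np.sin(2 * np.pi * 20.0 * t)          # common oscillation
    lfp_a = shared + rng.standard_normal(t.size)
    lfp_b = 0.8 * shared + rng.standard_normal(t.size)

    f, cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)
    band = (f >= 15) & (f <= 25)
    print(f"mean coherence 15-25 Hz: {cxy[band].mean():.2f}; "
          f"mean coherence 40-60 Hz: {cxy[(f >= 40) & (f <= 60)].mean():.2f}")
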
