Dissertations / Theses on the topic 'Models of neural elements'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Models of neural elements.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Tuchołka, Andrzej. "Methodology for assessing the construction of machine elements using neural models and antipatterns : doctoral dissertation." Doctoral dissertation, [s.n.], 2020. http://dlibra.tu.koszalin.pl/Content/1317.
Miocinovic, Svjetlana. "Theoretical and experimental predictions of neural elements activated by deep brain stimulation." Case Western Reserve University School of Graduate Studies / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=case1181758206.
Tomov, Petar Georgiev. "Interplay of dynamics and network topology in systems of excitable elements." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17464.
In this work we study global dynamical phenomena which emerge as a result of the interplay between network topology and single-node dynamics in systems of excitable elements. We first focus on relatively small structured networks with comprehensible complexity in terms of graph symmetries. We discuss the constraints posed by the network topology on the dynamical flow in the phase space of the system and on the admissible synchronized states. In particular, we are interested in the stability properties of flow-invariant polydiagonals and in the evolution of attractors in the parameter spaces of such systems. As a suitable theoretical framework describing excitable elements we use the Kuramoto and Shinomoto model of sinusoidally coupled "active rotators". We investigate planar hexagonal lattices of different sizes with periodic boundary conditions. We study general conditions posed on the adjacency matrix of the networks, enabling the Watanabe-Strogatz reduction, and discuss different examples. Finally, we present a generic analysis of bifurcations taking place on the submanifold associated with the Watanabe-Strogatz reduced system. In the second part of the work we investigate a global dynamical phenomenon in neuronal networks known as self-sustained activity (SSA). We consider networks of hierarchical and modular topology, comprising neurons of different cortical electrophysiological cell classes. We show that in the investigated neural networks, SSA states with spiking characteristics similar to the ones observed experimentally can exist. By analyzing the dynamics of single neurons, as well as the phase space of the whole system, we explain the importance of inhibition for sustaining the global oscillatory activity of the network. Furthermore, we show that both the network architecture, in terms of modularity level, and the mixture of excitatory and inhibitory neurons, in terms of different cell classes, influence the lifetime of SSA.
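As a minimal sketch of the "active rotator" model this abstract builds on (each unit obeys dtheta/dt = omega - a*sin(theta) + coupling + noise, and is excitable for a > omega), the Python snippet below integrates a small all-to-all network; the hexagonal lattice topology and all parameter values of the thesis are replaced here by arbitrary illustrative choices.

    import numpy as np

    # Shinomoto-Kuramoto active rotators: excitable for a > omega.
    # All-to-all coupling and every constant below are illustrative
    # assumptions, not the lattices or parameters used in the thesis.
    rng = np.random.default_rng(0)
    N, omega, a, K, D = 100, 1.0, 1.05, 0.5, 0.02
    dt, steps = 0.01, 5000
    theta = rng.uniform(0.0, 2.0 * np.pi, N)

    for _ in range(steps):
        pairwise = np.sin(theta[:, None] - theta[None, :])  # entry (j, i) = sin(theta_j - theta_i)
        drift = omega - a * np.sin(theta) + (K / N) * pairwise.sum(axis=0)
        theta += dt * drift + np.sqrt(2.0 * D * dt) * rng.standard_normal(N)

    # Kuramoto order parameter: near 1 for synchrony, near 0 for incoherence.
    R = np.abs(np.exp(1j * theta).mean())
    print(f"order parameter R = {R:.3f}")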
Wadagbalkar, Pushkar. "Real-time prediction of projectile penetration to laminates by training machine learning models with finite element solver as the trainer." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592169428128864.
Citipitioglu, Ahmet Muhtar. "Development and assessment of response and strength models for bolted steel connections using refined nonlinear 3D finite element analysis." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31691.
Committee Chair: Haj-Ali, Rami; Committee Co-Chair: Leon, Roberto; Committee Co-Chair: White, Donald; Committee Member: DesRoches, Reginald; Committee Member: Gentry, Russell. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем" [Models and methods for building the architecture and components of detector neuromorphic computer systems]. Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34755.
Dissertation for the degree of Doctor of Technical Sciences in the specialty 05.13.05 – Computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Ministry of Education and Science of Ukraine, Kharkiv, 2018. The thesis is devoted to solving the problem of increasing the efficiency of building and using neuromorphic computer systems (NCS) by developing models for constructing their components and general architecture, as well as methods for their training based on the formalized detection principle. The analysis and classification of NCS architectures and components establishes that the connectionist paradigm for constructing artificial neural networks underlies all neural network implementations. The detector principle of constructing the architecture of the NCS and its components, an alternative to the connectionist paradigm, was substantiated and formalized. This principle is based on the binding between the elements of the input signal vector and the corresponding weighting coefficients of the NCS. On the basis of the detector principle, multi-segment threshold information models were developed for the components of the detector NCS (DNCS): block-detectors, block-analyzers, and a novelty block. Under the developed method of counter training, these components form concepts that determine the necessary and sufficient conditions for the formation of reactions. The method of counter training of a DNCS reduces training time for practical image-recognition problems to as little as one epoch and reduces the required size of the training sample. In addition, this method makes it possible to solve the stability-plasticity problem of DNCS memory and the problem of its overfitting, based on the self-organization of a map of block-detectors at the secondary level of information processing under the control of a novelty block. As a result of the research, a model of the network architecture of the DNCS was developed, which consists of two layers of neuromorphic components at the primary and secondary levels of information processing and which reduces the number of components the system needs. To substantiate the increased efficiency of constructing and using NCS on the basis of the detector principle, software models were developed for automated monitoring and analysis of the external electromagnetic environment, as well as for recognition of the handwritten digits of the MNIST database. The results of the study of these systems confirmed the correctness of the theoretical provisions of the dissertation and the high efficiency of the developed models and methods.
Levin, Robert Ian. "Dynamic Finite Element model updating using neural networks." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264075.
Stevenson, King Douglas Beverley. "Robust hardware elements for weightless artificial neural networks." Thesis, University of Central Lancashire, 2000. http://clok.uclan.ac.uk/1884/.
Venkov, Nikola A. "Dynamics of neural field models." Thesis, University of Nottingham, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517742.
Oglesby, J. "Neural models for speaker recognition." Thesis, Swansea University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638359.
Taylor, Neill Richard. "Neural models of temporal sequences." Thesis, King's College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300844.
Whitney, William F. "Disentangled representations in neural models." M. Eng. thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106449.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Representation learning is the foundation for the recent success of neural network models. However, the distributed representations generated by neural networks are far from ideal. Due to their highly entangled nature, they are difficult to reuse and interpret, and they do a poor job of capturing the sparsity which is present in real-world transformations. In this paper, I describe methods for learning disentangled representations in the two domains of graphics and computation. These methods allow neural methods to learn representations which are easy to interpret and reuse, yet they incur little or no penalty to performance. In the Graphics section, I demonstrate the ability of these methods to infer the generating parameters of images and rerender those images under novel conditions. In the Computation section, I describe a model which is able to factorize a multitask learning problem into subtasks and which experiences no catastrophic forgetting. Together these techniques provide the tools to design a wide range of models that learn disentangled representations and better model the factors of variation in the real world.
by William Whitney.
M. Eng.
Gao, Yun. "Statistical models in neural information processing /." View online version; access limited to Brown University users, 2005. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3174606.
Schmidt, Helmut. "Interface dynamics in neural field models." Thesis, University of Nottingham, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597110.
McCabe, Susan Lynda. "Neural models of subcortical auditory processing." Thesis, University of Plymouth, 1994. http://hdl.handle.net/10026.1/2167.
Meng, Liang. "Statistical inferences of biophysical neural models." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12819.
A fundamental issue in neuroscience is to understand the dynamic properties of, and biological mechanisms underlying, neural spiking activity. Two types of approaches have been developed: statistical and biophysical modeling. Statistical models focus on describing simple relationships between observed neural spiking activity and the signals that the brain encodes. Biophysical models concentrate on describing the biological mechanisms underlying the generation of spikes. Despite a large body of work, there remains an unbridged gap between the two model types. In this thesis, we propose a statistical framework linking observed spiking patterns to a general class of dynamic neural models. The framework uses a sequential Monte Carlo, or particle filtering, method to efficiently explore the parameter space of a detailed dynamic or biophysical model. We utilize point process theory to develop a procedure for estimating parameters and hidden variables in neuronal biophysical models given only the observed spike times. We successfully implement this method for simulated examples and address the issues of model identification and misspecification. We then apply the particle filter to actual spiking data recorded from rat layer V cortical neurons and show that it correctly identifies the dynamics of a non-traditional, intrinsic current. The method succeeds even though the observed cells exhibit two distinct classes of spiking activity: regular spiking and bursting. We propose that the approach can also frame hypotheses of underlying intrinsic currents that can be directly tested by current or future biological and/or psychological experiments. We then demonstrate the application of the proposed method to a separate problem: constructing a hypothesis test to investigate whether a point process is generated by a constant or dynamically varying intensity function. The hypothesis is formulated as an autoregressive state space model, which reduces the testing problem to a test on the variance of the state process. We apply the particle filtering method to compute test statistics and identify the rejection region. A simulation study is performed to quantify the power of this test procedure. Ultimately, the modeling framework and estimation procedures we developed provide a successful link between dynamical neural models and statistical inference from spike train data.
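To make the abstract's sequential Monte Carlo step concrete, here is a minimal bootstrap particle filter on a toy problem: a latent AR(1) state observed only through Poisson spike counts. The dynamics and every parameter are invented for illustration and are far simpler than the biophysical models the thesis actually estimates.

    import numpy as np

    # Toy model: latent x_t is AR(1); spike counts are Poisson with
    # rate exp(b0 + x_t) * bin_width. All values are assumptions.
    rng = np.random.default_rng(1)
    T, P = 200, 500                          # time bins, particles
    a, sig, b0, bin_width = 0.98, 0.15, 1.0, 0.05

    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = a * x_true[t - 1] + sig * rng.standard_normal()
    spikes = rng.poisson(np.exp(b0 + x_true) * bin_width)

    particles, est = np.zeros(P), np.zeros(T)
    for t in range(T):
        particles = a * particles + sig * rng.standard_normal(P)  # propagate
        lam = np.exp(b0 + particles) * bin_width
        logw = spikes[t] * np.log(lam) - lam                      # Poisson log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = w @ particles                                    # filtered posterior mean
        particles = particles[rng.choice(P, size=P, p=w)]         # multinomial resampling

    print("mean absolute tracking error:", np.abs(est - x_true).mean())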
Murphy, Eric James. "Cell culture models for neural trauma /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487672245901403.
Williams, Bryn V. "Evolutionary neural networks : models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.
Hely, Timothy Alasdair. "Computational models of developing neural systems." Thesis, University of Edinburgh, 1999. http://hdl.handle.net/1842/22303.
Full textIonescu, Armand-Mihai. "Membrane computing: traces, neural inspired models, controls." Doctoral thesis, Universitat Rovira i Virgili, 2008. http://hdl.handle.net/10803/8790.
Author: Armand-Mihai Ionescu
Supervisors: Dr. Victor Mitrana (URV) and Dr. Takashi Yokomori (Waseda University, Japan)
Summary:
The present work is dedicated to a very active branch of natural computing (which tries to discover the way nature computes, especially at the biological level), namely membrane computing; more precisely, to those models of membrane systems mainly inspired by the functioning of the neural cell.
The present dissertation contributes to membrane computing in three main directions. First, we introduce a new way of defining the result of a computation by means of following the traces of a specified object within a cell structure or a neural structure. Then, we get closer to the biology of the brain, considering various ways to control the computation by means of inhibiting/de-inhibiting processes. Third, we introduce and investigate in great detail (though in a preliminary fashion, as many issues remain to be clarified) a class of P systems inspired by the way neurons cooperate by means of spikes, electrical pulses of identical shape.
Xu, Shuxiang. "Neuron-adaptive neural network models and applications." Thesis, Faculty of Informatics Science and Technology, University of Western Sydney, 1999. http://handle.uws.edu.au:8081/1959.7/275.
Doctor of Philosophy (PhD)
Tang, Chuan Zhang. "Artificial neural network models for digital implementation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq30298.pdf.
Full textXu, Shuxiang. "Neuron-adaptive neural network models and applications /." [Campbelltown, N.S.W. : The Author], 1999. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030702.085320/index.html.
Full textBraga, AntoÌ‚nio de PaÌdua. "Design models for recursive binary neural networks." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336442.
Full textRipley, Ruth Mary. "Neural network models for breast cancer prognosis." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244721.
Full textForrest, B. M. "Memory and optimisation in neural network models." Thesis, University of Edinburgh, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384164.
Full textWest, Ansgar Heinrich Ludolf. "Role of biases in neural network models." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/11546.
Full textMo, Mimi Shin Ning. "Neural vulnerability in models of Parkinson's disease." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:ac82e1c1-5d9f-473f-97ac-fcb70b2587ca.
Full textVeltz, Romain, and Romain Veltz. "Nonlinear analysis methods in neural field models." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00686695.
Full textMELLEM, MARCELO TOURASSE NASSIM. "AUTOREGRESSIVE-NEURAL HYBRID MODELS FOR TIME SERIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14541@1.
In this thesis we develop a piece-wise linear model named ARN model. Our model has a hybrid structure which combines autoregressive models and neural networks. We compare our model to the fixed-coefficient AR model and to the prediction static neural network. Our results show that ARN is able to find the non-linear structure of simulated data and in most cases it performs better than the methods mentioned above.
Beck, Amanda M. "State space models for isolating neural oscillations." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120408.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 55-56).
Information communication in the brain depends on the spiking patterns of neurons. The interaction of these cells at the population level can be observed as oscillations of varying frequency and power in local field potential recordings as well as in non-invasive scalp electroencephalograms (EEG). These oscillations are thought to be responsible for coordinating activity across larger brain regions and conveying information across the brain, directing processes such as attention, consciousness, and sensory and information processing. A common approach for analyzing these electrical potentials is to apply a band-pass filter in the frequency band of interest. Canonical frequency bands have been defined and applied in many previous studies, but their specific definitions vary within the field and are to some degree arbitrary. We propose an alternative approach that uses state space models to represent basic physiological and dynamic principles, whose detailed structure and parameterization are informed by observed data. We find that this method can more accurately represent oscillatory power, effectively separating it from background broadband noise power. This approach provides a way of separating oscillations in the time domain while also quantifying their structure efficiently with a small number of parameters.
by Amanda M. Beck.
S.M. in Computer Science and Engineering
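The oscillator state-space idea in the Beck abstract above can be tried in a few lines: a damped two-dimensional rotation generates the latent oscillation, the observation adds broadband noise, and a Kalman filter recovers the oscillatory component. The frequency, damping, and noise levels below are illustrative guesses, not values fitted in the thesis.

    import numpy as np

    # Latent oscillation as a damped 2-D rotation at f Hz, observed in
    # broadband noise. All parameter values are illustrative assumptions.
    rng = np.random.default_rng(2)
    rho, f, fs, T = 0.99, 10.0, 250.0, 1000
    w = 2.0 * np.pi * f / fs
    F = rho * np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])
    Q, R = 0.1 * np.eye(2), 1.0
    H = np.array([[1.0, 0.0]])

    x = np.zeros((T, 2))
    y = np.zeros(T)
    for t in range(1, T):
        x[t] = F @ x[t - 1] + rng.multivariate_normal([0.0, 0.0], Q)
        y[t] = x[t, 0] + np.sqrt(R) * rng.standard_normal()

    # Kalman filter: the filtered first state component is the isolated oscillation.
    m, P, osc = np.zeros(2), np.eye(2), np.zeros(T)
    for t in range(T):
        m, P = F @ m, F @ P @ F.T + Q             # predict
        S = (H @ P @ H.T).item() + R              # innovation variance
        K = (P @ H.T) / S                         # Kalman gain, shape (2, 1)
        m = m + K.ravel() * (y[t] - (H @ m).item())
        P = P - K @ H @ P
        osc[t] = m[0]

    print("correlation with the true oscillation:", np.corrcoef(osc, x[:, 0])[0, 1])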
Lei, Tao. "Interpretable neural models for natural language processing." Ph. D. thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108990.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 109-119).
The success of neural network models often comes at a cost of interpretability. This thesis addresses the problem by providing justifications behind the model's structure and predictions. In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes from convolution operations and gated aggregations. As justifications, we relate this component to string kernels, i.e. functions measuring the similarity between sequences, and demonstrate how it encodes the efficient kernel computing algorithm into its structure. The proposed model achieves state-of-the-art or competitive results compared to alternative architectures (such as LSTMs and CNNs) across several NLP applications. In the second part, we learn rationales behind the model's prediction by extracting input pieces as supporting evidence. Rationales are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by the desiderata for rationales. We demonstrate the effectiveness of this learning framework in applications such as multi-aspect sentiment analysis. Our method achieves a performance over 90% evaluated against manually annotated rationales.
by Tao Lei.
Ph. D.
Kryściński, Wojciech. "Training Neural Models for Abstractive Text Summarization." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-236973.
Abstractive text summarization aims to condense long text documents into a shortened, human-readable form while preserving the most important information in the source document. A common approach to training summarization models is maximum likelihood estimation with the teacher-forcing strategy. Despite its popularity, this method has been shown to yield models with suboptimal performance at inference time. This work examines how the use of alternative, task-specific training signals affects the performance of summarization models. Two novel training signals are proposed and evaluated as part of this work. The first, a novel metric, measures the overlap between n-grams in the summary and the summarized article. The second uses a discriminator model to distinguish human-written summaries from generated ones at the word level. Empirical results show that using these metrics as rewards for policy-gradient training yields significant performance gains as measured by ROUGE scores, novelty score, and human evaluation.
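The first training signal described above (n-gram overlap between the generated summary and the source article) is easy to sketch as a scalar reward for policy-gradient training; the exact formulation used in the thesis may differ, so treat the normalization and choice of n below as assumptions.

    from collections import Counter

    # Clipped n-gram overlap between a summary and its source article,
    # usable as a reward in REINFORCE-style training. The normalization
    # and n are assumptions, not the thesis's exact definition.
    def ngrams(tokens, n):
        return Counter(zip(*[tokens[i:] for i in range(n)]))

    def overlap_reward(summary, article, n=2):
        s, a = ngrams(summary, n), ngrams(article, n)
        matched = sum((s & a).values())       # clipped matches, as in BLEU
        return matched / max(1, sum(s.values()))

    r = overlap_reward("the cat sat on the mat".split(),
                       "the cat sat quietly on the mat".split())
    print(f"bigram overlap reward: {r:.2f}")  # 4 of 5 summary bigrams match -> 0.80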
Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.
Full textLabeau, Matthieu. "Neural language models : Dealing with large vocabularies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
This work investigates practical methods to ease training and improve performances of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost: it depends on the size of the vocabulary, with which it grows linearly. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, ensuring that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and experimentally show that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives from which noise contrastive estimation is a particular case. Finally, we aim at improving performances on full-vocabulary language models by augmenting output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
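For orientation, the noise contrastive estimation objective discussed above turns next-word prediction into classifying the observed word against k words drawn from a noise distribution q; with the usual self-normalization assumption, the posterior is a sigmoid of s(w) - log(k*q(w)). The numbers below are made up, and this per-token toy omits minibatching, subwords, and the thesis's Bregman-divergence generalization.

    import numpy as np

    # Per-token NCE loss: the model score s(w) plays the role of log p(w),
    # assumed self-normalized. Scores and noise probabilities are invented.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def nce_loss(s_target, s_noise, q_target, q_noise, k):
        p_data = sigmoid(s_target - np.log(k * q_target))   # P(class=data | target word)
        p_noise = sigmoid(s_noise - np.log(k * q_noise))    # P(class=data | noise words)
        return -(np.log(p_data) + np.log(1.0 - p_noise).sum())

    loss = nce_loss(s_target=2.0, s_noise=np.array([0.3, -1.2]),
                    q_target=0.01, q_noise=np.array([0.05, 0.02]), k=2)
    print(f"NCE loss for one token: {loss:.3f}")

The appeal of this design is that the loss never touches the full vocabulary: only the target word and the k noise samples are scored, which is what makes training cost independent of vocabulary size.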
Fan, Xuetong. "Laminar Flow Control Models with Neural Networks /." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487929745334864.
Full textTomes, Hayley Sarah. "Investigating neural responses in models of neurocysticercosis." Doctoral thesis, Faculty of Health Sciences, 2021. http://hdl.handle.net/11427/33046.
Full textVeltz, Romain. "Nonlinear analysis methods in neural field models." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1056/document.
This thesis deals with mesoscopic models of cortex called neural fields. The neural field equations describe the activity of neuronal populations with common anatomical/functional properties. They were introduced in the 1950s and are called the equations of Wilson and Cowan. Mathematically, they consist of integro-differential equations with delays, the delays modeling the signal propagation and the passage of signals across synapses and the dendritic tree. In the first part, we recall the biology necessary to understand this thesis and derive the main equations. In the second part, we study these equations with the theory of dynamical systems, characterizing their equilibrium points and dynamics. In the third part, we study these delayed equations in general by giving formulas for the bifurcation diagrams, by proving a center manifold theorem, and by calculating the principal normal forms. We apply these results to one-dimensional neural fields, which allows a detailed study of the dynamics. Finally, in the last part, we study three models of visual cortex. The first two models are from the literature and describe respectively a hypercolumn, i.e. the basic element of the first visual area (V1), and a network of such hypercolumns. The last model is a new model of V1 which generalizes the two previous models while allowing a detailed study of the specific effects of delays.
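The delayed integro-differential equations described above can be integrated directly once space is discretized; the sketch below advances a Wilson-Cowan-type field on a ring with distance-dependent axonal delays. The kernel, sigmoid gain, propagation speed, and grid are all illustrative assumptions rather than the thesis's models.

    import numpy as np

    # dV/dt(x) = -V(x) + sum_y w(x - y) * S(V(y, t - |x - y| / c)) * dx on a ring.
    # Every constant below is an illustrative assumption.
    N, L, c, dt, T = 100, 10.0, 1.0, 0.01, 2000
    x = np.linspace(0.0, L, N, endpoint=False)
    dx = L / N
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, L - d)                      # periodic (ring) distance
    w = np.exp(-d) * (1.0 - d)                    # local excitation, lateral inhibition
    delay = (d / c / dt).astype(int)              # pairwise delays in steps of dt
    buf = delay.max() + 1                         # history buffer length

    S = lambda v: 1.0 / (1.0 + np.exp(-5.0 * (v - 0.5)))
    V = 0.6 * np.random.default_rng(3).random(N)
    hist = np.tile(V, (buf, 1))                   # constant initial history

    for t in range(T):
        hist[t % buf] = V                         # store V(., t)
        past = (t - delay) % buf                  # time slice seen by each (x, y) pair
        delayed = S(hist[past, np.arange(N)[None, :]])
        V = V + dt * (-V + dx * (w * delayed).sum(axis=1))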
Westermann, Gert. "Constructivist neural network models of cognitive development." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/22733.
Full textWedgwood, Kyle C. A., Kevin K. Lin, Ruediger Thul, and Stephen Coombes. "Phase-Amplitude Descriptions of Neural Oscillator Models." BioMed Central, 2013. http://hdl.handle.net/10150/610255.
Full textYau, Hon Wah. "Phase space techniques in neural network models." Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/14713.
Full textLineaweaver, Sean Kenneth Ridgway. "Dynamic spiral lumped element model of electrical field distribution and neural excitation in the implanted cochlea /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/6092.
Full textHerrington, William Frederick Jr. "Micro-optic elements for a compact opto-electronic integrated neural coprocessor." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97800.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 165-167).
The research done for this thesis was aimed at developing the optical elements needed for the Compact Opto-electronic Integrated Neural coprocessor (COIN coprocessor) project. The COIN coprocessor is an implementation of a feed forward neural network using free-space optical interconnects to communicate between neurons. Prior work on this project had assumed these interconnects would be formed using Holographic Optical Elements (HOEs), so early work for this thesis was directed along these lines. Important limits to the use of HOEs in the COIN system were identified and evaluated. In particular, the problem of changing wavelength between the hologram recording and readout steps was examined and it was shown that there is no general solution to this problem when the hologram to be recorded is constructed with more than two plane waves interfering with each other. Two experimental techniques, the holographic bead lens and holographic liftoff, were developed as partial workarounds to the identified limitations. As an alternative to HOEs, an optical element based on the concept of the Fresnel Zone Plate was developed and experimentally tested. The zone plate based elements offer an easily scalable method for fabricating the COIN optical interconnects using standard lithographic processes and appear to be the best choice for the COIN coprocessor project at this time. In addition to the development of the optical elements for the COIN coprocessor, this thesis also looks at the impact of optical element efficiency on the power consumption of the COIN coprocessor. Finally, a model of the COIN network based on the current COIN design was used to compare the performance and cost of the COIN system with competing implementations of neural networks, with the conclusion that at this time the proposed COIN coprocessor system is still a competitive option for neural network implementations.
by William Frederick Herrington Jr.
Ph. D.
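Since the abstract above settles on zone-plate-based interconnect elements, the standard Fresnel zone plate design rule gives a feel for the scales involved: the n-th zone boundary sits at r_n = sqrt(n*lam*f + (n*lam/2)^2). The wavelength and focal length below are arbitrary stand-ins, not the COIN design values.

    import numpy as np

    # Fresnel zone plate zone-boundary radii for focal length f at
    # wavelength lam. Both values are assumptions for illustration.
    lam = 850e-9                    # wavelength in meters (near infrared)
    f = 500e-6                      # focal length in meters
    n = np.arange(1, 9)
    r = np.sqrt(n * lam * f + (n * lam / 2.0) ** 2)
    for ni, ri in zip(n, r):
        print(f"zone {ni}: radius = {ri * 1e6:.2f} um")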
Fischer, Shain Ann. "A Three-Dimensional Anatomically Accurate Finite Element Model for Nerve Fiber Activation Simulation Coupling." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1365.
Full textIsraeli, Yeshayahu D. "Whitney Element Based Priors for Hierarchical Bayesian Models." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1621866603265673.
Full textSugden, Frank Daniel. "A NOVEL DUAL MODELING METHOD FOR CHARACTERIZING HUMAN NERVE FIBER ACTIVATION." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1318.
Full textWu, Sui Vang. "The use of neural networks in financial models." Thesis, University of Macau, 2001. http://umaclib3.umac.mo/record=b1636268.
Full textEsnaola, Acebes Jose M. "Patterns of spike synchrony in neural field models." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663871.
Neural field models are phenomenological descriptions of the activity of spatially organized, recurrently coupled neuronal networks. Due to their mathematical simplicity, such models are extremely useful for the analysis of spatiotemporal phenomena in networks of spiking neurons, and are largely used in computational neuroscience. Nevertheless, it is well known that traditional neural field descriptions fail to describe the collective dynamics of networks of synchronously spiking neurons. Yet, numerical simulations of networks of spiking neurons show that, even in the case of highly asynchronous activity, fast fluctuations in the common external inputs drive transient episodes of spike synchrony. Moreover, synchronization may also be generated by the network itself, resulting in the appearance of robust large-scale, self-sustained oscillations. In this thesis, we investigate the emergence of synchrony-induced spatiotemporal patterns in spatially distributed networks of heterogeneous spiking neurons. These patterns are not observed in traditional neural field theories and have been largely overlooked in the literature. To investigate synchrony-induced phenomena in neuronal networks, we use a novel neural field model which is exactly derived from a large population of quadratic integrate-and-fire model neurons. The simplicity of the neural field model allows us to analyze the stability of the network in terms of the spatial profile of the synaptic connectivity, and to obtain exact formulas for the stability boundaries characterizing the dynamics of the original spiking neuronal network. Remarkably, the analysis also reveals the existence of a collection of oscillation modes, which are exclusively due to spike-synchronization. We believe that the results presented in this thesis will foster theoretical advances on the collective dynamics of neuronal networks, upgrading the mathematical basis of computational neuroscience.
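As one concrete instance of the kind of exact reduction this abstract refers to, a single population of quadratic integrate-and-fire neurons with Lorentzian-distributed drives obeys the closed-form firing-rate equations of Montbrio, Pazo and Roxin (2015): dr/dt = Delta/pi + 2*r*v and dv/dt = v^2 + eta + J*r + I - pi^2*r^2. The sketch below integrates them with commonly used demonstration values; the thesis's spatially extended model is not reproduced.

    import numpy as np

    # Exact QIF mean-field (Montbrio-Pazo-Roxin 2015) for one population
    # with Lorentzian heterogeneity. Parameters are standard demo values,
    # not taken from the thesis.
    eta, delta, J, I = -5.0, 1.0, 15.0, 0.0   # mean drive, spread, coupling, input
    r, v = 0.1, -2.0                          # firing rate, mean membrane potential
    dt = 1e-3
    for _ in range(20000):
        dr = delta / np.pi + 2.0 * r * v
        dv = v**2 + eta + J * r + I - (np.pi * r) ** 2
        r, v = r + dt * dr, v + dt * dv

    print(f"steady state: r = {r:.3f}, v = {v:.3f}")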
Nandeshwar, Ashutosh R. "Models for calculating confidence intervals for neural networks." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4600.
Title from document title page. Document formatted into pages; contains x, 65 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 62-65).