
Dissertations on the topic "Models of neural elements"

Browse the top 50 dissertations for research on the topic "Models of neural elements".

Next to every entry in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Tuchołka, Andrzej. "Methodology for assessing the construction of machine elements using neural models and antipatterns : doctoral dissertation." Rozprawa doktorska, [s.n.], 2020. http://dlibra.tu.koszalin.pl/Content/1317.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Miocinovic, Svjetlana. "THEORETICAL AND EXPERIMENTAL PREDICTIONS OF NEURAL ELEMENTS ACTIVATED BY DEEP BRAIN STIMULATION." Case Western Reserve University School of Graduate Studies / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=case1181758206.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Tomov, Petar Georgiev. "Interplay of dynamics and network topology in systems of excitable elements." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17464.

Full text source
Abstract:
In this work we study global dynamical phenomena which emerge as a result of the interplay between network topology and single-node dynamics in systems of excitable elements. We first focus on relatively small structured networks with comprehensible complexity in terms of graph-symmetries. We discuss the constraints posed by the network topology on the dynamical flow in the phase space of the system and on the admissible synchronized states. In particular, we are interested in the stability properties of flow invariant polydiagonals and in the evolutions of attractors in the parameter spaces of such systems. As a suitable theoretical framework describing excitable elements we use the Kuramoto and Shinomoto model of sinusoidally coupled “active rotators”. We investigate plane hexagonal lattices of different size with periodic boundary conditions. We study general conditions posed on the adjacency matrix of the networks, enabling the Watanabe-Strogatz reduction, and discuss different examples. Finally, we present a generic analysis of bifurcations taking place on the submanifold associated with the Watanabe-Strogatz reduced system. In the second part of the work we investigate a global dynamical phenomenon in neuronal networks known as self-sustained activity (SSA). We consider networks of hierarchical and modular topology, comprising neurons of different cortical electrophysiological cell classes. In the investigated neural networks we show that SSA states with spiking characteristics, similar to the ones observed experimentally, can exist. By analyzing the dynamics of single neurons, as well as the phase space of the whole system, we explain the importance of inhibition for sustaining the global oscillatory activity of the network. Furthermore, we show that both network architecture, in terms of modularity level, as well as mixture of excitatory-inhibitory neurons, in terms of different cell classes, have influence on the lifetime of SSA.
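The "active rotators" named in the abstract above follow a standard textbook form, dθ_i/dt = ω − a·sin θ_i + (K/N)·Σ_j sin(θ_j − θ_i), with a > 1 making each isolated unit excitable. A minimal simulation sketch, with our own illustrative parameter values rather than anything from the thesis:

```python
import numpy as np

def simulate_active_rotators(n=50, a=1.05, coupling=2.0, t_max=50.0, dt=0.01, seed=0):
    """Euler integration of Kuramoto-Shinomoto active rotators:
        dtheta_i/dt = 1 - a*sin(theta_i) + (K/N) * sum_j sin(theta_j - theta_i)
    With a > 1 each isolated unit has a stable rest state (excitable regime)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    steps = int(t_max / dt)
    r_history = np.empty(steps)
    for k in range(steps):
        z = np.mean(np.exp(1j * theta))          # Kuramoto order parameter
        r, psi = np.abs(z), np.angle(z)
        # mean-field rewriting of the all-to-all pairwise sine coupling
        theta += dt * (1.0 - a * np.sin(theta) + coupling * r * np.sin(psi - theta))
        r_history[k] = r
    return theta, r_history
```

For attractive coupling the identical units all flow to the common rest state, so the order parameter r approaches 1; the structured networks studied in the thesis replace the all-to-all term with an adjacency matrix.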
APA, Harvard, Vancouver, ISO, and other styles
4

Wadagbalkar, Pushkar. "Real-time prediction of projectile penetration to laminates by training machine learning models with finite element solver as the trainer." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592169428128864.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Citipitioglu, Ahmet Muhtar. "Development and assessment of response and strength models for bolted steel connections using refined nonlinear 3D finite element analysis." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31691.

Full text source
Abstract:
Thesis (Ph.D.)--Civil and Environmental Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Haj-Ali, Rami; Committee Co-Chair: Leon, Roberto; Committee Co-Chair: White, Donald; Committee Member: DesRoches, Reginald; Committee Member: Gentry, Russell. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
6

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем". Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34755.

Full text source
Abstract:
Dissertation for the degree of Doctor of Technical Sciences in specialty 05.13.05 – Computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Ministry of Education and Science of Ukraine, Kharkiv, 2018. The thesis addresses the problem of increasing the efficiency of building and using neuromorphic computer systems (NCS) by developing models of their components and overall architecture, together with training methods based on a formalized detector principle. An analysis and classification of NCS architectures and components establishes that all of their neural-network implementations rest on the connectionist paradigm of artificial neural networks. As an alternative to this paradigm, the detector principle of constructing the NCS architecture and its components was substantiated and formalized; it is based on the established connectivity property between the elements of the input signal vector and the corresponding weights of an NCS neural element. On the basis of the detector principle, multi-segment threshold information models of the components of a detector NCS (DNCS) were developed: detector blocks, analyzer blocks, and a novelty block, in which the proposed counter-training method forms concepts that determine the necessary and sufficient conditions for their responses. The counter-training method reduces the training time of a DNCS on practical image-recognition problems to a single epoch and reduces the size of the training sample. In addition, it resolves the stability-plasticity problem of DNCS memory and the overfitting problem through self-organization of the map of secondary-level detector blocks under the control of the novelty block.
As a result of the research, a model of the DNCS network architecture was developed, consisting of two layers of neuromorphic components at the primary and secondary levels of information processing, which reduces the number of components the system requires. To substantiate the gain in efficiency of constructing and using an NCS based on the detector principle, software DNCS models were developed for automated monitoring and analysis of the external electromagnetic environment and for recognition of handwritten digits from the MNIST database. The results obtained with these systems confirmed the theoretical propositions of the dissertation and the high efficiency of the developed models and methods.
APA, Harvard, Vancouver, ISO, and other styles
7

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем". Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34756.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Levin, Robert Ian. "Dynamic Finite Element model updating using neural networks." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264075.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Stevenson, King Douglas Beverley. "Robust hardware elements for weightless artificial neural networks." Thesis, University of Central Lancashire, 2000. http://clok.uclan.ac.uk/1884/.

Full text source
Abstract:
This thesis investigates novel robust hardware elements for weightless artificial neural systems, with a bias towards high-integrity avionics applications. The author initially reviews the building blocks of physiological neural systems and then chronologically describes the development of weightless artificial neural systems. Several new design methodologies for the implementation of robust binary sum-and-threshold neurons are presented. The new techniques do not rely on weighted binary counters or registered arithmetic units for their operation, making them less susceptible to transient single-event upsets. They employ Boolean, weightless binary, asynchronous elements throughout, thus increasing robustness in the presence of impulsive noise. Hierarchies formed from these neural elements are studied, and a weightless probabilistic activation function is proposed for non-deterministic applications. Neuroram, an auto-associative memory created using these weightless neurons, is described and analysed. Its signal-to-noise ratio characteristics are compared with the traditional Hamming distance metric, which led to the proposal that neuroram can form a threshold-logic-based digital signal filter. Two weightless auto-associative memory based neuro-filters are presented and their filtration properties studied and compared with a traditional median filter. Each novel architecture was emulated using weightless numeric MATLAB code prior to schematic design and functional simulation. Several neural elements were implemented and validated using FPGA technology, and a preliminary robustness evaluation was performed: the large-scale particle accelerator at the Theodor Svedberg Laboratory at the University of Uppsala, Sweden, was used to generate transient upsets in an FPGA performing a weightless binary neural function. One paper, two letters and five international patents have been published during the course of this research.
The author has significantly contributed to the field of weightless artificial neural systems in high integrity hardware applications.
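Weightless (RAM-based) neurons of the kind this thesis builds on are easy to sketch in software. The following WiSARD-style discriminator is our own background illustration, not the thesis design; it shows why training and recall need no weighted arithmetic, only address lookups:

```python
import random

class WisardDiscriminator:
    """Minimal weightless (RAM-based) discriminator in the WiSARD style.
    Each RAM neuron addresses an n-bit tuple of the binary input; training
    merely marks the addressed location -- no weighted arithmetic at all."""

    def __init__(self, input_bits, tuple_size, seed=0):
        assert input_bits % tuple_size == 0
        rng = random.Random(seed)
        self.mapping = list(range(input_bits))
        rng.shuffle(self.mapping)                  # random input-to-RAM wiring
        self.tuple_size = tuple_size
        self.rams = [set() for _ in range(input_bits // tuple_size)]

    def _addresses(self, pattern):
        bits = [pattern[i] for i in self.mapping]
        for r in range(len(self.rams)):
            yield r, tuple(bits[r * self.tuple_size:(r + 1) * self.tuple_size])

    def train(self, pattern):
        for r, addr in self._addresses(pattern):
            self.rams[r].add(addr)                 # set the addressed location

    def response(self, pattern):
        # number of RAM neurons that fire; thresholding this count gives
        # the sum-and-threshold behaviour discussed in the abstract
        return sum(addr in self.rams[r] for r, addr in self._addresses(pattern))
```

A trained pattern makes every RAM fire, while unrelated patterns hit few stored addresses, which is what makes a threshold on the response count a usable activation.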
APA, Harvard, Vancouver, ISO, and other styles
10

Venkov, Nikola A. "Dynamics of neural field models." Thesis, University of Nottingham, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517742.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
11

Oglesby, J. "Neural models for speaker recognition." Thesis, Swansea University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638359.

Full text source
Abstract:
In recent years a resurgence of interest in neural modeling has taken place. This thesis examines one such class applied to the task of speaker recognition, with direct comparisons made to a contemporary approach based on vector quantisation (VQ). Speaker recognition systems in general, including feature representations and distance measures, are reviewed. The VQ approach, used for comparisons throughout the experimental work, is described in detail. Currently popular neural architectures are also reviewed and associated gradient-based training procedures examined. The performance of a VQ speaker identification system is determined experimentally for a range of popular speech features, using codebooks of varying sizes. Perceptually-based cepstral features are found to out-perform both standard LPC and filterbank representations. New approaches to speaker recognition based on multilayer perceptrons (MLP) and a variant using radial basis functions (RBF) are proposed and examined. To facilitate the research in terms of computational requirements a novel parallel training algorithm is proposed, which dynamically schedules the computational load amongst the available processors. This is shown to give close to linear speed-up on typical training tasks for up to fifty transputers. A transputer-based processing module with appropriate speech capture and synthesis facilities is also developed. For the identification task the MLP approach is found to give approximately the same performance as equivalent sized VQ codebooks. The MLP approach is slightly better for smaller models, however for larger models the VQ approach gives marginally superior results. MLP and RBF models are investigated for speaker verification. Both techniques significantly out-perform the VQ approach, giving 29.5% (MLP) and 21.5% (RBF) true talker rejections for a fixed 2% imposter acceptance rate, compared to 34.5% for the VQ approach. These figures relate to single digit test utterances. 
Extending the duration of the test utterance is found to significantly improve performance across all techniques. The best overall performance is obtained from RBF models: five digit utterances achieve around 2.5% true talker rejections for a fixed 2% imposter acceptance rate.
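The VQ baseline described in the abstract above, a per-speaker codebook with identification by minimum average distortion, can be sketched as follows. This is our own minimal k-means illustration; feature choice, codebook sizes, and function names are assumptions, not the thesis implementation:

```python
import numpy as np

def train_codebook(features, k=4, iters=20, seed=0):
    """Plain k-means codebook over a speaker's feature vectors (the VQ baseline)."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):                # keep empty clusters unchanged
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids

def avg_distortion(features, codebook):
    """Mean distance from each test vector to its nearest codeword."""
    d = np.linalg.norm(features[:, None, :] - codebook[None], axis=2)
    return d.min(axis=1).mean()

def identify(features, codebooks):
    """Pick the speaker whose codebook yields the lowest average distortion."""
    return min(codebooks, key=lambda spk: avg_distortion(features, codebooks[spk]))
```

In the thesis's comparison, the MLP and RBF models replace this distortion score with a learned discriminant, while the decision rule stays the same.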
APA, Harvard, Vancouver, ISO, and other styles
12

Taylor, Neill Richard. "Neural models of temporal sequences." Thesis, King's College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300844.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
13

Whitney, William F., M. Eng. Massachusetts Institute of Technology. "Disentangled representations in neural models." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106449.

Full text source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Representation learning is the foundation for the recent success of neural network models. However, the distributed representations generated by neural networks are far from ideal. Due to their highly entangled nature, they are difficult to reuse and interpret, and they do a poor job of capturing the sparsity which is present in real-world transformations. In this paper, I describe methods for learning disentangled representations in the two domains of graphics and computation. These methods allow neural methods to learn representations which are easy to interpret and reuse, yet they incur little or no penalty to performance. In the Graphics section, I demonstrate the ability of these methods to infer the generating parameters of images and rerender those images under novel conditions. In the Computation section, I describe a model which is able to factorize a multitask learning problem into subtasks and which experiences no catastrophic forgetting. Together these techniques provide the tools to design a wide range of models that learn disentangled representations and better model the factors of variation in the real world.
by William Whitney.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
14

Gao, Yun. "Statistical models in neural information processing /." View online version; access limited to Brown University users, 2005. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3174606.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
15

Schmidt, Helmut. "Interface dynamics in neural field models." Thesis, University of Nottingham, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597110.

Full text source
Abstract:
Neural field models have been developed to emulate large-scale brain dynamics. They exhibit the same types of patterns as are observed in real cortical tissue, such as travelling waves and persistent localised activity. The study of neural field models is still a growing field of research, and in this thesis we contribute by developing new approaches to the analysis of pattern formation. A particular focus is on interface methods in one and two spatial dimensions. In the first part of this thesis we study the influence of inhomogeneities on the velocity of propagating waves. We examine periodically modulated connectivity functions as well as fluctuating firing thresholds. For strong inhomogeneities we observe wave propagation failure and the emergence of stable localised solutions that do not exist in the homogeneous model. In the second part we develop a method to approximate stationary localised solutions and travelling waves in neural field models with sigmoidal firing rates. In particular, we devise a scheme that approximates the slope of these solutions and yields refined results upon iteration. We calculate explicit solutions for piecewise linear and piecewise polynomial firing rates. In the third part we develop an interface approach for planar neural field models. We derive the equations of motion for a certain class of synaptic connectivity functions. In the interface description the evolution of a contour, defined by a level-set condition, is governed by a normal velocity that depends exclusively on the shape of the contour. We present results for the existence and stability of various types of patterns. The interface description is also incorporated into a numerical scheme which allows pattern formation beyond instabilities to be investigated.
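The persistent localised activity ("bumps") that this abstract refers to arises already in the classic one-dimensional Amari field, du/dt = −u + w ∗ H(u − h), with a Mexican-hat kernel. The following direct simulation is our own background sketch with invented parameter values, not the interface method of the thesis:

```python
import numpy as np

def simulate_bump(n=256, length=20.0, h=1.0, t_max=20.0, dt=0.05):
    """Forward-Euler integration of a 1D Amari neural field with a Heaviside
    firing rate; the spatial convolution is done with the FFT on a periodic
    domain. A localized kick settles into a stationary bump of activity."""
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = length / n
    # Mexican hat: narrow local excitation, broader inhibition
    w = 3.0 * np.exp(-x**2) - 1.0 * np.exp(-x**2 / 4.0)
    w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx      # kernel centred at index 0
    u = 2.0 * np.exp(-x**2)                           # localized initial kick
    for _ in range(int(t_max / dt)):
        f = (u > h).astype(float)                     # Heaviside firing rate
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f)))
        u += dt * (-u + conv)
    return x, u
```

With a Heaviside rate the bump edges are exactly the level set u = h, which is the contour whose normal-velocity dynamics the thesis's interface description tracks directly.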
APA, Harvard, Vancouver, ISO, and other styles
16

McCabe, Susan Lynda. "Neural models of subcortical auditory processing." Thesis, University of Plymouth, 1994. http://hdl.handle.net/10026.1/2167.

Full text source
Abstract:
An important feature of the auditory system is its ability to distinguish many simultaneous sound sources. The primary goal of this work was to understand how a robust, preattentive analysis of the auditory scene is accomplished by the subcortical auditory system. Reasonably accurate modelling of the morphology and organisation of the relevant auditory nuclei, was seen as being of great importance. The formulation of plausible models and their subsequent simulation was found to be invaluable in elucidating biological processes and in highlighting areas of uncertainty. In the thesis, a review of important aspects of mammalian auditory processing is presented and used as a basis for the subsequent modelling work. For each aspect of auditory processing modelled, psychophysical results are described and existing models reviewed, before the models used here are described and simulated. Auditory processes which are modelled include the peripheral system, and the production of tonotopic maps of the spectral content of complex acoustic stimuli, and of modulation frequency or periodicity. A model of the formation of sequential associations between successive sounds is described, and the model is shown to be capable of emulating a wide range of psychophysical behaviour. The grouping of related spectral components and the development of pitch perception is also investigated. Finally a critical assessment of the work and ideas for future developments are presented. The principal contributions of this work are the further development of a model for pitch perception and the development of a novel architecture for the sequential association of those groups. In the process of developing these ideas, further insights into subcortical auditory processing were gained, and explanations for a number of puzzling psychophysical characteristics suggested.
APA, Harvard, Vancouver, ISO, and other styles
17

Meng, Liang. "Statistical inferences of biophysical neural models." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12819.

Full text source
Abstract:
Thesis (Ph.D.)--Boston University PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.
A fundamental issue in neuroscience is to understand the dynamic properties of, and biological mechanisms underlying, neural spiking activity. Two types of approaches have been developed: statistical and biophysical modeling. Statistical models focus on describing simple relationships between observed neural spiking activity and the signals that the brain encodes. Biophysical models concentrate on describing the biological mechanisms underlying the generation of spikes. Despite a large body of work, there remains an unbridged gap between the two model types. In this thesis, we propose a statistical framework linking observed spiking patterns to a general class of dynamic neural models. The framework uses a sequential Monte Carlo, or particle filtering, method to efficiently explore the parameter space of a detailed dynamic or biophysical model. We utilize point process theory to develop a procedure for estimating parameters and hidden variables in neuronal biophysical models given only the observed spike times. We successfully implement this method for simulated examples and address the issues of model identification and misspecification. We then apply the particle filter to actual spiking data recorded from rat layer V cortical neurons and show that it correctly identifies the dynamics of a non-traditional, intrinsic current. The method succeeds even though the observed cells exhibit two distinct classes of spiking activity: regular spiking and bursting. We propose that the approach can also frame hypotheses of underlying intrinsic currents that can be directly tested by current or future biological and/or psychological experiments. We then demonstrate the application of the proposed method to a separate problem: constructing a hypothesis test to investigate whether a point process is generated by a constant or dynamically varying intensity function. 
The hypothesis is formulated as an autoregressive state space model, which reduces the testing problem to a test on the variance of the state process. We apply the particle filtering method to compute test statistics and identify the rejection region. A simulation study is performed to quantify the power of this test procedure. Ultimately, the modeling framework and estimation procedures we developed provide a successful link between dynamical neural models and statistical inference from spike train data.
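The sequential Monte Carlo machinery this abstract relies on reduces, in its simplest bootstrap form, to propagate-weight-resample. The sketch below applies it to a toy linear-Gaussian state-space model of our own choosing, standing in for the biophysical dynamics and point-process likelihood of the thesis:

```python
import numpy as np

def bootstrap_filter(obs, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for the toy model
        x_t = x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).
    Returns the posterior-mean state estimate at each time step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        particles = particles + rng.normal(0.0, q, n_particles)   # propagate
        log_w = -0.5 * ((y - particles) / r) ** 2                 # Gaussian log-likelihood
        w = np.exp(log_w - log_w.max())                           # stabilise before normalising
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)           # multinomial resampling
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)
```

In the thesis the Gaussian observation density is replaced by a point-process likelihood of the observed spike times, and the random-walk step by the biophysical model's hidden dynamics; the filtering loop itself is unchanged.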
Стилі APA, Harvard, Vancouver, ISO та ін.
18

Murphy, Eric James. "Cell culture models for neural trauma /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487672245901403.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
19

Williams, Bryn V. "Evolutionary neural networks : models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

Повний текст джерела
Анотація:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX) is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
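The genetic-search machinery underlying such GA-NN hybrids can be illustrated with a toy generational GA. The one-max fitness function and all parameters below are assumptions for demonstration, not the GA-bumptree or its coding scheme:

```python
import random

def one_max_ga(length=20, pop_size=30, generations=60, p_mut=0.02, seed=3):
    """Toy generational GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)

    def fitness(ind):
        return sum(ind)  # "one-max": maximise the number of 1 bits

    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection: best of three random individuals.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # One-point crossover.
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation.
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = one_max_ga()
```

In a GA-NN hybrid, the bit string would instead encode a network architecture or weights, and fitness would be classification or control performance.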
Стилі APA, Harvard, Vancouver, ISO та ін.
20

Hely, Timothy Alasdair. "Computational models of developing neural systems." Thesis, University of Edinburgh, 1999. http://hdl.handle.net/1842/22303.

Повний текст джерела
Анотація:
The work of this thesis has focused on creating computational models of developing neurons. Three different but related areas of research have been studied - how cells make connections, what influences the shape of these connections and how neuronal network behaviour can be influenced by local interactions. In order to understand how cells make connections I simulated the dynamics of the neuronal growth cone - a structure which guides the developing axon to its target cells. Results from the first models showed that small interaction effects between structural proteins in the axon called microtubules can significantly alter the rate of axonal elongation and turning. I also simulated the dynamics of growth cone filopodia. The filopodia act as antennae and explore the extracellular environment surrounding the growth cone. This model showed that a reaction-diffusion system based on Turing morphogenesis patterns could account for the dynamic behaviour of filopodia. To find out what influences the shape of neuronal connections I simulated the branching patterns of neuronal dendrites. These are tree-like structures which receive input from other cells. Recent experiments indicate that dendrite branching is dependent on the phosphorylation status of microtubule associated protein 2 (MAP2) which affects the growth rate and spacing of microtubules. MAP2 phosphorylation can occur through calcium activation of the protein CaMKII. In the model the phosphorylation status and physical distribution of MAP2 within the cell can be varied to produce a wide range of biologically realistic dendritic patterns. The final model simulates emergent synchronisation of neuronal spike firing which can occur in cultures of developing neurons. In the model the frequency and phase of cell firing is modified by the pattern of input signals received by the cell through local connections. This mechanism alone can lead to synchronous oscillation of the entire network of cells.
The results of the model indicate that synchronization of firing in developing neurons in culture occurs through a passive spread of activity, rather than through an active coupling mechanism.
Стилі APA, Harvard, Vancouver, ISO та ін.
21

Ionescu, Armand-Mihai. "Membrane computing: traces, neural inspired models, controls." Doctoral thesis, Universitat Rovira i Virgili, 2008. http://hdl.handle.net/10803/8790.

Повний текст джерела
Анотація:
Membrane Computing:
Traces, Neural Inspired Models, Controls
Autor:
Armand-Mihai Ionescu
Directores:
Dr. Victor Mitrana
(URV)
Dr. Takashi Yokomori
(Universidad Waseda, Japón)
Resumen Castellano:
El presente trabajo está dedicado a una área muy activa del cálculo natural (que intenta descubrir la modalidad en la cual la naturaleza calcula, especialmente al nivel biológico), es decir el cálculo con membranas, y más preciso, a los modelos de membranas inspirados de la funcionalidad biológica de la neurona.
La disertación contribuye al área de cálculo con membranas en tres direcciones principales. Primero, introducimos una nueva manera de definir el resultado de una computación siguiendo los rastros de un objeto especificado dentro de una estructura celular o de una estructura neuronal. A continuación, nos acercamos al ámbito de la biología del cerebro, con el objetivo de obtener varias maneras de controlar la computación por medio de procesos que inhiben/de-inhiben. Tercero, introducimos e investigamos en detalle - aunque en una fase preliminar porque muchos aspectos tienen que ser clarificados - una clase de sistemas inspirados de la manera en la cual las neuronas cooperan por medio de spikes, pulsos eléctricos de formas idénticas.
English summary:
The present work is dedicated to a very active branch of natural computing (which tries to discover the way nature computes, especially at a biological level), namely membrane computing, more precisely, to those models of membrane systems mainly inspired from the functioning of the neural cell.
The present dissertation contributes to membrane computing in three main directions. First, we introduce a new way of defining the result of a computation by means of following the traces of a specified object within a cell structure or a neural structure. Then, we get closer to the biology of the brain, considering various ways to control the computation by means of inhibiting/de-inhibiting processes. Third, we introduce and investigate in great detail (though in a preliminary fashion, as many issues remain to be clarified) a class of P systems inspired from the way neurons cooperate by means of spikes, electrical pulses of identical shapes.
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Xu, Shuxiang, University of Western Sydney, and of Informatics Science and Technology Faculty. "Neuron-adaptive neural network models and applications." THESIS_FIST_XXX_Xu_S.xml, 1999. http://handle.uws.edu.au:8081/1959.7/275.

Повний текст джерела
Анотація:
Artificial Neural Networks have been widely explored by researchers worldwide to cope with problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNN's) with a new neuron activation function called Neuron-adaptive Activation Function (NAF), and Feed-forward Higher Order Neural Networks (HONN's) with this new neuron activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy. In the neural network literature only Zhang proved the universal approximation ability of FNN Group to any piecewise continuous function. Next, we have developed the approximation properties of Neuron Adaptive Higher Order Neural Networks (NAHONN's), a combination of HONN's and NAF, to any continuous function, functional and operator. Finally, we have created a software program called MASFinance which runs on the Solaris system for the approximation of continuous or discontinuous functions, and for the simulation of any continuous or discontinuous data (especially financial data). Our work distinguishes itself from previous work in the following ways: we use a new neuron-adaptive activation function, while the neuron activation functions in most existing work are all fixed and can't be tuned to adapt to different approximation problems; we use only one NANN to approximate any piecewise continuous function, while a neural network group must be utilised in previous research; we combine HONN's with NAF and investigate its approximation properties to any continuous function, functional, and operator; we present a new software program, MASFinance, for function approximation and data simulation.
Experiments running MASFinance indicate that the proposed NANN's present several advantages over traditional neuron-fixed networks (such as greatly reduced network size, faster learning, and lessened simulation errors), and that the suggested NANN's can effectively approximate piecewise continuous functions better than neural network groups. Experiments also indicate that NANN's are especially suitable for data simulation.
Doctor of Philosophy (PhD)
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Tang, Chuan Zhang. "Artificial neural network models for digital implementation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq30298.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Xu, Shuxiang. "Neuron-adaptive neural network models and applications /." [Campbelltown, N.S.W. : The Author], 1999. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030702.085320/index.html.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Braga, Antônio de Pádua. "Design models for recursive binary neural networks." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336442.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Ripley, Ruth Mary. "Neural network models for breast cancer prognosis." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244721.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Forrest, B. M. "Memory and optimisation in neural network models." Thesis, University of Edinburgh, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384164.

Повний текст джерела
Анотація:
A numerical study of two classes of neural network models is presented. The performance of Ising spin neural networks as content-addressable memories for the storage of bit patterns is analysed. By studying systems of increasing sizes, behaviour consistent with finite-size scaling, characteristic of a first-order phase transition, is shown to be exhibited by the basins of attraction of the stored patterns in the Hopfield model. A local iterative learning algorithm is then developed for these models which is shown to achieve perfect storage of nominated patterns with near-optimal content-addressability. Similar scaling behaviour of the associated basins of attraction is observed. For both this learning algorithm and the Hopfield model, by extrapolating to the thermodynamic limit, estimates are obtained for the critical minimum overlap which an input pattern must have with a stored pattern in order to successfully retrieve it. The role of a neural network as a tool for optimising cost functions of binary valued variables is also studied. The particular application considered is that of restoring binary images which have become corrupted by noise. Image restorations are achieved by representing the array of pixel intensities as a network of analogue neurons. The performance of the network is shown to compare favourably with two other deterministic methods, a gradient descent on the same cost function and a majority-rule scheme, both in terms of restoring images and in terms of minimising the cost function. All of the computationally intensive simulations exploit the inherent parallelism in the models: both SIMD (the ICL DAP) and MIMD (the Meiko Computing Surface) machines are used.
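The Hopfield storage and retrieval dynamics studied above can be sketched with Hebbian couplings and asynchronous zero-temperature updates; the network size, number of patterns, and corruption level below are arbitrary choices for illustration:

```python
import random

def hebb_weights(patterns):
    """Hebbian couplings J_ij = (1/N) * sum_mu xi_i * xi_j, with zero diagonal."""
    n = len(patterns[0])
    J = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    J[i][j] += p[i] * p[j] / n
    return J

def recall(J, state, sweeps=10, seed=0):
    """Asynchronous zero-temperature dynamics: align each spin with its local field."""
    rng = random.Random(seed)
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):  # random update order each sweep
            h = sum(J[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

def overlap(a, b):
    """Normalised overlap m in [-1, 1]; m = 1 means perfect retrieval."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Store two random +/-1 patterns in a 64-spin network (load far below capacity).
rng = random.Random(42)
N = 64
patterns = [[rng.choice([-1, 1]) for _ in range(N)] for _ in range(2)]
J = hebb_weights(patterns)

# Corrupt 8 of the 64 spins of the first pattern, then let the network retrieve it.
probe = list(patterns[0])
for i in rng.sample(range(N), 8):
    probe[i] = -probe[i]
m = overlap(recall(J, probe), patterns[0])
```

The initial overlap of the corrupted probe (0.75 here) and the final overlap after the dynamics are exactly the quantities whose scaling the thesis studies.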
Стилі APA, Harvard, Vancouver, ISO та ін.
28

West, Ansgar Heinrich Ludolf. "Role of biases in neural network models." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/11546.

Повний текст джерела
Анотація:
The capacity problem for multi-layer networks has proven especially elusive. Our calculation of the capacity of multi-layer networks built by constructive algorithms relies heavily on the existence of biases in the basic building block, the binary perceptron. It is the first time where the capacity is explicitly evaluated for large networks and finite stability. One finds that the constructive algorithms studied, a tiling-like algorithm and variants of the upstart algorithm, do not saturate the known Mitchison-Durbin bound. In supervised learning, a student network is presented with training examples in the form of input-output pairs, where the output is generated by a teacher network. The central question to be answered is the relation between the number of examples presented and the typical performance of the student in approximating the teacher rule, which is usually termed generalisation. The influence of biases in such a student-teacher scenario has been assessed for the two-layer soft-committee architecture, which is a universal approximator and already resembles applicable multi-layer network models, within the on-line learning paradigm, where training examples are presented serially. One finds that adjustable biases dramatically alter the learning behaviour. The suboptimal symmetric phase, which can easily dominate training for fixed biases, vanishes almost entirely for non-degenerate teacher biases. Furthermore, the extended model exhibits a much richer dynamical behaviour, exemplified especially by a multitude of (attractive) suboptimal fixed points even for realizable cases, causing the training to fail or be severely slowed down. In addition, in order to study possible improvements over gradient descent training, an adaptive back-propagation algorithm parameterised by a "temperature" is introduced, which enhances the ability of the student to distinguish between teacher nodes.
This algorithm, which has been studied in the various learning stages, provides more effective symmetry breaking between hidden units and faster convergence to optimal generalisation.
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Mo, Mimi Shin Ning. "Neural vulnerability in models of Parkinson's disease." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:ac82e1c1-5d9f-473f-97ac-fcb70b2587ca.

Повний текст джерела
Анотація:
Parkinson's disease (PD) is a neurodegenerative disorder with no known cure. This thesis explores the degenerative process in two neurotoxin-based models, the 6-hydroxydopamine and the chronic 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)/probenecid mouse models, to yield important information about the pathogenesis of PD. Neuronal survival patterns in Parkinsonian patients and animals are heterogeneous. More dopaminergic neurons are lost from the ventral tier of the substantia nigra (SN) than from the dorsal tier or the adjacent ventral tegmental area, possibly due to differential expression of the calcium-binding protein, calbindin D28K. Brain sections were processed for tyrosine hydroxylase (TH) and calbindin (CB) immunocytochemistry to distinguish the dopaminergic subpopulations. I show that more TH+/CB- and TH-/CB+ than TH+/CB+ neurons are lost in both models, suggesting that CB confers some degree of protection for dopaminergic neurons. With respect to connectivity, I show that both TH+ and CB+ neurons receive striatal and dorsal raphe inputs. I investigated the possibility of a progressive loss in midbrain neurons by prolonging the post-lesion survival period. In both models, there is an irreversible neuronal cell loss of TH+, CB+ and TH+/CB+ neurons but the effects of survival time and lesion treatments differ for the three neuronal types. The lesions also appear to be toxic to GABAergic neurons. I explore whether, once neurodegeneration has started, neurons can be rescued by pharmacological intervention. Salicylic acid appears both to reduce microglial activation and significantly improve TH+, but not CB+ or TH+/CB+ neuronal survival. PD appears multifactorial in origin and may involve complex interactions between genetic and environmental influences. I show that a xenobiotic-metabolising enzyme, arylamine N-acetyltransferase, may fulfil a neuroprotective role in the SN by limiting the environmental risks.
Taken together, this study provides a body of information on two different mouse PD models and highlights possible genetic predispositions to PD neuropathology.
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Veltz, Romain, and Romain Veltz. "Nonlinear analysis methods in neural field models." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00686695.

Повний текст джерела
Анотація:
This thesis deals with mesoscopic models of cortex called neural fields. The neural field equations describe the activity of neuronal populations, with common anatomical/functional properties. They were introduced in the 1950s and are called the equations of Wilson and Cowan. Mathematically, they consist of integro-differential equations with delays, the delays modeling the signal propagation and the passage of signals across synapses and the dendritic tree. In the first part, we recall the biology necessary to understand this thesis and derive the main equations. Then, we study these equations with the theory of dynamical systems by characterizing their equilibrium points and dynamics in the second part. In the third part, we study these delayed equations in general by giving formulas for the bifurcation diagrams, by proving a center manifold theorem, and by calculating the principal normal forms. We apply these results to one-dimensional neural fields which allows a detailed study of the dynamics. Finally, in the last part, we study three models of visual cortex. The first two models are from the literature and describe respectively a hypercolumn, i.e. the basic element of the first visual area (V1), and a network of such hypercolumns. The latest model is a new model of V1 which generalizes the two previous models while allowing a detailed study of the specific effects of delays.
Стилі APA, Harvard, Vancouver, ISO та ін.
31

MELLEM, MARCELO TOURASSE NASSIM. "AUTOREGRESSIVE-NEURAL HYBRID MODELS FOR TIME SERIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14541@1.

Повний текст джерела
Анотація:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Este trabalho apresenta um modelo linear por partes chamado de modelo ARN. Trata-se de uma estrutura híbrida que envolve modelos autoregressivos e redes neurais. Este modelo é comparado com o modelo AR de coeficientes fixos e com a rede neural estática aplicada à previsão. Os resultados mostram que o ARN consegue identificar a estrutura não-linear dos dados simulados e que na maioria dos casos ele possui melhor habilidade preditiva do que os modelos supracitados.
In this thesis we develop a piece-wise linear model named ARN model. Our model has a hybrid structure which combines autoregressive models and neural networks. We compare our model to the fixed-coefficient AR model and to the prediction static neural network. Our results show that ARN is able to find the non-linear structure of simulated data and in most cases it performs better than the methods mentioned above.
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Beck, Amanda M. "State space models for isolating neural oscillations." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120408.

Повний текст джерела
Анотація:
Thesis: S.M. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 55-56).
Information communication in the brain depends on the spiking patterns of neurons. The interaction of these cells at the population level can be observed as oscillations of varying frequency and power, in local field potential recordings as well as non-invasive scalp electroencephalograms (EEG). These oscillations are thought to be responsible for coordinating activity across larger brain regions and conveying information across the brain, directing processes such as attention, consciousness, sensory and information processing. A common approach for analyzing these electrical potentials is to apply a band pass filter in the frequency band of interest. Canonical frequency bands have been defined and applied in many previous studies, but their specific definitions vary within the field, and are to some degree arbitrary. We propose an alternative approach that uses state space models to represent basic physiological and dynamic principles, whose detailed structure and parameterization are informed by observed data. We find that this method can more accurately represent oscillatory power, effectively separating it from background broadband noise power. This approach provides a way of separating oscillations in the time domain while also quantifying their structure efficiently with a small number of parameters.
by Amanda M. Beck.
S.M. in Computer Science and Engineering
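One standard state-space representation of a neural oscillation of the kind described above is a latent AR(2) process with complex poles near the unit circle, observed in additive broadband noise. The sketch below is a generic illustration of that idea, with arbitrary parameter values, not the thesis's specific model:

```python
import math
import random

def ar2_coefficients(freq_hz, fs, radius):
    """AR(2) coefficients placing complex poles at radius * exp(+/- i*2*pi*freq/fs):
    x_t = a1 * x_{t-1} + a2 * x_{t-2} + noise."""
    theta = 2.0 * math.pi * freq_hz / fs
    return 2.0 * radius * math.cos(theta), -radius ** 2

def simulate(freq_hz=10.0, fs=100.0, radius=0.98, n=2000, obs_sd=1.0, seed=0):
    """Latent AR(2) oscillation observed in additive broadband (white) noise."""
    rng = random.Random(seed)
    a1, a2 = ar2_coefficients(freq_hz, fs, radius)
    x1 = x2 = 0.0  # x_{t-1} and x_{t-2}
    latent, observed = [], []
    for _ in range(n):
        x = a1 * x1 + a2 * x2 + rng.gauss(0.0, 1.0)
        latent.append(x)
        observed.append(x + rng.gauss(0.0, obs_sd))
        x2, x1 = x1, x
    return latent, observed

latent, observed = simulate()
a1, a2 = ar2_coefficients(10.0, 100.0, 0.98)
```

The pole radius controls the sharpness of the spectral peak, and a Kalman-type filter fit to `observed` would recover the oscillatory component separately from the white observation noise.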
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Lei, Tao Ph D. Massachusetts Institute of Technology. "Interpretable neural models for natural language processing." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108990.

Повний текст джерела
Анотація:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 109-119).
The success of neural network models often comes at a cost of interpretability. This thesis addresses the problem by providing justifications behind the model's structure and predictions. In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes from convolution operations and gated aggregations. As justifications, we relate this component to string kernels, i.e. functions measuring the similarity between sequences, and demonstrate how it encodes the efficient kernel computing algorithm into its structure. The proposed model achieves state-of-the-art or competitive results compared to alternative architectures (such as LSTMs and CNNs) across several NLP applications. In the second part, we learn rationales behind the model's prediction by extracting input pieces as supporting evidence. Rationales are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by the desiderata for rationales. We demonstrate the effectiveness of this learning framework in applications such as multi-aspect sentiment analysis. Our method achieves performance over 90% when evaluated against manually annotated rationales.
by Tao Lei.
Ph. D.
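The string kernels mentioned above can be illustrated with the simple k-spectrum kernel, which compares two strings through their shared length-k substrings. This is a generic textbook example, not the specific kernel the thesis relates its sequence operations to:

```python
from collections import Counter

def spectrum_features(s, k):
    """Counts of all length-k substrings (the 'k-spectrum') of a string."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def spectrum_kernel(a, b, k=2):
    """Inner product of k-spectrum feature vectors: a simple string kernel."""
    fa, fb = spectrum_features(a, k), spectrum_features(b, k)
    return sum(fa[sub] * fb[sub] for sub in fa)

sim_same = spectrum_kernel("banana", "banana")   # self-similarity
sim_diff = spectrum_kernel("banana", "bandana")  # cross-similarity
```

More elaborate kernels weight gapped subsequences instead of contiguous substrings, but the structure, an inner product in a feature space of sequence fragments, is the same.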
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Kryściński, Wojciech. "Training Neural Models for Abstractive Text Summarization." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-236973.

Повний текст джерела
Анотація:
Abstractive text summarization aims to condense long textual documents into a short, human-readable form while preserving the most important information from the source document. A common approach to training summarization models is by using maximum likelihood estimation with the teacher forcing strategy. Despite its popularity, this method has been shown to yield models with suboptimal performance at inference time. This work examines how using alternative, task-specific training signals affects the performance of summarization models. Two novel training signals are proposed and evaluated as part of this work. One is a novelty metric measuring the overlap between n-grams in the summary and the summarized article. The other utilizes a discriminator model to distinguish human-written summaries from generated ones on a word-level basis. Empirical results show that using the mentioned metrics as rewards for policy gradient training yields significant performance gains measured by ROUGE scores, novelty scores and human evaluation.
Abstraktiv textsammanfattning syftar på att korta ner långa textdokument till en förkortad, mänskligt läsbar form, samtidigt som den viktigaste informationen i källdokumentet bevaras. Ett vanligt tillvägagångssätt för att träna sammanfattningsmodeller är att använda maximum likelihood-estimering med teacher-forcing-strategin. Trots dess popularitet har denna metod visat sig ge modeller med suboptimal prestanda vid inferens. I det här arbetet undersöks hur användningen av alternativa, uppgiftsspecifika träningssignaler påverkar sammanfattningsmodellens prestanda. Två nya träningssignaler föreslås och utvärderas som en del av detta arbete. Den första, vilket är en ny metrik, mäter överlappningen mellan n-gram i sammanfattningen och den sammanfattade artikeln. Den andra använder en diskrimineringsmodell för att skilja mänskliga skriftliga sammanfattningar från genererade på ordnivå. Empiriska resultat visar att användandet av de nämnda mätvärdena som belöningar för policygradient-träning ger betydande prestationsvinster mätt med ROUGE-score, novelty score och mänsklig utvärdering.
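The novelty signal described above can be sketched as an n-gram overlap measure between summary and article. The exact formulation used in the thesis may differ, so treat this as an illustrative assumption:

```python
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novelty(summary, article, n=3):
    """Fraction of the summary's n-grams that do NOT occur in the article.

    A copy-heavy (extractive) summary scores near 0; a genuinely
    abstractive one scores closer to 1.
    """
    summ = ngrams(summary.lower().split(), n)
    if not summ:
        return 0.0
    return len(summ - ngrams(article.lower().split(), n)) / len(summ)

article = "the cat sat on the mat while the dog slept near the door"
extractive = "the cat sat on the mat"
abstractive = "a cat rested on a mat"
```

Used as a policy-gradient reward, such a score would push the model away from verbatim copying of the source.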
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.

Повний текст джерела
Анотація:
Coreference is an important and frequent concept in any form of discourse, and Coreference Resolution (CR) a widely used task in Natural Language Understanding (NLU). In this thesis, we implement and explore two recent models that include the concept of coreference in Recurrent Neural Network (RNN)-based Language Models (LM). Entity and reference decisions are modeled explicitly in these models using attention mechanisms. Both models learn to save the previously observed entities in a set and to decide if the next token created by the LM is a mention of one of the entities in the set, an entity that has not been observed yet, or not an entity. After a theoretical analysis where we compare the two LMs to each other and to a state-of-the-art Coreference Resolution system, we perform an extensive quantitative and qualitative analysis. For this purpose, we train the two models and a classical RNN-LM as the baseline model on the OntoNotes 5.0 corpus with coreference annotation. While we do not reach the baseline in the perplexity metric, we show that the models’ relative performance on entity tokens has the potential to improve when including the explicit entity modeling. We show that the most challenging point in the systems is the decision if the next token is an entity token, while the decision which entity the next token refers to performs comparatively well. Our analysis in the context of a text generation task shows that a widespread error source for the mention creation process is the confusion of tokens that refer to related but different entities in the real world, presumably a result of the context-based word representations in the models. Our re-implementation of the DeepMind model by Yang et al. 2016 performs notably better than the re-implementation of the EntityNLM model by Ji et al. 2017 with a perplexity of 107 compared to a perplexity of 131.
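The perplexity figures quoted above are the exponential of the average negative log-likelihood per token; the toy numbers below are illustrative, chosen only to match the scale of the reported results:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# A model that assigns every token probability 1/107 has perplexity 107,
# i.e. it is as uncertain as a uniform choice among 107 alternatives.
lp = [math.log(1 / 107.0)] * 4
```

A perplexity of 107 versus 131 therefore means the first model's effective branching factor per token is smaller.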
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Labeau, Matthieu. "Neural language models : Dealing with large vocabularies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.

Повний текст джерела
Анотація:
Le travail présenté dans cette thèse explore les méthodes pratiques utilisées pour faciliter l'entraînement et améliorer les performances des modèles de langues munis de très grands vocabulaires. La principale limite à l'utilisation des modèles de langue neuronaux est leur coût computationnel: il dépend de la taille du vocabulaire avec laquelle il grandit linéairement. La façon la plus aisée de réduire le temps de calcul de ces modèles reste de limiter la taille du vocabulaire, ce qui est loin d'être satisfaisant pour de nombreuses tâches. La plupart des méthodes existantes pour l'entraînement de ces modèles à grand vocabulaire évitent le calcul de la fonction de partition, qui est utilisée pour forcer la distribution de sortie du modèle à être normalisée en une distribution de probabilités. Ici, nous nous concentrons sur les méthodes à base d'échantillonnage, dont le sampling par importance et l'estimation contrastive bruitée. Ces méthodes permettent de calculer facilement une approximation de cette fonction de partition. L'examen des mécanismes de l'estimation contrastive bruitée nous permet de proposer des solutions qui vont considérablement faciliter l'entraînement, ce que nous montrons expérimentalement. Ensuite, nous utilisons la généralisation d'un ensemble d'objectifs basés sur l'échantillonnage comme divergences de Bregman pour expérimenter avec de nouvelles fonctions objectif. Enfin, nous exploitons les informations données par les unités sous-mots pour enrichir les représentations en sortie du modèle. Nous expérimentons avec différentes architectures, sur le Tchèque, et montrons que les représentations basées sur les caractères permettent l'amélioration des résultats, d'autant plus lorsque l'on réduit conjointement l'utilisation des représentations de mots
This work investigates practical methods to ease training and improve performances of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost: it depends on the size of the vocabulary, with which it grows linearly. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, ensuring that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise-contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and experimentally show that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives from which noise contrastive estimation is a particular case. Finally, we aim at improving performances on full vocabulary language models, by augmenting output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
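The sampling-based approximation of the partition function can be sketched with a simple importance-sampling estimator over a toy vocabulary; the scores, noise distribution, and sample size below are illustrative assumptions, not the methods developed in the thesis:

```python
import math
import random

def full_log_softmax(scores, target):
    """Exact normalised log-probability: needs the full-vocabulary partition function."""
    log_z = math.log(sum(math.exp(s) for s in scores))
    return scores[target] - log_z

def sampled_log_softmax(scores, target, noise, k, rng):
    """Importance-sampling estimate of the same quantity from k noise samples.

    Z = sum_w exp(s_w) is estimated as E_{w ~ noise}[exp(s_w) / noise(w)],
    so only k + 1 scores are evaluated instead of the whole vocabulary.
    """
    vocab = range(len(scores))
    samples = rng.choices(vocab, weights=noise, k=k)
    z_hat = sum(math.exp(scores[w]) / noise[w] for w in samples) / k
    return scores[target] - math.log(z_hat)

V = 1000
rng = random.Random(0)
scores = [rng.gauss(0.0, 1.0) for _ in range(V)]  # toy unnormalised output scores
noise = [1.0 / V] * V  # uniform noise/proposal distribution over the vocabulary

exact = full_log_softmax(scores, target=7)
approx = sampled_log_softmax(scores, target=7, noise=noise, k=200, rng=rng)
```

Noise contrastive estimation replaces this explicit estimator with a binary classification objective against the noise distribution, but the computational saving, touching only sampled words, is the same.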
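The noise-contrastive estimation objective at the core of this abstract can be sketched in a few lines. This is an illustrative reconstruction of the standard NCE loss, not code from the dissertation; all names and numbers are our own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(score_data, scores_noise, q_data, q_noise, k):
    """NCE loss for one target word given k noise samples.

    score_data / scores_noise are the model's unnormalized log-scores
    (the partition function is never computed); q_data / q_noise are the
    noise-distribution probabilities of the same words.
    """
    # Posterior probability that the observed word was drawn from the data
    p_data = sigmoid(score_data - np.log(k * q_data))
    # Posterior probability that each noise sample was drawn from the noise
    p_noise = sigmoid(-(scores_noise - np.log(k * q_noise)))
    return -(np.log(p_data) + np.sum(np.log(p_noise)))

# Raising the model's score for the observed word lowers the loss
loss = nce_loss(2.0, np.array([-1.0, 0.5]), 0.1, np.array([0.2, 0.1]), k=2)
```

Minimizing this binary-classification loss pushes the unnormalized scores toward self-normalized log-probabilities, which is what makes the partition function avoidable.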
37

Fan, Xuetong. "Laminar Flow Control Models with Neural Networks /." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487929745334864.

38

Tomes, Hayley Sarah. "Investigating neural responses in models of neurocysticercosis." Doctoral thesis, Faculty of Health Sciences, 2021. http://hdl.handle.net/11427/33046.

Abstract:
Epilepsy is more frequent in sub-Saharan Africa than in the rest of the world due to high levels of brain infection by larvae of the pig cestode Taenia solium, a condition termed neurocysticercosis. Despite the scale of the problem, little is known about how neurocysticercosis modulates neuronal responses to result in the development of seizures. In this thesis I have used the cestode Taenia crassiceps to develop multiple in vitro and in vivo models of neurocysticercosis in rodents. Utilising patch-clamp electrophysiology in organotypic hippocampal brain slices and chronic, wireless electrocorticographic recordings in freely moving animals, I have explored how cestode larvae affect neuronal excitability in the brain across a range of time scales. First, I demonstrate that homogenate of Taenia crassiceps larvae has a strong, acute excitatory effect on neurons, which is sufficient to trigger seizure-like events. The excitatory component of the homogenate was found to strongly activate glutamate receptors, but neither acetylcholine receptors nor acid-sensing ion channels. An enzymatic assay showed that the larval homogenate contains high levels of glutamate, explaining its acute excitatory effects on neurons. In the second part of my thesis I demonstrate that longer-term incubation of Taenia crassiceps homogenate with organotypic brain slices over the course of a day affects neither the intrinsic properties of pyramidal neurons nor the excitability of the neuronal network. In the final part of my thesis I established an in vivo model of neurocysticercosis. I found that intradermal inoculation together with multiple intracerebral injections of Taenia crassiceps homogenate did not result in the development of seizures over 3 months of chronic electrocorticography recordings. In addition, the seizure threshold to the convulsant picrotoxin was not altered by Taenia crassiceps homogenate injection. Immunohistological analysis of the tissue below the injection site revealed no difference in astrocytes or in the number of microglia. However, microglial processes were observed to be retracted in the Taenia crassiceps group, reflecting a moderate neuroinflammatory response. Together, the data in my thesis provide novel insight into the acute and chronic effects of Taenia crassiceps homogenate on the excitability of neuronal networks, with relevance to our understanding of neurocysticercosis.
39

Veltz, Romain. "Nonlinear analysis methods in neural field models." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1056/document.

Abstract:
This thesis deals with mesoscopic models of cortex called neural fields. The neural field equations describe the cortical activity of populations of neurons with common anatomical/functional properties. They were introduced in the 1950s and bear the name of the Wilson-Cowan equations. Mathematically, they are integro-differential equations with delays, the delays modelling the propagation times of signals as well as the passage of signals through synapses and the dendritic tree. In the first part, we recall the biology necessary for understanding this thesis and derive the main equations. Then, in the second part, we study these equations from the dynamical-systems point of view, characterizing their equilibrium points and dynamics. In the third part, we study these delayed equations in a general setting, giving formulas for the bifurcation diagrams, proving a center manifold theorem and computing the principal normal forms. We first apply these results to simple one-dimensional neural fields, which allow a detailed study of the dynamics. Finally, in the last part, we apply these results to three models of visual cortex. The first two models come from the literature and describe, respectively, a hypercolumn, i.e. the basic element of the first visual area (V1), and a network of such hypercolumns. The last model is a new model of V1 which generalizes the two previous models while allowing a detailed study of the specific effects of delays.
This thesis deals with mesoscopic models of cortex called neural fields. The neural field equations describe the activity of neuronal populations with common anatomical/functional properties. They were introduced in the 1950s and are called the Wilson-Cowan equations. Mathematically, they consist of integro-differential equations with delays, the delays modeling the signal propagation and the passage of signals across synapses and the dendritic tree. In the first part, we recall the biology necessary to understand this thesis and derive the main equations. Then, in the second part, we study these equations with the theory of dynamical systems, characterizing their equilibrium points and dynamics. In the third part, we study these delayed equations in general by giving formulas for the bifurcation diagrams, proving a center manifold theorem, and calculating the principal normal forms. We apply these results to one-dimensional neural fields, which allow a detailed study of the dynamics. Finally, in the last part, we study three models of visual cortex. The first two models are from the literature and describe, respectively, a hypercolumn, i.e. the basic element of the first visual area (V1), and a network of such hypercolumns. The last model is a new model of V1 which generalizes the two previous models while allowing a detailed study of the specific effects of delays.
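The Wilson-Cowan equations referred to in this abstract reduce, in their space-free two-population form, to a pair of ODEs that are easy to integrate numerically. The sketch below is illustrative only; the weights and sigmoid parameters are our own choices, not values from the thesis:

```python
import numpy as np

def S(x, gain=1.0, theta=4.0):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

def wilson_cowan(P, Q, T=50.0, dt=0.01):
    """Forward-Euler integration of the two-population Wilson-Cowan model:
    E' = -E + S(w_ee*E - w_ei*I + P),  I' = -I + S(w_ie*E - w_ii*I + Q)."""
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0   # illustrative weights
    E, I = 0.1, 0.1
    traj = np.empty((int(T / dt), 2))
    for n in range(traj.shape[0]):
        dE = -E + S(w_ee * E - w_ei * I + P)
        dI = -I + S(w_ie * E - w_ii * I + Q)
        E, I = E + dt * dE, I + dt * dI
        traj[n] = E, I
    return traj

traj = wilson_cowan(P=1.25, Q=0.0)   # external drives to E and I
```

Because the sigmoid is bounded in (0, 1) and each population has a leak term, the activities remain bounded for any drive, which is what makes these equations a convenient mesoscopic description.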
40

Westermann, Gert. "Constructivist neural network models of cognitive development." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/22733.

Abstract:
In this thesis I investigate the modelling of cognitive development with constructivist neural networks. I argue that the constructivist nature of development, that is, the building of a cognitive system through active interactions with its environment, is an essential property of human development and should be considered in models of cognitive development. I evaluate this claim on the basis of evidence from cortical development, cognitive development, and learning theory. In an empirical evaluation of this claim, I then present a constructivist neural network model of the acquisition of the English past tense and of impaired inflectional processing in German agrammatic aphasics. The model displays a realistic course of acquisition, closely modelling the U-shaped learning curve and more detailed effects such as frequency and family effects. Further, the model develops double dissociations between regular and irregular verbs. I argue that the ability of the model to account for the human data is based on its constructivist nature, and this claim is backed by an analogous but non-constructivist model that does not display many aspects of the human behaviour. Based on these results I develop a taxonomy for cognitive models that incorporates architectural and developmental aspects besides the traditional distinction between symbolic and subsymbolic processing. When the model is trained on the German participle and is then lesioned by removing connections, the breakdown in performance reflects the profiles of German agrammatic aphasics: irregular inflections are selectively impaired and are often overregularised. Further, the frequency effects and the regularity-continuum effect that are observed in aphasic subjects can also be modelled. The model predicts that an aphasic profile with selectively impaired regular inflections would be evidence for a locally distinct processing of regular and irregular inflections.
41

Wedgwood, Kyle C. A., Kevin K. Lin, Ruediger Thul, and Stephen Coombes. "Phase-Amplitude Descriptions of Neural Oscillator Models." BioMed Central, 2013. http://hdl.handle.net/10150/610255.

Abstract:
Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may well not be the natural one for many neural oscillator models. For example, the popular conductance based Morris-Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description that allows one to track the evolution of distance from the cycle as well as phase on cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase-amplitude description is shown to be very well suited to understanding the response of the oscillator to external stimuli (which are not necessarily weak). We consider a number of examples of neural oscillator models, ranging from planar through to high dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an explicit application of this phase-amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.
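The phase-amplitude idea described above can be illustrated on a normal-form oscillator in which the two coordinates decouple. This is a minimal sketch under our own parameter choices, not one of the paper's conductance-based models:

```python
def oscillator(r0, theta0, T=20.0, dt=0.001, kappa=1.0, omega=1.0):
    """Normal-form oscillator in phase-amplitude coordinates:
    r' = kappa*r*(1 - r) relaxes the amplitude onto the limit cycle r = 1,
    theta' = omega advances the phase uniformly.  A kick therefore leaves a
    permanent phase shift but only a transient amplitude deviation."""
    r, theta = r0, theta0
    for _ in range(int(T / dt)):
        r += dt * kappa * r * (1.0 - r)
        theta += dt * omega
    return r, theta

r_on, th_on = oscillator(1.0, 0.0)      # trajectory starting on the cycle
r_off, th_off = oscillator(1.5, 0.3)    # kicked in both amplitude and phase
```

After the transient, both trajectories sit on the cycle, but the phase offset introduced by the kick persists forever; tracking the amplitude coordinate explicitly is what the phase-amplitude framework adds over a pure phase reduction.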
42

Yau, Hon Wah. "Phase space techniques in neural network models." Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/14713.

Abstract:
We present here two calculations based on the phase-space of interactions treatment of neural network models. As a way of introduction we begin by discussing the type of neural network models we wish to study, and the analytical techniques available to us from the branch of disordered systems in statistical mechanics. We then detail a neural network which models a content addressable memory, and sketch the mathematical methods we shall use. The model is a mathematical realisation of a neural network with its synaptic efficacies optimised in its phase space of interactions through some training function. The first model looks at how the basin of attraction of such a content addressable memory can be enlarged by the use of noisy external fields. These fields are used separately during the training and retrieval phases, and their influences compared. Expressed in terms of the number of memory patterns which the network's dynamics can retrieve with a microscopic initial overlap, we shall show that content addressability can be substantially improved. The second calculation concerns the use of dual distribution functions for two networks with different constraints on their synapses, but required to store the same set of memory patterns. This technique allows us to see how the two networks accommodate the demands imposed on them, and whether they arrive at radically different solutions. The problem we choose is aimed at, and eventually succeeds in, resolving a paradox in the sign-constrained model.
43

Lineaweaver, Sean Kenneth Ridgway. "Dynamic spiral lumped element model of electrical field distribution and neural excitation in the implanted cochlea /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/6092.

44

Herrington, William Frederick Jr. "Micro-optic elements for a compact opto-electronic integrated neural coprocessor." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97800.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 165-167).
The research done for this thesis was aimed at developing the optical elements needed for the Compact Opto-electronic Integrated Neural coprocessor (COIN coprocessor) project. The COIN coprocessor is an implementation of a feed forward neural network using free-space optical interconnects to communicate between neurons. Prior work on this project had assumed these interconnects would be formed using Holographic Optical Elements (HOEs), so early work for this thesis was directed along these lines. Important limits to the use of HOEs in the COIN system were identified and evaluated. In particular, the problem of changing wavelength between the hologram recording and readout steps was examined and it was shown that there is no general solution to this problem when the hologram to be recorded is constructed with more than two plane waves interfering with each other. Two experimental techniques, the holographic bead lens and holographic liftoff, were developed as partial workarounds to the identified limitations. As an alternative to HOEs, an optical element based on the concept of the Fresnel Zone Plate was developed and experimentally tested. The zone plate based elements offer an easily scalable method for fabricating the COIN optical interconnects using standard lithographic processes and appear to be the best choice for the COIN coprocessor project at this time. In addition to the development of the optical elements for the COIN coprocessor, this thesis also looks at the impact of optical element efficiency on the power consumption of the COIN coprocessor. Finally, a model of the COIN network based on the current COIN design was used to compare the performance and cost of the COIN system with competing implementations of neural networks, with the conclusion that at this time the proposed COIN coprocessor system is still a competitive option for neural network implementations.
by William Frederick Herrington Jr.
Ph. D.
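The Fresnel zone plate on which these elements are based is defined by a simple radius formula. A short sketch follows; the design values (wavelength, focal length, zone count) are hypothetical, since the thesis's actual parameters are not given here:

```python
import math

def zone_radii(lam, f, n_zones):
    """Fresnel zone plate boundary radii: the n-th zone edge sits where the
    path through it exceeds the axial focal distance by n half-wavelengths,
    giving r_n = sqrt(n*lam*f + (n*lam/2)**2)."""
    return [math.sqrt(n * lam * f + (n * lam / 2.0) ** 2)
            for n in range(1, n_zones + 1)]

# Hypothetical design: 850 nm light, 1 mm focal length, 8 zones
radii = zone_radii(850e-9, 1e-3, 8)
```

For focal lengths much larger than the wavelength the radii grow essentially as sqrt(n), which is why such plates scale easily under standard lithographic processes.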
45

Fischer, Shain Ann. "A Three-Dimensional Anatomically Accurate Finite Element Model for Nerve Fiber Activation Simulation Coupling." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1365.

Abstract:
Improved knowledge of human nerve function and recruitment would enable innovation in the Biomedical Engineering field. Better understanding holds the potential for greater integration between devices and the nervous system as well as the ability to develop therapeutic devices to treat conditions affecting the nervous system. This work presents a three-dimensional volume conductor model of the human arm for coupling with code describing nerve membrane characteristics. The model utilizes an inhomogeneous medium composed of bone, muscle, skin, nerve, artery, and vein. Dielectric properties of each tissue were collected from the literature and applied to corresponding material subdomains. Both a fully anatomical version and a simplified version are presented. The computational model for this study was developed in COMSOL and formatted to be coupled with SPICE netlist code. Limitations to this model due to computational power as well as future work are discussed. The final model incorporated both anatomically correct geometries and simplified geometries to enhance computational power. A stationary study was performed implementing a boundary current source through the surface of a conventionally placed electrode. Results from the volume conductor study are presented and validated through previous studies.
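The simplest analytical check for a volume-conductor model of this kind is the monopole potential in an infinite homogeneous medium. This is a textbook formula, not the thesis's COMSOL model; the tissue conductivity below is illustrative:

```python
import math

def monopole_potential(I, sigma, r):
    """Potential of a point current source in an infinite homogeneous
    volume conductor: phi = I / (4*pi*sigma*r).  Useful as an analytical
    sanity check for finite element solutions."""
    return I / (4.0 * math.pi * sigma * r)

# 1 mA source in muscle-like tissue (sigma ~ 0.3 S/m), 1 cm away
phi = monopole_potential(1e-3, 0.3, 0.01)
```

The 1/r fall-off gives an easy regression target: doubling the distance should halve the computed potential, a property an FEM solution can be validated against far from boundaries.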
46

Israeli, Yeshayahu D. "Whitney Element Based Priors for Hierarchical Bayesian Models." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1621866603265673.

47

Sugden, Frank Daniel. "A NOVEL DUAL MODELING METHOD FOR CHARACTERIZING HUMAN NERVE FIBER ACTIVATION." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1318.

Abstract:
Presented in this work is the investigation and successful illustration of a coupled model of the human nerve fiber. SPICE netlist code was utilized to describe the electrical properties of the human nerve membrane in tandem with COMSOL Multiphysics, a finite element analysis software tool. The initial research concentrated on the utilization of the Hodgkin-Huxley electrical circuit representation of the nerve fiber membrane. Further development of the project identified the need for a linear circuit model that more closely resembled the McNeal linearization model augmented by the work of Szlavik, which better facilitated the coupling of the SPICE and COMSOL programs. Related literature was investigated and applied to validate the model. This combination of analysis tools allowed for the presentation of a consistent model and revealed that a coupled model produced not only a qualitatively comparable but also a quantitatively comparable result to studies presented in the literature. All potential profiles produced during the simulation were compared against the literature, in keeping with the aim of presenting an advanced computational model of human neural recruitment and excitation. It was demonstrated through this process that the correct usage of neuron models within a two-dimensional conductive space allowed for the approximate modeling of human neural electrical characteristics.
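The linearized membrane description underlying McNeal-type models reduces, for a single patch, to a leaky RC element. The sketch below is illustrative (arbitrary units), not the thesis's SPICE/COMSOL implementation:

```python
import numpy as np

def passive_membrane(I, C=1.0, G=0.1, T=50.0, dt=0.01):
    """Leaky RC membrane patch, C*dV/dt = -G*V + I: the linear circuit
    element underlying McNeal-style linearized fiber models."""
    V, n = 0.0, int(T / dt)
    Vs = np.empty(n)
    for i in range(n):
        V += dt * (I - G * V) / C
        Vs[i] = V
    return Vs

# Step response charges toward the steady state V = I/G with tau = C/G
Vs = passive_membrane(I=1.0)
```

Because the element is linear, responses to the extracellular potentials computed in a field solver superpose, which is what makes this kind of coupled SPICE/FEM scheme tractable.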
48

Wu, Sui Vang. "The use of neural networks in financial models." Thesis, University of Macau, 2001. http://umaclib3.umac.mo/record=b1636268.

49

Esnaola, Acebes Jose M. "Patterns of spike synchrony in neural field models." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663871.

Abstract:
Neural mean-field models are phenomenological descriptions of the activity of spatially organized networks of neurons. Thanks to their simplicity, these models are extremely useful tools for the analysis of the spatiotemporal patterns that appear in neuronal networks, and they are widely used in computational neuroscience. It is well known that traditional mean-field models do not adequately describe the dynamics of networks of neurons when these act synchronously. Nevertheless, computational simulations of neuronal networks show that, even in highly asynchronous states, fast fluctuations of the common inputs arriving at the neurons can cause transient periods in which the neurons of the network behave synchronously. Moreover, synchronization can also be generated by the network itself, giving rise to self-sustained oscillations. In this thesis we investigate the presence of spatiotemporal patterns due to synchronization in heterogeneous, spatially distributed networks of neurons. These patterns are not observed in traditional mean-field models, and for this reason they have been largely ignored in the literature. In order to investigate the dynamics induced by the synchronized activity of neurons, we use a new mean-field model that is exactly derived from a population of quadratic integrate-and-fire neurons. The simplicity of the model allows us to analyse the stability of the network in terms of the spatial profile of the synaptic connectivity, and to obtain exact formulas for the stability boundaries that characterize the dynamics of the original neuronal network. The same analysis also reveals the existence of a set of oscillation modes that are exclusively due to the synchronized activity of the neurons. We believe that the results presented in this thesis will inspire new theoretical advances on the collective dynamics of neuronal networks, thereby contributing to the development of computational neuroscience.
Neural field models are phenomenological descriptions of the activity of spatially organized, recurrently coupled neuronal networks. Due to their mathematical simplicity, such models are extremely useful for the analysis of spatiotemporal phenomena in networks of spiking neurons, and are widely used in computational neuroscience. Nevertheless, it is well known that traditional neural field descriptions fail to describe the collective dynamics of networks of synchronously spiking neurons. Yet, numerical simulations of networks of spiking neurons show that, even in the case of highly asynchronous activity, fast fluctuations in the common external inputs drive transient episodes of spike synchrony. Moreover, synchronization may also be generated by the network itself, resulting in the appearance of robust, large-scale, self-sustained oscillations. In this thesis, we investigate the emergence of synchrony-induced spatiotemporal patterns in spatially distributed networks of heterogeneous spiking neurons. These patterns are not observed in traditional neural field theories and have been largely overlooked in the literature. To investigate synchrony-induced phenomena in neuronal networks, we use a novel neural field model which is exactly derived from a large population of quadratic integrate-and-fire model neurons. The simplicity of the neural field model allows us to analyze the stability of the network in terms of the spatial profile of the synaptic connectivity, and to obtain exact formulas for the stability boundaries characterizing the dynamics of the original spiking neuronal network. Remarkably, the analysis also reveals the existence of a collection of oscillation modes which are exclusively due to spike synchronization. We believe that the results presented in this thesis will foster theoretical advances on the collective dynamics of neuronal networks, strengthening the mathematical basis of computational neuroscience.
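For a single non-spatial population with Lorentzian-distributed excitabilities, the exact QIF firing-rate equations mentioned in this abstract take a simple planar form (as derived by Montbrió, Pazó and Roxin). The parameters below are our own illustrative choices, not values from the thesis:

```python
import numpy as np

def qif_mean_field(eta_bar=-1.0, delta=1.0, J=1.0, T=40.0, dt=1e-3):
    """Exact macroscopic equations for an all-to-all network of quadratic
    integrate-and-fire neurons with Lorentzian-distributed excitabilities:
        r' = delta/pi + 2*r*v
        v' = v**2 + eta_bar + J*r - (pi*r)**2
    where r is the population firing rate and v the mean voltage."""
    r, v = 0.1, 0.0
    traj = np.empty((int(T / dt), 2))
    for n in range(traj.shape[0]):
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta_bar + J * r - (np.pi * r) ** 2
        r, v = r + dt * dr, v + dt * dv
        traj[n] = r, v
    return traj

traj = qif_mean_field()   # with these parameters, relaxes to a low-rate state
```

Unlike traditional rate models, these equations retain the (pi*r)**2 term that encodes spike synchrony, which is what lets the spatial version of the model capture the synchrony-induced oscillation modes described above.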
50

Nandeshwar, Ashutosh R. "Models for calculating confidence intervals for neural networks." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4600.

Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains x, 65 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 62-65).