Dissertations / Theses on the topic 'NEURONS NEURAL NETWORK'


1

Voysey, Matthew David. "Inexact analogue CMOS neurons for VLSI neural network design." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264387.

2

Lukashev, A. "Basics of artificial neural networks (ANNs)." Thesis, Київський національний університет технологій та дизайну, 2018. https://er.knutd.edu.ua/handle/123456789/11353.

3

Schmidt, Peter H. (Peter Harrison). "The transfer characteristic of neurons in a pulse-code neural network." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14594.

4

Brady, Patrick. "Internal representation and biological plausibility in an artificial neural network." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311273.

5

Hunter, Russell I. "Improving associative memory in a network of spiking neurons." Thesis, University of Stirling, 2011. http://hdl.handle.net/1893/6177.

Abstract:
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, the first of which appeared over 40 years ago. Within these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories are recalled when a noisy or partial cue is presented to the network. The type of information such models can store is quite limited by the simplicity of their hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based upon some of the many details now known about the neuronal circuitry of the CA3 region. We implemented the model in computer software using Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons.

A principal cell type of the cortex is the pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally far fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001), and we consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve its recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. First, applying localised disynaptic inhibition, which proportionalises the excitatory post-synaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise network activity. Second, adding a persistent sodium channel to the cell body, which non-linearises the activation threshold: beyond a given membrane potential the amplitude of the excitatory post-synaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold.

Finally, incorporating spatial characteristics of the dendritic tree, which allows a greater probability that a modified synapse exists after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses to simulate the loss of potential in synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and for their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the roles of inhibition and cellular dynamics are pivotal in learning and memory.
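The classical auto-associative scheme the abstract builds upon (two-state units, clipped Hebbian synapses, recall from a partial cue) can be sketched in a few lines. This is a generic Willshaw-style net for illustration only, not the spiking CA3 model built in the thesis; the sizes and threshold rule are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 5, 10  # units, stored patterns, active units per pattern

# Store M sparse binary patterns with clipped (0/1) Hebbian learning,
# as in the classical Willshaw-style associative net.
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, size=K, replace=False)] = 1
W = np.clip(sum(np.outer(p, p) for p in patterns), 0, 1)

def recall(cue):
    # A unit fires only if it receives input from every active cue unit.
    return (W @ cue >= cue.sum()).astype(int)

# Recall pattern 0 from a partial cue: half of its active units.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: K // 2]] = 0
out = recall(cue)
overlap = (out & patterns[0]).sum() / K
print(f"overlap with stored pattern: {overlap:.2f}")  # 1.00 here
```

With this strict threshold, the stored pattern is recovered exactly from half its units; at higher memory loads, spurious units begin to fire, which is the limitation the thesis's biologically detailed model addresses with inhibitory mechanisms.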
6

D'Alton, S. "A Constructive Neural Network Incorporating Competitive Learning of Locally Tuned Hidden Neurons." Honours thesis, University of Tasmania, 2005. https://eprints.utas.edu.au/243/1/D%27Alton05CompetitivelyTrainedRAN.pdf.

Abstract:
Performance metrics are a driving force in many fields of work today, and the field of constructive neural networks is no different. In this field, the popular measurement metrics (resultant network size, test set accuracy) are difficult to maximise given their dependence on several varied factors, the most important of which is the dataset to be applied. This project set out to minimise the number of hidden units installed into a resource allocating network (RAN) (Platt 1991), while increasing accuracy by applying competitive learning techniques. Three datasets were used to evaluate the hypothesis: one a time-series set, the other two more general regression sets. Many trials were conducted during the period of this work in order to establish the results conclusively. Each trial differed in only one respect from another, to maximise the comparability of the results. Four metrics were recorded for each trial: network size (per training epoch, and final), test and training set accuracy (again, per training epoch and final), and overall trial runtime. The results indicate that applying competitive learning algorithms to the RAN yields a considerable reduction in network size (and the associated reduction in processing time) across the vast majority of the trials run. Inspection of the accuracy-related metrics indicated that this method offered no real difference from the original implementation of the RAN. As such, the positive network-size results are only half of the bigger picture, leaving scope for future work to increase test set accuracy.
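Platt's resource-allocating network, which this thesis extends with competitive learning, installs a new radial-basis hidden unit only when an input is both novel (far from existing centres) and poorly predicted; otherwise it performs an LMS weight update. A minimal sketch of that allocation rule follows; the parameter values are illustrative, not those used by Platt or in the thesis.

```python
import numpy as np

class RAN:
    """Toy resource-allocating network (Platt 1991): grow-when-needed RBF."""

    def __init__(self, delta=0.5, epsilon=0.05, kappa=0.8, lr=0.05):
        self.centres, self.widths, self.weights = [], [], []
        self.bias = 0.0
        self.delta, self.epsilon, self.kappa, self.lr = delta, epsilon, kappa, lr

    def _phi(self, x):
        # Gaussian hidden-unit activations for input x.
        return np.array([np.exp(-np.sum((x - c) ** 2) / w ** 2)
                         for c, w in zip(self.centres, self.widths)])

    def predict(self, x):
        return self.bias + (self._phi(x) @ np.array(self.weights)
                            if self.centres else 0.0)

    def train_step(self, x, y):
        err = y - self.predict(x)
        dist = (min(np.linalg.norm(x - c) for c in self.centres)
                if self.centres else np.inf)
        if dist > self.delta and abs(err) > self.epsilon:
            # Novelty criteria met: allocate a unit centred on the input.
            self.centres.append(x.copy())
            self.widths.append(self.kappa * dist if np.isfinite(dist) else self.delta)
            self.weights.append(err)
        else:
            # Otherwise, LMS update of the output weights and bias.
            phi = self._phi(x)
            self.weights = list(np.array(self.weights) + self.lr * err * phi)
            self.bias += self.lr * err

net = RAN()
xs = np.linspace(0, 2 * np.pi, 200)
for _ in range(5):
    for x, y in zip(xs, np.sin(xs)):
        net.train_step(np.array([x]), y)
print(f"hidden units installed: {len(net.centres)}")
```

The thesis's contribution, roughly, is to make the installed units compete so that fewer of them survive; the sketch above shows only the baseline growth rule that competition would prune.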
7

Grehl, Stephanie. "Stimulation-specific effects of low intensity repetitive magnetic stimulation on cortical neurons and neural circuit repair in vitro (studying the impact of pulsed magnetic fields on neural tissue)." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066706/document.

Abstract:
Electromagnetic fields are commonly used to stimulate the human brain non-invasively, either for therapeutic purposes or in a research context. The effects of magnetic stimulation vary with the frequency and intensity of the magnetic field, and the mechanisms involved remain unknown, all the more so for low-intensity stimulation. In this thesis, we evaluated the effects of repetitive magnetic stimulation at different frequencies delivered at low intensity (10-13 mT; Low Intensity Repetitive Magnetic Stimulation: LI-rMS) in vitro, on primary cortical cultures and on models of neural repair. In addition, we describe a methodology for building a custom-made device to stimulate cell cultures. The results show frequency-dependent effects on calcium release from intracellular stores, on cell death, on neurite outgrowth, on neural repair, on neuronal activation, and on the expression of the genes involved. In conclusion, we have shown for the first time a novel mechanism of cellular activation by low-intensity magnetic fields, occurring in the absence of action potential induction. The results underline the biological relevance of LI-rMS both on its own and in combination with the effects of high-intensity rTMS. A better understanding of the fundamental effects of LI-rMS on biological tissue is needed to develop effective therapeutic applications for neurological conditions.

Electromagnetic fields are widely used to non-invasively stimulate the human brain in clinical treatment and research. This thesis investigates the effects of different low intensity (mT) repetitive magnetic stimulation (LI-rMS) parameters on single neurons and neural networks, and describes key aspects of custom-tailored LI-rMS delivery in vitro. Our results show stimulation-specific effects of LI-rMS on cell survival, neuronal morphology, neural circuit repair and gene expression. We show novel mechanisms underlying cellular responses to stimulation below the neuronal firing threshold, extending our understanding of the fundamental effects of LI-rMS on biological tissue, which is essential to better tailor therapeutic applications.
8

Gettner, Jonathan A. "Identifying and Predicting Rat Behavior Using Neural Networks." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1513.

Abstract:
The hippocampus is known to play a critical role in episodic memory function. Understanding the relation between electrophysiological activity in the rat hippocampus and rat behavior may be helpful in studying pathological diseases that corrupt electrical signaling in the hippocampus, such as Parkinson's and Alzheimer's. Additionally, a method for interpreting rat behaviors from neural activity may help in understanding the dynamics of rat neural activity associated with certain identified behaviors. In this thesis, neural networks are used as a black-box model to map electrophysiological data, representative of an ensemble of neurons in the hippocampus, to a T-maze, wheel-running, or open-exploration behavior. The velocity and spatial coordinates of the identified behavior are then predicted using the same neurological input data that was used for behavior identification. Results show that a nonlinear autoregressive with exogenous inputs (NARX) neural network can partially distinguish between different behaviors and can generally determine the velocity and spatial position attributes of the identified behavior both inside and outside of the trained interval.
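The NARX idea relied on above is that the next output is predicted from delayed copies of both the exogenous input and the output itself. In the toy sketch below, a linear least-squares readout stands in for the neural network, and the system being identified is invented for illustration; the thesis trains an actual NARX net on recorded neural data.

```python
import numpy as np

# Predict y(t) from delayed outputs y(t-1..t-d) and delayed exogenous
# inputs u(t-1..t-d): the NARX regressor structure.
rng = np.random.default_rng(1)
T, d = 500, 3
u = rng.standard_normal(T)              # exogenous drive (toy stand-in)
y = np.zeros(T)
for t in range(1, T):                   # a simple linear system to identify
    y[t] = 0.6 * y[t - 1] + 0.3 * u[t - 1]

# Build the delayed-feature matrix and fit a least-squares readout.
rows = [np.concatenate([y[t - d:t], u[t - d:t]]) for t in range(d, T)]
X, target = np.array(rows), y[d:]
w, *_ = np.linalg.lstsq(X, target, rcond=None)

pred = X @ w
rmse = np.sqrt(np.mean((pred - target) ** 2))
print(f"one-step-ahead RMSE: {rmse:.2e}")
```

Because the toy system is linear in the delayed features, the fit is essentially exact; a real NARX network replaces the linear readout with a trained nonlinear map so the same regressor structure can capture nonlinear neural dynamics.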
9

Vissani, Matteo. "Multisensory features of peripersonal space representation: an analysis via neural network modelling." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
The peripersonal space (PPS) is the space immediately surrounding the body. It is coded in the brain in a multisensory, body-part-centered (e.g. hand-centered, trunk-centered), modular fashion. This is supported by the existence of multisensory neurons (in fronto-parietal areas) with a tactile receptive field on a specific body part (hand, arm, trunk, etc.) and a visual/auditory receptive field surrounding the same body part. Recent behavioural results (Serino et al., Sci Rep 2015), obtained using an audio-tactile paradigm, have further supported the existence of distinct PPS representations, each specific to a single body part (hand, trunk, face) and characterized by specific properties. That study also showed that the PPS representations, although distinct, are not independent. In particular, the hand-PPS loses its properties and assumes those of the trunk-PPS when the hand is close to the trunk, as if the hand-PPS were encapsulated within the trunk-PPS. Similarly, the face-PPS appears to be englobed within the trunk-PPS. It remains unclear how this interaction, which manifests behaviourally, can be implemented at a neural level by the modular organization of PPS representations. The aim of this Thesis is to propose a neural network model to help the comprehension of the underlying neurocomputational mechanisms. The model includes three subnetworks devoted to the single PPS representations around the hand, the face and the trunk. Furthermore, interaction mechanisms, controlled by proprioceptive neurons, have been postulated among the subnetworks. The network is able to reproduce the behavioural data, explaining them in terms of neural properties and responses. Moreover, the network provides some novel predictions that can be tested in vivo. One of these predictions was tested in this work, by performing an ad-hoc behavioural experiment at the Laboratory of Cognitive Neuroscience (Campus Biotech, Geneva) under the supervision of the neuropsychologist Dr Serino.
10

Yao, Yong. "A neural network in the pond snail, Planorbis corneus : electrophysiology and morphology of pleural ganglion neurons and their input neurons /." [S.l.] : [s.n.], 1986. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

11

Neocleous, Costantinos C. "A neural network architecture composed of adaptively defined dissimilar single-neurons : applications in engineering design." Thesis, Brunel University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364142.

12

Guo, Lilin. "A Biologically Plausible Supervised Learning Method for Spiking Neurons with Real-world Applications." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2982.

Abstract:
Learning is central to infusing intelligence into any biologically inspired system. This study introduces a novel Cross-Correlated Delay Shift (CCDS) learning method for spiking neurons with the ability to learn and reproduce arbitrary spike patterns in a supervised fashion, with applicability to spatiotemporal information encoded in the precise timing of spikes. By integrating the cross-correlated term, axonal delays and synapse delays, the CCDS rule is proven to be both biologically plausible and computationally efficient. The proposed learning algorithm is evaluated in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, the effects of its learning parameters, and classification performance. The results indicate that the proposed CCDS learning rule greatly improves classification accuracy compared with the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. Network structure is a crucial part of any application domain of Artificial Spiking Neural Networks (ASNNs). Thus, temporal learning rules in multilayer spiking neural networks are investigated. As an extension of the single-layer learning rule, the multilayer CCDS (MutCCDS) is also developed. Correlated neurons are connected through fine-tuned weights and delays. In contrast to the multilayer Remote Supervised Method (MutReSuMe) and the multilayer Tempotron rule (MutTmptr), the newly developed MutCCDS shows better generalization ability and faster convergence. The proposed multilayer rules provide an efficient and biologically plausible mechanism, describing how delays and synapses in multilayer networks are adjusted to facilitate learning. Interictal spikes (IS) are morphologically defined brief events observed in electroencephalography (EEG) records from patients with epilepsy.

The detection of IS remains an essential task for 3D source localization as well as for developing algorithms for seizure prediction and guided therapy. In this work, we present a new IS detection method using the Wavelet Encoding Device (WED) together with the CCDS learning rule and a specially designed Spiking Neural Network (SNN) structure. The results confirm the ability of such an SNN to achieve good performance in automatically detecting such events from multichannel EEG records.
13

Ortman, Robert L. "Sensory input encoding and readout methods for in vitro living neuronal networks." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44856.

Abstract:
Establishing and maintaining successful communication stands as a critical prerequisite for achieving the goals of inducing and studying advanced computation in small-scale living neuronal networks. The following work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20 000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), comprised of different spatio-temporal patterns of electrical stimulation, to which the LNN consistently produces unique responses to each individual symbol. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method, and employing the separability results as a fitness metric, a genetic algorithm subsequently constructs subsets of highly separable symbols (input patterns). Sustainable input/output (I/O) bit rates of 16-20 bits per second with a 10% symbol error rate resulted for time periods of approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns that the algorithm selected and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations over the course of several hours observed in the results of the sine wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes in the context of repeated stimulation with sensory coding patterns identified by the algorithm.
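The core of the coding algorithm described above is a separability score over candidate stimulation symbols. As a rough sketch of that idea, the toy below scores simulated evoked responses with a leave-one-out nearest-centroid classifier standing in for the SVM used in the thesis; the dimensions, trial counts and noise level are all invented for illustration.

```python
import numpy as np

# Each candidate symbol evokes noisy response vectors; separability is
# how reliably responses can be assigned back to their evoking symbol.
rng = np.random.default_rng(2)
n_symbols, n_trials, dim = 8, 30, 60   # dim ~ one feature per MEA electrode

prototypes = rng.standard_normal((n_symbols, dim))
responses = (prototypes[:, None, :]
             + 0.5 * rng.standard_normal((n_symbols, n_trials, dim)))

def separability(symbols):
    """Leave-one-trial-out nearest-centroid accuracy over chosen symbols."""
    correct = total = 0
    for s in symbols:
        for t in range(n_trials):
            centroids = []
            for s2 in symbols:
                keep = (responses[s2] if s2 != s
                        else np.delete(responses[s2], t, axis=0))
                centroids.append(keep.mean(axis=0))
            dists = [np.linalg.norm(responses[s, t] - c) for c in centroids]
            correct += symbols[int(np.argmin(dists))] == s
            total += 1
    return correct / total

acc = separability(list(range(n_symbols)))
print(f"decoding accuracy over {n_symbols} symbols: {acc:.2f}")
```

In the thesis this kind of score serves as the fitness metric for a genetic algorithm that searches for subsets of highly separable stimulation patterns; here the score is simply evaluated once on all symbols.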
14

Piccinini, Nicola. "Interacting complex systems: theory and application to real-world situations." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1011847/.

Abstract:
The interest in complex systems has increased exponentially in recent years because they have proved helpful in addressing many of today's challenges. The study of the brain, biology, earthquakes, markets and the social sciences are only a few examples of fields that have benefited from the investigation of complex systems. The Internet, the increased mobility of people and rising energy demand are among the factors that have brought into contact complex systems that were isolated until a few years ago. A theory of the interaction between complex systems is becoming more and more urgent to help mankind through this transition. The present work builds upon the most recent results in this field by solving a theoretical problem that prevented previous work from being applied to important complex systems, such as the brain. It also shows preliminary laboratory results from perturbations of in vitro neural networks performed to test the theory. Finally, it gives a preview of the studies being done to create a theory even closer to the interaction between real complex systems.
15

Wang, Xueying. "Cumulative Single-cell Laser Ablation of Functionally or Genetically Defined Respiratory Neurons Interrogates Network Properties of Mammalian Breathing-related Neural Circuits in vitro." W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539623609.

Abstract:
A key feature of many neurodegenerative diseases is the pathological loss of neurons that participate in generating behavior. To mimic the degeneration of a functioning neural circuit, we designed a computer-automated system that algorithmically detects and sequentially laser-ablates constituent neurons of a neural network with single-cell precision while monitoring the progressive change in network function in real time. We applied this cell-specific cumulative lesion technique to an advantageous experimental model, the preBotzinger Complex (preBotC), the mammalian respiratory central pattern generator (CPG), which can be retained in thin slice preparations and spontaneously generates breathing-related motor activity in vitro. We sought to investigate how many neurons are necessary for generating respiratory behavior in vitro. This question pertains to whether and how progressive cell destruction impairs, and possibly precludes, behaviorally relevant network function. Our ablation system identifies rhythm-generating interneurons in the preBotC based on genetically encoded fluorescent protein markers or imaged Ca2+ activity patterns, stores their physical locations in memory, and then laser-ablates the neuron targets one at a time in random sequence, while continuously measuring changes in respiratory motor output electrophysiologically via the hypoglossal (XII) nerve in vitro. A critical feature of the system is a custom software package dubbed Ablator (written in Python) that detects cell targets, controls stage translation, focuses the laser, and implements the spot-lesion protocol automatically.

Experiments are typically carried out in three steps: 1) define the lesion domain and initialize the system; 2) run image acquisition and target-detection algorithms and map the populations of respiratory neurons in the bilateral volumes of the slice; 3) determine the order of lesions and then spot-lesion target neurons sequentially until all targets are exhausted. Here we show that selectively and cumulatively deleting rhythmically active inspiratory neurons detected via Ca2+ imaging in the preBotC progressively decreases respiratory frequency and the amplitude of motor output. On average, the deletion of 120 ± 45 neurons stopped the spontaneous respiratory rhythm, and our data suggest that ∼82% of the rhythm-generating neurons remain unlesioned. Similarly, destruction of 85 ± 45 homeodomain transcription factor Dbx1-derived (Dbx1+) neurons, hypothesized to comprise the rhythmogenic core of the respiratory CPG in the preBotC, likewise precludes respiratory motor behavior in vitro. These two estimates of the size of the critical rhythmogenic core can be reconciled by considering that the Ca2+ imaging method identifies ∼50% inhibitory neurons, which are found in the preBotC but are not rhythmogenic, whereas the Dbx1+ marker identifies only excitatory rhythmogenic neurons. Serial ablations in other medullary respiratory regions did not affect frequency, but diminished the amplitude of motor output to a lesser degree. These data support the hypothesis that cumulative single-cell ablations cross a critical threshold, after which rhythm generation in the respiratory network is unsustainable. Furthermore, this study provides a novel measurement that can help quantify network properties of the preBotC and gauge its susceptibility to failure.

Our results in turn may help explain respiratory failure in patients with neurodegenerative diseases that cause progressive cell death in the brainstem respiratory networks.
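The critical-threshold hypothesis can be caricatured in a few lines: rhythm persists while the surviving cells' summed recurrent drive stays above a critical level, then fails abruptly as ablation continues. The toy below is purely illustrative and bears no quantitative relation to the preBotC measurements reported above.

```python
import numpy as np

# Toy cumulative-ablation model: each cell contributes some excitatory
# drive; the network "bursts" only while total drive exceeds a critical
# fraction of its intact value. All numbers are invented.
rng = np.random.default_rng(3)
n = 600
drive = rng.uniform(0.5, 1.5, size=n)     # per-cell contribution to rhythm
threshold = 0.7 * drive.sum()             # critical fraction of intact drive

order = rng.permutation(n)                # random ablation sequence
remaining = drive.sum()
for k, cell in enumerate(order, start=1):
    remaining -= drive[cell]              # delete one cell at a time
    if remaining < threshold:             # rhythm fails past the threshold
        break
print(f"rhythm failed after ablating {k} of {n} cells")
```

Even this caricature reproduces the qualitative finding: function degrades gradually, then collapses once a critical minority of contributing cells has been deleted.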
16

Xu, Shuxiang. "Neuron-adaptive neural network models and applications." Thesis, University of Western Sydney, Faculty of Informatics, Science and Technology, 1999. http://handle.uws.edu.au:8081/1959.7/275.

Abstract:
Artificial Neural Networks have been widely investigated by researchers worldwide to address problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNNs) with a new neuron activation function called the Neuron-adaptive Activation Function (NAF), and with Feed-forward Higher Order Neural Networks (HONNs) using this new activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy; in the neural network literature, only Zhang had proved the universal approximation ability of an FNN group to any piecewise continuous function. Next, we developed the approximation properties of Neuron-Adaptive Higher Order Neural Networks (NAHONNs), a combination of HONNs and the NAF, to any continuous function, functional and operator. Finally, we created a software program called MASFinance, which runs on the Solaris system, for the approximation of continuous or discontinuous functions and for the simulation of continuous or discontinuous data (especially financial data). Our work distinguishes itself from previous work in the following ways: we use a new neuron-adaptive activation function, whereas the neuron activation functions in most existing work are fixed and cannot be tuned to different approximation problems; we use only one NANN to approximate any piecewise continuous function, whereas previous research required a neural network group; we combine HONNs with the NAF and investigate the approximation properties to any continuous function, functional, and operator; and we present a new software program, MASFinance, for function approximation and data simulation.

Experiments running MASFinance indicate that the proposed NANNs offer several advantages over traditional neuron-fixed networks (such as greatly reduced network size, faster learning, and reduced simulation error), and that they can approximate piecewise continuous functions more effectively than neural network groups. Experiments also indicate that NANNs are especially suitable for data simulation.
Doctor of Philosophy (PhD)
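The central idea of a neuron-adaptive activation function is that the activation's own shape parameters are trained along with the rest of the network, instead of being fixed. A minimal single-unit sketch follows; the functional form a·sigmoid(b·x) + c and all values below are illustrative stand-ins, not the NAF defined in the thesis.

```python
import numpy as np

# A parameterised activation whose shape (a, b, c) is itself trainable.
def act(x, a, b, c):
    return a / (1.0 + np.exp(-b * x)) + c

xs = np.linspace(-3, 3, 100)
# Target generated by a = 2, b = 1.5, c = -0.5: recoverable by training.
target = 2.0 / (1.0 + np.exp(-1.5 * xs)) - 0.5

a, b, c, lr = 1.0, 1.0, 0.0, 0.1
for _ in range(2000):
    out = act(xs, a, b, c)
    err = out - target
    sig = 1.0 / (1.0 + np.exp(-b * xs))
    a -= lr * np.mean(err * sig)                        # d out / d a
    b -= lr * np.mean(err * a * sig * (1 - sig) * xs)   # d out / d b
    c -= lr * np.mean(err)                              # d out / d c

mse = np.mean((act(xs, a, b, c) - target) ** 2)
print(f"fitted (a, b, c) = ({a:.2f}, {b:.2f}, {c:.2f}), MSE = {mse:.2e}")
```

A fixed sigmoid could never match this target exactly; letting gradient descent reshape the activation itself is what allows a single adaptive unit (and, in the thesis, a single NANN) to fit a wider class of functions.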
17

Weaver, Adam L. "The functional roles of the Lateral Pyloric and Ventricular Dilator neurons in the pyloric network of the lobster, Panulirus interruptus." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1010521587.

18

Kuebler, Eric Stephen. "Harnessing the Variability of Neuronal Activity: From Single Neurons to Networks." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37855.

Abstract:
Neurons and networks of the brain may use various strategies of computation to provide the neural substrate for sensation, perception, or cognition. To simplify the scenario, two of the most commonly cited neural codes are firing rate and temporal coding, whereby firing rates are typically measured over a longer duration of time (i.e., seconds or minutes), and temporal codes use shorter time windows (i.e., 1 to 100 ms). However, it is possible that neurons may use other strategies. Here, we highlight three methods of computation that neurons, or networks, of the brain may use to encode and/or decode incoming activity. First, we explain how single neurons of the brain can utilize a neuronal oscillation, specifically by employing a ‘spike-phase’ code wherein responses to stimuli have greater reliability, in turn increasing the ability to discriminate between stimuli. Our focus was to explore the limitations of spike-phase coding, including the assumptions of low firing rates and precise timing of action potentials. Second, we examined the ability of single neurons to track the onset of network bursting activity, namely ‘burst predictors’. In addition, we show that burst predictors were less susceptible to an in vitro model of neuronal stroke (i.e., excitotoxicity). Third, we discuss the possibility of distributed processing with neuronal networks of the brain. Specifically, we show experimental and computational evidence supporting the possibility that the population activity of cortical networks may be useful to downstream classification. Furthermore, we show that when network activity is highly variable across time, there is an increase in the ability to linearly separate the spiking activity of various networks. Overall, we use the results of both experimental and computational methods to highlight three strategies of computation that neurons and networks of the brain may employ.
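A spike-phase code of the kind discussed above can be illustrated by decoding a stimulus from the mean phase at which spikes fall on a reference oscillation. Everything below (the two stimulus labels, their preferred phases, the jitter level) is invented for illustration, not taken from the thesis.

```python
import numpy as np

# 'Spike-phase' readout sketch: each stimulus drives spikes at a
# characteristic phase of an ongoing oscillation; decoding compares the
# circular mean phase of a trial's spikes to each stimulus's preference.
rng = np.random.default_rng(4)
preferred = {"A": 0.5 * np.pi, "B": 1.5 * np.pi}  # stimulus-specific phases

def spike_phases(stim, n_spikes=50, jitter=0.3):
    # Spikes cluster around the stimulus's preferred phase (radians).
    return preferred[stim] + jitter * rng.standard_normal(n_spikes)

def mean_phase(phases):
    # Circular mean via the resultant vector.
    return np.angle(np.mean(np.exp(1j * phases))) % (2 * np.pi)

def decode(phases):
    m = mean_phase(phases)
    # Circular distance to each stimulus's preferred phase.
    dist = {s: abs(np.angle(np.exp(1j * (m - p)))) for s, p in preferred.items()}
    return min(dist, key=dist.get)

trials = [("A", decode(spike_phases("A"))) for _ in range(50)]
trials += [("B", decode(spike_phases("B"))) for _ in range(50)]
acc = np.mean([true == guess for true, guess in trials])
print(f"phase-decoding accuracy: {acc:.2f}")
```

The reliability gain the thesis examines shows up here too: averaging many jittered spike phases yields a far more stable estimate than any single spike time, which is what makes phase a usable code at low firing rates.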
19

Cottens, Pablo Eduardo Pereira de Araujo. "Development of an artificial neural network architecture using programmable logic." Universidade do Vale do Rio dos Sinos, 2016. http://www.repositorio.jesuita.org.br/handle/UNISINOS/5411.

Abstract:
Submitted by Silvana Teresinha Dornelles Studzinski (sstudzinski) on 2016-06-29T14:42:16Z No. of bitstreams: 1 Pablo Eduardo Pereira de Araujo Cottens_.pdf: 1315690 bytes, checksum: 78ac4ce471c2b51e826c7523a01711bd (MD5)
Made available in DSpace on 2016-06-29T14:42:16Z (GMT). No. of bitstreams: 1 Pablo Eduardo Pereira de Araujo Cottens_.pdf: 1315690 bytes, checksum: 78ac4ce471c2b51e826c7523a01711bd (MD5) Previous issue date: 2016-03-07
Currently, modern Artificial Neural Networks (ANNs), according to their complexity, require a workstation for processing all their input data. This type of processing architecture requires that the field device be located somewhere in the vicinity of a workstation, in case real-time processing is required, or that the field device at hand have the sole task of collecting data for future processing. This project creates a generic neuron architecture in programmable logic, where Artificial Neural Networks can use the parallel nature of FPGAs to execute applications quickly, albeit not at the same resolution for their outputs. This work shows that the use of programmable logic for the implementation of low bit-resolution ANNs is not only viable but that the neural network, due to its parallel nature, benefits greatly from the hardware implementation, giving fast and accurate results.
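To illustrate the low bit-resolution trade-off discussed in the abstract, a software sketch (hypothetical, not the thesis's hardware design; `quantize` and `neuron` are names we introduce) compares a floating-point neuron with one whose operands are rounded to a signed fixed-point format, as an FPGA datapath would:

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    # Round to the nearest level of a signed fixed-point format with `bits` bits.
    levels = 2 ** (bits - 1)
    step = full_scale / levels
    return np.clip(np.round(np.asarray(x) / step), -levels, levels - 1) * step

def neuron(inputs, weights, bias, bits=None):
    # One generic neuron: multiply-accumulate followed by a tanh activation.
    # With `bits` set, all operands are quantized first, as a hardware datapath would.
    if bits is not None:
        inputs = quantize(inputs, bits)
        weights = quantize(weights, bits)
        bias = quantize(bias, bits)
    return float(np.tanh(np.dot(weights, inputs) + bias))

x = [0.5, -0.25, 0.75]
w = [0.5, 0.25, -0.5]
full = neuron(x, w, 0.125)          # full floating-point precision
low = neuron(x, w, 0.125, bits=8)   # 8-bit fixed-point operands
```

For well-scaled inputs the 8-bit result stays close to the floating-point one, which is the viability claim the abstract makes.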
APA, Harvard, Vancouver, ISO, and other styles
20

Xu, Shuxiang. "Neuron-adaptive neural network models and applications." Thesis, [Campbelltown, N.S.W. : The Author], 1999. http://handle.uws.edu.au:8081/1959.7/275.

Full text
Abstract:
Artificial Neural Networks have been widely investigated by researchers worldwide to address problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNNs) with a new neuron activation function called the Neuron-adaptive Activation Function (NAF), and Feed-forward Higher Order Neural Networks (HONNs) with this new neuron activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy. In the neural network literature, only Zhang proved the universal approximation ability of an FNN group for any piecewise continuous function. Next, we have developed the approximation properties of Neuron-Adaptive Higher Order Neural Networks (NAHONNs), a combination of HONNs and NAF, for any continuous function, functional and operator. Finally, we have created a software program called MASFinance, which runs on Solaris, for the approximation of continuous or discontinuous functions and for the simulation of any continuous or discontinuous data (especially financial data). Our work distinguishes itself from previous work in the following ways: we use a new neuron-adaptive activation function, while the neuron activation functions in most existing work are fixed and cannot be tuned to adapt to different approximation problems; we use only one NANN to approximate any piecewise continuous function, while a neural network group had to be utilised in previous research; we combine HONNs with NAF and investigate their approximation properties for any continuous function, functional, and operator; and we present a new software program, MASFinance, for function approximation and data simulation.
Experiments running MASFinance indicate that the proposed NANNs present several advantages over traditional fixed-neuron networks (such as greatly reduced network size, faster learning, and smaller simulation errors), and that the suggested NANNs can approximate piecewise continuous functions more effectively than neural network groups. Experiments also indicate that NANNs are especially suitable for data simulation.
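As a toy illustration of the neuron-adaptive idea (the thesis's exact NAF form is not reproduced here, so this parameterization is assumed): an activation with trainable parameters can reduce to qualitatively different fixed activations, which is what lets a single network adapt its shape to the approximation problem.

```python
import numpy as np

def naf(x, a1, b1, a2, b2):
    # A hypothetical neuron-adaptive activation: a tunable mix of a sigmoid and a
    # Gaussian bump. The four parameters would be trained along with the weights,
    # so one functional form can mimic qualitatively different fixed activations.
    x = np.asarray(x, dtype=float)
    return a1 / (1.0 + np.exp(-b1 * x)) + a2 * np.exp(-b2 * x * x)

sigmoid_like = naf(0.0, 1.0, 1.0, 0.0, 1.0)  # a2 = 0: reduces to a plain sigmoid
bump_like = naf(0.0, 0.0, 1.0, 1.0, 1.0)     # a1 = 0: reduces to a radial-basis bump
```

With a2 = 0 the function is a scaled sigmoid (value 0.5 at x = 0); with a1 = 0 it is a Gaussian bump (value 1.0 at x = 0).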
APA, Harvard, Vancouver, ISO, and other styles
21

Xu, Shuxiang. "Neuron-adaptive neural network models and applications /." [Campbelltown, N.S.W. : The Author], 1999. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030702.085320/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Andersson, Aron, and Shabnam Mirkhani. "Portfolio Performance Optimization Using Multivariate Time Series Volatilities Processed With Deep Layering LSTM Neurons and Markowitz." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273617.

Full text
Abstract:
The stock market is a non-linear field, but many of the best-known portfolio optimization algorithms are based on linear models. In recent years, the rapid development of machine learning has produced flexible models capable of complex pattern recognition. In this paper, we propose two different methods of portfolio optimization: one based on the development of a multivariate time-dependent neural network, the long short-term memory (LSTM), capable of finding long- and short-term price trends; the other is the linear Markowitz model, where we add an exponential moving average to the input price data to capture underlying trends. The input data to our neural network are daily prices, volumes and market indicators such as the volatility index (VIX). The output variables are the prices predicted for each asset the following day, which are then further processed to produce metrics such as expected returns, volatilities and prediction error to design a portfolio allocation that optimizes a custom utility function like the Sharpe ratio. The LSTM model produced a portfolio with a return and risk that was close to the actual market conditions for the date in question, but with a high error value, indicating that our LSTM model is insufficient as a sole forecasting tool. However, the ability to predict upward and downward trends was somewhat better than expected, and we therefore conclude that multiple neural networks can be used as indicators, each responsible for some specific aspect of what is to be analysed, to draw a conclusion from the result. The findings also suggest that the input data should be more thoroughly considered, as the prediction accuracy is enhanced by the choice of variables and the external information used for training.
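The Markowitz-with-EMA leg of the proposal can be sketched as follows (a generic illustration on simulated returns, not the paper's actual pipeline or data): smooth the price inputs with an exponential moving average, then form the classic tangency portfolio from the estimated mean and covariance.

```python
import numpy as np

def ema(series, span):
    # Exponential moving average, used here to smooth price inputs before Markowitz.
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(series))
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1.0 - alpha) * out[t - 1]
    return out

def tangency_weights(returns):
    # Classic Markowitz tangency portfolio: w proportional to inv(Cov) @ mu,
    # normalized so the weights sum to 1.
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.001, 0.01, size=(250, 3))  # 250 days, 3 assets
w = tangency_weights(daily_returns)
```

In the paper's setting the inputs would be EMA-smoothed real prices rather than simulated returns; the smoothing is what captures the underlying trends the abstract mentions.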
APA, Harvard, Vancouver, ISO, and other styles
23

Viñoles, Serra Mireia. "Dynamics of Two Neuron Cellular Neural Networks." Doctoral thesis, Universitat Ramon Llull, 2011. http://hdl.handle.net/10803/9154.

Full text
Abstract:
In this dissertation we review the stability of the two-neuron cellular neural network using Lyapunov theory, and using the different local dynamic behaviors derived from the use of the piecewise linear output function. We then study a geometrical way to understand the system dynamics. Lyapunov stability gives us the key to tackle the different convergence problems that can be studied when the CNN system converges to a fixed point. The geometric stability sheds light on the convergence to limit cycles. This work is basically organized around these two convergence classes.

We try to make an exhaustive study of cellular neural networks in order to find their intrinsic difficulties and possible uses. Understanding the CNN system in a lower dimension gives us some of the main keys to understanding the general case. That is why we focus our study on the one-dimensional CNN case with only two neurons.

From the results obtained using the Lyapunov function, we propose some methods to avoid the problem of dependence on initial conditions. Its intrinsic characteristics as a quadratic form of the output values give us the key points to find parameters for which the final outputs do not depend on initial conditions. At this point, we are able to study different CNN applications for the parameter range where the system converges to a fixed point. We start by using CNNs to reproduce Bernoulli probability distributions, based on the geometry of the Lyapunov function. Secondly, we reproduce linear functions while working inside the unit square.

The existence of the Lyapunov function allows us to construct a map, called the convergence map, depending on the CNN parameters, which relates the CNN inputs to the final outputs. This map gives us a recipe to design templates performing desired input-output associations. The results obtained lead us to the template composition problem: we study the way different templates can be applied in sequence. From the results obtained in the template design problem, we may think of finding a functional relation between the external inputs and the final outputs. Because the set of final states is discrete, thanks to the piecewise linear function, this correspondence can be thought of as a classification problem. Each of the different classes is defined by a different final state, which depends on the CNN parameters.

Next, we study which classification problems can be solved by a two-neuron CNN, and relate them to the weight parameters. In this case, we also find a recipe to design templates performing these classification problems. The results obtained allow us to tackle the problem of realizing Boolean functions using CNNs, and show us some limits of CNNs in trying to reproduce the head of a universal Turing machine.

Based on a particular limit-cycle example extracted from Chua's book, we start this study with antisymmetric connections between cells. The results obtained can be generalized to CNNs with opposite-sign parameters. We have seen in the stability study that limit cycles can exist for this parameter range. The periodic behavior of these curves is computed in a particular case. The limit-cycle period can be expressed as a function of the CNN parameters, and can be used to generate clock signals.

Finally, we compare the CNN dynamic behavior under different output functions, the hyperbolic tangent and the piecewise linear function. In the literature, the hyperbolic tangent is often used instead of the piecewise linear function because it is differentiable everywhere. Nevertheless, in some regions of the parameter space the two systems exhibit a different number of equilibrium points. Hence, for theoretical results, the hyperbolic tangent should not be used in place of the piecewise linear function.
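The limit-cycle behavior discussed above can be reproduced numerically with the standard two-cell CNN state equation and a template with opposite-sign off-diagonal parameters (the template values below are assumed for illustration, in the spirit of the example from Chua's book): the central equilibrium is an unstable focus, and the piecewise linear saturation traps trajectories on a closed orbit.

```python
import numpy as np

def pwl(x):
    # Standard CNN output function: f(x) = 0.5 * (|x + 1| - |x - 1|)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate(A, x0, dt=0.001, steps=20000):
    # Forward-Euler integration of  x' = -x + A f(x)  (no inputs, zero bias).
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, 2))
    for i in range(steps):
        x = x + dt * (-x + A @ pwl(x))
        traj[i] = x
    return traj

# Opposite-sign off-diagonal weights: in the central linear region the origin is
# an unstable spiral, so the saturated dynamics settle onto a limit cycle.
A = np.array([[2.0, 2.0], [-2.0, 2.0]])
traj = simulate(A, [0.1, 0.1])
tail = traj[10000:]  # discard the transient
```

On the tail of the trajectory the state stays bounded and keeps crossing zero, the signature of a sustained oscillation whose period could serve as a clock signal.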
APA, Harvard, Vancouver, ISO, and other styles
24

Bordignon, Fernando Luis. "Aprendizado extremo para redes neurais fuzzy baseadas em uninormas." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259061.

Full text
Abstract:
Advisor: Fernando Antônio Campos Gomide
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-22T00:50:20Z (GMT). No. of bitstreams: 1 Bordignon_FernandoLuis_M.pdf: 1666872 bytes, checksum: 4d838dfb4ec418698d9ecd3b74e7c981 (MD5) Previous issue date: 2013
Abstract: Evolving systems are highly adaptive systems able to simultaneously modify their structures and parameters from a stream of data, online. Learning from data streams is a contemporary and challenging issue due to the increasing rate, size and temporal availability of data, which leaves traditional learning methods of limited use. This dissertation, in addition to reviewing the literature on evolving systems and neuro-fuzzy networks, addresses a structure and introduces an evolving learning approach to train uninorm-based hybrid neural networks using extreme learning concepts. Uninorm-based neurons, rooted in triangular norms and conorms, generalize fuzzy neurons. Uninorms bring flexibility and generality to fuzzy neuron models, as they can behave like triangular norms, triangular conorms, or in between by adjusting identity elements. This feature adds a form of plasticity to neural network modeling. An incremental clustering method is used to granulate the input space, and a scheme based on extreme learning is developed to train the neural network. It is proved that a static version of the uninorm-based neuro-fuzzy network approximates continuous functions on compact domains, i.e., it is a universal approximator. It is postulated, and computational experiments endorse, that the evolving neuro-fuzzy network shares equivalent or better approximation capability in dynamic environments than its static counterpart.
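The role of the identity element in a uninorm can be sketched with a standard construction (product t-norm below the identity g, probabilistic sum above it, minimum elsewhere; the dissertation's exact operators may differ):

```python
def t_norm(a, b):
    return a * b                # product t-norm (AND-like)

def t_conorm(a, b):
    return a + b - a * b        # probabilistic sum (OR-like)

def uninorm(x, y, g):
    # Identity element g steers the behavior: g = 1 gives a pure t-norm,
    # g = 0 a pure t-conorm, and intermediate g blends both regimes.
    if g == 0.0:
        return t_conorm(x, y)
    if g == 1.0:
        return t_norm(x, y)
    if x <= g and y <= g:
        return g * t_norm(x / g, y / g)
    if x >= g and y >= g:
        return g + (1.0 - g) * t_conorm((x - g) / (1.0 - g), (y - g) / (1.0 - g))
    return min(x, y)
```

Tuning g between 0 and 1 is the "plasticity" the abstract refers to: the same neuron can act AND-like, OR-like, or as a compensatory mix, while g itself acts as the identity (uninorm(g, y, g) = y).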
Master's program
Computer Engineering
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
25

Rüppell, Maximilian Alexander [Verfasser], and Ulrich [Akademischer Betreuer] Egert. "Single neuron dynamics and interaction in neuronal networks during synchronized spontaneous activity." Freiburg : Universität, 2019. http://d-nb.info/1237617685/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Grünler, Daniel, and Saman Rassam. "The effects of connection density on neuronal synchrony in a simulated neuron network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280348.

Full text
Abstract:
As one of our most complex and least understood organs, the brain constitutes a major area of research, and our understanding of its inner workings is still at an early stage. Research into neural activity can provide a better understanding of the brain by looking at patterns of activity within it. One such pattern is neuronal synchrony, which has been shown to be significant for cognitive performance. We first gathered spike-time data by simulating a neuronal network with varying degrees of connection density. We then analyzed the spike data using the ISI-distance measure in order to quantify the level of neuronal synchrony in the network at the different degrees of connection density. As the final step, we calculated the Pearson correlation coefficient to get a measure of the correlation between connection density and neuronal synchrony. The results indicated that connection density is strongly negatively correlated with neuronal synchrony; however, due to limiting factors, the result cannot be generalized beyond the specific circumstances of this experiment.
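The ISI-distance measure used in the analysis can be sketched as follows (a simplified reading of the Kreuz et al. definition; the thesis implementation may differ): at each sample time, compare the interspike interval each train is currently in, and average the normalized mismatch over time.

```python
import numpy as np

def current_isi(spike_times, t):
    # Length of the interspike interval that contains time t (nan outside the train).
    s = np.asarray(spike_times)
    i = np.searchsorted(s, t, side="right")
    if i == 0 or i == len(s):
        return float("nan")
    return float(s[i] - s[i - 1])

def isi_distance(train_a, train_b, sample_times):
    # Time-averaged normalized ISI mismatch: 0 for identical trains, larger for
    # increasingly dissimilar firing patterns.
    vals = []
    for t in sample_times:
        xa, xb = current_isi(train_a, t), current_isi(train_b, t)
        if np.isnan(xa) or np.isnan(xb):
            continue
        vals.append(abs(xa / xb - 1.0) if xa <= xb else abs(xb / xa - 1.0))
    return float(np.mean(vals))

regular = np.arange(0.0, 1.01, 0.1)                              # regular train
bursty = np.array([0.0, 0.02, 0.04, 0.06, 0.5, 0.52, 0.54, 1.0])  # bursty train
grid = np.linspace(0.05, 0.95, 50)

d_same = isi_distance(regular, regular, grid)
d_diff = isi_distance(regular, bursty, grid)
```

Identical trains give a distance of exactly zero, while a bursty train compared against a regular one gives a clearly positive value; in the thesis this quantity serves as the synchrony measure correlated against connection density.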
APA, Harvard, Vancouver, ISO, and other styles
27

Teller, Amado Sara. "Functional organization and network resilience in self-organizing clustered neuronal cultures." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/396114.

Full text
Abstract:
Major dynamical traits of a neuronal network are shaped by its underlying circuitry. In several neurological disorders, the deterioration of brain's functionality and cognition has been ascribed to changes in the topological properties of the brain's circuits. To deepen in the understanding of the activity-connectivity relationship, neuronal cultures have emerged as remarkable systems given their accessibility and easy manipulation. A particularly appealing configuration of these in vitro systems consists in an assembly of interconnected aggregates of neurons termed 'clustered neuronal networks'. These networks exhibit a complex dynamics in which clusters fire in small groups, shaping communities with rich spatiotemporal properties. The detailed characterization of this dynamics, as well as its resilience to perturbations, has been the main objective of this thesis. In our experiments we monitored spontaneous activity using calcium fluorescence imaging, which allows the detection of neuronal firing events with both high temporal and spatial resolution. The detailed analysis of the recorded activity, in the context of network theory and community analysis, allowed for the quantification of important properties, including the effective connectivity map and its major topological descriptors. As major results, we observed that these clustered networks present hierarchical modularity, assortative mixing and the presence of a rich club core, a series of features that have also been observed at the scale of the brain. All these characteristic topological traits are associated with a robust architecture that reinforces and stabilizes network activity. To verify the existence of such robustness in our cultures, we studied their resilience upon chemical and physical damage. We concluded that, indeed, clustered networks present higher resilience compared to other configurations. 
Moreover, these clustered networks exhibited recovery mechanisms that can be linked to the balance between integration and segregation in the network, which ultimately tends to preserve network activity upon damage. Thus, these in vitro preparations offer a unique scenario to explore vulnerability in networks with topological properties similar to the brain's. The combination of all these approaches can also help to develop models that quantify damage upon network degradation, with promising applications for the in vitro study of neurological disorders.
Desvelar la relación entre la red de conexiones anatómica y su emergente dinámica es uno de los grandes desafíos de la neurociencia actual. En este sentido, los cultivos neuronales han tomado un papel muy importante para entender esta cuestión, ya que fenomenologías fundamentales pueden ser estudiadas a escalas más tratables. Los cultivos neuronales se obtienen típicamente a base de disociar tejido neuronal de una parte específica del cerebro, corteza cerebral de rata en nuestro caso, y su cultivo en un medio adecuado. Neuronas en cultivo constituyen en 1-2 semanas una red nueva con una actividad espontánea rica. Una de las preparaciones in vitro que ofrece mayor potencial es las 'redes clusterizadas'. Estas redes se auto-organizan de forma natural, formando grupos de neuronas (clústeres) interconectados a través de axones. La caracterización de la dinámica de estas redes clusterizadas, así como su sensibilidad a perturbaciones, ha sido el objetivo principal de esta tesis. Así, hemos caracterizado la red funcional del cultivo a partir de su dinámica espontánea, desarrollando para ello un novedoso modelo fisicomatemático. Hemos observado que las redes tienen una conectividad modular, donde clústeres tienden a conectarse fuertemente en pequeños grupos, los cuales a su vez se conectan entre ellos. Además, las redes funcionales muestran propiedades topológicas clave, en especial asortatividad (interconexión preferente de clústeres con número similar de conexiones) y la existencia de un 'rich club' (grupo de clústeres con una interconectividad tan destacada que forman el núcleo fundamental de la red). Estas propiedades confieren una gran robustez y flexibilidad a la red. Por esta razón, en la tesis hemos investigado diferentes perturbaciones físicas y bioquímicas, demostrando que las redes clusterizadas son mucho más resistentes a daño que otras configuraciones, lo que refuerza la relación entre las propiedades topológicas descritas y resistencia al daño. 
Además, observamos que las redes presentaron diferentes mecanismos de reforzamiento entre conexiones para preservar la actividad de la red. Por ello, las redes clusterizadas constituyen una plataforma ideal para estudiar resistencia en redes o como sistema modelo aplicado a estudios de enfermedades neurodegenerativas, como por ejemplo Alzheimer.
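The rich-club core mentioned in this abstract has a standard quantitative definition, the rich-club coefficient phi(k): the edge density among nodes of degree greater than k. A minimal sketch of that descriptor (the dict-of-sets adjacency representation and the toy graph are illustrative, not taken from the thesis):

```python
def rich_club(adj, k):
    """Rich-club coefficient phi(k): edge density of the subgraph induced
    by nodes of degree > k. `adj` is an undirected graph given as a dict
    mapping each node to the set of its neighbours."""
    rich = [n for n, nbrs in adj.items() if len(nbrs) > k]
    if len(rich) < 2:
        return 0.0
    # each edge inside the rich club is seen twice in this double loop
    links = sum(1 for i in rich for j in rich if j in adj[i]) / 2
    return 2.0 * links / (len(rich) * (len(rich) - 1))

# A hub (node 0) in a triangle plus one pendant node: the three nodes
# of degree > 1 are fully interconnected, so phi(1) = 1.
g = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(rich_club(g, 1))  # → 1.0
```

In empirical work the coefficient is usually normalized against degree-preserving random graphs before claiming a rich club; the raw density above is only the starting point.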
APA, Harvard, Vancouver, ISO, and other styles
28

Carvalho, Milena Menezes. "Structural, functional and dynamical properties of a lognormal network of bursting neurons." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/76/76131/tde-25052017-110738/.

Full text
Abstract:
In hippocampal CA1 and CA3 regions, various properties of neuronal activity follow skewed, lognormal-like distributions, including average firing rates, rate and magnitude of spike bursts, magnitude of population synchrony, and correlations between pre- and postsynaptic spikes. In recent studies, the lognormal features of hippocampal activities were well replicated by a multi-timescale adaptive threshold (MAT) neuron network with lognormally distributed excitatory-to-excitatory synaptic weights, though it remains unknown whether and how other neuronal and network properties can be replicated in this model. Here we implement two additional studies of the same network: first, we further analyze its burstiness properties by identifying and clustering neurons with exceptionally bursty features, once again demonstrating the importance of the lognormal synaptic weight distribution. Second, we characterize dynamical patterns of activity termed neuronal avalanches in in vivo CA3 recordings of behaving rats and in the model network, revealing the similarities and differences between experimental and model avalanche size distributions across the sleep-wake cycle. These results compare the MAT neuron network with hippocampal recordings from a different angle than previous work, providing more insight into the mechanisms behind activity in hippocampal subregions.
Nas regiões CA1 e CA3 do hipocampo, várias propriedades da atividade neuronal seguem distribuições assimétricas com características lognormais, incluindo frequência de disparo média, frequência e magnitude de rajadas de disparo (bursts), magnitude da sincronia populacional e correlações entre disparos pré- e pós-sinápticos. Em estudos recentes, as características lognormais das atividades hipocampais foram bem reproduzidas por uma rede de neurônios de limiar adaptativo (multi-timescale adaptive threshold, MAT) com pesos sinápticos entre neurônios excitatórios seguindo uma distribuição lognormal, embora ainda não se saiba se e como outras propriedades neuronais e da rede podem ser replicadas nesse modelo. Nesse trabalho implementamos dois estudos adicionais da mesma rede: primeiramente, analisamos mais a fundo as propriedades dos bursts identificando e agrupando neurônios com capacidade de burst excepcional, mostrando mais uma vez a importância da distribuição lognormal de pesos sinápticos. Em seguida, caracterizamos padrões dinâmicos de atividade chamados avalanches neuronais no modelo e em aquisições in vivo do CA3 de roedores em atividades comportamentais, revelando as semelhanças e diferenças entre as distribuições de tamanho de avalanche através do ciclo sono-vigília. Esses resultados mostram a comparação entre a rede de neurônios MAT e medições hipocampais em uma abordagem diferente da apresentada anteriormente, fornecendo mais percepção acerca dos mecanismos por trás da atividade em subregiões hipocampais.
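Neuronal avalanches, as characterized in this abstract, are conventionally defined on binned population spike counts: an avalanche is a maximal run of non-empty bins, and its size is the total spike count in that run. A small sketch of that operational definition (the example data are illustrative):

```python
def avalanche_sizes(counts):
    """Split a binned population spike-count series into neuronal avalanches:
    maximal runs of non-empty bins, each sized by its total spike count."""
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c        # extend the ongoing avalanche
        elif current:
            sizes.append(current)  # an empty bin ends the avalanche
            current = 0
    if current:                 # flush an avalanche that runs to the end
        sizes.append(current)
    return sizes

print(avalanche_sizes([0, 2, 3, 0, 0, 1, 0, 4]))  # → [5, 1, 4]
```

The avalanche size distributions compared across the sleep-wake cycle in the thesis are histograms of exactly these sizes.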
APA, Harvard, Vancouver, ISO, and other styles
29

McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Full text
Abstract:
A useful compiler has been designed that takes a high level neural network specification and constructs a low level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized adaptrode model, the wiring specification algorithms could also be used to specify the connections in other types of networks.
APA, Harvard, Vancouver, ISO, and other styles
30

Donachy, Shaun. "Spiking Neural Networks: Neuron Models, Plasticity, and Graph Applications." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3984.

Full text
Abstract:
Networks of spiking neurons can be used not only for brain modeling but also to solve graph problems. With the use of a computationally efficient Izhikevich neuron model combined with plasticity rules, the networks possess self-organizing characteristics. Two different time-based synaptic plasticity rules are used to adjust weights among nodes in a graph, resulting in solutions to graph problems such as finding the shortest path and clustering.
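The computationally efficient Izhikevich model referred to here reduces spiking dynamics to two coupled equations with a reset rule. A minimal sketch with regular-spiking parameters (the constant input current and simple Euler integration are illustrative choices, not taken from the thesis):

```python
def izhikevich(I=10.0, T=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron (regular-spiking parameters) driven by a constant
    current I for T milliseconds; returns spike times in ms.
    Dynamics: v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u),
    with reset v <- c, u <- u + d when v reaches 30 mV."""
    v, u = c, b * c          # start from the resting state
    spikes = []
    for t in range(T):
        # two 0.5 ms Euler half-steps for v improve numerical stability
        for _ in range(2):
            v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:        # spike: record time, then reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
# a constant suprathreshold current makes this parameter set fire tonically
```

In a graph application like the one described above, one such unit would sit at every node, with the synaptic weights between units shaped by the plasticity rules.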
APA, Harvard, Vancouver, ISO, and other styles
31

Reis, Elohim Fonseca dos 1984. "Criticality in neural networks = Criticalidade em redes neurais." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276917.

Full text
Abstract:
Orientadores: José Antônio Brum, Marcus Aloizio Martinez de Aguiar
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Resumo: Este trabalho é dividido em duas partes. Na primeira parte, uma rede de correlação é construída baseada em um modelo de Ising em diferentes temperaturas, crítica, subcrítica e supercrítica, usando um algorítimo de Metropolis Monte-Carlo com dinâmica de \textit{single-spin-flip}. Este modelo teórico é comparado com uma rede do cérebro construída a partir de correlações das séries temporais do sinal BOLD de fMRI de regiões do cérebro. Medidas de rede, como coeficiente de aglomeração, mínimo caminho médio e distribuição de grau são analisadas. As mesmas medidas de rede são calculadas para a rede obtida pelas correlações das séries temporais dos spins no modelo de Ising. Os resultados da rede cerebral são melhor explicados pelo modelo teórico na temperatura crítica, sugerindo aspectos de criticalidade na dinâmica cerebral. Na segunda parte, é estudada a dinâmica temporal da atividade de um população neural, ou seja, a atividade de células ganglionares da retina gravadas em uma matriz de multi-eletrodos. Vários estudos têm focado em descrever a atividade de redes neurais usando modelos de Ising com desordem, não dando atenção à estrutura dinâmica. Tratando o tempo como uma dimensão extra do sistema, a dinâmica temporal da atividade da população neural é modelada. O princípio de máxima entropia é usado para construir um modelo de Ising com interação entre pares das atividades de diferentes neurônios em tempos diferentes. O ajuste do modelo é feito com uma combinação de amostragem de Monte-Carlo e método do gradiente descendente. O sistema é caracterizado pelos parâmetros aprendidos, questões como balanço detalhado e reversibilidade temporal são analisadas e variáveis termodinâmicas, como o calor específico, podem ser calculadas para estudar aspectos de criticalidade
Abstract: This work is divided into two parts. In the first part, a correlation network is built based on an Ising model at different temperatures (critical, subcritical and supercritical), using a Metropolis Monte Carlo algorithm with single-spin-flip dynamics. This theoretical model is compared with a brain network built from the correlations of BOLD fMRI time series of brain region activity. Network measures, such as the clustering coefficient, average shortest path length and degree distribution, are analysed. The same network measures are calculated for the network obtained from the time-series correlations of the spins in the Ising model. The results from the brain network are best explained by the theoretical model at the critical temperature, suggesting critical aspects in brain dynamics. In the second part, the temporal dynamics of the activity of a neuron population, namely retinal ganglion cells recorded in a multi-electrode array, was studied. Many studies have focused on describing the activity of neural networks using disordered Ising models, without regard to the dynamical structure. Treating time as an extra dimension of the system, the temporal dynamics of the activity of the neuron population is modeled. The maximum entropy principle is used to build an Ising model with pairwise interactions between the activities of different neurons at different times. Model fitting is performed by a combination of Metropolis Monte Carlo sampling and gradient descent. The system is characterized by the learned parameters; questions such as detailed balance and time reversibility are analysed, and thermodynamic variables, such as the specific heat, can be calculated to study critical aspects.
Mestrado
Física
Mestre em Física
2013/25361-6
FAPESP
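The Metropolis algorithm with single-spin-flip dynamics used in the first part of this work has a compact standard form. A sketch on a small periodic 2D lattice (the lattice size, temperature near the 2D critical point, and step count are illustrative):

```python
import math
import random

def metropolis_ising(L=16, T=2.27, steps=50000, seed=0):
    """Single-spin-flip Metropolis dynamics on an L x L Ising lattice with
    periodic boundaries (J = 1, k_B = 1); T = 2.27 is near criticality."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        # energy cost of flipping spin (i, j): dE = 2 * s_ij * (sum of neighbours)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb
        # Metropolis rule: accept downhill moves always, uphill with exp(-dE/T)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return s

lattice = metropolis_ising(L=8, steps=20000)
m = abs(sum(sum(row) for row in lattice)) / 8 ** 2  # |magnetisation| per spin
```

In the thesis's setting, spin time series recorded during such dynamics are correlated pairwise to build the theoretical correlation network that is compared against the fMRI-derived one.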
APA, Harvard, Vancouver, ISO, and other styles
32

SUSI, GIANLUCA. "Asynchronous spiking neural networks: paradigma generale e applicazioni." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2012. http://hdl.handle.net/2108/80567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Diesmann, Markus. "Conditions for stable propagation of synchronous spiking in cortical neural networks single neuron dynamics and network properties /." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=968772781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Merege, Fernando. "Identificação de padrões de criminosos seriais usando inteligência artificial associada a neurônios espelhos." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-21052015-164058/.

Full text
Abstract:
Os criminosos seriais que atuam no cometimento do crime de furto possuem modos de operação (Modus operandi) distintos e, que podem ser identificados através da análise dos exames periciais utilizando-se redes neurais. No sistema proposto, identificado um determinado modo de operação, um analista forense utilizando as informações adicionais coletadas e as hipóteses geradas pelos peritos de campo tem a competência de definir conjuntos de ações periciais complementares, que serão adicionados aos registros do modo identificado. Durante um novo exame pericial, em tempo real, a sub-rotina auxiliar analisa os blocos de dados enviados pelos peritos criminais de campo e, em caso de similaridade com um modo de operação anteriormente identificado, envia a eles um conjunto de ações complementares que, a critério do responsável em campo, pode ou não ser usado para alterar o procedimento de campo escolhido. Neste trabalho definimos Neurônios Espelho como sendo a associação das redes neurais para a identificação de padrões com a planilha de trabalho, utilizada pelo analista forense para a definição de ações complementares, com a sub-rotina auxiliar que verifica os blocos de informação recebidos e, que pode identificar partes de um modo de operação, remetendo para os peritos de campo um conjunto de ações complementares. Esta definição deve-se a descoberta pela neurobiologia de um tipo especifico de neurônio que tem a capacidade de disparar ao receber um input sensorial ativando uma área de memória que, em consequência, pode ativar outras áreas de memória ou enviar um comando motor. Neste trabalho foram desenvolvidos os programas de rede neural utilizados para a identificação dos modos de operação parcial e o final, além, das planilhas de trabalho para a elaboração das ações complementares e a sub-rotina auxiliar para identificação em tempo real dos modos de operação parciais. 
O treinamento da rede foi efetuado com 98 ocorrências e na verificação de validade foram utilizados 10 ocorrências.
The serial criminals who commit the crime of theft have distinct modes of operation (modus operandi), which can be identified through the analysis of forensic examinations using neural networks. In the proposed system, once a particular mode of operation has been identified, a forensic analyst, using the additional information collected and the hypotheses generated by the field experts, defines sets of complementary forensic actions, which are added to the records of the identified mode. During a new forensic examination, in real time, the auxiliary subroutine examines the data blocks sent by the forensic experts in the field and, if they match a previously identified mode of operation, sends the experts a set of complementary actions that, at the discretion of the expert in charge in the field, may or may not be used to alter the chosen field procedure. In this work we define Mirror Neurons as the association of the neural networks used for pattern identification with the worksheet used by the forensic analyst to define complementary actions, together with the auxiliary subroutine that checks the blocks of information received and can identify parts of a mode of operation, sending the field experts a set of complementary actions. This definition stems from the discovery in neurobiology of a specific type of neuron that fires upon receiving a sensory input, activating an area of memory that, in consequence, can activate other areas of memory or send a motor command. In this work, the neural network programs used to identify the partial and final modes of operation were developed, along with the worksheets for elaborating the complementary actions and the auxiliary subroutine for real-time identification of the partial modes of operation. Network training was performed with 98 occurrences, and 10 occurrences were used for the validity check.
APA, Harvard, Vancouver, ISO, and other styles
35

Gómez, Orlandi Javier. "Noise, coherent activity and network structure in neuronal cultures." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/346925.

Full text
Abstract:
In this thesis we apply a multidisciplinary approach, based on statistical physics and complex systems, to the study of neuronal dynamics. We focus on understanding, using theoretical and computational tools, how collective neuronal activity emerges in a controlled system, a neuronal culture. We show how the interplay between noise and network structure defines the emergent collective behavior of the system. Using theory and simulation, we build a framework that carefully describes spontaneous activity in neuronal cultures by taking into account their underlying network structure and using an accurate, yet simple, model for the individual neuronal dynamics. We show that the collective behavior of young cultures is dominated by the nucleation and propagation of activity fronts (bursts) throughout the system. These bursts nucleate at specific sites of the culture, called nucleation points, resulting in a highly heterogeneous probability distribution of nucleation. We are able to explain the nucleation mechanism theoretically as a mechanism of noise propagation and amplification called noise focusing. We also explore the internal structure of activity avalanches by using well-defined regular networks, in which all the neurons have the same connectivity rules (motifs). Within these networks, we are able to associate an effective velocity and a topological size with the avalanches and relate them to specific motifs. We also devise a continuum description of a neuronal culture at the mesoscale, i.e., we move away from single-neuron dynamics to a coarse-grained description that captures most of the characteristic observables presented in previous chapters. This thesis also studies the spontaneous activity of neuronal cultures within the framework of quorum percolation.
We study the effect of network structure within quorum percolation and propose a new model, called stochastic quorum percolation, that includes dynamics and the effect of internal noise. Finally, we use tools from information theory, namely transfer entropy, to show how to reliably infer the connectivity of a neuronal network from its activity, and how to distinguish between different excitatory and inhibitory connections purely from the activity, with no prior knowledge of the different neuronal types. The technique works directly on the fluorescence traces obtained in calcium imaging experiments, without the need to infer the underlying spike trains.
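Transfer entropy, the information-theoretic tool mentioned at the end of this abstract, measures how much the past of one series improves prediction of another beyond the target's own past. A toy sketch for binary time series with history length 1 (the synthetic driver-follower data are illustrative; the thesis works directly on fluorescence traces):

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for binary series, history length 1:
    sum over (y_next, y_prev, x_prev) of
    p(joint) * log2[ p(y_next | y_prev, x_prev) / p(y_next | y_prev) ]."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# Toy check: X drives Y with a one-step lag, so TE(X -> Y) should dominate.
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]  # y copies x with lag 1
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # → True
```

Practical applications, including the connectivity inference described above, require longer histories, binning of continuous signals, and significance testing against surrogates.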
APA, Harvard, Vancouver, ISO, and other styles
36

Tuffy, Fergal. "Inter-neuron interconnect strategies for hardware implementations of neural networks." Thesis, University of Ulster, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Smetana, Bedřich. "Algebraizace a parametrizace přechodových relací mezi strukturovanými objekty s aplikacemi v oblasti neuronových sítí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-433543.

Full text
Abstract:
The dissertation investigates the modeling of neural network activity, with a focus on the multilayer feed-forward neural network (MLP, multilayer perceptron). In this widely used neural network structure, time-varying neurons are used, with an analogy in the modeling of hyperstructures of linear differential operators. Using a finite lemma and a defined hyperoperation, a hyperstructure composed of neurons is defined for a given transfer function. Their properties are examined, with an emphasis on structures with a layout.
APA, Harvard, Vancouver, ISO, and other styles
38

Corrêa, Leonardo Garcia. "Memória associativa em redes neurais realimentadas." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-06122004-115632/.

Full text
Abstract:
Nessa dissertação, é investigado o armazenamento e a recuperação de padrões de forma biologicamente inspirada no cérebro. Os modelos estudados consistiram de redes neurais realimentadas, que tentam modelar certos aspectos dinâmicos do funcionamento do cérebro. Em particular, atenção especial foi dada às Redes Neurais Celulares, que constituem uma versão localmente acoplada do já clássico modelo de Hopfield. Além da análise de estabilidade das redes consideradas, foi realizado um teste com o intuito de avaliar o desempenho de diversos métodos de construção de memórias endereçáveis por conteúdo (memórias associativas) em Redes Neurais Celulares.
In this dissertation we investigate biologically inspired models of pattern storage and retrieval by means of feedback neural networks. These networks try to model some of the dynamical aspects of brain functioning. The study concentrated on Cellular Neural Networks, a locally coupled version of the classical Hopfield model. The research comprised stability analysis of the referred networks, as well as performance tests of various methods for content-addressable (associative) memory design in Cellular Neural Networks.
APA, Harvard, Vancouver, ISO, and other styles
39

Bendinskienė, Janina. "Duomenų dimensijos mažinimas naudojant autoasociatyvinius neuroninius tinklus." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120731_132413-38444.

Full text
Abstract:
Šiame magistro darbe apžvelgiami daugiamačių duomenų dimensijos mažinimo (vizualizavimo) metodai, tarp kurių nagrinėjami dirbtiniai neuroniniai tinklai. Pateikiamos pagrindinės dirbtinių neuroninių tinklų sąvokos (biologinis neuronas ir dirbtinio neurono modelis, mokymo strategijos, daugiasluoksnis neuronas ir pan.). Nagrinėjami autoasociatyviniai neuroniniai tinklai. Darbo tikslas – išnagrinėti autoasociatyviųjų neuroninių tinklų taikymo daugiamačių duomenų dimensijos mažinimui ir vizualizavimui galimybes bei ištirti gaunamų rezultatų priklausomybę nuo skirtingų parametrų. Siekiant šio tikslo atlikti eksperimentai naudojant kelias daugiamačių duomenų aibes. Tyrimų metu nustatyti parametrai, įtakojantys autoasociatyvinio neuroninio tinklo veikimą. Be to, gauti rezultatai lyginti pagal dvi skirtingas tinklo daromas paklaidas – MDS ir autoasociatyvinę. MDS paklaida parodo, kaip gerai išlaikomi atstumai tarp analizuojamų taškų (vektorių) pereinant iš daugiamatės erdvės į mažesnės dimensijos erdvę. Autoasociatyvinio tinklo išėjimuose gautos reikšmės turi sutapti su įėjimo reikšmėmis, taigi autoasociatyvinė paklaida parodo, kaip gerai tai gaunama (vertinamas skirtumas tarp įėjimų ir išėjimų). Tirta, kaip paklaidas įtakoja šie autoasociatyvinio neuroninio tinklo parametrai: aktyvacijos funkcija, minimizuojama funkcija, mokymo funkcija, epochų skaičius, paslėptų neuronų skaičius ir dimensijos mažinimo skaičiaus pasirinkimas.
This thesis gives an overview of dimensionality reduction (visualization) techniques for multidimensional data, among them artificial neural networks. The main concepts of artificial neural networks are presented (the biological neuron and the artificial neuron model, training strategies, the multilayer perceptron, and so on), and autoassociative neural networks are analyzed. The aim of this work is to examine the application of autoassociative neural networks to dimensionality reduction and visualization of multidimensional data, and to explore how the results depend on different parameters. To this end, experiments were carried out on several multidimensional data sets, identifying the parameters that influence the behaviour of an autoassociative neural network. In addition, the results were compared using two different network errors: the MDS error and the autoassociative error. The MDS error shows how well the distances between the analyzed points (vectors) are preserved in the transition from the multidimensional space to a lower-dimensional space. The values obtained at the outputs of an autoassociative network should coincide with its input values, so the autoassociative error shows how well this is achieved (the difference between inputs and outputs). The study examined how these errors are influenced by the following parameters of the autoassociative neural network: the activation function, the minimized function, the training function, the number of epochs, the number of hidden neurons, and the choice of the reduced dimension.
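The two errors compared in this work can be stated compactly: the MDS error is a stress over pairwise distances, and the autoassociative error compares network inputs with outputs. A sketch of both measures (function names and the example points are illustrative, not from the thesis):

```python
from math import dist  # Euclidean distance (Python 3.8+)

def mds_error(high, low):
    """Normalised stress between pairwise distances in the original space
    and in the reduced space (0 means distances are perfectly preserved)."""
    num = den = 0.0
    n = len(high)
    for i in range(n):
        for j in range(i + 1, n):
            d_h, d_l = dist(high[i], high[j]), dist(low[i], low[j])
            num += (d_h - d_l) ** 2
            den += d_h ** 2
    return (num / den) ** 0.5

def autoassociative_error(inputs, outputs):
    """Mean squared input-output difference of an autoassociative network
    (0 for a network that reproduces its inputs exactly)."""
    return sum(dist(a, b) ** 2 for a, b in zip(inputs, outputs)) / len(inputs)

high = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
print(mds_error(high, [(0, 0), (1, 0), (0, 2)]))  # isometric projection → 0.0
```

Dropping the third coordinate here changes no pairwise distance, so the stress is zero; a distorting projection would give a positive value.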
APA, Harvard, Vancouver, ISO, and other styles
40

Filho, Edson Costa de Barros Carvalho. "Investigation of Boolean neural networks on a novel goal-seeking neuron." Thesis, University of Kent, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Vik, Lukas, and Fredrik Svensson. "Real-time stereoscopic object tracking on FPGA using neural networks." Thesis, Linköpings universitet, Institutionen för systemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110374.

Full text
Abstract:
Real-time tracking and object recognition is a large field with many possible applications. In this thesis we present a technical demo of a stereoscopic tracking system using artificial neural networks (ANNs), along with an overview of the entire system and its core functions. We have implemented a system capable of tracking an object in real time at 60 frames per second. Using stereo matching we can extract the object coordinates in each camera and calculate a distance estimate from the cameras to the object. The system is developed around the Xilinx ZC-706 evaluation board featuring a Zynq XC7Z045 SoC. Performance-critical functions are implemented in the FPGA fabric. A dual-core ARM processor, integrated on the chip, is used for support and communication with an external PC. The system runs at moderate clock speeds to decrease power consumption and provide headroom for higher resolutions. A toolbox has been developed for prototyping, and the aim has been to run the system with a one-push-button approach. The system can be taught to track any kind of object using an eight-bit 32 × 16 pixel pattern generated by the user. The system is controlled over Ethernet from a regular workstation PC, which makes it very user-friendly.
APA, Harvard, Vancouver, ISO, and other styles
42

Chauvet, Pierre. "Sur la stabilité d'un réseau de neurones hiérarchique à propos de la coordination du mouvement." Angers, 1993. http://www.theses.fr/1993ANGE0011.

Full text
Abstract:
In the first chapter, several neural networks capable of learning movements are presented. A model of the cerebellar cortex, which is deeply involved in movement coordination, is described in detail: it is a hierarchical network of linear neural networks, called Purkinje units, that respects the real connectivity. During learning, the synaptic weights are modified by a covariance rule. The study of this model led to the definition of new learning rules called variational learning rules. The objective of this thesis is to study their conditions of validity for nonlinear units and to derive from them an explanation of how movement coordination is learned. In the second chapter, a more general linear Purkinje unit is analyzed, and the notions of learning and recognition are developed in depth. It is shown that, during the learning phase, a linear unit converges and is stable in the sense of Lyapunov under certain conditions; under the same conditions, the variational rules introduced in the previous chapter are confirmed. In the first part of the third chapter, the neurons of the unit are assumed to be nonlinear, and its stability in the sense of Lyapunov is studied by linearization around an equilibrium point. In the second part, delays are introduced inside the unit between certain neurons, so that the unit possesses an internal dynamics; the conditions for the convergence of the unit's output are then determined. Finally, the variational rules are confirmed under certain conditions for this nonlinear unit. In the fourth chapter, the study of a network of Purkinje units is undertaken. After the study of a simple network, delays between units are introduced.
Stability conditions for networks of nonlinear units are determined, and numerical simulations verify that the variational rules are indeed followed. Finally, an example of muscular coordination learned by a network is given.
APA, Harvard, Vancouver, ISO, and other styles
43

Anisenia, Andrei. "Stochastic Search Genetic Algorithm Approximation of Input Signals in Native Neuronal Networks." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26220.

Full text
Abstract:
The present work investigates the applicability of Genetic Algorithms (GAs) to the problem of signal propagation in Native Neuronal Networks (NNNs). These networks are comprised of neurons, some of which receive input signals. The signals propagate through the network by transmission between neurons. The research focuses on the regeneration of the output signal of the network without knowing the original input signal. The computational complexity of the problem makes exact computation prohibitive, so we propose a heuristic approach, the Genetic Algorithm. Three algorithms are developed, based on the GA technique, and tested on two different networks with varying input signals. The results obtained from the testing indicate significantly better performance of the developed algorithms compared to the Uniform Random Search (URS) technique, which is used as a control group. The importance of the research lies in demonstrating the ability of GA-based algorithms to successfully solve the problem at hand.
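The GA technique described here follows the usual select-crossover-mutate loop. A bare-bones sketch over binary strings (the population size, mutation rate, and toy match-count fitness are illustrative, not the thesis's actual signal-approximation setup):

```python
import random

def genetic_search(fitness, length=16, pop_size=30, gens=100, seed=0):
    """Bare-bones elitist GA over binary strings: keep the best half each
    generation, refill with one-point crossover plus rare bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(length)
            child[i] ^= rng.random() < 0.1     # occasional bit flip
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# toy problem: recover a known bit pattern from match-count feedback only
target = [1, 0] * 8
fit = lambda s: sum(int(a == b) for a, b in zip(s, target))
best = genetic_search(fit)
```

In the thesis's setting, the candidate strings would encode hypothesized input signals, and the fitness would score how closely the network's simulated output matches the observed output.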
APA, Harvard, Vancouver, ISO, and other styles
44

Pardo-Figuerez, Maria M. "Designing neuronal networks with chemically modified substrates : an improved approach to conventional in vitro neural systems." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/27941.

Full text
Abstract:
Highly organised structures have been well-known to be part of the complex neuronal network presented in the nervous system, where thousands of neuronal connections are arranged to give rise to critical physiological functions. Conventional in vitro culture methods are useful to represent simplistic neuronal behaviour, however, the lack of such organisation results in random and uncontrolled neurite spreading, leading to a lack of cell directionality and in turn, resulting in inaccurate neuronal in vitro models. Neurons are highly specialised cells, known to be greatly dependent on interactions with their surroundings. Therefore, when surface material is modified, drastic changes in neuronal behaviour can be achieved. The use of chemically modified surfaces in vitro has opened new avenues in cell culture, where the chaotic environment found in conventional culture methods can be controlled by the combination of surface modification methods with surface engineering techniques. Polymer brushes and self-assembled monolayers (SAMs) display a wide range of advantages as a surface modification tool for cell culture applications, since their properties can be finely tuned to promote or inhibit cellular adhesion, differentiation and proliferation. Therefore, when precisely combined with patterning techniques, a control over neuronal behaviour can be achieved. Neuronal patterning presents a system with instructive cues that can be used to study neuron-neuron communication by directing single neurites in specific locations to initiate synapses. Furthermore, although this area has not been much explored, the use of these patterned brushes could also be used in co-culture systems as a platform to closely monitor cell heterotypical communication. This research demonstrates the behaviour of SH-SY5Y neurons on a variety of SAMs and polymer brushes, both in isolation and combination to promote cellular spatial control. 
APTES and BIBB coatings promoted the highest cell viability, proliferation, metabolic activity and neuronal maturation, whilst low cell adhesion was seen on PKSPMA and PMETAC surfaces. Thereafter, PKSPMA brushes were used as a cell-repulsive coating, and their combination with micro-patterning techniques (photolithography and soft lithography) produced a system of instructive cues for neuronal guidance, from which neuronal directionality was obtained. In the final chapter of this thesis, a chimeric co-culture system was developed in which the patterned SH-SY5Y cells were co-cultured with C2C12 myoblasts in an attempt to obtain an organised neuronal-muscle co-culture system. Whilst preliminary observations showed the first stages of a patterned neuronal-muscle co-culture, future work is necessary to refine and improve the patterned co-culture process.
APA, Harvard, Vancouver, ISO, and other styles
45

Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.

Full text
Abstract:
Computational intelligence techniques, such as artificial neural networks (ANNs), have been widely used to improve the performance of power system monitoring and control. Although inspired by the neurons in the brain, ANNs differ from living neuron networks (LNNs) in many aspects, and because of this oversimplification the huge computational potential of LNNs cannot be realized by ANNs. A more brain-like artificial neural network is therefore highly desired to bridge the gap between ANNs and LNNs. The focus of this research is to develop a biologically inspired artificial neural network (BIANN) that is not only biologically meaningful but also computationally powerful, and that can serve as a novel computational intelligence tool for the monitoring, modeling and control of power systems. A comprehensive survey of ANN applications in power systems is presented. It is shown that novel types of reservoir-computing-based ANNs, such as echo state networks (ESNs) and liquid state machines (LSMs), have stronger modeling capability than conventional ANNs. The feasibility of using ESNs as modeling and control tools is further investigated in two specific power system applications, namely, power system nonlinear load modeling for true load harmonic prediction and the closed-loop control of active filters for power quality assessment and enhancement. It is shown that in both applications, ESNs are capable of providing satisfactory performance with low computational requirements. A comprehensive survey of spiking models of living neurons, as well as of coding approaches, is presented to review the state of the art in BIANN research.
The proposed BIANNs are based on spiking models of living neurons combined with reservoir-computing approaches. It is shown that the proposed BIANNs have strong modeling capability and low computational requirements, making them excellent candidates for online monitoring and control applications in power systems. BIANN-based modeling and control techniques are also proposed for power system applications, and the proposed schemes are validated on the modeling and control of a generator in a single-machine infinite-bus system under various operating conditions and disturbances. It is shown that the proposed BIANN-based techniques provide better control of the power system, enhancing its reliability and tolerance to disturbances, and deliver faster and more accurate control for power system applications. The conclusions, recommendations for future research, and the major contributions of this research are presented at the end.
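The echo state networks surveyed in this dissertation keep a fixed random recurrent reservoir and train only a linear readout, which is what gives them low computational requirements. As an illustrative sketch only (toy dimensions and a sine-prediction task chosen for demonstration, not the dissertation's power-system models):

```python
import numpy as np

# Hypothetical toy dimensions, chosen only for illustration.
rng = np.random.default_rng(0)
n_in, n_res = 1, 50

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: echo state property

def run_reservoir(u):
    """Drive the fixed random reservoir with an input sequence u (T x n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# Only the linear readout is trained (ridge regression),
# here on one-step-ahead prediction of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t).reshape(-1, 1)
X = run_reservoir(u[:-1])
y = u[1:, 0]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)

mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)
```

Because the recurrent weights are never trained, fitting reduces to a single linear solve, which is the source of the speed advantage the abstract describes.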
APA, Harvard, Vancouver, ISO, and other styles
46

Duhr, Fanny. "Voies de signalisation associées au récepteur 5-HT6 et développement neuronal." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTT042/document.

Full text
Abstract:
Brain circuitry patterning is a complex, highly regulated process. Alteration of this process gives rise to various neurodevelopmental disorders, such as schizophrenia and Autism Spectrum Disorders (ASD), psychiatric conditions that share an impairment of cognitive functions. The serotonin 6 receptor (5-HT6 receptor), known for its involvement in neuronal migration, has been identified as a key therapeutic target for the treatment of the cognitive deficits observed in schizophrenia, but also in neurodegenerative pathologies such as Alzheimer's disease. However, the signalling mechanisms known to be activated by the 5-HT6 receptor do not fully explain its involvement in neurodevelopmental processes. My thesis work therefore aimed at characterizing the signalling pathways engaged by the 5-HT6 receptor during neural development. A proteomic screen showed that the 5-HT6 receptor interacts with several proteins that play crucial roles in neurodevelopment, such as the kinase Cdk5 and its target WAVE-1. I then demonstrated that, beyond its role in neuronal migration, the 5-HT6 receptor also controls neurite growth in an agonist-independent manner, through constitutive phosphorylation of its C-terminal domain at Ser350 by associated Cdk5, a process leading to increased activity of the RhoGTPase Cdc42. The second part of my work aimed at elucidating the role of the 5-HT6 receptor in dendritic spine morphogenesis and the involvement of WAVE-1 and Cdk5 in this process. These results provide new insights into the control of neurodevelopmental processes by the 5-HT6 receptor, which thus appears to be a key therapeutic target for neurodevelopmental disorders, contributing to the development of the cognitive circuitry related to the pathophysiology of ASD and schizophrenia.
APA, Harvard, Vancouver, ISO, and other styles
47

Arruda, Henrique Ferraz de. "Análise estrutural e dinâmica de redes biológicas." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-03082015-101106/.

Full text
Abstract:
Different types of neurons have distinct shapes, and an important factor in shape regulation is gene expression, which is also related to the connectivity between nerve cells that forms networks. Dynamics such as learning take place on these networks. In this work we developed a framework for modeling and simulating neurons that allows an integrated analysis of every step from gene expression, through the generation of the neurons, to the dynamics, enabling the study of the system as a whole as well as of the relationships between its parts. In the neuron generation step, different patterns of gene expression were used. Networks were created from the resulting neurons and characterized with centrality measures. The dynamic processes considered were the integrate-and-fire model, which simulates communication between neurons, and Hebbian development, which is applied to simulate learning. To quantify the influence of gene expression, Pearson correlation and mutual information were measured. These experiments showed that gene expression influences all steps and that, in every step except the generation of neuronal shape, the expression patterns by which the neurons were organised are also an important factor. In addition, analysis of the betweenness centrality measure revealed the formation of paths, termed betweenness paths. To characterise these paths, the neuronal networks were compared with other spatial networks, showing that such paths are a common feature of geographical networks and are related to the network's communities.
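The integrate-and-fire dynamics used above to simulate communication between neurons can be sketched with a leaky integrate-and-fire neuron; the parameter values below are hypothetical, not the ones used in the thesis:

```python
import numpy as np

def simulate_lif(i_ext, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dv/dt = (v_rest - v)/tau + i_ext(t),
    integrated with Euler steps; a spike and reset occur when v crosses v_thresh."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(i_ext):
        v += dt * ((v_rest - v) / tau + i_t)
        if v >= v_thresh:
            spike_times.append(step)
            v = v_reset
    return spike_times

# A constant supra-threshold current yields regular firing.
spikes = simulate_lif(np.full(200, 0.1))
print(spikes)
```

In a network setting, each neuron's input current would be the weighted sum of spikes arriving from its neighbours rather than a fixed external drive.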
APA, Harvard, Vancouver, ISO, and other styles
48

Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "insilico"." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14057/document.

Full text
Abstract:
This thesis work, carried out in a research group designing neuromimetic analogue integrated circuits based on the Hodgkin-Huxley model, concerns the modeling of biological neurons and, more precisely, the estimation of neuron model parameters. The first part of the manuscript bridges the gap between neuron modeling and optimization, focusing on the Hodgkin-Huxley model, for which a parameter extraction method associated with an electrophysiological measurement technique (the voltage clamp) already existed, but whose successive approximations made the precise determination of certain parameters impossible. In the second part, we propose an alternative estimation method, based on the differential evolution algorithm, that overcomes the limitations of the classical method and jointly estimates all parameters of a given ionic channel without the usual approximations. The third chapter is divided into three sections. In the first two, we apply the new technique to estimate the model's parameters from biological data, and then develop an automated protocol for tuning neuromimetic circuits, ionic channel by ionic channel. In the third section, we present a parameter estimation method based solely on recordings of the neuron's membrane voltage, data that are easier to acquire than ionic currents. The fourth and last chapter opens towards the use of small networks of around a hundred electronic neurons: we present a software study of the influence of a cell's intrinsic properties on the global behavior of the network in the context of gamma oscillations.
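The differential evolution algorithm at the core of the proposed estimation method can be illustrated on a toy channel-fitting problem: recovering the parameters of a Boltzmann-style steady-state activation curve from synthetic data. The curve form, parameter values, and bounds below are hypothetical choices for the sketch, not the thesis's actual fitting targets:

```python
import numpy as np

rng = np.random.default_rng(1)

def boltzmann(v, v_half, k):
    """Steady-state activation of a voltage-gated channel (Boltzmann form)."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Synthetic "recording" with known parameters to recover.
v_grid = np.linspace(-80.0, 40.0, 60)
target = boltzmann(v_grid, -25.0, 9.0)

def cost(p):
    return float(np.mean((boltzmann(v_grid, *p) - target) ** 2))

def differential_evolution(cost, bounds, pop_size=20, f=0.8, cr=0.9, n_gen=200):
    """DE/rand/1/bin: mutation by scaled vector differences, binomial
    crossover, and greedy one-to-one selection."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + f * (b - c), lo, hi)
            trial = np.where(rng.random(dim) < cr, mutant, pop[i])
            trial_fit = cost(trial)
            if trial_fit < fit[i]:  # keep the trial only if it improves
                pop[i], fit[i] = trial, trial_fit
    best = pop[np.argmin(fit)]
    return best, float(fit.min())

best, err = differential_evolution(cost, bounds=[(-60.0, 0.0), (1.0, 20.0)])
print(best, err)
```

Because all parameters are searched jointly against the recorded curve, no channel-specific approximations are needed, which is the advantage over the classical voltage-clamp extraction the text describes.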
APA, Harvard, Vancouver, ISO, and other styles
49

Pontes, Fabrício José [UNESP]. "Projeto otimizado de redes neurais artificiais para predição da rugosidade em processos de usinagem com a utilização da metodologia de projeto de experimentos." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/103054.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The present work offers contributions to the modeling of workpiece surface roughness in machining processes by means of artificial neural networks. It proposes a method for the design of Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks optimized for the prediction of average surface roughness (Ra). The method takes the form of an algorithm that combines two techniques from the Design of Experiments (DOE) methodology: full factorial designs and Evolutionary Operation (EVOP). The strategy adopted is the systematic use of DOE to search for network configurations that statistically benefit prediction performance. Cutting parameters of the machining processes are used as network inputs, and the mean absolute error in percentage (MAE%) of the lower decile of the predictions on the test set is used as the figure of merit for network performance. To validate the method, training cases were built from data sets retrieved from the literature and from turning experiments with free-machining steel. The proposed algorithm leads to a significant reduction in roughness prediction error for the machining operations studied when its performance is compared with regression models, with the results reported in the literature, and with the neural models proposed by a commercial software package for automatic optimization of network configurations. Networks designed according to the proposed method also display significantly reduced dispersion of prediction errors in these comparisons, and MLP networks achieved statistically superior results to those of the best RBF networks.
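A full factorial design, one of the two DOE techniques the algorithm combines, simply enumerates every combination of factor levels so each network configuration can be trained and compared. The factors and levels below are hypothetical examples, not the ones studied in the thesis:

```python
from itertools import product

# Hypothetical design factors for an MLP configuration search.
factors = {
    "hidden_neurons": [5, 15],
    "learning_rate": [0.01, 0.1],
    "activation": ["tanh", "logistic"],
}

def full_factorial(factors):
    """Enumerate every combination of factor levels (the full factorial design)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

runs = full_factorial(factors)
print(len(runs))  # each run is one network configuration to train and evaluate
```

Each run would then be trained and scored (here, by the MAE% of the lower decile of test-set predictions), and the factor effects analysed statistically before EVOP refines the search around the best region.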
APA, Harvard, Vancouver, ISO, and other styles
50

Timoszczuk, Antonio Pedro. "Reconhecimento automático do locutor com redes neurais pulsadas." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-26102004-195250/.

Full text
Abstract:
Pulsed (spiking) neural networks are currently the object of intense research. This work evaluates the potential of this neural paradigm for the automatic speaker recognition task. After a review of the topics needed to understand automatic speaker recognition and artificial neural networks, a spike response model of the neuron is implemented and tested. Based on this model, a novel network architecture with pulsed neurons is proposed for building an automatic speaker recognition system. The tests used the Speaker Recognition v1.0 database of the CSLU (Center for Spoken Language Understanding) at the Oregon Graduate Institute, U.S.A., containing sentences recorded over digital telephone lines. A multilayer perceptron was used as the classifier, and tests were performed in both text-dependent and text-independent modes. The viability of pulsed neural networks for automatic speaker recognition was confirmed, demonstrating that this neural paradigm is promising for handling the temporal information of the speech signal.
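The spike response model tested in this work expresses the membrane potential as a sum of kernels triggered by input spikes and by the neuron's own past spikes. A minimal sketch with hypothetical kernel shapes and parameter values (not the exact formulation used in the thesis):

```python
import numpy as np

def srm_potential(t, input_spikes, weight=1.0, tau_m=10.0, tau_s=2.0,
                  eta0=5.0, tau_r=8.0, own_spikes=()):
    """Spike response model: u(t) = sum_f eta(t - t_f) + sum_j w * eps(t - t_j)."""
    def eps(s):
        # post-synaptic potential kernel: rises then decays
        return (np.exp(-s / tau_m) - np.exp(-s / tau_s)) if s > 0 else 0.0
    def eta(s):
        # refractory kernel: hyperpolarising after-potential
        return -eta0 * np.exp(-s / tau_r) if s > 0 else 0.0
    u = sum(weight * eps(t - tj) for tj in input_spikes)
    u += sum(eta(t - tf) for tf in own_spikes)
    return u

# A presynaptic spike at t=0 raises the potential shortly afterwards;
# a recent output spike suppresses it (refractoriness).
print(srm_potential(5.0, input_spikes=[0.0]))
print(srm_potential(5.0, input_spikes=[0.0], own_spikes=[4.0]))
```

In a full model the neuron emits a spike whenever u(t) crosses a threshold, and the spike time is appended to `own_spikes`; it is this explicit dependence on spike timing that lets such networks exploit the temporal structure of the speech signal.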
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography