Dissertations / Theses on the topic 'NEURONS NEURAL NETWORK'
Listed below are the top 50 dissertations and theses on the topic 'Neurons neural network.'
Voysey, Matthew David. "Inexact analogue CMOS neurons for VLSI neural network design." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264387.
Lukashev, A. "Basics of artificial neural networks (ANNs)." Thesis, Київський національний університет технологій та дизайну, 2018. https://er.knutd.edu.ua/handle/123456789/11353.
Schmidt, Peter H. (Peter Harrison). "The transfer characteristic of neurons in a pulse-code neural network." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14594.
Brady, Patrick. "Internal representation and biological plausibility in an artificial neural network." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311273.
Hunter, Russell I. "Improving associative memory in a network of spiking neurons." Thesis, University of Stirling, 2011. http://hdl.handle.net/1893/6177.
D'Alton, S. "A Constructive Neural Network Incorporating Competitive Learning of Locally Tuned Hidden Neurons." Honours thesis, University of Tasmania, 2005. https://eprints.utas.edu.au/243/1/D%27Alton05CompetitivelyTrainedRAN.pdf.
Grehl, Stephanie. "Stimulation-specific effects of low intensity repetitive magnetic stimulation on cortical neurons and neural circuit repair in vitro (studying the impact of pulsed magnetic fields on neural tissue)." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066706/document.
Full textElectromagnetic fields are widely used to non-invasively stimulate the human brain in clinical treatment and research. This thesis investigates the effects of different low intensity (mT) repetitive magnetic stimulation (LI-rMS) parameters on single neurons and neural networks and describes key aspects of custom tailored LI-rMS delivery in vitro. Our results show stimulation specific effects of LI-rMS on cell survival, neuronal morphology, neural circuit repair and gene expression. We show novel mechanisms underlying cellular responses to stimulation below neuronal firing threshold, extending our understanding of the fundamental effects of LI-rMS on biological tissue which is essential to better tailor therapeutic applications
Gettner, Jonathan A. "Identifying and Predicting Rat Behavior Using Neural Networks." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1513.
Vissani, Matteo. "Multisensory features of peripersonal space representation: an analysis via neural network modelling." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Yao, Yong. "A neural network in the pond snail, Planorbis corneus : electrophysiology and morphology of pleural ganglion neurons and their input neurons /." [S.l.] : [s.n.], 1986. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.
Neocleous, Costantinos C. "A neural network architecture composed of adaptively defined dissimilar single-neurons : applications in engineering design." Thesis, Brunel University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364142.
Guo, Lilin. "A Biologically Plausible Supervised Learning Method for Spiking Neurons with Real-world Applications." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2982.
Ortman, Robert L. "Sensory input encoding and readout methods for in vitro living neuronal networks." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44856.
Piccinini, Nicola. "Interacting complex systems: theory and application to real-world situations." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1011847/.
Wang, Xueying. "Cumulative Single-cell Laser Ablation of Functionally or Genetically Defined Respiratory Neurons Interrogates Network Properties of Mammalian Breathing-related Neural Circuits in vitro." W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539623609.
Xu, Shuxiang. "Neuron-adaptive neural network models and applications." Thesis, University of Western Sydney, Faculty of Informatics, Science and Technology, 1999. http://handle.uws.edu.au:8081/1959.7/275.
Full textDoctor of Philosophy (PhD)
Weaver, Adam L. "The functional roles of the Lateral Pyloric and Ventricular Dilator neurons in the pyloric network of the lobster, Panulirus interruptus." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1010521587.
Kuebler, Eric Stephen. "Harnessing the Variability of Neuronal Activity: From Single Neurons to Networks." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37855.
Full textCottens, Pablo Eduardo Pereira de Araujo. "Development of an artificial neural network architecture using programmable logic." Universidade do Vale do Rio dos Sinos, 2016. http://www.repositorio.jesuita.org.br/handle/UNISINOS/5411.
Full textMade available in DSpace on 2016-06-29T14:42:16Z (GMT). No. of bitstreams: 1 Pablo Eduardo Pereira de Araujo Cottens_.pdf: 1315690 bytes, checksum: 78ac4ce471c2b51e826c7523a01711bd (MD5) Previous issue date: 2016-03-07
Nenhuma
Currently, modern Artificial Neural Networks (ANNs) may, depending on their complexity, require a workstation to process all their input data. This type of processing architecture requires either that the field device be located somewhere in the vicinity of a workstation, when real-time processing is required, or that the field device have the sole task of collecting data for future processing. This project creates a generic neuron architecture in programmable logic, where Artificial Neural Networks can exploit the parallel nature of FPGAs to execute applications quickly, albeit at a lower output resolution. This work shows that using programmable logic to implement low bit-resolution ANNs is not only viable but beneficial: the neural network, due to its parallel nature, profits greatly from the hardware implementation, giving fast and accurate results.
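The low bit-resolution arithmetic this abstract describes can be sketched in software as an integer-only multiply-accumulate, the kind of datapath that maps onto FPGA DSP blocks. This is a minimal illustration, not the thesis's design; the Q8 fixed-point format, the ReLU activation, and the function names are all assumptions:

```python
def to_fixed(v, frac_bits=8):
    """Quantize a real value to a Q-format integer with frac_bits fractional bits."""
    return int(round(v * (1 << frac_bits)))

def fixed_point_neuron(inputs, weights, bias, frac_bits=8):
    """Integer-only neuron: multiply-accumulate followed by a cheap activation.

    inputs, weights, bias are Q-format integers (see to_fixed).
    Returns a Q-format integer after a ReLU activation.
    """
    acc = bias << frac_bits          # align bias with the 2*frac_bits products
    for x, w in zip(inputs, weights):
        acc += x * w                 # each product carries 2*frac_bits fractional bits
    acc >>= frac_bits                # rescale the sum back to Q-format
    return max(0, acc)               # ReLU: a single comparator in hardware
```

Because every operation is an integer multiply, add, or shift, the whole neuron unrolls into parallel hardware with no floating-point units.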
Xu, Shuxiang. "Neuron-adaptive neural network models and applications." Thesis, [Campbelltown, N.S.W. : The Author], 1999. http://handle.uws.edu.au:8081/1959.7/275.
Xu, Shuxiang. "Neuron-adaptive neural network models and applications /." [Campbelltown, N.S.W. : The Author], 1999. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030702.085320/index.html.
Full textAndersson, Aron, and Shabnam Mirkhani. "Portfolio Performance Optimization Using Multivariate Time Series Volatilities Processed With Deep Layering LSTM Neurons and Markowitz." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273617.
The stock market is a non-linear market, yet many of the best-known portfolio optimization algorithms are based on linear models. In recent years, rapid advances in machine learning have produced flexible models that can extract information from complex patterns. In this thesis we propose two ways to optimize a portfolio: one in which a neural network is developed for multivariate time series, and one using the linear Markowitz model, where we also apply an exponential moving average to the price data. The inputs to our neural network are the daily closing prices, volumes, and market indicators such as the volatility index VIX. The outputs are the predicted prices for the next day, which are then processed further to produce metrics such as expected return, volatility, and the Sharpe ratio. The LSTM model produces a portfolio whose return and risk lie closer to real market conditions, but the result had a high error value, showing that our LSTM model is insufficient as a standalone prediction tool. That said, it predicted trends better than we expected it to. We therefore conclude that several neural networks should be used as indicators, each responsible for a specific aspect to be analyzed, with conclusions drawn from their combination. Our results also suggest that the input data should be considered more carefully, since the prediction accuracy.
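The post-processing step mentioned above, turning return series into expected return, volatility, and Sharpe ratio, can be sketched directly from the definitions. The 252-trading-day annualization factor is a market convention, and the function name is an assumption, not something taken from the thesis:

```python
import numpy as np

def portfolio_metrics(returns, weights, risk_free=0.0, periods=252):
    """Annualized expected return, volatility and Sharpe ratio of a portfolio.

    returns: (T, n) array of daily simple returns for n assets.
    weights: length-n portfolio weights summing to 1.
    """
    w = np.asarray(weights, dtype=float)
    mu = float(returns.mean(axis=0) @ w) * periods   # annualized expected return
    cov = np.cov(returns, rowvar=False) * periods    # annualized covariance matrix
    vol = float(np.sqrt(w @ cov @ w))                # portfolio volatility
    sharpe = (mu - risk_free) / vol                  # risk-adjusted return
    return mu, vol, sharpe
```

In a Markowitz-style workflow these three numbers are the objective and constraint quantities; a predictor such as an LSTM would feed its forecast returns into the same computation.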
Viñoles, Serra Mireia. "Dynamics of Two Neuron Cellular Neural Networks." Doctoral thesis, Universitat Ramon Llull, 2011. http://hdl.handle.net/10803/9154.
Cellular neural networks, or CNNs, are a type of dynamical system that links elements called neurons through parameter templates. The system is completely determined once the network inputs, the outputs, and the parameters (weights) are known. In this work we make an exhaustive study of this type of network in the simplest case, where only two neurons are involved. This is a very simple system that can nevertheless exhibit very rich dynamics.
In this dissertation we review the stability of the two-neuron cellular neural network using Lyapunov theory and the different local dynamic behaviors that derive from the use of the piecewise linear output function. We then study a geometrical way to understand the system dynamics. Lyapunov stability gives us the key to tackling the convergence problems that arise when the CNN system converges to a fixed point, while the geometric stability sheds light on the convergence to limit cycles. This work is organized around these two convergence classes.
We aim for an exhaustive study of cellular neural networks in order to find their intrinsic difficulties and possible uses. Understanding the CNN system in a lower dimension gives us some of the main keys to understanding the general case, which is why we focus our study on the one-dimensional CNN with only two neurons.
From the results obtained with the Lyapunov function, we propose methods to avoid the problem of dependence on initial conditions. Its intrinsic character as a quadratic form of the output values gives us the key to finding parameters for which the final outputs do not depend on the initial conditions. At this point we can study different CNN applications in the parameter range where the system converges to a fixed point. We start by using CNNs to reproduce Bernoulli probability distributions, based on the geometry of the Lyapunov function. Secondly, we reproduce linear functions while working inside the unit square.
The existence of the Lyapunov function allows us to construct a map, called the convergence map, depending on the CNN parameters, which relates the CNN inputs to the final outputs. This map gives us a recipe for designing templates that perform desired input-output associations. These results lead us to the template composition problem, where we study how different templates can be applied in sequence. From the results obtained in the template design problem, we look for a functional relation between the external inputs and the final outputs. Because the set of final states is discrete, thanks to the piecewise linear function, this correspondence can be thought of as a classification problem in which each class is defined by a different final state, which in turn depends on the CNN parameters.
Next, we study which classification problems can be solved by a two-neuron CNN and relate them to the weight parameters. Here, too, we find a recipe for designing templates that perform these classifications. These results allow us to tackle the problem of realizing Boolean functions with CNNs, and show some limits of CNNs in trying to reproduce the head of a universal Turing machine.
Based on a particular limit-cycle example taken from Chua's book, we begin this study with antisymmetric connections between cells. The results can be generalized to CNNs with opposite-sign parameters; the stability study showed that limit cycles can exist in this parameter range. The periodic behavior of these curves is computed in a particular case: the limit-cycle period can be expressed as a function of the CNN parameters, and can be used to generate clock signals.
Finally, we compare the CNN dynamic behavior under different output functions, the hyperbolic tangent and the piecewise linear function. The hyperbolic tangent is often used in the literature in place of the piecewise linear function because it is differentiable over the whole plane. Nevertheless, in some regions of parameter space the two choices yield a different number of equilibrium points, so for theoretical results the hyperbolic tangent should not be substituted for the piecewise linear function.
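The two-cell dynamics summarized above follow the standard CNN state equation dx/dt = -x + A f(x) + B u + z with the piecewise linear output f(x) = 0.5(|x+1| - |x-1|). A minimal numerical sketch of this system (the template values used in the usage note are illustrative, not taken from the thesis):

```python
import numpy as np

def pwl(x):
    # Standard CNN output function: f(x) = 0.5 * (|x + 1| - |x - 1|),
    # i.e. the identity on [-1, 1] saturating at +/-1 outside it.
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate_cnn(A, B, u, z, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of the CNN state equation
    dx/dt = -x + A f(x) + B u + z for a small network."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + A @ pwl(x) + B @ u + z)
    return x, pwl(x)
```

With self-feedback greater than 1 and no coupling, each cell saturates to an output of +1 or -1 depending on the sign of its initial state, i.e. the network converges to a fixed point; other template choices (e.g. antisymmetric cross-coupling) produce the limit cycles discussed in the dissertation.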
Bordignon, Fernando Luis. "Aprendizado extremo para redes neurais fuzzy baseadas em uninormas." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259061.
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Evolving systems are highly adaptive systems able to simultaneously modify their structures and parameters from a stream of data, online. Learning from data streams is a contemporary and challenging issue due to the increasing size and temporal availability of data, which limits the applicability of traditional learning methods. This dissertation, in addition to reviewing the literature on evolving systems and neuro-fuzzy networks, addresses a structure and introduces an evolving learning approach to train uninorm-based hybrid neural networks using extreme learning concepts. Uninorm-based neurons, rooted in triangular norms and conorms, generalize fuzzy neurons. Uninorms bring flexibility and generality to fuzzy neuron models, as they can behave like triangular norms, triangular conorms, or something in between by adjusting their identity elements. This feature adds a form of plasticity to neural network modeling. An incremental clustering method is used to granulate the input space, and a scheme based on extreme learning is developed to train the neural network. It is proved that a static version of the uninorm-based neuro-fuzzy network approximates continuous functions on compact domains, i.e., it is a universal approximator. It is postulated, and computational experiments endorse, that the evolving neuro-fuzzy network shares equivalent or better approximation capability in dynamic environments than its static counterpart.
Master in Electrical Engineering (Computer Engineering program)
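The identity-element behavior of uninorms described in the abstract can be made concrete with one standard construction: a scaled t-norm below the identity element g, a scaled t-conorm above it, and min on mixed inputs. This is an illustrative sketch, not the exact operator used in the dissertation; the product/probabilistic-sum pairing and the function name are assumptions:

```python
def uninorm(a, b, g):
    """Illustrative uninorm on [0, 1] with identity element g in (0, 1).

    Below g it acts as a scaled product t-norm (AND-like), above g as a
    scaled probabilistic-sum t-conorm (OR-like), and on mixed inputs as min.
    """
    if a <= g and b <= g:                      # t-norm region
        return g * (a / g) * (b / g)
    if a >= g and b >= g:                      # t-conorm region
        na, nb = (a - g) / (1 - g), (b - g) / (1 - g)
        return g + (1 - g) * (na + nb - na * nb)
    return min(a, b)                           # mixed region
```

Sliding g toward 1 makes the operator behave almost entirely like a t-norm, and toward 0 like a t-conorm; tuning g per neuron is the plasticity the abstract refers to.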
Rüppell, Maximilian Alexander. "Single neuron dynamics and interaction in neuronal networks during synchronized spontaneous activity." Thesis (advisor: Ulrich Egert), Freiburg : Universität, 2019. http://d-nb.info/1237617685/34.
Grünler, Daniel, and Saman Rassam. "The effects of connection density on neuronal synchrony in a simulated neuron network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280348.
The brain is one of our most complex organs. Our understanding of it is still at an early stage, and brain-related research has long been a major field of study. One way to gain a better understanding of how the brain works is to examine neuronal activity by looking at activity levels and activity patterns. One such pattern is neuronal synchrony, which has been shown to play a significant role in our cognitive abilities. We began by generating spike data through simulation of a neuron network with varying degrees of connection density. We then analyzed the spike data with the ISI-distance method to quantify the level of neuronal synchrony in the network at the different densities. As a final step, we computed Pearson's correlation coefficient to measure the correlation between density and neuronal synchrony. The results showed that network density and the degree of neuronal synchrony were strongly negatively correlated, but due to limiting factors the result cannot be generalized beyond the specific circumstances of the experiment.
Teller, Amado Sara. "Functional organization and network resilience in self-organizing clustered neuronal cultures." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/396114.
Unraveling the relationship between the anatomical network of connections and its emergent dynamics is one of the great challenges of current neuroscience. In this regard, neuronal cultures have taken on an important role in addressing this question, since fundamental phenomena can be studied at more tractable scales. Neuronal cultures are typically obtained by dissociating neural tissue from a specific brain region, rat cortex in our case, and growing it in a suitable medium. Within 1-2 weeks, cultured neurons form a new network with rich spontaneous activity. One of the most promising in vitro preparations is 'clustered networks'. These networks self-organize naturally, forming groups of neurons (clusters) interconnected through axons. Characterizing the dynamics of these clustered networks, as well as their sensitivity to perturbations, has been the main goal of this thesis. We characterized the functional network of the culture from its spontaneous dynamics, developing a novel physico-mathematical model for this purpose. We observed that the networks have modular connectivity, where clusters tend to connect strongly in small groups that in turn connect to one another. Moreover, the functional networks show key topological properties, notably assortativity (preferential interconnection of clusters with a similar number of connections) and the existence of a 'rich club' (a group of clusters so strongly interconnected that they form the fundamental core of the network). These properties confer great robustness and flexibility on the network. For this reason, in the thesis we investigated different physical and biochemical perturbations, showing that clustered networks are much more resistant to damage than other configurations, which reinforces the link between the described topological properties and resistance to damage.
In addition, we observed that the networks exhibited different mechanisms of connection reinforcement to preserve network activity. Clustered networks therefore constitute an ideal platform for studying network resilience, or as a model system applied to studies of neurodegenerative diseases such as Alzheimer's.
Carvalho, Milena Menezes. "Structural, functional and dynamical properties of a lognormal network of bursting neurons." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/76/76131/tde-25052017-110738/.
Full text
In the CA1 and CA3 regions of the hippocampus, several properties of neuronal activity follow skewed, lognormal-like distributions, including mean firing rate, the rate and magnitude of spike bursts, the magnitude of population synchrony, and the correlations between pre- and postsynaptic spikes. In recent studies, the lognormal features of hippocampal activity were well reproduced by a network of multi-timescale adaptive threshold (MAT) neurons with synaptic weights between excitatory neurons following a lognormal distribution, although it is not yet known whether and how other neuronal and network properties can be replicated in this model. In this work we carried out two additional studies of the same network: first, we analyzed burst properties in more depth by identifying and grouping neurons with exceptional bursting capability, showing once more the importance of the lognormal distribution of synaptic weights. Next, we characterized dynamical activity patterns called neuronal avalanches, both in the model and in in vivo recordings from rodent CA3 during behavioral tasks, revealing the similarities and differences between avalanche size distributions across the sleep-wake cycle. These results compare the MAT neuron network with hippocampal measurements from a different perspective than previously presented, providing further insight into the mechanisms behind activity in hippocampal subregions.
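The lognormal weight statistics described in the abstract above can be illustrated with a short sketch (the distribution parameters are illustrative choices, not values from the thesis): sampling synaptic weights whose logarithm is normally distributed produces the heavy right tail in which a few very strong synapses pull the mean above the median.

```python
import random

random.seed(42)

# Lognormal weight distribution: log(w) ~ Normal(mu, sigma).
# mu and sigma are illustrative, not taken from the thesis.
mu, sigma = -0.64, 0.5
weights = [random.lognormvariate(mu, sigma) for _ in range(10000)]

mean_w = sum(weights) / len(weights)
median_w = sorted(weights)[len(weights) // 2]
# Right-skew signature: the mean exceeds the median because the
# few strongest synapses dominate the sum.
```

All weights are strictly positive, which is another property that makes the lognormal a natural choice for synaptic strengths.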
McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.
Full text
Donachy, Shaun. "Spiking Neural Networks: Neuron Models, Plasticity, and Graph Applications." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3984.
Full text
Reis, Elohim Fonseca dos, 1984. "Criticality in neural networks = Criticalidade em redes neurais." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276917.
Full text
Master's dissertation - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Abstract: This work is divided into two parts. In the first part, a correlation network is built based on an Ising model at different temperatures (critical, subcritical and supercritical) using a Metropolis Monte Carlo algorithm with single-spin-flip dynamics. This theoretical model is compared with a brain network built from the correlations of BOLD fMRI time series of brain-region activity. Network measures such as the clustering coefficient, average shortest path length and degree distribution are analysed, and the same measures are calculated for the network obtained from the time-series correlations of the spins in the Ising model. The results for the brain network are best explained by the theoretical model at the critical temperature, suggesting critical aspects in brain dynamics. In the second part, the temporal dynamics of the activity of a neuron population, namely retinal ganglion cells recorded with a multi-electrode array, is studied. Many studies have focused on describing the activity of neural networks using disordered Ising models, with no regard to its dynamical nature. Treating time as an extra dimension of the system, the temporal dynamics of the population activity is modeled: the maximum entropy principle is used to build an Ising model with pairwise interactions between the activities of different neurons at different times. Model fitting is performed by a combination of Metropolis Monte Carlo sampling and gradient descent. The system is characterized by the learned parameters, questions such as detailed balance and time reversibility are analysed, and thermodynamic variables, such as the specific heat, can be calculated to study critical aspects.
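The first part of the pipeline described above can be sketched roughly as follows (lattice size, temperature and sweep counts are arbitrary illustrative choices, not the thesis's): a single-spin-flip Metropolis simulation of a small 2-D Ising model, from whose spin time series a pairwise Pearson correlation, the building block of the correlation network, can be computed.

```python
import math
import random

random.seed(0)

L = 8                       # 8x8 lattice, periodic boundaries
T = 2.27                    # near the 2-D critical temperature
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def local_field(i, j):
    # Sum of the four nearest neighbours with periodic boundaries.
    return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
            + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

def metropolis_sweep():
    # One single-spin-flip Metropolis sweep over the lattice.
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = 2.0 * spins[i][j] * local_field(i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

# Record time series of two neighbouring spins and correlate them.
series_a, series_b = [], []
for _ in range(200):
    metropolis_sweep()
    series_a.append(spins[0][0])
    series_b.append(spins[0][1])

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy) if vx and vy else 0.0

r = pearson(series_a, series_b)
```

Thresholding such pairwise correlations over all spin pairs yields the adjacency matrix of the correlation network on which the graph measures are then computed.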
Master's degree in Physics (Mestre em Física). FAPESP grant 2013/25361-6.
SUSI, GIANLUCA. "Asynchronous spiking neural networks: paradigma generale e applicazioni." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2012. http://hdl.handle.net/2108/80567.
Full text
Diesmann, Markus. "Conditions for stable propagation of synchronous spiking in cortical neural networks: single neuron dynamics and network properties." [S.l.]: [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=968772781.
Full text
Merege, Fernando. "Identificação de padrões de criminosos seriais usando inteligência artificial associada a neurônios espelhos." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-21052015-164058/.
Full text
Serial criminals who commit the crime of theft have distinct modes of operation (modus operandi) that can be identified through the analysis of forensic examinations using neural networks. In the proposed system, once a particular mode of operation has been identified, a forensic analyst uses the information collected, together with the hypotheses generated by field experts, to define sets of complementary expert actions, which are added to the records so identified. During a new forensic examination, in real time, an auxiliary subroutine examines the data blocks sent by the forensic experts in the field and, in case of similarity to a previously identified mode of operation, sends them the complementary set of actions, which, at the discretion of the person responsible in the field, may or may not be used to change the chosen field procedure. In this work we define Mirror Neurons as the association of the pattern-identifying neural networks with the worksheet used by the forensic analyst to define complementary actions, and with the auxiliary subroutine that checks the blocks of information received, identifies parts of a mode of operation, and sends the field experts a set of complementary actions. This definition was inspired by the discovery in neurobiology of a specific type of neuron with the ability to fire on receiving a sensory input, activating an area of memory that can, in turn, activate other memory areas or send a motor command. In this work, the neural network programs used for identifying the modes of operation were developed, in addition to the worksheets for the elaboration of complementary actions and the auxiliary subroutine for real-time identification of partial modes of operation. Network training was performed with 98 occurrences, and 10 events were used for validation.
Gómez Orlandi, Javier. "Noise, coherent activity and network structure in neuronal cultures." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/346925.
Full text
Tuffy, Fergal. "Inter-neuron interconnect strategies for hardware implementations of neural networks." Thesis, University of Ulster, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441169.
Full text
Smetana, Bedřich. "Algebraizace a parametrizace přechodových relací mezi strukturovanými objekty s aplikacemi v oblasti neuronových sítí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-433543.
Full text
Corrêa, Leonardo Garcia. "Memória associativa em redes neurais realimentadas." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-06122004-115632/.
Full text
In this dissertation we investigate biologically inspired models of pattern storage and retrieval by means of feedback neural networks, which attempt to model some of the dynamical aspects of brain functioning. The study concentrated on Cellular Neural Networks, a locally coupled version of the classical Hopfield model. The research comprised stability analysis of these networks, as well as performance tests of various methods for content-addressable (associative) memory design in Cellular Neural Networks.
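The classical Hopfield scheme underlying such associative-memory designs can be sketched as follows (network size, pattern count and noise level are arbitrary illustrative choices): Hebbian storage of a few random patterns, followed by iterated threshold updates that pull a corrupted cue back toward the stored pattern.

```python
import random

random.seed(0)

N = 64
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(3)]

# Hebbian storage: W[i][j] = (1/N) * sum over patterns of x_i * x_j,
# with a zero diagonal.
W = [[0.0] * N for _ in range(N)]
for p in patterns:
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j] / N

def sign(x):
    return 1 if x >= 0 else -1

def recall(state, iters=5):
    # Synchronous threshold updates; with a load this far below
    # capacity the state converges to the nearest stored pattern.
    for _ in range(iters):
        state = [sign(sum(W[i][j] * state[j] for j in range(N)))
                 for i in range(N)]
    return state

# Corrupt ~10% of the first pattern and try to recover it.
noisy = patterns[0][:]
for k in random.sample(range(N), 6):
    noisy[k] = -noisy[k]
recovered = recall(noisy)
overlap = sum(a == b for a, b in zip(recovered, patterns[0])) / N
```

With only 3 patterns in 64 neurons the network operates well below the ~0.14N capacity limit, so the corrupted cue is reliably cleaned up; Cellular Neural Networks replace the all-to-all weight matrix with local coupling.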
Bendinskienė, Janina. "Duomenų dimensijos mažinimas naudojant autoasociatyvinius neuroninius tinklus." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120731_132413-38444.
Full text
This thesis gives an overview of techniques for dimensionality reduction and visualization of multivariate data, including artificial neural networks. It presents the main concepts of artificial neural networks (biological and artificial neurons, neuron models, learning strategies, multilayer networks, and so on), and analyzes autoassociative neural networks in particular. The aim of this work is to examine the application of autoassociative neural networks to multidimensional data visualization and dimensionality reduction, and to explore how the results depend on different parameters. To achieve this, several multidimensional data sets were used, and the parameters influencing the performance of an autoassociative neural network were determined. In addition, the results were compared using two different error measures: the MDS error and the autoassociative error. The MDS error shows how well the distances between the analyzed points (vectors) are preserved in the transition from the multidimensional space to a lower-dimensional space. The outputs of an autoassociative network should coincide with its inputs, so the autoassociative error shows how well this is achieved (evaluating the difference between inputs and outputs). The study investigated how autoassociative network errors are influenced by the following parameters: the activation function, the minimization function, the training function, the number of epochs, the number of hidden neurons, and the choice of the reduced dimension.
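The two error measures compared in the abstract above can be sketched on toy data (the data set and the stand-in 2-D projection are invented for illustration, not taken from the thesis): the MDS stress compares pairwise distances before and after the dimension reduction, while the autoassociative error compares network inputs with outputs; here a perfect reconstruction makes the latter exactly zero.

```python
import math

# Toy data: points on a gently curved sheet in 3-D.
data = [(x, y, 0.1 * (x * x + y * y)) for x in range(-2, 3) for y in range(-2, 3)]
# Stand-in for the 2-D bottleneck output of an autoassociative network:
# simply drop the third coordinate.
proj = [(x, y) for (x, y, z) in data]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# MDS stress: how well pairwise distances survive the reduction.
num = den = 0.0
for i in range(len(data)):
    for j in range(i + 1, len(data)):
        d_hi, d_lo = dist(data[i], data[j]), dist(proj[i], proj[j])
        num += (d_hi - d_lo) ** 2
        den += d_hi ** 2
stress = math.sqrt(num / den)

# Autoassociative error: mismatch between inputs and outputs. Here the
# "decoder" re-embeds the projection with the known surface equation,
# so the reconstruction is perfect and the error vanishes.
recon = [(x, y, 0.1 * (x * x + y * y)) for (x, y) in proj]
auto_err = sum(dist(a, b) for a, b in zip(data, recon)) / len(data)
```

In a real autoassociative network both the projection and the reconstruction are learned, so neither error is exactly zero; the thesis studies how they vary with the training parameters.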
Filho, Edson Costa de Barros Carvalho. "Investigation of Boolean neural networks on a novel goal-seeking neuron." Thesis, University of Kent, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277285.
Full text
Vik, Lukas, and Fredrik Svensson. "Real-time stereoscopic object tracking on FPGA using neural networks." Thesis, Linköpings universitet, Institutionen för systemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110374.
Full text
Chauvet, Pierre. "Sur la stabilité d'un réseau de neurones hiérarchique à propos de la coordination du mouvement." Angers, 1993. http://www.theses.fr/1993ANGE0011.
Full text
Anisenia, Andrei. "Stochastic Search Genetic Algorithm Approximation of Input Signals in Native Neuronal Networks." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26220.
Full text
Pardo-Figuerez, Maria M. "Designing neuronal networks with chemically modified substrates : an improved approach to conventional in vitro neural systems." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/27941.
Full text
Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.
Full text
Duhr, Fanny. "Voies de signalisation associées au récepteur 5-HT6 et développement neuronal." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTT042/document.
Full text
Brain circuitry patterning is a complex, highly regulated process whose alteration gives rise to various neurodevelopmental disorders such as schizophrenia or Autism Spectrum Disorders (ASD), both characterized by a wide spectrum of deficits. The serotonin 6 receptor (5-HT6 receptor), known for its implication in neuronal migration, has been identified as a key therapeutic target for the treatment of the cognitive deficits observed in schizophrenia, but also in neurodegenerative pathologies such as Alzheimer's disease. However, the signalling mechanisms known to be activated by the 5-HT6 receptor do not explain its involvement in neurodevelopmental processes. My thesis project therefore aimed at characterizing the signalling pathways engaged by the 5-HT6 receptor during neural development. A proteomic approach allowed me to show that the 5-HT6 receptor interacts with several proteins playing crucial roles in neurodevelopmental processes, such as Cdk5 and WAVE-1. I then demonstrated that, besides its role in neuronal migration, the 5-HT6 receptor is also involved in neurite growth through constitutive phosphorylation of the receptor at Ser350 by associated Cdk5, a process leading to an increase in Cdc42 activity. The second part of my work aimed at understanding the role of the 5-HT6 receptor in dendritic spine morphogenesis, and the involvement of WAVE-1 and Cdk5 in this process. These results provide new insights into the control of neurodevelopmental processes by the 5-HT6 receptor, which thus appears to be a key therapeutic target for neurodevelopmental disorders, contributing as it does to the development of the cognitive circuitry related to the pathophysiology of ASD and schizophrenia.
Arruda, Henrique Ferraz de. "Análise estrutural e dinâmica de redes biológicas." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-03082015-101106/.
Full text
Different types of neurons have distinct shapes. An important factor in shape regulation is gene expression, which is also related to the connectivity between nerve cells as they form networks, on which dynamics such as learning can take place. In this work we developed a framework for modeling and simulating neurons that allows an integrated analysis from gene expression to dynamics, enabling the study of the system as a whole as well as of the relationships between its parts. In the neuron-generation step, we used different patterns of gene expression. Networks were created from those neurons, and several centrality measures were computed to characterize them. The dynamical processes considered were the integrate-and-fire model, which simulates communication between neurons, and Hebbian development, which is applied to simulate learning. At every step, Pearson's correlation and the mutual information between expression levels were measured, quantifying the influence of gene expression. These experiments showed that gene expression influences all steps and is, in every case except the generation of neuronal shape, an important factor. In addition, by analyzing the betweenness centrality it is possible to observe the formation of paths. To study these paths, comparisons between the models and other spatial networks were performed, showing that paths are a common feature of other geographical networks, being related to the connections between network communities.
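The integrate-and-fire dynamics mentioned above can be sketched minimally as follows (time constants, threshold and drive are arbitrary illustrative values, not the thesis's): the membrane potential leaks toward rest, integrates a noisy input current, and is reset after each threshold crossing.

```python
import random

random.seed(1)

# Leaky integrate-and-fire parameters (illustrative values).
tau, v_rest, v_thresh, v_reset = 10.0, 0.0, 1.0, 0.0
dt, steps = 0.1, 2000

v = v_rest
spikes = []
for t in range(steps):
    i_in = 0.15 + 0.05 * random.random()   # noisy suprathreshold drive
    # Euler step of dv/dt = -(v - v_rest)/tau + i_in
    v += dt * (-(v - v_rest) / tau + i_in)
    if v >= v_thresh:
        spikes.append(t)                    # record the spike time
        v = v_reset                         # and reset the membrane
```

In a network version, each spike would add a weighted increment to the membrane potentials of postsynaptic neurons, and a Hebbian rule would strengthen weights between neurons that fire together.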
Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "insilico"." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14057/document.
Full text
This work, conducted in a research group designing neuromimetic integrated circuits based on the Hodgkin-Huxley model, deals with the parameter estimation of biological neuron models. The first part of the manuscript tries to bridge the gap between neuron modeling and optimization, focusing on the Hodgkin-Huxley model because it is the one used in the group. An estimation method associated with the voltage-clamp technique already existed; however, this classical method does not allow all parameters of the model to be extracted precisely. In the second part, we therefore propose an alternative method, based on the differential evolution algorithm, to jointly estimate all parameters of one ionic channel while avoiding the usual approximations. The third chapter is divided into three sections: the first two present the application of the new estimation method to two different problems, fitting models to biological data and the automated tuning of neuromimetic chips; in the third, we propose an estimation technique using only membrane voltage recordings, which are easier to measure than ionic currents. Finally, the fourth and last chapter is a theoretical study preparing the implementation of small neural networks on neuromimetic chips; more specifically, we study the influence of intrinsic cellular properties on the global behavior of a neural network in the context of gamma oscillations.
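The differential evolution loop at the core of the estimation method described above can be sketched generically, with a toy quadratic cost standing in for the real model-fitting error (a true Hodgkin-Huxley objective would compare simulated and recorded currents; all parameter values here are illustrative):

```python
import random

random.seed(3)

# Toy objective: distance of candidate parameters from a hypothetical
# "true" parameter set (stand-in for the model-vs-data error).
true_params = [1.2, -0.7, 3.0]
def cost(p):
    return sum((a - b) ** 2 for a, b in zip(p, true_params))

dim, pop_size, F, CR = 3, 20, 0.8, 0.9   # classic DE/rand/1/bin settings
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

for gen in range(200):
    for k in range(pop_size):
        # Mutation: combine three distinct individuals other than k.
        a, b, c = random.sample([p for i, p in enumerate(pop) if i != k], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
        # Binomial crossover with a guaranteed mutant component.
        j_rand = random.randrange(dim)
        trial = [mutant[d] if (random.random() < CR or d == j_rand)
                 else pop[k][d] for d in range(dim)]
        # Greedy selection.
        if cost(trial) <= cost(pop[k]):
            pop[k] = trial

best = min(pop, key=cost)
```

The appeal of the method for this problem is that it needs only cost evaluations, no gradients, which suits objectives defined through numerical simulation of ionic currents.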
Pontes, Fabrício José [UNESP]. "Projeto otimizado de redes neurais artificiais para predição da rugosidade em processos de usinagem com a utilização da metodologia de projeto de experimentos." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/103054.
Full textCoordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The present work offers contributions to the modeling of workpiece surface roughness in machining processes by means of artificial neural networks. It proposes a method for the design of Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks optimized for the prediction of average surface roughness (Ra). The method is expressed as an algorithm employing two techniques from the Design of Experiments (DOE) methodology: full factorials and Evolutionary Operation (EVOP). The strategy adopted consists in the systematic use of DOE to search for network configurations that statistically benefit performance in roughness prediction. Cutting parameters from machining operations are employed as network inputs, and the mean absolute error in percentage (MAE%) of the lower decile of the predictions for the test set is used as the figure of merit for network performance. To validate the method, data sets retrieved from the literature, as well as results of turning experiments with ABNT 12L13 free-machining steel, were used to form training and test sets for the networks. The proposed algorithm leads to a significant reduction in the roughness prediction error when its performance is compared with regression models, with the results reported in the literature, and with neural models proposed by a commercial software package intended to search automatically for optimal network configurations. Networks designed according to the proposed method also display a significantly reduced dispersion of prediction errors in this comparison, and MLP networks achieve results statistically superior to those obtained by the best RBF networks.
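The full-factorial side of such a DOE strategy can be sketched generically (the factors, levels and scoring function below are placeholders; the real figure of merit is the MAE% of the lower decile obtained after actually training each candidate network):

```python
import itertools

# Hypothetical network design factors and levels for a full factorial.
factors = {
    "hidden_neurons": [5, 10, 20],
    "learning_rate": [0.01, 0.1],
    "activation": ["tanh", "logistic"],
}

def mae_lower_decile(config):
    # Placeholder figure of merit: a real study would train the network
    # for this configuration and compute the MAE% of the lowest decile
    # of test-set predictions.
    return (config["hidden_neurons"] * config["learning_rate"]
            + (0.5 if config["activation"] == "logistic" else 0.0))

# Enumerate every combination of factor levels (3 x 2 x 2 = 12 runs).
names = list(factors)
runs = [dict(zip(names, combo)) for combo in itertools.product(*factors.values())]
best = min(runs, key=mae_lower_decile)
```

A full factorial evaluates every combination once, which is what allows the main effects and interactions of the design factors to be tested statistically before EVOP refines the winning region.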
Timoszczuk, Antonio Pedro. "Reconhecimento automático do locutor com redes neurais pulsadas." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-26102004-195250/.
Full text
Pulsed neural networks have received a lot of attention from researchers. This work aims to verify the capability of this neural paradigm when applied to a speaker recognition task. After a description of the fundamentals of automatic speaker recognition and artificial neural networks, a spike response model of neurons is tested. A novel neural network architecture based on this neuron model is proposed and used in a speaker recognition system. Text-dependent and text-independent tests were performed using the Speaker Recognition v1.0 database from the CSLU (Center for Spoken Language Understanding) of the Oregon Graduate Institute, U.S.A., with a multilayer perceptron used as the classifier. The pulsed neural network demonstrated its capability to deal with temporal information, and the use of this neural paradigm for speaker recognition is promising.