
Dissertations / Theses on the topic 'Perceptrons'


Consult the top 50 dissertations / theses for your research on the topic 'Perceptrons.'


1

Kallin Westin, Lena. "Preprocessing perceptrons." Doctoral thesis, Umeå : Univ., 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Filho, Osame Kinouchi. "Generalização ótima em perceptrons." Universidade de São Paulo, 1992. http://www.teses.usp.br/teses/disponiveis/54/54131/tde-07042015-165731/.

Full text
Abstract:
O perceptron tem sido estudado no contexto da física estatística desde o trabalho seminal de Gardner e Derrida sobre o espaço de aclopamentos desta rede neural simples. Recentemente, Opper e Haussler calcularam via método de réplicas, o desempenho ótimo teórico do perceptron na aprendizagem de uma regra a partir de exemplos (generalização). Neste trabalho encontramos a curva de desempenho ótimo após a primeira apresentação dos exemplos (primeiro passo da dinâmica de aprendizagem). No limite de grande número de exemplos encontramos que o erro de generalização é apenas duas vezes maior que o erro encontrado por Opper e Haussler. Calculamos também o desempenho ótimo para o primeiro passo da dinâmica de aprendizagem com seleção de exemplos. Mostramos que a seleção ótima ocorre quando o novo exemplo é escolhido ortogonal ao vetor de acoplamentos do perceptron. O erro de generalização neste caso decai exponencialmente com o número de exemplos. Propomos também uma nova classe de algoritmos de aprendizagem que aproxima muito bem as curvas de desempenho ótimo. Estudamos analiticamente o primeiro passo da dinâmica de aprendizagem e numericamente seu comportamento para tempos longos. Mostramos que vários algoritmos conhecidos (Hebb, Perceptron, Adaline, Relaxação) podem ser interpretados como aproximações, de maior ou menor qualidade, de nosso algoritmo
The perceptron has been studied in the context of statistical physics since the seminal work of Gardner and Derrida on the coupling space of this simple neural network. Recently, Opper and Haussler calculated, with the replica method, the theoretical optimal performance of the perceptron for learning a rule from examples (generalization). In this work we found the optimal performance curve after the first presentation of the examples (first step of the learning dynamics). In the limit of a large number of examples the generalization error is only twice the error found by Opper and Haussler. We also calculated the optimal performance for the first step of the learning dynamics with selection of examples. We show that optimal selection occurs when the new example is chosen orthogonal to the perceptron coupling vector. The generalization error in this case decays exponentially with the number of examples. We also propose a new class of learning algorithms which approximates the optimal performance curves very well. We study analytically the first step of the learning dynamics and numerically its behaviour for long times. We show that several known learning algorithms (Hebb, Perceptron, Adaline, Relaxation) can be seen as more or less accurate approximations of our algorithm.
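Two ingredients of the abstract above, the Hebbian update of the coupling vector and the selection of each new example orthogonal to it, are easy to picture in a few lines. The sketch below is a toy teacher-student simulation written for this listing; the arccos formula for the generalization error is the standard one for perceptrons, and none of the code comes from the thesis itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                     # input dimension
teacher = rng.standard_normal(N)            # the unknown rule
teacher /= np.linalg.norm(teacher)

def query_orthogonal(student, rng):
    """Draw a random input and remove its component along the student's
    coupling vector, so the selected example is orthogonal to it."""
    x = rng.standard_normal(len(student))
    w = student / np.linalg.norm(student)
    x -= (x @ w) * w
    return x / np.linalg.norm(x) * np.sqrt(len(student))

student = rng.standard_normal(N)            # current coupling vector J
for _ in range(500):
    x = query_orthogonal(student, rng)
    label = np.sign(teacher @ x)            # teacher's answer on the query
    student += label * x                    # Hebbian update

# generalization error of a perceptron: eps = arccos(R) / pi,
# with R the normalized overlap between student and teacher
R = student @ teacher / np.linalg.norm(student)
print("generalization error:", np.arccos(np.clip(R, -1.0, 1.0)) / np.pi)
```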
APA, Harvard, Vancouver, ISO, and other styles
3

Adharapurapu, Ratnasri Krishna. "Convergence properties of perceptrons." CSUSB ScholarWorks, 1995. https://scholarworks.lib.csusb.edu/etd-project/1034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Friess, Thilo-Thomas. "Perceptrons in kernel feature spaces." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhao, Lenny. "Uncertainty prediction with multi-layer perceptrons." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55733.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mourao, Kira Margaret Thom. "Learning action representations using kernel perceptrons." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7717.

Full text
Abstract:
Action representation is fundamental to many aspects of cognition, including language. Theories of situated cognition suggest that the form of such representation is distinctively determined by grounding in the real world. This thesis tackles the question of how to ground action representations, and proposes an approach for learning action models in noisy, partially observable domains, using deictic representations and kernel perceptrons. Agents operating in real-world settings often require domain models to support planning and decision-making. To operate effectively in the world, an agent must be able to accurately predict when its actions will be successful, and what the effects of its actions will be. Only when a reliable action model is acquired can the agent usefully combine sequences of actions into plans, in order to achieve wider goals. However, learning the dynamics of a domain can be a challenging problem: agents’ observations may be noisy, or incomplete; actions may be non-deterministic; the world itself may be noisy; or the world may contain many objects and relations which are irrelevant. In this thesis, I first show that voted perceptrons, equipped with the DNF family of kernels, easily learn action models in STRIPS domains, even when subject to noise and partial observability. Key to the learning process is, firstly, the implicit exploration of the space of conjunctions of possible fluents (the space of potential action preconditions) enabled by the DNF kernels; secondly, the identification of objects playing similar roles in different states, enabled by a simple deictic representation; and lastly, the use of an attribute-value representation for world states. Next, I extend the model to more complex domains by generalising both the kernel and the deictic representation to a relational setting, where world states are represented as graphs. Finally, I propose a method to extract STRIPS-like rules from the learnt models. I give preliminary results for STRIPS domains and discuss how the method can be extended to more complex domains. As such, the model is both appropriate for learning data generated by robot explorations as well as suitable for use by automated planning systems. This combination is essential for the development of autonomous agents which can learn action models from their environment and use them to generate successful plans.
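A mistake-driven kernel perceptron of the kind used above can be stated very compactly. The sketch below assumes the usual DNF-kernel form k(x, z) = 2^(number of matching attributes) - 1 over Boolean vectors and omits the voting scheme, the deictic representation, and the relational extension, so it only illustrates the basic learning loop rather than the thesis' algorithm.

```python
import numpy as np

def dnf_kernel(x, z):
    """Assumed DNF-kernel form over Boolean vectors: implicitly counts the
    conjunctions of literals satisfied by both x and z,
    k(x, z) = 2^(number of matching attributes) - 1."""
    return 2.0 ** np.sum(x == z) - 1.0

class KernelPerceptron:
    """Plain mistake-driven kernel perceptron (no voting)."""
    def __init__(self, kernel):
        self.kernel, self.sv, self.alpha = kernel, [], []

    def predict(self, x):
        s = sum(a * self.kernel(v, x) for v, a in zip(self.sv, self.alpha))
        return 1 if s >= 0 else -1

    def fit(self, X, y, epochs=5):
        for _ in range(epochs):
            for x, t in zip(X, y):
                if self.predict(x) != t:        # store each mistake
                    self.sv.append(x)
                    self.alpha.append(t)

# toy usage: learn whether a (hypothetical) action precondition holds
X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]])
y = np.array([1, 1, -1, -1])
clf = KernelPerceptron(dnf_kernel)
clf.fit(X, y)
print([clf.predict(x) for x in X])
```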
APA, Harvard, Vancouver, ISO, and other styles
7

Black, Michael David. "Applying perceptrons to speculation in computer architecture." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/6725.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
8

Cairns, Graham Andrew. "Learning with analogue VLSI multi-layer perceptrons." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Grandvalet, Yves. "Injection de bruit dans les perceptrons multicouches." Compiègne, 1995. http://www.theses.fr/1995COMPD802.

Full text
Abstract:
When estimating a regression function, model selection is a key step: it determines the complexity of the model that generalizes best from the data, i.e. that minimizes the prediction error. In multilayer perceptrons, complexity can be tuned by modifying the network architecture, but it can also be controlled with the architecture held fixed. The methods used for this add to the error criterion, explicitly or not, a term penalizing the complexity of the solution; the notion of effective parameters then supersedes that of parameters. Among these methods, we chose to study noise injection, a particularly attractive heuristic since its algorithmic cost is nil. Our work first addresses the theoretical justification of this heuristic. We begin by rejecting the Taylor-expansion approach, which is the most commonly used today. We then use the links between noise injection and the Nadaraya-Watson regressor to delimit the domain in which the heuristic can be applied. In addition, we propose two modifications that extend this domain to a broader class of problems, namely irregularly spaced and high-dimensional data. Finally, we validate our approach by comparing the performance of different regressors on an application to data from a glass manufacturing process.
APA, Harvard, Vancouver, ISO, and other styles
10

Octavian, Stan. "New recursive algorithms for training feedforward multilayer perceptrons." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Barbato, Daniela Maria Lemos. "Estudo analítico do efeito da diluição em perceptrons." Universidade de São Paulo, 1998. http://www.teses.usp.br/teses/disponiveis/76/76131/tde-05052008-172548/.

Full text
Abstract:
Perceptrons são redes neurais sem retroalimentação cujos os neurônios estão dispostos em camadas. O perceptron considerado neste trabalho consiste de uma camada de N neurônios sensores Si = 1; i = 1,.... N ligados a um único neurônio motor através das conexões sinápticas , Ji; i = 1, .... N. Utilizando o formalismo da Mecânica Estatística desenvolvido por Gardner e colaboradores, estudamos os efeitos da eliminação de uma fração dos pesos sinápticos (diluição) nas capacidades de aprendizado e generalização de dois tipos de perceptrons, a saber, o perceptron linear e o perceptron Booleano. No perceptron linear comparamos o desempenho de redes lesadas por diferentes tipos de diluição, que podem ocorrer durante ou após o processo de aprendizado. Essa comparação mostra que a estratégia de minimizar o erro de treinamento, não fornece o menor erro de generalização, além do que, dependendo do tamanho do conjunto de treinamento e do nível de ruído, os pesos menores podem se tornar os fatores mais importantes para o bom funcionamento da rede. No perceptron Booleano investigamos apenas o efeito da diluição após o término do aprendizado na capacidade de generalização da rede neural treinada com padrões ruidosos. Neste caso, apresentamos uma comparação entre os desempenhos relativos de cinco regras de aprendizado: regra de Hebb, pseudo-inversa, algoritmo de Gibbs, algoritmo de estabilidade ótima e algoritmo de Bayes. Em particular mostramos que a diluição sempre degrada o desempenho de generalização e o algoritmo de Bayes sempre fornece o menor erro de generalização.
Perceptrons are layered, feed-forward neural networks. In this work we consider a perceptron composed of one input layer with N sensor neurons Si = 1; i = 1,..., N which are connected to a single motor neuron through the synaptic weights Ji; i = 1,..., N. Using the Statistical Mechanics formalism developed by Gardner and co-workers, we study the effects of eliminating a fraction of the synaptic weights (dilution) on the learning and generalization capabilities of two types of perceptrons, namely, the linear perceptron and the Boolean perceptron. In the linear perceptron we compare the performances of networks damaged by different types of dilution, which may occur either during or after the learning stage. The comparison between the effects of the different types of dilution shows that the strategy of minimizing the training error does not yield the best generalization performance. Moreover, this comparison also shows that, depending on the size of the training set and on the level of noise corrupting the training data, the smaller weights may become the determinant factors in the good functioning of the network. In the Boolean perceptron we investigate the effect of dilution after learning on the generalization ability when this network is trained with noisy examples. We present a thorough comparison between the relative performances of five learning rules or algorithms: the Hebb rule, the pseudo-inverse rule, the Gibbs algorithm, the optimal stability algorithm and the Bayes algorithm. In particular, we show that the effect of dilution is always deleterious, and that the Bayes algorithm always yields the lowest generalization error.
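Dilution after learning, as studied above, amounts to zeroing a fraction of the learned weights and measuring how the generalization error degrades. The snippet below illustrates this for a Hebb-trained Boolean perceptron in a toy teacher-student setting; it is only an illustration, not the replica calculation of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 400                                  # inputs, training examples
teacher = np.sign(rng.standard_normal(N))        # target rule
X = np.sign(rng.standard_normal((P, N)))         # random +/-1 patterns
y = np.sign(X @ teacher)                         # noiseless labels

J = (y[:, None] * X).sum(axis=0)                 # Hebb rule: J_i = sum_mu y^mu x_i^mu

def gen_error(J, teacher):
    R = J @ teacher / (np.linalg.norm(J) * np.linalg.norm(teacher))
    return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi

for frac in (0.0, 0.25, 0.5, 0.75):              # fraction of weights removed
    mask = rng.random(N) >= frac                 # dilution after learning
    print(f"dilution {frac:.2f}: eps = {gen_error(J * mask, teacher):.3f}")
```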
APA, Harvard, Vancouver, ISO, and other styles
12

Karouia, Mohamed. "Initialisation et détermination de l'architecture des perceptrons multicouches." Compiègne, 1996. http://www.theses.fr/1996COMPD879.

Full text
Abstract:
The first problem studied in this thesis concerns the initialization of the weights of multilayer perceptrons. A new method is proposed for the particular case of discrimination problems. This method uses the discriminant factors obtained from a parametric or non-parametric discriminant factor analysis technique. An experimental study showed a reduction in training time and a noticeable improvement in generalization performance compared with random initialization and, to a lesser extent, compared with prototype-based initialization. The second problem addressed is the determination of the architecture of multilayer perceptrons. The proposed incremental method initializes the weight vectors with a discriminant factor analysis technique. The learning algorithm chains three steps: selection and training of the last added unit on its own, then adjustment of the output-layer weights, and finally training of the whole network. This strategy, of which several variants were studied, appears to achieve good performance in terms of generalization and computation time, notably in comparison with other constructive algorithms based on similar principles.
APA, Harvard, Vancouver, ISO, and other styles
13

Marchi, Rodrigo Andreoli de. "Aplicação do perceptron de múltiplas camadas no controle direto de potência do gerador de indução duplamente alimentado." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258919.

Full text
Abstract:
Advisors: Edson Bim, Fernando José Von Zuben
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: Neste trabalho é apresentada a estratégia de Controle Direto de Potência para o Gerador de Indução Duplamente Alimentado utilizando um controlador Perceptron de Múltiplas Camadas. O controlador tem a função de gerar os sinais das componentes de eixo direto e quadratura da tensão do rotor, sem a necessidade de controladores de corrente. A estratégia de controle apresentada permite operar o conversor de potência, conectado aos terminais do rotor, com frequência de chaveamento constante. A rede neural foi treinada off-line, a partir de um algoritmo de otimização de segunda ordem baseado no gradiente conjugado estendido, utilizando um conjunto de amostras obtido por meio da simulação digital de uma máquina de rotor bobinado de potência igual a 2 MW. Resultados de simulação digital com os dados dessa máquina, operando no modo gerador e com dupla alimentação, são apresentados para vários valores de potência ativa e reativa, e para velocidades fixas e variáveis, compreendidas na faixa de -15% a +15% da velocidade síncrona. Com o controlador implementado por uma rede neural artificial e treinada para uma máquina de 2 MW, testes de simulação digital e experimentais para uma máquina de 2,2 kW, operando na velocidade subsíncrona, são apresentados para validar a proposta
Abstract: This work presents a direct power control strategy for the doubly fed induction generator using a controller based on artificial neural networks, more specifically a multilayer perceptron. The controller has the role of generating the direct and quadrature-axis component signals of the rotor voltage, without the need for current controllers. The proposed control strategy makes it possible to operate the converter, connected to the rotor terminals, with a fixed switching frequency. The multilayer perceptron was subject to an off-line training procedure using a second-order algorithm based on an extended version of the conjugate gradient algorithm, using a set of samples produced by the digital simulation of a 2 MW wound-rotor machine. Results of digital simulation for this machine are presented for several values of active and reactive power, with the generator operating at fixed and variable speed, in the range of -15% to +15% of the synchronous speed, considering the parameters of the 2 MW machine. With the artificial neural network controller designed for this machine, digital simulation tests and experimental tests for a 2.2 kW machine, operating at sub-synchronous speed, are presented to validate the proposal.
Master's degree
Electrical Energy
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
14

Bartz, Michael. "Soft decision decoding of block codes using multilayer perceptrons." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bueno, Felipe Roberto 1985. "Perceptrons híbridos lineares/morfológicos fuzzy com aplicações em classificação." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306338.

Full text
Abstract:
Advisor: Peter Sussner
Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Resumo: Perceptrons morfológicos (MPs) pertencem à classe de redes neurais morfológicas (MNNs). Estas redes representam uma classe de redes neurais artificiais que executam operações de morfologia matemática (MM) em cada nó, possivelmente seguido pela aplicação de uma função de ativação. Vale ressaltar que a morfologia matemática foi concebida como uma teoria para processamento e análise de objetos (imagens ou sinais), por meio de outros objetos chamados elementos estruturantes. Embora inicialmente desenvolvida para o processamento de imagens binárias e posteriormente estendida para o processamento de imagens em tons de cinza, a morfologia matemática pode ser conduzida de modo mais geral em uma estrutura de reticulados completos. Originalmente, as redes neurais morfológicas empregavam somente determinadas operações da morfologia matemática em tons de cinza, denominadas de erosão e dilatação em tons de cinza, segundo a abordagem umbra. Estas operações podem ser expressas em termos de produtos máximo aditivo e mínimo aditivo, definidos por meio de operações entre vetores ou matrizes, da álgebra minimax. Recentemente, as operações da morfologia matemática fuzzy surgiram como funções de agregação das redes neurais morfológicas. Neste caso, falamos em redes neurais morfológicas fuzzy. Perceptrons híbridos lineares/morfológicos fuzzy foram inicialmente projetados como uma generalização dos perceptrons lineares/morfológicos existentes, ou seja, os perceptrons lineares/morfológicos fuzzy podem ser definidos por uma combinação convexa de uma parte morfológica fuzzy e uma parte linear. Nesta dissertação de mestrado, introduzimos uma rede neural artificial alimentada adiante, representando um perceptron híbrido linear/morfológico fuzzy chamado F-DELP (do inglês fuzzy dilation/erosion/linear perceptron), que ainda não foi considerado na literatura de redes neurais. Seguindo as ideias de Pessoa e Maragos, aplicamos uma suavização adequada para superar a não-diferenciabilidade dos operadores de dilatação e erosão fuzzy utilizados no modelo F-DELP. Em seguida, o treinamento é realizado por intermédio de um algoritmo de retropropagação de erro tradicional. Desta forma, aplicamos o modelo F-DELP em alguns problemas de classificação conhecidos e comparamos seus resultados com os produzidos por outros classificadores
Abstract: Morphological perceptrons (MPs) belong to the class of morphological neural networks (MNNs). These MNNs represent a class of artificial neural networks that perform operations of mathematical morphology (MM) at every node, possibly followed by the application of an activation function. Recall that mathematical morphology was conceived as a theory for processing and analyzing objects (images or signals) by means of other objects called structuring elements. Although initially developed for binary image processing and later extended to gray-scale image processing, mathematical morphology can be conducted very generally in a complete lattice setting. Originally, morphological neural networks only employed certain operations of gray-scale mathematical morphology, namely gray-scale erosion and dilation according to the umbra approach. These operations can be expressed in terms of (additive maximum and additive minimum) matrix-vector products in minimax algebra. It was not until recently that operations of fuzzy mathematical morphology emerged as aggregation functions of morphological neural networks. In this case, we speak of fuzzy morphological neural networks. Hybrid fuzzy morphological/linear perceptrons were initially designed by generalizing existing morphological/linear perceptrons; in other words, fuzzy morphological/linear perceptrons can be defined by a convex combination of a fuzzy morphological part and a linear part. In this master's thesis, we introduce a feedforward artificial neural network representing a hybrid fuzzy morphological/linear perceptron called the fuzzy dilation/erosion/linear perceptron (F-DELP), which has not yet been considered in the literature. Following Pessoa's and Maragos' ideas, we apply an appropriate smoothing to overcome the non-differentiability of the fuzzy dilation and erosion operators employed in the proposed F-DELP models. Then, training is achieved using a traditional backpropagation algorithm. Finally, we apply the F-DELP model to some well-known classification problems and compare the results with the ones produced by other classifiers.
Master's degree
Applied Mathematics
Master in Applied Mathematics
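The convex combination at the heart of the F-DELP model described above can be pictured with a single hybrid node. The sketch below uses max/min (Gödel-style) fuzzy dilation and erosion purely as placeholder operators; the actual fuzzy operators, the smoothing, and the gradient-based training of the dissertation are not reproduced.

```python
import numpy as np

def fuzzy_dilation(x, w):
    # placeholder fuzzy dilation: max_i min(x_i, w_i)
    return np.max(np.minimum(x, w))

def fuzzy_erosion(x, m):
    # placeholder fuzzy erosion: min_i max(x_i, m_i)
    return np.min(np.maximum(x, m))

def hybrid_node(x, w, m, a, lam, alpha=0.5):
    """Convex combination of a fuzzy morphological part (itself a mix of a
    dilation and an erosion) and a linear part, in the spirit of F-DELP."""
    morph = alpha * fuzzy_dilation(x, w) + (1.0 - alpha) * fuzzy_erosion(x, m)
    linear = a @ x
    return lam * morph + (1.0 - lam) * linear

x = np.array([0.2, 0.9, 0.4])
out = hybrid_node(x,
                  w=np.array([0.5, 0.7, 0.1]),    # dilation "weights"
                  m=np.array([0.3, 0.8, 0.6]),    # erosion "weights"
                  a=np.array([0.4, -0.2, 0.1]),   # linear weights
                  lam=0.6)                        # mixing coefficient
print(out)
```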
APA, Harvard, Vancouver, ISO, and other styles
16

Goosen, Johannes Christiaan. "Comparing generalized additive neural networks with multilayer perceptrons / Johannes Christiaan Goosen." Thesis, North-West University, 2011. http://hdl.handle.net/10394/5552.

Full text
Abstract:
In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied and compared as prediction techniques. MLPs are the most widely used type of artificial neural network (ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori method to determine the number of hidden neurons in each of the hidden layers of ANNs. Guidelines exist that are either heuristic or based on simulations that are derived from limited experiments. A modified version of the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and utilized to construct good MLP models. This algorithm enables the comparison with GANN models. GANNs are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less complex compared to MLPs and results can be interpreted with a graphical method, called the partial residual plot. A GANN consists of an input layer where each of the input nodes has its own MLP with one hidden layer. Originally, GANNs were constructed by interpreting partial residual plots. This method is time consuming and subjective, which may lead to the creation of suboptimal models. Consequently, an automated construction algorithm for GANNs was created and implemented in the SAS statistical language. This system was called AutoGANN and is used to create good GANN models. A number of experiments are conducted on five publicly available data sets to gain insight into the similarities and differences between GANN and MLP models. The data sets include regression and classification tasks. In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the average validation error as model selection criterion are performed. The models created are compared in terms of predictive accuracy, model complexity, comprehensibility, ease of construction and utility. The results show that the choice of model is highly dependent on the problem, as no single model always outperforms the other in terms of predictive accuracy. GANNs may be suggested for problems where interpretability of the results is important. The time taken to construct good MLP models by the modified N2C2S algorithm may be shorter than the time to build good GANN models by the automated construction algorithm.
Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
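The GANN architecture described above, where each input node feeds its own one-hidden-layer MLP and the subnet outputs are summed, is simple to sketch. The code below is a bare illustrative forward pass with made-up parameter shapes; it is not the AutoGANN system or the N2C2S algorithm.

```python
import numpy as np

def gann_forward(x, subnets, bias=0.0):
    """Generalized additive neural network: each input x_j has its own
    small MLP g_j with one hidden layer; the contributions are summed."""
    total = bias
    for xj, (W1, b1, w2, b2) in zip(x, subnets):
        h = np.tanh(W1 * xj + b1)        # hidden layer of the j-th subnet
        total += w2 @ h + b2             # its scalar contribution g_j(x_j)
    return total

rng = np.random.default_rng(0)
n_inputs, n_hidden = 3, 4
subnets = [(rng.standard_normal(n_hidden), rng.standard_normal(n_hidden),
            rng.standard_normal(n_hidden), rng.standard_normal())
           for _ in range(n_inputs)]
print(gann_forward(np.array([0.5, -1.2, 0.3]), subnets))
```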
APA, Harvard, Vancouver, ISO, and other styles
17

Devouge, Claire. "Quelques aspects mathématiques de l'auto-organisation neuronale et des perceptrons multicouches." Paris 11, 1992. http://www.theses.fr/1992PA112268.

Full text
Abstract:
This thesis is devoted to the mathematical study of two types of neural networks and consists of two distinct parts. The first part focuses on the multilayer perceptron, and more particularly on convexity issues for the error function. We show that fairly weak conditions are sufficient to guarantee this convexity for all perceptrons without a hidden layer, whereas convexity becomes impossible for perceptrons with one or more hidden layers, even when taking into account the group of geometric transformations that leave the error invariant. We then propose a new learning algorithm that uses the information obtained, which we tested on the classical problem of printed character recognition. The second part builds on R. Linsker's work on vision. Starting in 1986, Linsker proposed a fairly simple model (based on a Hebbian-type learning rule) that explains the emergence of orientation-selective cells in prenatal vision. We propose a mathematical study of this model: the operator that defines the time evolution of the cortical connections depends on two real parameters and is diagonalizable in an orthonormal basis of eigenfunctions. We show that, depending on the value of these parameters, the eigenfunctions are, up to renormalization, either the two-variable Hermite functions or sums of these Hermite functions. To determine the asymptotic behaviour of the synaptic weight function, we then study the relative position of the eigenvalues as a function of the parameters. Finally, this allows us to derive explicit conditions that force one mode or another (symmetric or not) to become dominant over time.
APA, Harvard, Vancouver, ISO, and other styles
18

PERROTTA, DOMENICO. "Apports de l'analyse bayesienne aux methodes d'apprentissage des perceptrons multi-couches." Lyon, École normale supérieure (sciences), 1997. http://www.theses.fr/1997ENSL0042.

Full text
Abstract:
This thesis is devoted to the theoretical and experimental study of what the Bayesian approach brings to learning methods for multilayer perceptrons. More precisely, we studied the formalism and the methods introduced into the field of neural networks by David MacKay (1992). We were particularly interested in certain theoretical and methodological aspects of this framework that are contested by many connectionist or Bayesian specialists. Moreover, since the place occupied by MacKay's method among the various Bayesian techniques is still little known to connectionists, we also wanted to help fill this gap. A final aspect of the work is the application of these methodologies to real problems. The thesis (i) examined several aspects of Bayesian techniques applied to multilayer perceptrons; (ii) showed how MacKay's techniques make it possible to avoid overfitting and to rank various multilayer perceptron architectures; (iii) compared the performance of Bayesian neural networks with that of certain classifiers well known in statistics; (iv) exhibited numerical difficulties that often prevent the method from being tried on more complex problems; and (v) attempted to outline new research directions.
APA, Harvard, Vancouver, ISO, and other styles
19

Papadopoulos, Georgios. "Theoretical issues and practical considerations concerning confidence measures for multi-layer perceptrons." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/12753.

Full text
Abstract:
The primary aim of this thesis is to study existing confidence measure (CM) methods and assess their practicability and performance in harsh real-world environments. The motivation for this work was a real industrial application - the development of a paper curl prediction system. Curl is an important paper quality parameter that can only be measured after production. The available data were sparse and were known to be corrupted by gross errors. Moreover, it was suspected that data noise was not constant over input space. Three approaches were identified as suitable for use in real-world applications: maximum likelihood (ML), the approximate Bayesian approach and the bootstrap technique. These methods were initially compared using a standard CM performance evaluation method, based on estimating the prediction interval coverage probability (PICP). It was found that the PICP metric can only gauge CM performance as an average over the input space. However, local CM performance is crucial because a CM must associate low confidence with high data noise/low data density regions and high confidence with low noise/high data density regions. Moreover, evaluating local performance could be used to gauge the input-dependency of the noise in the data. For this reason, a new CM evaluation technique was developed to study local CM performance. The new approach, called classification of local uncertainty estimates (CLUES), was then used for a new comparison study, this time in the light of local performance. Three main conclusions were reached: the noise in the curl data was found to have input-dependent variance, the approximate Bayesian approach outperformed the other two in most cases, and the bootstrap technique was found to be inferior to both ML and Bayesian methods for data sets of input-dependent data noise variance.
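The PICP metric mentioned above is simply the fraction of targets that fall inside their predicted intervals; local CM evaluation restricts the same count to a region of input space. A minimal sketch, with illustrative interval widths:

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction interval coverage probability: fraction of targets
    that fall inside their predicted interval [lower, upper]."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return np.mean((y_true >= lower) & (y_true <= upper))

# toy usage: ~90% intervals around noisy predictions
rng = np.random.default_rng(0)
y = rng.standard_normal(1000)
pred = y + 0.1 * rng.standard_normal(1000)
half_width = 1.645 * 0.1                     # ~90% coverage under Gaussian noise
print(picp(y, pred - half_width, pred + half_width))
```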
APA, Harvard, Vancouver, ISO, and other styles
20

Carlsson, Leo. "Using Multilayer Perceptrons as means to predict the end-point temperature in an Electric Arc Furnace." Thesis, KTH, Materialens processteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-182288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Cardoso, Maria Eduarda de Araújo. "Segmentação automática de Expressões Faciais Gramaticais com Multilayer Perceptrons e Misturas de Especialistas." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-25112018-203224/.

Full text
Abstract:
O reconhecimento de expressões faciais é uma área de interesse da ciência da computação e tem sido um atrativo para pesquisadores de diferentes áreas, pois tem potencial para promover o desenvolvimento de diferentes tipos de aplicações. Reconhecer automaticamente essas expressões tem se tornado um objetivo, principalmente na área de análise do comportamento humano. Especialmente para estudo das línguas de sinais, a análise das expressões faciais é importante para a interpretação do discurso, pois é o elemento que permite expressar informação prosódica, suporta o desenvolvimento da estrutura gramatical e semântica da língua, e ajuda na formação de sinais com outros elementos básicos da língua. Nesse contexto, as expressões faciais são chamadas de expressões faciais gramaticais e colaboram na composição no sentido semântico das sentenças. Entre as linhas de estudo que exploram essa temática, está aquela que pretende implementar a análise automática da língua de sinais. Para aplicações com objetivo de interpretar línguas de sinais de forma automatizada, é preciso que tais expressões sejam identificadas no curso de uma sinalização, e essa tarefa dá-se é definida como segmentação de expressões faciais gramaticais. Para essa área, faz-se útil o desenvolvimento de uma arquitetura capaz de realizar a identificação de tais expressões em uma sentença, segmentando-a de acordo com cada tipo diferente de expressão usada em sua construção. Dada a necessidade do desenvolvimento dessa arquitetura, esta pesquisa apresenta: uma análise dos estudos na área para levantar o estado da arte; a implementação de algoritmos de reconhecimento de padrões usando Multilayer Perceptron e misturas de especialistas para a resolução do problema de reconhecimento da expressão facial; a comparação desses algoritmos reconhecedores das expressões faciais gramaticais usadas na concepção de sentenças na Língua Brasileira de Sinais (Libras). A implementação e teste dos algoritmos mostraram que a segmentação automática de expressões faciais gramaticais é viável em contextos dependentes do usuários. Para contextos independentes de usuários, o problema de segmentação de expressões faciais representa um desafio que requer, principalmente, a organização de um ambiente de aprendizado estruturado sobre um conjunto de dados com volume e diversidade maior do que os atualmente disponíveis
The recognition of facial expressions is an area of interest in computer science and has been an attraction for researchers in different fields since it has potential for the development of different types of applications. Automatically recognizing these expressions has become a goal primarily in the area of human behavior analysis. Especially for the study of sign languages, the analysis of facial expressions represents an important factor for the interpretation of discourse, since it is the element that allows expressing prosodic information, supports the development of the grammatical and semantic structure of the language, and eliminates ambiguities between similar signs. In this context, facial expressions are called grammatical facial expressions. These expressions collaborate in the semantic composition of the sentences. Among the lines of study that explore this theme is the one that intends to implement the automatic analysis of sign language. For applications aiming to interpret sign languages in an automated way, such expressions must be identified in the course of signing, and that task is called "segmentation of grammatical facial expressions". For this area, it is useful to develop an architecture capable of identifying such expressions in a sentence, segmenting it according to each different type of expression used in its construction. Given the need to develop this architecture, this research presents: a review of studies already carried out in the area; the implementation of pattern recognition algorithms using Multilayer Perceptrons and mixtures of experts to solve the facial expression recognition problem; and the comparison of these algorithms as recognizers of grammatical facial expressions used in the construction of sentences in Brazilian Sign Language (Libras). The implementation and tests carried out with such algorithms showed that the automatic segmentation of grammatical facial expressions is practicable in user-dependent contexts. Regarding user-independent contexts, this is a challenge which demands the organization of a learning environment structured on datasets bigger and more diversified than those currently available.
APA, Harvard, Vancouver, ISO, and other styles
22

Lehalle, Charles-Albert. "Contrôle non linéaire et Réseaux de neurones formels : les Perceptrons Affines Par morceaux." Paris 6, 2005. https://tel.archives-ouvertes.fr/tel-00009592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

ARAÚJO, Ricardo de Andrade. "Proposta de uma Classe de Perceptrons Híbridos com Aprendizagem baseada em Gradiente Descendente." Universidade Federal de Pernambuco, 2012. https://repositorio.ufpe.br/handle/123456789/2842.

Full text
Abstract:
This work presents a class of hybrid perceptrons based on the principles of mathematical morphology (MM) in the context of lattice theory. The proposed model, called the dilation-erosion-linear perceptron (DELP), consists of a linear combination of non-linear operators (morphological ones, in the lattice-theory sense) and a linear operator (of finite impulse response type), developed in an attempt to overcome the random walk dilemma (RWD) in the problem of financial time series forecasting. To design the DELP (the learning process), a gradient descent method is presented, using ideas from the error back-propagation (BP) algorithm together with a systematic approach to overcome the non-differentiability of the morphological operations of dilation and erosion. In addition, the DELP learning process includes an extra step to adjust the temporal phase distortions that occur in the phase-space reconstruction of temporal phenomena from the financial market. An experimental analysis was conducted with a set of financial time series: the São Paulo Stock Exchange index, the Dow Jones Industrial Average index, the National Association of Securities Dealers Automated Quotation index, the Financial Times and London Stock Exchange 100 index, and the share prices of Bradesco PN, Gol PN, Itaú Unibanco PN, Petrobras PN, Usiminas PNA and Vale PNA. In these experiments, five metrics and an evaluation function were used to measure the predictive performance of the proposed model, and the results achieved surpassed those obtained with techniques well established in the literature.
APA, Harvard, Vancouver, ISO, and other styles
24

Diouf, Daouda. "Méthode mixte d'inversion neuro-variationnelle d'images de la couleur de l'océan : Application aux signaux SeaWIFS au large de l'Afrique de l'Ouest." Paris 6, 2012. http://www.theses.fr/2012PA066181.

Full text
Abstract:
Les capteurs optiques, destinés à observer l’océan depuis l’espace, mesurent le rayonnement solaire réfléchi vers l’espace par le système océan-atmosphère. La réflectance marine intéressante pour l’analyse de l’océan représente en moyenne au plus 10% de la lumière totale reçue par le capteur et est obtenue à l’issue du processus de correction atmosphérique. L’inversion de ce signal marin permet d’obtenir les paramètres géophysiques intéressants pour l’étude de l’océan, tels que la concentration en chlorophylles-a, pigment principal du phytoplancton. En général la difficulté des algorithmes standards de correction atmosphériques réside dans la quantification de l’impact des aérosols présents dans l’atmosphère sur le signal mesuré par le capteur surtout lorsqu’ils sont absorbants. Nous proposons des méthodologies statistico-mathématiques adaptés afin de déterminer les types d’aérosols atmosphériques et leurs épaisseurs optiques et ensuite restituer la couleur de l’océan. Cette méthodologie qui est une combinaison de plusieurs algorithmes neuronaux et d’une optimisation variationnelle est appelée SOM-NV et a été appliquée sur treize années d’observations du capteur SeaWiFS au large de l’Afrique de l’Ouest. Les épaisseurs optiques et les coefficients d’Angström mesurés in-situ (mesures AERONET) ont permis de valider respectivement les épaisseurs optiques et les types d’aérosols obtenues par SOM-NV. D’autre part la méthode est aussi capable de détecter les aérosols absorbants tels que les poussières sahariennes et donne des résultats précis pour les valeurs d'épaisseur optiques supérieures à 0,35, ce qui n'est pas le cas pour le produit standard SeaWiFS
Optical sensors, used to observe the ocean from space, measure the solar radiation reflected back to space by the ocean-atmosphere system. The marine reflectance of interest for the analysis of the ocean represents on average at most 10% of the total light received by the sensor and is obtained at the end of an atmospheric correction process. The inversion of this marine signal provides geophysical parameters of interest for the study of the ocean, such as the chlorophyll-a concentration, a major pigment of phytoplankton. In general the difficulty of standard atmospheric correction algorithms lies in quantifying the impact of aerosols in the atmosphere on the signal measured by the sensor, especially when they are absorbing. We present adapted statistical and mathematical methodologies to determine atmospheric aerosol types and their optical thickness and then retrieve the ocean color. This methodology, which is a combination of several neural network algorithms and a variational optimization, is called SOM-NV and was applied to thirteen years of SeaWiFS observations off West Africa. The aerosol optical thicknesses and Angström coefficients measured in situ (AERONET measurements) were used to validate the aerosol optical thicknesses and aerosol types obtained by SOM-NV. Furthermore the method is also able to detect absorbing aerosols such as Saharan dust and gives accurate results for optical thickness values greater than 0.35, which is not the case for the SeaWiFS standard product.
APA, Harvard, Vancouver, ISO, and other styles
25

Rountree, Nathan. "Initialising neural networks with prior knowledge." University of Otago. Department of Computer Science, 2007. http://adt.otago.ac.nz./public/adt-NZDU20070510.135442.

Full text
Abstract:
This thesis explores the relationship between two classification models: decision trees and multilayer perceptrons. Decision trees carve up databases into box-shaped regions, and make predictions based on the majority class in each box. They are quick to build and relatively easy to interpret. Multilayer perceptrons (MLPs) are often more accurate than decision trees, because they are able to use soft, curved, arbitrarily oriented decision boundaries. Unfortunately MLPs typically require a great deal of effort to determine a good number and arrangement of neural units, and then require many passes through the database to determine a good set of connection weights. The cost of creating and training an MLP is thus hundreds of times greater than the cost of creating a decision tree, for perhaps only a small gain in accuracy. The following scheme is proposed for reducing the computational cost of creating and training MLPs. First, build and prune a decision tree to generate prior knowledge of the database. Then, use that knowledge to determine the initial architecture and connection weights of an MLP. Finally, use a training algorithm to refine the knowledge now embedded in the MLP. This scheme has two potential advantages: a suitable neural network architecture is determined very quickly, and training should require far fewer passes through the data. In this thesis, new algorithms for initialising MLPs from decision trees are developed. The algorithms require just one traversal of a decision tree, and produce four-layer MLPs with the same number of hidden units as there are nodes in the tree. The resulting MLPs can be shown to reach a state more accurate than the decision trees that initialised them, in fewer training epochs than a standard MLP. Employing this approach typically results in MLPs that are just as accurate as standard MLPs, and an order of magnitude cheaper to train.
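A common way to seed an MLP from a tree, in the spirit of the abstract above, is to turn each internal split into one steep sigmoid unit of the first hidden layer. The sketch below (using scikit-learn only to obtain a fitted tree) illustrates that general idea; the thesis' own algorithms build four-layer networks and differ in the details.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_split_init(tree, n_features, sharpness=10.0):
    """Turn each internal split (feature f, threshold t) of a fitted
    decision tree into one sigmoid hidden unit w.x + b, with w a scaled
    indicator of feature f and b = -sharpness * t."""
    t = tree.tree_
    internal = [i for i in range(t.node_count) if t.children_left[i] != -1]
    W = np.zeros((len(internal), n_features))
    b = np.zeros(len(internal))
    for row, node in enumerate(internal):
        W[row, t.feature[node]] = sharpness
        b[row] = -sharpness * t.threshold[node]
    return W, b

X = np.random.default_rng(1).random((200, 4))
y = (X[:, 0] > 0.5).astype(int)
dt = DecisionTreeClassifier(max_depth=3).fit(X, y)
W, b = tree_split_init(dt, n_features=4)
print(W.shape, b.shape)   # one hidden unit per internal split
```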
APA, Harvard, Vancouver, ISO, and other styles
26

Satravaha, Nuttavudh. "Tone classification of syllable-segmented Thai speech based on multilayer perceptron." Morgantown, W. Va. : [West Virginia University Libraries], 2002. http://etd.wvu.edu/templates/showETD.cfm?recnum=2280.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 2002.
Title from document title page. Document formatted into pages; contains v, 130 p. : ill. (some col.). Vita. Includes abstract. Includes bibliographical references (p. 107-118).
APA, Harvard, Vancouver, ISO, and other styles
27

Lehalle, Charles-Albert. "Le contrôle non linéaire par réseaux de neurones formels: les perceptrons affines par morceaux." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2005. http://tel.archives-ouvertes.fr/tel-00009592.

Full text
Abstract:
The goal of this work is to present new results on the use of a particular class of formal neural networks (piecewise affine perceptrons, PAP) in the setting of closed-loop optimal control. The main results obtained are: several properties of PAPs concerning the nature of the functions they can emulate; a constructive representation theorem for continuous piecewise affine functions, which makes it possible to build a PAP explicitly from a collection of affine functions; a series of heuristics for learning the parameters of a perceptron in a closed loop and in an optimal control setting; and theoretical results on the stability of PAPs used as controllers. The last part is devoted to applications of these results to the automatic construction of combustion controllers for car engines, which led to the filing of two patents by Renault.
APA, Harvard, Vancouver, ISO, and other styles
28

Dimopoulos, Ioannis. "La mise en oeuvre des modèles statistiques linéaires et non linéaires en sciences de l'environnement." Toulouse 3, 1997. http://www.theses.fr/1997TOU30096.

Full text
Abstract:
This work studies some approaches, difficulties and questions concerning the different stages of the implementation of a statistical model. It first presents some analyses that give a first idea of the class of models to which the one sought is most likely to belong. It then addresses the problem of selecting a model among several candidates. Two broad classes of selection methods are discussed: the methods of the first class are based on estimating the generalization performance of the model, whereas those of the second are based on certain a priori hypotheses about the complexity of the model whose satisfaction should lead to good generalization performance. We show that the difficulty of the selection methods grows with the approximation capacity of the model. Three types of models are studied. (1) Linear models with recursive parameter estimation, which make it possible to account for non-stationarities; the parameter estimation problem is tackled by several methods and illustrated with an example of streamflow prediction. (2) Multilayer perceptrons, which have a great capacity for approximating strongly non-linear relationships; the problems raised by the choices required for their correct use are discussed, and the reasons why they cannot yet be considered an off-the-shelf tool are explained; the methodology of their application is presented with two examples of modelling real data, showing that they must be used in a statistical context. (3) Local models, which, although locally of simple form, can exhibit a great approximation capacity; their validity and predictive capacity are closely linked to the choice of the parameters defining the extent of the local approximation neighbourhood; a new method for determining a locally variable neighbourhood is proposed, which automatically determines a locally optimal extent and uses the local variations of this extent to characterize the dynamics of the system under study; the validity of the method is assessed on several data sets.
APA, Harvard, Vancouver, ISO, and other styles
29

Shepherd, Adrian John. "Novel second-order techniques and global optimisation methods for supervised training of multi-layer perceptrons." Thesis, University College London (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321662.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

SEPULVEDA, EDUARDO. "Etude de la generalisation dans les perceptrons multi-couche : application a la reconnaissance des formes." Paris 6, 1994. http://www.theses.fr/1994PA066442.

Full text
Abstract:
This thesis deals with neural networks and their application to pattern recognition problems. The lack of mathematical solutions to these problems makes neural networks an alternative to classical solutions, whose main drawbacks are the absence of knowledge of the statistical distributions of the patterns and the high dimensionality of the data. Connectionist methods make it possible to tackle these problems without needing to know the statistics of the problem in advance. We first used neural networks to try to solve two specific problems: the detection of a beacon placed in the environment of a mobile robot in order to help localize it, and writer-independent handwritten character recognition. In both cases, the insufficient number of examples led us to reduce the dimension of the data using specific encodings. The unsatisfactory results obtained led us to conclude that these encodings are not suited to the problem. We then propose a hybrid approach in which neural networks are used to reduce the dimensionality of the data. Classification is carried out by a nearest-neighbour classifier which, through a specific cost function minimized by an extension of classical back-propagation, guides the dimensionality reduction performed by the network. We applied this method to the handwritten character recognition problem, obtaining an improvement in both the average recognition speed and the error rate of the nearest-neighbour classifier. Since the drawback of this method is the computation time required for training, we propose improvements to reduce it.
APA, Harvard, Vancouver, ISO, and other styles
31

Rouleau, Christian. "Perceptron sous forme duale tronquée et variantes." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24492/24492.pdf.

Full text
Abstract:
L’apprentissage automatique fait parti d’une branche de l’intelligence artificielle et est utilisé dans de nombreux domaines en science. Il se divise en trois catégories principales : supervisé, non-supervisé et par renforcement. Ce mémoire de maîtrise portera uniquement sur l’apprentissage supervisé et plus précisément sur la classification de données. Un des premiers algorithmes en classification, le perceptron, fut proposé dans les années soixante. Nous proposons une variante de cet algorithme, que nous appelons le perceptron dual tronqué, qui permet l’arrêt de l’algorithme selon un nouveau critère. Nous comparerons cette nouvelle variante à d’autres variantes du perceptron. De plus, nous utiliserons le perceptron dual tronqué pour construire des classificateurs plus complexes comme les «Bayes Point Machines».
Machine learning is a branch of artificial intelligence and is used in many fields of science. It is divided into three main categories: supervised, unsupervised and reinforcement learning. This master's thesis deals only with supervised learning, and more precisely with the classification of data. One of the first classification algorithms, the perceptron, was proposed in the sixties. We propose a variant of this algorithm, which we call the truncated dual perceptron, which allows the algorithm to stop according to a new criterion. We compare this new variant with other variants of the perceptron. Moreover, we use the truncated dual perceptron to build more complex classifiers such as Bayes Point Machines.
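In the dual form mentioned above, the weight vector is never stored explicitly: it is implicitly w = sum_i alpha_i y_i x_i, where alpha_i counts the mistakes made on training example i. The sketch below shows a plain dual-form perceptron; the truncation criterion proposed in the thesis is not reproduced, and the simple mistake cap used here is only an illustrative stand-in.

```python
import numpy as np

def dual_perceptron(X, y, epochs=10, max_alpha=None):
    """Perceptron in dual form: keep one mistake counter alpha_i per
    training example instead of an explicit weight vector.
    `max_alpha` is a purely illustrative cap, not the thesis' criterion."""
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i, (x, t) in enumerate(zip(X, y)):
            # implicit w = sum_j alpha_j y_j x_j
            score = np.sum(alpha * y * (X @ x))
            if t * score <= 0:
                alpha[i] += 1
                if max_alpha is not None and alpha[i] >= max_alpha:
                    return alpha            # illustrative early stop
    return alpha

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
alpha = dual_perceptron(X, y)
w = (alpha * y) @ X                          # recover the primal weights
print(alpha, w, np.sign(X @ w))
```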
APA, Harvard, Vancouver, ISO, and other styles
32

Bengio, Yoshua. "Connectionist models applied to automatic speech recognition." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Enes, Karen Braga. "Uma abordagem baseada em Perceptrons balanceados para geração de ensembles e redução do espaço de versões." Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/4883.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Recentemente, abordagens baseadas em ensemble de classificadores têm sido bastante exploradas por serem uma alternativa eficaz para a construção de classificadores mais acurados. A melhoria da capacidade de generalização de um ensemble está diretamente relacionada à acurácia individual e à diversidade de seus componentes. Este trabalho apresenta duas contribuições principais: um método ensemble gerado pela combinação de Perceptrons balanceados e um método para geração de uma hipótese equivalente ao voto majoritário de um ensemble. Para o método ensemble, os componentes são selecionados por medidas de diversidade, que inclui a introdução de uma medida de dissimilaridade, e avaliados segundo a média e o voto majoritário das soluções. No caso de voto majoritário, o teste de novas amostras deve ser realizado perante todas as hipóteses geradas. O método para geração da hipótese equivalente é utilizado para reduzir o custo desse teste. Essa hipótese é obtida a partir de uma estratégia iterativa de redução do espaço de versões. Um estudo experimental foi conduzido para avaliação dos métodos propostos. Os resultados mostram que os métodos propostos são capazes de superar, na maior parte dos casos, outros algoritmos testados como o SVM e o AdaBoost. Ao avaliar o método de redução do espaço de versões, os resultados obtidos mostram a equivalência da hipótese gerada com a votação de um ensemble de Perceptrons balanceados.
Recently, ensemble learning theory has received much attention in the machine learning community, since it has been demonstrated to be a great alternative for generating more accurate predictors with higher generalization abilities. The improvement in the generalization performance of an ensemble is directly related to the diversity and accuracy of the individual classifiers. In this work, we present two main contributions: we propose an ensemble method that combines Balanced Perceptrons, and we also propose a method for generating a hypothesis equivalent to the majority voting of an ensemble. Considering the ensemble method, we select the components by using some diversity strategies, which include a dissimilarity measure. We also apply two strategies for combining the individual classifiers' decisions: unweighted majority vote and the average of all components. Considering the majority vote strategy, the set of unseen samples must be evaluated against all generated hypotheses. The method for generating a hypothesis equivalent to the majority voting of an ensemble is applied in order to reduce the cost of the test phase. The hypothesis is obtained by successive reductions of the version space. We conduct an experimental study to evaluate the proposed methods. Reported results show that our methods outperform, in most cases, other classifiers such as SVM and AdaBoost. From the results of the reduction of the version space, we observe that the generated hypothesis is, in fact, equivalent to the majority voting of an ensemble.
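The two combination rules compared above, the average of the components and the unweighted majority vote, can be illustrated with a handful of perceptrons trained from different random initializations. The sketch below is only that illustration; the balancing of the Perceptrons, the diversity-based selection and the version-space reduction of the dissertation are not reproduced.

```python
import numpy as np

def train_perceptron(X, y, rng, epochs=20):
    w = rng.standard_normal(X.shape[1])      # random initialization
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * (w @ x) <= 0:
                w += t * x
    return w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

ensemble = [train_perceptron(X, y, rng) for _ in range(11)]
scores = np.array([X @ w for w in ensemble])              # shape (11, 100)

majority = np.sign(np.sign(scores).sum(axis=0))            # unweighted vote
average = np.sign(scores.mean(axis=0))                     # average of outputs
print("majority-vote accuracy:", np.mean(majority == y))
print("average-combiner accuracy:", np.mean(average == y))
```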
APA, Harvard, Vancouver, ISO, and other styles
34

Béal, Sylvain. "Jeux de machines et réseaux sociaux." Saint-Etienne, 2005. http://www.theses.fr/2005STETT073.

Full text
Abstract:
Un dilemme social est un jeu non-coopératif qui possède un unique équilibre socialement inefficace sur lequel les joueurs encaissent leur gain minmax. Cette thèse propose de contourner un dilemme social en le répétant un nombre fini de fois et en limitant les capacités des joueurs à implémenter des stratégies. Précisément, nous supposons que les joueurs doivent choisir des stratégies calculables par une machine. L'automate est l'une de ces machines. Pour les situations à deux joueurs, nous considérons le jeu du dilemme du prisonnier répété un nombre fini de périodes. Lorsque les ressources des automates des joueurs sont suffisamment limitées, nous caractérisons les gains et la structure des automates à l'équilibre. En particulier, l'issue coopérative socialement efficace est possible. Pour les situations à n>2 joueurs, nous considérons un jeu de formation stratégique de réseaux sociaux. Un dilemme social émerge si le coût de création d'un lien excède le bénéfice direct de ce lien. Nous répétons ce jeu un nombre fini de fois et supposons que les joueurs choisissent des stratégies calculables par un automate. Nous caractérisons les suites de réseaux à l'équilibre et les suites de réseaux socialement optimales. Enfin, nous introduisons une autre machine, le perceptron. Nous montrons que les capacités du perceptron à calculer des stratégies sont différentes de celles de l'automate. Nous revenons sur le jeu du dilemme du prisonnier répété un nombre fini de fois et supposons que les joueurs choisissent des stratégies calculables par un automate et par un perceptron respectivement. Nous déterminons les conditions qui permettent une issue socialement optimale à l'équilibre
A social dilemma is a non-cooperative game that has a single inefficient Nash equilibrium in which players obtain their minmax payoff. This thesis aims at solving a social dilemma by finitely repeating it and by restricting players' abilities to implement strategies. Precisely, we assume that players must choose strategies which can be played by a machine. The automaton is one such machine. For two-player games, we investigate the finitely repeated prisoner's dilemma. When the size of the automata available to players is sufficiently restricted, we characterize the payoffs and the structure of automata at equilibrium. In particular, the cooperative outcome is reachable. For n-player games, we consider a network formation game with consent. A social dilemma appears when the cost of creating a link is larger than its direct benefit. We repeat this game for finitely many stages and assume that players choose strategies that can be played by automata. We characterize the sequences of equilibrium networks and the sequences of efficient networks. Lastly, we introduce another machine, the perceptron. We show that the perceptron and the automaton have different abilities to implement strategies. In the finitely repeated prisoner's dilemma, we assume that players choose strategies that can be played by an automaton and a perceptron respectively. We give conditions which allow for an efficient equilibrium outcome.
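To make the machine-strategy setting concrete, here is a minimal sketch of a finite automaton (tit-for-tat) playing the finitely repeated prisoner's dilemma; the payoff table, the two-state machine and the ten-round horizon are illustrative assumptions, not the bounded machines or equilibrium constructions analysed in the thesis.

```python
# Payoffs for (row, column) actions: C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Automaton:
    """A finite machine: states carry actions, transitions read the opponent's last move."""
    def __init__(self, actions, transitions, start=0):
        self.actions, self.transitions, self.state = actions, transitions, start

    def act(self):
        return self.actions[self.state]

    def update(self, opponent_action):
        self.state = self.transitions[self.state][opponent_action]

def tit_for_tat():
    # Two states: 0 plays C, 1 plays D; always move to the state mirroring the opponent.
    return Automaton(["C", "D"], [{"C": 0, "D": 1}, {"C": 0, "D": 1}])

def play(m1, m2, rounds=10):
    total = [0, 0]
    for _ in range(rounds):
        a1, a2 = m1.act(), m2.act()
        p1, p2 = PAYOFF[(a1, a2)]
        total[0] += p1
        total[1] += p2
        m1.update(a2)
        m2.update(a1)
    return total

print(play(tit_for_tat(), tit_for_tat()))  # sustained cooperation: [30, 30]
```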
APA, Harvard, Vancouver, ISO, and other styles
35

Thomas, Philippe. "Contribution à l'identification de systèmes non linéaires par réseaux de neurones." Nancy 1, 1997. http://docnum.univ-lorraine.fr/public/SCD_T_1997_0030_THOMAS.pdf.

Full text
Abstract:
Cette thèse est consacrée à l'identification de systèmes dynamiques non-linéaires SISO et MISO à l'aide de réseaux de neurones multicouches non-récurrents. Dans un premier temps, une présentation succincte de l'ensemble des méthodes d'identification non-linéaire est effectuée. Nous poursuivons notre étude par un bref historique des réseaux de neurones ainsi que par une présentation des divers modèles neuronaux existants. Une fois posé le cadre de ce travail, nous définissons plus particulièrement l'architecture générale du réseau de neurone retenu, puis nous présentons les algorithmes permettant d'adapter cette architecture générale à un cas précis. Ces choix concernent notamment le vecteur de régression et le nombre de neurones utilisés dans la couche cachée. Notre étude des réseaux de neurones s'effectuant dans le domaine précis de l'identification de systèmes, les liens importants existant entre la modélisation neuronale et les modèles non-linéaires classiques sont démontrés. Les divers critères de validation de modèles non-linéaires utilisables pour les réseaux neuronaux sont alors présentés. Le reste de ce travail est consacré aux diverses difficultés rencontrées lors de l'identification par réseau de neurones. La première difficulté se présente des l'initialisation des poids du réseau. En effet, un mauvais choix des poids initiaux peut conduire à l'obtention d'un minimum local très éloigné du minimum global, à la saturation des neurones de la couche cachée, à une convergence lente. Afin de résoudre ce problème, divers algorithmes ont été proposés et comparés sur divers exemples. Les problèmes de lenteur de convergence, ou même de divergence, peuvent également être dus a l'algorithme d'apprentissage utilisé. Nous proposerons alors un nouvel algorithme permettant de s'affranchir de cette seconde difficulté. Ce nouvel algorithme sera alors comparé à l'algorithme plus classique rpe sur un exemple de simulation. Nous finirons notre étude en nous intéressant au troisième problème qui est posé par la présence de valeurs aberrantes dans les données d'identification. En effet, cette présence de valeurs aberrantes risque de conduire à des biais sur les paramètres. Nous proposons alors divers critères d'apprentissage qui sont robustes aux valeurs aberrantes. Ces critères sont comparés sur des exemples de simulation et des données industrielles réelles
This thesis deals with the identification of dynamical non-linear SISO and MISO systems with multilayer feedforward neural networks. Firstly, a short presentation of non-linear identification methods is proposed and neural networks are reviewed. Secondly, the general architecture of the neural network used is defined more precisely. Some methods are presented to adapt this architecture to a particular case. These methods give the regressors and the number of neurons in the hidden layer. The relationships between neural identification and the most classical non-linear models are then shown. The validation criteria of non-linear models usable for neural identification are presented. Three difficulties encountered in neural identification are investigated in the sequel. The first one is due to the initialisation of the parameters of the network. A bad choice of these initial parameters can lead to local minima very far from the global minimum, to saturation of the hidden neurons, or to slow convergence. Two new algorithms are proposed to solve this problem and compared with others on three different examples. Slow convergence can also be the result of the learning algorithm used. One algorithm is proposed to deal with this second difficulty. This algorithm is compared with the more classical RPE algorithm. This study ends with the third problem, which is posed by the presence of outliers in the identification data set. Indeed, outliers can produce biases in the estimated parameters. Three robust criteria are then proposed and compared with the classical quadratic criterion on a simulation example and on a real industrial data set.
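A hedged sketch of the identification setting: NARX-style regressors are built for a toy nonlinear SISO system and a small tanh MLP is fitted to them. The initialisation schemes, the improved learning algorithm and the robust criteria proposed in the thesis are not reproduced; scikit-learn defaults and a plain squared-error fit stand in, and the simulated system, noise level and regression vector are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)          # assumed input sequence
y = np.zeros(500)
for k in range(2, 500):              # toy nonlinear SISO system (an assumption)
    y[k] = 0.6 * y[k - 1] - 0.1 * y[k - 2] + np.tanh(u[k - 1]) + 0.05 * rng.normal()

# NARX-style regression vector [y(k-1), y(k-2), u(k-1)] -> target y(k).
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
t = y[2:]

model = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                     max_iter=2000, random_state=0).fit(X, t)
rmse = np.sqrt(np.mean((model.predict(X) - t) ** 2))
print("one-step-ahead RMSE:", rmse)
```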
APA, Harvard, Vancouver, ISO, and other styles
36

Barbosa, Itamar Magno. "Estudo das dispersões metrológicas em redes neurais artificiais do tipo Multilayer Perceptrons através da aplicação em curvas de calibração." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-12082010-113757/.

Full text
Abstract:
Este trabalho é um estudo das dispersões metrológicas em aproximações de funções tidas como não conhecidas ou não totalmente conhecidas. A metodologia alternativa para esse fim são as redes neurais artificiais do tipo Multilayer Perceptrons (MLP), aqui utilizadas como aproximadoras de funções. As funções aproximadas são curvas de calibração decorrentes de indicações de instrumentos ou sistemas de medição numa calibração. Essas curvas levam consigo propriedades metrológicas e possuem, neste trabalho, papel de ponte entre os elementos considerados da teoria metrológica e os elementos considerados da teoria da Inteligência Computacional: as Multilayer Perceptrons (MLPs). Uma balança externa de medição de esforços aerodinâmicos e uma Língua Eletrônica (LE), aplicada na medição da concentração de cátions, foram os meios de aplicação dos conceitos dessa metodologia alternativa. As proposições desta tese visam implementar melhorias na exatidão do ajuste das curvas de calibração por meio da consideração dos seguintes fatores: grandezas de influências, incertezas nos Valores Objetivos (VOs), tendência de medição de erros sistemáticos ocultos ou não solvidos e indicadores de desempenho metrológicos. A indicação da qualidade na medição ou a indicação da competência metrológica de um laboratório de calibração é estabelecida pelos valores das incertezas, e a curva de calibração é o ponto de partida para os cálculos desses valores. Visto que o estabelecimento dessa curva é uma das dificuldades para o cálculo das incertezas e a própria curva é uma fonte de incerteza, sua aproximação requer uma cuidadosa e meticulosa metodologia, daí a importância estratégica deste trabalho. As dispersões metrológicas possuem conotação de incertezas nas medições e elas são a base para a determinação de seu valor numérico; assim, os indicadores de desempenho podem representar essas dispersões e a recíproca também é verdadeira: a incerteza padrão pode ser um dos indicadores de desempenho. Sintetizando, nesta tese é mostrado de que forma a teoria da inteligência computacional adentra na teoria da metrologia e vice versa, nas esferas dos elementos aqui considerados.
The present study investigates metrological dispersions in the fitting of partially or totally unknown functions. An alternative method is the application of a Multilayer Perceptron neural network, used here to fit functions. The fitted functions are calibration curves obtained from the indications of measurement systems or instruments during calibration. These curves hold metrological properties and establish a link between elements of metrological theory and elements of computational intelligence theory: the Multilayer Perceptrons. An external balance for aerodynamic forces and moments and an electronic tongue applied to the measurement of cation concentrations were the measurement systems used to apply the concepts of this alternative methodology. This thesis proposes improvements in the accuracy of fitting calibration curves by considering the following factors: influence quantities, uncertainties about target values, the bias of hidden or unresolved systematic errors, and metrological performance functions. The measurement quality indicator, or the laboratory metrological competence indicator, is established by uncertainty values, and the calibration curve is the starting point for the calculation of these values. The establishment of this curve is one of the difficulties in assessing uncertainties, and the curve itself is an uncertainty source. Therefore, a careful and meticulous methodology is necessary in curve approximation, which explains the strategic importance of this work. Metrological dispersions have the connotation of uncertainty in measurements and are the basis for calculating their numerical values; the performance functions can therefore represent metrological dispersions, and the opposite is also true: the standard uncertainty can be a performance function. In synthesis, this thesis demonstrates how computational intelligence theory and metrological theory inform each other, within the elements of these theories discussed in the present study.
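The sketch below illustrates, under simplifying assumptions, the use of an MLP as a calibration-curve approximator, with the residual standard deviation taken as a crude stand-in for a standard uncertainty; the synthetic indication data, the network size and this uncertainty estimate are illustrative and do not reproduce the thesis' treatment of influence quantities or target-value uncertainties.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
reference = np.linspace(0.0, 10.0, 60)                    # assumed reference values
indication = 1.8 * reference + 0.3 * np.sin(reference) + rng.normal(0, 0.05, 60)

# Calibration curve: from the instrument indication back to the reference value.
mlp = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=5000, random_state=0)
mlp.fit(indication.reshape(-1, 1), reference)

residuals = reference - mlp.predict(indication.reshape(-1, 1))
u_cal = residuals.std(ddof=1)        # crude residual-based standard uncertainty
print(f"approximate standard uncertainty of the curve: {u_cal:.3f}")
```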
APA, Harvard, Vancouver, ISO, and other styles
37

Villa-Vialaneix, Nathalie. "Eléments d'apprentissage en statistique fonctionnelle : classification et régression fonctionnelles par réseaux de neurones et Support Vector Machine." Toulouse 2, 2005. http://www.theses.fr/2005TOU20089.

Full text
Abstract:
Dans ce travail, nous présentons d'abord les résultats d'un travail interdisciplinaire dans lequel nous avons utilisé les qualités d'adaptation des perceptrons multi-couches pour la prédiction de cartes géographiques d'occupation du sol. Dans la suite de la thèse, nous nous focalisons sur la généralisation de l'utilisation des réseaux de neurones et des SVM au traitement de données fonctionnelles. Le but est de disposer d'outils non linéaires pour l'étude de ce type de données. Une partie de nos travaux est basée sur une approche semi-paramétrique utilisant une généralisation de la méthode de régression inverse au cadre fonctionnel. Enfin, nous explorons une approche différente par la construction de noyaux pour SVM qui prennent en compte la nature spécifique des données. Dans tous ces travaux, la théorie de l'apprentissage statistique joue un rôle important et nous nous attachons, autant que possible, à expliciter des résultats de convergence des méthodes décrites
In this thesis, we first present the results of an interdisciplinary project in which we use the approximation abilities of multilayer perceptrons in order to predict land cover maps. Subsequently, we focus on the extension of neural networks and of the SVM to functional data analysis. Our purpose is to build non-linear tools for functional data. A part of our work is based on a semi-parametric approach which uses a functional inverse regression method. Then, we present another approach which allows us to build kernels for SVM in order to take into account the functional nature of the data. In this work, statistical learning theory plays a central role and we endeavour, as much as possible, to give consistency results for the methods described.
APA, Harvard, Vancouver, ISO, and other styles
38

Harkouss, Youssef. "Application de réseaux de neurones à la modélisation de composants et de dispositifs microondes non linéaires." Limoges, 1998. http://www.theses.fr/1998LIMO0040.

Full text
Abstract:
The development of a new approach to the modelling of non-linear microwave components and devices, based on a neural network representation, is the central theme of this thesis. The objective of this work is, on the one hand, to create an efficient and fast tool for building high-performing neural models and, on the other hand, to show through practical applications the performance and accuracy of such a behavioural representation. Two types of neural models are considered. The first is based on a multilayer perceptron (MLP) trained by gradient back-propagation (GBP). The multilayer structure of the MLP and the random nature of its initialisation technique make the whole construction process for this type of model time-consuming. The second model relies on a wavelet network (WNN) initialised by a regressor-selection method and trained by the quasi-Newton BFGS (Broyden-Fletcher-Goldfarb-Shanno) method. The combination of the regressor-selection method with BFGS provides an efficient and fast tool for building high-performing neural models. The neural approach, whether based on an MLP or on a wavelet network (WNN), gives good results in terms of model accuracy, efficiency and reliability.
APA, Harvard, Vancouver, ISO, and other styles
39

Djupfeldt, Petter. "Dr. Polopoly - IntelligentSystem Monitoring : An Experimental and Comparative Study ofMultilayer Perceptrons and Random Forests ForError Diagnosis In A Network of Servers." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191557.

Full text
Abstract:
This thesis explores the potential of using machine learning to supervise and diagnose a computer system by comparing how Multilayer Perceptron (MLP) and Random Forest (RF) perform at this task in a controlled environment. The basis of comparison is primarily how accurate they are in their predictions, but some thought is given to how cost effective they are regarding time. The specific system used is a content management system (CMS) called Polopoly. The thesis details how training samples were collected by inserting Java proxies into the Polopoly system in order to time the inter-server method calls. Errors in the system were simulated by limiting individual servers' bandwidth, and a normal use case was simulated through the use of a tool called Grinder. The thesis then delves into the setup of the two algorithms and how the parameters were decided upon, before comparing their final implementations based on their accuracy. The accuracy is noted to be poor, with both being correct roughly 20% of the time, but it is discussed whether there could still be a use case for the algorithms at this level of accuracy. Finally, the thesis concludes that there is no significant difference (p > 0.05) between the MLP and RF accuracies, and in the end suggests that future work should focus either on comparing the algorithms further or on trying to improve the diagnosing of errors in Polopoly.
Denna uppsats utforskar potentialen i att använda maskininlärning för att övervaka och diagnostisera ett datorsystem genom att jämföra hur effektivt Multilayer Perceptron (MLP) respektive Random Forest (RF) gör detta i en kontrollerad miljö. Grunden för jämförelsen är främst hur träffsäkra MLP och RF är i sina klassifieringar, men viss tanke ges också åt hur kostnadseffektiva de är med hänseende till tid. Systemet som används är ett “content management system” (CMS) vid namn Polopoly. Uppsatsen beskriver hur träningsdatan samlades in via Java proxys, som injicerades i Polopoly systemet för att mäta hur lång tid metodanrop mellan servrarna tar. Fel i systemet simulerades genom att begränsa enskilda servrars bandbredd, och normalt användande simulerades med verktyget Grinder. Uppsatsen går sedan in på hur de två algoritmerna användes och hur deras parametrar sattes, innan den fortsätter med att jämföra de två slutgiltiga implementationerna baserat på deras träffsäkerhet. Det noteras att träffsäkerheten är undermålig; både MLP:n och RF:n gissar rätt i ca 20% av fallen. En diskussion förs om det ändå finns en användning för algoritmerna med denna nivå av träffsäkerhet. Slutsatsen dras att det inte finns någon signifikant skillnad (p > 0.05) mellan MLP:ns och RF:ns träffsäkerhet, och avslutningsvis så föreslås det att framtida arbete borde fokusera antingen på att jämföra de två algoritmerna eller på att försöka förbättra feldiagnosiseringen i Polopoly.
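In the spirit of the comparison reported above, the sketch below trains an MLP and a random forest on the same synthetic classification task and checks whether their accuracies differ with a simple McNemar-style count; the Polopoly system, the Java proxy timings and the Grinder workload are of course not reproduced, and the dataset and model settings are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic multi-class stand-in for the server-fault diagnosis data.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

mlp_ok = mlp.predict(X_te) == y_te
rf_ok = rf.predict(X_te) == y_te
print("MLP accuracy:", mlp_ok.mean(), " RF accuracy:", rf_ok.mean())

# McNemar statistic on the discordant predictions (chi-square, 1 dof, no correction);
# values above 3.84 would suggest a difference at the 5% level.
b = np.sum(mlp_ok & ~rf_ok)
c = np.sum(~mlp_ok & rf_ok)
chi2 = (b - c) ** 2 / (b + c) if (b + c) else 0.0
print("McNemar chi2:", chi2)
```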
APA, Harvard, Vancouver, ISO, and other styles
40

Collobert, Ronan. "Algorithmes d'Apprentissage pour grandes bases de données." Paris 6, 2004. http://www.theses.fr/2004PA066063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ferronato, Giuliano. "Intervalos de predição para redes neurais artificiais via regressão não linear." Florianópolis, SC, 2008. http://repositorio.ufsc.br/xmlui/handle/123456789/91675.

Full text
Abstract:
Dissertação (mestrado) - Universidade Federal de Santa Catarina, Centro Tecnológico. Programa de Pós-Graduação em Ciência da Computação.
Este trabalho descreve a aplicação de uma técnica de regressão não linear (mínimos quadrados) para obter predições intervalares em redes neurais artificiais (RNAs). Através de uma simulação de Monte Carlo é mostrada uma maneira de escolher um ajuste de parâmetros (pesos) para uma rede neural, de acordo com um critério de seleção que é baseado na magnitude dos intervalos de predição fornecidos pela rede. Com esta técnica foi possível obter as predições intervalares com amplitude desejada e com probabilidade de cobertura conhecida, de acordo com um grau de confiança escolhido. Os resultados e as discussões associadas indicam ser possível e factível a obtenção destes intervalos, fazendo com que a resposta das redes seja mais informativa e consequentemente aumentando sua aplicabilidade. A implementação computacional está disponível em www.inf.ufsc.br/~dandrade.
This work describes the application of a nonlinear regression technique (least squares) to create prediction intervals for artificial neural networks (ANNs). Through Monte Carlo simulations, a way of choosing the set of parameters (weights) of a neural network is shown, according to a selection criterion based on the magnitude of the prediction intervals provided by the net. With this technique it is possible to obtain prediction intervals with the desired amplitude and with known coverage probability, according to the chosen confidence level. The associated results and discussions indicate that it is possible and feasible to obtain these intervals, thus making the network response more informative and consequently increasing its applicability. The computational implementation is available at www.inf.ufsc.br/~dandrade.
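As a minimal sketch of the idea, a tiny one-hidden-unit "network" y = a·tanh(b·x + c) + d is treated as a nonlinear regression model, fitted by least squares, and a delta-method prediction interval is built from the parameter covariance; the model, data and confidence level are illustrative, and the Monte Carlo weight-selection procedure of the dissertation is not shown.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t

def net(x, a, b, c, d):
    # A one-hidden-unit "network" used as the nonlinear regression model.
    return a * np.tanh(b * x + c) + d

rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 80)
y = net(x, 2.0, 1.5, 0.2, 0.5) + rng.normal(0, 0.1, x.size)   # assumed data

theta, cov = curve_fit(net, x, y, p0=[1.0, 1.0, 0.0, 0.0])
resid = y - net(x, *theta)
s2 = resid @ resid / (x.size - len(theta))                    # residual variance

def grad(x0, eps=1e-6):
    # Finite-difference gradient of the prediction w.r.t. the parameters.
    g = np.zeros(len(theta))
    for i in range(len(theta)):
        dp = theta.copy()
        dp[i] += eps
        g[i] = (net(x0, *dp) - net(x0, *theta)) / eps
    return g

x0 = 1.0
g = grad(x0)
half = t.ppf(0.975, x.size - len(theta)) * np.sqrt(s2 + g @ cov @ g)
print(f"95% prediction interval at x0={x0}: {net(x0, *theta):.3f} +/- {half:.3f}")
```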
APA, Harvard, Vancouver, ISO, and other styles
42

Silva, Carlos Alberto de Albuquerque. "Implementação de uma matriz de neurônios dinamicamente reconfigurável para descrição de topologias de redes neurais artificiais multilayer perceptrons." Universidade Federal do Rio Grande do Norte, 2015. http://repositorio.ufrn.br/handle/123456789/21138.

Full text
Abstract:
Agência Nacional do Petróleo - ANP
As Redes Neurais Artificiais (RNAs), que constituem uma das ramificações da Inteligência Artificial (IA), estão sendo empregadas como solução para vários problemas complexos, existentes nas mais diversas áreas. Para a solução destes problemas torna-se indispensável que sua implementação seja feita em hardware. Em meio as estratégias a serem adotadas e satisfeitas durante a fase de projeto e implementação das RNAs em hardware, as conexões entre os neurônios são as que necessitam de maior atenção. Recentemente, encontram-se RNAs implementadas tanto em circuitos integrados de aplicação específica (Application Specific Integrated Circuits - ASIC) quanto em circuitos integrados, configurados pelo usuário, a exemplo dos Field Programmable Gate Array (FPGAs), que possuem a capacidade de serem reconfigurados parcialmente, em tempo de execução, formando, portanto, um Sistema Parcialmente Reconfigurável (SPR), cujo emprego proporciona diversas vantagens, tais como: flexibilidade na implementação e redução de custos. Tem-se observado um aumento considerado no uso destes dispositivos para a implementação de RNAs. Diante do exposto, propõe-se a implementação de uma matriz de neurônios dinamicamente reconfigurável no FPGA Virtex 6 da Xilinx, descrita em linguagem de hardware e que possa absorver projetos baseados em plataforma de sistemas embarcados, dedicados ao controle distribuído de equipamentos normalmente utilizados na indústria. Propõe-se ainda, que a configuração das topologias das RNAs que possam vir a ser formadas, seja realizada via software.
Artificial Neural Networks (ANNs), one of the branches of Artificial Intelligence (AI), are being employed as a solution to many complex problems existing in several areas. To solve these problems, it is essential that their implementation is done in hardware. Among the strategies to be adopted and satisfied during the design and implementation of ANNs in hardware, the connections between neurons are the ones that need the most attention. Recently, ANNs have been implemented both in Application Specific Integrated Circuits (ASICs) and in integrated circuits configured by the user, such as Field Programmable Gate Arrays (FPGAs), which can be partially reconfigured at runtime, thus forming a Partially Reconfigurable System (SPR) whose use provides several advantages, such as flexibility of implementation and cost reduction. A considerable increase in the use of FPGAs for implementing ANNs has been noted. Given the above, it is proposed to implement an array of reconfigurable neurons for describing topologies of multilayer perceptron (MLP) artificial neural networks in an FPGA, in order to allow feedback and the reuse of neural processors (perceptrons) within the same area of the circuit. A communication network capable of supporting the reuse of artificial neurons is further proposed. The architecture of the proposed system configures various MLP topologies through partial reconfiguration of the FPGA. To allow this flexibility in configuring ANNs, a set of digital components (datapath) and a controller were developed to execute instructions that define each MLP topology.
APA, Harvard, Vancouver, ISO, and other styles
43

Silva, William de Medeiros. "Redes neurais artificiais como ferramenta para prognose de crescimento e melhoramento genético florestal /." Jaboticabal, 2019. http://hdl.handle.net/11449/190673.

Full text
Abstract:
Orientador: Rinaldo Cesar de Paula
Resumo: RESUMO – O eucalipto é a cultura de maior destaque para o setor florestal brasileiro. No entanto, a expansão do setor para áreas com condições climáticas limitantes ao desenvolvimento da cultura e a instabilidade climática atual, são alguns dos fatores que têm comprometido o desenvolvimento desta cultura no país nos últimos anos. Assim, é importante a busca contínua por ferramentas que possibilitem a prognose de crescimento, a seleção de indivíduos e famílias e a análise do comportamento de genótipos de eucalipto frente às variações ambientais de forma cada vez mais acurada. Desta forma, o objetivo geral deste trabalho foi testar o desempenho das Redes Neurais Artificiais (RNA) na modelagem de crescimento de clones de eucalipto, na predição de valores genéticos de indivíduos e famílias, e na seleção quanto à produtividade, estabilidade e adaptabilidade de progênies de Eucalyptus sp. Para a prognose de crescimento foram utilizados dados de 18 clones comerciais de Eucalyptus em diferentes estados do Brasil, e para a estimação de valor genético e análise de produtividade, estabilidade e adaptabilidade foram utilizados dados de testes de progênies de Eucalyptus grandis. Neste trabalho foram testadas diferentes arquiteturas de RNA do tipo múltiplas camadas com o algoritmo de aprendizado de retropropagação do erro e função de ativação do tipo tangente hiperbólica. O modelo desenvolvido para prognose do diâmetro à altura do peito (DAP) de árvores individuais em um local foi capaz de... (Resumo completo, clicar acesso eletrônico abaixo)
Abstract: ABSTRACT – Eucalyptus is one of the most important crops for the Brazilian forest sector. However, the expansion of the sector to areas with climatic conditions limiting the development of the crop and the current climate instability are some of the factors that have compromised the development of this crop in the country in recent years. Thus, it is important to continuously search for tools that allow the prognosis of growth, the selection of individuals and families, and the analysis of the behaviour of eucalyptus genotypes in the face of environmental changes in an increasingly accurate way. The general objective of this work was therefore to test the performance of artificial neural networks (ANN) in modelling the growth of eucalyptus clones, predicting genetic values of individuals and families, and selecting for productivity, stability and adaptability of progenies of Eucalyptus sp. For the prognosis of growth, data from 18 commercial Eucalyptus clones in different states of Brazil were used, and for genetic value estimation and the analysis of productivity, stability and adaptability, data from Eucalyptus grandis progenies were used. In this work, different multilayer ANN architectures were tested with the error back-propagation learning algorithm and the hyperbolic tangent activation function. The model developed for prognosis of the diameter at breast height (DBH) of individual trees at one site was able to maintain good accuracy when applied at other sites. The thre... (Complete abstract click electronic access below)
Doutor
APA, Harvard, Vancouver, ISO, and other styles
44

Coughlin, Michael J., and n/a. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks." Griffith University. School of Applied Psychology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030409.110949.

Full text
Abstract:
The electro-oculogram (EOG) is the most widely used technique for recording eye movements in clinical settings. It is inexpensive, practical, and non-invasive. Use of EOG is usually restricted to horizontal recordings as vertical EOG contains eyelid artefact (Oster & Stern, 1980) and blinks. The ability to analyse two dimensional (2D) eye movements may provide additional diagnostic information on pathologies, and further insights into the nature of brain functioning. Simultaneous recording of both horizontal and vertical EOG also introduces other difficulties into calibration of the eye movements, such as different gains in the two signals, and misalignment of electrodes producing crosstalk. These transformations of the signals create problems in relating the two dimensional EOG to actual rotations of the eyes. The application of an artificial neural network (ANN) that could map 2D recordings into 2D eye positions would overcome this problem and improve the utility of EOG. To determine whether ANNs are capable of correctly calibrating the saccadic eye movement data from 2D EOG (i.e. performing the necessary inverse transformation), the ANNs were first tested on data generated from mathematical models of saccadic eye movements. Multi-layer perceptrons (MLPs) with non-linear activation functions and trained with back propagation proved to be capable of calibrating simulated EOG data to a mean accuracy of 0.33° of visual angle (SE = 0.01). Linear perceptrons (LPs) were only nearly half as accurate. For five subjects performing a saccadic eye movement task in the upper right quadrant of the visual field, the mean accuracy provided by the MLPs was 1.07° of visual angle (SE = 0.01) for EOG data, and 0.95° of visual angle (SE = 0.03) for infrared limbus reflection (IRIS®) data. MLPs enabled calibration of 2D saccadic EOG to an accuracy not significantly different to that obtained with the infrared limbus tracking data.
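A hedged sketch of the calibration idea: a two-channel EOG is simulated as the true horizontal/vertical eye rotations passed through different gains, electrode crosstalk and a mild nonlinearity, and an MLP is trained to recover the angles; the transduction model, noise level and network size are assumptions, not the recording or training protocol of the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
angles = rng.uniform(0, 20, size=(500, 2))       # true (horizontal, vertical) rotations, degrees

mixing = np.array([[1.0, 0.15],                  # assumed channel gains and crosstalk
                   [0.10, 0.80]])
eog = np.tanh(angles / 25.0) @ mixing.T * 25.0   # mildly nonlinear transduction
eog += rng.normal(0, 0.2, eog.shape)             # measurement noise

# Train an MLP to map the two EOG channels back to the two rotation angles.
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                   max_iter=5000, random_state=0).fit(eog, angles)
err = np.linalg.norm(mlp.predict(eog) - angles, axis=1)
print("mean calibration error (degrees):", err.mean())
```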
APA, Harvard, Vancouver, ISO, and other styles
45

Coughlin, Michael J. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365854.

Full text
Abstract:
The electro-oculogram (EOG) is the most widely used technique for recording eye movements in clinical settings. It is inexpensive, practical, and non-invasive. Use of EOG is usually restricted to horizontal recordings as vertical EOG contains eyelid artefact (Oster & Stern, 1980) and blinks. The ability to analyse two dimensional (2D) eye movements may provide additional diagnostic information on pathologies, and further insights into the nature of brain functioning. Simultaneous recording of both horizontal and vertical EOG also introduces other difficulties into calibration of the eye movements, such as different gains in the two signals, and misalignment of electrodes producing crosstalk. These transformations of the signals create problems in relating the two dimensional EOG to actual rotations of the eyes. The application of an artificial neural network (ANN) that could map 2D recordings into 2D eye positions would overcome this problem and improve the utility of EOG. To determine whether ANNs are capable of correctly calibrating the saccadic eye movement data from 2D EOG (i.e. performing the necessary inverse transformation), the ANNs were first tested on data generated from mathematical models of saccadic eye movements. Multi-layer perceptrons (MLPs) with non-linear activation functions and trained with back propagation proved to be capable of calibrating simulated EOG data to a mean accuracy of 0.33° of visual angle (SE = 0.01). Linear perceptrons (LPs) were only nearly half as accurate. For five subjects performing a saccadic eye movement task in the upper right quadrant of the visual field, the mean accuracy provided by the MLPs was 1.07° of visual angle (SE = 0.01) for EOG data, and 0.95° of visual angle (SE = 0.03) for infrared limbus reflection (IRIS®) data. MLPs enabled calibration of 2D saccadic EOG to an accuracy not significantly different to that obtained with the infrared limbus tracking data.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Applied Psychology
Griffith Health
Full Text
APA, Harvard, Vancouver, ISO, and other styles
46

Shao, Hang. "A Fast MLP-based Learning Method and its Application to Mine Countermeasure Missions." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23512.

Full text
Abstract:
In this research, a novel machine learning method is designed and applied to Mine Countermeasure Missions. Similarly to some kernel methods, the proposed approach seeks to compute a linear model from another, higher-dimensional feature space. However, no kernel is used and the feature mapping is explicit. Computation can be done directly in the accessible feature space. In the proposed approach, the feature projection is implemented by constructing a large hidden layer, which differs from the traditional belief that a Multi-Layer Perceptron should be funnel-shaped, with the hidden layer used as a feature extractor. The proposed approach is a general method that can be applied to various problems. It is able to improve the performance of neural-network-based methods and the learning speed of the support vector machine. The classification speed of the proposed approach is also faster than that of kernel machines on the mine countermeasure mission task.
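The core idea lends itself to a short sketch: inputs are projected through one large, fixed random hidden layer so the feature map is explicit, and only a linear model is fitted in that space; the layer width, the random projection and the ridge solver are illustrative assumptions rather than the thesis' exact construction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One large, fixed random "hidden layer": the feature map is explicit and accessible.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, (X.shape[1], 1000))
b = rng.uniform(-1.0, 1.0, 1000)
phi = lambda Z: np.tanh(Z @ W + b)

# Only a linear model is fitted in the projected space.
clf = RidgeClassifier(alpha=1.0).fit(phi(X_tr), y_tr)
print("test accuracy with explicit feature map:", clf.score(phi(X_te), y_te))
```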
APA, Harvard, Vancouver, ISO, and other styles
47

Ignatavičienė, Ieva. "Tiesioginio sklidimo neuroninių tinklų sistemų lyginamoji analizė." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120801_133809-03141.

Full text
Abstract:
Pagrindinis darbo tikslas – atlikti kelių tiesioginio sklidimo neuroninių tinklų sistemų lyginamąją analizę siekiant įvertinti jų funkcionalumą. Šiame darbe apžvelgiama: biologinio ir dirbtinio neuronų modeliai, neuroninių tinklų klasifikacija pagal jungimo konstrukciją (tiesioginio sklidimo ir rekurentiniai neuroniniai tinklai), dirbtinių neuroninių tinklų mokymo strategijos (mokymas su mokytoju, mokymas be mokytojo, hibridinis mokymas). Analizuojami pagrindiniai tiesioginio sklidimo neuroninių tinklų metodai: vienasluoksnis perceptronas, daugiasluoksnis perceptronas realizuotas „klaidos skleidimo atgal” algoritmu, radialinių bazinių funkcijų neuroninis tinklas. Buvo nagrinėjama 14 skirtingų tiesioginio sklidimo neuroninių tinklų sistemos. Programos buvo suklasifikuotos pagal kainą, tiesioginio sklidimo neuroninių tinklo mokymo metodų taikymą, galimybę vartotojui keisti parametrus prieš apmokant tinklą ir techninį programos įvertinimą. Programos buvo įvertintos dešimtbalėje vertinimo sistemoje pagal mokymo metodų įvairumą, parametrų keitimo galimybes, programos stabilumą, kokybę, bei kainos ir kokybės santykį. Aukščiausiu balu įvertinta „Matlab” programa (10 balų), o prasčiausiai – „Sharky NN” (2 balai). Detalesnei analizei pasirinktos keturios programos („Matlab“, „DTREG“, „PathFinder“, „Cortex“), kurios buvo įvertintos aukščiausiais balais, galėjo apmokyti tiesioginio sklidimo neuroninį tinklą daugiasluoksnio perceptrono metodu ir bent dvi radialinių bazinių funkcijų... [toliau žr. visą tekstą]
The main aim is to perform a comparative analysis of several feedforward neural network systems in order to evaluate their functionality. The work presents biological and artificial neuron models, the classification of neural networks according to their connection structure (feedforward and recurrent neural networks), and the training strategies of artificial neural networks (supervised, unsupervised, hybrid). The main feedforward neural network methods are considered: the single-layer perceptron, the multilayer perceptron implemented with the error back-propagation algorithm, and the radial basis function neural network. The work covered 14 different feedforward neural network systems, classified according to price, the training methods they implement, the user's possibility to change parameters before training the network, and a technical evaluation of the program. The programs were rated from 1 to 10 points according to the variety of training methods, the possibilities for changing parameters, stability, quality, and the price-quality ratio. The highest evaluation was awarded to “Matlab” (10 points), the lowest to “Sharky NN” (2 points). Four programs (”Matlab“, “DTREG“, “PathFinder“, ”Cortex“) were selected for a more detailed analysis. The best-evaluated programs were able to train feedforward neural networks with the multilayer perceptron method, and at least two of them radial basis function networks. “Matlab“ and... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
48

Tang, Zibin. "A new design approach for numeric-to-symbolic conversion using neural networks." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4242.

Full text
Abstract:
A new approach is proposed which uses a combination of a Backprop paradigm neural network along with some perceptron processing elements performing logic operations to construct a numeric-to-symbolic converter. The design approach proposed herein is capable of implementing a decision region defined by a multi-dimensional, non-linear boundary surface. By defining a "two-valued" subspace of the boundary surface, a Backprop paradigm neural network is used to model the boundary surface. An input vector is tested by the neural network boundary model (along with perceptron logic gates) to determine whether the incoming vector point is within the decision region or not. Experiments with two qualitatively different kinds of nonlinear surface were carried out to test and demonstrate the design approach.
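A minimal sketch of the numeric-to-symbolic idea, under assumed data: a small network learns a nonlinear (elliptic) decision boundary, and a hard threshold, standing in for the perceptron logic stage, turns the numeric output into the symbol "inside" or "outside"; the two-valued boundary-subspace construction of the thesis is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(1000, 2))
inside = (X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2) < 1.0   # assumed nonlinear decision region

# A small network models the boundary between "inside" and "outside".
net = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    max_iter=3000, random_state=0).fit(X, inside)

def to_symbol(point, threshold=0.5):
    """Hard-threshold the numeric network output, as a perceptron logic stage would."""
    p = net.predict_proba(np.atleast_2d(point))[0, 1]
    return "inside" if p >= threshold else "outside"

print(to_symbol([0.2, 0.3]), to_symbol([1.9, 1.9]))
```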
APA, Harvard, Vancouver, ISO, and other styles
49

Louche, Ugo. "From confusion noise to active learning : playing on label availability in linear classification problems." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4025/document.

Full text
Abstract:
Les travaux présentés dans cette thèse relèvent de l'étude des méthodes de classification linéaires, c'est à dire l'étude de méthodes ayant pour but la catégorisation de données en différents groupes à partir d'un jeu d'exemples, préalablement étiquetés, disponible en amont et appelés ensemble d'apprentissage. En pratique, l'acquisition d'un tel ensemble d'apprentissage peut être difficile et/ou couteux, la catégorisation d'un exemple étant de fait plus ardu que l'obtention de dudit exemple. Cette disparité entre la disponibilité des données et notre capacité à constituer un ensemble d'apprentissage étiqueté a été un des problèmes centraux de l'apprentissage automatique et ce manuscrit s’intéresse à deux solutions usuellement considérées pour contourner ce problème : l'apprentissage en présence de données bruitées et l'apprentissage actif
The works presented in this thesis fall within the general framework of linear classification, that is, the problem of categorizing data into two or more classes based on a training set of labelled data. In practice, though, acquiring labelled examples might prove challenging and/or costly, as data are inherently easier to obtain than to label. Dealing with label scarceness has been a motivational goal in the machine learning literature, and this work discusses two settings related to this problem: learning in the presence of noise and active learning
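As an illustration of one of the two settings mentioned, the sketch below runs pool-based active learning with a linear classifier that repeatedly queries the label of the unlabelled point closest to its current decision boundary; the dataset, seed set, query budget and uncertainty criterion are assumptions, not the algorithms developed in the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Small labelled seed set with both classes; the rest is the unlabelled pool.
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labelled]

clf = LogisticRegression(max_iter=1000)
for _ in range(40):                                      # assumed budget of 40 label queries
    clf.fit(X[labelled], y[labelled])
    margins = np.abs(clf.decision_function(X[pool]))     # small margin = uncertain point
    pick = pool.pop(int(np.argmin(margins)))             # query the most uncertain point
    labelled.append(pick)

clf.fit(X[labelled], y[labelled])
print("accuracy after active queries:", clf.score(X, y))
```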
APA, Harvard, Vancouver, ISO, and other styles
50

Rashidi, Abbas. "Evaluating the performance of machine-learning techniques for recognizing construction materials in digital images." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49122.

Full text
Abstract:
Digital images acquired at construction sites contain valuable information useful for various applications including As-built documentation of building elements, effective progress monitoring, structural damage assessment, and quality control of construction material. As a result there is an increasing need for effective methods to recognize different building materials in digital images and videos. Pattern recognition is a mature field within the area of image processing; however, its application in the area of civil engineering and building construction is only recent. In order to develop any robust image recognition method, it is necessary to choose the optimal machine learning algorithm. To generate a robust color model for building material detection in an outdoor construction environment, a comparative analysis of three generative and discriminative machine learning algorithms, namely, multilayer perceptron (MLP), radial basis function (RBF), and support vector machines (SVMs), is conducted. The main focus of this study is on three classes of building materials: concrete, plywood, and brick. For training purposes a large-size data set including hundreds of images is collected. The comparison study is conducted by implementing necessary algorithms in MATLAB and testing over hundreds of construction-site images. To evaluate the performance of each technique, the results are compared with a manual classification of building materials. In order to better assess the performance of each technique, experiments are conducted by taking pictures under various realistic jobsite conditions, e.g., different ranges of image resolutions, different distance of camera from object, and different types of cameras.
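In the spirit of that comparison, the sketch below trains an MLP and an SVM on simple synthetic colour features for three "materials" and compares their test accuracies; the features, class means and model settings are assumptions, and the RBF network and real construction-site images of the thesis are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Toy "materials" as mean RGB values with per-class spread (concrete, plywood, brick).
means = np.array([[128, 128, 128], [190, 160, 110], [150, 70, 60]], dtype=float)
X = np.vstack([m + rng.normal(0, 20, (300, 3)) for m in means]) / 255.0
y = np.repeat([0, 1, 2], 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
for name, clf in [("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    print(name, "accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```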
APA, Harvard, Vancouver, ISO, and other styles