Academic literature on the topic 'Linear perceptrons'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Linear perceptrons.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Linear perceptrons"

1

Bylander, Tom. "Learning Linear Threshold Approximations Using Perceptrons." Neural Computation 7, no. 2 (March 1995): 370–79. http://dx.doi.org/10.1162/neco.1995.7.2.370.

Abstract:
We demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions using perceptrons. The central result is as follows. Suppose there exists a vector w* of n weights (including the threshold) with "accuracy" 1 − α, "average error" η, and "balancing separation" σ, i.e., with probability 1 − α, w* correctly classifies an example x; over examples incorrectly classified by w*, the expected value of |w* · x| is η (the source of inaccuracy does not matter); and over a certain portion of correctly classified examples, the expected value of |w* · x| is σ. Then, with probability 1 − δ, the perceptron achieves accuracy at least 1 − [ε + α(1 + η/σ)] after O(n ε^(−2) σ^(−2) ln(1/δ)) examples.
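For readers unfamiliar with the procedure the abstract builds on, the classic perceptron rule for a linear threshold unit can be sketched as follows. This is a minimal illustration on synthetic separable data; the variable names and toy data are ours, not Bylander's.

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Classic perceptron rule for a linear threshold unit.
    X: (m, n) examples; y: labels in {-1, +1}. A constant 1 is appended
    to each example so the threshold is learned as an extra weight,
    matching the abstract's "n weights (including the threshold)"."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb the threshold
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        updated = False
        for x, t in zip(Xb, y):
            if t * np.dot(w, x) <= 0:           # misclassified (or on the boundary)
                w += t * x                      # rotate the hyperplane toward x
                updated = True
        if not updated:                         # no mistakes in a full pass: converged
            break
    return w

# Toy separable data: class given by the sign of the first coordinate,
# shifted to guarantee a margin so convergence is certain.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
X[:, 0] += 0.5 * y                              # enforce a margin of at least 0.5
w = perceptron_train(X, y)
```

On separable data with a margin, the rule provably stops after finitely many updates; the cited result concerns what happens when no such separator exists.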
2

Alpaydin, E., and M. I. Jordan. "Local linear perceptrons for classification." IEEE Transactions on Neural Networks 7, no. 3 (May 1996): 788–94. http://dx.doi.org/10.1109/72.501737.

3

Barber, D., D. Saad, and P. Sollich. "Test Error Fluctuations in Finite Linear Perceptrons." Neural Computation 7, no. 4 (July 1995): 809–21. http://dx.doi.org/10.1162/neco.1995.7.4.809.

Abstract:
We examine the fluctuations in the test error induced by random, finite training and test sets for the linear perceptron of input dimension n with a spherically constrained weight vector. This variance enables us to address such issues as the partitioning of a data set into a test and training set. We find that the optimal assignment of the test set size scales as n^(2/3).
4

Legenstein, Robert, and Wolfgang Maass. "On the Classification Capability of Sign-Constrained Perceptrons." Neural Computation 20, no. 1 (January 2008): 288–309. http://dx.doi.org/10.1162/neco.2008.20.1.288.

Abstract:
The perceptron (also referred to as McCulloch-Pitts neuron, or linear threshold gate) is commonly used as a simplified model for the discrimination and learning capability of a biological neuron. Criteria that tell us when a perceptron can implement (or learn to implement) all possible dichotomies over a given set of input patterns are well known, but only for the idealized case, where one assumes that the sign of a synaptic weight can be switched during learning. We present in this letter an analysis of the classification capability of the biologically more realistic model of a sign-constrained perceptron, where the signs of synaptic weights remain fixed during learning (which is the case for most types of biological synapses). In particular, the VC-dimension of sign-constrained perceptrons is determined, and a necessary and sufficient criterion is provided that tells us when all 2^m dichotomies over a given set of m patterns can be learned by a sign-constrained perceptron. We also show that uniformity of L1 norms of input patterns is a sufficient condition for full representation power in the case where all weights are required to be nonnegative. Finally, we exhibit cases where the sign constraint of a perceptron drastically reduces its classification capability. Our theoretical analysis is complemented by computer simulations, which demonstrate in particular that sparse input patterns improve the classification capability of sign-constrained perceptrons.
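As a rough, hypothetical illustration of the sign constraint (not the authors' construction, which is theoretical), one can run the perceptron rule while clipping the synaptic weights to stay nonnegative after every update; the toy dichotomy below is realizable with nonnegative weights, so the constrained rule still converges:

```python
import numpy as np

def sign_constrained_perceptron(X, y, epochs=500):
    """Perceptron rule with nonnegative synaptic weights: after each
    update the (non-bias) weights are clipped at zero, so their signs
    stay fixed, mimicking the constraint studied in the paper. This
    projection step is our simplification, not the paper's method."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # bias term stays unconstrained
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x, t in zip(Xb, y):
            if t * np.dot(w, x) <= 0:
                w += t * x
                w[:-1] = np.maximum(w[:-1], 0.0)  # keep synaptic signs fixed (+)
                mistakes += 1
        if mistakes == 0:
            break
    return w

# A dichotomy realizable with nonnegative weights: x0 + x1 > 1,
# with points near the boundary removed to leave a margin.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
X = X[np.abs(X.sum(axis=1) - 1.0) > 0.2]
y = np.where(X.sum(axis=1) > 1.0, 1.0, -1.0)
w = sign_constrained_perceptron(X, y)
```

When the target dichotomy is *not* realizable with the fixed signs, this rule fails no matter how long it runs, which is the reduced capability the paper quantifies.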
5

Yu, Xin, Mian Xie, Li Xia Tang, and Chen Yu Li. "Learning Algorithm for Fuzzy Perceptron with Max-Product Composition." Applied Mechanics and Materials 687-691 (November 2014): 1359–62. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1359.

Abstract:
Fuzzy neural networks are a powerful computational model that integrates fuzzy systems with neural networks, and the fuzzy perceptron is one such network. In this paper, a learning algorithm is proposed for a fuzzy perceptron with max-product composition, whose topological structure is the same as that of conventional linear perceptrons. The inner operations involved in the working process of this fuzzy perceptron are based on max-product logical operations rather than conventional multiplication and summation. To illustrate the finite convergence of the proposed algorithm, some numerical experiments are provided.
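The max-product composition mentioned in the abstract replaces the perceptron's weighted sum with a maximum of products. A one-unit sketch (the threshold and numbers here are our illustrative choices, not the paper's algorithm):

```python
import numpy as np

def max_product_neuron(x, w, theta=0.5):
    """Single fuzzy-perceptron unit with max-product composition:
    the usual dot product is replaced by max_i (w_i * x_i), then
    thresholded. Inputs and weights live in [0, 1]."""
    s = np.max(w * x)              # max-product in place of the dot product
    return 1.0 if s >= theta else 0.0

x = np.array([0.2, 0.9, 0.4])
w = np.array([0.7, 0.6, 0.1])
# products: 0.14, 0.54, 0.04 -> max 0.54 >= 0.5, so the unit fires
print(max_product_neuron(x, w))    # 1.0
```

Because only the single largest product matters, learning rules for such units update weights quite differently from gradient steps on a sum, which is what the cited algorithm addresses.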
6

Shah, J. V., and Chi-Sang Poon. "Linear independence of internal representations in multilayer perceptrons." IEEE Transactions on Neural Networks 10, no. 1 (1999): 10–18. http://dx.doi.org/10.1109/72.737489.

7

Zwietering, P. J., E. H. L. Aarts, and J. Wessels. "EXACT CLASSIFICATION WITH TWO-LAYERED PERCEPTRONS." International Journal of Neural Systems 03, no. 02 (January 1992): 143–56. http://dx.doi.org/10.1142/s0129065792000127.

Abstract:
We study the capabilities of two-layered perceptrons for classifying exactly a given subset. Both necessary and sufficient conditions are derived for subsets to be exactly classifiable with two-layered perceptrons that use the hard-limiting response function. The necessary conditions can be viewed as generalizations of the linear-separability condition of one-layered perceptrons and confirm the conjecture that the capabilities of two-layered perceptrons are more limited than those of three-layered perceptrons. The sufficient conditions show that the capabilities of two-layered perceptrons extend beyond the exact classification of convex subsets. Furthermore, we present an algorithmic approach to the problem of verifying the sufficiency condition for a given subset.
8

Hara, Kazuyuki, and Masato Okada. "Ensemble Learning of Linear Perceptrons: On-Line Learning Theory." Journal of the Physical Society of Japan 74, no. 11 (November 15, 2005): 2966–72. http://dx.doi.org/10.1143/jpsj.74.2966.

9

Frean, Marcus. "The Upstart Algorithm: A Method for Constructing and Training Feedforward Neural Networks." Neural Computation 2, no. 2 (June 1990): 198–209. http://dx.doi.org/10.1162/neco.1990.2.2.198.

Abstract:
A general method for building and training multilayer perceptrons composed of linear threshold units is proposed. A simple recursive rule is used to build the structure of the network by adding units as they are needed, while a modified perceptron algorithm is used to learn the connection strengths. Convergence to zero errors is guaranteed for any boolean classification on patterns of binary variables. Simulations suggest that this method is efficient in terms of the numbers of units constructed, and the networks it builds can generalize over patterns not in the training set.
10

Hamid, Danish, Syed Sajid Ullah, Jawaid Iqbal, Saddam Hussain, Ch Anwar ul Hassan, and Fazlullah Umar. "A Machine Learning in Binary and Multiclassification Results on Imbalanced Heart Disease Data Stream." Journal of Sensors 2022 (September 20, 2022): 1–13. http://dx.doi.org/10.1155/2022/8400622.

Abstract:
In the medical field, predicting the occurrence of heart disease is a significant task. Millions of healthcare-related complexities that have remained unsolved until now can be greatly simplified with the help of machine learning. The proposed study concerns a cardiac disease diagnosis decision support system. An OpenML repository data stream with 1 million instances of heart disease and 14 features is used for this study. After applying preprocessing and feature engineering techniques, machine learning approaches such as random forest, decision trees, gradient boosted trees, linear support vector classifier, logistic regression, one-vs-rest, and multilayer perceptron are used to perform binary and multiclass classification on the data stream. When combined with the Max Abs Scaler technique, the multilayer perceptron performed satisfactorily in both binary (accuracy 94.8%) and multiclass (accuracy 88.2%) classification. Compared with the other binary classification algorithms, the gradient boosted trees delivered the best result (accuracy 95.8%), while multilayer perceptrons did well in multiclass classification. Techniques such as oversampling and undersampling had a negative impact on disease prediction; for this kind of imbalanced data stream, such sampling techniques are not practical. Machine learning methods like multilayer perceptrons and ensembles can be helpful for diagnosing cardiac conditions.
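The scaler-plus-MLP combination the abstract highlights can be sketched with scikit-learn. The snippet below uses a synthetic 14-feature data set as a stand-in for the OpenML heart disease stream, so the accuracy will not match the study's figures:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MaxAbsScaler

# Synthetic stand-in for the 14-feature heart disease stream.
X, y = make_classification(n_samples=1000, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    MaxAbsScaler(),                                   # scales each feature to [-1, 1]
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

MaxAbsScaler is a natural choice before an MLP because it bounds each feature without shifting the data, preserving sparsity where present.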

Dissertations / Theses on the topic "Linear perceptrons"

1

Ferronato, Giuliano. "Intervalos de predição para redes neurais artificiais via regressão não linear." Florianópolis, SC, 2008. http://repositorio.ufsc.br/xmlui/handle/123456789/91675.

Abstract:
Master's dissertation, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação.
This work describes the application of a nonlinear regression technique (least squares) to create prediction intervals for artificial neural networks (ANNs). Through Monte Carlo simulation, it shows a way of choosing the set of parameters (weights) for a neural network according to a selection criterion based on the magnitude of the prediction intervals provided by the network. With this technique it is possible to obtain prediction intervals with the desired amplitude and with known coverage probability, according to the chosen confidence level. The associated results and discussions indicate that obtaining these intervals is possible and feasible, making the network's response more informative and consequently increasing its applicability. The computational implementation is available at www.inf.ufsc.br/~dandrade.
2

Louche, Ugo. "From confusion noise to active learning : playing on label availability in linear classification problems." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4025/document.

Abstract:
The works presented in this thesis fall within the general framework of linear classification, that is, the problem of categorizing data into two or more classes based on a training set of labelled data. In practice, though, acquiring labelled examples may prove challenging and/or costly, as data are inherently easier to obtain than to label. Dealing with label scarcity has been a motivating goal in the machine learning literature, and this work discusses two settings related to this problem: learning in the presence of noise and active learning.
3

Coughlin, Michael J. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks." Griffith University, School of Applied Psychology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030409.110949.

Abstract:
The electro-oculogram (EOG) is the most widely used technique for recording eye movements in clinical settings. It is inexpensive, practical, and non-invasive. Use of EOG is usually restricted to horizontal recordings as vertical EOG contains eyelid artefact (Oster & Stern, 1980) and blinks. The ability to analyse two dimensional (2D) eye movements may provide additional diagnostic information on pathologies, and further insights into the nature of brain functioning. Simultaneous recording of both horizontal and vertical EOG also introduces other difficulties into calibration of the eye movements, such as different gains in the two signals, and misalignment of electrodes producing crosstalk. These transformations of the signals create problems in relating the two dimensional EOG to actual rotations of the eyes. The application of an artificial neural network (ANN) that could map 2D recordings into 2D eye positions would overcome this problem and improve the utility of EOG. To determine whether ANNs are capable of correctly calibrating the saccadic eye movement data from 2D EOG (i.e. performing the necessary inverse transformation), the ANNs were first tested on data generated from mathematical models of saccadic eye movements. Multi-layer perceptrons (MLPs) with non-linear activation functions and trained with back propagation proved to be capable of calibrating simulated EOG data to a mean accuracy of 0.33° of visual angle (SE = 0.01). Linear perceptrons (LPs) were only nearly half as accurate. For five subjects performing a saccadic eye movement task in the upper right quadrant of the visual field, the mean accuracy provided by the MLPs was 1.07° of visual angle (SE = 0.01) for EOG data, and 0.95° of visual angle (SE = 0.03) for infrared limbus reflection (IRIS®) data. MLPs enabled calibration of 2D saccadic EOG to an accuracy not significantly different to that obtained with the infrared limbus tracking data.
4

Manesco, Luis Fernando. "Modelagem de um processo fermentativo por rede Perceptron multicamadas com atraso de tempo." Universidade de São Paulo, 1996. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-22012018-103016/.

Abstract:
Identification and control of dynamic systems using artificial neural networks has been widely investigated by many researchers in recent years, with special attention to applications in nonlinear systems. This work presents a study on the use of a particular type of artificial neural network, a time-delay multilayer perceptron, for state estimation in the fermentative phase of the Reichstein process for vitamin C production. The use of artificial neural networks is justified by problems associated with this phase, such as uncertain and unmeasurable state variables and process nonlinearity, and by the fact that a conventional model covering all phases of the fermentative process is very difficult to obtain. The efficiency of the Levenberg-Marquardt algorithm in accelerating training is also studied, and a comparison is made between the studied artificial neural networks and an extended Kalman filter based on a non-structured model of the fermentative process. The performance of the artificial neural networks is evaluated in terms of a figure of merit based on the mean square error, with consideration given to the type of activation function and the number of units in the hidden layer. The data used for training and validation were obtained from a set of experimental runs interpolated to the desired sampling interval.
5

Power, Phillip David. "Non-linear multi-layer perceptron channel equalisation." Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343086.

6

Bueno, Felipe Roberto 1985. "Perceptrons híbridos lineares/morfológicos fuzzy com aplicações em classificação." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306338.

Abstract:
Advisor: Peter Sussner
Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
Morphological perceptrons (MPs) belong to the class of morphological neural networks (MNNs). These MNNs represent a class of artificial neural networks that perform operations of mathematical morphology (MM) at every node, possibly followed by the application of an activation function. Recall that mathematical morphology was conceived as a theory for processing and analyzing objects (images or signals) by means of other objects called structuring elements. Although initially developed for binary image processing and later extended to gray-scale image processing, mathematical morphology can be conducted very generally in a complete lattice setting. Originally, morphological neural networks employed only certain operations of gray-scale mathematical morphology, namely gray-scale erosion and dilation according to the umbra approach. These operations can be expressed in terms of (additive maximum and additive minimum) matrix-vector products in minimax algebra. Only recently have operations of fuzzy mathematical morphology emerged as aggregation functions of morphological neural networks; in this case, we speak of fuzzy morphological neural networks. Hybrid fuzzy morphological/linear perceptrons were initially designed by generalizing existing morphological/linear perceptrons; that is, they can be defined by a convex combination of a fuzzy morphological part and a linear part. In this master's thesis, we introduce a feedforward artificial neural network representing a hybrid fuzzy morphological/linear perceptron called the fuzzy dilation/erosion/linear perceptron (F-DELP), which has not yet been considered in the literature. Following Pessoa and Maragos' ideas, we apply an appropriate smoothing to overcome the non-differentiability of the fuzzy dilation and erosion operators employed in the proposed F-DELP model. Training is then achieved using a traditional backpropagation algorithm. Finally, we apply the F-DELP model to some well-known classification problems and compare the results with those produced by other classifiers.
Master's degree, Applied Mathematics.
7

Siu, Sammy. "Non-linear adaptive equalization based on a multi-layer perceptron architecture." Thesis, University of Edinburgh, 1991. http://hdl.handle.net/1842/11916.

Abstract:
The subject of this thesis is an original study of the application of the multi-layer perceptron architecture to channel equalization in digital communications systems. Both theoretical analyses and simulations were performed to explore the performance of the perceptron-based equalizer (including the decision feedback equalizer). Topics covered include the factors that affect the performance of the structures: the parameters (learning gain and momentum parameter) in the learning algorithm, the network topology (input dimension, number of neurons, and number of hidden layers), and the power metrics of the error cost function. Based on a geometric hyperplane analysis of the multi-layer perceptron, the results offer valuable insight into the properties and complexity of the network. Comparisons of the bit error rate performance and the dynamic behaviour of the decision boundary of the perceptron-based equalizer with both the optimal non-linear equalizer and the optimal linear equalizer are provided. Through these comparisons, some asymptotic results for the performance of the perceptron-based equalizer are obtained. Furthermore, a comparison of the performance of the perceptron-based equalizer (including the decision feedback equalizer) with the least mean squares linear transversal equalizer (including the decision feedback equalizer) indicates that the former offers a significant reduction in the bit error rate. This is because it can form highly nonlinear decision regions, in contrast with the linear equalizer, which forms only linear decision regions; this linearity limits the performance of the conventional linear equalizer.
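The least mean squares linear transversal equalizer used as the baseline in this comparison can be sketched as follows; the channel taps, step size, and noise level below are our own illustrative choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=2000)       # BPSK training sequence
channel = np.array([1.0, 0.4])                     # toy dispersive channel
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.05 * rng.normal(size=len(received))  # additive noise

n_taps, mu = 5, 0.01                               # filter length, LMS step size
w = np.zeros(n_taps)
for k in range(n_taps, len(received)):
    x = received[k - n_taps:k][::-1]               # tap-delay-line contents, newest first
    e = symbols[k - 1] - w @ x                     # error against the known symbol
    w += mu * e * x                                # LMS weight update

# Decide the last 500 symbols with the converged filter.
decisions = np.array([np.sign(w @ received[k - n_taps:k][::-1])
                      for k in range(1500, 2000)])
ber = np.mean(decisions != symbols[1499:1999])
```

Because the equalizer output is a linear combination of the received samples, its decision boundary is a hyperplane; the thesis's point is that a perceptron-based equalizer can bend that boundary and thus reach lower error rates on nonlinear channels.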
8

Evans, John Thomas. "Investigation of a multi-layer perceptron network to model and control a non-linear system." Thesis, Liverpool John Moores University, 1994. http://researchonline.ljmu.ac.uk/4945/.

Abstract:
This thesis describes the development and implementation of an on-line optimal predictive controller incorporating a neural network model of a non-linear process. The scheme is based on a Multi-Layer Perceptron neural network as a modelling tool for a real non-linear, dual-tank, liquid-level process. A neural network process model is developed and evaluated, first in simulation studies and subsequently on the real process. During the development of the network model, the ability of the network to predict the process output multiple time steps ahead was investigated. This led to investigations into a number of important aspects such as the network topology, training algorithms, period of network training, model validation, and conditioning of the process data. Once the neural network model had been developed, it was included in a predictive control scheme, where an on-line comparison with a conventional three-term controller was undertaken. Improvements in process control performance that can be achieved in practice using a neural control scheme are illustrated. Additionally, insight into the dynamics and stability of the neural control scheme was obtained in a novel application of linear system identification techniques. The research shows that a technique for conditioning the process data, called spread encoding, enabled a neural network to accurately emulate the real process using only process input information, and this allowed accurate multi-step-ahead predictive control to be performed.
9

Samuel, Nikhil J. "Identification of Uniform Class Regions using Perceptron Training." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439307102.

10

Rocha, Fabiano Lopes. "Identificação de sistemas não-lineares multivariáveis usando redes neurais perceptron multicamadas e função de base radial." Biblioteca Digital de Teses e Dissertações da PUC-PR, 2006. http://www.biblioteca.pucpr.br/tede/tde_busca/arquivo.php?codArquivo=450.

Abstract:
Master's dissertation, Pontifícia Universidade Católica do Paraná, Curitiba, 2006. Includes bibliography.
The identification of multivariable nonlinear dynamic systems is important across several areas of engineering. This dissertation presents a study of a methodology based on artificial neural networks for the identification of nonlinear systems.

Books on the topic "Linear perceptrons"

1

Lont, Jerzy B. Analog CMOS Implementation of a Multi-Layer Perceptron with Nonlinear Synapses. Konstanz: Hartung-Gorre, 1994.

2

Hart, Peter E., and David G. Stork, eds. Pattern Classification. 2nd ed. New York: Wiley, 2001.

3

Duda, Richard O., David G. Stork, and Peter E. Hart. Pattern Classification. Wiley & Sons, Incorporated, John, 2022.

4

Duda, Richard O. Pattern Classification. Wiley & Sons, Limited, John, 2013.

5

Duda, Richard O. Pattern Classification. Wiley & Sons, Incorporated, John, 2022.

6

Duda, Richard O., David G. Stork, and Peter E. Hart. Pattern Classification: Solutions Manual. Wiley & Sons, Incorporated, John, 2003.

7

Duda, Richard O., David G. Stork, and Peter E. Hart. Pattern Classification. Wiley & Sons, Incorporated, John, 2009.

8

Duda, Richard O., David G. Stork, and Peter E. Hart. Pattern Classification. Wiley & Sons, Incorporated, John, 2012.


Book chapters on the topic "Linear perceptrons"

1

Bielecki, Andrzej. "Linear Perceptrons." In Models of Neurons and Perceptrons: Selected Problems and Challenges, 111–19. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90140-4_9.

2

Murty, M. N., and Rashmi Raghava. "Linear Discriminant Function." In Support Vector Machines and Perceptrons, 15–25. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41063-0_2.

3

Murty, M. N., and Rashmi Raghava. "Linear Support Vector Machines." In Support Vector Machines and Perceptrons, 41–56. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41063-0_4.

4

Hartono, Pitoyo, and Shuji Hashimoto. "Learning with Ensemble of Linear Perceptrons." In Lecture Notes in Computer Science, 115–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550907_19.

5

Goldberg, Yoav. "From Linear Models to Multi-layer Perceptrons." In Neural Network Methods for Natural Language Processing, 37–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-02165-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lappalainen, Harri, and Antti Honkela. "Bayesian Non-Linear Independent Component Analysis by Multi-Layer Perceptrons." In Advances in Independent Component Analysis, 93–121. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0443-8_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hara, Kazuyuki, Yoichi Nakayama, Seiji Miyoshi, and Masato Okada. "Mutual Learning with Many Linear Perceptrons: On-Line Learning Theory." In Artificial Neural Networks – ICANN 2009, 171–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mousset, E., and A. Faraj. "A Formal Link between Multilayer Perceptrons and a Generalization of Linear Discriminant Analysis." In ICANN ’93, 508. London: Springer London, 1993. http://dx.doi.org/10.1007/978-1-4471-2063-6_134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kryzhanovskiy, Vladimir, Irina Zhelavskaya, and Anatoliy Fonarev. "Vector Perceptron Learning Algorithm Using Linear Programming." In Artificial Neural Networks and Machine Learning – ICANN 2012, 197–204. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33266-1_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lafif Tej, Mohamed, and Stefan Holban. "Determining Optimal Multi-layer Perceptron Structure Using Linear Regression." In Business Information Systems, 232–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20485-3_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Linear perceptrons"

1. Bueno, Felipe Roberto, and Peter Sussner. "Fuzzy Morphological Perceptrons and Hybrid Fuzzy Morphological/Linear Perceptrons." In The 11th International FLINS Conference (FLINS 2014). World Scientific, 2014. http://dx.doi.org/10.1142/9789814619998_0120.

2. Arcadia, Christopher E., Hokchhay Tann, Amanda Dombroski, Kady Ferguson, Shui Ling Chen, Eunsuk Kim, Christopher Rose, Brenda M. Rubenstein, Sherief Reda, and Jacob K. Rosenstein. "Parallelized Linear Classification with Volumetric Chemical Perceptrons." In 2018 IEEE International Conference on Rebooting Computing (ICRC). IEEE, 2018. http://dx.doi.org/10.1109/icrc.2018.8638627.

3. Wu, Yunfeng, Jinming Zhang, Cong Wang, and Sin Chun Ng. "Linear Decision Fusions in Multilayer Perceptrons for Breast Cancer Diagnosis." In 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'05). IEEE, 2005. http://dx.doi.org/10.1109/ictai.2005.82.

4. Hartono, Pitoyo. "Ensemble of Perceptrons with Confidence Measure for Piecewise Linear Decomposition." In 2011 International Joint Conference on Neural Networks (IJCNN 2011 - San Jose). IEEE, 2011. http://dx.doi.org/10.1109/ijcnn.2011.6033282.

5. Hassan, T. A. F., A. El-Shafei, Y. Zeyada, and N. Rieger. "Comparison of Neural Network Architectures for Machinery Fault Diagnosis." In ASME Turbo Expo 2003, collocated with the 2003 International Joint Power Generation Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/gt2003-38450.

Abstract: This paper compares the performance of five neural network architectures in diagnosing machinery faults: perceptrons, linear filters, feed-forward networks, self-organizing maps, and LVQ. The study provides a critical analysis of each network's performance on a test rig with different faults, discussing the success rate in network training and in identifying faults, including unbalance and looseness. The perceptron and LVQ architectures proved superior, achieving 100% diagnosis on the cases presented.

6. Sussner, Peter, and Felipe Roberto Bueno. "Some Experimental Results in Classification Using Hybrid Fuzzy Morphological/Linear Perceptrons." In The 11th International FLINS Conference (FLINS 2014). World Scientific, 2014. http://dx.doi.org/10.1142/9789814619998_0115.

7. Zhang, D., M. Kamel, and M. I. Elmasry. "A Training Approach Based on Linear Separability Analysis for Layered Perceptrons." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374217.

8. Araujo, Ricardo de A., Adriano L. I. Oliveira, and Silvio Meira. "A Learning Process Based on Covariance Matrix Adaptation for Morphological-Linear Perceptrons." In 2013 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2013. http://dx.doi.org/10.1109/cec.2013.6557840.

9. Sussner, Peter, Israel Campiott, and Manuel Alejandro Quispe Torres. "Hybrid Gray-Scale and Fuzzy Morphological/Linear Perceptrons Trained by Extreme Learning Machine." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206886.

10. Goyal, Somya, and Pradeep K. Bhatia. "A Non-Linear Technique for Effective Software Effort Estimation Using Multi-Layer Perceptrons." In 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). IEEE, 2019. http://dx.doi.org/10.1109/comitcon.2019.8862256.