Academic literature on the topic 'Multi-layer perceptron networks (MLPNs)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-layer perceptron networks (MLPNs).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-layer perceptron networks (MLPNs)"

1

Przybył, Krzysztof, Krzysztof Koszela, Franciszek Adamski, Katarzyna Samborska, Katarzyna Walkowiak, and Mariusz Polarczyk. "Deep and Machine Learning Using SEM, FTIR, and Texture Analysis to Detect Polysaccharide in Raspberry Powders." Sensors 21, no. 17 (August 30, 2021): 5823. http://dx.doi.org/10.3390/s21175823.

Abstract:
In the paper, an attempt was made to use artificial neural networks (ANN) and Fourier transform infrared spectroscopy (FTIR) to identify raspberry powders that differ from each other in the amount and type of polysaccharide. Absorbance spectra (FTIR) were prepared, as were training sets that take into account the structure of microparticles acquired from microscopic images with Scanning Electron Microscopy (SEM). In addition, Multi-Layer Perceptron Networks (MLPNs) fed with a set of texture descriptors (machine learning) and a Convolutional Neural Network (CNN) fed with bitmaps (deep learning) were devised, which is an innovative approach to this issue. The aim of the paper was to create MLPN and CNN models characterized by high classification efficiency, which translates into recognizing microparticles of raspberry powders (and assessing their homogeneity) on the basis of image pixel texture.
2

Rohman, Budiman Putra Asmaur, and Dayat Kurniawan. "Classification of Radar Environment Using Ensemble Neural Network with Variation of Hidden Neuron Number." Jurnal Elektronika dan Telekomunikasi 17, no. 1 (August 31, 2017): 19. http://dx.doi.org/10.14203/jet.v17.19-24.

Abstract:
Target detection is a mandatory task of a radar system, so the radar system performance is mainly determined by its detection rate. Constant False Alarm Rate (CFAR) is a detection algorithm commonly used in radar systems. This method is divided into several approaches, which perform differently in different environments. Therefore, this paper proposes an ensemble neural network classifier with a variation of the hidden neuron number for classifying radar environments. The result of this research supports improving target detection performance in radar systems by developing an adaptive CFAR. A multi-layer perceptron network (MLPN) with a single hidden layer is employed as the structure of the base classifiers. The first step of this research is the evaluation of the hidden neuron number giving the highest classification accuracy and the simplest computation. According to the result of this step, the three best structures are selected to build an ensemble classifier. In the ensemble structure, the outputs of those three MLPNs are collected and a majority vote decides the final classification. The three possible radar environments investigated are homogeneous, multiple-target, and clutter boundary. According to the simulation results, the ensemble MLPN provides a higher detection rate than the conventional single MLPNs. Moreover, in the multiple-target and clutter boundary environments, the proposed method shows its highest performance.
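For readers who want a concrete picture of the ensemble idea summarized above, the following is a minimal sketch (Python, scikit-learn): three single-hidden-layer MLPs with different hidden neuron numbers are trained on the same data and combined by majority vote. The hidden sizes, synthetic features, and class labels are placeholders, not the paper's actual radar data or configuration.

```python
# Hypothetical sketch of an ensemble of single-hidden-layer MLPs combined by majority vote.
# Hidden sizes and the synthetic data are illustrative, not the paper's configuration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

hidden_sizes = [5, 10, 15]  # the three "best" structures would come from a prior evaluation step
members = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000,
                         random_state=0).fit(X, y) for h in hidden_sizes]

def majority_vote(X_new):
    votes = np.stack([m.predict(X_new) for m in members])   # shape: (n_members, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(majority_vote(X[:5]), y[:5])
```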
3

Bologna, Guido. "A Simple Convolutional Neural Network with Rule Extraction." Applied Sciences 9, no. 12 (June 13, 2019): 2411. http://dx.doi.org/10.3390/app9122411.

Abstract:
Classification responses provided by Multi Layer Perceptrons (MLPs) can be explained by means of propositional rules. So far, many rule extraction techniques have been proposed for shallow MLPs, but not for Convolutional Neural Networks (CNNs). To fill this gap, this work presents a new rule extraction method applied to a typical CNN architecture used in Sentiment Analysis (SA). We focus on textual data on which the CNN is trained with “tweets” of movie reviews. Its architecture includes an input layer representing words by “word embeddings”, a convolutional layer, a max-pooling layer, followed by a fully connected layer. Rule extraction is performed on the fully connected layer, with the help of the Discretized Interpretable Multi Layer Perceptron (DIMLP). This transparent MLP architecture allows us to generate symbolic rules by precisely locating axis-parallel hyperplanes. Experiments based on cross-validation emphasize that our approach is more accurate than approaches based on SVMs and decision trees substituted for DIMLPs. Overall, the rules reach high fidelity, and the discriminative n-grams represented in the antecedents explain the classifications adequately. With several test examples we illustrate the n-grams represented in the activated rules; each contributes to the final classification with a certain intensity.
4

Cairns, Graham, and Lionel Tarassenko. "Perturbation Techniques for On-Chip Learning with Analogue VLSI MLPs." Journal of Circuits, Systems and Computers 6, no. 2 (April 1996): 93–113. http://dx.doi.org/10.1142/s0218126696000108.

Abstract:
Microelectronic neural network technology has become sufficiently mature over the past few years that reliable performance can now be obtained from VLSI circuits under carefully controlled conditions (see Refs. 8 or 13 for example). The use of analogue VLSI allows low power, area efficient hardware realisations which can perform the computationally intensive feed-forward operation of neural networks at high speed, making real-time applications possible. In this paper we focus on important issues for the successful operation and implementation of on-chip learning with such analogue VLSI neural hardware, in particular the issue of weight precision. We first review several perturbation techniques which have been proposed to train multi-layer perceptron (MLP) networks. We then present a novel error criterion which performs well on benchmark problems and which allows simple integration of error measurement hardware for complete on-chip learning systems.
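For readers unfamiliar with perturbation learning, the fragment below is a minimal numpy sketch of one common perturbation technique (weight perturbation): each weight is nudged, the resulting change in the error is measured, and the finite-difference estimate drives a gradient-style update. The network size, data, and learning constants are placeholders; this is not the paper's analogue-VLSI implementation or its novel error criterion.

```python
# Generic weight-perturbation sketch (finite-difference gradient estimate), not the
# paper's analogue-VLSI circuits or its proposed error criterion.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))   # tiny 2-3-1 MLP
x, t = np.array([0.5, -1.0]), np.array([1.0])               # one training pattern

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)
    return W2 @ h

def error(W1, W2):
    return float(np.sum((forward(W1, W2, x) - t) ** 2))

eta, delta = 0.05, 1e-3
for W in (W1, W2):                     # perturb every weight in turn
    grad = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        base = error(W1, W2)
        W[idx] += delta                # apply a small perturbation
        grad[idx] = (error(W1, W2) - base) / delta
        W[idx] -= delta                # undo the perturbation
    W -= eta * grad                    # gradient-descent style update

print("error after one perturbation sweep:", error(W1, W2))
```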
5

Suprapto, Suprapto, and Edy Riyanto. "Grape Drying Process Using Machine Vision Based on Multilayer Perceptron Networks." Indonesian Journal of Science and Technology 5, no. 3 (December 1, 2020): 382–94. http://dx.doi.org/10.17509/ijost.v5i3.24991.

Abstract:
This paper proposed a grape drying machine using computer vision and the Multi-layer Perceptron (MLP) method. Computer vision captures images of the grapes on the conveyor, whereas the MLP controls the drying machine and classifies its output. To evaluate the proposed method, grapes are placed on the conveyor of the machine and their images are taken every two minutes. The MLP parameters used to control the drying machine include the dried-grape state, temperature, grape area, motor position, and motion speed. These parameters are used to adjust the appropriate MLP outputs, namely motion control and heater control. Two different temperatures are employed on the machine: 60 and 75°C. The results showed that the grapes could be dried to a similar area of 3800 pixels at the 770th minute at 60°C and at the 410th minute at 75°C. Comparing the two, a similar ratio of 0.64 could be achieved with a time difference of 360 minutes. Indeed, the 75°C setting resulted in faster drying.
6

Geng, Chao, Qingji Sun, and Shigetoshi Nakatake. "Implementation of Analog Perceptron as an Essential Element of Configurable Neural Networks." Sensors 20, no. 15 (July 29, 2020): 4222. http://dx.doi.org/10.3390/s20154222.

Abstract:
The perceptron is an essential element in neural network (NN)-based machine learning; however, the effectiveness of various circuit implementations is rarely demonstrated through chip testing. This paper presents the measured silicon results for analog perceptron circuits fabricated in a 0.6 μm/±2.5 V complementary metal oxide semiconductor (CMOS) process, which are comprised of digital-to-analog converter (DAC)-based multipliers and phase shifters. The measurement results convince us that our implementation attains the correct function and good performance. Furthermore, we propose a multi-layer perceptron (MLP) utilizing the analog perceptron, in which the structure, neurons, and weights can be flexibly configured. The example given is the design of a 2-3-4 MLP circuit with rectified linear unit (ReLU) activation, which consists of 2 input neurons, 3 hidden neurons, and 4 output neurons. Its experimental case shows that the simulated performance achieves a power dissipation of 200 mW, a working frequency range from 0 to 1 MHz, and an error ratio within 12.7%. Finally, to demonstrate the feasibility and effectiveness of our analog perceptron for configuring an MLP, seven more analog-based MLPs designed with the same approach are used to analyze the simulation results with respect to various specifications, and two cases are compared to their digital counterparts with the same structures.
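To make the 2-3-4 structure mentioned above concrete, here is a tiny numpy sketch of the forward pass of a 2-input, 3-hidden, 4-output MLP with ReLU activation; the weights are random placeholders, not values taken from the analog circuit.

```python
# Forward pass of a 2-3-4 MLP with ReLU activation; weights are random placeholders,
# not parameters of the analog perceptron circuit described in the paper.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # 2 inputs -> 3 hidden neurons
W2, b2 = rng.normal(size=(4, 3)), np.zeros(4)   # 3 hidden -> 4 output neurons

def relu(z):
    return np.maximum(z, 0.0)

def mlp_2_3_4(x):
    h = relu(W1 @ x + b1)
    return W2 @ h + b2

print(mlp_2_3_4(np.array([0.2, -0.7])))         # 4 output values
```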
7

Bensaoucha, Saddam, Youcef Brik, Sandrine Moreau, Sid Ahmed Bessedik, and Aissa Ameur. "Induction machine stator short-circuit fault detection using support vector machine." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 40, no. 3 (May 21, 2021): 373–89. http://dx.doi.org/10.1108/compel-06-2020-0208.

Abstract:
Purpose: This paper provides an effective study to detect and locate inter-turn short-circuit faults (ITSC) in a three-phase induction motor (IM) using the support vector machine (SVM). The characteristics extracted from the analysis of the phase shifts between the stator currents and their corresponding voltages are used as inputs to train the SVM. The latter automatically decides on the IM state, either a healthy motor or a short-circuit fault on one of its three phases.
Design/methodology/approach: To evaluate the performance of the SVM, three supervised machine learning algorithms, namely multi-layer perceptron neural networks (MLPNNs), radial basis function neural networks (RBFNNs) and the extreme learning machine (ELM), are used along with the SVM in this study. Thus, all classifiers (SVM, MLPNN, RBFNN and ELM) are tested and the results are compared on the same data set.
Findings: The obtained results showed that the SVM outperforms the MLPNN, RBFNN and ELM in diagnosing the health status of the IM. In particular, the SVM provides excellent performance because it is able to detect a fault of two short-circuited turns (early detection) when the IM is operating under a low load.
Originality/value: The originality of this work is to use the SVM algorithm with the phase shifts between the stator currents and their voltages as inputs to detect and locate the ITSC fault.
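The comparison described above can be pictured with the following scikit-learn sketch: an SVM and an MLP classifier trained and scored on the same feature vectors. The synthetically generated "phase shift" features and class labels are stand-ins, not measurements from an induction motor.

```python
# Illustrative comparison of an SVM and an MLP on the same features; the synthetic
# features stand in for the measured stator current/voltage phase shifts.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=4, random_state=0)  # healthy + 3 faulty phases
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:", svm.score(X_te, y_te))
print("MLP accuracy:", mlp.score(X_te, y_te))
```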
8

Loukeris, Nikolaos, and Iordanis Eleftheriadis. "Further Higher Moments in Portfolio Selection and A Priori Detection of Bankruptcy, Under Multi-layer Perceptron Neural Networks, Hybrid Neuro-genetic MLPs, and the Voted Perceptron." International Journal of Finance & Economics 20, no. 4 (September 1, 2015): 341–61. http://dx.doi.org/10.1002/ijfe.1521.

9

Przybył, Krzysztof, Jolanta Wawrzyniak, Krzysztof Koszela, Franciszek Adamski, and Marzena Gawrysiak-Witulska. "Application of Deep and Machine Learning Using Image Analysis to Detect Fungal Contamination of Rapeseed." Sensors 20, no. 24 (December 19, 2020): 7305. http://dx.doi.org/10.3390/s20247305.

Abstract:
This paper endeavors to evaluate rapeseed samples obtained in storage experiments with different humidity (12% and 16% seed moisture content) and temperature conditions (25 and 30 °C). The samples were characterized by different levels of contamination with filamentous fungi. In order to acquire graphic data, the analysis of the morphological structure of rapeseeds was carried out with the use of microscopy. The acquired database was prepared in order to build training, validation, and test sets. The process of generating a neural model was based on Convolutional Neural Networks (CNN), Multi-Layer Perceptron Networks (MLPN), and Radial Basis Function Networks (RBFN). The classifiers compared were devised in the environments Tensorflow (deep learning) and Statistica (machine learning). As a result, in recognizing mold in rapeseed it was possible to achieve the lowest classification error of 14% on the test set with the use of CNN, compared with an 18% classification error for MLPN and a 21% classification error for RBFN.
10

He, Hao, Jiaxiang Zhao, and Guiling Sun. "Prediction of MoRFs in Protein Sequences with MLPs Based on Sequence Properties and Evolution Information." Entropy 21, no. 7 (June 27, 2019): 635. http://dx.doi.org/10.3390/e21070635.

Abstract:
Molecular recognition features (MoRFs) are one important type of functional region in intrinsically disordered proteins that can undergo a disorder-to-order transition through binding to their interaction partners. Prediction of MoRFs is crucial, as the functions of MoRFs are associated with many diseases and they can therefore become potential drug targets. In this paper, a method of predicting MoRFs is developed based on sequence properties and evolutionary information. To this end, we design two distinct multi-layer perceptron (MLP) neural networks and present a procedure to train them. We develop a preprocessing step which exploits different sizes of sliding windows to capture various properties related to MoRFs. We then use the Bayes rule together with the outputs of the two trained MLP neural networks to predict MoRFs. In comparison to several state-of-the-art methods, the simulation results show that our method is competitive.
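A rough sketch of the kind of fusion step mentioned above: two classifiers emit posterior probabilities for the same residue, and their outputs are combined with Bayes' rule under a conditional-independence assumption. The prior, the placeholder posteriors, and the independence assumption are illustrative choices, not the paper's trained networks or exact combination rule.

```python
# Combining the posterior outputs of two classifiers with Bayes' rule, assuming the two
# feature views are conditionally independent given the class (an illustrative assumption).
import numpy as np

def bayes_fuse(p1, p2, prior):
    """p1, p2: posteriors P(class | view_i) from the two MLPs; prior: P(class)."""
    unnormalized = p1 * p2 / prior
    return unnormalized / unnormalized.sum()

prior = np.array([0.9, 0.1])                 # e.g. non-MoRF vs. MoRF residues
p_net_small_window = np.array([0.6, 0.4])    # placeholder output of MLP #1
p_net_large_window = np.array([0.3, 0.7])    # placeholder output of MLP #2

print(bayes_fuse(p_net_small_window, p_net_large_window, prior))
```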

Dissertations / Theses on the topic "Multi-layer perceptron networks (MLPNs)"

1

Tran-Canh, Dung. "Simulating the flow of some non-Newtonian fluids with neural-like networks and stochastic processes." University of Southern Queensland, Faculty of Engineering and Surveying, 2004. http://eprints.usq.edu.au/archive/00001518/.

Abstract:
The thesis reports a contribution to the development of neural-like network-based element-free methods for the numerical simulation of some non-Newtonian fluid flow problems. The numerical approximation of functions and solution of the governing partial differential equations are mainly based on radial basis function networks. The resultant micro-macroscopic approaches do not require any element-based discretisation and only rely on a set of unstructured collocation points, and hence are truly meshless or element-free. The development of the present methods begins with the use of multi-layer perceptron networks (MLPNs) and radial basis function networks (RBFNs) to effectively eliminate the volume integrals in the integral formulation of fluid flow problems. An adaptive velocity gradient domain decomposition (AVGDD) scheme is incorporated into the computational algorithm. As a result, an improved feed-forward neural network boundary-element-only method (FFNN-BEM) is created and verified. The present FFNN-BEM successfully simulates the flow of several Generalised Newtonian Fluids (GNFs), including the Carreau, Power-law and Cross models. To the best of the author's knowledge, the present FFNN-BEM is the first to achieve convergence for difficult flow situations when the power-law indices are very small (as small as 0.2). Although some elements are still used to discretise the governing equations, this is done only on the boundary of the analysis domain, and the experience gained in the development of element-free approximation in the domain provides valuable skills for the progress towards an element-free approach. A least-squares collocation RBFN-based mesh-free method is then developed for solving the governing PDEs. This method is coupled with the stochastic simulation technique (SST), forming the mesoscopic approach for analysing viscoelastic fluid flows. The velocity field is computed from the RBFN-based mesh-free method (macroscopic component) and the stress is determined by the SST (microscopic component). Thus the SST removes a limitation of traditional macroscopic approaches, since closed-form constitutive equations are not necessary in the SST. In this mesh-free method, each of the unknowns in the conservation equations is represented by a linear combination of weighted radial basis functions, and hence the unknowns are converted from physical variables (e.g. velocity, stresses, etc.) into network weights through the application of the general linear least squares principle and a point collocation procedure. Depending on the type of RBFs used, a number of parameters will influence the performance of the method. These parameters include the centres in the case of thin plate spline RBFNs (TPS-RBFNs), and the centres and the widths in the case of multi-quadric RBFNs (MQ-RBFNs). A further improvement of the approach is achieved when the Eulerian SST is formulated via Brownian configuration fields (BCF) in place of the Lagrangian SST. The SST is made more efficient with the inclusion of a control variate variance reduction scheme, which allows for a reduction of the number of dumbbells used to model the fluid. A highly parallelised algorithm, at both macro and micro levels, incorporating a domain decomposition technique, is implemented to handle larger problems.
The approach is verified and used to simulate the flow of several model dilute polymeric fluids (the Hookean, FENE and FENE-P models) in simple as well as non-trivial geometries, including shear flows (transient Couette and Poiseuille flows), elongational flows (4:1 and 10:1 abrupt contraction flows) and lid-driven cavity flows.
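As a small illustration of the least-squares RBF collocation idea used in the thesis, the sketch below approximates a 1-D function as a linear combination of multiquadric RBFs, converting the unknown function values into network weights via a general linear least-squares fit over collocation points. The centres, width, and target function are arbitrary example choices, not the thesis' flow problems.

```python
# Least-squares fit of a 1-D function with multiquadric RBFs; centres, width and the
# target function are arbitrary illustrative choices.
import numpy as np

def multiquadric(r, width):
    return np.sqrt(r**2 + width**2)

x = np.linspace(0.0, 1.0, 50)                 # collocation points
f = np.sin(2 * np.pi * x)                     # function to approximate
centres = np.linspace(0.0, 1.0, 12)
width = 0.1

Phi = multiquadric(np.abs(x[:, None] - centres[None, :]), width)   # design matrix
weights, *_ = np.linalg.lstsq(Phi, f, rcond=None)                  # general linear least squares

approx = Phi @ weights
print("max abs error:", np.max(np.abs(approx - f)))
```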
2

Zheng, Gonghui. "Design and evaluation of a multi-output-layer perceptron." Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338195.

3

Vural, Hulya. "Comparison Of Rough Multi Layer Perceptron And Rough Radial Basis Function Networks Using Fuzzy Attributes." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605293/index.pdf.

Abstract:
The hybridization of the soft computing methods of Radial Basis Function (RBF) neural networks, Multi Layer Perceptron (MLP) neural networks with back-propagation learning, fuzzy sets, and rough sets is studied in the scope of this thesis. Conventional MLP, conventional RBF, fuzzy MLP, fuzzy RBF, rough fuzzy MLP, and rough fuzzy RBF networks are compared. In the fuzzy neural networks implemented in this thesis, the input data and the desired outputs are given fuzzy membership values for the fuzzy properties "low", "medium" and "high". In the rough fuzzy MLP, initial weights and a near-optimal number of hidden nodes are estimated using rough dependency rules. A rough fuzzy RBF structure similar to the rough fuzzy MLP is proposed. The rough fuzzy RBF was inspected to determine whether dependencies like the ones in the rough fuzzy MLP can be concluded.
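To illustrate the fuzzy input encoding mentioned in the abstract, the snippet below maps a numeric feature to membership values for "low", "medium" and "high" using triangular membership functions. The breakpoints are arbitrary examples, not the thesis' actual fuzzification parameters.

```python
# Triangular membership functions for the fuzzy properties "low", "medium", "high".
# The breakpoints are illustrative; the thesis derives its fuzzification differently.
import numpy as np

def triangular(x, a, b, c):
    """Membership that rises from a, peaks at b, and falls to zero at c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify(x, lo=0.0, mid=0.5, hi=1.0):
    return {
        "low": triangular(x, lo - (mid - lo), lo, mid),
        "medium": triangular(x, lo, mid, hi),
        "high": triangular(x, mid, hi, hi + (hi - mid)),
    }

print(fuzzify(np.array([0.1, 0.5, 0.9])))
```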
4

Dlugosz, Stephan. "Multi-layer perceptron networks for ordinal data analysis: order independent online learning by sequential estimation." Berlin: Logos, 2008. http://d-nb.info/990567311/04.

5

McGarry, Kenneth J. "Rule extraction and knowledge transfer from radial basis function neural networks." Thesis, University of Sunderland, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391744.

6

Valmiki, Geetha Charan, and Akhil Santosh Tirupathi. "Performance Analysis Between Combinations of Optimization Algorithms and Activation Functions used in Multi-Layer Perceptron Neural Networks." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20204.

Abstract:
Background: Artificial neural networks are motivated by the biological nervous system and can be used for classification and forecasting of data. Each neural node contains an activation function, which can be used for solving non-linear problems, and an optimization function to minimize the loss and give more accurate results. Neural networks are thriving in the field of machine learning, which inspired this study to analyse the performance variation resulting from different combinations of activation functions and optimization algorithms, in terms of accuracy and recall, as well as the impact of data-set features on the performance of the neural networks.
Objectives: This study conducts an experiment to analyse which combinations perform well and give better results, and to see the impact of feature segregation in the data set on the model performance.
Methods: The process involves gathering the data sets, activation functions, and optimization algorithms; executing the network model using 7x5 different combinations of activation functions and optimization algorithms; and analysing the performance of the neural networks. These models are also tested on the same data set with some features discarded to determine the effect on performance.
Results: All the metrics for evaluating the neural networks are presented in separate tables and graphs that show the rise and fall of each activation function when associated with different optimization functions. The impact of individual features on the performance of the neural network is also represented.
Conclusions: Out of the 35 combinations, those made from the optimization algorithms Adam, RMSprop, and Adagrad and the activation functions ReLU, Softplus, Tanh, Sigmoid, and Hard_Sigmoid are selected based on the performance evaluation; the data also has an impact on the performance of the combinations, which is likewise evaluated through experimentation. Individual features have their corresponding effect on the neural network.
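An experimental grid of this kind can be set up along the following lines, shown here as a hedged tf.keras sketch: a reduced set of activation functions and optimizers is swept over a placeholder binary data set. The thesis' full 7x5 grid, its data sets, and its metrics are larger than what is shown.

```python
# Sketch of sweeping activation-function / optimizer combinations with tf.keras.
# The data set, layer sizes and the reduced grid below are placeholders.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")          # toy binary target

activations = ["relu", "tanh", "sigmoid"]              # subset of the 7 studied
optimizers = ["adam", "rmsprop", "adagrad"]            # subset of the 5 studied

for act in activations:
    for opt in optimizers:
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(10,)),
            tf.keras.layers.Dense(16, activation=act),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])
        hist = model.fit(X, y, epochs=5, verbose=0, validation_split=0.2)
        print(act, opt, round(hist.history["val_accuracy"][-1], 3))
```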
7

Andrade, Kléber de Oliveira. "Sistema neural reativo para o estacionamento paralelo com uma única manobra em veículos de passeio." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18149/tde-21112011-131734/.

Abstract:
Thanks to technological advances in the fields of computer science, embedded electronics and mechatronics, robotics is increasingly present in people's lives. Over the past few decades a great variety of tools and methods were developed in the Mobile Robotics field, e.g. passenger vehicles with smart embedded systems. Such systems help drivers through sensors that acquire information from the surrounding environment and algorithms which process this data and make decisions to perform a task, like parking a car. This work aims to present the studies performed on the development of a smart controller able to park a simulated vehicle in parallel parking spaces, where a single maneuver is enough to enter. To accomplish this, studies involving the modeling of environments, vehicle kinematics and sensors were conducted, which were implemented in a simulated environment developed in C# with Visual Studio 2008. Next, a study about the three stages of parking was carried out, which consist of looking for a slot, positioning the vehicle and maneuvering it. The "S" trajectory was adopted to maneuver the vehicle, since it is well known and highly used in related works found in the literature of this field. The maneuver consists in the correct positioning of two circumferences with the possible steering radius of the vehicle. For this task, a robust controller based on supervised learning using Artificial Neural Networks (ANN) was employed, since this approach has great robustness regarding the presence of noise in the system. This controller receives data from two laser sensors (one attached to the front of the vehicle and the other to the rear), from the odometry and from the inertial orientation sensor. The data acquired from these sensors and the current maneuver stage of the vehicle are the inputs of the controller, which interprets these data and responds to these stimuli correctly in approximately 99% of the cases. The results of the training and simulation were satisfactory, allowing the car controlled by the ANN to correctly park in a parallel slot.
8

Cherif, Aymen. "Réseaux de neurones, SVM et approches locales pour la prévision de séries temporelles." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4003/document.

Abstract:
Time series forecasting has been a widely discussed issue for many years. Researchers from various disciplines have addressed it in several application areas: finance, medical, transportation, etc. In this thesis, we focused on machine learning methods: neural networks and SVMs. We have also been interested in meta-methods to improve predictor performance, and more specifically in local models. In a divide and conquer strategy, the local models perform a clustering over the data sets before different predictors are assigned to each obtained subset. We present in this thesis a new algorithm for recurrent neural networks to use them as local predictors. We also propose two novel clustering techniques suitable for local models. The first is based on Kohonen maps, and the second is based on binary trees.
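A minimal sketch of the "local models" idea summarized above: the training data are clustered first, and a separate predictor is fitted on each cluster; at prediction time the nearest cluster selects the local model. The MLP regressor, k-means with four clusters, and the synthetic data are stand-ins for the thesis' recurrent networks, SVMs, and clustering schemes.

```python
# "Divide and conquer" local modelling: cluster the inputs, then fit one predictor per
# cluster. MLPRegressor is a stand-in for the thesis' recurrent networks / SVMs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 2))
y = np.sin(X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
local_models = {
    c: MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
          .fit(X[kmeans.labels_ == c], y[kmeans.labels_ == c])
    for c in range(4)
}

def predict(X_new):
    clusters = kmeans.predict(X_new)          # pick the local model per sample
    return np.array([local_models[c].predict(x[None, :])[0]
                     for c, x in zip(clusters, X_new)])

print(predict(X[:3]), y[:3])
```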
9

Oliveira, Rogério Campos de. "Aplicação de máquinas de comitê de redes neurais artificiais na solução de um problema inverso em transferência radiativa." Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=1732.

Abstract:
This work is based on the concept of neural network committee machines and has the objective of solving the inverse radiative transfer problem in a one-dimensional, homogeneous, absorbing and isotropically scattering medium. The artificial neural network committee machine aggregates and combines the knowledge acquired by a certain number of specialists, represented here, individually, by each of the artificial neural networks (ANN) that compose the committee machine. The aim is to reach a final result better than the one obtained by any of the artificial neural networks separately, by selecting only those networks that present the best results during the generalization phase and discarding the others, which was done in the present work. Two static models of committee machines are used, based on the ensemble arithmetic average, which differ from each other only in the composition of the output combiner of each committee machine. Using artificial neural network committee machines, estimates are obtained for the radiative transfer parameters, that is, the medium optical thickness, the single scattering albedo and the diffuse reflectivities. Finally, the results obtained with both committee machine models are compared with each other and with those found using artificial neural networks of the multi-layer perceptron (MLP) type in isolation. Here these artificial neural networks are called specialist neural networks. The results show that the technique employed brings improvements in performance and results at a relatively low computational cost.
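The static committee with an ensemble arithmetic average described above can be sketched briefly: several MLP "experts" are trained, the best generalizers are kept, and their outputs are averaged. The synthetic regression data and the expert count are placeholders, not the inverse radiative-transfer problem itself.

```python
# Static committee machine: average the outputs of the best-generalizing MLP experts.
# Synthetic regression data stand in for the inverse radiative-transfer parameters.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(800, 3))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) - 0.5 * X[:, 2]

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
experts = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=3000, random_state=s)
               .fit(X_tr, y_tr)
           for h, s in [(10, 1), (20, 2), (30, 3), (40, 4)]]

# keep only the experts that generalize best on the validation set
scores = [e.score(X_val, y_val) for e in experts]
committee = [e for e, s in zip(experts, scores) if s >= np.median(scores)]

y_hat = np.mean([e.predict(X_val) for e in committee], axis=0)   # ensemble average
print("committee size:", len(committee), "val MSE:", float(np.mean((y_hat - y_val) ** 2)))
```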
10

Börthas, Lovisa, and Sjölander Jessica Krange. "Machine Learning Based Prediction and Classification for Uplift Modeling." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266379.

Abstract:
The desire to model the true gain from targeting an individual for marketing purposes has led to the common use of uplift modeling. Uplift modeling requires the existence of a treatment group as well as a control group, and the objective hence becomes estimating the difference between the success probabilities in the two groups. Efficient methods for estimating the probabilities in uplift models are statistical machine learning methods. In this project the uplift modeling approaches Subtraction of Two Models, Modeling Uplift Directly and the Class Variable Transformation are investigated. The statistical machine learning methods applied are Random Forests and Neural Networks, along with the standard method Logistic Regression. The data is collected from a well-established retail company, and the purpose of the project is thus to investigate which uplift modeling approach and statistical machine learning method yield the best performance given the data used in this project. The variable selection step was shown to be a crucial component in the modeling process, as was the amount of control data in each data set. For the uplift to be successful, the method of choice should be either Modeling Uplift Directly using Random Forests, or the Class Variable Transformation using Logistic Regression. Neural network-based approaches are sensitive to uneven class distributions and hence were not able to obtain stable models given the data used in this project. Furthermore, the Subtraction of Two Models did not perform well due to the fact that each model tended to focus too much on modeling the class in both data sets separately instead of modeling the difference between the class probabilities. The conclusion is hence to use an approach that models the uplift directly, and also to use a large amount of control data in each data set.
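A compact sketch of the Class Variable Transformation mentioned above (scikit-learn, synthetic data): with roughly balanced treatment and control groups, the transformed target Z = Y*T + (1-Y)*(1-T) lets a single classifier estimate uplift as 2*P(Z=1|x) - 1. The data generation and model choices are placeholders, not the thesis' retail data set.

```python
# Class Variable Transformation for uplift modelling with a single logistic regression.
# Assumes treatment/control groups of roughly equal size; data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 5))
treated = rng.integers(0, 2, size=n)                      # ~50/50 treatment assignment
base = 1 / (1 + np.exp(-X[:, 0]))                         # baseline purchase probability
uplift_true = 0.1 * (X[:, 1] > 0)                         # treatment helps when x1 > 0
y = (rng.random(n) < np.clip(base + treated * uplift_true, 0, 1)).astype(int)

z = y * treated + (1 - y) * (1 - treated)                 # transformed target
clf = LogisticRegression(max_iter=1000).fit(X, z)

uplift_hat = 2 * clf.predict_proba(X)[:, 1] - 1           # estimated uplift per individual
print("mean predicted uplift (x1>0 vs x1<=0):",
      uplift_hat[X[:, 1] > 0].mean(), uplift_hat[X[:, 1] <= 0].mean())
```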

Books on the topic "Multi-layer perceptron networks (MLPNs)"

1

Dissertation: Autonomous Construction of Multi Layer Perceptron Neural Networks. Storming Media, 1997.


Book chapters on the topic "Multi-layer perceptron networks (MLPNs)"

1

Shepherd, Adrian J. "Multi-Layer Perceptron Training." In Second-Order Methods for Neural Networks, 1–22. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-4471-0953-2_1.

2

Suresh, Sundaram, Narasimhan Sundararajan, and Ramasamy Savitha. "Fully Complex-valued Multi Layer Perceptron Networks." In Supervised Learning with Complex-valued Neural Networks, 31–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-29491-4_2.

3

Pérez-Miñana, Elena, Peter Ross, and John Hallam. "Multi-layer perceptron design using Delaunay triangulations." In Fuzzy Logic, Neural Networks, and Evolutionary Computation, 188–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61988-7_22.

4

Khoi, Duong Dang, and Yuji Murayama. "Multi-layer Perceptron Neural Networks in Geospatial Analysis." In Progress in Geospatial Analysis, 125–41. Tokyo: Springer Japan, 2012. http://dx.doi.org/10.1007/978-4-431-54000-7_9.

5

Lang, Bernhard. "Monotonic Multi-layer Perceptron Networks as Universal Approximators." In Lecture Notes in Computer Science, 31–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550907_6.

6

Sureddy, Sneha, and Jeena Jacob. "Multi-features Based Multi-layer Perceptron for Facial Expression Recognition System." In Lecture Notes in Networks and Systems, 206–17. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84760-9_19.

7

Suksmono, Andriyan Bayu, and Akira Hirose. "Adaptive Beamforming by Using Complex-Valued Multi Layer Perceptron." In Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003, 959–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44989-2_114.

8

Trunfio, Giuseppe A. "Enhancing Cellular Automata by an Embedded Generalized Multi-layer Perceptron." In Artificial Neural Networks: Biological Inspirations – ICANN 2005, 343–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550822_54.

9

Eleuteri, Antonio, Roberto Tagliaferri, and Leopoldo Milano. "Divergence Projections for Variable Selection in Multi–layer Perceptron Networks." In Neural Nets, 287–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45216-4_32.

10

Mahesh, Vijayalakshmi G. V., Alex Noel Joseph Raj, and P. Arulmozhivarman. "Thermal IR Face Recognition Using Zernike Moments and Multi Layer Perceptron Neural Network (MLPNN) Classifier." In Advances in Intelligent Systems and Computing, 213–22. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60618-7_21.


Conference papers on the topic "Multi-layer perceptron networks (MLPNs)"

1

Motato, Eliot, and Clark Radcliffe. "Recursive Assembly of Multi-Layer Perceptron Neural Networks." In ASME 2014 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/dscc2014-5997.

Abstract:
The objective of this paper is to present a methodology to modularly connect Multi-Layer Perceptron (MLP) neural network models describing static port-based physical behavior. The MLPs considered in this work are characterized by a standard format with a single hidden layer and sigmoidal activation functions. Since every port is defined by an input-output pair, the number of outputs of the proposed neural network format is equal to the number of its inputs. This work extends the Model Assembly Method (MAM), used to connect transfer function models and Volterra models, to multi-layer perceptron neural networks.
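The "standard format" referred to above can be illustrated as follows: a single-hidden-layer, sigmoid-activated MLP whose output dimension equals its input dimension, so two such port-based blocks can be chained. The sketch shows only a naive series interconnection with random placeholder weights, not the paper's actual Model Assembly Method equations.

```python
# A "standard format" MLP block: one sigmoidal hidden layer, as many outputs as inputs.
# Chaining two blocks below is only a naive series connection, not the paper's MAM.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MLPBlock:
    def __init__(self, n_ports, n_hidden, rng):
        self.W1 = rng.normal(size=(n_hidden, n_ports))
        self.W2 = rng.normal(size=(n_ports, n_hidden))
    def __call__(self, u):
        return self.W2 @ sigmoid(self.W1 @ u)    # n_ports inputs -> n_ports outputs

rng = np.random.default_rng(0)
block_a, block_b = MLPBlock(3, 8, rng), MLPBlock(3, 8, rng)
u = np.array([0.1, -0.2, 0.4])
print(block_b(block_a(u)))                       # outputs of A drive the inputs of B
```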
2

Mirjalili, Seyedali, and Ali Safa Sadiq. "Magnetic Optimization Algorithm for training Multi Layer Perceptron." In 2011 IEEE 3rd International Conference on Communication Software and Networks (ICCSN). IEEE, 2011. http://dx.doi.org/10.1109/iccsn.2011.6014845.

3

Karami, A. R., M. Ahmadian Attari, and H. Tavakoli. "Multi Layer Perceptron Neural Networks Decoder for LDPC Codes." In 2009 5th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM). IEEE, 2009. http://dx.doi.org/10.1109/wicom.2009.5303382.

4

Ikuta, Chihiro, Yoko Uwate, and Yoshifumi Nishio. "Investigation of four-layer multi-layer perceptron with glia connections of hidden-layer neurons." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706921.

5

Wang, Zihan, Zhaochun Ren, Chunyu He, Peng Zhang, and Yue Hu. "Robust Embedding with Multi-Level Structures for Link Prediction." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/728.

Abstract:
Knowledge Graph (KG) embedding has become crucial for the task of link prediction. Recent work applies encoder-decoder models to tackle this problem, where an encoder is formulated as a graph neural network (GNN) and a decoder is represented by an embedding method. These approaches enforce embedding techniques with structure information. Unfortunately, existing GNN-based frameworks still confront 3 severe problems: low representational power, stacking in a flat way, and poor robustness to noise. In this work, we propose a novel multi-level graph neural network (M-GNN) to address the above challenges. We first identify an injective aggregate scheme and design a powerful GNN layer using multi-layer perceptrons (MLPs). Then, we define graph coarsening schemes for various kinds of relations, and stack GNN layers on a series of coarsened graphs, so as to model hierarchical structures. Furthermore, attention mechanisms are adopted so that our approach can make predictions accurately even on the noisy knowledge graph. Results on WN18 and FB15k datasets show that our approach is effective in the standard link prediction task, significantly and consistently outperforming competitive baselines. Furthermore, robustness analysis on FB15k-237 dataset demonstrates that our proposed M-GNN is highly robust to sparsity and noise.
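The "injective aggregate scheme" with MLPs mentioned above can be pictured roughly as a GIN-style layer: neighbour features are summed, the scaled self feature is added, and the result is passed through a small MLP. The adjacency matrix, feature sizes, and epsilon below are placeholders, and the sketch omits the authors' graph coarsening and attention components.

```python
# A GIN-style layer: injective sum aggregation of neighbour features followed by an MLP.
# Adjacency, dimensions and epsilon are placeholders; coarsening/attention are omitted.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],      # adjacency of a tiny 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 6))      # node features
eps = 0.1

W1, W2 = rng.normal(size=(6, 16)), rng.normal(size=(16, 6))

def relu(z):
    return np.maximum(z, 0.0)

def gin_layer(A, H):
    aggregated = (1.0 + eps) * H + A @ H         # self term plus summed neighbours
    return relu(aggregated @ W1) @ W2            # two-layer MLP applied node-wise

print(gin_layer(A, H).shape)                     # (4, 6) updated node embeddings
```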
6

Van Efferen, Lennart, and Amr M. T. Ali-Eldin. "A multi-layer perceptron approach for flow-based anomaly detection." In 2017 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, 2017. http://dx.doi.org/10.1109/isncc.2017.8072036.

7

Msiza, Ishmael S., Fulufhelo V. Nelwamondo, and Tshilidzi Marwala. "Water Demand Forecasting Using Multi-layer Perceptron and Radial Basis Functions." In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4370923.

8

Ngwar, Melin, and Jim Wight. "A fully integrated analog neuron for dynamic multi-layer perceptron networks." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280448.

9

Toderean, Roxana. "Classification of Sensorimotor Rhythms Based on Multi-layer Perceptron Neural Networks." In 2020 International Conference on Development and Application Systems (DAS). IEEE, 2020. http://dx.doi.org/10.1109/das49615.2020.9108910.

10

Alsmadi, Mutasem Khalil, Khairuddin Bin Omar, Shahrul Azman Noah, and Ibrahim Almarashdah. "Performance Comparison of Multi-layer Perceptron (Back Propagation, Delta Rule and Perceptron) algorithms in Neural Networks." In 2009 IEEE International Advance Computing Conference (IACC 2009). IEEE, 2009. http://dx.doi.org/10.1109/iadcc.2009.4809024.
