Journal articles on the topic 'Multi-layer perceptrons'

To see the other types of publications on this topic, follow the link: Multi-layer perceptrons.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Multi-layer perceptrons.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Racca, Robert. "Can periodic perceptrons replace multi-layer perceptrons?" Pattern Recognition Letters 21, no. 12 (November 2000): 1019–25. http://dx.doi.org/10.1016/s0167-8655(00)00057-x.
2. García-Pedrajas, N., D. Ortiz-Boyer, and C. Hervás-Martínez. "Cooperative coevolution of generalized multi-layer perceptrons." Neurocomputing 56 (January 2004): 257–83. http://dx.doi.org/10.1016/j.neucom.2003.09.004.
3. Tseng, Yuen-Hsien, and Ja-Ling Wu. "Decoding Reed-Muller codes by multi-layer perceptrons." International Journal of Electronics 75, no. 4 (October 1993): 589–94. http://dx.doi.org/10.1080/00207219308907134.
4. Buchholz, Sven, and Gerald Sommer. "On Clifford neurons and Clifford multi-layer perceptrons." Neural Networks 21, no. 7 (September 2008): 925–35. http://dx.doi.org/10.1016/j.neunet.2008.03.004.
5. Mirzai, A. R., A. Higgins, and D. Tsaptsinos. "Techniques for the minimisation of multi-layer perceptrons." Engineering Applications of Artificial Intelligence 6, no. 3 (June 1993): 265–77. http://dx.doi.org/10.1016/0952-1976(93)90069-a.
6. Egmont-Petersen, Michael, Jan L. Talmon, Arie Hasman, and Anton W. Ambergen. "Assessing the importance of features for multi-layer perceptrons." Neural Networks 11, no. 4 (June 1998): 623–35. http://dx.doi.org/10.1016/s0893-6080(98)00031-8.
7. Muñoz San Roque, Antonio, Carlos Maté, Javier Arroyo, and Ángel Sarabia. "iMLP: Applying Multi-Layer Perceptrons to Interval-Valued Data." Neural Processing Letters 25, no. 2 (February 15, 2007): 157–69. http://dx.doi.org/10.1007/s11063-007-9035-z.
8. Vlachos, D. S. "A Local Supervised Learning Algorithm For Multi-Layer Perceptrons." Applied Numerical Analysis & Computational Mathematics 1, no. 2 (December 2004): 535–39. http://dx.doi.org/10.1002/anac.200410016.
9. Moustafa, Essam B., and Ammar Elsheikh. "Predicting Characteristics of Dissimilar Laser Welded Polymeric Joints Using a Multi-Layer Perceptrons Model Coupled with Archimedes Optimizer." Polymers 15, no. 1 (January 2, 2023): 233. http://dx.doi.org/10.3390/polym15010233.
Abstract: This study investigates the application of a multi-layer perceptron (MLP) model coupled with the Archimedes optimizer (AO) to predict the characteristics of dissimilar lap joints made of polymethyl methacrylate (PMMA) and polycarbonate (PC). The joints were welded using the laser transmission welding (LTW) technique equipped with a beam-wobbling feature. The inputs of the models were laser power, welding speed, pulse frequency, wobble frequency, and wobble width, whereas the outputs were the seam width and shear strength of the joint. The Archimedes optimizer was employed to obtain the optimal internal parameters of the multi-layer perceptrons. In addition to the Archimedes optimizer, the conventional gradient descent technique, as well as the particle swarm optimizer (PSO), was employed as an internal optimizer of the multi-layer perceptron model. The prediction accuracy of the three models was compared using different error measures. The AO-MLP outperformed the other two models. The computed root mean square errors of the MLP, PSO-MLP, and AO-MLP models are 39.798, 19.909, and 2.283 for shear strength and 0.153, 0.084, and 0.0321 for seam width, respectively.
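The pattern this paper follows, training an MLP's weights with a metaheuristic rather than by gradient descent alone, can be illustrated with a short sketch. This is not the authors' AO implementation: the data is synthetic, the network sizes are arbitrary, and a plain random-search loop stands in for the Archimedes optimizer.

```python
import numpy as np

# A minimal sketch of metaheuristic MLP training: pack all weights into one
# flat vector and let a search loop minimise the prediction error. A simple
# random search stands in for the Archimedes optimizer; the data is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                 # 5 process inputs (power, speed, ...)
y = X @ rng.normal(size=(5, 2))              # 2 targets (seam width, shear strength)

n_in, n_hid, n_out = 5, 10, 2
n_w = n_in * n_hid + n_hid * n_out           # biases omitted for brevity

def mlp(w, X):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    return np.tanh(X @ W1) @ W2

def rmse(w):
    return np.sqrt(np.mean((mlp(w, X) - y) ** 2))

best = rng.normal(size=n_w)
for _ in range(2000):                        # the metaheuristic's search loop
    cand = best + 0.1 * rng.normal(size=n_w) # perturb the current best solution
    if rmse(cand) < rmse(best):
        best = cand
print(f"final RMSE: {rmse(best):.3f}")
```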
10. Xi, Yan Hui, and Hui Peng. "Training Multi-Layer Perceptrons with the Unscented Kalman Particle Filter." Advanced Materials Research 542-543 (June 2012): 745–48. http://dx.doi.org/10.4028/www.scientific.net/amr.542-543.745.
Abstract: Many Bayesian learning approaches to multi-layer perceptron (MLP) parameter optimization have been proposed, such as the extended Kalman filter (EKF). In this paper, a sequential approach is applied to train the MLPs. Based on the particle filter, the approach, named the unscented Kalman particle filter (UPF), uses the unscented Kalman filter as the proposal distribution to generate the importance sampling density. The UPF is devised to deal with the high-dimensional parameter space that is inherent to neural network models. Simulation results show that the new algorithm performs better than traditional optimization methods such as the extended Kalman filter.
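To make the particle-filter framing concrete, here is a heavily simplified sketch of sequential Monte Carlo training of a tiny MLP. The paper's UPF uses an unscented Kalman filter as the proposal distribution; this sketch substitutes a plain random-walk (bootstrap) proposal, so it shows only the outer particle-filter loop, and every size and noise level in it is an assumption.

```python
import numpy as np

# A heavily simplified sketch: sequential Monte Carlo over the weights of a
# 1-3-1 MLP. A random-walk (bootstrap) proposal replaces the paper's unscented
# Kalman filter proposal; sizes and noise levels are illustrative assumptions.
rng = np.random.default_rng(0)

def mlp(w, x):                                   # weights packed in a flat vector
    W1, b1 = w[:3].reshape(1, 3), w[3:6]
    W2, b2 = w[6:9].reshape(3, 1), w[9]
    return np.tanh(x @ W1 + b1) @ W2 + b2

n_particles, n_w = 200, 10
particles = rng.normal(size=(n_particles, n_w))  # prior over weight vectors

xs = np.linspace(-1, 1, 50).reshape(-1, 1)       # toy regression data
ys = np.sin(3 * xs) + 0.05 * rng.normal(size=xs.shape)

for x_t, y_t in zip(xs, ys):                     # one observation at a time
    particles += 0.05 * rng.normal(size=particles.shape)    # random-walk proposal
    preds = np.array([mlp(p, x_t[None]) for p in particles]).ravel()
    w = np.exp(-0.5 * ((preds - y_t) / 0.1) ** 2) + 1e-300  # Gaussian likelihood
    w /= w.sum()                                 # normalised importance weights
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

w_hat = particles.mean(axis=0)                   # posterior mean weight estimate
print("final MSE:", float(np.mean((mlp(w_hat, xs) - ys) ** 2)))
```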
11. Ortigosa, E. M., A. Cañas, E. Ros, P. M. Ortigosa, S. Mota, and J. Díaz. "Hardware description of multi-layer perceptrons with different abstraction levels." Microprocessors and Microsystems 30, no. 7 (November 2006): 435–44. http://dx.doi.org/10.1016/j.micpro.2006.03.004.
12. Fry, J., and P. Jennings. "Using Multi-Layer Perceptrons to Predict Vehicle Pass-By Noise." Neural Computing & Applications 11, no. 3-4 (June 1, 2003): 161–67. http://dx.doi.org/10.1007/s00521-003-0354-3.
13. Buchholz, Sven, and Nicolas Le Bihan. "Polarized Signal Classification by Complex and Quaternionic Multi-Layer Perceptrons." International Journal of Neural Systems 18, no. 02 (April 2008): 75–85. http://dx.doi.org/10.1142/s0129065708001403.
Abstract: For polarized signals, which arise in many application fields, a statistical framework in terms of quaternionic random processes is proposed. Based on it, the ability of real-, complex- and quaternionic-valued multi-layer perceptrons (MLPs) to perform classification tasks for such signals is evaluated. For the multi-dimensional neural networks the relevance of class label representations is discussed. For signal-to-noise separation it is shown that the quaternionic MLP yields an optimal solution. Results on the classification of two different polarized signals are also reported.
14. Jansen, Ben H., and Pratish R. Desai. "K-complex detection using multi-layer perceptrons and recurrent networks." International Journal of Bio-Medical Computing 37, no. 3 (November 1994): 249–57. http://dx.doi.org/10.1016/0020-7101(94)90123-6.
15. Zheng, Lilei, Stefan Duffner, Khalid Idrissi, Christophe Garcia, and Atilla Baskurt. "Siamese multi-layer perceptrons for dimensionality reduction and face identification." Multimedia Tools and Applications 75, no. 9 (August 30, 2015): 5055–73. http://dx.doi.org/10.1007/s11042-015-2847-3.
16. Hänsch, Ronny. "Complex-Valued Multi-Layer Perceptrons – An Application to Polarimetric SAR Data." Photogrammetric Engineering & Remote Sensing 76, no. 9 (September 1, 2010): 1081–88. http://dx.doi.org/10.14358/pers.76.9.1081.
17. Kämmerer, B., and W. Küpper. "Experiments for isolated-word recognition with single- and multi-layer perceptrons." Neural Networks 1 (January 1988): 302. http://dx.doi.org/10.1016/0893-6080(88)90333-4.
18. Chen, Wen-Yuan, Sin-Horng Chen, and Cheng-Jung Lin. "A speech recognition method based on the sequential multi-layer perceptrons." Neural Networks 9, no. 4 (June 1996): 655–69. http://dx.doi.org/10.1016/0893-6080(95)00140-9.
19. Langer, H., G. Nunnari, and L. Occhipinti. "Identification of Model Parameters Governing Seismic Waveforms with Multi-Layer Perceptrons." IFAC Proceedings Volumes 27, no. 8 (July 1994): 615–20. http://dx.doi.org/10.1016/s1474-6670(17)47777-9.
20. Farias, Felipe C., Teresa B. Ludermir, and Carmelo J. A. Bastos-Filho. "Embarrassingly Parallel Independent Training of Multi-Layer Perceptrons with Heterogeneous Architectures." AI 4, no. 1 (December 27, 2022): 16–27. http://dx.doi.org/10.3390/ai4010002.
Abstract: In this paper we propose a procedure to enable the training of several independent Multilayer Perceptron Neural Networks with different numbers of neurons and activation functions in parallel (ParallelMLPs) by exploring the principle of locality and the parallelization capabilities of modern CPUs and GPUs. The core idea of this technique is to represent several sub-networks as a single large network and use a Modified Matrix Multiplication that replaces an ordinary matrix multiplication with two simple matrix operations that allow separate and independent paths for gradient flowing. We have assessed our algorithm on simulated datasets varying the number of samples, features and batches using 10,000 different models, as well as on the MNIST dataset. We achieved a training speedup of 1 to 4 orders of magnitude compared to the sequential approach. The code is available online.
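The core trick described here, one large matrix multiplication whose gradient paths stay separate per sub-network, can be sketched with a block-diagonal mask. This is an illustrative reconstruction under assumed sizes, not the authors' released ParallelMLPs code (which also varies architectures and activations per sub-network).

```python
import torch

# Sketch: two independent single-hidden-layer MLPs stored as one large network.
# A block-diagonal mask on each weight matrix removes all cross-network terms,
# so one big matmul computes both forward passes and the gradients of one
# sub-network never flow into the other.
torch.manual_seed(0)
n_in, n_hid, n_out = 4, 8, 3                  # sizes of EACH sub-network (assumed)
mask1 = torch.block_diag(torch.ones(n_in, n_hid), torch.ones(n_in, n_hid))
mask2 = torch.block_diag(torch.ones(n_hid, n_out), torch.ones(n_hid, n_out))

W1 = torch.randn(2 * n_in, 2 * n_hid, requires_grad=True)
W2 = torch.randn(2 * n_hid, 2 * n_out, requires_grad=True)

x = torch.randn(16, n_in).repeat(1, 2)        # same batch fed to both sub-networks
h = torch.tanh(x @ (W1 * mask1))              # masked weights: independent paths
out = h @ (W2 * mask2)                        # (16, 2*n_out): both outputs at once

out.pow(2).mean().backward()                  # gradients stay block-diagonal too
print(torch.count_nonzero(W1.grad * (1 - mask1)))  # tensor(0): no cross-terms
```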
21. Pham, D. T., and S. Sagiroglu. "Three methods of training multi-layer perceptrons to model a robot sensor." Robotica 13, no. 5 (September 1995): 531–38. http://dx.doi.org/10.1017/s0263574700018373.
Abstract: This paper discusses three methods of training multi-layer perceptrons (MLPs) to model a six-degrees-of-freedom inertial sensor. Such a sensor is designed for use with a robot to determine the location of objects it has to pick up. The sensor operates by measuring parameters related to the inertia of an object and computing its location from those parameters. MLP models are employed for part of the computation. They are trained to output the orientation of the object in response to an input pattern that includes the period of natural vibration of the sensor on which the object rests. After reviewing the working principle of the sensor, the paper describes the three MLP training methods (backpropagation, optimisation using the Levenberg-Marquardt algorithm, and evolution based on the genetic algorithm) and presents the experimental results obtained.
22. Cigizoglu, H. Kerem. "Estimation and forecasting of daily suspended sediment data by multi-layer perceptrons." Advances in Water Resources 27, no. 2 (February 2004): 185–95. http://dx.doi.org/10.1016/j.advwatres.2003.10.003.
23. Zhong, Lin, Jia Liu, and Runsheng Liu. "A rejection model based on multi-layer perceptrons for Mandarin digit recognition." Journal of Computer Science and Technology 17, no. 2 (March 2002): 196–202. http://dx.doi.org/10.1007/bf02962212.
24. Mirjalili, Seyedali. "How effective is the Grey Wolf optimizer in training multi-layer perceptrons." Applied Intelligence 43, no. 1 (January 17, 2015): 150–61. http://dx.doi.org/10.1007/s10489-014-0645-7.
25. Mak, M. W., W. G. Allen, and G. G. Sexton. "Comparing Multi-layer Perceptrons and Radial Basis Functions networks in speaker recognition." Journal of Microcomputer Applications 16, no. 2 (April 1993): 147–59. http://dx.doi.org/10.1006/jmca.1993.1013.
26. Bologna, Guido. "A Simple Convolutional Neural Network with Rule Extraction." Applied Sciences 9, no. 12 (June 13, 2019): 2411. http://dx.doi.org/10.3390/app9122411.
Abstract: Classification responses provided by Multi Layer Perceptrons (MLPs) can be explained by means of propositional rules. So far, many rule extraction techniques have been proposed for shallow MLPs, but not for Convolutional Neural Networks (CNNs). To fill this gap, this work presents a new rule extraction method applied to a typical CNN architecture used in Sentiment Analysis (SA). We focus on the textual data on which the CNN is trained with "tweets" of movie reviews. Its architecture includes an input layer representing words by "word embeddings", a convolutional layer, a max-pooling layer, followed by a fully connected layer. Rule extraction is performed on the fully connected layer, with the help of the Discretized Interpretable Multi Layer Perceptron (DIMLP). This transparent MLP architecture allows us to generate symbolic rules by precisely locating axis-parallel hyperplanes. Experiments based on cross-validation emphasize that our approach is more accurate than approaches in which SVMs or decision trees substitute for DIMLPs. Overall, the rules reach high fidelity, and the discriminative n-grams represented in the antecedents explain the classifications adequately. With several test examples we illustrate the n-grams represented in the activated rules; notably, each contributes to the final classification with a certain intensity.
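As a rough illustration of the architecture this abstract describes (word embeddings, one convolutional layer, max pooling over time, then the fully connected layer on which rule extraction is performed), here is a minimal PyTorch sketch; all sizes are assumptions, and the DIMLP rule-extraction step itself is not shown.

```python
import torch
import torch.nn as nn

# A minimal sketch (assumed sizes, not the paper's exact configuration) of the
# CNN the abstract describes for sentiment analysis: embeddings, a conv layer,
# max pooling over time, and the fully connected layer later used for rules.
class SentimentCNN(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, n_filters=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)   # layer the DIMLP would replace

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)         # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                 # (batch, n_filters, seq_len)
        x = x.max(dim=2).values                      # max pooling over time
        return self.fc(x)                            # class scores

model = SentimentCNN()
scores = model(torch.randint(0, 10_000, (8, 40)))    # 8 tweets of 40 tokens each
print(scores.shape)                                  # torch.Size([8, 2])
```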
27. Zhang, Jia-shu, and Xian-ci Xiao. "Fast evolving multi-layer perceptrons for noisy chaotic time series modeling and predictions." Chinese Physics 9, no. 6 (June 2000): 408–13. http://dx.doi.org/10.1088/1009-1963/9/6/002.
28. Riedmiller, Martin. "Advanced supervised learning in multi-layer perceptrons — From backpropagation to adaptive learning algorithms." Computer Standards & Interfaces 16, no. 3 (July 1994): 265–78. http://dx.doi.org/10.1016/0920-5489(94)90017-5.
29. Frias-Martinez, E., A. Sanchez, and J. Velez. "Support vector machines versus multi-layer perceptrons for efficient off-line signature recognition." Engineering Applications of Artificial Intelligence 19, no. 6 (September 2006): 693–704. http://dx.doi.org/10.1016/j.engappai.2005.12.006.
30. Kişi, Özgür. "Daily pan evaporation modelling using multi-layer perceptrons and radial basis neural networks." Hydrological Processes 23, no. 2 (January 15, 2009): 213–23. http://dx.doi.org/10.1002/hyp.7126.
31. Pham, D. T., and E. Oztemel. "Control Chart Pattern Recognition Using Combinations of Multi-Layer Perceptrons and Learning-Vector-Quantization Neural Networks." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 207, no. 2 (May 1993): 113–18. http://dx.doi.org/10.1243/pime_proc_1993_207_325_02.
Abstract: Pattern recognition systems made up of independent multi-layer perceptrons and learning-vector-quantization neural network modules have been developed for classifying control chart patterns. These composite pattern recognition systems have better classification capabilities than their individual modules. The paper describes the structures of these pattern recognition systems and the results obtained using them.
32. Laouid, Abdelkader Azzeddine, Abdelkrim Mohrem, and Aicha Djalab. "A multi-objective grey wolf optimizer (GWO)-based multi-layer perceptrons (MLPs) trainer for optimal PMUs placement." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 41, no. 1 (November 5, 2021): 187–208. http://dx.doi.org/10.1108/compel-01-2021-0018.
Abstract: Purpose: This paper aims to find the minimum possible number of phasor measurement units (PMUs) to achieve maximum and complete observability of the power system and improve the redundancy of measurements, in normal cases (with and without zero injection bus [ZIB]) and then under a single PMU failure or the outage of a single line. Design/methodology/approach: An efficient approach operates adequately and provides optimal solutions to the PMU placement problem. The objective function of optimal PMU placement (OPP) must be devised mathematically; the aim of the OPP problem is to identify the buses of the power system at which to place PMU devices so as to ensure full observability of the system. In this paper, the grey wolf optimizer (GWO) is used for training multi-layer perceptrons (MLPs), known as the Grey Wolf Optimizer based Neural Network ("GW-NN"), to place the PMUs in power grids optimally. Findings: Following extensive simulation tests with MATLAB/Simulink, the results obtained for the placement of PMUs provide system measurements with fewer or at most the same number of PMUs, but with a greater degree of observability than other approaches. Practical implications: The efficiency of the suggested method is tested on the IEEE 14-bus, 24-bus, New England 39-bus and Algerian 114-bus systems. Originality/value: This paper proposes a new method for placing PMUs in power grids as a multi-objective problem, to reduce cost and improve the observability of these grids in normal and faulty cases.
33. Li, Xiao Jun, and Lin Li. "IP Core Based Hardware Implementation of Multi-Layer Perceptrons on FPGAs: A Parallel Approach." Advanced Materials Research 433-440 (January 2012): 5647–53. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.5647.
Abstract: There are many models derived from the famous bio-inspired artificial neural network (ANN). Among them, the multi-layer perceptron (MLP) is widely used as a universal function approximator. With the development of EDA and recent research work, we are able to use a rapid and convenient method to generate hardware implementations of MLPs on FPGAs through pre-designed IP cores. At the same time, we focus on achieving the inherent parallelism of neural networks. In this paper, we first propose the hardware architecture of modular IP cores. Then, a parallel MLP is devised as an example. Finally, some conclusions are drawn.
34. Wang, Yaoguang, Zemin Liu, and Zheng Zhou. "Equal interval range approximation and expanding learning rule for multi-layer perceptrons and applications." Journal of Electronics (China) 9, no. 4 (October 1992): 327–35. http://dx.doi.org/10.1007/bf02685869.
35. Vershkov, Nikolay Anatolievich, Mikhail Grigoryevich Babenko, Viktor Andreevich Kuchukov, and Natalia Nikolaevna Kuchukova. "Advanced supervised learning in multi-layer perceptrons to the recognition tasks based on correlation indicator." Proceedings of the Institute for System Programming of the RAS 33, no. 1 (2021): 33–46. http://dx.doi.org/10.15514/ispras-2021-33(1)-2.
Abstract: The article deals with the problem of recognizing handwritten digits using feedforward neural networks (perceptrons) with a correlation indicator. The proposed method is based on a mathematical model of the neural network as an oscillatory system similar to an information transmission system. The article uses the authors' theoretical work on searching for the global extremum of the error function in artificial neural networks. The handwritten digit image is considered as a one-dimensional input discrete signal representing a combination of "ideal digit writing" and noise, which describes the deviation of the input realization from "ideal writing". The ideal observer criterion (Kotelnikov criterion), which is widely used in information transmission systems and describes the probability of correct recognition of the input signal, is used to form the loss function. The article carries out a comparative analysis, on experimentally obtained sequences, of the learning convergence of the correlation indicator and of the CrossEntropyLoss function widely used in classification tasks, both with and without an optimizer. Based on the experiments carried out, it is concluded that the proposed correlation indicator has an advantage of 2-3 times.
36. Javed, Fajar, Syed Omer Gilani, Seemab Latif, Asim Waris, Mohsin Jamil, and Ahmed Waqas. "Predicting Risk of Antenatal Depression and Anxiety Using Multi-Layer Perceptrons and Support Vector Machines." Journal of Personalized Medicine 11, no. 3 (March 12, 2021): 199. http://dx.doi.org/10.3390/jpm11030199.
Abstract: Perinatal depression and anxiety are defined to be the mental health problems a woman faces during pregnancy, around childbirth, and after child delivery. While this often occurs in women and affects all family members including the infant, it can easily go undetected and underdiagnosed. The prevalence rates of antenatal depression and anxiety worldwide, especially in low-income countries, are extremely high. The wide majority suffer from mild to moderate depression, with the risk of impaired child–mother relationships and infant health; a few women end up taking their own lives. Owing to high costs and the non-availability of resources, it is almost impossible to screen every pregnant woman for depression/anxiety, whereas under-detection can have a lasting impact on mother and child's health. This work proposes a multi-layer perceptron based neural network (MLP-NN) classifier to predict the risk of depression and anxiety in pregnant women. We trained and evaluated our proposed system on a Pakistani dataset of 500 women in their antenatal period. ReliefF was used for feature selection before classifier training. Evaluation metrics such as accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver operating characteristic curve were used to evaluate the performance of the trained model. The multilayer perceptron and support vector classifier achieved an area under the receiver operating characteristic curve of 88% and 80% for antenatal depression and 85% and 77% for antenatal anxiety, respectively. The system can be used as a facilitator for screening women during their routine visits in the hospital's gynecology and obstetrics departments.
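The classification pipeline the study describes (feature selection, then an MLP evaluated by AUC and related metrics) resembles the following scikit-learn sketch. The feature matrix and labels here are random placeholders, not the study's dataset, and the ReliefF selection step is omitted.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# A minimal sketch of the kind of MLP screening classifier the study describes;
# X and y are synthetic placeholders, not the Pakistani antenatal dataset, and
# the hidden-layer sizes are assumptions rather than the paper's configuration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # 500 women, 20 selected features (assumed)
y = rng.integers(0, 2, size=500)          # 1 = at risk of antenatal depression

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")                  # the paper reports 0.88 for depression
```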
37. Han, Seongbae, Gyuyeol Kong, and Sooyong Choi. "Equalization scheme with misalignment estimation based on multi-layer perceptrons for holographic data storage systems." Japanese Journal of Applied Physics 58, SK (July 17, 2019): SKKD02. http://dx.doi.org/10.7567/1347-4065/ab2be0.
38. Culotta, Simona, Antonio Messineo, and Simona Messineo. "The Application of Different Model of Multi-Layer Perceptrons in the Estimation of Wind Speed." Advanced Materials Research 452-453 (January 2012): 690–94. http://dx.doi.org/10.4028/www.scientific.net/amr.452-453.690.
Abstract: Wind speed forecasting is essential for effective planning of wind energy exploitation projects. The ability to predict short-term wind speed is a prerequisite for all operators in the wind energy sector. Consequently, it is essential to identify an efficient forecasting method. In this paper, the wind speed in the province of Trapani (Sicily) is modeled by an artificial neural network. Several neural network models were generated and compared through error measures. Simulation results show that the estimated values of wind speed are in good agreement with the values measured by anemometers.
39. Greco, Claudia, Pasquale Pace, Stefano Basagni, and Giancarlo Fortino. "Jamming detection at the edge of drone networks using Multi-layer Perceptrons and Decision Trees." Applied Soft Computing 111 (November 2021): 107806. http://dx.doi.org/10.1016/j.asoc.2021.107806.
40. Oh, Sang-Hoon, and Youngjik Lee. "A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons." ETRI Journal 17, no. 1 (April 1, 1995): 11–22. http://dx.doi.org/10.4218/etrij.95.0195.0012.
41. Rejab, Fahmi Ben, Kaouther Nouira, and Abdelwahed Trabelsi. "Support Vector Machines versus Multi-layer Perceptrons for Reducing False Alarms in Intensive Care Units." International Journal of Computer Applications 49, no. 11 (July 28, 2012): 41–47. http://dx.doi.org/10.5120/7675-0969.
42. Minati, Ludovico. "Rapid generation of biexponential and diffusional kurtosis maps using multi-layer perceptrons: a preliminary experience." Magnetic Resonance Materials in Physics, Biology and Medicine 21, no. 4 (July 2008): 299–305. http://dx.doi.org/10.1007/s10334-008-0129-z.
43. Bologna, Guido, and Yoichi Hayashi. "Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning." Journal of Artificial Intelligence and Soft Computing Research 7, no. 4 (October 1, 2017): 265–86. http://dx.doi.org/10.1515/jaiscr-2017-0019.
Abstract: Rule extraction from neural networks is a fervent research topic. In the last 20 years many authors have presented a number of techniques showing how to extract symbolic rules from Multi Layer Perceptrons (MLPs). Nevertheless, very few were related to ensembles of neural networks, and even fewer to networks trained by deep learning. On several datasets we performed rule extraction from ensembles of Discretized Interpretable Multi Layer Perceptrons (DIMLP), and from DIMLPs trained by deep learning. The results obtained on the Thyroid dataset and the Wisconsin Breast Cancer dataset show that the predictive accuracy of the extracted rules compares very favorably with state-of-the-art results. Finally, in the last classification problem on digit recognition, rules generated from the MNIST dataset can be viewed as discriminatory features in particular digit areas. Qualitatively, with respect to rule complexity in terms of the number of generated rules and the number of antecedents per rule, deep DIMLPs and DIMLPs trained by arcing give similar results on a binary classification problem involving digits 5 and 8. On the whole MNIST problem we showed that it is possible to determine the feature detectors created by neural networks and also that the complexity of the extracted rulesets can be well balanced between accuracy and interpretability.
44. Kim, Taehwan, and Tülay Adalı. "Approximation by Fully Complex Multilayer Perceptrons." Neural Computation 15, no. 7 (July 1, 2003): 1641–66. http://dx.doi.org/10.1162/089976603321891846.
Abstract: We investigate the approximation ability of a multi-layer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs that include using two real-valued MLPs, one processing the real part and the other processing the imaginary part, have been traditionally employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain and address most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question about the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs. First, fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be universal approximators of any continuous complex mappings. The complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of singularity nearest to the origin.
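As a pointer to what "fully complex" means here, the following display (notation assumed, not copied from the letter) shows one MLP layer whose inputs, weights, biases and activation all live in the complex domain, with tanh as a representative ETF:

\[
h = \sigma(Wz + b), \qquad \sigma(z) = \tanh z = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}, \qquad z \in \mathbb{C}^{n},\ W \in \mathbb{C}^{m \times n},\ b, h \in \mathbb{C}^{m}.
\]

Here \(\tanh z\) is analytic everywhere except isolated poles at \(z = i\pi/2 + ik\pi\), so it cannot be bounded on all of \(\mathbb{C}\); by Liouville's theorem no non-constant activation can be both bounded and entire, which is the conflict the letter resolves by classifying ETFs through their singularities.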
45. Khudhur, Hisham M., and Kais I. Ibraheem. "Metaheuristic optimization algorithm based on the two-step Adams-Bashforth method in training multi-layer perceptrons." Eastern-European Journal of Enterprise Technologies 2, no. 4 (116) (April 28, 2022): 6–13. http://dx.doi.org/10.15587/1729-4061.2022.254023.
Abstract: The proposed metaheuristic optimization algorithm based on the two-step Adams-Bashforth scheme (MOABT) is used in this paper for multi-layer perceptron (MLP) training. In computer science and mathematics, metaheuristics are high-level procedures or guidelines designed to find, devise, or select algorithmic search methods to obtain high-quality solutions to a problem, especially when the information is insufficient or incomplete, or when computational capacity is limited. Many metaheuristic methods include stochastic operations, which means that the resulting solution depends on the random variables generated during the search. Because it explores a broad range of feasible solutions at the same time, a metaheuristic can often find good solutions with less computational effort than iterative methods and algorithms, making it a useful approach to solving hard problems. Several characteristics distinguish metaheuristic strategies in the search process; the goal is to explore the search space efficiently to find the best and closest solution, and the techniques involved range from simple searches to complex learning processes. Eight benchmark datasets are used to evaluate the proposed approach: five classification datasets and three approximation datasets. The numerical results were compared with those of the well-known evolutionary trainer Grey Wolf Optimizer (GWO). The statistical study revealed that the MOABT algorithm can outperform other algorithms in terms of avoiding local optima and speed of convergence to the global optimum. The results also show that the proposed problems can be classified and approximated with high accuracy.
46. Ghanem, Waheed A. H. M., and Aman Jantan. "A Cognitively Inspired Hybridization of Artificial Bee Colony and Dragonfly Algorithms for Training Multi-layer Perceptrons." Cognitive Computation 10, no. 6 (September 12, 2018): 1096–134. http://dx.doi.org/10.1007/s12559-018-9588-3.
47. Dodd, Nigel. "Graph matching by stochastic optimisation applied to the implementation of multi layer perceptrons on transputer networks." Parallel Computing 10, no. 2 (April 1989): 135–42. http://dx.doi.org/10.1016/0167-8191(89)90013-6.
48. Han, Seongbae, Gyuyeol Kong, and Sooyong Choi. "A Detection Scheme With TMR Estimation Based on Multi-Layer Perceptrons for Bit Patterned Media Recording." IEEE Transactions on Magnetics 55, no. 7 (July 2019): 1–4. http://dx.doi.org/10.1109/tmag.2018.2889875.
49. Yılmaz, Ömer, Adem Alpaslan Altun, and Murat Köklü. "Optimizing the learning process of multi-layer perceptrons using a hybrid algorithm based on MVO and SA." International Journal of Industrial Engineering Computations 13, no. 4 (2022): 617–40. http://dx.doi.org/10.5267/j.ijiec.2022.5.003.
Abstract: Artificial neural networks (ANNs) are one of the artificial intelligence techniques used in real-world problems and applications encountered in almost all industries, such as education, health, chemistry, food, informatics, logistics and transportation. ANNs are widely used in techniques such as optimization, modelling, classification and forecasting, and many empirical studies related to ANNs have been carried out in areas such as planning, inventory management, maintenance, quality control, econometrics, supply chain management and logistics. The most important, and equally difficult, stage of an ANN is the learning process, which amounts to finding optimal values in the search space for different datasets. In this process, the values generated by training algorithms are used as network parameters and directly determine the success of the neural network (NN). Classical training techniques suffer from problems such as local optima and slow convergence. In the face of this, meta-heuristic algorithms have been used in many studies as an alternative for training ANNs. In this study, a new hybrid algorithm named MVOSANN is proposed for training ANNs, using the simulated annealing (SA) and multi-verse optimizer (MVO) algorithms. The proposed MVOSANN algorithm has been tested on 12 prevalent classification datasets, and its performance has been compared with 12 well-recognized and current meta-heuristic algorithms. Experimental results show that MVOSANN produces very successful and competitive results.