Academic literature on the topic 'Multi-layer perceptrons'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-layer perceptrons.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-layer perceptrons"

1. Racca, Robert. "Can periodic perceptrons replace multi-layer perceptrons?" Pattern Recognition Letters 21, no. 12 (November 2000): 1019–25. http://dx.doi.org/10.1016/s0167-8655(00)00057-x.

2. García-Pedrajas, N., D. Ortiz-Boyer, and C. Hervás-Martínez. "Cooperative coevolution of generalized multi-layer perceptrons." Neurocomputing 56 (January 2004): 257–83. http://dx.doi.org/10.1016/j.neucom.2003.09.004.

3. Tseng, Yuen-Hsien, and Ja-Ling Wu. "Decoding Reed-Muller codes by multi-layer perceptrons." International Journal of Electronics 75, no. 4 (October 1993): 589–94. http://dx.doi.org/10.1080/00207219308907134.

4. Buchholz, Sven, and Gerald Sommer. "On Clifford neurons and Clifford multi-layer perceptrons." Neural Networks 21, no. 7 (September 2008): 925–35. http://dx.doi.org/10.1016/j.neunet.2008.03.004.

5. Mirzai, A. R., A. Higgins, and D. Tsaptsinos. "Techniques for the minimisation of multi-layer perceptrons." Engineering Applications of Artificial Intelligence 6, no. 3 (June 1993): 265–77. http://dx.doi.org/10.1016/0952-1976(93)90069-a.

6. Egmont-Petersen, Michael, Jan L. Talmon, Arie Hasman, and Anton W. Ambergen. "Assessing the importance of features for multi-layer perceptrons." Neural Networks 11, no. 4 (June 1998): 623–35. http://dx.doi.org/10.1016/s0893-6080(98)00031-8.

7. Roque, Antonio Muñoz San, Carlos Maté, Javier Arroyo, and Ángel Sarabia. "iMLP: Applying Multi-Layer Perceptrons to Interval-Valued Data." Neural Processing Letters 25, no. 2 (February 15, 2007): 157–69. http://dx.doi.org/10.1007/s11063-007-9035-z.

8. Vlachos, D. S. "A Local Supervised Learning Algorithm For Multi-Layer Perceptrons." Applied Numerical Analysis & Computational Mathematics 1, no. 2 (December 2004): 535–39. http://dx.doi.org/10.1002/anac.200410016.

9. Moustafa, Essam B., and Ammar Elsheikh. "Predicting Characteristics of Dissimilar Laser Welded Polymeric Joints Using a Multi-Layer Perceptrons Model Coupled with Archimedes Optimizer." Polymers 15, no. 1 (January 2, 2023): 233. http://dx.doi.org/10.3390/polym15010233.

Abstract:
This study investigates the application of a coupled multi-layer perceptrons (MLP) model with Archimedes optimizer (AO) to predict characteristics of dissimilar lap joints made of polymethyl methacrylate (PMMA) and polycarbonate (PC). The joints were welded using the laser transmission welding (LTW) technique equipped with a beam wobbling feature. The inputs of the models were laser power, welding speed, pulse frequency, wobble frequency, and wobble width; whereas, the outputs were seam width and shear strength of the joint. The Archimedes optimizer was employed to obtain the optimal internal parameters of the multi-layer perceptrons. In addition to the Archimedes optimizer, the conventional gradient descent technique, as well as the particle swarm optimizer (PSO), was employed as internal optimizers of the multi-layer perceptrons model. The prediction accuracy of the three models was compared using different error measures. The AO-MLP outperformed the other two models. The computed root mean square errors of the MLP, PSO-MLP, and AO-MLP models are (39.798, 19.909, and 2.283) and (0.153, 0.084, and 0.0321) for shear strength and seam width, respectively.
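
As a rough, runnable illustration of the training scheme this abstract describes (a population-based metaheuristic tuning the internal parameters of an MLP instead of gradient descent), the sketch below fits a tiny one-hidden-layer network with a simple elitist random search standing in for the Archimedes optimizer, whose update equations are not given here. The data, dimensions, and hyperparameters are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the five process inputs and one response (all invented).
X = rng.uniform(-1.0, 1.0, size=(200, 5))
y = np.sin(X.sum(axis=1, keepdims=True))

H = 16  # hidden units
n_params = 5 * H + H + H + 1  # W1, b1, W2, b2 flattened

def unpack(w):
    """Split a flat parameter vector into the MLP's weight matrices."""
    W1 = w[:5 * H].reshape(5, H)
    b1 = w[5 * H:6 * H]
    W2 = w[6 * H:7 * H].reshape(H, 1)
    b2 = w[7 * H]
    return W1, b1, W2, b2

def mse(w):
    """Fitness: mean squared error of the MLP defined by parameter vector w."""
    W1, b1, W2, b2 = unpack(w)
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

# Population-based search over the flat parameter vector, with elitism.
pop = rng.normal(0.0, 0.5, size=(30, n_params))
for _ in range(200):
    scores = np.array([mse(w) for w in pop])
    best = pop[scores.argmin()]
    pop = best + rng.normal(0.0, 0.1, size=(30, n_params))  # perturb the elite
    pop[0] = best                                           # never lose it
```

Any population-based optimizer (Archimedes, PSO, or plain gradient descent as in the paper's comparison) slots into the same fitness-function interface.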

10. Xi, Yan Hui, and Hui Peng. "Training Multi-Layer Perceptrons with the Unscented Kalman Particle Filter." Advanced Materials Research 542-543 (June 2012): 745–48. http://dx.doi.org/10.4028/www.scientific.net/amr.542-543.745.

Abstract:
Many Bayesian learning approaches to optimizing multi-layer perceptron (MLP) parameters have been proposed, such as the extended Kalman filter (EKF). In this paper, a sequential approach is applied to train the MLPs. Based on the particle filter, the approach, named the unscented Kalman particle filter (UPF), uses the unscented Kalman filter as the proposal distribution to generate the importance sampling density. The UPF is devised to deal with the high-dimensional parameter space inherent to neural network models. Simulation results show that the new algorithm performs better than traditional optimization methods such as the extended Kalman filter.
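
The abstract's idea of treating the MLP weights as the state of a sequential filter can be sketched with a plain bootstrap particle filter; this is a simplified stand-in for the UPF, which would additionally run an unscented Kalman filter per particle to build the proposal distribution. Everything below (network size, noise levels, data) is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

H = 3                      # hidden units of the toy MLP
n_w = 3 * H + 1            # W1 (H), b1 (H), W2 (H), b2 (1) for a 1-in/1-out net

def predict(w, x):
    """Forward pass of the tiny MLP for scalar or array input x."""
    W1, b1, W2, b2 = w[:H], w[H:2*H], w[2*H:3*H], w[3*H]
    return np.tanh(np.outer(np.atleast_1d(x), W1) + b1) @ W2 + b2

# Streaming training data from an assumed target function.
xs = rng.uniform(-2.0, 2.0, size=300)
ys = np.tanh(2 * xs) + rng.normal(0.0, 0.05, size=300)

N = 500                                    # particles = candidate weight vectors
particles = rng.normal(0.0, 1.0, size=(N, n_w))
sigma_obs, sigma_proc = 0.3, 0.05          # assumed noise levels

for x, y in zip(xs, ys):
    # Diffuse the weights (artificial process noise for a static parameter).
    particles += rng.normal(0.0, sigma_proc, size=particles.shape)
    # Weight each particle by the Gaussian likelihood of the new observation.
    preds = np.array([predict(w, x)[0] for w in particles])
    logw = -0.5 * ((y - preds) / sigma_obs) ** 2
    wgt = np.exp(logw - logw.max())
    wgt /= wgt.sum()
    # Multinomial resampling keeps plausible weight vectors alive.
    particles = particles[rng.choice(N, size=N, p=wgt)]

# Pick the particle that explains the whole stream best.
errs = np.array([np.mean((predict(w, xs) - ys) ** 2) for w in particles])
w_best = particles[errs.argmin()]
rmse = float(np.sqrt(errs.min()))
```

The bootstrap proposal used here is the paper's baseline; the UPF replaces it with a UKF-generated proposal to cope better with the high-dimensional weight space.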

Dissertations / Theses on the topic "Multi-layer perceptrons"

1. Zhao, Lenny. "Uncertainty prediction with multi-layer perceptrons." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55733.pdf.

2. Cairns, Graham Andrew. "Learning with analogue VLSI multi-layer perceptrons." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296901.

3. Papadopoulos, Georgios. "Theoretical issues and practical considerations concerning confidence measures for multi-layer perceptrons." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/12753.

Abstract:
The primary aim of this thesis is to study existing confidence measure (CM) methods and assess their practicability and performance in harsh real-world environments. The motivation for this work was a real industrial application: the development of a paper curl prediction system. Curl is an important paper quality parameter that can only be measured after production. The available data were sparse and were known to be corrupted by gross errors. Moreover, it was suspected that data noise was not constant over input space. Three approaches were identified as suitable for use in real-world applications: maximum likelihood (ML), the approximate Bayesian approach, and the bootstrap technique. These methods were initially compared using a standard CM performance evaluation method, based on estimating the prediction interval coverage probability (PICP). It was found that the PICP metric can only gauge CM performance as an average over the input space. However, local CM performance is crucial because a CM must associate low confidence with high data noise/low data density regions and high confidence with low noise/high data density regions. Moreover, evaluating local performance could be used to gauge the input-dependency of the noise in the data. For this reason, a new CM evaluation technique was developed to study local CM performance. The new approach, called classification of local uncertainty estimates (CLUES), was then used for a new comparison study, this time in the light of local performance. Three main conclusions were reached: the noise in the curl data was found to have input-dependent variance, the approximate Bayesian approach outperformed the other two in most cases, and the bootstrap technique was found to be inferior to both ML and Bayesian methods for data sets with input-dependent noise variance.
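
A minimal sketch of the PICP metric discussed above, applied to synthetic data with input-dependent noise variance like that attributed to the curl data. It also shows why a global PICP can hide poor local coverage, which is the thesis's motivation for a local evaluation method; the model and noise profile below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data whose noise variance depends on the input, as described.
x = rng.uniform(0.0, 1.0, size=1000)
noise_sd = 0.05 + 0.5 * x                  # noise grows with x (assumed)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 1.0, size=1000) * noise_sd

# Suppose the model knows the true mean but assumes *constant* noise.
y_hat = np.sin(2 * np.pi * x)
sd_hat = np.full_like(x, noise_sd.mean())

def picp(targets, lo, hi):
    """Prediction interval coverage probability: fraction inside the interval."""
    return float(np.mean((targets >= lo) & (targets <= hi)))

lo, hi = y_hat - 1.96 * sd_hat, y_hat + 1.96 * sd_hat

# Averaged over the whole input space, coverage can look acceptable...
print("global PICP:", picp(y, lo, hi))

# ...while local coverage exposes the wrong constant-variance assumption:
# the low-noise half is over-covered and the high-noise half under-covered.
for a, b in [(0.0, 0.5), (0.5, 1.0)]:
    m = (x >= a) & (x < b)
    print(f"PICP on [{a}, {b}):", picp(y[m], lo[m], hi[m]))
```
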

4. Shepherd, Adrian John. "Novel second-order techniques and global optimisation methods for supervised training of multi-layer perceptrons." Thesis, University College London (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321662.

5. Collobert, Ronan. "Algorithmes d'Apprentissage pour grandes bases de données" [Learning algorithms for large databases]. Paris 6, 2004. http://www.theses.fr/2004PA066063.

6. Shao, Hang. "A Fast MLP-based Learning Method and its Application to Mine Countermeasure Missions." Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23512.

Abstract:
In this research, a novel machine learning method is designed and applied to mine countermeasure missions. Similar to some kernel methods, the proposed approach seeks to compute a linear model in another, higher-dimensional feature space. However, no kernel is used and the feature mapping is explicit, so computation can be done directly in the accessible feature space. In the proposed approach, the feature projection is implemented by constructing a large hidden layer, which departs from the traditional view that a multi-layer perceptron is funnel-shaped and that its hidden layer serves as a feature extractor. The proposed approach is a general method that can be applied to various problems. It is able to improve the performance of neural network based methods and the learning speed of the support vector machine. The classification speed of the proposed approach is also faster than that of kernel machines on the mine countermeasure mission task.
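
The explicit feature mapping described here (a large, fixed hidden layer followed by a trained linear model, with no kernel) can be sketched in the style of an extreme learning machine. This is an illustrative reconstruction, not the author's exact method, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class data: two interleaved noisy arcs (invented for the sketch).
n = 400
t = rng.uniform(0.0, np.pi, size=n)
cls = rng.integers(0, 2, size=n)
X = np.c_[np.cos(t) + cls, np.sin(t) * (1 - 2 * cls)]
X += rng.normal(0.0, 0.1, size=X.shape)
y = 2.0 * cls - 1.0                      # labels in {-1, +1}

# Explicit feature map: one large hidden layer with fixed random weights.
H = 500
W = rng.normal(0.0, 2.0, size=(2, H))
b = rng.uniform(-1.0, 1.0, size=H)
Phi = np.tanh(X @ W + b)                 # (n, H): no kernel trick needed

# Only the linear readout is trained, by ridge-regularised least squares.
lam = 1e-2
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(H), Phi.T @ y)

acc = float(np.mean(np.sign(Phi @ beta) == y))
```

Because the map is explicit, classification is a single matrix product, which is the source of the speed advantage over kernel machines claimed in the abstract.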

7. Coughlin, Michael J. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks." Thesis, Griffith University, School of Applied Psychology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030409.110949.

Abstract:
The electro-oculogram (EOG) is the most widely used technique for recording eye movements in clinical settings. It is inexpensive, practical, and non-invasive. Use of EOG is usually restricted to horizontal recordings as vertical EOG contains eyelid artefact (Oster & Stern, 1980) and blinks. The ability to analyse two dimensional (2D) eye movements may provide additional diagnostic information on pathologies, and further insights into the nature of brain functioning. Simultaneous recording of both horizontal and vertical EOG also introduces other difficulties into calibration of the eye movements, such as different gains in the two signals, and misalignment of electrodes producing crosstalk. These transformations of the signals create problems in relating the two dimensional EOG to actual rotations of the eyes. The application of an artificial neural network (ANN) that could map 2D recordings into 2D eye positions would overcome this problem and improve the utility of EOG. To determine whether ANNs are capable of correctly calibrating the saccadic eye movement data from 2D EOG (i.e. performing the necessary inverse transformation), the ANNs were first tested on data generated from mathematical models of saccadic eye movements. Multi-layer perceptrons (MLPs) with non-linear activation functions and trained with back propagation proved to be capable of calibrating simulated EOG data to a mean accuracy of 0.33° of visual angle (SE = 0.01). Linear perceptrons (LPs) were only nearly half as accurate. For five subjects performing a saccadic eye movement task in the upper right quadrant of the visual field, the mean accuracy provided by the MLPs was 1.07° of visual angle (SE = 0.01) for EOG data, and 0.95° of visual angle (SE = 0.03) for infrared limbus reflection (IRIS®) data. MLPs enabled calibration of 2D saccadic EOG to an accuracy not significantly different to that obtained with the infrared limbus tracking data.
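
The inverse transformation the abstract describes (recovering 2D gaze angles from gain-mismatched, crosstalk-corrupted two-channel recordings) can be sketched with a small MLP regressor. The simulated "EOG" below, with its made-up gain/crosstalk matrix and noise level, only stands in for real recordings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# True 2-D gaze angles (degrees) that the calibration should recover.
gaze = rng.uniform(0.0, 20.0, size=(500, 2))

# Simulated two-channel recording: unequal channel gains plus crosstalk
# from electrode misalignment, plus measurement noise (all values assumed).
A = np.array([[1.3, 0.25],
              [0.15, 0.8]])
eog = gaze @ A.T + rng.normal(0.0, 0.3, size=gaze.shape)

# A small MLP learns the inverse transformation: recording -> gaze angle.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(eog[:400], gaze[:400])

# Mean absolute calibration error on held-out points, in degrees.
err = float(np.abs(net.predict(eog[400:]) - gaze[400:]).mean())
```

The non-linear activations are what let an MLP also absorb non-linear electrode effects, which is why the thesis found MLPs nearly twice as accurate as linear perceptrons.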

8. Coughlin, Michael J. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks." PhD thesis, School of Applied Psychology, Griffith Health, Griffith University, 2003. http://hdl.handle.net/10072/365854. (Abstract identical to the preceding entry.)

9. Dunne, R. A. "Multi-layer perceptron models for classification." PhD thesis, Murdoch University, 2003. https://researchrepository.murdoch.edu.au/id/eprint/50257/.

Abstract:
This thesis concerns the Multi-layer Perceptron (MLP) model, one of a variety of neural network models that have come into wide prominence since the mid 1980s for the classification of individuals into pre-defined classes based on a vector of individual measurements. Each discipline in which the MLP model has had influence, including computing, electrical engineering and psychology, has recast the model into its own language and imbued it with its own concerns. This divergence of terminologies has made the literature somewhat impenetrable but has also led to an appreciation of other disciplines' priorities and interests. The major aim of the thesis has been to bring the MLP model within the framework of statistics. We have two aims here: one is to make the MLP model more intelligible to statisticians; and the other is to bring the insights of statistics to the MLP model. A statistical modeling approach can make valuable contributions, ranging from small but important clarifications, such as clearing up the confusion in the MLP literature between the model and the methodology for fitting the model, to much larger insights such as determining the robustness of the model in the event of outlying or atypical data. We provide a treatment of the relationship of the MLP classifier to more familiar statistical models and of the various fitting and model selection methodologies currently used for MLP models. A description of the influence curves of the MLP is provided, leading to both an understanding of how the MLP model relates to logistic regression (and to robust versions of logistic regression) and to a proposal for a robust MLP model. Practical problems associated with the fitting of MLP models, from the effects of scaling of the input data to the effects of various penalty terms, are also considered. The MLP model has a variable architecture with the major source of variation being the number of hidden layer processing units. A direct method is given for determining this in multi-class problems where the pairwise decision boundary is linear in the feature space. Finally, in applications such as remote sensing each vector of measurements or pixel contains contextual information about the neighboring pixels. The MLP model is modified to incorporate this contextual information into the classification procedure.
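
One concrete point of contact between the MLP and familiar statistical models, mentioned in the abstract, is that an MLP with no hidden layer and a sigmoid output is exactly the logistic regression model; the sketch below fits it by gradient descent on the cross-entropy loss using invented toy data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def zero_hidden_mlp(x, w, b):
    # An MLP with no hidden layer and a sigmoid output unit computes
    # P(class 1 | x) = sigmoid(w.x + b): exactly logistic regression.
    return sigmoid(x @ w + b)

# Fitting it by gradient descent on the cross-entropy loss uses the
# same gradient as logistic regression (toy, hand-made data).
X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [-1.0, -1.0]])
y = np.array([0.0, 0.0, 1.0, 0.0])
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = zero_hidden_mlp(X, w, b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)   # logistic-regression gradient
    b -= 0.5 * float(np.mean(p - y))
```

Adding hidden layers generalises this model, which is why robustness results for logistic regression carry over to the robust MLP the thesis proposes.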

10. Power, Phillip David. "Non-linear multi-layer perceptron channel equalisation." Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343086.

Books on the topic "Multi-layer perceptrons"

1. Ma, Zhe. Explanation by general rules extracted from trained multi-layer perceptrons. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1996.

2. Peeling, S. M. Experiments in isolated digit recognition using the multi-layer perceptron. [London]: HMSO, 1987.

3. Lont, Jerzy B. Analog CMOS implementation of a multi-layer perceptron with nonlinear synapses. Konstanz: Hartung-Gorre, 1994.

4. Shepherd, Adrian J. Second-order methods for neural networks: Fast and reliable training methods for multi-layer perceptrons. London: Springer, 1997.

5. Zheng, Gonghui. Design and evaluation of a multi-output-layer perceptron. [S.l.]: The Author, 1996.

6. Harrison, R. F. The multi-layer perceptron as an aid to the early diagnosis of myocardial infarction. Sheffield: University of Sheffield, Dept. of Control Engineering, 1990.

7. Ma, Zhe. Dynamic query algorithms for human-computer interaction based on information gain and the multi-layer perceptron. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1996.

8. Shepherd, Adrian J. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer, 2014.

9. Shepherd, Adrian J. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer London, Limited, 2012.

10. Dissertation: Autonomous Construction of Multi Layer Perceptron Neural Networks. Storming Media, 1997.

Book chapters on the topic "Multi-layer perceptrons"

1. Kruse, Rudolf, Christian Borgelt, Frank Klawonn, Christian Moewes, Matthias Steinbrecher, and Pascal Held. "Multi-Layer Perceptrons." In Texts in Computer Science, 47–81. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5013-8_5.

2. Kruse, Rudolf, Sanaz Mostaghim, Christian Borgelt, Christian Braune, and Matthias Steinbrecher. "Multi-layer Perceptrons." In Texts in Computer Science, 53–124. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-42227-1_5.

3. Conan-Guez, Brieuc, and Fabrice Rossi. "Phoneme Discrimination with Functional Multi-Layer Perceptrons." In Classification, Clustering, and Data Mining Applications, 157–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-642-17103-1_16.

4. Bo, Liefeng, Ling Wang, and Licheng Jiao. "Training Multi-layer Perceptrons Using MiniMin Approach." In Computational Intelligence and Security, 909–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11596448_135.

5. Goldberg, Yoav. "From Linear Models to Multi-layer Perceptrons." In Neural Network Methods for Natural Language Processing, 37–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-02165-7_3.

6. Arce, Fernando, Erik Zamora, Gerardo Hernández, Javier M. Antelis, and Humberto Sossa. "Recognizing Motor Imagery Tasks Using Deep Multi-Layer Perceptrons." In Machine Learning and Data Mining in Pattern Recognition, 468–82. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96133-0_35.

7. Ortigosa, E. M., P. M. Ortigosa, A. Cañas, E. Ros, R. Agís, and J. Ortega. "FPGA Implementation of Multi-layer Perceptrons for Speech Recognition." In Field Programmable Logic and Application, 1048–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45234-8_117.

8. Tanahashi, Yusuke, Kazumi Saito, and Ryohei Nakano. "Model Selection and Weight Sharing of Multi-layer Perceptrons." In Lecture Notes in Computer Science, 716–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11554028_100.

9. Köksal, Fatih, Ethem Alpaydın, and Günhan Dündar. "Weight Quantization for Multi-layer Perceptrons Using Soft Weight Sharing." In Artificial Neural Networks — ICANN 2001, 211–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_30.

10. Lappalainen, Harri, and Antti Honkela. "Bayesian Non-Linear Independent Component Analysis by Multi-Layer Perceptrons." In Advances in Independent Component Analysis, 93–121. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0443-8_6.

Conference papers on the topic "Multi-layer perceptrons"

1. Marchesi, M., G. Orlandi, F. Piazza, L. Pollonara, and A. Uncini. "Multi-layer perceptrons with discrete weights." In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137772.

2. Coe, Brian. "Multi-layer Perceptrons for Subvocal Recognition." In 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2017. http://dx.doi.org/10.1109/ictai.2017.00054.

3. Hauger, S., and T. Windeatt. "ECOC and boosting with multi-layer perceptrons." In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334565.

4. Yamany, Waleed, Mohammed Fawzy, Alaa Tharwat, and Aboul Ella Hassanien. "Moth-flame optimization for training Multi-Layer Perceptrons." In 2015 11th International Computer Engineering Conference (ICENCO). IEEE, 2015. http://dx.doi.org/10.1109/icenco.2015.7416360.

5. Alboaneen, Dabiah Ahmed, Huaglory Tianfield, and Yan Zhang. "Glowworm Swarm Optimisation for Training Multi-Layer Perceptrons." In UCC '17: 10th International Conference on Utility and Cloud Computing. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3148055.3148075.

6. Ritschel, W., T. Pfeifer, and R. Grob. "Rating of pattern classifications in multi-layer perceptrons." In the 1994 ACM symposium. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/326619.326684.

7. Bernardo-Torres, Abraham, and Pilar Gomez-Gil. "One-step forecasting of seismograms using multi-layer perceptrons." In 2009 6th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE 2009). IEEE, 2009. http://dx.doi.org/10.1109/iceee.2009.5393349.

8. Kabán, Ata. "Compressive Learning of Multi-layer Perceptrons: An Error Analysis." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8851743.

9. Buhrke, E. R., and J. L. LoCicero. "Fast learning for multi-layer perceptrons using statistical techniques." In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.225887.

10. Zheng, Lilei, Ying Zhang, and Vrizlynn L. L. Thing. "Understanding multi-layer perceptrons on spatial image steganalysis features." In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282181.

Reports on the topic "Multi-layer perceptrons"

1. Chen, B., T. Hickling, M. Krnjajic, W. Hanley, G. Clark, J. Nitao, D. Knapp, L. Hiller, and M. Mugge. Multi-Layer Perceptrons and Support Vector Machines for Detection Problems with Low False Alarm Requirements: an Eight-Month Progress Report. Office of Scientific and Technical Information (OSTI), January 2007. http://dx.doi.org/10.2172/922310.
