Ready-made bibliography on the topic "Multi-layer perceptrons"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles.

See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Multi-layer perceptrons".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the work's metadata.

Journal articles on the topic "Multi-layer perceptrons"

1. Racca, Robert. "Can periodic perceptrons replace multi-layer perceptrons?" Pattern Recognition Letters 21, no. 12 (November 2000): 1019–25. http://dx.doi.org/10.1016/s0167-8655(00)00057-x.

2. García-Pedrajas, N., D. Ortiz-Boyer, and C. Hervás-Martínez. "Cooperative coevolution of generalized multi-layer perceptrons". Neurocomputing 56 (January 2004): 257–83. http://dx.doi.org/10.1016/j.neucom.2003.09.004.

3. TSENG, YUEN-HSIEN, and JA-LING WU. "Decoding Reed-Muller codes by multi-layer perceptrons". International Journal of Electronics 75, no. 4 (October 1993): 589–94. http://dx.doi.org/10.1080/00207219308907134.

4. Buchholz, Sven, and Gerald Sommer. "On Clifford neurons and Clifford multi-layer perceptrons". Neural Networks 21, no. 7 (September 2008): 925–35. http://dx.doi.org/10.1016/j.neunet.2008.03.004.

5. Mirzai, A. R., A. Higgins, and D. Tsaptsinos. "Techniques for the minimisation of multi-layer perceptrons". Engineering Applications of Artificial Intelligence 6, no. 3 (June 1993): 265–77. http://dx.doi.org/10.1016/0952-1976(93)90069-a.

6. Egmont-Petersen, Michael, Jan L. Talmon, Arie Hasman, and Anton W. Ambergen. "Assessing the importance of features for multi-layer perceptrons". Neural Networks 11, no. 4 (June 1998): 623–35. http://dx.doi.org/10.1016/s0893-6080(98)00031-8.

7. Roque, Antonio Muñoz San, Carlos Maté, Javier Arroyo, and Ángel Sarabia. "iMLP: Applying Multi-Layer Perceptrons to Interval-Valued Data". Neural Processing Letters 25, no. 2 (February 15, 2007): 157–69. http://dx.doi.org/10.1007/s11063-007-9035-z.

8. Vlachos, D. S. "A Local Supervised Learning Algorithm For Multi-Layer Perceptrons". Applied Numerical Analysis & Computational Mathematics 1, no. 2 (December 2004): 535–39. http://dx.doi.org/10.1002/anac.200410016.

9. Moustafa, Essam B., and Ammar Elsheikh. "Predicting Characteristics of Dissimilar Laser Welded Polymeric Joints Using a Multi-Layer Perceptrons Model Coupled with Archimedes Optimizer". Polymers 15, no. 1 (January 2, 2023): 233. http://dx.doi.org/10.3390/polym15010233.

Abstract:
This study investigates the application of a coupled multi-layer perceptrons (MLP) model with Archimedes optimizer (AO) to predict characteristics of dissimilar lap joints made of polymethyl methacrylate (PMMA) and polycarbonate (PC). The joints were welded using the laser transmission welding (LTW) technique equipped with a beam wobbling feature. The inputs of the models were laser power, welding speed, pulse frequency, wobble frequency, and wobble width; whereas, the outputs were seam width and shear strength of the joint. The Archimedes optimizer was employed to obtain the optimal internal parameters of the multi-layer perceptrons. In addition to the Archimedes optimizer, the conventional gradient descent technique, as well as the particle swarm optimizer (PSO), was employed as internal optimizers of the multi-layer perceptrons model. The prediction accuracy of the three models was compared using different error measures. The AO-MLP outperformed the other two models. The computed root mean square errors of the MLP, PSO-MLP, and AO-MLP models are (39.798, 19.909, and 2.283) and (0.153, 0.084, and 0.0321) for shear strength and seam width, respectively.
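The abstract above describes tuning an MLP's internal parameters with population-based optimizers. As a rough illustration of that general idea (not the paper's AO-MLP; the data, network size, and hyperparameters below are invented), the sketch trains a tiny one-hidden-layer network on toy regression data with a plain particle swarm optimizer over the flattened weight vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for process inputs/outputs (hypothetical).
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

D, H = X.shape[1], 8                 # input dim, hidden units
n_params = D * H + H + H + 1         # W1, b1, w2, b2 flattened

def unpack(theta):
    i = 0
    W1 = theta[i:i + D * H].reshape(D, H); i += D * H
    b1 = theta[i:i + H]; i += H
    w2 = theta[i:i + H]; i += H
    return W1, b1, w2, theta[i]

def mse(theta):
    W1, b1, w2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ w2 + b2
    return float(np.mean((pred - y) ** 2))

# Plain PSO: each particle is a full weight vector for the MLP.
n_particles, n_iter = 30, 200
pos = rng.normal(0, 1, size=(n_particles, n_params))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(n_iter):
    r1 = rng.random((n_particles, n_params))
    r2 = rng.random((n_particles, n_params))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved] = pos[improved]
    pbest_err[improved] = err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print(mse(gbest))
```

The swarm should drive the training error well below the variance of the targets (the error of always predicting the mean), which is the usual sanity check for derivative-free weight optimization.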
10. Xi, Yan Hui, and Hui Peng. "Training Multi-Layer Perceptrons with the Unscented Kalman Particle Filter". Advanced Materials Research 542-543 (June 2012): 745–48. http://dx.doi.org/10.4028/www.scientific.net/amr.542-543.745.

Abstract:
Many Bayesian learning approaches to optimizing multi-layer perceptron (MLP) parameters have been proposed, such as the extended Kalman filter (EKF). In this paper, a sequential approach is applied to train the MLPs. Based on the particle filter, the approach, named the unscented Kalman particle filter (UPF), uses the unscented Kalman filter as the proposal distribution to generate the importance sampling density. The UPF is devised to deal with the high-dimensional parameter space that is inherent to neural network models. Simulation results show that the new algorithm performs better than traditional optimization methods such as the extended Kalman filter.

Doctoral dissertations on the topic "Multi-layer perceptrons"

1. Zhao, Lenny. "Uncertainty prediction with multi-layer perceptrons". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55733.pdf.

2. Cairns, Graham Andrew. "Learning with analogue VLSI multi-layer perceptrons". Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296901.

3. Papadopoulos, Georgios. "Theoretical issues and practical considerations concerning confidence measures for multi-layer perceptrons". Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/12753.

Abstract:
The primary aim of this thesis is to study existing confidence measure (CM) methods and assess their practicability and performance in harsh real-world environments. The motivation for this work was a real industrial application - the development of a paper curl prediction system. Curl is an important paper quality parameter that can only be measured after production. The available data were sparse and were known to be corrupted by gross errors. Moreover, it was suspected that data noise was not constant over input space. Three approaches were identified as suitable for use in real-world applications: maximum likelihood (ML), the approximate Bayesian approach and the bootstrap technique. These methods were initially compared using a standard CM performance evaluation method, based on estimating the prediction interval coverage probability (PICP). It was found that the PICP metric can only gauge CM performance as an average over the input space. However, local CM performance is crucial because a CM must associate low confidence with high data noise/low data density regions and high confidence with low noise/high data density regions. Moreover, evaluating local performance could be used to gauge the input-dependency of the noise in the data. For this reason, a new CM evaluation technique was developed to study local CM performance. The new approach, called classification of local uncertainty estimates (CLUES), was then used for a new comparison study, this time in the light of local performance. Three main conclusions were reached: the noise in the curl data was found to have input-dependent variance, the approximate Bayesian approach outperformed the other two in most cases, and the bootstrap technique was found to be inferior to both ML and Bayesian methods for data sets of input-dependent data noise variance.
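The thesis abstract above evaluates confidence measures via the prediction interval coverage probability: the fraction of held-out targets that fall inside their predicted intervals. The metric itself is simple to compute; a minimal sketch with made-up numbers:

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction interval coverage probability: fraction of targets
    falling inside their predicted [lower, upper] interval."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

# Hypothetical example: point predictions with +/- 2*sigma intervals.
y = np.array([1.0, 2.5, 0.3, 4.1])
pred = np.array([1.1, 2.0, 0.0, 4.0])
sigma = np.array([0.2, 0.2, 0.2, 0.2])
coverage = picp(y, pred - 2 * sigma, pred + 2 * sigma)
print(coverage)  # 0.75: 2.5 lies outside its interval [1.6, 2.4]
```

As the abstract notes, such a global average says nothing about where in input space the intervals fail, which is what motivates the thesis's local (CLUES) evaluation.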
4. Shepherd, Adrian John. "Novel second-order techniques and global optimisation methods for supervised training of multi-layer perceptrons". Thesis, University College London (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321662.

5. Collobert, Ronan. "Algorithmes d'Apprentissage pour grandes bases de données" [Learning algorithms for large databases]. Paris 6, 2004. http://www.theses.fr/2004PA066063.

6. Shao, Hang. "A Fast MLP-based Learning Method and its Application to Mine Countermeasure Missions". Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23512.

Abstract:
In this research, a novel machine learning method is designed and applied to Mine Countermeasure Missions. Like some kernel methods, the proposed approach seeks to compute a linear model in another, higher-dimensional feature space. However, no kernel is used and the feature mapping is explicit; computation can be done directly in the accessible feature space. In the proposed approach, the feature projection is implemented by constructing a large hidden layer, which departs from the traditional view that a Multi-Layer Perceptron is funnel-shaped with the hidden layer acting as a feature extractor. The proposed approach is a general method that can be applied to various problems. It is able to improve the performance of neural-network-based methods and the learning speed of support vector machines. The classification speed of the proposed approach is also faster than that of kernel machines on the mine countermeasure mission task.
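The explicit-feature-mapping idea in the abstract above (a large, randomly weighted hidden layer followed by a linear model fitted directly in the resulting feature space) can be sketched roughly as follows. This is only an illustration of the general technique, not the thesis's exact method; the data, layer width, and ridge penalty are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data (hypothetical stand-in for sonar features).
X = rng.normal(size=(300, 5))
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(float)

# Explicit feature mapping: one large hidden layer with fixed random weights.
n_hidden = 500
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)
Phi = np.tanh(X @ W + b)          # the accessible high-dimensional feature space

# Linear model fitted directly in that space (ridge-regularised least squares).
lam = 1e-2
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_hidden), Phi.T @ (2 * y - 1))

pred = (Phi @ w > 0).astype(float)
train_acc = float((pred == y).mean())
print(train_acc)
```

Only the linear readout is fitted, which is what makes training fast compared with backpropagating through the hidden layer or solving a kernel machine's dual problem.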
7. Coughlin, Michael J. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks". Griffith University, School of Applied Psychology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030409.110949.

Abstract:
The electro-oculogram (EOG) is the most widely used technique for recording eye movements in clinical settings. It is inexpensive, practical, and non-invasive. Use of EOG is usually restricted to horizontal recordings as vertical EOG contains eyelid artefact (Oster & Stern, 1980) and blinks. The ability to analyse two dimensional (2D) eye movements may provide additional diagnostic information on pathologies, and further insights into the nature of brain functioning. Simultaneous recording of both horizontal and vertical EOG also introduces other difficulties into calibration of the eye movements, such as different gains in the two signals, and misalignment of electrodes producing crosstalk. These transformations of the signals create problems in relating the two dimensional EOG to actual rotations of the eyes. The application of an artificial neural network (ANN) that could map 2D recordings into 2D eye positions would overcome this problem and improve the utility of EOG. To determine whether ANNs are capable of correctly calibrating the saccadic eye movement data from 2D EOG (i.e. performing the necessary inverse transformation), the ANNs were first tested on data generated from mathematical models of saccadic eye movements. Multi-layer perceptrons (MLPs) with non-linear activation functions and trained with back propagation proved to be capable of calibrating simulated EOG data to a mean accuracy of 0.33° of visual angle (SE = 0.01). Linear perceptrons (LPs) were only nearly half as accurate. For five subjects performing a saccadic eye movement task in the upper right quadrant of the visual field, the mean accuracy provided by the MLPs was 1.07° of visual angle (SE = 0.01) for EOG data, and 0.95° of visual angle (SE = 0.03) for infrared limbus reflection (IRIS®) data. MLPs enabled calibration of 2D saccadic EOG to an accuracy not significantly different to that obtained with the infrared limbus tracking data.
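The calibration problem described above amounts to inverting the unknown gain-and-crosstalk transformation between true eye position and the recorded channels. When that transformation is approximately linear, even a least-squares fit recovers it well, as in the toy sketch below (all numbers are simulated for illustration, not the thesis's data); the thesis's MLPs extend this to the non-linear case, where a linear calibration breaks down:

```python
import numpy as np

rng = np.random.default_rng(2)

# True eye positions in degrees (hypothetical, upper-right quadrant).
eye = rng.uniform(0, 20, size=(100, 2))

# Simulated 2D EOG: per-channel gains plus electrode-misalignment crosstalk,
# with a little measurement noise.
A = np.array([[1.8, 0.3],
              [0.2, 1.5]])
eog = eye @ A.T + rng.normal(0, 0.05, size=eye.shape)

# Linear calibration: least-squares map from EOG channels back to eye position.
M, _, _, _ = np.linalg.lstsq(eog, eye, rcond=None)
recovered = eog @ M

mean_err = float(np.abs(recovered - eye).mean())  # degrees of visual angle
print(mean_err)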
8. Coughlin, Michael J. "Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks". Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365854.
Thesis (PhD Doctorate), Doctor of Philosophy, School of Applied Psychology, Griffith Health.

9. Dunne, R. A. "Multi-layer perceptron models for classification". PhD thesis, Murdoch University, 2003. https://researchrepository.murdoch.edu.au/id/eprint/50257/.

Abstract:
This thesis concerns the Multi-layer Perceptron (MLP) model, one of a variety of neural network models that have come into wide prominence since the mid 1980s for the classification of individuals into pre-defined classes based on a vector of individual measurements. Each discipline in which the MLP model has had influence, including computing, electrical engineering and psychology, has recast the model into its own language and imbued it with its own concerns. This divergence of terminologies has made the literature somewhat impenetrable but has also led to an appreciation of other disciplines' priorities and interests. The major aim of the thesis has been to bring the MLP model within the framework of statistics. We have two aims here: one is to make the MLP model more intelligible to statisticians; and the other is to bring the insights of statistics to the MLP model. A statistical modeling approach can make valuable contributions, ranging from small but important clarifications, such as clearing up the confusion in the MLP literature between the model and the methodology for fitting the model, to much larger insights such as determining the robustness of the model in the event of outlying or atypical data. We provide a treatment of the relationship of the MLP classifier to more familiar statistical models and of the various fitting and model selection methodologies currently used for MLP models. A description of the influence curves of the MLP is provided, leading to both an understanding of how the MLP model relates to logistic regression (and to robust versions of logistic regression) and to a proposal for a robust MLP model. Practical problems associated with the fitting of MLP models, from the effects of scaling of the input data to the effects of various penalty terms, are also considered. The MLP model has a variable architecture with the major source of variation being the number of hidden layer processing units. A direct method is given for determining this in multi-class problems where the pairwise decision boundary is linear in the feature space. Finally, in applications such as remote sensing, each vector of measurements or pixel contains contextual information about the neighboring pixels. The MLP model is modified to incorporate this contextual information into the classification procedure.
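One correspondence the abstract above draws, between the MLP and logistic regression, is easy to demonstrate: an MLP with no hidden layer and a sigmoid output unit, trained by gradient descent on the cross-entropy loss, is exactly logistic regression. A small sketch on invented, linearly separable data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated Gaussian classes in 2D (invented for illustration).
X = np.r_[rng.normal(-2, 1, size=(100, 2)), rng.normal(2, 1, size=(100, 2))]
y = np.r_[np.zeros(100), np.ones(100)]

# A "zero-hidden-layer MLP": one sigmoid output unit, cross-entropy loss.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation = class probability
    grad_w = X.T @ (p - y) / len(y)           # cross-entropy gradient, as in logistic regression
    grad_b = float((p - y).mean())
    w -= lr * grad_w
    b -= lr * grad_b

acc = float((((X @ w + b) > 0) == (y == 1)).mean())
print(acc)
```

The gradient here is identical to the score equations of logistic regression, which is the starting point for the robustness analysis (influence curves) that the thesis develops.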
10. Power, Phillip David. "Non-linear multi-layer perceptron channel equalisation". Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343086.


Books on the topic "Multi-layer perceptrons"

1. Ma, Zhe. Explanation by general rules extracted from trained multi-layer perceptrons. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1996.

2. Peeling, S. M. Experiments in isolated digit recognition using the multi-layer perceptron. [London]: HMSO, 1987.

3. Lont, Jerzy B. Analog CMOS implementation of a multi-layer perceptron with nonlinear synapses. Konstanz: Hartung-Gorre, 1994.

4. Shepherd, Adrian J. Second-order methods for neural networks: Fast and reliable training methods for multi-layer perceptrons. London: Springer, 1997.

5. Zheng, Gonghui. Design and evaluation of a multi-output-layer perceptron. [S.l.]: The Author, 1996.

6. Harrison, R. F. The multi-layer perceptron as an aid to the early diagnosis of myocardial infarction. Sheffield: University of Sheffield, Dept. of Control Engineering, 1990.

7. Ma, Zhe. Dynamic query algorithms for human-computer interaction based on information gain and the multi-layer perceptron. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1996.

8. Shepherd, Adrian J. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer, 2014.

9. Shepherd, Adrian J. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer London, Limited, 2012.

10. Dissertation: Autonomous Construction of Multi Layer Perceptron Neural Networks. Storming Media, 1997.


Book chapters on the topic "Multi-layer perceptrons"

1. Kruse, Rudolf, Christian Borgelt, Frank Klawonn, Christian Moewes, Matthias Steinbrecher, and Pascal Held. "Multi-Layer Perceptrons". In Texts in Computer Science, 47–81. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5013-8_5.

2. Kruse, Rudolf, Sanaz Mostaghim, Christian Borgelt, Christian Braune, and Matthias Steinbrecher. "Multi-layer Perceptrons". In Texts in Computer Science, 53–124. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-42227-1_5.

3. Conan-Guez, Brieuc, and Fabrice Rossi. "Phoneme Discrimination with Functional Multi-Layer Perceptrons". In Classification, Clustering, and Data Mining Applications, 157–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-642-17103-1_16.

4. Bo, Liefeng, Ling Wang, and Licheng Jiao. "Training Multi-layer Perceptrons Using MiniMin Approach". In Computational Intelligence and Security, 909–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11596448_135.

5. Goldberg, Yoav. "From Linear Models to Multi-layer Perceptrons". In Neural Network Methods for Natural Language Processing, 37–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-02165-7_3.

6. Arce, Fernando, Erik Zamora, Gerardo Hernández, Javier M. Antelis, and Humberto Sossa. "Recognizing Motor Imagery Tasks Using Deep Multi-Layer Perceptrons". In Machine Learning and Data Mining in Pattern Recognition, 468–82. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96133-0_35.

7. Ortigosa, E. M., P. M. Ortigosa, A. Cañas, E. Ros, R. Agís, and J. Ortega. "FPGA Implementation of Multi-layer Perceptrons for Speech Recognition". In Field Programmable Logic and Application, 1048–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45234-8_117.

8. Tanahashi, Yusuke, Kazumi Saito, and Ryohei Nakano. "Model Selection and Weight Sharing of Multi-layer Perceptrons". In Lecture Notes in Computer Science, 716–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11554028_100.

9. Köksal, Fatih, Ethem Alpaydın, and Günhan Dündar. "Weight Quantization for Multi-layer Perceptrons Using Soft Weight Sharing". In Artificial Neural Networks — ICANN 2001, 211–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_30.

10. Lappalainen, Harri, and Antti Honkela. "Bayesian Non-Linear Independent Component Analysis by Multi-Layer Perceptrons". In Advances in Independent Component Analysis, 93–121. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0443-8_6.


Conference papers on the topic "Multi-layer perceptrons"

1. Marchesi, M., G. Orlandi, F. Piazza, L. Pollonara, and A. Uncini. "Multi-layer perceptrons with discrete weights". In 1990 IJCNN International Joint Conference on Neural Networks. IEEE, 1990. http://dx.doi.org/10.1109/ijcnn.1990.137772.

2. Coe, Brian. "Multi-layer Perceptrons for Subvocal Recognition". In 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2017. http://dx.doi.org/10.1109/ictai.2017.00054.

3. Hauger, S., and T. Windeatt. "ECOC and boosting with multi-layer perceptrons". In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334565.

4. Yamany, Waleed, Mohammed Fawzy, Alaa Tharwat, and Aboul Ella Hassanien. "Moth-flame optimization for training Multi-Layer Perceptrons". In 2015 11th International Computer Engineering Conference (ICENCO). IEEE, 2015. http://dx.doi.org/10.1109/icenco.2015.7416360.

5. Alboaneen, Dabiah Ahmed, Huaglory Tianfield, and Yan Zhang. "Glowworm Swarm Optimisation for Training Multi-Layer Perceptrons". In UCC '17: 10th International Conference on Utility and Cloud Computing. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3148055.3148075.

6. Ritschel, W., T. Pfeifer, and R. Grob. "Rating of pattern classifications in multi-layer perceptrons". In the 1994 ACM symposium. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/326619.326684.

7. Bernardo-Torres, Abraham, and Pilar Gomez-Gil. "One-step forecasting of seismograms using multi-layer perceptrons". In 2009 6th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE 2009). IEEE, 2009. http://dx.doi.org/10.1109/iceee.2009.5393349.

8. Kaban, Ata. "Compressive Learning of Multi-layer Perceptrons: An Error Analysis". In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8851743.

9. Buhrke, E. R., and J. L. LoCicero. "Fast learning for multi-layer perceptrons using statistical techniques". In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.225887.

10. Zheng, Lilei, Ying Zhang, and Vrizlynn L. L. Thing. "Understanding multi-layer perceptrons on spatial image steganalysis features". In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282181.


Reports on the topic "Multi-layer perceptrons"

1. Chen, B., T. Hickling, M. Krnjajic, W. Hanley, G. Clark, J. Nitao, D. Knapp, L. Hiller, and M. Mugge. Multi-Layer Perceptrons and Support Vector Machines for Detection Problems with Low False Alarm Requirements: an Eight-Month Progress Report. Office of Scientific and Technical Information (OSTI), January 2007. http://dx.doi.org/10.2172/922310.