Journal articles on the topic 'Perceptron'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Perceptron.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Sheng-De, and Tsong-Chih Hsu. "Perceptron–perceptron net." Pattern Recognition Letters 19, no. 7 (May 1998): 559–68. http://dx.doi.org/10.1016/s0167-8655(98)00045-2.

2

Yu, Xin, Mian Xie, Li Xia Tang, and Chen Yu Li. "Learning Algorithm for Fuzzy Perceptron with Max-Product Composition." Applied Mechanics and Materials 687-691 (November 2014): 1359–62. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1359.

Abstract:
Fuzzy neural networks are powerful computational models that integrate fuzzy systems with neural networks, and the fuzzy perceptron is one kind of such network. In this paper, a learning algorithm is proposed for a fuzzy perceptron with max-product composition, whose topological structure is the same as that of conventional linear perceptrons. The inner operations involved in the working process of this fuzzy perceptron are based on max-product logical operations rather than conventional multiplication and summation. To illustrate the finite convergence of the proposed algorithm, some numerical experiments are provided.
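As a rough illustration (not the authors' algorithm), the sketch below contrasts a conventional weighted-sum forward pass with a max-product one; the weights, inputs, and threshold are invented:

```python
import numpy as np

def conventional_forward(w, x, theta):
    # Classical linear perceptron: weighted sum followed by a hard threshold.
    return int(np.dot(w, x) >= theta)

def max_product_forward(w, x, theta):
    # Max-product composition: the sum is replaced by a maximum, so the
    # unit fires only if some single product w_i * x_i reaches the threshold.
    return int(np.max(w * x) >= theta)

w = np.array([0.9, 0.2, 0.6])           # illustrative fuzzy weights in [0, 1]
x = np.array([0.3, 0.8, 0.7])           # illustrative fuzzy inputs in [0, 1]
print(conventional_forward(w, x, 0.5))  # 1: 0.27 + 0.16 + 0.42 = 0.85 >= 0.5
print(max_product_forward(w, x, 0.5))   # 0: max(0.27, 0.16, 0.42) = 0.42 < 0.5
```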
3

Banda, Peter, Christof Teuscher, and Matthew R. Lakin. "Online Learning in a Chemical Perceptron." Artificial Life 19, no. 2 (April 2013): 195–219. http://dx.doi.org/10.1162/artl_a_00105.

Abstract:
Autonomous learning implemented purely by means of a synthetic chemical system has not been previously realized. Learning promotes reusability and minimizes the system design to simple input-output specification. In this article we introduce a chemical perceptron, the first full-featured implementation of a perceptron in an artificial (simulated) chemistry. A perceptron is the simplest system capable of learning, inspired by the functioning of a biological neuron. Our artificial chemistry is deterministic and discrete-time, and follows Michaelis-Menten kinetics. We present two models, the weight-loop perceptron and the weight-race perceptron, which represent two possible strategies for a chemical implementation of linear integration and threshold. Both chemical perceptrons can successfully identify all 14 linearly separable two-input logic functions and maintain high robustness against rate-constant perturbations. We suggest that DNA strand displacement could, in principle, provide an implementation substrate for our model, allowing the chemical perceptron to perform reusable, programmable, and adaptable wet biochemical computing.
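The abstract's count of 14 linearly separable two-input logic functions (all except XOR and XNOR) is easy to verify with an ordinary, non-chemical perceptron; a minimal sketch using the classical learning rule:

```python
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # the four two-bit inputs

def perceptron_learns(targets, epochs=200, lr=1.0):
    # Classical perceptron rule with a bias term; by the perceptron
    # convergence theorem it converges iff the truth table is separable.
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, targets):
            y = int(w @ x + b > 0)
            if y != t:
                w += lr * (t - y) * x
                b += lr * (t - y)
                errors += 1
        if errors == 0:
            return True
    return False

learnable = sum(perceptron_learns(np.array(tt))
                for tt in itertools.product([0, 1], repeat=4))
print(learnable)  # 14 -- every two-input function except XOR and XNOR
```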
4

TOH, H. S. "WEIGHT CONFIGURATIONS OF TRAINED PERCEPTRONS." International Journal of Neural Systems 04, no. 03 (September 1993): 231–46. http://dx.doi.org/10.1142/s0129065793000195.

Abstract:
We strive to predict the function mapping and rules performed by a trained perceptron by studying its weights. We derive a few properties of the trained weights and show how the perceptron's representation of knowledge, rules and functions depends on these properties. Two types of perceptrons are studied: one with continuous inputs and one hidden layer, the other a simple binary classifier with Boolean inputs and no hidden units.
5

Nadal, J. P., and N. Parga. "Duality Between Learning Machines: A Bridge Between Supervised and Unsupervised Learning." Neural Computation 6, no. 3 (May 1994): 491–508. http://dx.doi.org/10.1162/neco.1994.6.3.491.

Abstract:
We exhibit a duality between two perceptrons that allows us to compare the theoretical analysis of supervised and unsupervised learning tasks. The first perceptron has one output and is asked to learn a classification of p patterns. The second (dual) perceptron has p outputs and is asked to transmit as much information as possible on a distribution of inputs. We show in particular that the maximum information that can be stored in the couplings for the supervised learning task is equal to the maximum information that can be transmitted by the dual perceptron.
6

Adams, A., and S. J. Bye. "New perceptron." Electronics Letters 28, no. 3 (1992): 321. http://dx.doi.org/10.1049/el:19920199.

7

Martinelli, G., and F. M. Mascioli. "Cascade perceptron." Electronics Letters 28, no. 10 (May 7, 1992): 947–49. http://dx.doi.org/10.1049/el:19920600.

8

Ringienė, Laura, and Gintautas Dzemyda. "Specialios struktūros daugiasluoksnis perceptronas daugiamačiams duomenims vizualizuoti." Informacijos mokslai 50 (January 1, 2009): 358–64. http://dx.doi.org/10.15388/im.2009.0.3210.

Abstract:
A combination of radial basis functions and a multilayer perceptron is proposed and investigated for multidimensional data visualization. The special feed-forward neural network consists of a radial basis function layer followed by a multilayer perceptron. The proposed approach includes clustering the multidimensional data, determining the parameters of the radial basis functions, reducing the dimensionality of the data, and forming the data set to train the multilayer perceptron; the outputs of the last hidden layer are taken as the coordinates of the visualized points. (Translated from Lithuanian; the article includes an English summary, 'Special Multilayer Perceptron for Multidimensional Data Visualization.')
9

KUKOLJ, DRAGAN D., MIROSLAVA T. BERKO-PUSIC, and BRANISLAV ATLAGIC. "Experimental design of supervisory control functions based on multilayer perceptrons." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 5 (November 2001): 425–31. http://dx.doi.org/10.1017/s0890060401155058.

Abstract:
This article presents the results of research concerning possibilities of applying multilayer perceptron type of neural network for fault diagnosis, state estimation, and prediction in the gas pipeline transmission network. The influence of several factors on accuracy of the multilayer perceptron was considered. The emphasis was put on the multilayer perceptrons' function as a state estimator. The choice of the most informative features, the amount and sampling period of training data sets, as well as different configurations of multilayer perceptrons were analyzed.
10

Elizalde, E., and S. Gomez. "Multistate perceptrons: learning rule and perceptron of maximal stability." Journal of Physics A: Mathematical and General 25, no. 19 (October 7, 1992): 5039–45. http://dx.doi.org/10.1088/0305-4470/25/19/016.

11

Legenstein, Robert, and Wolfgang Maass. "On the Classification Capability of Sign-Constrained Perceptrons." Neural Computation 20, no. 1 (January 2008): 288–309. http://dx.doi.org/10.1162/neco.2008.20.1.288.

Abstract:
The perceptron (also referred to as McCulloch-Pitts neuron, or linear threshold gate) is commonly used as a simplified model for the discrimination and learning capability of a biological neuron. Criteria that tell us when a perceptron can implement (or learn to implement) all possible dichotomies over a given set of input patterns are well known, but only for the idealized case, where one assumes that the sign of a synaptic weight can be switched during learning. We present in this letter an analysis of the classification capability of the biologically more realistic model of a sign-constrained perceptron, where the signs of synaptic weights remain fixed during learning (which is the case for most types of biological synapses). In particular, the VC-dimension of sign-constrained perceptrons is determined, and a necessary and sufficient criterion is provided that tells us when all 2^m dichotomies over a given set of m patterns can be learned by a sign-constrained perceptron. We also show that uniformity of L1 norms of input patterns is a sufficient condition for full representation power in the case where all weights are required to be nonnegative. Finally, we exhibit cases where the sign constraint of a perceptron drastically reduces its classification capability. Our theoretical analysis is complemented by computer simulations, which demonstrate in particular that sparse input patterns improve the classification capability of sign-constrained perceptrons.
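The paper's analysis is theoretical; as an illustrative sketch only, one simple way to train under fixed weight signs is to apply the usual mistake-driven update and project the weights back onto the allowed orthant (the data and sign pattern below are invented):

```python
import numpy as np

def train_sign_constrained(X, y, signs, epochs=100, lr=1.0):
    """Perceptron learning with fixed weight signs.

    signs[i] = +1 forces w[i] >= 0, signs[i] = -1 forces w[i] <= 0.
    Labels y are in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * (w @ x) <= 0:              # mistake-driven update
                w += lr * t * x
                # Project back: clip any weight that violates its sign.
                w = np.where(signs > 0, np.maximum(w, 0.0),
                                        np.minimum(w, 0.0))
    return w

# Example: 3 features, first two constrained nonnegative, third nonpositive.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = np.sign(2 * X[:, 0] + X[:, 1] - X[:, 2])  # separable by a conforming w
w = train_sign_constrained(X, y, signs=np.array([1, 1, -1]))
print(w)
```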
12

Shimada, Kazutaka, Ryosuke Muto, and Tsutomu Endo. "A Combined Method Based on SVM and Online Learning with HOG for Hand Shape Recognition." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 6 (September 20, 2012): 687–95. http://dx.doi.org/10.20965/jaciii.2012.p0687.

Abstract:
In this paper, we propose a combined method for hand shape recognition. It consists of Support Vector Machines (SVMs) and an online learning algorithm based on the perceptron, and we apply HOG features to each method. First, our method estimates the hand shape of an input image by using SVMs. During recognition, the perceptron-based online learning method uses an input image as new training data if the image is effective for relearning. Next, we select a final hand shape from the outputs of the SVMs and perceptrons by using the score from the SVMs. The combined method with the online perceptron is robust against unknown users because it contains a relearning process for the current user; applying the online perceptron therefore leads to an improvement in accuracy. We compare the combined method with a method that uses only SVMs. Experimental results show the effectiveness of the proposed method.
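A hedged sketch of this style of combination using scikit-learn; the random features stand in for HOG descriptors, and the margin threshold and relearning rule are assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(0)
X_train = rng.random((120, 64))      # stand-in for HOG descriptors
y_train = rng.integers(0, 3, 120)    # three hypothetical hand-shape classes

svm = SVC(kernel="linear")           # offline classifier, trained once
online = Perceptron()                # perceptron updated during recognition
svm.fit(X_train, y_train)
online.partial_fit(X_train, y_train, classes=np.unique(y_train))

def recognize(x, margin=0.5):
    """Return a label, trusting the SVM when confident, else the perceptron."""
    x = x.reshape(1, -1)
    score = np.abs(svm.decision_function(x)).max()
    if score >= margin:
        label = svm.predict(x)[0]
        online.partial_fit(x, [label])   # relearn from a confident example
    else:
        label = online.predict(x)[0]     # adapted online to the current user
    return label

print(recognize(rng.random(64)))
```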
13

Ravichandran, K., S. Arulchelvan, and K. PeriyaKannan. "Perception of medical awareness of media analyzed by multilayer perceptron." Journal of Media and Communication Studies 10, no. 10 (November 30, 2018): 118–27. http://dx.doi.org/10.5897/jmcs2018.0627.

14

Yang, Yoona, Zvi A. Yaari, Alex Settle, Daniel A. Heller, Ming Zheng, and Anand Jagota. "Machine Learning for Molecular Perceptron: A Perception-Based Sensing System." ECS Meeting Abstracts MA2020-01, no. 6 (May 1, 2020): 632. http://dx.doi.org/10.1149/ma2020-016632mtgabs.

15

Chen, Jiun-Hung, and Chu-Song Chen. "Fuzzy kernel perceptron." IEEE Transactions on Neural Networks 13, no. 6 (November 2002): 1364–73. http://dx.doi.org/10.1109/tnn.2002.804311.

16

Anguita, D., A. Boni, and S. Ridella. "Digital kernel perceptron." Electronics Letters 38, no. 10 (2002): 445. http://dx.doi.org/10.1049/el:20020330.

17

Martinelli, G., L. Prina Ricotti, and S. Ragazzini. "Distributed arithmetic perceptron." IEE Proceedings - Circuits, Devices and Systems 141, no. 5 (1994): 382. http://dx.doi.org/10.1049/ip-cds:19941187.

18

Caminhas, Walmir M., Douglas A. G. Vieira, and João A. Vasconcelos. "Parallel layer perceptron." Neurocomputing 55, no. 3-4 (October 2003): 771–78. http://dx.doi.org/10.1016/s0925-2312(03)00440-5.

19

ANDRECUT, M., and M. K. ALI. "A QUANTUM PERCEPTRON." International Journal of Modern Physics B 16, no. 04 (February 10, 2002): 639–45. http://dx.doi.org/10.1142/s0217979202009998.

Abstract:
The task of a classical perceptron is to classify two classes of patterns by generating a separating hyperplane. Here, we give a complete description of a quantum perceptron. The quantum algorithms for classification and learning are formulated in terms of unitary quantum gate operators. In the quantum case, the concept of separable or non-separable classes is irrelevant, because the quantum perceptron can learn a superposition of patterns that are not separable by a hyperplane.
20

Serrano-Rubio, Juan Pablo, Arturo Hernández-Aguirre, and Rafael Herrera-Guzmán. "Hyperconic Multilayer Perceptron." Neural Processing Letters 45, no. 1 (February 24, 2016): 29–58. http://dx.doi.org/10.1007/s11063-016-9505-2.

21

Chakraborty, Arya. "Perceptron Collaborative Filtering." International Journal for Research in Applied Science and Engineering Technology 11, no. 2 (February 28, 2023): 437–47. http://dx.doi.org/10.22214/ijraset.2023.49044.

Abstract:
While multivariate logistic regression classifiers are a great way of implementing collaborative filtering (a method of making automatic predictions about the interests of a user by collecting preferences or taste information from many other users), we can also achieve similar results using neural networks. A recommender system is a subclass of information filtering system that provides suggestions for the items most pertinent to a particular user. A perceptron or a neural network is a machine learning model designed for fitting complex datasets using backpropagation and gradient descent. When coupled with advanced optimization techniques, the model may prove to be a great substitute for classical logistic classifiers. The optimizations include feature scaling, mean normalization, regularization, hyperparameter tuning, and using stochastic/mini-batch gradient descent instead of regular batch gradient descent. In this use case, we use the perceptron in the recommender system to fit the parameters, i.e., the data from a multitude of users, and use it to predict the preference/interest of a particular user.
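A minimal numpy sketch of the pipeline the abstract lists (feature scaling/mean normalization, L2 regularization, mini-batch gradient descent); the data and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 8))                  # hypothetical user/item features
y = rng.integers(0, 2, 500)               # 1 = user liked the item

# Feature scaling / mean normalization.
X = (X - X.mean(axis=0)) / X.std(axis=0)

w, b, lr, lam, batch = np.zeros(8), 0.0, 0.1, 0.01, 32
for epoch in range(100):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):            # mini-batch GD
        B = idx[start:start + batch]
        p = 1.0 / (1.0 + np.exp(-(X[B] @ w + b)))    # sigmoid output
        grad = X[B].T @ (p - y[B]) / len(B)
        w -= lr * (grad + lam * w)                   # L2 regularization
        b -= lr * np.mean(p - y[B])

predict = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))  # preference score
print(predict(X[0]))
```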
22

Bridgeman, Bruce. "The dynamical model is a Perceptron." Behavioral and Brain Sciences 21, no. 5 (October 1998): 631–32. http://dx.doi.org/10.1017/s0140525x98251739.

Abstract:
Van Gelder's example of a dynamical model is a Perceptron. The similarity of dynamical models and Perceptrons in turn exemplifies the close relationship between dynamical and algorithmic models. Both are models, not literal descriptions of brains. The brain states of standard modeling are better conceived as processes in the dynamical sense, but algorithmic models remain useful.
23

LIANG, XUN, and SHAOWEI XIA. "METHODS OF TRAINING AND CONSTRUCTING MULTILAYER PERCEPTRONS WITH ARBITRARY PATTERN SETS." International Journal of Neural Systems 06, no. 03 (September 1995): 233–47. http://dx.doi.org/10.1142/s0129065795000172.

Abstract:
This paper presents two compensation methods for multilayer perceptrons (MLPs) that are very difficult to train by traditional Back Propagation (BP) methods. For MLPs trapped in local minima, the compensating methods correct the wrong outputs one by one using constructive techniques until all outputs are right, so that the MLPs can escape from the local minima to the global minimum. A hidden neuron is added as compensation for a binary-input three-layer perceptron trapped in a local minimum, and one or two hidden neurons are added as compensation for a real-input three-layer perceptron. For a perceptron of more than three layers, the second hidden layer from behind is temporarily treated as the input layer during compensation, hence the above methods can also be used. Examples are given.
24

BLATT, MARCELO, EYTAN DOMANY, and IDO KANTER. "ON THE EQUIVALENCE OF TWO-LAYERED PERCEPTRONS WITH BINARY NEURONS." International Journal of Neural Systems 06, no. 03 (September 1995): 225–31. http://dx.doi.org/10.1142/s0129065795000160.

Abstract:
We consider two-layered perceptrons consisting of N binary input units, K binary hidden units and one binary output unit, in the limit N≫K≥1. We prove that the weights of a regular irreducible network are uniquely determined by its input-output map up to some obvious global symmetries. A network is regular if its K weight vectors from the input layer to the K hidden units are linearly independent. A (single layered) perceptron is said to be irreducible if its output depends on every one of its input units; and a two-layered perceptron is irreducible if the K+1 perceptrons that constitute such network are irreducible. By global symmetries we mean, for instance, permuting the labels of the hidden units. Hence, two irreducible regular two-layered perceptrons that implement the same Boolean function must have the same number of hidden units, and must be composed of equivalent perceptrons.
25

Belue, Lisa M., Kenneth W. Bauer, and Dennis W. Ruck. "Selecting Optimal Experiments for Multiple Output Multilayer Perceptrons." Neural Computation 9, no. 1 (January 1, 1997): 161–83. http://dx.doi.org/10.1162/neco.1997.9.1.161.

Abstract:
Where should a researcher conduct experiments to provide training data for a multilayer perceptron? This question is investigated, and a statistical method for selecting optimal experimental design points for multiple output multilayer perceptrons is introduced. Multiple class discrimination problems are examined using a framework in which the multilayer perceptron is viewed as a multivariate nonlinear regression model. Following a Bayesian formulation for the case where the variance-covariance matrix of the responses is unknown, a selection criterion is developed. This criterion is based on the volume of the joint confidence ellipsoid for the weights in a multilayer perceptron. An example is used to demonstrate the superiority of optimally selected design points over randomly chosen points, as well as points chosen in a grid pattern. Simplification of the basic criterion is offered through the use of Hadamard matrices to produce uncorrelated outputs.
26

Ho, Charlotte Yuk-Fan, Bingo Wing-Kuen Ling, and Herbert Ho-Ching Iu. "Invariant Set of Weight of Perceptron Trained by Perceptron Training Algorithm." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 40, no. 6 (December 2010): 1521–30. http://dx.doi.org/10.1109/tsmcb.2010.2042444.

27

Sondakh, Debby E. "Data Mining for Healthcare Data: A Comparison of Neural Networks Algorithms." CogITo Smart Journal 3, no. 1 (July 13, 2017): 10. http://dx.doi.org/10.31154/cogito.v3i1.40.10-19.

Abstract:
Classification has been considered an important tool for extracting useful information from healthcare datasets; it may be applied, for example, to recognize a disease from its symptoms. This paper compares and evaluates different neural network classification algorithms on healthcare datasets. The algorithms considered here are Multilayer Perceptron, Radial Basis Function, and Voted Perceptron, which are tested on classifier accuracy, precision, mean absolute error and root mean squared error rates, and classifier training time. All the algorithms are applied to five multivariate healthcare datasets: Echocardiogram, SPECT Heart, Chronic Kidney Disease, Mammographic Mass, and EEG Eye State. Among the three algorithms, this study concludes that Multilayer Perceptron is the best for the chosen datasets: it achieves the highest scores on all performance parameters tested and can produce a high-accuracy classifier model with a low error rate, but it suffers in training time, especially on large datasets. Voted Perceptron's performance is the lowest on all parameters tested. For further research, an investigation may be conducted to analyze whether the number of hidden layers in the Multilayer Perceptron's architecture has a significant impact on training time.
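For orientation, a rough scikit-learn analogue of such a comparison; the study's five UCI datasets and its RBF Network and Voted Perceptron (Weka-style algorithms) have no exact counterparts here, so a bundled dataset and two available models stand in:

```python
import time
from sklearn.datasets import load_breast_cancer       # stand-in dataset
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
models = {
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "Perceptron (linear)":   Perceptron(random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)       # scale, then classify
    t0 = time.time()
    acc = cross_val_score(pipe, X, y, cv=5).mean()    # 5-fold accuracy
    print(f"{name}: accuracy={acc:.3f}, time={time.time() - t0:.2f}s")
```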
28

ANDRECUT, M. "A STATISTICAL-FUZZY PERCEPTRON." Modern Physics Letters B 13, no. 01 (January 10, 1999): 33–41. http://dx.doi.org/10.1142/s0217984999000063.

Abstract:
An optimal statistical perceptron algorithm is derived using Bayes classification theory. The described algorithm is able to construct an optimal classification hyperplane for separable and nonseparable classes, and it can be easily improved by imposing a simple fuzzification scheme on the training sets.
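For context, the Bayes decision rule underlying such an optimal classifier is standard (this is the textbook criterion, not the paper's specific fuzzy construction):

```latex
% Assign a pattern x to class C_1 exactly when its posterior is larger:
P(C_1 \mid x) > P(C_2 \mid x)
\iff p(x \mid C_1)\,P(C_1) > p(x \mid C_2)\,P(C_2)
```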
29

Soheili, Negar, and Javier Peña. "A Smooth Perceptron Algorithm." SIAM Journal on Optimization 22, no. 2 (January 2012): 728–37. http://dx.doi.org/10.1137/110848955.

30

Martinelli, G., L. P. Ricotti, S. Ragazzini, and F. M. Mascioli. "A pyramidal delayed perceptron." IEEE Transactions on Circuits and Systems 37, no. 9 (1990): 1176–81. http://dx.doi.org/10.1109/31.57609.

31

Yu, Hao. "Perceptron for Language Modeling." Journal of Computer Research and Development 43, no. 2 (2006): 260. http://dx.doi.org/10.1360/crad20060211.

32

Benatti, F., S. Mancini, and S. Mangini. "Continuous variable quantum perceptron." International Journal of Quantum Information 17, no. 08 (December 2019): 1941009. http://dx.doi.org/10.1142/s0219749919410090.

Abstract:
We present a model of a Continuous Variable Quantum Perceptron (CVQP), also referred to as a neuron in the following, whose architecture implements a classical perceptron. The necessary nonlinearity is obtained by measuring the output qubit and using the measurement outcome as input to an activation function. The latter is chosen to be the rectified linear unit (ReLU) activation function by virtue of its practical feasibility and the advantages it provides in learning tasks. The encoding of classical data into realistic finitely squeezed states and the use of superposed (entangled) input states for specific binary problems are discussed.
33

Gallant, S. I. "Perceptron-based learning algorithms." IEEE Transactions on Neural Networks 1, no. 2 (June 1990): 179–91. http://dx.doi.org/10.1109/72.80230.

34

Mashor, M. Y. "Hybrid multilayered perceptron networks." International Journal of Systems Science 31, no. 6 (January 2000): 771–85. http://dx.doi.org/10.1080/00207720050030815.

35

Chung, F. L., Wang Shitong, Deng Zhaohong, and Hu Dewen. "Fuzzy kernel hyperball perceptron." Applied Soft Computing 5, no. 1 (December 2004): 67–74. http://dx.doi.org/10.1016/j.asoc.2004.03.012.

36

Gas, Bruno. "Self-Organizing MultiLayer Perceptron." IEEE Transactions on Neural Networks 21, no. 11 (November 2010): 1766–79. http://dx.doi.org/10.1109/tnn.2010.2072790.

37

Carmesin, H. O. "Multilinear perceptron convergence theorem." Physical Review E 50, no. 1 (July 1, 1994): 622–24. http://dx.doi.org/10.1103/physreve.50.622.

38

Filliatre, Bernard, and Robert Racca. "Multi-threshold neurones perceptron." Neural Processing Letters 4, no. 1 (September 1996): 39–44. http://dx.doi.org/10.1007/bf00454844.

39

Irwan, Irwan, Rusnardi Rahmat Putra, Krismadinata Krismadinata, and Nizwardi Jalinus. "Introduction to questionnaire data patterns using Perceptron Algorithm for lecturer improvement and development in higher education." INVOTEK: Jurnal Inovasi Vokasional dan Teknologi 21, no. 1 (February 2, 2021): 1–10. http://dx.doi.org/10.24036/invotek.v21i1.743.

Abstract:
This artificial neural network study applies the perceptron algorithm to lecturer performance assessment in higher education: it overviews the current teaching process with a focus on lecturer performance, aims to improve the teaching process, supports lecturer development based on the algorithm's results, evaluates the speed and accuracy of the perceptron algorithm in assessing lecturer performance, and studies the perceptron's learning rules for processing lecturer assessment criteria. In this case, the perceptron algorithm was used to recognize the input patterns of the questionnaire data: it was trained and tested on these patterns so that the neural network could identify input patterns from the questionnaire data.
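The core of such an application is the classic perceptron update rule on binary-encoded responses; a toy sketch with invented questionnaire data:

```python
import numpy as np

# Hypothetical binary-encoded questionnaire responses (rows) and a 0/1
# "performance satisfactory" label, both invented for illustration.
X = np.array([[1, 0, 1, 1], [0, 1, 0, 1], [1, 1, 1, 0], [0, 0, 1, 0]])
t = np.array([1, 0, 1, 0])

w, b, lr = np.zeros(X.shape[1]), 0.0, 1.0
for _ in range(20):                       # training epochs
    for x, target in zip(X, t):
        y = int(w @ x + b > 0)            # step activation
        w += lr * (target - y) * x        # perceptron learning rule
        b += lr * (target - y)

print([int(w @ x + b > 0) for x in X])    # recognized patterns: [1, 0, 1, 0]
```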
40

Yudhistiro, Kukuh. "Pemanfaatan Neural Network Perceptron pada Pengenalan Pola Karakter." SMATIKA JURNAL 7, no. 02 (December 6, 2017): 21–25. http://dx.doi.org/10.32664/smatika.v7i02.153.

Abstract:
Various artificial neural network methods have been applied to recognize character patterns, one of them being the perceptron algorithm. The perceptron algorithm can identify character patterns from various input patterns that resemble letters of the alphabet, so a perceptron can be trained to recognize them. The perceptron algorithm is a supervised learning algorithm for neural networks.
41

Rahmadhany, Sri. "IDENTIFIKASI POLA KARAKTER ANAK DENGAN ALGORITMA PERCEPTRON." JURNAL TEKNOLOGI INFORMASI 3, no. 1 (June 13, 2019): 86. http://dx.doi.org/10.36294/jurti.v3i1.695.

Abstract:
An Artificial Neural Network is a computational method that works like the human brain, and the perceptron algorithm is one of the methods within artificial neural networks. This research identifies children's character patterns using the perceptron algorithm, which is very reliable in recognizing patterns, one of which is the child character pattern examined in this study. The perceptron algorithm identifies the character patterns of children through three inputs and two outputs. The three inputs are taken from trait, attitude, and behavior variables. The outputs are the four human temperaments according to Hippocrates, namely sanguine, melancholic, choleric, and phlegmatic. All inputs and outputs are converted into binary numbers to be trained with Matlab software. (Translated; the original abstract appears in English and Indonesian.)
Keywords: artificial neural networks, perceptron algorithm, child character patterns, input, output, binary numbers.
42

Chatterjee, Ankita, Jayasree Saha, and Jayanta Mukherjee. "Clustering with multi-layered perceptron." Pattern Recognition Letters 155 (March 2022): 92–99. http://dx.doi.org/10.1016/j.patrec.2022.02.009.

43

Boštík, Josef, Jaromír Kukal, and Miroslav Virius. "PERCEPTRON NETWORK FOR RBF LOVERS." Neural Network World 22, no. 6 (2012): 511–18. http://dx.doi.org/10.14311/nnw.2012.22.031.

44

Dey, Prasenjit, Kaustuv Nag, Tandra Pal, and Nikhil R. Pal. "Regularizing Multilayer Perceptron for Robustness." IEEE Transactions on Systems, Man, and Cybernetics: Systems 48, no. 8 (August 2018): 1255–66. http://dx.doi.org/10.1109/tsmc.2017.2664143.

45

Selvaraj, A. Arockia. "Leaf Recognition using Multilayer Perceptron." International Journal for Research in Applied Science and Engineering Technology 8, no. 6 (June 30, 2020): 859–63. http://dx.doi.org/10.22214/ijraset.2020.6139.

46

Dahmen, David, Matthieu Gilson, and Moritz Helias. "Capacity of the covariance perceptron." Journal of Physics A: Mathematical and Theoretical 53, no. 35 (August 14, 2020): 354002. http://dx.doi.org/10.1088/1751-8121/ab82dd.

47

ADELI, H., and C. YEH. "Perceptron Learning in Engineering Design." Computer-Aided Civil and Infrastructure Engineering 4, no. 4 (November 6, 2008): 247–56. http://dx.doi.org/10.1111/j.1467-8667.1989.tb00026.x.

48

Saitoh, Daisuke, and Kazuyuki Hara. "Mutual Learning Using Nonlinear Perceptron." Journal of Artificial Intelligence and Soft Computing Research 5, no. 1 (January 1, 2015): 71–77. http://dx.doi.org/10.1515/jaiscr-2015-0020.

Abstract:
We propose a mutual learning method using a nonlinear perceptron within the framework of online learning and analyze its validity using computer simulations. Mutual learning involving three or more students is fundamentally different from the two-student case with regard to the variety available when selecting a student to act as the teacher. The proposed method consists of two learning steps: first, multiple students learn independently from a teacher; second, the students learn from one another through mutual learning. Results showed that the mean squared error could be improved even if the teacher had not taken part in the mutual learning.
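A rough simulation sketch of the two-step scheme; the tanh nonlinearity, learning rate, dimensions, and peer-selection rule are assumptions rather than the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, lr = 100, 3, 0.1
g = np.tanh                                   # assumed output nonlinearity

teacher = rng.standard_normal(N)
students = [rng.standard_normal(N) for _ in range(K)]

def online_step(w, x, target):
    # Gradient step on squared error for y = tanh(w . x).
    y = g(w @ x)
    return w + lr * (target - y) * (1.0 - y**2) * x

# Step 1: each student learns independently from the teacher.
for _ in range(5000):
    x = rng.standard_normal(N) / np.sqrt(N)
    for k in range(K):
        students[k] = online_step(students[k], x, g(teacher @ x))

# Step 2: mutual learning -- a randomly chosen student acts as the teacher.
for _ in range(5000):
    x = rng.standard_normal(N) / np.sqrt(N)
    j = rng.integers(K)
    target = g(students[j] @ x)
    for k in range(K):
        if k != j:
            students[k] = online_step(students[k], x, target)

X_test = rng.standard_normal((1000, N)) / np.sqrt(N)
mse = np.mean([(g(X_test @ w) - g(X_test @ teacher))**2 for w in students])
print(mse)                                    # mean squared error vs. teacher
```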
49

Benedict, K. A. "Learning in the multilayer perceptron." Journal of Physics A: Mathematical and General 21, no. 11 (June 7, 1988): 2643–50. http://dx.doi.org/10.1088/0305-4470/21/11/021.

50

Nadal, J. P., and N. Parga. "INFORMATION PROCESSING BY A PERCEPTRON." International Journal of Neural Systems 03, supp01 (January 1992): 41–50. http://dx.doi.org/10.1142/s012906579200036x.

Abstract:
The information coming into a module that is part of a global system will in general require pre-processing, which consists in building a representation (expressing the information in a new code) convenient for the task to be performed by that particular module. This information, coming either from the environment or from another module in the system, undergoes this operation in what we can call an encoder. In this work we describe our results for a perceptron architecture viewed as an encoder, using encoding principles based on Information Theory. In particular we show how to evaluate the information capacity and the typical mutual information, quantities that are relevant for analyzing the different criteria used to code the information. Techniques taken from the statistical mechanics of disordered systems are shown to be useful for these calculations.
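For reference, the mutual information on which such encoding criteria are built is the standard quantity (textbook definition; the paper's statistical-mechanics capacity results are not reproduced here):

```latex
% Mutual information between the input X and the encoder's output Y;
% the information capacity is its maximum over input distributions.
I(X;Y) = H(Y) - H(Y \mid X)
       = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}
```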