To see the other types of publications on this topic, follow the link: Perceptrons.

Journal articles on the topic 'Perceptrons'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Perceptrons.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Toh, H. S. "Weight Configurations of Trained Perceptrons." International Journal of Neural Systems 4, no. 3 (September 1993): 231–46. http://dx.doi.org/10.1142/s0129065793000195.

Abstract:
We strive to predict the function mapping and rules performed by a trained perceptron by studying its weights. We derive a few properties of the trained weights and show how the perceptron's representation of knowledge, rules and functions depends on these properties. Two types of perceptrons are studied: one with continuous inputs and one hidden layer, the other a simple binary classifier with Boolean inputs and no hidden units.
2

Blatt, Marcelo, Eytan Domany, and Ido Kanter. "On the Equivalence of Two-Layered Perceptrons with Binary Neurons." International Journal of Neural Systems 6, no. 3 (September 1995): 225–31. http://dx.doi.org/10.1142/s0129065795000160.

Abstract:
We consider two-layered perceptrons consisting of N binary input units, K binary hidden units and one binary output unit, in the limit N≫K≥1. We prove that the weights of a regular irreducible network are uniquely determined by its input-output map up to some obvious global symmetries. A network is regular if its K weight vectors from the input layer to the K hidden units are linearly independent. A (single-layered) perceptron is said to be irreducible if its output depends on every one of its input units; and a two-layered perceptron is irreducible if the K+1 perceptrons that constitute such a network are irreducible. By global symmetries we mean, for instance, permuting the labels of the hidden units. Hence, two irreducible regular two-layered perceptrons that implement the same Boolean function must have the same number of hidden units, and must be composed of equivalent perceptrons.
3

Kukolj, Dragan D., Miroslava T. Berko-Pusic, and Branislav Atlagic. "Experimental design of supervisory control functions based on multilayer perceptrons." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 5 (November 2001): 425–31. http://dx.doi.org/10.1017/s0890060401155058.

Abstract:
This article presents the results of research concerning the possibilities of applying the multilayer perceptron type of neural network for fault diagnosis, state estimation, and prediction in a gas pipeline transmission network. The influence of several factors on the accuracy of the multilayer perceptron was considered. The emphasis was put on the multilayer perceptron's function as a state estimator. The choice of the most informative features, the amount and sampling period of the training data sets, as well as different configurations of multilayer perceptrons, were analyzed.
4

Elizalde, E., and S. Gomez. "Multistate perceptrons: learning rule and perceptron of maximal stability." Journal of Physics A: Mathematical and General 25, no. 19 (October 7, 1992): 5039–45. http://dx.doi.org/10.1088/0305-4470/25/19/016.

5

Racca, Robert. "Can periodic perceptrons replace multi-layer perceptrons?" Pattern Recognition Letters 21, no. 12 (November 2000): 1019–25. http://dx.doi.org/10.1016/s0167-8655(00)00057-x.

6

Cannas, Sergio A. "Arithmetic Perceptrons." Neural Computation 7, no. 1 (January 1995): 173–81. http://dx.doi.org/10.1162/neco.1995.7.1.173.

Abstract:
A feedforward layered neural network (perceptron) with one hidden layer, which adds two N-bit binary numbers, is constructed. The set of synaptic strengths and thresholds is obtained exactly for different architectures of the network and for arbitrary N. These structures can be easily generalized to perform more complicated arithmetic operations (like subtraction).
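To make the idea in the abstract above concrete, here is a minimal sketch (not the paper's exact construction) of a ripple-carry adder built only from fixed-weight McCulloch-Pitts threshold units, assuming NumPy is available; the helper names threshold, full_adder and add_binary are illustrative.

```python
import numpy as np

def threshold(x, w, theta):
    """McCulloch-Pitts unit: fires iff the weighted sum reaches the threshold."""
    return int(np.dot(w, x) >= theta)

def full_adder(a, b, c):
    """One column of the adder, built only from threshold units."""
    x = [a, b, c]
    h1 = threshold(x, [1, 1, 1], 1)             # hidden unit: at least one input is 1
    h2 = threshold(x, [1, 1, 1], 2)             # hidden unit: at least two inputs are 1 (carry out)
    h3 = threshold(x, [1, 1, 1], 3)             # hidden unit: all three inputs are 1
    s = threshold([h1, h2, h3], [1, -1, 1], 1)  # output unit computes the parity (sum bit)
    return s, h2

def add_binary(a_bits, b_bits):
    """Add two N-bit numbers given as bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(add_binary([0, 1, 1], [1, 1, 0]))  # 6 + 3 -> [1, 0, 0, 1], i.e. 9 in LSB-first binary
```

The weights and thresholds are written down exactly rather than learned, echoing the abstract's point that the construction works for arbitrary N.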
7

Falkowski, Bernd-Jürgen. "Probabilistic perceptrons." Neural Networks 8, no. 4 (January 1995): 513–23. http://dx.doi.org/10.1016/0893-6080(94)00107-w.

8

Falkowski, Bernd-Jürgen. "Perceptrons revisited." Information Processing Letters 36, no. 4 (November 1990): 207–13. http://dx.doi.org/10.1016/0020-0190(90)90075-9.

9

Lewenstein, M. "Quantum Perceptrons." Journal of Modern Optics 41, no. 12 (December 1994): 2491–501. http://dx.doi.org/10.1080/09500349414552331.

10

Rowcliffe, P., Jianfeng Feng, and H. Buxton. "Spiking perceptrons." IEEE Transactions on Neural Networks 17, no. 3 (May 2006): 803–7. http://dx.doi.org/10.1109/tnn.2006.873274.

11

Legenstein, Robert, and Wolfgang Maass. "On the Classification Capability of Sign-Constrained Perceptrons." Neural Computation 20, no. 1 (January 2008): 288–309. http://dx.doi.org/10.1162/neco.2008.20.1.288.

Abstract:
The perceptron (also referred to as McCulloch-Pitts neuron, or linear threshold gate) is commonly used as a simplified model for the discrimination and learning capability of a biological neuron. Criteria that tell us when a perceptron can implement (or learn to implement) all possible dichotomies over a given set of input patterns are well known, but only for the idealized case, where one assumes that the sign of a synaptic weight can be switched during learning. We present in this letter an analysis of the classification capability of the biologically more realistic model of a sign-constrained perceptron, where the signs of synaptic weights remain fixed during learning (which is the case for most types of biological synapses). In particular, the VC-dimension of sign-constrained perceptrons is determined, and a necessary and sufficient criterion is provided that tells us when all 2^m dichotomies over a given set of m patterns can be learned by a sign-constrained perceptron. We also show that uniformity of L_1 norms of input patterns is a sufficient condition for full representation power in the case where all weights are required to be nonnegative. Finally, we exhibit cases where the sign constraint of a perceptron drastically reduces its classification capability. Our theoretical analysis is complemented by computer simulations, which demonstrate in particular that sparse input patterns improve the classification capability of sign-constrained perceptrons.
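As a rough illustration of the sign-constrained model discussed above (not the paper's theoretical analysis), the sketch below runs the classical perceptron rule while clipping the weights so they stay nonnegative; the toy data and the function name are invented for the example.

```python
import numpy as np

def train_nonnegative_perceptron(X, y, epochs=100, lr=1.0):
    """Perceptron rule with every weight constrained to be nonnegative:
    after each update the weights are clipped back to zero from below,
    so the prescribed sign pattern (all '+') is never violated."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, y):                       # t in {-1, +1}
            pred = 1 if x @ w + b >= 0 else -1
            if pred != t:
                w = np.maximum(w + lr * t * x, 0.0)  # update, then enforce the constraint
                b += lr * t
                errors += 1
        if errors == 0:
            break
    return w, b

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(30, 8)).astype(float)   # toy 0/1 input patterns
y = np.where(X[:, 0] + X[:, 1] >= 1, 1, -1)          # an OR-like target, realizable with w >= 0
w, b = train_nonnegative_perceptron(X, y)
print("weights:", w, "bias:", b)
```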
12

Lyuu, Yuh-Dauh, and Igor Rivin. "Tight Bounds on Transition to Perfect Generalization in Perceptrons." Neural Computation 4, no. 6 (November 1992): 854–62. http://dx.doi.org/10.1162/neco.1992.4.6.854.

Abstract:
“Sudden” transition to perfect generalization in binary perceptrons is investigated. Building on recent theoretical works of Gardner and Derrida (1989) and Baum and Lyuu (1991), we show the following: for α > α_c = 1.44797…, if αn examples are drawn from the uniform distribution on {+1, −1}^n and classified according to a target perceptron w_t ∈ {+1, −1}^n as positive if w_t · x ≥ 0 and negative otherwise, then the expected number of nontarget perceptrons consistent with the examples is 2^(−Θ(√n)); the same number, however, grows exponentially as 2^(Θ(n)) if α ≤ α_c. Numerical calculations for n up to 1,000,002 are reported.
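The quantity analysed above can be checked by brute force for tiny n; the following sketch (an illustration only, nowhere near the asymptotic regime of the paper) counts nontarget weight vectors in {+1, −1}^n that remain consistent with αn random examples.

```python
import itertools
import numpy as np

def mean_consistent_nontargets(n=10, alpha=1.5, trials=20, seed=1):
    """Average number of nontarget binary perceptrons consistent with alpha*n examples."""
    rng = np.random.default_rng(seed)
    m = int(alpha * n)
    counts = []
    for _ in range(trials):
        w_target = rng.choice([-1, 1], size=n)
        X = rng.choice([-1, 1], size=(m, n))
        labels = X @ w_target >= 0
        consistent = 0
        for w in itertools.product([-1, 1], repeat=n):   # enumerate all 2^n candidates
            w = np.array(w)
            if not np.array_equal(w, w_target) and np.array_equal(X @ w >= 0, labels):
                consistent += 1
        counts.append(consistent)
    return np.mean(counts)

print("alpha = 0.5:", mean_consistent_nontargets(alpha=0.5))  # typically many survivors
print("alpha = 2.0:", mean_consistent_nontargets(alpha=2.0))  # typically few or none
```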
13

Majer, P., A. Engel, and A. Zippelius. "Perceptrons above saturation." Journal of Physics A: Mathematical and General 26, no. 24 (December 21, 1993): 7405–16. http://dx.doi.org/10.1088/0305-4470/26/24/015.

14

Silva, Francisco, Mikel Sanz, João Seixas, Enrique Solano, and Yasser Omar. "Perceptrons from memristors." Neural Networks 122 (February 2020): 273–78. http://dx.doi.org/10.1016/j.neunet.2019.10.013.

15

Ou, Jun, Yujian Li, and Chuanhui Shan. "Two-dimensional perceptrons." Soft Computing 24, no. 5 (June 5, 2019): 3355–64. http://dx.doi.org/10.1007/s00500-019-04098-w.

16

Cerella, John. "Pigeons and perceptrons." Pattern Recognition 19, no. 6 (January 1986): 431–38. http://dx.doi.org/10.1016/0031-3203(86)90041-5.

17

Navia-Vázquez, A. "Support vector perceptrons." Neurocomputing 70, no. 4-6 (January 2007): 1089–95. http://dx.doi.org/10.1016/j.neucom.2006.08.001.

18

Kiranyaz, Serkan, Turker Ince, Alexandros Iosifidis, and Moncef Gabbouj. "Progressive Operational Perceptrons." Neurocomputing 224 (February 2017): 142–54. http://dx.doi.org/10.1016/j.neucom.2016.10.044.

19

Belue, Lisa M., Kenneth W. Bauer, and Dennis W. Ruck. "Selecting Optimal Experiments for Multiple Output Multilayer Perceptrons." Neural Computation 9, no. 1 (January 1, 1997): 161–83. http://dx.doi.org/10.1162/neco.1997.9.1.161.

Abstract:
Where should a researcher conduct experiments to provide training data for a multilayer perceptron? This question is investigated, and a statistical method for selecting optimal experimental design points for multiple output multilayer perceptrons is introduced. Multiple class discrimination problems are examined using a framework in which the multilayer perceptron is viewed as a multivariate nonlinear regression model. Following a Bayesian formulation for the case where the variance-covariance matrix of the responses is unknown, a selection criterion is developed. This criterion is based on the volume of the joint confidence ellipsoid for the weights in a multilayer perceptron. An example is used to demonstrate the superiority of optimally selected design points over randomly chosen points, as well as points chosen in a grid pattern. Simplification of the basic criterion is offered through the use of Hadamard matrices to produce uncorrelated outputs.
20

Yu, Xin, Mian Xie, Li Xia Tang, and Chen Yu Li. "Learning Algorithm for Fuzzy Perceptron with Max-Product Composition." Applied Mechanics and Materials 687-691 (November 2014): 1359–62. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1359.

Abstract:
Fuzzy neural networks are a powerful computational model that integrates fuzzy systems with neural networks, and the fuzzy perceptron is one kind of such network. In this paper, a learning algorithm is proposed for a fuzzy perceptron with max-product composition; the topological structure of this fuzzy perceptron is the same as that of conventional linear perceptrons. The inner operations involved in the working process of this fuzzy perceptron are based on max-product logical operations rather than on conventional multiplication and summation. To illustrate the finite convergence of the proposed algorithm, some numerical experiments are provided.
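A minimal sketch of the max-product composition the abstract refers to, contrasted with the conventional weighted sum; the learning algorithm itself is in the paper and is not reproduced here.

```python
import numpy as np

def max_product(x, w):
    """Max-product composition: the maximum of the element-wise products (values in [0, 1])."""
    return np.max(w * x)

def weighted_sum(x, w):
    """Conventional linear-perceptron pre-activation, for comparison."""
    return np.dot(w, x)

x = np.array([0.2, 0.9, 0.5])   # fuzzy membership degrees
w = np.array([0.7, 0.4, 0.8])
print(max_product(x, w))    # max(0.14, 0.36, 0.40) = 0.40
print(weighted_sum(x, w))   # 0.14 + 0.36 + 0.40 = 0.90
```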
21

Nadal, J. P., and N. Parga. "Duality Between Learning Machines: A Bridge Between Supervised and Unsupervised Learning." Neural Computation 6, no. 3 (May 1994): 491–508. http://dx.doi.org/10.1162/neco.1994.6.3.491.

Abstract:
We exhibit a duality between two perceptrons that allows us to compare the theoretical analysis of supervised and unsupervised learning tasks. The first perceptron has one output and is asked to learn a classification of p patterns. The second (dual) perceptron has p outputs and is asked to transmit as much information as possible on a distribution of inputs. We show in particular that the maximum information that can be stored in the couplings for the supervised learning task is equal to the maximum information that can be transmitted by the dual perceptron.
22

Bridgeman, Bruce. "The dynamical model is a Perceptron." Behavioral and Brain Sciences 21, no. 5 (October 1998): 631–32. http://dx.doi.org/10.1017/s0140525x98251739.

Abstract:
Van Gelder's example of a dynamical model is a Perceptron. The similarity of dynamical models and Perceptrons in turn exemplifies the close relationship between dynamical and algorithmic models. Both are models, not literal descriptions of brains. The brain states of standard modeling are better conceived as processes in the dynamical sense, but algorithmic models remain useful.
23

Zwietering, P. J., E. H. L. Aarts, and J. Wessels. "Exact Classification with Two-Layered Perceptrons." International Journal of Neural Systems 3, no. 2 (January 1992): 143–56. http://dx.doi.org/10.1142/s0129065792000127.

Abstract:
We study the capabilities of two-layered perceptrons for classifying exactly a given subset. Both necessary and sufficient conditions are derived for subsets to be exactly classifiable with two-layered perceptrons that use the hard-limiting response function. The necessary conditions can be viewed as generalizations of the linear-separability condition of one-layered perceptrons and confirm the conjecture that the capabilities of two-layered perceptrons are more limited than those of three-layered perceptrons. The sufficient conditions show that the capabilities of two-layered perceptrons extend beyond the exact classification of convex subsets. Furthermore, we present an algorithmic approach to the problem of verifying the sufficiency condition for a given subset.
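For a concrete instance of exact classification with a two-layered perceptron (the textbook XOR example, not taken from the paper), the sketch below classifies a subset that no single-layered perceptron can separate, using two hard-limiting hidden units.

```python
def step(z):
    """Hard-limiting response function."""
    return 1 if z >= 0 else 0

def two_layer_xor(x1, x2):
    """Two-layered perceptron that exactly classifies the XOR subset of {0,1}^2."""
    h1 = step(x1 + x2 - 1)    # fires if at least one input is 1
    h2 = step(x1 + x2 - 2)    # fires only if both inputs are 1
    return step(h1 - h2 - 1)  # output unit: h1 AND NOT h2

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, "->", two_layer_xor(*p))   # 0, 1, 1, 0
```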
24

Kirdin, Alexander, Sergey Sidorov, and Nikolai Zolotykh. "Rosenblatt’s First Theorem and Frugality of Deep Learning." Entropy 24, no. 11 (November 10, 2022): 1635. http://dx.doi.org/10.3390/e24111635.

Abstract:
Rosenblatt’s first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections or a relatively small diameter of the receptive field for each neuron at the hidden layer. They proved that under these constraints, an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of pixels in them. In this note, we demonstrated Rosenblatt’s first theorem at work, showed how an elementary perceptron can solve a version of the travel maze problem, and analysed the complexity of that solution. We also constructed a deep network algorithm for the same problem. It is much more efficient. The shallow network uses an exponentially large number of neurons on the hidden layer (Rosenblatt’s A-elements), whereas for the deep network, second-order polynomial complexity is sufficient. We demonstrated that for the same complex problem, the deep network can be much smaller, and revealed a heuristic behind this effect.
25

Liang, Xun, and Shaowei Xia. "Methods of Training and Constructing Multilayer Perceptrons with Arbitrary Pattern Sets." International Journal of Neural Systems 6, no. 3 (September 1995): 233–47. http://dx.doi.org/10.1142/s0129065795000172.

Abstract:
This paper presents two compensation methods for multilayer perceptrons (MLPs) that are very difficult to train by traditional Back Propagation (BP) methods. For MLPs trapped in local minima, the compensation methods can correct the wrong outputs one by one using constructive techniques until all outputs are right, so that the MLPs can escape from the local minima to the global minimum. A hidden neuron is added as compensation for a binary-input three-layer perceptron trapped in a local minimum, and one or two hidden neurons are added as compensation for a real-input three-layer perceptron. For a perceptron of more than three layers, the second hidden layer from behind will be temporarily treated as the input layer during compensation, hence the above methods can also be used. Examples are given.
26

Bylander, Tom. "Learning Linear Threshold Approximations Using Perceptrons." Neural Computation 7, no. 2 (March 1995): 370–79. http://dx.doi.org/10.1162/neco.1995.7.2.370.

Abstract:
We demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions using perceptrons. The central result is as follows. Suppose there exists a vector w* of n weights (including the threshold) with “accuracy” 1 − α, “average error” η, and “balancing separation” σ; i.e., with probability 1 − α, w* correctly classifies an example x; over examples incorrectly classified by w*, the expected value of |w* · x| is η (the source of inaccuracy does not matter); and over a certain portion of correctly classified examples, the expected value of |w* · x| is σ. Then, with probability 1 − δ, the perceptron achieves accuracy at least 1 − [ε + α(1 + η/σ)] after O[n ε^(−2) σ^(−2) ln(1/δ)] examples.
27

Podolskii, V. V. "Perceptrons of large weight." Problems of Information Transmission 45, no. 1 (March 2009): 46–53. http://dx.doi.org/10.1134/s0032946009010062.

28

Bunzmann, C., M. Biehl, and R. Urbanczik. "Efficiently Learning Multilayer Perceptrons." Physical Review Letters 86, no. 10 (March 5, 2001): 2166–69. http://dx.doi.org/10.1103/physrevlett.86.2166.

29

Watkin, T. L. H., and A. Rau. "Selecting examples for perceptrons." Journal of Physics A: Mathematical and General 25, no. 1 (January 7, 1992): 113–21. http://dx.doi.org/10.1088/0305-4470/25/1/016.

30

Kerlirzin, P., and F. Vallet. "Robustness in Multilayer Perceptrons." Neural Computation 5, no. 3 (May 1993): 473–82. http://dx.doi.org/10.1162/neco.1993.5.3.473.

Abstract:
In this paper, we study the robustness of multilayer networks versus the destruction of neurons. We show that the classical backpropagation algorithm does not lead to optimal robustness and we propose a modified algorithm that improves this capability.
31

Kuzuoglu, M., and K. Leblebicioglu. "Infinite-dimensional multilayer perceptrons." IEEE Transactions on Neural Networks 7, no. 4 (July 1996): 889–96. http://dx.doi.org/10.1109/72.508932.

32

Charalampidis, Dimitrios, and Barry Muldrey. "Clustering using multilayer perceptrons." Nonlinear Analysis: Theory, Methods & Applications 71, no. 12 (December 2009): e2807-e2813. http://dx.doi.org/10.1016/j.na.2009.06.064.

33

Murthy, C. "Multilayer perceptrons and fractals." Information Sciences 112, no. 1-4 (December 1998): 137–50. http://dx.doi.org/10.1016/s0020-0255(98)10028-2.

34

Xiang, Xuyan, Yingchun Deng, and Xiangqun Yang. "Second order spiking perceptrons." Soft Computing 13, no. 12 (March 27, 2009): 1219–30. http://dx.doi.org/10.1007/s00500-009-0415-3.

35

Tattersall, G. D., S. Foster, and R. D. Johnston. "Single-layer lookup perceptrons." IEE Proceedings F Radar and Signal Processing 138, no. 1 (1991): 46. http://dx.doi.org/10.1049/ip-f-2.1991.0008.

36

Hamid, Danish, Syed Sajid Ullah, Jawaid Iqbal, Saddam Hussain, Ch Anwar ul Hassan, and Fazlullah Umar. "A Machine Learning in Binary and Multiclassification Results on Imbalanced Heart Disease Data Stream." Journal of Sensors 2022 (September 20, 2022): 1–13. http://dx.doi.org/10.1155/2022/8400622.

Abstract:
In the medical field, predicting the occurrence of heart disease is a significant piece of work. Millions of healthcare-related complexities that have remained unsolved up until now can be greatly simplified with the help of machine learning. The proposed study is concerned with a cardiac disease diagnosis decision support system. An OpenML repository data stream with 1 million instances of heart disease and 14 features is used for this study. After applying preprocessing and feature engineering techniques, machine learning approaches such as random forest, decision trees, gradient boosted trees, linear support vector classifier, logistic regression, one-vs-rest, and multilayer perceptron are used to perform binary and multiclass classification on the data stream. When combined with the Max Abs Scaler technique, the multilayer perceptron performed satisfactorily in both binary (accuracy 94.8%) and multiclass classification (accuracy 88.2%). Compared to the other binary classification algorithms, the gradient boosted trees (GBT) delivered the best result (accuracy of 95.8%). Multilayer perceptrons, however, did well in multiclass classification. Techniques such as oversampling and undersampling have a negative impact on disease prediction. Machine learning methods like multilayer perceptrons and ensembles can be helpful for diagnosing cardiac conditions. For this kind of unbalanced data stream, sampling techniques like oversampling and undersampling are not practical.
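The scaler-plus-MLP combination the abstract highlights can be expressed as a standard scikit-learn pipeline; the sketch below uses synthetic stand-in data rather than the OpenML heart-disease stream, so the numbers it prints are not the study's results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MaxAbsScaler
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for the 1M-row, 14-feature heart-disease stream.
X, y = make_classification(n_samples=5000, n_features=14, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(MaxAbsScaler(),                      # scale each feature by its max absolute value
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=300, random_state=0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```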
37

Banda, Peter, Christof Teuscher, and Matthew R. Lakin. "Online Learning in a Chemical Perceptron." Artificial Life 19, no. 2 (April 2013): 195–219. http://dx.doi.org/10.1162/artl_a_00105.

Abstract:
Autonomous learning implemented purely by means of a synthetic chemical system has not been previously realized. Learning promotes reusability and minimizes the system design to simple input-output specification. In this article we introduce a chemical perceptron, the first full-featured implementation of a perceptron in an artificial (simulated) chemistry. A perceptron is the simplest system capable of learning, inspired by the functioning of a biological neuron. Our artificial chemistry is deterministic and discrete-time, and follows Michaelis-Menten kinetics. We present two models, the weight-loop perceptron and the weight-race perceptron, which represent two possible strategies for a chemical implementation of linear integration and threshold. Both chemical perceptrons can successfully identify all 14 linearly separable two-input logic functions and maintain high robustness against rate-constant perturbations. We suggest that DNA strand displacement could, in principle, provide an implementation substrate for our model, allowing the chemical perceptron to perform reusable, programmable, and adaptable wet biochemical computing.
38

Hush, D. R., B. Horne, and J. M. Salas. "Error surfaces for multilayer perceptrons." IEEE Transactions on Systems, Man, and Cybernetics 22, no. 5 (1992): 1152–61. http://dx.doi.org/10.1109/21.179853.

39

Murphy, O. J. "Nearest neighbor pattern classification perceptrons." Proceedings of the IEEE 78, no. 10 (1990): 1595–98. http://dx.doi.org/10.1109/5.58344.

40

Raudys, Sarunas, and Aistis Raudys. "Pairwise Costs in Multiclass Perceptrons." IEEE Transactions on Pattern Analysis and Machine Intelligence 32, no. 7 (July 2010): 1324–28. http://dx.doi.org/10.1109/tpami.2010.72.

41

Kurchan, J., and E. Domany. "Critical capacity of constrained perceptrons." Journal of Physics A: Mathematical and General 23, no. 16 (August 21, 1990): L847–L852. http://dx.doi.org/10.1088/0305-4470/23/16/013.

42

Engel, A., D. Malzahn, and I. Kanter. "Storage properties of correlated perceptrons." Philosophical Magazine B 77, no. 5 (May 1998): 1507–13. http://dx.doi.org/10.1080/13642819808205042.

43

Coelho, A. L. V., and F. O. França. "Enhancing perceptrons with contrastive biclusters." Electronics Letters 52, no. 24 (November 2016): 1974–76. http://dx.doi.org/10.1049/el.2016.3067.

44

Schietse, J., M. Bouten, and C. van den Broeck. "Training Binary Perceptrons by Clipping." Europhysics Letters (EPL) 32, no. 3 (October 20, 1995): 279–84. http://dx.doi.org/10.1209/0295-5075/32/3/015.

45

Wiegerinck, W. A. J. J., and A. C. C. Coolen. "The connections of large perceptrons." Journal of Physics A: Mathematical and General 26, no. 11 (June 7, 1993): 2535–48. http://dx.doi.org/10.1088/0305-4470/26/11/007.

46

Malzahn, D., A. Engel, and I. Kanter. "Storage capacity of correlated perceptrons." Physical Review E 55, no. 6 (June 1, 1997): 7369–78. http://dx.doi.org/10.1103/physreve.55.7369.

47

Alpaydin, E., and M. I. Jordan. "Local linear perceptrons for classification." IEEE Transactions on Neural Networks 7, no. 3 (May 1996): 788–94. http://dx.doi.org/10.1109/72.501737.

48

Verma, B. "Fast training of multilayer perceptrons." IEEE Transactions on Neural Networks 8, no. 6 (November 1997): 1314–20. http://dx.doi.org/10.1109/72.641454.

49

Tran, Dat Thanh, Serkan Kiranyaz, Moncef Gabbouj, and Alexandros Iosifidis. "Progressive Operational Perceptrons with Memory." Neurocomputing 379 (February 2020): 172–81. http://dx.doi.org/10.1016/j.neucom.2019.10.079.

50

Apolloni, B., and G. Ronchini. "Dynamic sizing of multilayer perceptrons." Biological Cybernetics 71, no. 1 (May 1994): 49–63. http://dx.doi.org/10.1007/bf00198911.

We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!
