Academic literature on the topic 'Perceptrons'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Perceptrons.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Perceptrons"

1

Toh, H. S. "Weight Configurations of Trained Perceptrons." International Journal of Neural Systems 4, no. 3 (September 1993): 231–46. http://dx.doi.org/10.1142/s0129065793000195.

Abstract:
We strive to predict, by studying the weights, the function mapping and rules performed by a trained perceptron. We derive a few properties of the trained weights and show how the perceptron's representation of knowledge, rules and functions depends on these properties. Two types of perceptrons are studied: one with continuous inputs and one hidden layer, the other a simple binary classifier with Boolean inputs and no hidden units.
2

Blatt, Marcelo, Eytan Domany, and Ido Kanter. "On the Equivalence of Two-Layered Perceptrons with Binary Neurons." International Journal of Neural Systems 6, no. 3 (September 1995): 225–31. http://dx.doi.org/10.1142/s0129065795000160.

Abstract:
We consider two-layered perceptrons consisting of N binary input units, K binary hidden units and one binary output unit, in the limit N≫K≥1. We prove that the weights of a regular irreducible network are uniquely determined by its input-output map up to some obvious global symmetries. A network is regular if its K weight vectors from the input layer to the K hidden units are linearly independent. A (single-layered) perceptron is said to be irreducible if its output depends on every one of its input units, and a two-layered perceptron is irreducible if the K+1 perceptrons that constitute such a network are irreducible. By global symmetries we mean, for instance, permuting the labels of the hidden units. Hence, two irreducible regular two-layered perceptrons that implement the same Boolean function must have the same number of hidden units, and must be composed of equivalent perceptrons.
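As a concrete reading of these definitions, the sketch below (not from the paper) checks "regularity" by testing whether the K input-to-hidden weight vectors are linearly independent, and evaluates a binary two-layered perceptron of the committee-machine kind; the particular weights and sizes are made up for illustration.

```python
import numpy as np

def is_regular(W):
    # W has shape (K, N): one input-to-hidden weight vector per hidden unit.
    # "Regular" means these K weight vectors are linearly independent.
    return np.linalg.matrix_rank(W) == W.shape[0]

def two_layer_output(W, v, x):
    # Binary (+/-1) hidden units followed by a binary (+/-1) output unit.
    hidden = np.sign(W @ x)
    return int(np.sign(v @ hidden))

rng = np.random.default_rng(0)
N, K = 20, 3                        # the N >> K >= 1 regime discussed in the paper
W = rng.normal(size=(K, N))         # illustrative input-to-hidden weights
v = np.ones(K)                      # hidden-to-output weights (a committee machine)
x = rng.choice([-1.0, 1.0], size=N)

print("regular:", is_regular(W))
print("output:", two_layer_output(W, v, x))
```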
3

Kukolj, Dragan D., Miroslava T. Berko-Pusic, and Branislav Atlagic. "Experimental design of supervisory control functions based on multilayer perceptrons." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 5 (November 2001): 425–31. http://dx.doi.org/10.1017/s0890060401155058.

Abstract:
This article presents the results of research on the possibilities of applying multilayer perceptron neural networks to fault diagnosis, state estimation, and prediction in a gas pipeline transmission network. The influence of several factors on the accuracy of the multilayer perceptron was considered, with emphasis on its use as a state estimator. The choice of the most informative features, the amount and sampling period of the training data, and different configurations of multilayer perceptrons were analyzed.
4

Elizalde, E., and S. Gomez. "Multistate perceptrons: learning rule and perceptron of maximal stability." Journal of Physics A: Mathematical and General 25, no. 19 (October 7, 1992): 5039–45. http://dx.doi.org/10.1088/0305-4470/25/19/016.

5

Racca, Robert. "Can periodic perceptrons replace multi-layer perceptrons?" Pattern Recognition Letters 21, no. 12 (November 2000): 1019–25. http://dx.doi.org/10.1016/s0167-8655(00)00057-x.

6

Cannas, Sergio A. "Arithmetic Perceptrons." Neural Computation 7, no. 1 (January 1995): 173–81. http://dx.doi.org/10.1162/neco.1995.7.1.173.

Abstract:
A feedforward layered neural network (perceptron) with one hidden layer, which adds two N-bit binary numbers is constructed. The set of synaptic strengths and thresholds is obtained exactly for different architectures of the network and for arbitrary N. These structures can be easily generalized to perform more complicated arithmetic operations (like subtraction).
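To give a flavor of the idea, here is a minimal threshold-unit ripple-carry adder in Python. It is a generic construction of binary addition from perceptron-style threshold gates, not necessarily the exact architecture or weight set derived in the paper.

```python
def step(x):
    """Perceptron-style threshold unit: fires (1) when its net input is >= 0."""
    return 1 if x >= 0 else 0

def perceptron_adder(a_bits, b_bits):
    """Add two N-bit binary numbers, given least-significant bit first."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        # Hidden unit: carry-out fires when at least two of (a, b, carry) are 1.
        c_out = step(a + b + carry - 2)
        # Output unit: sum bit, computed from the inputs and the hidden carry unit.
        s = step(a + b + carry - 2 * c_out - 1)
        out.append(s)
        carry = c_out
    out.append(carry)               # the final carry is the most significant bit
    return out

# 6 (110) + 3 (011), bits listed LSB first -> 9 (1001)
print(perceptron_adder([0, 1, 1], [1, 1, 0]))   # [1, 0, 0, 1]
```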
7

Falkowski, Bernd-Jürgen. "Probabilistic perceptrons." Neural Networks 8, no. 4 (January 1995): 513–23. http://dx.doi.org/10.1016/0893-6080(94)00107-w.

8

Falkowski, Bernd-Jürgen. "Perceptrons revisited." Information Processing Letters 36, no. 4 (November 1990): 207–13. http://dx.doi.org/10.1016/0020-0190(90)90075-9.

9

Lewenstein, M. "Quantum Perceptrons." Journal of Modern Optics 41, no. 12 (December 1994): 2491–501. http://dx.doi.org/10.1080/09500349414552331.

10

Rowcliffe, P., Jianfeng Feng, and H. Buxton. "Spiking perceptrons." IEEE Transactions on Neural Networks 17, no. 3 (May 2006): 803–7. http://dx.doi.org/10.1109/tnn.2006.873274.


Dissertations / Theses on the topic "Perceptrons"

1

Kallin Westin, Lena. "Preprocessing perceptrons." Doctoral thesis, Umeå: Univ., 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-234.

2

Filho, Osame Kinouchi. "Generalização ótima em perceptrons." Universidade de São Paulo, 1992. http://www.teses.usp.br/teses/disponiveis/54/54131/tde-07042015-165731/.

Abstract:
The perceptron has been studied in the context of statistical physics since the seminal work of Gardner and Derrida on the coupling space of this simple neural network. Recently, Opper and Haussler calculated, with the replica method, the theoretical optimal performance of the perceptron for learning a rule from examples (generalization). In this work we find the optimal performance curve after the first presentation of the examples (the first step of the learning dynamics). In the limit of a large number of examples, the generalization error is only twice the error found by Opper and Haussler. We also calculate the optimal performance for the first step of the learning dynamics with selection of examples. We show that optimal selection occurs when the new example is chosen orthogonal to the perceptron's coupling vector; the generalization error in this case decays exponentially with the number of examples. We also propose a new class of learning algorithms that approximates the optimal performance curves very well. We study the first step of the learning dynamics analytically and its long-time behaviour numerically. We show that several known learning algorithms (Hebb, Perceptron, Adaline, Relaxation) can be seen as better or worse approximations of our algorithm.
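The sketch below illustrates only the selection rule mentioned above (querying examples orthogonal to the student perceptron's current coupling vector), combined with a plain Hebbian update against a fixed teacher; it is not the thesis's optimal algorithm, and no quantitative claims about the decay rate should be read into it.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
teacher = rng.normal(size=N); teacher /= np.linalg.norm(teacher)
student = rng.normal(size=N); student /= np.linalg.norm(student)

def orthogonal_query(w):
    """Draw a random unit example orthogonal to the current coupling vector w."""
    x = rng.normal(size=w.size)
    x -= (x @ w) * w / (w @ w)
    return x / np.linalg.norm(x)

for _ in range(500):
    x = orthogonal_query(student)
    label = np.sign(teacher @ x)     # the rule to be learned
    student += label * x             # simple Hebbian update
    student /= np.linalg.norm(student)

# Generalization error of a perceptron = (angle between student and teacher) / pi.
overlap = float(np.clip(student @ teacher, -1.0, 1.0))
print("generalization error:", np.arccos(overlap) / np.pi)
```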
3

Adharapurapu, Ratnasri Krishna. "Convergence properties of perceptrons." CSUSB ScholarWorks, 1995. https://scholarworks.lib.csusb.edu/etd-project/1034.

4

Friess, Thilo-Thomas. "Perceptrons in kernel feature spaces." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327730.

5

Zhao, Lenny. "Uncertainty prediction with multi-layer perceptrons." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55733.pdf.

6

Mourao, Kira Margaret Thom. "Learning action representations using kernel perceptrons." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7717.

Abstract:
Action representation is fundamental to many aspects of cognition, including language. Theories of situated cognition suggest that the form of such representation is distinctively determined by grounding in the real world. This thesis tackles the question of how to ground action representations, and proposes an approach for learning action models in noisy, partially observable domains, using deictic representations and kernel perceptrons. Agents operating in real-world settings often require domain models to support planning and decision-making. To operate effectively in the world, an agent must be able to accurately predict when its actions will be successful, and what the effects of its actions will be. Only when a reliable action model is acquired can the agent usefully combine sequences of actions into plans, in order to achieve wider goals. However, learning the dynamics of a domain can be a challenging problem: agents’ observations may be noisy, or incomplete; actions may be non-deterministic; the world itself may be noisy; or the world may contain many objects and relations which are irrelevant. In this thesis, I first show that voted perceptrons, equipped with the DNF family of kernels, easily learn action models in STRIPS domains, even when subject to noise and partial observability. Key to the learning process is, firstly, the implicit exploration of the space of conjunctions of possible fluents (the space of potential action preconditions) enabled by the DNF kernels; secondly, the identification of objects playing similar roles in different states, enabled by a simple deictic representation; and lastly, the use of an attribute-value representation for world states. Next, I extend the model to more complex domains by generalising both the kernel and the deictic representation to a relational setting, where world states are represented as graphs. Finally, I propose a method to extract STRIPS-like rules from the learnt models. I give preliminary results for STRIPS domains and discuss how the method can be extended to more complex domains. As such, the model is both appropriate for learning data generated by robot explorations as well as suitable for use by automated planning systems. This combination is essential for the development of autonomous agents which can learn action models from their environment and use them to generate successful plans.
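For readers unfamiliar with the dual form used here, the following is a minimal kernel perceptron sketch in Python. It uses a polynomial kernel as a stand-in for the DNF-family kernels of the thesis, and a toy XOR-style problem rather than action-model data.

```python
import numpy as np

def poly_kernel(x, z, degree=3):
    # Stand-in kernel; the thesis uses DNF-family kernels instead.
    return (1.0 + x @ z) ** degree

def train_kernel_perceptron(X, y, max_epochs=200, kernel=poly_kernel):
    """Dual (kernelized) perceptron: alpha[i] counts mistakes made on example i."""
    n = len(X)
    alpha = np.zeros(n)
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            score = sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(n))
            if y[i] * score <= 0:        # mistake: remember this example
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:                # no mistakes in a full pass: stop
            break
    return alpha

def predict(X_train, y_train, alpha, x, kernel=poly_kernel):
    score = sum(alpha[j] * y_train[j] * kernel(X_train[j], x)
                for j in range(len(X_train)))
    return 1 if score >= 0 else -1

# XOR-style labels: not linearly separable, so a plain perceptron fails,
# but the kernelized version can represent the required decision boundary.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
alpha = train_kernel_perceptron(X, y)
print([predict(X, y, alpha, x) for x in X])
```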
7

Black, Michael David. "Applying perceptrons to speculation in computer architecture." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/6725.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
8

Cairns, Graham Andrew. "Learning with analogue VLSI multi-layer perceptrons." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296901.

9

Grandvalet, Yves. "Injection de bruit dans les perceptrons multicouches." Compiègne, 1995. http://www.theses.fr/1995COMPD802.

Abstract:
When estimating a regression function, model selection is a key step. It determines the complexity of the model that generalizes best from the data, i.e. that minimizes the prediction error. In multilayer perceptrons, complexity can be tuned by modifying the network architecture, but it can also be controlled with a fixed architecture. The methods used add to the error criterion, explicitly or not, a term penalizing the complexity of the solution; the notion of effective parameters then supersedes that of parameters. Among these methods, we chose to study noise injection, a particularly attractive heuristic because its algorithmic cost is nil. Our work first addresses the theoretical justification of this heuristic. We begin by rejecting the Taylor-expansion approach, which is the most commonly used today. We then use the links between noise injection and the Nadaraya-Watson regressor to delimit the conditions under which the heuristic applies. In addition, we propose two modifications that extend its scope to a wider class of problems, i.e. irregularly spaced and high-dimensional data. Finally, we validate our approach by comparing the performance of different regressors on an application to data from a glass manufacturing process.
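A minimal sketch of the noise-injection heuristic, assuming a scikit-learn MLPRegressor trained by repeated partial_fit calls on freshly corrupted inputs; the thesis predates this tooling, so this only illustrates the idea of noise level acting as a complexity-control knob.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy regression problem standing in for the glass-process data.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

net = MLPRegressor(hidden_layer_sizes=(20,), solver="sgd", learning_rate_init=0.01)
sigma = 0.3   # noise level: the effective complexity-control parameter here
for epoch in range(200):
    # Noise injection: every pass sees freshly corrupted copies of the inputs,
    # which acts as a smoothing regularizer (cf. the Nadaraya-Watson connection).
    X_noisy = X + sigma * rng.normal(size=X.shape)
    net.partial_fit(X_noisy, y)

print("MSE on clean inputs:", float(np.mean((net.predict(X) - y) ** 2)))
```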
10

Stan, Octavian. "New recursive algorithms for training feedforward multilayer perceptrons." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13534.


Books on the topic "Perceptrons"

1

Murty, M. N., and Rashmi Raghava. Support Vector Machines and Perceptrons. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41063-0.

2

Minsky, Marvin Lee, and Seymour A. Papert. Perceptrons: An Introduction to Computational Geometry. Expanded ed. Cambridge, Massachusetts: MIT Press, 1988.

3

Gorelik, A. L. Sovremennoe sostoi͡a︡nie problemy raspoznavanii͡a︡: Nekotorye aspekty. Moskva: "Radio i svi͡a︡zʹ", 1985.

4

Bielecki, Andrzej. Models of Neurons and Perceptrons: Selected Problems and Challenges. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-90140-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Niemiro, Wojciech. Statystyczne własnosci metody minimalizacji perceptronowej funkcji kryterialnej w dyskryminacji liniowej. Warszawa: In-t Biocybernetyki i Inżynierii Biomedycznej, 2000.

6

Ma, Zhe. Explanation by general rules extracted from trained multi-layer perceptrons. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1996.

7

International Conference on Information Acquisition (2004: Hefei, China). ICIA 2004: Proceedings of the 2004 International Conference on Information Acquisition, June 21–25, 2004, Hefei, China. Edited by Mei Tao and the International Association of Information Acquisition. Piscataway, N.J.: IEEE, 2004.

8

Peeling, S. M. Experiments in isolated digit recognition using the multi-layer perceptron. [London]: HMSO, 1987.

9

Banks, Stephen P. Can Perceptrons find Lyapunov functions?: An algorithmic approach to systems stability. Sheffield: University of Sheffield, Dept. of Control Engineering, 1989.

10

Glaz, A. B. Parametricheskai͡a︡ i strukturnai͡a︡ adaptat͡s︡ii͡a︡ reshai͡u︡shchikh pravil v zadachakh raspoznavanii͡a︡. Riga: "Zinatne", 1988.


Book chapters on the topic "Perceptrons"

1

Du, Ke-Lin, and M. N. S. Swamy. "Perceptrons." In Neural Networks and Statistical Learning, 67–81. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5571-3_3.

2

Nauck, Detlef, Frank Klawonn, and Rudolf Kruse. "Perceptrons." In Neuronale Netze und Fuzzy-Systeme, 39–57. Wiesbaden: Vieweg+Teubner Verlag, 1994. http://dx.doi.org/10.1007/978-3-322-85993-8_4.

3

Du, Ke-Lin, and M. N. S. Swamy. "Perceptrons." In Neural Networks and Statistical Learning, 81–95. London: Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_4.

4

Silaparasetty, Vinita. "Perceptrons." In Deep Learning Projects Using TensorFlow 2, 49–69. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5802-6_2.

5

Peters, H. J. M. "Perceptrons." In Lecture Notes in Computer Science, 67–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0027023.

6

Picton, Phil. "Perceptrons." In Introduction to Neural Networks, 25–45. London: Macmillan Education UK, 1994. http://dx.doi.org/10.1007/978-1-349-13530-1_3.

7

Nauck, Detlef, Frank Klawonn, and Rudolf Kruse. "Perceptrons." In Neuronale Netze und Fuzzy-Systeme, 39–57. Wiesbaden: Vieweg+Teubner Verlag, 1996. http://dx.doi.org/10.1007/978-3-663-10898-6_4.

8

Nauck, Detlef, Frank Klawonn, and Rudolf Kruse. "Multilayer-Perceptrons." In Neuronale Netze und Fuzzy-Systeme, 71–94. Wiesbaden: Vieweg+Teubner Verlag, 1994. http://dx.doi.org/10.1007/978-3-322-85993-8_6.

9

García, Daniel, Ana González, and José R. Dorronsoro. "Convex Perceptrons." In Intelligent Data Engineering and Automated Learning – IDEAL 2006, 578–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11875581_70.

10

Kruse, Rudolf, Christian Borgelt, Christian Braune, Sanaz Mostaghim, and Matthias Steinbrecher. "Multilayer Perceptrons." In Texts in Computer Science, 47–92. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-7296-3_5.


Conference papers on the topic "Perceptrons"

1

Saromo, Daniel, Elizabeth Villota, and Edwin Villanueva. "Auto-Rotating Perceptrons." In LatinX in AI at Neural Information Processing Systems Conference 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai2019120826.

Abstract:
This paper proposes an improved design of the perceptron unit to mitigate the vanishing gradient problem. This nuisance appears when training deep multilayer perceptron networks with bounded activation functions. The new neuron design, named auto-rotating perceptron (ARP), has a mechanism to ensure that the node always operates in the dynamic region of the activation function, by avoiding saturation of the perceptron. The proposed method does not change the inference structure learned at each neuron. We test the effect of using ARP units in some network architectures which use the sigmoid activation function. The results support our hypothesis that neural networks with ARP units can achieve better learning performance than equivalent models with classic perceptrons.
2

Bueno, Felipe Roberto, and Peter Sussner. "Fuzzy Morphological Perceptrons and Hybrid Fuzzy Morphological/Linear Perceptrons." In The 11th International FLINS Conference (FLINS 2014). World Scientific, 2014. http://dx.doi.org/10.1142/9789814619998_0120.

3

Xiang, Xuyan, Yingchun Deng, and Xiangqun Yang. "Spike-Rate Perceptrons." In 2008 Fourth International Conference on Natural Computation. IEEE, 2008. http://dx.doi.org/10.1109/icnc.2008.556.

4

Vucetic, Slobodan, Vladimir Coric, and Zhuang Wang. "Compressed Kernel Perceptrons." In 2009 Data Compression Conference (DCC). IEEE, 2009. http://dx.doi.org/10.1109/dcc.2009.75.

5

Ou, Jun, and Yujian Li. "Two-Dimensional Perceptrons." In the 2018 2nd International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3297156.3297213.

6

McDonnell, John R., and Donald E. Waagen. "Evolving recurrent perceptrons." In Optical Engineering and Photonics in Aerospace Sensing, edited by Dennis W. Ruck. SPIE, 1993. http://dx.doi.org/10.1117/12.152634.

7

Huang, Kou-Yuan. "Sequential classification by perceptrons and application to net pruning of multilayer perceptron." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374226.

8

Jain, Brijnesh. "Margin Perceptrons for Graphs." In 2014 22nd International Conference on Pattern Recognition (ICPR). IEEE, 2014. http://dx.doi.org/10.1109/icpr.2014.661.

9

Lee, Chung-Nim, and Seung-Cheol Goh. "Perceptrons for image recognition." In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170677.

10

Xiang, Xuyan, Yingchun Deng, and Xiangqun Yang. "Extended Spike-Rate Perceptrons." In 2009 WRI World Congress on Computer Science and Information Engineering. IEEE, 2009. http://dx.doi.org/10.1109/csie.2009.470.


Reports on the topic "Perceptrons"

1

Chen, B., T. Hickling, M. Krnjajic, W. Hanley, G. Clark, J. Nitao, D. Knapp, L. Hiller, and M. Mugge. Multi-Layer Perceptrons and Support Vector Machines for Detection Problems with Low False Alarm Requirements: an Eight-Month Progress Report. Office of Scientific and Technical Information (OSTI), January 2007. http://dx.doi.org/10.2172/922310.

2

Vurkaç, Mehmet. Prestructuring Multilayer Perceptrons based on Information-Theoretic Modeling of a Partido-Alto-based Grammar for Afro-Brazilian Music: Enhanced Generalization and Principles of Parsimony, including an Investigation of Statistical Paradigms. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.384.

3

Raychev, Nikolay. Mathematical foundations of neural networks. Implementing a perceptron from scratch. Web of Open Science, August 2020. http://dx.doi.org/10.37686/nsr.v1i1.74.

4

Kirichek, Galina, Vladyslav Harkusha, Artur Timenko, and Nataliia Kulykovska. System for detecting network anomalies using a hybrid of an uncontrolled and controlled neural network. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3743.

Abstract:
This article presents a method for detecting attacks and anomalies based on training with normal and attacking packets, respectively. The method used to learn to detect attacks combines an uncontrolled (unsupervised) and a controlled (supervised) neural network. In the unsupervised network, attacks are classified into smaller categories, taking their features into account, using a self-organizing map. To manage the clusters, a neural network based on the back-propagation method is used. We use PyBrain as the main framework for designing, developing and training the perceptron. This framework offers a sufficient number of solutions and algorithms for training, designing and testing various types of neural networks. The software architecture is presented using a procedural-object approach. Because there is no need to save intermediate results of the program (after training, the entire perceptron is stored in a file), all the progress of learning is stored in ordinary files on the hard disk.
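The following sketch mirrors the two-stage idea (unsupervised clustering followed by a supervised, backprop-trained network) with scikit-learn, using KMeans as a simple stand-in for the self-organizing map and MLPClassifier in place of PyBrain; the data are synthetic placeholders, not real network traffic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Toy "packet feature" data: normal traffic vs. attack traffic.
normal = rng.normal(0.0, 1.0, size=(600, 8))
attacks = rng.normal(3.0, 1.5, size=(300, 8))
X = np.vstack([normal, attacks])
y = np.array([0] * 600 + [1] * 300)          # 0 = normal, 1 = attack

# Stage 1 (unsupervised): cluster traffic into finer-grained categories.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Stage 2 (supervised): a backprop-trained perceptron network that sees the
# raw features plus a one-hot encoding of the cluster assignment.
X_aug = np.hstack([X, np.eye(4)[clusters]])
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```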
5

Alwan, Iktimal, Dennis D. Spencer, and Rafeed Alkawadri. Comparison of Machine Learning Algorithms in Sensorimotor Functional Mapping. Progress in Neurobiology, December 2023. http://dx.doi.org/10.60124/j.pneuro.2023.30.03.

Abstract:
Objective: To compare the performance of popular machine learning (ML) algorithms in mapping the sensorimotor cortex (SM) and identifying the anterior lip of the central sulcus (CS). Methods: We evaluated support vector machines (SVMs), random forest (RF), decision trees (DT), single-layer perceptron (SLP), and multilayer perceptron (MLP) against standard logistic regression (LR) in identifying the SM cortex, employing validated features from six minutes of NREM sleep icEEG data and applying standard common hyperparameters and 10-fold cross-validation. Each algorithm was tested using vetted features based on the statistical significance of classical univariate analysis (p<0.05) and an extended set of 17 features representing power/coherence of different frequency bands, entropy, and interelectrode-based distance. The analysis was performed before and after weight adjustment for imbalanced data (w). Results: 7 subjects and 376 contacts were included. Before optimization, ML algorithms performed comparably employing conventional features (median CS accuracy: 0.89, IQR [0.88-0.9]). After optimization, neural networks outperformed the others in terms of accuracy (MLP: 0.86), area under the curve (AUC) (SLPw, MLPw, MLP: 0.91), recall (SLPw: 0.82, MLPw: 0.81), precision (SLPw: 0.84), and F1-scores (SLPw: 0.82). SVM achieved the best specificity. Extending the number of features and adjusting the weights improved recall, precision, and F1-scores by 48.27%, 27.15%, and 39.15%, respectively, with gains or no significant losses in specificity and AUC across CS and Function (correlation r=0.71 between the two clinical scenarios in all performance metrics, p<0.001). Interpretation: Computational passive sensorimotor mapping is feasible and reliable. Feature extension and weight adjustments improve performance and counterbalance the accuracy paradox. Optimized neural networks outperform other ML algorithms even in binary classification tasks. The best-performing models and the MATLAB® routine employed in signal processing are available to the public at (Link 1).
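A hedged sketch of this style of comparison using scikit-learn, with synthetic imbalanced data, class-weight adjustment and 10-fold cross-validation; the features, models and hyperparameters are generic placeholders rather than the study's icEEG setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic imbalanced stand-in for the contact-level feature matrix.
X, y = make_classification(n_samples=400, n_features=17, weights=[0.85, 0.15],
                           random_state=0)

models = {
    "LR":  LogisticRegression(max_iter=1000, class_weight="balanced"),
    "SVM": SVC(class_weight="balanced"),
    "RF":  RandomForestClassifier(n_estimators=200, class_weight="balanced"),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```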
6

Rivera-Casillas, Peter, and Ian Dettwiller. Neural Ordinary Differential Equations for rotorcraft aerodynamics. Engineer Research and Development Center (U.S.), April 2024. http://dx.doi.org/10.21079/11681/48420.

Abstract:
High-fidelity computational simulations of aerodynamics and structural dynamics on rotorcraft are essential for helicopter design, testing, and evaluation. These simulations usually entail a high computational cost even with modern high-performance computing resources. Reduced order models can significantly reduce the computational cost of simulating rotor revolutions. However, reduced order models are less accurate than traditional numerical modeling approaches, making them unsuitable for research and design purposes. This study explores the use of a new modified Neural Ordinary Differential Equation (NODE) approach as a machine learning alternative to reduced order models in rotorcraft applications—specifically to predict the pitching moment on a rotor blade section from an initial condition, mach number, chord velocity and normal velocity. The results indicate that NODEs cannot outperform traditional reduced order models, but in some cases they can outperform simple multilayer perceptron networks. Additionally, the mathematical structure provided by NODEs seems to favor time-dependent predictions. We demonstrate how this mathematical structure can be easily modified to tackle more complex problems. The work presented in this report is intended to establish an initial evaluation of the usability of the modified NODE approach for time-dependent modeling of complex dynamics over seen and unseen domains.
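To make the NODE structure concrete, here is a minimal, untrained sketch in plain Python/NumPy: a small randomly initialized network defines dy/dt, and the forward pass integrates it from an initial condition with explicit Euler steps. The report's models are trained and would use a proper ODE solver; the inputs below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny randomly initialized MLP f(t, y, p) that defines the ODE dy/dt = f(t, y, p).
W1 = rng.normal(scale=0.5, size=(16, 3))   # inputs: [y, t, flow parameter]
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(1, 16))
b2 = np.zeros(1)

def f(t, y, param):
    h = np.tanh(W1 @ np.array([y, t, param]) + b1)
    return float(W2 @ h + b2)

def node_forward(y0, t_grid, param):
    """Forward pass of a NODE via explicit Euler: y_{k+1} = y_k + dt * f(t_k, y_k)."""
    ys = [y0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        ys.append(ys[-1] + (t1 - t0) * f(t0, ys[-1], param))
    return np.array(ys)

t = np.linspace(0.0, 1.0, 50)
trajectory = node_forward(y0=0.1, t_grid=t, param=0.7)   # a time-dependent prediction
print(trajectory[:5])
```

In practice the network weights would be fitted by backpropagating through the integrator, which is what gives the NODE its time-dependent modeling structure.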
7

Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2320.

Abstract:
Nearly 5,000 people are killed and more than 418,000 are injured in weather-related traffic incidents each year. Assessments of the effectiveness of statistical models applied to crash severity prediction compared to machine learning (ML) and deep learning techniques (DL) help researchers and practitioners know what models are most effective under specific conditions. Given the class imbalance in crash data, the synthetic minority over-sampling technique for nominal (SMOTE-N) data was employed to generate synthetic samples for the minority class. The ordered logit model (OLM) and the ordered probit model (OPM) were evaluated as statistical models, while random forest (RF) and XGBoost were evaluated as ML models. For DL, multi-layer perceptron (MLP) and TabNet were evaluated. The performance of these models varied across severity levels, with property damage only (PDO) predictions performing the best and severe injury predictions performing the worst. The TabNet model performed best in predicting severe injury and PDO crashes, while RF was the most effective in predicting moderate injury crashes. However, all models struggled with severe injury classification, indicating the potential need for model refinement and exploration of other techniques. Hence, the choice of model depends on the specific application and the relative costs of false negatives and false positives. This conclusion underscores the need for further research in this area to improve the prediction accuracy of severe and moderate injury incidents, ultimately improving available data that can be used to increase road safety.
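A sketch of the class-rebalancing step followed by one of the evaluated classifiers, using imbalanced-learn's continuous-feature SMOTE as a stand-in for the nominal variant (SMOTE-N) named in the report, and a random forest on synthetic stand-in data.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Imbalanced stand-in for crash-severity classes (0 = PDO, 1 = injury, 2 = severe).
X, y = make_classification(n_samples=3000, n_features=12, n_informative=6,
                           n_classes=3, weights=[0.80, 0.17, 0.03],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority classes on the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
print("class counts after SMOTE:", Counter(y_res))

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))
```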
8

Arhin, Stephen, Babin Manandhar, Hamdiat Baba Adam, and Adam Gatiba. Predicting Bus Travel Times in Washington, DC Using Artificial Neural Networks (ANNs). Mineta Transportation Institute, April 2021. http://dx.doi.org/10.31979/mti.2021.1943.

Abstract:
Washington, DC is ranked second among US cities in terms of public transit commuters, with approximately 9% of the working population using Washington Metropolitan Area Transit Authority (WMATA) Metrobuses to commute. Deducing accurate travel times for these Metrobuses is an important task for transit authorities seeking to provide reliable service to their patrons. This study, using Artificial Neural Networks (ANNs), developed prediction models for transit buses to assist decision-makers in improving service quality and patronage. For this study, we used six months of Automatic Vehicle Location (AVL) and Automatic Passenger Counting (APC) data for six WMATA bus routes operating in Washington, DC. We developed regression models and ANN models for predicting bus travel times for different peak periods (AM, Mid-Day and PM). Our analysis included variables such as the number of served bus stops, the length of the route between bus stops, the average number of passengers on the bus, the average dwell time of buses, and the number of intersections between bus stops. We obtained ANN models for travel times by using an approximation technique incorporating two separate algorithms: Quasi-Newton and Levenberg-Marquardt. The training strategy for the neural network models involved feed-forward and error back-propagation processes that minimized the generated errors. We also evaluated the models with a comparison of the normalized squared errors (NSE). From the results, we observed that bus travel times and dwell times at bus stops generally increased over the course of the day. We obtained travel time equations for buses for the AM, Mid-Day and PM peaks. The lowest NSE for the AM, Mid-Day and PM peak periods corresponded to training processes using the Quasi-Newton algorithm, which had 3, 2 and 5 perceptron layers, respectively. These prediction models could be adopted by transit agencies to provide patrons with accurate travel time information at bus stops or online.
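A minimal sketch of an MLP travel-time regressor in scikit-learn, using the 'lbfgs' quasi-Newton solver as a rough analogue of the study's Quasi-Newton training (Levenberg-Marquardt is not available in scikit-learn); the predictors and data below are hypothetical placeholders, not WMATA AVL/APC data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 1000

# Hypothetical predictors mirroring the study's variables.
X = np.column_stack([
    rng.integers(5, 40, n),        # number of served bus stops
    rng.uniform(1.0, 15.0, n),     # length of route between stops (km)
    rng.uniform(2.0, 45.0, n),     # average number of passengers on board
    rng.uniform(5.0, 60.0, n),     # average dwell time (s)
    rng.integers(2, 30, n),        # intersections between stops
])
# Synthetic travel time (minutes) with noise, for illustration only.
y = (1.2 * X[:, 0] + 2.0 * X[:, 1] + 0.05 * X[:, 2] + 0.1 * X[:, 3]
     + 0.3 * X[:, 4] + rng.normal(0, 3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 'lbfgs' is a quasi-Newton solver, analogous in spirit to the study's
# Quasi-Newton training strategy.
net = MLPRegressor(hidden_layer_sizes=(8, 8), solver="lbfgs", max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print("test RMSE (min):", mean_squared_error(y_te, net.predict(X_te)) ** 0.5)
```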