Academic literature on the topic 'Perceptron'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Perceptron.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Perceptron"

1

Wang, Sheng-De, and Tsong-Chih Hsu. "Perceptron–perceptron net." Pattern Recognition Letters 19, no. 7 (May 1998): 559–68. http://dx.doi.org/10.1016/s0167-8655(98)00045-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yu, Xin, Mian Xie, Li Xia Tang, and Chen Yu Li. "Learning Algorithm for Fuzzy Perceptron with Max-Product Composition." Applied Mechanics and Materials 687-691 (November 2014): 1359–62. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1359.

Full text
Abstract:
Fuzzy neural networks are powerful computational models that integrate fuzzy systems with neural networks, and the fuzzy perceptron is one such network. In this paper, a learning algorithm is proposed for a fuzzy perceptron with max-product composition, whose topological structure is the same as that of conventional linear perceptrons. The inner operations involved in the working process of this fuzzy perceptron are based on max-product logical operations rather than conventional multiplication, summation, etc. To illustrate the finite convergence of the proposed algorithm, some numerical experiments are provided.
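The max-product composition the abstract describes replaces the perceptron's usual weighted sum with a maximum of elementwise products. A minimal sketch of that composition (the values, threshold, and function names are illustrative; the paper's learning rule is not reproduced here):

```python
import numpy as np

def max_product_output(weights, x, threshold=0.5):
    """Fuzzy perceptron activation under max-product composition:
    take the maximum of the elementwise products instead of their sum."""
    s = np.max(weights * x)
    return 1 if s >= threshold else 0

# Membership-style inputs and weights in [0, 1] (illustrative values).
w = np.array([0.2, 0.9, 0.5])
x = np.array([0.7, 0.6, 1.0])
y = max_product_output(w, x)   # max(0.14, 0.54, 0.50) = 0.54 -> fires
```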
APA, Harvard, Vancouver, ISO, and other styles
3

Banda, Peter, Christof Teuscher, and Matthew R. Lakin. "Online Learning in a Chemical Perceptron." Artificial Life 19, no. 2 (April 2013): 195–219. http://dx.doi.org/10.1162/artl_a_00105.

Full text
Abstract:
Autonomous learning implemented purely by means of a synthetic chemical system has not been previously realized. Learning promotes reusability and minimizes the system design to simple input-output specification. In this article we introduce a chemical perceptron, the first full-featured implementation of a perceptron in an artificial (simulated) chemistry. A perceptron is the simplest system capable of learning, inspired by the functioning of a biological neuron. Our artificial chemistry is deterministic and discrete-time, and follows Michaelis-Menten kinetics. We present two models, the weight-loop perceptron and the weight-race perceptron, which represent two possible strategies for a chemical implementation of linear integration and threshold. Both chemical perceptrons can successfully identify all 14 linearly separable two-input logic functions and maintain high robustness against rate-constant perturbations. We suggest that DNA strand displacement could, in principle, provide an implementation substrate for our model, allowing the chemical perceptron to perform reusable, programmable, and adaptable wet biochemical computing.
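Substrate aside, the task the chemical perceptron solves is classic perceptron learning. For reference, a standard Rosenblatt perceptron trained on one of the 14 linearly separable two-input logic functions (AND here; all names and settings are illustrative, not the chemical implementation):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Classic Rosenblatt perceptron with a bias input appended."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias column
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi      # update only on mistakes
    return w

# Two-input AND, one of the linearly separable two-input logic functions.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])
w = train_perceptron(X, y_and)
preds = [1 if np.append(xi, 1) @ w > 0 else 0 for xi in X]
```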
APA, Harvard, Vancouver, ISO, and other styles
4

Toh, H. S. "Weight Configurations of Trained Perceptrons." International Journal of Neural Systems 4, no. 3 (September 1993): 231–46. http://dx.doi.org/10.1142/s0129065793000195.

Full text
Abstract:
We strive to predict the function mapping and rules performed by a trained perceptron by studying its weights. We derive a few properties of the trained weights and show how the perceptron's representation of knowledge, rules, and functions depends on these properties. Two types of perceptrons are studied: one with continuous inputs and one hidden layer, the other a simple binary classifier with Boolean inputs and no hidden units.
APA, Harvard, Vancouver, ISO, and other styles
5

Nadal, J. P., and N. Parga. "Duality Between Learning Machines: A Bridge Between Supervised and Unsupervised Learning." Neural Computation 6, no. 3 (May 1994): 491–508. http://dx.doi.org/10.1162/neco.1994.6.3.491.

Full text
Abstract:
We exhibit a duality between two perceptrons that allows us to compare the theoretical analysis of supervised and unsupervised learning tasks. The first perceptron has one output and is asked to learn a classification of p patterns. The second (dual) perceptron has p outputs and is asked to transmit as much information as possible on a distribution of inputs. We show in particular that the maximum information that can be stored in the couplings for the supervised learning task is equal to the maximum information that can be transmitted by the dual perceptron.
APA, Harvard, Vancouver, ISO, and other styles
6

Adams, A., and S. J. Bye. "New perceptron." Electronics Letters 28, no. 3 (1992): 321. http://dx.doi.org/10.1049/el:19920199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Martinelli, G., and F. M. Mascioli. "Cascade perceptron." Electronics Letters 28, no. 10 (May 7, 1992): 947–49. http://dx.doi.org/10.1049/el:19920600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ringienė, Laura, and Gintautas Dzemyda. "Specialios struktūros daugiasluoksnis perceptronas daugiamačiams duomenims vizualizuoti." Informacijos mokslai 50 (January 1, 2009): 358–64. http://dx.doi.org/10.15388/im.2009.0.3210.

Full text
Abstract:
A combination of radial basis functions and a multilayer perceptron is proposed and investigated for visualizing multidimensional data. The proposed visualization approach comprises reducing the dimensionality of the multidimensional data with radial basis functions, clustering the data, determining numerical values that characterize each cluster, and visualizing the data in the last hidden layer of the artificial neural network.
Special Multilayer Perceptron for Multidimensional Data Visualization. Laura Ringienė, Gintautas Dzemyda. Summary: In this paper a special feed-forward neural network, consisting of a radial basis function layer and a multilayer perceptron, is presented. The multilayer perceptron has been proposed and investigated for multidimensional data visualization. The proposed visualization approach includes data clustering, determining the parameters of the radial basis functions, and forming the data set to train the multilayer perceptron. The outputs of the last hidden layer are assigned as coordinates of the visualized points.
APA, Harvard, Vancouver, ISO, and other styles
9

Kukolj, Dragan D., Miroslava T. Berko-Pusic, and Branislav Atlagic. "Experimental design of supervisory control functions based on multilayer perceptrons." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 5 (November 2001): 425–31. http://dx.doi.org/10.1017/s0890060401155058.

Full text
Abstract:
This article presents the results of research concerning possibilities of applying multilayer perceptron type of neural network for fault diagnosis, state estimation, and prediction in the gas pipeline transmission network. The influence of several factors on accuracy of the multilayer perceptron was considered. The emphasis was put on the multilayer perceptrons' function as a state estimator. The choice of the most informative features, the amount and sampling period of training data sets, as well as different configurations of multilayer perceptrons were analyzed.
APA, Harvard, Vancouver, ISO, and other styles
10

Elizalde, E., and S. Gomez. "Multistate perceptrons: learning rule and perceptron of maximal stability." Journal of Physics A: Mathematical and General 25, no. 19 (October 7, 1992): 5039–45. http://dx.doi.org/10.1088/0305-4470/25/19/016.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Perceptron"

1

Vieira, Douglas Alexandre Gomes. "Rede perceptron com camadas paralelas (PLP - Parallel Layer Perceptron)." Universidade Federal de Minas Gerais, 2006. http://hdl.handle.net/1843/BUOS-8CTH6W.

Full text
Abstract:
This work presents a novel approach to structural risk minimization (SRM) applied to a general machine learning problem. The formulation is based on the fundamental concept that supervised learning is a bi-objective optimization problem in which two conflicting objectives should be minimized: the training error, or empirical risk (Remp), and the machine complexity. A general Q-norm-like method to compute the machine complexity is presented, which can be used to model and compare most of the learning machines found in the literature. The main advantage of the proposed complexity measure is that it is a simple method to separate the linear and non-linear complexity influences, leading to a better understanding of the learning process. A novel learning machine, the Parallel Layer Perceptron (PLP) network, is proposed here using a training algorithm based on these definitions and structures of learning, the Minimum Gradient Method (MGM). The combination of the PLP with the MGM (PLP-MGM) is carried out using a reliable least-squares procedure and is the main contribution of this work.
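The bi-objective view described in the abstract is commonly handled by scalarization: minimize a weighted sum of the empirical risk and a complexity term. A toy sketch of that trade-off (the squared norm merely stands in for the thesis's Q-norm-like measure; this is not the PLP-MGM procedure itself):

```python
import numpy as np

def bi_objective_loss(w, X, y, lam):
    """Weighted-sum scalarization of the two conflicting objectives:
    empirical risk (here, mean squared training error) plus a
    complexity penalty (here, the squared norm of the weights)."""
    empirical_risk = np.mean((X @ w - y) ** 2)
    complexity = np.sum(w ** 2)
    return empirical_risk + lam * complexity

# Larger lam trades training error for a simpler (smaller-norm) machine.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0])
loss = bi_objective_loss(np.array([1.0, 1.0]), X, y, lam=0.1)  # 0.0 + 0.1 * 2
```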
APA, Harvard, Vancouver, ISO, and other styles
2

Tsampouka, Petroula. "Perceptron-like large margin classifiers." Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/264242/.

Full text
Abstract:
We address the problem of binary linear classification with emphasis on algorithms that lead to separation of the data with large margins. We motivate large margin classification from statistical learning theory and review two broad categories of large margin classifiers, namely Support Vector Machines which operate in a batch setting and Perceptron-like algorithms which operate in an incremental setting and are driven by their mistakes. We subsequently examine in detail the class of Perceptron-like large margin classifiers. The algorithms belonging to this category are further classified on the basis of criteria such as the type of the misclassification condition or the behaviour of the effective learning rate, i.e. the ratio of the learning rate to the length of the weight vector, as a function of the number of mistakes. Moreover, their convergence is examined with a prominent role in such an investigation played by the notion of stepwise convergence which offers the possibility of a rather unified approach. Whenever possible, mistake bounds implying convergence in a finite number of steps are derived and discussed. Two novel families of approximate maximum margin algorithms called CRAMMA and MICRA are introduced and analysed theoretically. In addition, in order to deal with linearly inseparable data a soft margin approach for Perceptron-like large margin classifiers is discussed. Finally, a series of experiments on artificial as well as real-world data employing the newly introduced algorithms are conducted allowing a detailed comparative assessment of their performance with respect to other well-known Perceptron-like large margin classifiers and state-of-the-art Support Vector Machines.
APA, Harvard, Vancouver, ISO, and other styles
3

Shadafan, Raed Salem. "Sequential training of multilayer perceptron classifiers." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dunne, R. A. "Multi-layer perceptron models for classification." PhD thesis, Murdoch University, 2003. https://researchrepository.murdoch.edu.au/id/eprint/50257/.

Full text
Abstract:
This thesis concerns the Multi-layer Perceptron (MLP) model, one of a variety of neural network models that have come into wide prominence since the mid 1980s for the classification of individuals into pre-defined classes based on a vector of individual measurements. Each discipline in which the MLP model has had influence, including computing, electrical engineering and psychology, has recast the model into its own language and imbued it with its own concerns. This divergence of terminologies has made the literature somewhat impenetrable but has also led to an appreciation of other disciplines' priorities and interests. The major aim of the thesis has been to bring the MLP model within the framework of statistics. We have two aims here: one is to make the MLP model more intelligible to statisticians; and the other is to bring the insights of statistics to the MLP model. A statistical modeling approach can make valuable contributions, ranging from small but important clarifications, such as clearing up the confusion in the MLP literature between the model and the methodology for fitting the model, to much larger insights such as determining the robustness of the model in the event of outlying or atypical data. We provide a treatment of the relationship of the MLP classifier to more familiar statistical models and of the various fitting and model selection methodologies currently used for MLP models. A description of the influence curves of the MLP is provided, leading to both an understanding of how the MLP model relates to logistic regression (and to robust versions of logistic regression) and to a proposal for a robust MLP model. Practical problems associated with the fitting of MLP models, from the effects of scaling of the input data to the effects of various penalty terms, are also considered. The MLP model has a variable architecture with the major source of variation being the number of hidden layer processing units. A direct method is given for determining this in multi-class problems where the pairwise decision boundary is linear in the feature space. Finally, in applications such as remote sensing each vector of measurements or pixel contains contextual information about the neighboring pixels. The MLP model is modified to incorporate this contextual information into the classification procedure.
APA, Harvard, Vancouver, ISO, and other styles
5

Rouleau, Christian. "Perceptron sous forme duale tronquée et variantes." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24492/24492.pdf.

Full text
Abstract:
Machine learning is a branch of artificial intelligence and is used in many fields of science. It is divided into three main categories: supervised, unsupervised, and reinforcement learning. This master's thesis concerns only supervised learning, and more precisely the classification of data. One of the first classification algorithms, the perceptron, was proposed in the 1960s. We propose a variant of this algorithm, which we call the truncated dual perceptron, that allows the algorithm to stop according to a new criterion. We compare this new variant with other variants of the perceptron. Moreover, we use the truncated dual perceptron to build more complex classifiers such as Bayes Point Machines.
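The dual form of the perceptron that the thesis builds on keeps, for each training example, a count of how often it triggered an update; the weight vector is then an implicit linear combination of the examples. A minimal sketch of the standard dual perceptron (the thesis's truncation criterion is not reproduced here; data values are illustrative):

```python
import numpy as np

def dual_perceptron(X, y, epochs=50):
    """Perceptron in dual form: alpha[i] counts the updates triggered
    by example i, so the weights are implicitly
    w = sum_i alpha[i] * y[i] * X[i], and training only ever needs
    inner products between examples (the Gram matrix)."""
    alpha = np.zeros(len(X))
    G = X @ X.T                       # Gram matrix of inner products
    for _ in range(epochs):
        for i in range(len(X)):
            if y[i] * np.sum(alpha * y * G[:, i]) <= 0:
                alpha[i] += 1         # mistake on example i
    return alpha

# Toy separable data with labels in {-1, +1}.
X = np.array([[1.0, 1.0], [2.0, 0.5], [-1.0, -1.0], [-0.5, -2.0]])
y = np.array([1, 1, -1, -1])
alpha = dual_perceptron(X, y)
w = (alpha * y) @ X                   # recover the primal weights
```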
APA, Harvard, Vancouver, ISO, and other styles
6

Power, Phillip David. "Non-linear multi-layer perceptron channel equalisation." Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Auld, Thomas James. "Bayesian applications of multilayer perceptron neural networks." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kelby, Robin J. "Formalized Generalization Bounds for Perceptron-Like Algorithms." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1594805966855804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Midhall, Ruben, and Amir Parmbäck. "Utvärdering av Multilayer Perceptron modeller för underlagsdetektering." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43469.

Full text
Abstract:
The number of devices connected to the internet, the Internet of Things (IoT), is constantly increasing. By 2035, there are estimated to be 1,000 billion Internet of Things devices in the world. As the number of devices increases, so does the load on the internet networks to which they are connected. The Internet of Things devices in our environment collect data that describe our physical surroundings and send them to the cloud for computation. To reduce the load on the networks, the calculations are instead done on the IoT devices themselves, so that no data needs to be sent over the internet; this is called edge computing. Edge computing, however, raises other challenges. IoT devices are often resource-constrained, with limited computing capacity, so machine learning models intended to run with edge computing must be designed around the resources available on the device. In this work, we evaluated different multilayer perceptron models for microcontrollers in a number of experiments. The machine learning models were designed to detect road surfaces. The goal was to identify how different parameters affect the machine learning systems; we tried to maximize performance and minimize the physical memory required by the models, and the microcontroller had no access to the internet. The models were intended to run on a microcontroller on the edge. The data were collected using an accelerometer integrated in a microcontroller mounted on a bicycle. The study evaluates two machine learning systems: a combination of binary classification models, and a multiclass classification system developed in a previous thesis. One of the systems achieves an accuracy of 93.1% for the classification of 3 road surfaces. The work also evaluates how much physical memory the various machine learning systems require; they needed between 1.78 kB and 5.71 kB of physical memory.
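For memory figures like those quoted above, the dominant cost of an MLP on a microcontroller is its parameter storage, which can be estimated directly from the layer sizes. A back-of-the-envelope sketch (the architecture below is hypothetical, not the one from the thesis):

```python
def mlp_memory_bytes(layer_sizes, bytes_per_param=4):
    """Estimate the parameter memory of a fully connected MLP:
    each layer contributes (inputs * outputs) weights plus one bias
    per output unit, stored at bytes_per_param bytes each."""
    params = sum(n_in * n_out + n_out
                 for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
    return params * bytes_per_param

# Hypothetical network: 3 accelerometer features, one hidden layer of
# 16 units, 3 road-surface classes, 32-bit float parameters.
size = mlp_memory_bytes([3, 16, 3])   # (3*16 + 16 + 16*3 + 3) * 4 = 460
```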
APA, Harvard, Vancouver, ISO, and other styles
10

Fassarela, M. S. "Treinamento de Redes Perceptron Usando Janelas Dinâmicas." Universidade Federal do Espírito Santo, 2009. http://repositorio.ufes.br/handle/10/9587.

Full text
Abstract:
In this work we present neural networks and the problem of the bias-variance dilemma. We propose the Window method, to be inserted into the training of supervised networks on noisy data sets. The method has an intrinsic regularizing character, since it seeks to eliminate noise during the training stage, reducing its influence on the adjustment of the network weights. We implement and analyze the method on adaptive logic networks (ALN) and on multilayer perceptron (MLP) networks. Finally, we test the network in applications of function approximation, adaptive filtering, and time series prediction.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Perceptron"

1

Zheng, Gonghui. Design and evaluation of a multi-output-layer perceptron. [s.l: The Author], 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ma, Zhe. A heuristic for general rule extraction from a multilayer perceptron. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Peeling, S. M. Experiments in isolated digit recognition using the multi-layer perceptron. [London]: HMSO, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lont, Jerzy B. Analog CMOS implementation of a multi-layer perceptron with nonlinear synapses. Konstanz: Hartung-Gorre, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Harrison, R. F. The multi-layer perceptron as an aid to the early diagnosis of myocardial infarction. Sheffield: University of Sheffield, Dept. of Control Engineering, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Harrison, R. F. Neural networks, heart attack and Bayesian decisions: An application of the Boltzmann perceptron network. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ma, Zhe. Dynamic query algorithms for human-computer interaction based on information gain and the multi-layer perceptron. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

André, Delorme, and Flückiger Michelangelo 1939-, eds. Perception et réalité: Une introduction à la psychologie des perceptions. Boucherville, Qué: G. Morin, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Beliveau, Marc. Canadian media's perceptions of Asia: Asian media's perception of Canada. Edited by Payrastre Georges, Phillips Susan 1950-, and Asia Pacific Foundation of Canada. [Canada]: Asia Pacific Foundation of Canada, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Berthon, Pierre. Managers' perceptions of their decision-making context: The influence of perception type. Henley: The Management College, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Perceptron"

1

Murty, M. N., and Rashmi Raghava. "Perceptron." In Support Vector Machines and Perceptrons, 27–40. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41063-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "Perceptron." In Encyclopedia of Machine Learning, 773. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Torres, Joaquin J. "Perceptron Learning." In Encyclopedia of Computational Neuroscience, 2239–42. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Manaswi, Navin Kumar. "Multilayer Perceptron." In Deep Learning with Applications Using Python, 45–56. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3516-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shalev-Shwartz, Shai. "Perceptron Algorithm." In Encyclopedia of Algorithms, 1547–50. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4939-2864-4_287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shalev-Shwartz, Shai. "Perceptron Algorithm." In Encyclopedia of Algorithms, 1–5. Boston, MA: Springer US, 2015. http://dx.doi.org/10.1007/978-3-642-27848-8_287-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shalev-Shwartz, Shai. "Perceptron Algorithm." In Encyclopedia of Algorithms, 642–44. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-30162-4_287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rojas, Raúl. "Perceptron Learning." In Neural Networks, 77–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-61068-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Torres, Joaquin J. "Perceptron Learning." In Encyclopedia of Computational Neuroscience, 1–5. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7320-6_679-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Torres, Joaquín J. "Perceptron Learning." In Encyclopedia of Computational Neuroscience, 1–4. New York, NY: Springer New York, 2020. http://dx.doi.org/10.1007/978-1-4614-7320-6_679-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Perceptron"

1

Pupezescu, Valentin. "PULSATING MULTILAYER PERCEPTRON." In eLSE 2016. Carol I National Defence University Publishing House, 2016. http://dx.doi.org/10.12753/2066-026x-16-035.

Full text
Abstract:
Knowledge Discovery in Databases is the process of extracting useful information from data stored in real databases. The process consists of multiple steps, including selecting target data from raw data, preprocessing, data transformation, Data Mining, and interpretation of the mined data. Data Mining is thus one step of the whole process, and it performs one of these tasks: classification, regression, clustering, association rules, summarization, dependency modelling, or change and deviation detection. In these experiments I used one neural network (a multilayer perceptron) that performs the classification task. This paper proposes a functioning model for the classical multilayer perceptron that is a sequential simulation of a Distributed Committee Machine. Committee Machines are groups of neural structures that work in a distributed manner in order to obtain better classification results than individual neural networks. The classical backpropagation algorithm is modified in order to simulate the execution of multiple multilayer perceptrons that run in a sequential manner. The classification was made for three standard data sets: iris1, wine1 and conc1. In my case the backpropagation algorithm still consists of three well-known stages: the feedforward of the input training pattern, the calculation of the associated output error, and the correction of the weights. The proposed model gives the classical backpropagation algorithm a twist: all the weights of the multilayer perceptron are reset and randomly regenerated after a certain number of training epochs. This pulsating effect also prevents the perceptron from getting stuck at poor local minima. This research is useful in the Knowledge Discovery in Databases process because the classification achieves the same performance as a Distributed Committee Machine.
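The reset-and-regenerate schedule described above acts like random restarts: periodically discard the current parameters and keep the best solution found so far, so training cannot stay stuck in a poor local minimum. A toy illustration on a one-dimensional double-well "loss" (fixed restart points for reproducibility; this sketches the restart principle, not the paper's backpropagation variant):

```python
# Double-well objective: a stand-in for a loss surface with a poor
# local minimum near x = +1 and the global minimum near x = -1.
f  = lambda x: (x**2 - 1)**2 + 0.2 * x
df = lambda x: 4 * x * (x**2 - 1) + 0.2

def descend(x, lr=0.01, steps=500):
    """Plain gradient descent from a given starting point."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

# "Pulsating" schedule: throw the parameters away at each restart and
# keep the best solution seen so far.
starts = [1.5, 0.5, -0.5, -1.5]
best = min((descend(x0) for x0 in starts), key=f)
```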
APA, Harvard, Vancouver, ISO, and other styles
2

Ramchoun, H., M. A. Janati Idrissi, Y. Ghanou, and M. Ettaouil. "Multilayer Perceptron." In BDCA'17: 2nd international Conference on Big Data, Cloud and Applications. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3090354.3090427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Karimi, B., T. Baradaran, Kaveh Ashenayi, and James Vogh. "Comparison of sinusoidal perceptron with multilayer classical perceptron." In Midwest - DL tentative, edited by Rudolph P. Guzik, Hans E. Eppinger, Richard E. Gillespie, Mary K. Dubiel, and James E. Pearson. SPIE, 1991. http://dx.doi.org/10.1117/12.25815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Saromo, Daniel, Elizabeth Villota, and Edwin Villanueva. "Auto-Rotating Perceptrons." In LatinX in AI at Neural Information Processing Systems Conference 2019. Journal of LatinX in AI Research, 2019. http://dx.doi.org/10.52591/lxai2019120826.

Full text
Abstract:
This paper proposes an improved design of the perceptron unit to mitigate the vanishing gradient problem. This nuisance appears when training deep multilayer perceptron networks with bounded activation functions. The new neuron design, named auto-rotating perceptron (ARP), has a mechanism to ensure that the node always operates in the dynamic region of the activation function, by avoiding saturation of the perceptron. The proposed method does not change the inference structure learned at each neuron. We test the effect of using ARP units in some network architectures which use the sigmoid activation function. The results support our hypothesis that neural networks with ARP units can achieve better learning performance than equivalent models with classic perceptrons.
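The saturation problem the ARP targets is easy to show numerically: a sigmoid unit's gradient collapses once its pre-activation leaves the roughly linear region. A quick check (illustrative only; the ARP mechanism itself is described in the paper):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
dsigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))   # sigma'(z)

# In the dynamic region the gradient is usable; deep into saturation it
# all but vanishes, which is what stalls learning in deep sigmoid MLPs.
grad_center = dsigmoid(0.0)      # 0.25, the sigmoid's maximum slope
grad_saturated = dsigmoid(10.0)  # essentially zero
```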
APA, Harvard, Vancouver, ISO, and other styles
5

Rauber, Thomas, and Karsten Berns. "Kernel Multilayer Perceptron." In 2011 24th SIBGRAPI Conference on Graphics, Patterns and Images (Sibgrapi). IEEE, 2011. http://dx.doi.org/10.1109/sibgrapi.2011.21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ahmadi, Saba, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita. "The Strategic Perceptron." In EC '21: The 22nd ACM Conference on Economics and Computation. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3465456.3467629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Servedio, Rocco A. "On PAC learning using Winnow, Perceptron, and a Perceptron-like algorithm." In Proceedings of the Twelfth Annual Conference on Computational Learning Theory (COLT '99). New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/307400.307474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bayat, Farnood Merrikh, Xinjie Guo, and Dmitri Strukov. "Exponential-weight multilayer perceptron." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xiang, Xuyan, Yingchun Deng, and Xiangqun Yang. "Second Order Spiking Perceptron." In 2009 WRI Global Congress on Intelligent Systems. IEEE, 2009. http://dx.doi.org/10.1109/gcis.2009.376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang Hong-Qi, Chen Zong-Zhi, and Su Shi-Wei. "RECALL of multilayer perceptron." In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170499.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Perceptron"

1

Raychev, Nikolay. Mathematical foundations of neural networks. Implementing a perceptron from scratch. Web of Open Science, August 2020. http://dx.doi.org/10.37686/nsr.v1i1.74.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kirichek, Galina, Vladyslav Harkusha, Artur Timenko, and Nataliia Kulykovska. System for detecting network anomalies using a hybrid of an uncontrolled and controlled neural network. [n.p.], February 2020. http://dx.doi.org/10.31812/123456789/3743.

Full text
Abstract:
This article presents a method for detecting attacks and anomalies by training on normal and attack packets, respectively. The method used to detect attacks combines an unsupervised and a supervised neural network. In the unsupervised network, attacks are classified into smaller categories according to their features using a self-organizing map. To manage the resulting clusters, a neural network based on the backpropagation method is used. We use PyBrain as the main framework for designing, building and training the perceptron. This framework offers a sufficient number of solutions and algorithms for training, designing and testing various types of neural networks. The software architecture is presented using a procedural-object approach. Because there is no need to save intermediate results of the program (after learning, the entire perceptron is stored in a file), all the progress of learning is kept in ordinary files on the hard disk.
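The two-stage pipeline this abstract describes can be sketched without PyBrain. The sketch below is a hedged simplification: a tiny two-unit self-organizing map clusters packets (a full SOM would also update a decaying neighbourhood around the winner), and a majority-vote labelling of the units stands in for the report's supervised backpropagation stage; the feature names and data are invented for illustration.

```python
import random

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def bmu(weights, x):
    # Best matching unit: the map node closest to the input
    return min(range(len(weights)), key=lambda u: dist2(weights[u], x))

def train_som(data, units=2, epochs=30, lr=0.5):
    # Unsupervised stage: cluster packets by feature similarity.
    random.seed(1)
    w = [[random.random() for _ in data[0]] for _ in range(units)]
    for e in range(epochs):
        rate = lr * (1 - e / epochs)  # decaying learning rate
        for x in data:
            win = bmu(w, x)
            w[win] = [wi + rate * (xi - wi) for wi, xi in zip(w[win], x)]
    return w

def label_units(w, data, labels):
    # Supervised stage (a stand-in for the report's backpropagation
    # network): attach a class label to each unit by majority vote.
    votes = [{} for _ in w]
    for x, y in zip(data, labels):
        v = votes[bmu(w, x)]
        v[y] = v.get(y, 0) + 1
    return [max(v, key=v.get) if v else "normal" for v in votes]

# Hypothetical packet features (request rate, payload entropy) in [0, 1]
normal = [[0.1, 0.2], [0.15, 0.25], [0.2, 0.1]]
attack = [[0.9, 0.8], [0.85, 0.95], [0.95, 0.9]]
data, labels = normal + attack, ["normal"] * 3 + ["attack"] * 3

som = train_som(data)
unit_labels = label_units(som, data, labels)
print(unit_labels[bmu(som, [0.9, 0.85])])
```

Splitting the problem this way lets the unsupervised stage absorb unlabeled traffic, while the supervised stage only has to map a handful of clusters to classes, which is the division of labour the report's hybrid architecture relies on.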
APA, Harvard, Vancouver, ISO, and other styles
3

Buraschi, Daniel, and Dirk Godenau. How does Tenerife society perceive immigration? Observatorio de la Inmigración de Tenerife. Departamento de Geografía e Historia. Universidad de La Laguna. Tenerife, 2020. http://dx.doi.org/10.25145/r.obitfact.2019.15.

Full text
Abstract:
The social perceptions of immigration and the attitudes that Tenerife society has towards immigrants are essential aspects of the dynamics of intercultural coexistence. The Tenerife Immigration Observatory has conducted research showing that, in general terms, society in Tenerife has a positive perception of immigration, although there is a widespread perception of comparative grievance, based on the idea that migrants are treated more favourably by public institutions.
APA, Harvard, Vancouver, ISO, and other styles
4

Domínguez, Roberto. Perceptions of the European Union in Latin America. Fundación Carolina, January 2023. http://dx.doi.org/10.33960/issn-e.1885-9119.dt76en.

Full text
Abstract:
This working paper examines the puzzle of the gaps between the images that the EU projects, voluntarily and involuntarily, and the perceptions of the EU in Latin America. After reviewing some of the debates related to the role of perceptions in public policy and EU Public Diplomacy (EUPD), the paper analyzes some critical developments in global perceptions of the EU based on the study Update of the 2015 Analysis of the Perception of the EU and EU Policies Abroad (2021 Update Study), which assessed the attitudes of the EU in 13 countries. The third section examines some studies on the attitudes of the EU in Latin America, including some contributions from Latinobarometer. The fourth section offers comparative cases of EU perception in Brazil, Mexico, and Colombia based on the findings of the 2021 Update Study. The analysis of each country relies on the interpretation of surveys with some references to the press analysis and interview methods provided in the 2021 Update Study. Each case discusses specific trends in the following areas: visibility, primary descriptors, global economics, and international leadership. Also, it identifies some patterns in perceptions of the EU in social development, climate change, research/technology, development assistance, culture, the case of the critical juncture in the survey (pandemic), and the EU as a normative setter. The final section offers some general trends in the perceptions of the EU in Latin America.
APA, Harvard, Vancouver, ISO, and other styles
5

Cohen, Marion F. Auditory Perception. Fort Belvoir, VA: Defense Technical Information Center, December 1993. http://dx.doi.org/10.21236/ada277414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cohen, Marion F. Auditory Perception. Fort Belvoir, VA: Defense Technical Information Center, October 1997. http://dx.doi.org/10.21236/ada379396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cohen, Marion F. Auditory Perception. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada217012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sperling, George. Visual Motion Perception. Fort Belvoir, VA: Defense Technical Information Center, January 1989. http://dx.doi.org/10.21236/ada210994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Turano, Kathleen A. Visual Motion Perception. Fort Belvoir, VA: Defense Technical Information Center, March 2000. http://dx.doi.org/10.21236/ada375117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Caetano, Ana Paula, Clara Cruz Santos, and Lisete Mónico. Welfare Deservingness in the perspective of public opinion and street-level bureaucrats: a scoping review protocol. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2023. http://dx.doi.org/10.37766/inplasy2023.2.0010.

Full text
Abstract:
Review question / Objective: This scoping review aims to systematize the scientific knowledge about the relationship between public opinion concerning the street-level bureaucrats’ actions and their perceptions about Welfare Deservingness and social protection measures implemented within the framework of the current Welfare State. In a more concrete way, we intend to demonstrate the following assumptions: (a) if there is a connection between the perception of Welfare Deservingness and the public support given to social policies; (b) if there are more valued dimensions of Welfare Deservingness in public opinion; and (c) if the street-level bureaucrats' perceptions of Welfare Deservingness will have an impact on the implementation of public policies.
APA, Harvard, Vancouver, ISO, and other styles