Dissertations / Theses on the topic 'Backpropagation'

To see the other types of publications on this topic, follow the link: Backpropagation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Backpropagation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Civelek, Ferda N. (Ferda Nur). "Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278824/.

Full text
Abstract:
Representing time has been considered a general problem for artificial intelligence research for many years. More recently, the question of representing time has become increasingly important in representing the human decision-making process through connectionist expert systems. Because most human behaviors unfold over time, any attempt to represent expert performance without considering its temporal nature can often lead to incorrect results. A temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems, has been introduced. The neural network model has a multi-layer structure, i.e. the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer. This is especially important for connectionist expert system applications. A temporal backpropagation algorithm which supports the model has been developed. The model, along with the temporal backpropagation algorithm, makes it extremely practical to define any artificial neural network application. Also, an approach that can be followed to decrease the memory space used by the weight matrix has been introduced. The algorithm was tested using a medical connectionist expert system to show how to describe not only the disease but also its entire course. The system was first trained using a pattern encoded from the expert system knowledge base rules. Then a series of experiments was carried out using the temporal model and the temporal backpropagation algorithm. The first series of experiments was done to determine whether the training process worked as predicted. In the second series of experiments, the weight matrix in the trained system was defined as a function of time intervals before presenting the system with the learned patterns. The results of the two experiments indicate that both approaches produce correct results. The only difference between the two was that compressing the weight matrix required more training epochs to produce correct results. To get a measure of the correctness of the results, an error measure, the squared error summed over all patterns, was used to obtain a total sum of squares.
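The error measure described in this abstract, squared error summed over all patterns, can be sketched in a few lines (a generic illustration with invented toy values, not the thesis's code):

```python
def total_sum_of_squares(targets, outputs):
    """Squared error summed over all patterns: the total sum of squares."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))

# Toy values: three patterns with scalar targets and network outputs.
sse = total_sum_of_squares([1.0, 0.0, 1.0], [0.9, 0.2, 0.8])
# 0.1**2 + 0.2**2 + 0.2**2 = 0.09
```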
APA, Harvard, Vancouver, ISO, and other styles
2

Yee, Clifford Wing Wei. "Point source compensation - a backpropagation method for underwater acoustic imaging." Awarded by: University of New South Wales. School of Physics, 2003. http://handle.unsw.edu.au/1959.4/20590.

Full text
Abstract:
The backpropagation method of image reconstruction has been known for some time, with the advantage of fast processing due to the use of the Fast Fourier Transform, but its applicability to underwater imaging has been limited. At present the shift-and-add method is the more widely used method in underwater imaging. This is because backpropagation has been derived for plane wave insonification, with the scattered waves detected in a transmission-mode or synthetic aperture set-up. One of the methods used for underwater imaging is to use a point source for the insonification of the target, with the scattered waves detected in reflection-mode by a receiver array. An advantage of this scanning method is that only one transmission of the source is required to capture an image, instead of multiple transmissions, so motion artifacts are kept to a minimum. To exploit the processing speed of the backpropagation method, it must be adapted for point source insonification. The coverage of this configuration in the literature has been scant; methods for spherical sources have been proposed for transmission mode and for arbitrary surfaces in geophysical applications, but these methods are complex and difficult to use. A novel point source compensation method is proposed in this thesis so that the backpropagation image formation method can be used for the point source insonification set-up. The new backpropagation method was derived through theoretical analysis, numerical simulation and experimental verification. The effect of various compensation factors on the image quality was studied in simulation. In the experimental verification, practical issues relating to the application of the new method were addressed, and the final proof of concept of the method was obtained. The quality of images formed with the point source compensation methods has also been compared with that of the shift-and-add method. Experimental and simulation results show that the point source compensated backpropagation algorithm can produce images of comparable quality to those formed with the shift-and-add method for the set-up of wideband point-source insonification with detection in reflection-mode, with the advantage of faster image formation.
3

Bendelac, Shiri. "Enhanced Neural Network Training Using Selective Backpropagation and Forward Propagation." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83714.

Full text
Abstract:
Neural networks are making headlines every day as the tool of the future, powering artificial intelligence programs and supporting technologies never seen before. However, the training of neural networks can take days or even weeks for bigger networks, and requires the use of supercomputers and GPUs in academia and industry in order to achieve state-of-the-art results. This thesis discusses employing selective measures to determine when to backpropagate and forward propagate in order to reduce training time while maintaining classification performance. This thesis tests these new algorithms on the MNIST and CASIA datasets, and achieves successful results with both algorithms on the two datasets. The selective backpropagation algorithm shows a reduction of up to 93.3% in backpropagations completed, and the selective forward propagation algorithm shows a reduction of up to 72.9% in forward propagations and backpropagations completed, compared to baseline runs of always forward propagating and backpropagating. This work also discusses employing the selective backpropagation algorithm on a modified dataset with disproportional under-representation of some classes compared to others.
Master of Science
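The selective-backpropagation idea, forward propagating every example but skipping the backward pass for examples the network already handles well, might be sketched as follows for a single logistic unit (the loss threshold, learning rate, and toy data are invented for illustration; this is not the thesis's algorithm):

```python
import math

def train_selective(weights, data, lr=0.5, loss_threshold=0.05, epochs=10):
    """Gradient descent on a logistic unit, but the backward pass is skipped
    for examples whose squared-error loss is already below the threshold.
    Returns the weights and the number of backward passes performed."""
    backprops = 0
    for _ in range(epochs):
        for x, y in data:
            z = sum(w * xi for w, xi in zip(weights, x))
            out = 1.0 / (1.0 + math.exp(-z))        # forward pass always runs
            if (out - y) ** 2 < loss_threshold:     # selective: skip easy examples
                continue
            backprops += 1
            delta = (out - y) * out * (1 - out)     # gradient of the loss w.r.t. z
            weights = [w - lr * delta * xi for w, xi in zip(weights, x)]
    return weights, backprops
```

Raising the threshold trades accuracy for fewer backward passes, which is the saving the abstract quantifies.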
4

Bonnell, Jeffrey A. "Implementation of a New Sigmoid Function in Backpropagation Neural Networks." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1342.

Full text
Abstract:
This thesis presents the use of a new sigmoid activation function in backpropagation artificial neural networks (ANNs). ANNs using conventional activation functions may generalize poorly when trained on a set which includes quirky, mislabeled, unbalanced, or otherwise complicated data. This new activation function is an attempt to improve generalization and reduce overtraining on mislabeled or irrelevant data by restricting training when inputs to the hidden neurons are sufficiently small. This activation function includes a flattened, low-training region which grows or shrinks during backpropagation to ensure a desired proportion of inputs inside the low-training region. With a desired low-training proportion of 0, this activation function reduces to a standard sigmoidal curve. A network with the new activation function implemented in the hidden layer is trained on benchmark data sets and compared with the standard activation function in an attempt to improve the area under the receiver operating characteristic curve in biological and other classification tasks.
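One way such a flattened sigmoid could look, flat (zero derivative, hence little training) within a half-width c of the origin and sigmoidal outside, is sketched below; the piecewise form and the parameter c are assumptions made for illustration, not the function defined in the thesis:

```python
import math

def flattened_sigmoid(x, c=0.0):
    """Sigmoid with a flat low-training region of half-width c around 0.
    With c == 0 it reduces to the standard logistic sigmoid."""
    if -c <= x <= c:
        return 0.5                           # flat region: derivative is zero here
    shifted = x - c if x > c else x + c      # shift so the curve meets the flat part
    return 1.0 / (1.0 + math.exp(-shifted))
```

Growing c during training widens the region in which hidden units stop adapting, mirroring the adjustable low-training proportion the abstract describes.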
5

Hövel, Christoph A. "Finanzmarktprognose mit neuronalen Netzen : Training mit Backpropagation und genetisch-evolutionären Verfahren /." Lohmar ; Köln : Eul, 2003. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=010635637&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
6

Seifert, Christin, and Jan Parthey. "Simulation Rekursiver Auto-Assoziativer Speicher (RAAM) durch Erweiterung eines klassischen Backpropagation-Simulators." Thesis, Universitätsbibliothek Chemnitz, 2003. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200300536.

Full text
Abstract:
Recursive Auto-Associative Memories (RAAM) are special neural networks (NN) capable of processing hierarchical structures. Simulating these networks involves some peculiarities, such as the dynamic training set, that must be taken into account. This thesis discusses these peculiarities and the adapted learning algorithms that result from them. In addition, a standard backpropagation simulator (Xerion) is extended with the capabilities needed to simulate RAAMs.
7

Sam, Iat Tong. "Theory of backpropagation type learning of artificial neural networks and its applications." Thesis, University of Macau, 2001. http://umaclib3.umac.mo/record=b1446702.

Full text
8

Potter, Matthew James. "Improving ANN Generalization via Self-Organized Flocking in conjunction with Multitasked Backpropagation." NCSU, 2003. http://www.lib.ncsu.edu/theses/available/etd-03242003-075528/.

Full text
Abstract:
The purpose of this research has been to develop methods of improving the generalization capabilities of artificial neural networks. Tools for examining the influence of individual training set patterns on the learning abilities of individual neurons are put forth and utilized in the implementation of new network learning algorithms. The algorithms are based largely on the supervised training algorithm backpropagation, and all experiments use the standard backpropagation algorithm for comparison of results. The new learning algorithms revolve around the addition of two main components. The first is an unsupervised learning algorithm called flocking. Flocking attempts to provide network hyperplane divisions that are evenly influenced by examples on either side of the hyperplane. The second is a multi-tasking approach called convergence training. Convergence training uses the information provided by a clustering algorithm to create subtasks that represent the divisions between clusters. These subtasks are then trained in unison to promote hyperplane sharing within the problem space. Generalization was improved in most cases, and the solutions produced by the new learning algorithms are demonstrated to be very robust against different random weight initializations. This research is not only a search for better generalizing ANN learning algorithms, but also a search for a better understanding of the complexities involved in ANN generalization.
9

Wellington, Charles H. "Backpropagation neural network for noise cancellation applied to the NUWES test ranges." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26899.

Full text
Abstract:
Approved for public release; distribution is unlimited
This thesis investigates the application of backpropagation neural networks as an alternative to adaptive filtering at the NUWES test ranges. To facilitate the investigation, a model of the test range is developed. This model accounts for acoustic transmission losses and the effects of Doppler shift, multipath, and finite propagation time delay. After describing the model, the backpropagation neural network algorithm and feature selection for the network are explained. Then, two schemes based on the network's output, signal waveform recovery and binary code recovery, are applied to the model. Simulation results of the signal waveform recovery and direct code recovery schemes are presented for several scenarios.
10

Seifert, Christin Parthey Jan. "Simulation Rekursiver Auto-Assoziativer Speicher (RAAM) durch Erweiterung eines klassischen Backpropagation-Simulators." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10607558.

Full text
11

Schilling, Glenn D. "Modeling Aircraft Fuel Consumption with a Neural Network." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36533.

Full text
Abstract:
This research involves the development of an aircraft fuel consumption model to simplify the aircraft fuelburn model of Bela Collins of the MITRE Corporation in terms of level of computation and level of capability. MATLAB and its accompanying Neural Network Toolbox have been applied to data from the base model to predict fuel consumption. The approach to the base model and neural network is detailed in this paper. It derives from the basic concepts of energy balance. Multivariate curve fitting techniques used in conjunction with aircraft performance data derive the aircraft-specific constants. Aircraft performance limits are represented by empirical relationships that also utilize aircraft-specific constants. The model is based on generally known assumptions and approximations for commercial jet operations. It simulates fuel consumption by adaptation to a specific aircraft using constants that represent the relationship of lift-to-drag and thrust-to-fuel flow. The neural network model invokes the output from MITRE's algorithm and provides: (1) a comparison to the polynomial fuelburn function in the fuelburn post-processor of the FAA Airport and Airspace Simulation Model (SIMMOD), (2) an established sensitivity of system performance for a range of variables that affect fuel consumption, (3) a comparison of existing fuel consumption algorithms to new techniques, and (4) the development of a trained demo neural network. With the powerful features of optimization, graphics, and hierarchical modeling, the MATLAB toolboxes proved to be effective in this modeling process.
Master of Science
12

Jones, Lloyd H. "Machinery monitoring and diagnostics using pseudo Wigner-Ville distribution and backpropagation neural network." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA276219.

Full text
13

Fernandes, Luiz Gustavo Leao. "Utilização de redes neurais na análise e previsão de séries temporais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1995. http://hdl.handle.net/10183/25774.

Full text
Abstract:
This work is a study of the application of Artificial Neural Networks (ANNs), specifically the multilayer perceptron model with error-backpropagation learning, to forecasting future values of time series. The study was carried out by producing forecasts for three real series with a neural network architecture built on the basis of the statistical analysis of each series. The first series is the monthly index of American airline passengers between January 1960 and December 1971, the second is the annual rainfall index of the city of Fortaleza, in the state of Ceará, between 1849 and 1984, and the third is the monthly industrial production index of the state of Rio Grande do Sul between January 1981 and July 1993. The first two series are classic examples used in the study of statistical models applied to time series forecasting. The results obtained with the ANNs were compared with the forecasts produced by the econometric method that gives the best results for the time series forecasting problem: decomposition of the series into its basic unobservable components (trend, seasonality, cycle and irregular). These results showed that ANNs can reach excellent levels of forecasting accuracy, indicating their suitability to the problem of forecasting future values of time series.
This work presents a study of the prediction power of Artificial Neural Networks (ANN) in comparison with prediction capability of traditional Time Series models, more specifically the Unobservable Components Models (UCM). The data used to perform the study was the monthly american airlines passengers, the annual rainfall in Fortaleza, Brazil and the monthly gross industrial output for the state of Rio Grande do Sul, Brazil. The results show that Artificial Neural Networks can outperform the forecasts of Unobservable Components Models.
14

Yang, Yini. "Training Neural Networks with Evolutionary Algorithms for Flash Call Verification." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-283039.

Full text
Abstract:
Evolutionary algorithms have achieved great performance on a wide range of optimization problems. In this degree project, the network optimization problem has been reformulated and solved in an evolutionary way. A feasible evolutionary framework has been designed and implemented to train neural networks in supervised learning scenarios. Under the structure of evolutionary algorithms, a well-defined fitness function is applied to evaluate network parameters, and a carefully derived form of approximate gradients is used for updating parameters. The performance of the framework has been tested by training two different types of networks, linear affine networks and convolutional networks, for a flash call verification task. Under this application scenario, a network predicts whether a flash call verification will be successful or not, which is inherently a binary classification problem. Furthermore, its performance has also been compared with traditional backpropagation optimizers in two respects: accuracy and time consumption. The results show that this framework is able to push a network training process to converge to a certain level. During the training process, despite noise and fluctuations, both accuracies and losses converge in roughly the same pattern as in backpropagation. Besides, the evolutionary algorithm seems to have higher updating efficiency per epoch in the first training stage, before converging. With respect to fine tuning, however, it does not work as well as backpropagation in the final convergence period.
Evolutionary algorithms achieve good performance on a large number of different types of optimization problems. In this degree project, a network optimization problem has been solved by reformulating and further developing the approach. A proposed framework has been designed and implemented to train neural networks in supervised learning scenarios. For the evolutionary algorithms, a well-defined fitness function is used to evaluate network parameters, and a carefully derived form of approximate gradients is used to update the parameters. The framework's performance has been tested by training two different types of neural networks, linear affine and convolutional, for phone number verification. In this application scenario, whether a phone number verification will succeed or not is predicted by a neural network, which is inherently a binary classification problem. In addition, its performance has also been compared with traditional backpropagation optimizers in two respects: accuracy and speed. The results show that this framework can drive a network training process to converge to a certain level. Despite noise and fluctuations, both accuracy and loss converge in roughly the same pattern as in backpropagation. Moreover, the evolutionary algorithm appears to have higher update efficiency per unit of time in the first training stage, before it converges. When it comes to fine-tuning, it does not work as well as backpropagation during the final convergence period.
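The carefully derived approximate gradients mentioned in the abstract can be illustrated with a generic evolution-strategies sketch that estimates the gradient of a fitness function from mirrored random perturbations of the parameters; the update rule, step sizes, and toy fitness below are invented for illustration and are not the thesis's framework:

```python
import random

def es_step(weights, fitness, sigma=0.1, lr=0.05, pairs=20, rng=None):
    """One evolutionary update: approximate the fitness gradient from
    mirrored (antithetic) random perturbations of the weights."""
    rng = rng or random.Random(0)   # fixed seed keeps the sketch deterministic
    grad = [0.0] * len(weights)
    for _ in range(pairs):
        noise = [rng.gauss(0, 1) for _ in weights]
        f_plus = fitness([w + sigma * n for w, n in zip(weights, noise)])
        f_minus = fitness([w - sigma * n for w, n in zip(weights, noise)])
        for i, n in enumerate(noise):
            grad[i] += (f_plus - f_minus) * n / (2 * sigma * pairs)
    return [w + lr * g for w, g in zip(weights, grad)]  # ascend the fitness

# Toy usage: fitness peaks at w0 == 3, so repeated steps move w0 toward 3.
w = [0.0]
for _ in range(300):
    w = es_step(w, lambda ws: -(ws[0] - 3.0) ** 2)
```

In a real training setting, the fitness would be a (negated) loss over a batch of labeled examples rather than this one-dimensional toy objective.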
15

Fischer, Manfred M., and Sucharita Gopal. "Learning in Single Hidden Layer Feedforward Network Models: Backpropagation in a Real World Application." WU Vienna University of Economics and Business, 1994. http://epub.wu.ac.at/4192/1/WSG_DP_3994.pdf.

Full text
Abstract:
Learning in neural networks has attracted considerable interest in recent years. Our focus is on learning in single hidden layer feedforward networks, which is posed as a search in the network parameter space for a network that minimizes an additive error function of statistically independent examples. In this contribution, we first review the class of single hidden layer feedforward networks and characterize the learning process in such networks from a statistical point of view. Then we describe the backpropagation procedure, the leading case of gradient descent learning algorithms for the class of networks considered here, as well as an efficient heuristic modification. Finally, we analyse the applicability of these learning methods to the problem of predicting interregional telecommunication flows. Particular emphasis is laid on the engineering judgment, first, in choosing appropriate values for the tunable parameters, second, on the decision whether to train the network by epoch or by pattern (random approximation), and, third, on the overfitting problem. In addition, the analysis shows that the neural network model, whether using epoch-based or pattern-based stochastic approximation, outperforms the classical regression approach to modelling telecommunication flows. (authors' abstract)
Series: Discussion Papers of the Institute for Economic Geography and GIScience
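As a generic illustration of the class of networks and the backpropagation procedure reviewed here, the sketch below trains a single-hidden-layer feedforward network by per-pattern gradient descent on squared error; the architecture sizes, learning rate, and toy OR task are invented for the example and are not taken from the paper:

```python
import math, random

def train_or_net(epochs=4000, lr=0.5, hidden=3, seed=1):
    """Single-hidden-layer feedforward network trained by backpropagation
    (per-pattern gradient descent on squared error) on a toy OR task."""
    rng = random.Random(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]  # inputs + bias
    W2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]                  # hidden + bias
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    for _ in range(epochs):
        for x, y in data:
            xb = x + [1]
            h = [sig(sum(w * xi for w, xi in zip(row, xb))) for row in W1]
            hb = h + [1]
            out = sig(sum(w * hi for w, hi in zip(W2, hb)))
            # Backward pass: output delta, then propagate it to the hidden layer.
            d_out = (out - y) * out * (1 - out)
            for j in range(hidden):
                d_h = d_out * W2[j] * h[j] * (1 - h[j])
                for i in range(3):
                    W1[j][i] -= lr * d_h * xb[i]
            for j in range(hidden + 1):
                W2[j] -= lr * d_out * hb[j]

    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x + [1]))) for row in W1]
        return sig(sum(w * hi for w, hi in zip(W2, h + [1])))
    return predict
```

Swapping the inner loop so that the deltas are accumulated and applied once per pass over the data would give the epoch-based variant the paper contrasts with per-pattern training.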
16

Markham, Ina Samanta. "An exploration of the robustness of traditional regression analysis versus analysis using backpropagation networks." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-170305/.

Full text
17

Murray, Andrew Gerard William. "Micro-net the parallel path artificial neuron." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20070423.121528.

Full text
Abstract:
A feed forward architecture is suggested that increases the complexity of conventional neural network components through the implementation of a more complex scheme of interconnection. This is done with a view to increasing the range of application of the feed forward paradigm. The uniqueness of this new network design is illustrated by developing an extended taxonomy of accepted published constructs specific and similar to the higher order, product kernel approximations achievable using "parallel paths". Network topologies from this taxonomy are then compared to each other and to the architectures containing parallel paths. In attempting this comparison, the context of the term "network topology" is reconsidered. The outputs of "channels" in these parallel paths are the products of a conventional connection, as observed facilitating interconnection between two layers in a multilayered perceptron, and the output of a network processing unit, a "control element", that can assume the identity of a number of pre-existing processing paradigms. The inherent property of universal approximation is tested by existence proof, and the method is found to be inconclusive. In so doing, an argument is suggested to indicate that the parametric nature of the functions, as determined by conditions upon initialization, may only lead to conditional approximations. The property of universal approximation is neither confirmed nor denied: universal approximation cannot be conclusively determined by the application of the Stone-Weierstrass theorem, as adopted from real analysis. This novel implementation requires modifications to component concepts and the training algorithm. The inspiration for these modifications is related back to previously published work that also provides the basis of "proof of concept". By achieving proof of concept, the appropriateness of considering network topology without assessing the impact of the method of training on this topology is considered and discussed in some detail.
Results of limited testing are discussed with an emphasis on visualising component contributions to the global network output.
18

Hettinger, Christopher James. "Hyperparameters for Dense Neural Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7531.

Full text
Abstract:
Neural networks can perform an incredible array of complex tasks, but successfully training a network is difficult because it requires us to minimize a function about which we know very little. In practice, developing a good model requires both intuition and a lot of guess-and-check. In this dissertation, we study a type of fully-connected neural network that improves on standard rectifier networks while retaining their useful properties. We then examine this type of network and its loss function from a probabilistic perspective. This analysis leads to a new rule for parameter initialization and a new method for predicting effective learning rates for gradient descent. Experiments confirm that the theory behind these developments translates well into practice.
19

Herrmann, Kai, Hannes Voigt, Thorsten Seyschab, and Wolfgang Lehner. "InVerDa - co-existing Schema Versions Made Foolproof." IEEE, 2016. https://tud.qucosa.de/id/qucosa%3A75285.

Full text
Abstract:
In modern software landscapes multiple applications usually share one database as their single point of truth. All these applications will evolve over time by their very nature. Often former versions need to stay available, so database developers find themselves maintaining co-existing schema versions of multiple applications in multiple versions. This is highly error-prone and accounts for significant costs in software projects, as developers realize the translation of data accesses between schema versions with hand-written delta code. In this demo, we showcase INVERDA, a tool for integrated, robust, and easy-to-use database versioning. We rethink the way of specifying the evolution to new schema versions. Using the richer semantics of a descriptive database evolution language, we generate all required artifacts automatically and make database versioning foolproof.
20

Staufer-Steinnocher, Petra, and Manfred M. Fischer. "A Neural Network Classifier for Spectral Pattern Recognition. On-Line versus Off-Line Backpropagation Training." WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/4152/1/WSG_DP_6097.pdf.

Full text
Abstract:
In this contribution we evaluate on-line and off-line techniques to train a single hidden layer neural network classifier with logistic hidden and softmax output transfer functions on a multispectral pixel-by-pixel classification problem. In contrast to current practice, a multiple class cross-entropy error function has been chosen as the function to be minimized. The non-linear differential equations cannot be solved in closed form. To solve for a set of locally minimizing parameters we use the gradient descent technique for parameter updating, based upon the backpropagation technique for evaluating the partial derivatives of the error function with respect to the parameter weights. Empirical evidence shows that on-line and epoch-based gradient descent backpropagation fail to converge within 100,000 iterations, due to the fixed step size. Batch gradient descent backpropagation training is superior in terms of learning speed and convergence behaviour. Stochastic epoch-based training tends to be slightly more effective than on-line and batch training in terms of generalization performance, especially when the number of training examples is larger. Moreover, it is less prone to fall into local minima than on-line and batch modes of operation. (authors' abstract)
Series: Discussion Papers of the Institute for Economic Geography and GIScience
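The on-line versus batch distinction evaluated here, updating after every pattern versus accumulating gradients over an epoch, can be sketched with a one-dimensional logistic model trained by gradient descent on cross-entropy (the toy data and learning rate are invented; this is not the authors' classifier):

```python
import math

def train_logistic(data, lr=0.5, epochs=200, batch=False):
    """1-D logistic model sigmoid(w*x + b) trained on cross-entropy, either
    on-line (update after every pattern) or batch (update once per epoch)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            if batch:
                gw += (p - y) * x          # accumulate the epoch's gradient
                gb += p - y
            else:
                w -= lr * (p - y) * x      # on-line: apply immediately
                b -= lr * (p - y)
        if batch:
            w -= lr * gw
            b -= lr * gb
    return w, b

# Both modes separate this toy data, driving w positive.
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
```

With a per-pattern gradient of cross-entropy equal to (p - y)x, the only difference between the two modes is when the accumulated correction is applied, which is exactly the design choice the paper studies.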
21

Oliver, Muncharaz Javier. "MODELIZACIÓN DE LA VOLATILIDAD CONDICIONAL EN ÍNDICES BURSÁTILES : COMPARATIVA MODELO EGARCH VERSUS RED NEURONAL BACKPROPAGATION." Doctoral thesis, Editorial Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/35803.

Full text
Abstract:
This thesis aims to show and verify how neural networks, specifically the backpropagation network, are an alternative to the classical econometric models of the GARCH family for forecasting conditional volatility. The study is carried out for stock indices of different sizes and geographical areas, using both daily and high-frequency data. For the comparison, one of the most widely used models for studying conditional volatility in stock indices, the EGARCH model, is employed, given the well-documented asymmetries in the volatility of such indices. The backpropagation network was chosen because it is one of the neural networks most widely used in finance, owing to its generalization capability and its learning method based on the generalized delta rule.
Oliver Muncharaz, J. (2014). MODELIZACIÓN DE LA VOLATILIDAD CONDICIONAL EN ÍNDICES BURSÁTILES : COMPARATIVA MODELO EGARCH VERSUS RED NEURONAL BACKPROPAGATION [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35803
22

Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.

Full text
Abstract:
Feedforward neural networks have been widely studied and used in many applications in science and engineering. The training of this type of network is mainly undertaken using the well-known backpropagation-based learning algorithms. One major problem with this type of algorithm is the slow training convergence speed, which hinders their applications. In order to improve the training convergence speed of this type of algorithm, many researchers have developed different improvements and enhancements. However, the slow convergence problem has not been fully addressed. This thesis makes several contributions by proposing new backpropagation learning algorithms based on the terminal attractor concept to improve existing backpropagation learning algorithms such as the gradient descent and Levenberg-Marquardt algorithms. These new algorithms enable fast convergence both at a distance from and in close range of the ideal weights. In particular, a new fast convergence mechanism is proposed which is based on the fast terminal attractor concept. Comprehensive simulation studies are undertaken to demonstrate the effectiveness of the proposed backpropagation algorithms with terminal attractors. Finally, three practical application cases of time series forecasting, character recognition and image interpolation are chosen to show the practicality and usefulness of the proposed learning algorithms, with comprehensive comparative studies against existing algorithms.
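The terminal attractor concept behind these algorithms can be illustrated in one dimension: Zak-style dynamics x' = -x^(1/3) reach zero in finite time, whereas the ordinary linear decay x' = -x only approaches it exponentially. The Euler integration, step sizes, and tolerance below are invented for the illustration and are not the thesis's algorithms:

```python
def steps_to_converge(decay, x0=1.0, tol=0.05, dt=0.05, max_steps=10000):
    """Euler-integrate x' = decay(x), counting steps until |x| <= tol."""
    x = x0
    for step in range(1, max_steps + 1):
        x += dt * decay(x)
        if abs(x) <= tol:
            return step
    return max_steps

def signed_cube_root(x):
    return abs(x) ** (1.0 / 3.0) * (1.0 if x >= 0 else -1.0)

ordinary = steps_to_converge(lambda x: -x)                    # exponential decay
terminal = steps_to_converge(lambda x: -signed_cube_root(x))  # terminal attractor
# The terminal attractor reaches the tolerance in fewer steps.
```

Applied to learning, the same idea scales the weight update so the error dynamics gain this finite-time convergence property, which is the mechanism the thesis builds into backpropagation.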
APA, Harvard, Vancouver, ISO, and other styles
23

Gaspar, Thiago Lombardi. "Reconhecimento de faces humanas usando redes neurais MLP." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-27042006-231620/.

Full text
Abstract:
This research presents a facial recognition algorithm based on neural networks. The algorithm contains two main modules, one for feature extraction and one for face recognition, and is applied to digital images from three databases (PICS, ESSEX and AT&T) in which the face has previously been detected. Feature extraction is based on applying horizontal and vertical signatures to locate the facial components (eyes and nose) and determine their positions; this module achieved an average accuracy of 86.6% across the three databases. The recognition module uses the multilayer perceptron (MLP) architecture, trained with the backpropagation learning algorithm. The extracted facial features are fed to the network, which identifies whether a face belongs to the database, achieving a hit ratio of 97%. Despite these satisfactory results, it was found that the MLP cannot adequately separate facial features with very close values, and is therefore not the most efficient network for facial recognition.
APA, Harvard, Vancouver, ISO, and other styles
24

U, San Cho. "Trading simulations on stock market by backpropagation learning of artificial neural networks and traditional linear regression." Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1447318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Al-Serhan, Hasan Muaidi. "Extraction of Arabic word roots : an approach based on computational model and multi-backpropagation neural networks." Thesis, De Montfort University, 2008. http://hdl.handle.net/2086/4921.

Full text
Abstract:
Stemming is a process of extracting the root of a given word by stripping off the affixes attached to it. Many attempts have been made to address the problem of stemming Arabic words. The majority of existing Arabic stemming algorithms require a complete set of morphological rules and large vocabulary lookup tables; furthermore, many of them give more than one potential stem or root for a given Arabic word. According to Ahmad [11], the Arabic stemming process based on the language's morphological rules is still a very difficult task due to the nature of the language itself. The limitations of current Arabic stemming methods motivated this research, in which we investigate a novel approach to extracting the word roots of the Arabic language, named here the MUAIDI-STEMMER². This approach attempts to exploit numerical relations between Arabic letters, avoiding a list of the root and pattern of each word in the language, and giving a single root solution. The approach is composed of two phases. Phase I depends on basic calculations extracted from a linguistic analysis of Arabic patterns and affixes. Phase II is based on an artificial neural network trained with the backpropagation learning rule; in this phase, we formulate root extraction as a classification problem and use the neural network as the classifier. This study demonstrates that a neural network can be effectively used to extract the word roots of the Arabic language. The stemmer developed is tested using 46,895 Arabic word types³. Error-counting accuracy evaluation was employed to evaluate the performance of the stemmer. It succeeded in producing the stems of 44,107 Arabic words from the given test datasets, an accuracy of 94.81%. 2. Muaidi is the author's father's name. 3. Types mean distinct or unique words.
APA, Harvard, Vancouver, ISO, and other styles
26

Hudson, Erik Mark. "A Portable Computer System for Recording Heart Sounds and Data Modeling Using a Backpropagation Neural Network." UNF Digital Commons, 1995. http://digitalcommons.unf.edu/etd/158.

Full text
Abstract:
Cardiac auscultation is the primary tool used by cardiologists to diagnose heart problems. Although effective, auscultation is limited by the effectiveness of human hearing. Digital sound technology and the pattern classification ability of neural networks may offer improvements in this area. Digital sound technology is now widely available on personal computers in the form of sound cards. A good deal of research over the last fifteen years has shown that neural networks can excel in diagnostic problem solving. To date, most research involving cardiology and neural networks has focussed on ECG pattern classification. This thesis explores the prospects of recording heart sounds in Wave format and extracting information from the Wave file for use with a backpropagation neural network in order to classify heart patterns.
APA, Harvard, Vancouver, ISO, and other styles
27

Rola, Marcelo Coleto. "Previsão da geração de energia elétrica no médio prazo para o Estado do Rio Grande do Sul empregando redes neurais artificiais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157828.

Full text
Abstract:
The demand for, and consequently the generation of, electric energy are issues of great importance for the economic and social development of countries. Medium- and long-term forecasting models for these parameters are used to anticipate possible scenarios and propose strategies for adequate energy planning. In this context, this study forecasts electric energy generation in the state of Rio Grande do Sul (RS), Brazil, over a medium-term horizon (one year) using feedforward Artificial Neural Networks (ANNs) trained with the supervised backpropagation learning algorithm. A script was written to run the required simulations in Matlab®. The input variables of the forecasting model cover the state and national economy, the electric energy balance and the state's meteorology over the period from January 2009 to March 2016. Monthly data from January 2009 to March 2015 were used to train the neural network, and data from April 2015 to March 2016 were used for prediction. Finally, after the complete simulation of the ANN, the observed electric power generation of the state was compared with the model forecast, yielding a mean absolute percentage error (MAPE) of 5.86% and a mean absolute deviation (MAD) of 134.15 average MW. The results are promising and similar to those found in the literature, demonstrating the reliability and effectiveness of the method employed.
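The two error measures quoted in the abstract, MAPE and MAD, can be computed in a few lines; the numbers below are made up for illustration and are not data from the study:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean absolute deviation, in the units of the series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [2000.0, 2200.0, 2100.0]    # hypothetical monthly generation, average MW
forecast = [1900.0, 2310.0, 2100.0]
print(round(mape(actual, forecast), 2))  # percentage error averaged over months
print(mad(actual, forecast))             # deviation in average MW
```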
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Jianhua. "NEURAL NETWORK APPLICATIONS IN AGRICULTURAL ECONOMICS." UKnowledge, 2005. http://uknowledge.uky.edu/gradschool_diss/228.

Full text
Abstract:
Neural networks have become very important tools in many areas, including economic research. The objectives of this thesis are to examine the fundamental components, concepts and theory of neural network methods from an econometric and statistical perspective, with particular focus on econometrically and statistically relevant models. In order to evaluate the relative effectiveness of econometric and neural network methods, two empirical studies are conducted by applying neural network methods in a methodological comparison with traditional econometric models. Both neural networks and econometrics have similar models and common problems of modeling and inference. Neural networks and econometrics/statistics, particularly their discriminant methods, are two sides of the same coin in terms of the nature of modeling statistical issues. On one side, econometric models are sampling-paradigm-oriented methods, which estimate the distribution of the predictor variables separately for each class and combine these with the prior probabilities of each class occurring; neural networks, on the other hand, are techniques based on the diagnostic paradigm, which use the information from the samples to estimate the conditional probability of an observation belonging to each class, based on predictor variables. Hence, neural network and econometric/statistical methods (particularly discriminant models) have the same properties, except that the natural parameterizations differ. The empirical studies indicate that neural network methods outperform or are as good as traditional econometric models, including Multiple Regression Analysis, the Linear Probability Model (LPM) and the Logit model, in terms of minimizing the errors of in-sample predictions and out-of-sample forecasts. Although neural networks have some advantages over econometric methods, they have some limitations too.
Hence, neural networks are perhaps best viewed as supplements to econometric methods in studying economic issues, and not necessarily as substitutes.
APA, Harvard, Vancouver, ISO, and other styles
29

Fischer, Manfred M., and Petra Staufer-Steinnocher. "Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem." WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/4150/1/WSG_DP_6298.pdf.

Full text
Abstract:
Various techniques of optimizing the multiple class cross-entropy error function to train single hidden layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of backpropagation of gradient descent, PR-conjugate gradient and BFGS quasi-Newton errors. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or generalization performance. It was found that, comparatively considered, gradient descent error backpropagation provided the best and most stable out-of-sample performance results across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in generalisation is acceptable, then PR-conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, utilizing a larger rather than a smaller epoch size to avoid unacceptable instabilities in the generalization results. (authors' abstract)
Series: Discussion Papers of the Institute for Economic Geography and GIScience
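The model class investigated above, a softmax output layer trained under the multiple-class cross-entropy error, has a particularly simple output-layer gradient, ∂E/∂z = p − t, which is what all of the compared optimizers backpropagate. A small sketch verifying this identity against a finite-difference gradient (the vector values are arbitrary assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())           # shift for numerical stability
    return e / e.sum()

def cross_entropy(z, t):
    """Multiple-class cross-entropy error of softmax outputs against target t."""
    return -np.sum(t * np.log(softmax(z)))

z = np.array([0.5, -1.2, 2.0])        # pre-activations of the output layer
t = np.array([0.0, 0.0, 1.0])         # one-hot class target

analytic = softmax(z) - t             # the well-known softmax + cross-entropy gradient

# central finite-difference check of the same gradient
eps = 1e-6
numeric = np.array([
    (cross_entropy(z + eps * np.eye(3)[i], t) -
     cross_entropy(z - eps * np.eye(3)[i], t)) / (2 * eps)
    for i in range(3)
])
print(np.allclose(analytic, numeric, atol=1e-5))
```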
APA, Harvard, Vancouver, ISO, and other styles
30

Draghici, Sorin. "Using constraints to improve generalisation and training of feedforward neural networks : constraint based decomposition and complex backpropagation." Thesis, University of St Andrews, 1996. http://hdl.handle.net/10023/13467.

Full text
Abstract:
Neural networks can be analysed from two points of view: training and generalisation. The training is characterised by a trade-off between the 'goodness' of the training algorithm itself (speed, reliability, guaranteed convergence) and the 'goodness' of the architecture (the difficulty of the problems the network can potentially solve). Good training algorithms are available for simple architectures which cannot solve complicated problems. More complex architectures, which have been shown to be able to solve potentially any problem do not have in general simple and fast algorithms with guaranteed convergence and high reliability. A good training technique should be simple, fast and reliable, and yet also be applicable to produce a network able to solve complicated problems. The thesis presents Constraint Based Decomposition (CBD) as a technique which satisfies the above requirements well. CBD is shown to build a network able to solve complicated problems in a simple, fast and reliable manner. Furthermore, the user is given a better control over the generalisation properties of the trained network with respect to the control offered by other techniques. The generalisation issue is addressed, as well. An analysis of the meaning of the term "good generalisation" is presented and a framework for assessing generalisation is given: the generalisation can be assessed only with respect to a known or desired underlying function. The known properties of the underlying function can be embedded into the network thus ensuring a better generalisation for the given problem. This is the fundamental idea of the complex backpropagation network. This network can associate signals through associating some of their parameters using complex weights. It is shown that such a network can yield better generalisation results than a standard backpropagation network associating instantaneous values.
APA, Harvard, Vancouver, ISO, and other styles
31

Scarborough, David J. (David James). "An Evaluation of Backpropagation Neural Network Modeling as an Alternative Methodology for Criterion Validation of Employee Selection Testing." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277752/.

Full text
Abstract:
Employee selection research identifies and makes use of associations between individual differences, such as those measured by psychological testing, and individual differences in job performance. Artificial neural networks are computer simulations of biological nerve systems that can be used to model unspecified relationships between sets of numbers. Thirty-five neural networks were trained to estimate normalized annual revenue produced by telephone sales agents based on personality and biographic predictors using concurrent validation data (N=1085). Accuracy of the neural estimates was compared to OLS regression and a proprietary nonlinear model used by the participating company to select agents.
APA, Harvard, Vancouver, ISO, and other styles
32

Trnkóci, Andrej. "Programová knihovna pro práci s umělými neuronovými sítěmi s akcelerací na GPU." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236155.

Full text
Abstract:
Artificial neural networks place heavy demands on a computer's computational power. Increasing their learning speed could open new possibilities for research on, or application of, the algorithm, and that is the purpose of this thesis. Using graphics processing units for neural network learning is one way to achieve the above goals. This thesis offers a survey of the theoretical background and, subsequently, the implementation of a software library for neural network learning with the backpropagation algorithm, with support for acceleration on a graphics processing unit.
APA, Harvard, Vancouver, ISO, and other styles
33

Viecheneski, Rodrigo. "APLICAÇÃO DE REDES NEURAIS ARTIFICIAIS NO TRATAMENTO DE DADOS AGROMETEOROLÓGICOS VISANDO A CORREÇÃO DE SÉRIES TEMPORAIS." UNIVERSIDADE ESTADUAL DE PONTA GROSSA, 2012. http://tede2.uepg.br/jspui/handle/prefix/157.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This dissertation presents the development of a computational system called the System for the Treatment of Agrometeorological Time Series (STST Agrometeorológicas), whose objective is to treat agrometeorological data in order to correct time series. The study used data from agrometeorological stations provided by Fundação ABC, located in the state of Paraná in the cities of Ponta Grossa (long -49.95025733, lat -25.30156819) and Castro (long -49.8672, lat -24.6752). The proposed system uses Artificial Neural Networks of the Multilayer Perceptron type with the backpropagation error-propagation training algorithm, and was developed in the Object Pascal programming language using the Embarcadero Delphi 2009 integrated development environment. To validate the proposed method, six case studies were conducted. The best result for the agrometeorological variable mean temperature came from case study 1 of Castro's station, with a hit percentage between the treated records and the failure-free records of 96.5%, a Pearson correlation coefficient of 0.98 and a simple average of the errors obtained in training the neural network of 0.026406. The average neural network error was calculated over the errors obtained in each training run during the correction of a given failure period. For the agrometeorological variable relative air humidity, the best result was case study 5 of Castro's station, with a hit percentage of 95.7%, a Pearson correlation coefficient of 0.97 and a simple average training error of 0.094298.
Given this context, the STST Agrometeorológicas proved to be a viable alternative for treating the meteorological variables mean temperature and relative air humidity, since results with hit percentages above 95% were obtained when treating the failures of the time series studied.
APA, Harvard, Vancouver, ISO, and other styles
34

Anderson, Thomas. "Built-In Self Training of Hardware-Based Neural Networks." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hansson, Jonas. "Image analysis, an approach to measure grass roots from images." Thesis, University of Skövde, Department of Computer Science, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-592.

Full text
Abstract:

In this project a method to analyse images is presented. The images document the development of grass roots in a tilled field in order to study the movement of nitrate in the field. The final aim of the image analysis is to estimate the volume of dead and living roots in the soil. Since the roots and the soil have broad and overlapping ranges of colours, the fundamental problem is to find the roots in the images. Earlier methods for the analysis of root images have used thresholds to extract the roots; to use a threshold, the pixels of the object must have a unique range of colours separating them from the colour of the background, which is not the case for the images in this project. Instead, the method uses a neural network to classify the individual pixels. In this paper a complete method to analyse images is presented and, although the results are far from perfect, the method gives interesting results.

APA, Harvard, Vancouver, ISO, and other styles
36

Hsieh, Yuan-Chang, and 謝元章. "A Pipeline Backpropagation Neuro-Microprocessor." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/91143767222762737465.

Full text
Abstract:
Master's thesis
Da-Yeh University
Master's Program, Department of Electrical Engineering
ROC academic year 94
This study develops a pipelined 32-bit microprocessor embedded with a first-order backpropagation neural network and a MIPS-like architecture, using Algorithmic State Machine (ASM) design and Verilog HDL. The designed neural network is verified using Matlab. The Matlab source code is translated into MIPS-like assembly and machine code to be embedded in the processor core. After comparing the simulation results of SynaptiCAD and Matlab, the verified processor core is further synthesized using Xilinx FPGA development software. Finally, the VLSI layout of the developed neuro-microprocessor is implemented in the TSMC 0.18 um process technology.
APA, Harvard, Vancouver, ISO, and other styles
37

Mnih, Andriy. "Learning nonlinear constraints with contrastive backpropagation." 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=94951&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Yang, Gu Ming, and 顧明陽. "Lp norm backpropagation for adaptive equalizer." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/64225443979136227447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

"Variable background Born inversion by wavefield backpropagation." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/2918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Bo-Wei, and 林柏威. "Reconfigurable Backpropagation Neural Network Implementation for FPGA." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/44687836984947728873.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Electrical Engineering
ROC academic year 96
In this thesis, we propose a reconfigurable backpropagation neural network (BPNN) hardware architecture. This architecture gives the BPNN more flexibility: it can handle many complex applications and avoids repeated synthesis. The user writes instructions into the Program Memory (PM) to reconfigure the neural network architecture. A single-neuron computation architecture executes the reconfigurable BPNN; this computation architecture achieves resource sharing and reduces hardware area. We also propose a new reconfigurable feed-forward neural network hardware architecture whose purpose is to reduce the number of hidden layers in a multilayer feed-forward neural network; its computation architecture is the same as that of the BPNN hardware. Finally, the BPNN is synthesized and verified with Xilinx ISE, and the architecture is verified and compared on field-programmable gate arrays (FPGAs).
APA, Harvard, Vancouver, ISO, and other styles
41

Lo, Guo-Jhang, and 羅國彰. "Windows Programming of Backpropagation Artificial Neural Network." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/59899233059137113146.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Master's Program, Department of Chemical Engineering
ROC academic year 92
In recent years, artificial neural networks have been applied in many different fields, such as nonlinear regression analysis and weather forecasting. However, most implementations are commercial software, which is not cheap for beginners. The objective of this study is to develop a Windows program of a feedforward artificial neural network for beginners to use. Visual Basic is used to develop the package. The backpropagation algorithm of Bernard Widrow and Marcian Hoff is used to train the parameters of the neural networks, and the Nguyen-Widrow method is used to set the initial values of the network parameters. The limitations of the program are that there is only one hidden layer, the hidden-layer transfer function is the hyperbolic tangent or sigmoid function, and the output-layer transfer function is linear. Even with these limitations, applications of the package to many test examples have shown good performance.
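The Nguyen-Widrow method mentioned in the abstract initializes hidden-layer weights by drawing them at random and rescaling each neuron's weight vector to the norm β = 0.7·h^(1/n) (h hidden neurons, n inputs), so the neurons' active regions are spread over the input space. A sketch of the rule in Python rather than Visual Basic, with illustrative layer sizes:

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, seed=0):
    """Nguyen-Widrow initialization for a single hidden layer.

    Weights are drawn uniformly in [-1, 1], then each row (one neuron's
    incoming weights) is rescaled to norm beta = 0.7 * n_hidden**(1/n_inputs).
    Biases are drawn uniformly in [-beta, beta].
    """
    rng = np.random.default_rng(seed)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    w = rng.uniform(-1.0, 1.0, size=(n_hidden, n_inputs))
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b

w, b = nguyen_widrow_init(n_inputs=3, n_hidden=10)
print(np.allclose(np.linalg.norm(w, axis=1), 0.7 * 10 ** (1.0 / 3)))
```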
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Bo-Wei. "Reconfigurable Backpropagation Neural Network Implementation for FPGA." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0020-0907200812121500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Huang, Jeng-Horng, and 黃正宏. "Implementation of Distributed Backpropagation in a CORBA Environment." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/28167589994066286843.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
ROC academic year 88
Learning plays an important role in neural computing, but it takes a long time when the input data set is large and complex. In the past, many papers have proposed implementing learning algorithms on parallel machines or clusters of computers to reduce learning time. In this thesis, we present a distributed backpropagation learning scheme that distributes the data set to be learned across a cluster of computers. Our experimental results reveal that the error it computes is close to that of conventional pattern-mode backpropagation learning, and that it is faster when the data are complex. Because the development and maintenance of distributed applications using conventional techniques are time-consuming, and the resulting applications may not be extensible, we use CORBA as our implementation middleware. It provides a framework that seamlessly integrates heterogeneous objects, so we can efficiently implement our distributed backpropagation learning on a cluster of computers.
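The data-distribution idea in the abstract can be mimicked without the CORBA machinery: each worker computes a gradient on its shard of the data, and the size-weighted average of the shard gradients equals the full-batch gradient. A sketch with a linear model standing in for the network (the sharding scheme, sizes and names are illustrative assumptions):

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of the mean squared error of a linear model on one data shard."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
w = np.zeros(3)

# "cluster": split the data set into four shards, one per worker
shards = [(X[i::4], y[i::4]) for i in range(4)]

# per-shard gradients, weighted by shard size, recombine to the full-batch gradient
grads = [local_gradient(w, Xs, ys) * len(ys) for Xs, ys in shards]
distributed = sum(grads) / len(y)
full_batch = local_gradient(w, X, y)
print(np.allclose(distributed, full_batch))
```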
APA, Harvard, Vancouver, ISO, and other styles
44

Chih-Yuan, Chen. "A Multi-level Backpropagation Network for Pattern Recognition Systems." 1993. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0009-0112200611360486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Powell, Alan Roy. "Application of backpropagation-like generative algorithms to various problems." Thesis, 1992. http://hdl.handle.net/10413/5619.

Full text
Abstract:
Artificial neural networks (ANNs) were originally inspired by networks of biological neurons and the interactions present in networks of these neurons. The recent revival of interest in ANNs has again focused attention on the apparent ability of ANNs to solve difficult problems, such as machine vision, in novel ways. There are many types of ANNs which differ in architecture and learning algorithms, and the list grows annually. This study was restricted to feed-forward architectures and Backpropagation- like (BP-like) learning algorithms. However, it is well known that the learning problem for such networks is NP-complete. Thus generative and incremental learning algorithms, which have various advantages and to which the NP-completeness analysis used for BP-like networks may not apply, were also studied. Various algorithms were investigated and the performance compared. Finally, the better algorithms were applied to a number of problems including music composition, image binarization and navigation and goal satisfaction in an artificial environment. These tasks were chosen to investigate different aspects of ANN behaviour. The results, where appropriate, were compared to those resulting from non-ANN methods, and varied from poor to very encouraging.
Thesis (M.Sc.)-University of Natal, Durban, 1992.
APA, Harvard, Vancouver, ISO, and other styles
46

Lin, Yi-Ting, and 林逸婷. "Backpropagation Neural Network Model for Stock Trading Points Prediction." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/61766390205991572629.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Graduate Institute of Finance
ROC academic year 99
With the rapid development of technology and the internet, much stock data has been digitized, making it convenient and fast to obtain data through file transfer; however, the huge and complicated body of information is hard for human beings to systematize and analyze in a short time. Artificial intelligence (AI) techniques excel at dealing with complicated problems and can therefore serve as tools for predicting and analyzing stock market information. The Backpropagation Neural Network (BPN) approach has risen rapidly in recent years, with outstanding performance especially in finance, for example in the prediction of stock prices, financial crisis prediction, forecasting of exchange rate movements and portfolio management. In this research, several technical indicators are applied to a large amount of historical data in order to enhance the predictability of particular stocks. The technical indices are input to the BPN to train the model, so that possible turning points can be detected. The technical indicators include Stochastics, the Relative Strength Index (RSI), Moving Average Convergence and Divergence (MACD), the Directional Movement Index (DMI), the deviation rate (BIAS), foreign capital and the suspension of margin purchases. The results of this research show that combining different indicators with the BPN approach is superior to the buy-and-hold strategy but still cannot reach positive returns on the target stocks after a period of training. As a result, the study concludes that even though the BPN approach is good at forecasting stock prices, the input factors still play a significant role in determining the accuracy of trading decisions. In brief, settling on appropriate input factors remains a crucial lesson for future research.
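Of the indicators listed, the Relative Strength Index is representative of what such a BPN receives as input: RSI = 100 − 100/(1 + RS), where RS is the ratio of average gain to average loss over a look-back window. A sketch using simple averaging (the thesis does not specify its exact smoothing; the prices and period are made up):

```python
def rsi(prices, period=14):
    """Relative Strength Index with simple averaging over the look-back window."""
    changes = [b - a for a, b in zip(prices, prices[1:])]
    window = changes[-period:]
    avg_gain = sum(c for c in window if c > 0) / period
    avg_loss = -sum(c for c in window if c < 0) / period
    if avg_loss == 0:
        return 100.0                 # all gains: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

prices = [44, 45, 46, 45, 47, 48, 47, 49, 50, 49, 51, 52, 51, 53, 54]
print(round(rsi(prices), 2))
```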
APA, Harvard, Vancouver, ISO, and other styles
47

Tsai, Mong-Tao, and 蔡孟陶. "The study of convergency analysis for backpropagation neural network." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/79255621222935149969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Jin, Guo-Bin, and 金國斌. "Applying Backpropagation Neural Networks to GPS Navigation Satellite Selection." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/72449385533612576981.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Graduate Institute of Maritime Technology
Academic year 89 (ROC calendar, 2000)
For GPS navigation and positioning, in order to improve satellite geometry and thus accuracy, it is desirable to use the signals of all satellites in view except those with too low elevation angles. Because of the geometric relationships between the receiver position and the satellite positions, certain satellites are actually not effective in raising overall positioning accuracy, yet including them makes the positioning process very time-consuming. Four or more satellites are generally required for a GPS position fix, and some receiver hardware may be limited to processing a fixed number of visible satellites; it is therefore sometimes necessary to select an optimal satellite subset. Geometric Dilution of Precision (GDOP) is an indicator of the quality of the satellite constellation's geometry; it can be viewed as the multiplicative factor that magnifies ranging error. A smaller GDOP indicates better geometry, which yields better positioning accuracy. Computing GDOP requires a matrix inversion. GDOP reaches its minimum when all satellites in view are used, but this is very time-consuming when the number of satellites is large; moreover, adding satellites does not always raise accuracy effectively. In this thesis, the application of backpropagation neural networks (BPNN) to GPS GDOP approximation is presented. The BPNN handles the non-linear mapping so as to avoid matrix inversion, and chooses the satellite subset that minimizes GDOP. The proposed algorithms for GDOP approximation provide an efficient alternative method for optimal satellite subset selection.
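The GDOP quantity the abstract refers to is computed directly from the receiver-to-satellite geometry matrix; this sketch shows the matrix inversion that the BPNN approximation is meant to sidestep. The four satellite line-of-sight directions below are made-up illustrative values, not data from the thesis.

```python
import numpy as np

def gdop(unit_vectors):
    """GDOP from receiver-to-satellite unit line-of-sight vectors (n x 3)."""
    # Geometry matrix: each row is [ex, ey, ez, 1]; the constant column
    # accounts for the receiver clock-bias unknown.
    H = np.hstack([unit_vectors, np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(H.T @ H)          # the inversion a trained BPNN avoids
    return float(np.sqrt(np.trace(Q)))

# Four illustrative satellite directions (roughly tetrahedral, all above
# the horizon), normalized to unit length.
e = np.array([
    [ 0.00,  0.00, 1.00],
    [ 0.94,  0.00, 0.33],
    [-0.47,  0.82, 0.33],
    [-0.47, -0.82, 0.33],
])
e = e / np.linalg.norm(e, axis=1, keepdims=True)
value = gdop(e)
```

Subset selection then amounts to evaluating this (or the network's approximation of it) over candidate subsets and keeping the one with the smallest value.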
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, I. Kang, and 吳毅剛. "Designing Fuzzy Neural Network PID Controllers by Backpropagation Algorithms." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/33347754690292742478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Young, Chi-Jou, and 楊啟洲. "Risk Prediction of Credit Loan Using Backpropagation Neural Network." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/28503255685667719504.

Full text
Abstract:
Master's thesis
Chung Hua University
Graduate Institute of Technology Management
Academic year 93 (ROC calendar, 2004)
According to the Bureau of Monetary Affairs, Financial Supervisory Commission in Taiwan, by the end of December 2001 the average non-performing loan ratio of all banks had reached a historical high. As banks' financial leverage worsens and interest rates decline, the interest collected can no longer cover the losses caused by bad debts. Under today's severe competition, credit guaranty has become one of the important means of avoiding and predicting loan risk. Traditional approaches that use statistical or mathematical models for this risk-avoiding task, such as discriminant analysis and logistic regression, confine themselves to strict assumptions about the environment or background, and thus lack adaptability in practice. In this thesis, a neural network trained by the backpropagation paradigm (BPN) is utilized as a tool for predicting the risk in credit guaranty. We enumerate 37 discriminating variables, partly theoretical and partly empirical, as the input variables for the neural network. The data were collected from a financial institution: records from 1999 to 2002 were used for training, and those from 2003 to 2005 were used for testing. As a result, the BPN achieved a correct prediction rate of nearly 100% in predicting the attributes of borrowers. The proposed model is suitable as a decision-support tool for granting loans; furthermore, it lays the groundwork for value-creating activities in customer relationship management.
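The evaluation protocol the abstract describes is a chronological split: earlier years train the model, later years test it, and the "correct prediction rate" is measured on the held-out years only. A minimal sketch of that split follows; the loan records and labels are invented placeholders (a real dataset would carry the 37 discriminating variables per loan).

```python
# Illustrative loan records: (application year, predicted label, actual label),
# where 1 = default and 0 = repaid. Predictions here are hard-coded stand-ins
# for a trained BPN's outputs.
records = [
    (1999, 0, 0), (2000, 1, 1), (2001, 0, 0), (2002, 1, 0),
    (2003, 0, 0), (2004, 1, 1), (2005, 0, 0), (2005, 1, 1),
]

train = [r for r in records if r[0] <= 2002]    # 1999-2002 for training
test = [r for r in records if r[0] >= 2003]     # 2003-2005 for testing

# Correct prediction rate on the held-out (later) years only.
hit_rate = sum(pred == actual for _, pred, actual in test) / len(test)
```

Splitting by time rather than at random matters here: it prevents the model from being scored on loans issued before the ones it was trained on.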
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography