
Dissertations on the topic "BACK PROPAGATION ALGORITHM (BPA)"


Consult the top 34 dissertations for your research on the topic "BACK PROPAGATION ALGORITHM (BPA)".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, where these are available in the record's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Lowton, Andrew D. "A constructive learning algorithm based on back-propagation." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10663/.

Abstract:
There has been a resurgence of interest in the neural networks field in recent years, provoked in part by the discovery of the properties of multi-layer networks. This interest has in turn raised questions about the possibility of making neural network behaviour more adaptive by automating some of the processes involved. Prior to these particular questions, the process of determining the parameters and network architecture required to solve a given problem had been a time-consuming activity. A number of researchers have attempted to address these issues by automating these processes, concentrating in particular on the dynamic selection of an appropriate network architecture. The work presented here specifically explores the area of automatic architecture selection; it focuses upon the design and implementation of a dynamic algorithm based on the Back-Propagation learning algorithm. The algorithm constructs a single hidden layer as the learning process proceeds, using individual pattern error as the basis of unit insertion. This algorithm is applied to several problems of differing type and complexity and is found to produce near-minimal architectures that are shown to have a high level of generalisation ability. (DX 187, 339)
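For readers who want a concrete picture of the constructive idea described in this abstract, the following is a minimal sketch, not Lowton's actual algorithm: a single hidden layer is trained by ordinary back-propagation, and one unit is inserted whenever the worst individual pattern error stops improving. The XOR data, thresholds and patience counter are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy data: XOR, a problem a single hidden unit struggles with.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def init_layer(n_in, n_out):
    return rng.normal(scale=0.5, size=(n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(2, 1)          # start with a single hidden unit
W2, b2 = init_layer(1, 1)
lr, best_err, stall = 0.5, np.inf, 0

for epoch in range(20000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Standard back-propagation deltas for sigmoid units.
    d_out = (Y - T) * Y * (1 - Y)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

    # Constructive step: if the worst individual pattern error stops
    # improving for a while, insert one new hidden unit (assumed criterion).
    worst = np.abs(Y - T).max()
    if worst < best_err - 1e-4:
        best_err, stall = worst, 0
    else:
        stall += 1
    if stall > 500 and worst > 0.1:
        w_new, b_new = init_layer(2, 1)
        W1 = np.hstack([W1, w_new]); b1 = np.append(b1, b_new)
        W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, 1))])
        best_err, stall = np.inf, 0

print("hidden units:", W1.shape[1], "max pattern error:", worst)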
2

Xiao, Nancy Y. (Nancy Ying). "Using the modified back-propagation algorithm to perform automated downlink analysis." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40206.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 121-122).
by Nancy Y. Xiao.
M.Eng.
3

Sargelis, Kęstas. "Klaidos skleidimo atgal algoritmo tyrimai." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090630_094557-88383.

Abstract:
The present work provides an in-depth analysis of the error back-propagation algorithm, together with the investigation carried out. Neural network theory is analysed in detail. For the application and analysis of the algorithm, a program was developed in Visual Studio Web Developer 2008 with various investigation methods that help to examine the error produced by the algorithm. Matlab 7.1 tools were also used to train the neural networks. The investigation considers a multilayer artificial neural network with one hidden layer. Data on irises (plants) and air pollution were used for the experiments, and the results obtained were compared.
4

Albarakati, Noor. "FAST NEURAL NETWORK ALGORITHM FOR SOLVING CLASSIFICATION TASKS." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2740.

Abstract:
Classification is one out of several applications in the neural network (NN) world. The multilayer perceptron (MLP) is the common neural network architecture used for classification tasks. It is famous for its error back propagation (EBP) algorithm, which opened a new way of solving classification problems given a set of empirical data. In the thesis, we performed experiments using three different NN structures in order to find the best MLP neural network structure for performing nonlinear classification of multiclass data sets. The learning algorithm developed here is the batch EBP algorithm, which uses all the data as a single batch while updating the NN weights. The batch EBP speeds up training significantly, and this is also why the title of the thesis is dubbed 'fast NN …'. In the batch EBP, when linear neurons are used in the output layer, the pseudo-inverse algorithm is implemented to calculate the output-layer weights. In this way one always finds the local minimum of the cost function for the given hidden-layer weights. Three different MLP neural network structures have been investigated while solving classification problems having K classes: one model/K output layer neurons, K separate models/one output layer neuron, and K joint models/one output layer neuron. The extensive series of experiments performed within the thesis proved that the best structure for solving multiclass classification problems is the K joint models/one output layer neuron structure.
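A hedged illustration of the pseudo-inverse step mentioned above: with linear output neurons and the hidden-layer weights held fixed, the output-layer weights minimising the squared error have a closed form. The data, layer sizes and hidden activation below are invented; they are not the thesis's experimental setup.

import numpy as np

rng = np.random.default_rng(1)

# Invented batch of data: 200 samples, 5 inputs, 3 classes (one-hot targets).
X = rng.normal(size=(200, 5))
labels = np.digitize(X[:, 0], [-0.5, 0.5])      # class depends on the first input
T = np.eye(3)[labels]

# Fixed hidden layer (its weights would normally come from batch EBP updates).
V = rng.normal(size=(5, 10))
H = np.tanh(X @ V)                        # hidden-layer outputs
H1 = np.hstack([H, np.ones((200, 1))])    # append a bias column

# Linear output layer: for the given hidden weights, the least-squares optimal
# output weights are the pseudo-inverse solution W = H^+ T.
W = np.linalg.pinv(H1) @ T

Y = H1 @ W
print("training accuracy:", (Y.argmax(1) == labels).mean())
print("residual sum of squares:", np.sum((Y - T) ** 2))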
5

Civelek, Ferda N. (Ferda Nur). "Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278824/.

Abstract:
Representing time has been considered a general problem for artificial intelligence research for many years. More recently, the question of representing time has become increasingly important in representing the human decision-making process through connectionist expert systems. Because most human behaviors unfold over time, any attempt to represent expert performance without considering its temporal nature can often lead to incorrect results. A temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems, has been introduced. The neural network model has a multi-layer structure, i.e. the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer; this is especially important for connectionist expert system applications. A temporal backpropagation algorithm which supports the model has been developed. The model, along with the temporal backpropagation algorithm, makes it extremely practical to define any artificial neural network application. Also, an approach that can be followed to decrease the memory space used by the weight matrix has been introduced. The algorithm was tested using a medical connectionist expert system to show how best to describe not only the disease but also the entire course of the disease. The system was first trained using a pattern that was encoded from the expert system knowledge-base rules. Then, a series of experiments was carried out using the temporal model and the temporal backpropagation algorithm. The first series of experiments was done to determine whether the training process worked as predicted. In the second series of experiments, the weight matrix in the trained system was defined as a function of time intervals before presenting the system with the learned patterns. The results of the two experiments indicate that both approaches produce correct results; the only difference was that compressing the weight matrix required more training epochs to produce correct results. To get a measure of the correctness of the results, an error measure, the squared error summed over all patterns, was used to obtain a total sum of squares.
6

Sisman, Yilmaz Nuran Arzu. "A Temporal Neuro-fuzzy Approach For Time Series Analysis." PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/570366/index.pdf.

Abstract:
The subject of this thesis is to develop a temporal neuro-fuzzy system for forecasting the future behavior of multivariate time series data. The system has two components combined by means of a system interface. First, a rule extraction method is designed, named Fuzzy MAR (Multivariate Auto-regression). The method produces the temporal relationships between each of the variables and past values of all variables in the multivariate time series system in the form of fuzzy rules. These rules may constitute the rule-base in a fuzzy expert system. Second, a temporal neuro-fuzzy system named ANFIS unfolded in time is designed in order to make use of the fuzzy rules, to provide an environment that keeps temporal relationships between the variables, and to forecast the future behavior of the data. The rule base of ANFIS unfolded in time contains temporal TSK (Takagi-Sugeno-Kang) fuzzy rules. In the training phase, the back-propagation learning algorithm is used. The system takes the multivariate data and the number of lags needed to describe a variable, which are the output of Fuzzy MAR, and predicts the future behavior. Computer simulations are performed using synthetic and real multivariate data and a benchmark problem (Gas Furnace Data) used for comparing neuro-fuzzy systems. The tests are performed to show how efficiently the system models and forecasts multivariate temporal data. Experimental results show that the proposed model achieves online learning and prediction on temporal data. The results are compared with other neuro-fuzzy systems, specifically ANFIS.
7

Guan, Xing. "Predict Next Location of Users using Deep Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263620.

Abstract:
Predicting the next location of a user has been interesting for both academia and industry. Applications like location-based advertising, traffic planning, intelligent resource allocation as well as recommendation services are some of the problems that many are interested in solving. Along with technological advancement and the widespread usage of electronic devices, many location-based records are created. Today, deep learning frameworks have successfully surpassed many conventional methods in many learning tasks, most notably in the areas of image and voice recognition. One neural network architecture that has shown promising results on sequential data is the Recurrent Neural Network (RNN). Since the creation of RNN, many alternative architectures have been proposed, and Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU) are among the most popular[5]. This thesis uses the GRU architecture and features that incorporate time and location into the network to forecast people's next location. In this work, a spatial-temporal neural network (ST-GRU) has been proposed. It can be seen as two parts, ST and GRU. The first part is a feature extraction algorithm that pulls the information out of a trajectory into location sequences; that process transforms the trajectory into a friendly sequence format to feed into the model. The second part, GRU, is proposed to predict the next location given a user's trajectory. The study shows that the proposed model ST-GRU gives the best results compared with the baseline models.
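As a rough sketch of the GRU part only, and not the ST feature-extraction stage or the exact ST-GRU of the thesis, a next-location predictor can be set up as a sequence classifier over discretised location IDs. The vocabulary size, layer widths and random data below are assumptions made for illustration.

import numpy as np
import tensorflow as tf

# Invented setup: trajectories as sequences of discretised location IDs.
num_locations, seq_len = 500, 20
X = np.random.randint(1, num_locations, size=(1000, seq_len))   # past locations
y = np.random.randint(0, num_locations, size=(1000,))           # next location

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(num_locations, 32),                 # location embedding
    tf.keras.layers.GRU(64),                                      # recurrent trajectory encoding
    tf.keras.layers.Dense(num_locations, activation="softmax"),   # next-location scores
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# Predict the most likely next location for one trajectory.
print(model.predict(X[:1], verbose=0).argmax())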
8

Halabian, Faezeh. "An Enhanced Learning for Restricted Hopfield Networks." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42271.

Abstract:
This research investigates developing a training method for the Restricted Hopfield Network (RHN), which is a subcategory of Hopfield Networks. Hopfield Networks are recurrent neural networks proposed in 1982 by John Hopfield. They are useful for different applications such as pattern restoration, pattern completion/generalization, and pattern association. In this study, we propose an enhanced training method for RHN which not only improves the convergence of the training sub-routine, but is also shown to enhance the learning capability of the network. In particular, after describing the architecture/components of the model, we propose a modified variant of SPSA which, in conjunction with back-propagation over time, results in a training algorithm with enhanced convergence for RHN. The trained network is also shown to achieve better memory recall in the presence of noisy/distorted input. We perform several experiments, using various datasets, to verify the convergence of the training sub-routine, evaluate the impact of different parameters of the model, and compare the performance of the trained RHN in recreating distorted input patterns with that of a conventional RBM, a Hopfield network, and other training methods.
9

Cheng, Martin Chun-Sheng. "Dynamical Near Optimal Training for Interval Type-2 Fuzzy Neural Network (T2FNN) with Genetic Algorithm." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030722.172812.

Abstract:
A type-2 fuzzy logic system (FLS) cascaded with a neural network, called a type-2 fuzzy neural network (T2FNN), is presented in this work to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of type-2 to type-1 reduction; therefore the interval T2FNN is adopted here to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of the interval T2FNN is first developed. The stable and optimal left and right learning rates for the interval neural network, in the sense of maximum error reduction, can be derived for each iteration in the training process (back propagation). It can also be shown that the two learning rates cannot both be negative. Further, due to variation of the initial MF parameters, i.e. the spread level of uncertain means or deviations of interval Gaussian MFs, the performance of the back-propagation training process may be affected. To achieve better overall performance, a genetic algorithm (GA) is designed to search for a better-fitting spread rate for the uncertain means and near-optimal learning for the antecedent part. Several examples are fully illustrated. Excellent results are obtained for the truck backing-up control and the identification of a nonlinear system, which yield better performance than those using a type-1 FNN.
10

Cheng, Martin Chun-Sheng. "Dynamical Near Optimal Training for Interval Type-2 Fuzzy Neural Network (T2FNN) with Genetic Algorithm." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/366350.

Abstract:
A type-2 fuzzy logic system (FLS) cascaded with a neural network, called a type-2 fuzzy neural network (T2FNN), is presented in this work to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of type-2 to type-1 reduction; therefore the interval T2FNN is adopted here to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of the interval T2FNN is first developed. The stable and optimal left and right learning rates for the interval neural network, in the sense of maximum error reduction, can be derived for each iteration in the training process (back propagation). It can also be shown that the two learning rates cannot both be negative. Further, due to variation of the initial MF parameters, i.e. the spread level of uncertain means or deviations of interval Gaussian MFs, the performance of the back-propagation training process may be affected. To achieve better overall performance, a genetic algorithm (GA) is designed to search for a better-fitting spread rate for the uncertain means and near-optimal learning for the antecedent part. Several examples are fully illustrated. Excellent results are obtained for the truck backing-up control and the identification of a nonlinear system, which yield better performance than those using a type-1 FNN.
Thesis (Masters)
Master of Philosophy (MPhil)
School of Microelectronic Engineering
11

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Abstract:
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research that concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g. slow convergence due to improper choice of learning rates (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method; this provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems: the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. In this study, two performance indexes have been considered, namely learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher-Reeves conjugate gradient (CGFR) method and the Polak-Ribière conjugate gradient (CGPR) method. The performance comparisons between these minimization methods have demonstrated that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when using Powell's restart criteria with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all the methods in training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training the FNN for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria.
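To make the conjugate gradient idea concrete, here is a small, hedged sketch of the search-direction update with the Fletcher-Reeves and Polak-Ribière formulas and a Powell-style restart test, applied to an ill-conditioned quadratic standing in for the FNN training error. The restart threshold, the non-negativity reset and the toy cost are common textbook choices, not necessarily those used in the thesis.

import numpy as np

# Illustrative cost: an ill-conditioned quadratic standing in for the FNN error;
# grad() plays the role of the back-propagated gradient.
A = np.diag([1.0, 10.0, 100.0])
cost = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

w = np.array([1.0, 1.0, 1.0])
g = grad(w)
d = -g                                   # first direction: steepest descent

for k in range(50):
    # Exact line search along d (closed form for a quadratic cost).
    alpha = -(g @ d) / (d @ A @ d)
    w = w + alpha * d
    g_new = grad(w)

    beta_fr = (g_new @ g_new) / (g @ g)            # Fletcher-Reeves (shown for reference)
    beta_pr = (g_new @ (g_new - g)) / (g @ g)      # Polak-Ribiere
    beta = max(beta_pr, 0.0)                       # PR with reset, a common safeguard

    # Powell-style restart: fall back to steepest descent when successive
    # gradients are far from orthogonal.
    if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
        d = -g_new
    else:
        d = -g_new + beta * d
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break

print("iterations:", k + 1, "cost:", cost(w))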
12

Levy, Pamela Campos. "Reconhecimento e segmentação do mycobacterium tuberculosis em imagens de microscopia de campo claro utilizando as características de cor e o algoritmo backpropagation." Universidade Federal do Amazonas, 2012. http://tede.ufam.edu.br/handle/tede/3292.

Abstract:
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas
Tuberculosis (TB) is an infectious disease transmitted by Koch's bacillus, Mycobacterium tuberculosis. An estimated 1.4 million people died of tuberculosis in 2010, and about 95% of these deaths occurred in underdeveloped or developing countries. In Brazil, more than 68,000 new cases are registered each year; Amazonas is currently the Brazilian state with the highest incidence rate of the disease. One of the TB diagnostic methods adopted by the Ministry of Health is bright-field smear microscopy, which consists of counting bacilli on slides containing sputum samples from the patient, prepared and stained according to a standardized methodology. Over the past five years, research on the recognition of tuberculosis bacilli in bright-field microscopy images has been carried out with a view to automating this diagnostic method, since the high number of smear examinations performed by professionals induces visual fatigue and, consequently, diagnostic errors. This work presents a new method for recognizing and segmenting tuberculosis bacilli in images of slide fields containing the patient's pulmonary secretion, stained by the Kinyoun method. From these images, samples of bacillus and background pixels were extracted to train the classifier. The images were automatically separated into two groups according to their background content. The developed method selects an optimal set of colour features of the bacillus and of the image background, using the scalar feature selection method. These features were used in a pixel classifier, a multilayer perceptron trained with the backpropagation algorithm. The optimal set of selected features, {G-I, Y-Cr, L-a, R-G, a}, drawn from the RGB, HSI, YCbCr and Lab colour spaces, combined with a perceptron network with eighteen (18) neurons in the first layer, three (3) in the second and one (1) in the third (18-3-1), resulted in an accuracy of 92.47% in the segmentation of bacilli. The automated discrimination of images by background content supports the conclusion that the method described here is better suited to segmenting bacilli in images with a low density of background content (a more uniform background). As future work, new techniques should be developed to remove the noise present in images with a high density of background content (backgrounds containing many artifacts).
13

Зимовець, Т. С. "Інтелектуальна інформаційна технологія комп'ютерного діагностування патології волосся". Master's thesis, Сумський державний університет, 2020. https://essuir.sumdu.edu.ua/handle/123456789/78595.

Abstract:
A decision support system capable of learning with the use of neural network technology was synthesised, for which a back-propagation neural network was used. The work optimises the parameters of the standard training algorithm for a network of this type, which made it possible to raise the accuracy of the resulting neural network classifier. The software implementation was carried out using the NNToolBox extension package of the MATLAB 6.5 environment.
14

Valenta, Martin. "Predikce proteinových domén." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236163.

Abstract:
The work is focused on the area of proteins and their domains. It also briefly describes methods for obtaining the protein structure at the various levels of the hierarchy. This is followed by a review of existing tools for protein domain prediction and of databases containing domain information. In the next part of the work, selected representatives of prediction methods are introduced; these methods work with information about the internal structure of the molecule or the amino acid sequence. The appropriate chapter outlines the applied procedure for predicting domain boundaries. The prediction is derived from the primary structure of the protein using a neural network. The implemented procedure and the possibilities for its further development are presented at the conclusion of this work.
15

BISWAS, ANUBHAB. "SIZE PREDICTION OF SILVER NANOPARTICLES USING ARTIFICIAL NEURAL NETWORK." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19640.

Abstract:
The study emphasized the estimation and prediction of the size of silver nanoparticles prepared via green synthesis, using an artificial neural network. A number of recordings from a suitable, thoroughly conducted experiment were taken into account, in which parameters like the concentration of plant extract, reaction temperature, the concentration of silver nitrate and stirring duration were taken as inputs, whereas the size of the silver nanoparticles was taken as the output. After taking all the possible parameters into account, we designed an artificial neural network controller on the MATLAB platform, based entirely on the back-propagation algorithm. After rigorous training of the ANN controller and adjustment of the relevant network parameters, it was found to perform close to expectations. As a result, we were also able to determine the contribution of each factor involved in tuning the size of the silver nanoparticles formed. We believe this proposed model can contribute greatly to the exploration of a wide range of applications and to reducing the amount of material required to produce silver nanoparticles of the desired size under optimised conditions.
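A minimal sketch of the kind of regression model described above, assuming the four inputs named in the abstract; the data values, network size and training settings are invented, and scikit-learn's MLPRegressor stands in for the MATLAB back-propagation controller.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Invented records: [extract concentration, temperature, AgNO3 concentration,
# stirring duration] -> particle size (nm). Real values would come from experiments.
X = rng.uniform([0.1, 25, 0.5, 10], [2.0, 90, 5.0, 120], size=(60, 4))
y = 80 - 10 * X[:, 0] - 0.3 * X[:, 1] + 4 * X[:, 2] - 0.05 * X[:, 3] \
    + rng.normal(scale=2.0, size=60)

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(8,), solver="sgd",
                     learning_rate_init=0.01, max_iter=5000, random_state=0)
model.fit(Xs, y)                                  # gradient-descent (back-propagation) training
print("R^2 on training data:", round(model.score(Xs, y), 3))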
16

Wu, Chen-Ling, and 吳晨翎. "A Novel Classification Algorithm Using Random Back-Propagation Neural Networks." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5bc47r.

17

Liang, Shu-fang, and 梁淑芳. "A Parallel Back-Propagation Algorithm with the Levenberg Marquardt Method." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/18644310071227714685.

Abstract:
Master's thesis, Soochow University, Department of Computer Science, ROC year 94.
Due to the excellent learning capability of the artificial neural network (ANN), many researchers are interested in using it to solve problems in pattern recognition, cluster analysis, forecasting, etc. ANN applications have been validated by specialists and scholars from many fields of science. Practical applications show that an ANN trained with different learning rules exhibits clear differences in convergence speed, variance and training time. Among the supervised networks, the Back Propagation Network (BPN) model is the most popular, and it is the basis for the model improvement in this study. The BPN uses a multilayer perceptron structure and is trained by forward propagation of the signal and backward propagation of the error, so it belongs to the supervised learning methods. The combination of the multilayer perceptron structure with error back-propagation makes the BPN a learning rule that can handle a large number of weights effectively. The traditional BPN uses the steepest descent method to train and update the weights, which has the following main shortcomings: (1) it is apt to converge to a local minimum, making it difficult to reach the global minimum; (2) as the search approaches the minimum, the gradient shrinks, the weight updates slow down, more iterations are needed and the training time grows; (3) near the minimum, increasing the learning rate may speed up convergence, but if it is increased too much the result may diverge. Because of these shortcomings of the traditional BPN, this work proposes the Levenberg Marquardt-Hidden Layer Partition (LM-HLP) algorithm to improve it. We adopt the Levenberg-Marquardt method in place of the steepest descent training used in the traditional back-propagation algorithm. In addition, we use batch learning to load the dataset and the block-cyclic distribution method to divide the dataset evenly among processors. Finally, we implement the BP learning algorithm with parallel processing. The LM-HLP algorithm not only makes the learning process of the BPN more efficient but also makes it harder to become trapped in a local minimum.
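For illustration, a hedged sketch of the Levenberg-Marquardt weight update that distinguishes it from steepest descent, shown on a one-neuron regression problem; the damping schedule is a standard textbook choice and not necessarily the thesis's LM-HLP variant, and the data are invented.

import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Tiny regression task: fit a single sigmoid neuron y = s(w*x + b) to data.
x = np.linspace(-3, 3, 40)
t = sigmoid(2.0 * x - 1.0) + rng.normal(scale=0.02, size=40)

w = np.array([0.1, 0.0])      # parameters [w, b]
mu = 1e-2                     # LM damping factor

for _ in range(100):
    y = sigmoid(w[0] * x + w[1])
    e = y - t                                    # residuals
    # Jacobian of the residuals w.r.t. the parameters (the back-propagated derivatives).
    dy = y * (1 - y)
    J = np.column_stack([dy * x, dy])
    # Levenberg-Marquardt step: (J^T J + mu I) dw = -J^T e.
    dw = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ e)
    new_e = sigmoid((w[0] + dw[0]) * x + (w[1] + dw[1])) - t
    if np.sum(new_e ** 2) < np.sum(e ** 2):
        w, mu = w + dw, mu * 0.5   # accept the step, trust the Gauss-Newton model more
    else:
        mu *= 2.0                  # reject, increase damping (closer to gradient descent)

final_e = sigmoid(w[0] * x + w[1]) - t
print("fitted parameters:", w, "SSE:", np.sum(final_e ** 2))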
18

Pear, Huang, and 黃宗慶. "Improvement of Back Propagation Algorithm by Error Saturation Prevention Method." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/28527021447087639836.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Graduate Institute of Electronic Engineering, ROC year 86.
The back-propagation algorithm is currently the most widely used learning algorithm for artificial neural networks. With proper selection of the feed-forward neural network architecture, it is capable of approximating most problems with high accuracy and good generalization ability. However, slow convergence is a serious problem when using this well-known back-propagation (BP) learning algorithm in many applications. As a result, many researchers have tried to improve the learning efficiency of the BP algorithm through various enhancements. In our study, we consider the error saturation (ES) condition, which is caused by the use of the gradient descent method and greatly slows down the learning speed of the BP algorithm. In this work, we analyze the causes of the ES condition both in the output and in the hidden layers. An error saturation prevention (ESP) function is then proposed to keep the nodes in the output layer out of the ES condition, and we also apply this method to the nodes in the hidden layers to adjust the learning term. In addition, an adaptive learning method for the temperature variable in the activation function is proposed to help the learning process. With the proposed methods, we can not only improve the learning efficiency by preventing the ES condition but also maintain the semantic meaning of the energy function. Finally, we propose heuristics for constructing general energy functions that can prevent the ES condition during the learning phase. Simulations are also given to show the workings of the proposed method.
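The thesis's ESP function is not reproduced here; the following sketch only illustrates the error saturation problem itself and one simple, commonly used remedy (adding a small constant to the sigmoid derivative), so the exact modification shown is an assumption made for illustration.

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# A saturated output unit: target 1.0 but output near 0.0.
target, out = 1.0, sigmoid(-6.0)          # out is roughly 0.0025

error = target - out
plain_delta = error * out * (1 - out)            # standard BP delta: almost zero
esp_delta   = error * (out * (1 - out) + 0.1)    # illustrative anti-saturation term

print("error:", round(error, 4))
print("standard delta:", round(plain_delta, 5))   # almost no weight change despite large error
print("modified delta:", round(esp_delta, 5))     # learning signal preserved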
19

Chaudhari, Gaurav Uday, Manohar V, and Biswajit Mohanty. "Function approximation using back propagation algorithm in artificial neural networks." Thesis, 2007. http://ethesis.nitrkl.ac.in/4215/1/Function_Approximation_using_Back_Propagation_Algorithm_in_Artificial_neural_networks__3.pdf.

Abstract:
Inspired by biological neural networks, artificial neural networks are massively parallel computing systems consisting of a large number of simple processors with many interconnections. They have input connections which are summed together to determine the strength of their output, which is the result of the sum being fed into an activation function. Based on architecture, ANNs can be feed-forward networks or feedback networks. In the most common family of feed-forward networks, called the multilayer perceptron, neurons are organized into layers that have unidirectional connections between them. These connections are directed (from the input to the output layer) and have weights assigned to them. The principle of the ANN is applied to approximating a function: the network learns a function by looking at examples of that function. The internal weights in the ANN are slowly adjusted so as to produce the same output as in the examples, and performance is improved over time by iteratively updating the weights in the network. The hope is that when the ANN is shown a new set of input variables, it will give a correct output. To train a neural network to perform some task, we must adjust the weights of each unit in such a way that the error between the desired output and the actual output is reduced. This process requires that the neural network compute the error derivative of the weights (EW); in other words, it must calculate how the error changes as each weight is increased or decreased slightly. The back-propagation algorithm is the most widely used method for determining EW. We started our program with a fixed-structure network: a 4-layer network with 1 input, 2 hidden and 1 output layers. The number of nodes in the input layer is 9 and in the output layer is 1; the hidden layers are fixed at 4 and 3 nodes. The learning rate is taken as 0.07. We wrote the program in MATLAB and obtained the output of the network. A graph is plotted with the number of iterations and the mean square error as parameters, and the convergence rate of the error is very good. We then moved to a network with all of its parameters varying: we wrote the program in Visual C++ with the number of hidden layers, the number of nodes in each hidden layer and the learning rate all varying, and recorded the convergence plots for the different structures obtained by varying these variables.
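A minimal sketch of the fixed 9-4-3-1 network described above, with learning rate 0.07 and the mean square error tracked per epoch; the target function and data are invented, and the code is written in Python rather than the MATLAB/Visual C++ programs of the thesis.

import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Invented function-approximation task with 9 inputs and 1 output in [0, 1].
X = rng.uniform(-1, 1, size=(200, 9))
w_true = rng.normal(size=(9, 1))
T = sigmoid(X @ w_true)

# Fixed 9-4-3-1 architecture and learning rate 0.07, as in the description above.
sizes, lr = [9, 4, 3, 1], 0.07
Ws = [rng.normal(scale=0.5, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

for epoch in range(3000):
    # Forward pass, keeping each layer's activations for the backward pass.
    acts = [X]
    for W, b in zip(Ws, bs):
        acts.append(sigmoid(acts[-1] @ W + b))
    # Backward pass: output-layer delta, then propagate back through the layers.
    delta = (acts[-1] - T) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(Ws))):
        grad_W = acts[i].T @ delta / len(X)
        grad_b = delta.mean(axis=0)
        if i > 0:
            delta = (delta @ Ws[i].T) * acts[i] * (1 - acts[i])
        Ws[i] -= lr * grad_W
        bs[i] -= lr * grad_b
    if epoch % 1000 == 0:
        print("epoch", epoch, "MSE", float(np.mean((acts[-1] - T) ** 2)))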
20

Yu-Wen, Lin, and 林郁文. "Forecasting Exchange Rate by Genetic Algorithm Based Back Propagation Network Model." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/70849697654226511825.

Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Department of International Business, ROC year 97.
Forecasting currency exchange rates is an important issue in finance. This topic has received much attention, particularly in econometrics, where the selection of the variables that influence forecasts is central. In this paper, a new forecasting model is constructed: we adopt a Genetic Algorithm (GA) to provide the optimal variable weights and to select the optimal set of variables as the input-layer neurons, and then we predict the exchange rates with a Back Propagation Network (BPN); the combined model is called the Genetic Algorithm Based Back Propagation Network model (GABPN). Basically, we expect improved variable selection to provide better forecasting performance than a random method. Our experiments showed that the GABPN obtained the best forecasting performance and was highly consistent with the actual data. Within the selected 27 variables, only 10 act as critical factors influencing forecasting performance; moreover, the GABPN with the proper variables even outperformed the case with the full set of variables. In addition, the proposed model provides valuable information for financial analysis by identifying the variables that most influence exchange rate trends.
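A hedged sketch of the wrapper idea behind GABPN: a binary chromosome chooses which candidate variables feed the network, and fitness is the validation error of a small network trained on that subset. The data, GA settings and scikit-learn network below are invented stand-ins, not the thesis's 27 real variables or its BPN.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Invented data set: 27 candidate explanatory variables, exchange-rate-like target.
X = rng.normal(size=(300, 27))
y = X[:, [0, 3, 7]] @ np.array([0.6, -0.4, 0.8]) + rng.normal(scale=0.1, size=300)
X_tr, X_va, y_tr, y_va = X[:200], X[200:], y[:200], y[200:]

def fitness(mask):
    """Validation MSE of a small network trained on the selected variables."""
    if mask.sum() == 0:
        return np.inf
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    net.fit(X_tr[:, mask], y_tr)
    return np.mean((net.predict(X_va[:, mask]) - y_va) ** 2)

# Simple GA over binary chromosomes (1 = variable used as an input neuron).
pop = rng.integers(0, 2, size=(10, 27)).astype(bool)
for gen in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:5]]           # selection: keep the best half
    children = []
    for _ in range(5):
        a, b = parents[rng.integers(5)], parents[rng.integers(5)]
        cut = rng.integers(1, 27)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        children.append(child ^ (rng.random(27) < 0.05))   # bit-flip mutation
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))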
21

Xie, Zhen-Hong, and 謝鎮鴻. "Convergence characteristics of back propagation algorithm to a bypass neural network." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/68593006948914059733.

22

Chin, Da-Zen, and 秦大仁. "Combining Two-Dimensional Cepstrum and Extended Back-Propagation Algorithm to Speech Recognition." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/12737069190595924368.

Abstract:
Master's thesis, I-Shou University, Department of Electronic Engineering, ROC year 87.
Research in automatic speech recognition by machine has been carried out for almost four decades. A successful recognition system requires knowledge and expertise from a wide range of disciplines. For humans, speech recognition is a natural and simple process; however, making a computer respond to even simple spoken commands is an extremely complex and difficult task. In spite of the enormous research effort spent in trying to create an intelligent machine that can recognize spoken words and comprehend their meaning, we are far from the desired goal of a machine that can understand spoken discourse on any subject, by any speaker, in any environment. The common denominator of all recognition systems is the signal-processing front end, which converts the speech waveform into some type of parametric representation for further analysis and processing. In this thesis, two-dimensional cepstrum (TDC) analysis and its application to Mandarin speech recognition are described. The most important property of the TDC is that it can preserve static and dynamic features at the same time by use of the two-dimensional Fourier transform. Experimental results show that with the TDC both the storage requirement and the computational complexity can be reduced significantly. Besides, the complex time-alignment procedure is unnecessary for such a TDC-based recognition system. The most popular and successful neural network, the back-propagation (BP) network, is used as the basic recognizer. However, the long training time may be its major drawback. To alleviate this problem, an extended BP (EBP) learning algorithm is used to train the employed neural network. The basic idea of EBP is to enhance the autonomous capability of each neuron in the network by modifying its output function: each neuron is able to adjust its activation function as necessary. It has been shown that EBP has the following advantages over standard BP: a faster rate of convergence and greater accuracy of approximation. Although several new equations were developed, they are very easy to implement, because only a little computational complexity is introduced. However, our experiments also showed that accelerating the learning procedure degrades the recognition performance; fortunately, the degradation can be minimized by carefully selecting the corresponding learning parameters.
23

Chen, Shi-Hsien, and 陳士賢. "Short-Term Thermal Generating Unit Commitment by Back Propagation Network and Genetic Algorithm." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/08530828674155252707.

Abstract:
Master's thesis, National Sun Yat-sen University, Department of Electrical Engineering, ROC year 89.
Unit commitment is one of the most important subjects in the economical operation of power systems; it attempts to minimize the total thermal generating cost while satisfying all the necessary constraints. This thesis proposes a short-term thermal generating unit commitment method based on a genetic algorithm and a back-propagation network. The genetic algorithm is an optimization technique developed from the principles of natural evolution; in the optimization process it seeks a set of solutions simultaneously rather than a single one, moving stochastically from one solution to another, which helps keep the search from being confined to local minima. The neural network approach excels in speed and stability. This thesis uses the back-propagation network to implement the neural network and sets the optimal unit combination derived from the genetic algorithm as the target output. For a fixed power system, the response can be computed immediately by the neural network; when the system architecture changes, the genetic algorithm can be applied to re-evaluate the optimal unit commitment, with the aim of overcoming the shortcomings of traditional methods. The thesis takes a power system with six units as an example for performance assessment. The results show that, in optimizing unit commitment, the genetic algorithm provides solutions closer to the global optimum than traditional methods; moreover, the neural network method not only approximates the solution obtained by the genetic algorithm but is also faster than any of the other methods.
24

Tseng, Chia-Ming, and 曾嘉明. "Neural-Based Packet Equalization for Indoor Radio Channel by Fast Back Propagation Algorithm." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/39356562410865805443.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Communication Technology, ROC year 81.
In this thesis, a new decision feedback equalizer (DFE) based on a neural network is proposed to overcome the multipath fading problem of the indoor radio channel, and the fast packet bipolar-state back propagation (fast PBSBP) algorithm is proposed for training the neural-based DFE. This algorithm features (1) a high convergence rate and (2) the ability to track the time variations of the channel characteristics. Moreover, we use a 2-D real-vector representation to process the complex-valued signals, so no complex operations are needed and the singularity problem can be avoided. For this reason, both the complexity of the DFE architecture and the computational complexity can be reduced. Computer simulation results show that the new equalizer with the fast PBSBP algorithm achieves a lower error and a lower bit error rate than the traditional DFE and training algorithm.
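A small sketch of the 2-D real-vector representation mentioned above: complex received samples and fed-back decisions are split into real and imaginary parts so the equalizer input is purely real. The QPSK burst, channel taps and tap counts are invented, and the fast PBSBP training itself is not shown.

import numpy as np

rng = np.random.default_rng(6)

# Invented QPSK burst through a simple two-tap multipath channel with noise.
bits = rng.integers(0, 2, size=(400, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)      # complex QPSK symbols
channel = np.array([1.0, 0.4 + 0.3j])                           # invented impulse response
received = np.convolve(symbols, channel)[:400] + 0.05 * (
    rng.normal(size=400) + 1j * rng.normal(size=400))

def dfe_input(n, decisions, n_ff=3, n_fb=2):
    """Build the equalizer input for symbol n as a 2-D real vector: real and
    imaginary parts of the feed-forward samples and fed-back decisions, so the
    neural network itself never handles complex arithmetic."""
    ff = [received[n - k] if n - k >= 0 else 0j for k in range(n_ff)]
    fb = [decisions[n - 1 - k] if n - 1 - k >= 0 else 0j for k in range(n_fb)]
    taps = np.array(ff + fb)
    return np.concatenate([taps.real, taps.imag])   # length 2*(n_ff+n_fb), all real-valued

# Example: input vector for symbol 10, pretending past decisions were correct.
x10 = dfe_input(10, decisions=symbols)
print(x10.shape, x10[:4])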
25

Lin, Chia-Tseng, and 林家增. "THE STUDY ON FUZZY NEURAL NETWORK CONTROLLER USING ARTIFICIAL IMMUNE BACK-PROPAGATION ALGORITHM." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/90209604944539874518.

Abstract:
Master's thesis, Tatung University, Department of Electrical Engineering, ROC year 99.
A fuzzy neural network (FNN) identifier based on a back-propagation artificial immune (BPIA) algorithm, named the FNN-BPIA controller, is proposed for nonlinear systems in this thesis. The proposed controller is composed of an FNN identifier, an IA estimator, a hitting controller, and a computation controller. Firstly, the FNN identifier is utilized to estimate the dynamics of the nonlinear system; its parameters, which include the weights, means, and standard deviations of the FNN identifier, are adjusted by the BP algorithm. Secondly, the initial values of the weights, means, and standard deviations of the FNN identifier and the parameters of the BP algorithm are estimated by the IA estimator. Thirdly, the training process of the IA estimator has four stages: initialization, crossover, mutation, and evolution. Further, the computation controller is used to calculate the control effect and the hitting controller is utilized to eliminate the uncertainties. Finally, the inverted pendulum system and a second-order chaotic system are simulated to verify the performance and effectiveness of the FNN-BPIA controller.
26

KHATRI, HARSH. "RECOMMENDER SYSTEM BASED ON AFFECTIVE FEEDBACK INCORPORATING HYBRID OPTIMIZATION ALGORITHM." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14687.

Abstract:
Websites have become more and more dynamic, but they still lack intelligence. Although websites are able to mould themselves according to users' preferences and mouse clicks, they cannot intelligently predict the content a user might like. The amount of data collected online by websites is increasing, and so is the demand by users for unique content. This need for self-organizing websites that transform themselves to suit every customer's requirements has become a challenging problem. This work concentrates on the problem of creating relevant content for each user. To achieve the required flexibility, we propose using mouse movements as a way of capturing the points of interest, the points where the user had the most focus, by recording the mouse locations, since the mouse pointer usually follows the eye trail to the point of interest on a website. These mouse movements can be used to study patterns of user behaviour when users are exposed to the same reference web page, using a pattern analysis technique such as back-propagation neural networks, which enables soft computing to recursively map users into groups/clusters with similar interests. These clusters can be related dynamically to the historical observations documented in the well-known and widely referenced MovieLens database as new users rate movies on the system. The correlation between these two systems can be attained by using the Teacher Learning Optimization framework. The proposed algorithm therefore produces very effective results even on a cold start and achieves linear precision.
27

Sheng, Tang Tien, and 唐天生. "The Studies of A Neural Network Fuzzy Controller and A Grey Back Propagation Algorithm." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/18005035464066460461.

Abstract:
Doctoral dissertation, Chung Cheng Institute of Technology, National Defense University, Graduate School of Defense Science, ROC year 92.
This dissertation consists of two subjects: the first is the study of a neural network fuzzy controller, and the second is the study of a grey back-propagation (GBP) algorithm. First, two learning methods are presented for automatically generating fuzzy if-then rules in the neural network fuzzy controller: one combines a heuristic method with the back-propagation (BP) algorithm, and the other is a hybrid neural network learning method. Through computer simulations, the two proposed methods are shown to have the following advantages: (1) it is unnecessary to rely on experts or experienced operators to acquire fuzzy rules; (2) neither time-consuming iterative learning procedures nor complicated rule generation mechanisms are required; (3) the obtained fuzzy rules have self-learning and robust capabilities. The second objective is to use the BP algorithm in conjunction with the grey relationship in order to improve the BP algorithm. This new technique is developed by directly incorporating the grey relationship into the BP algorithm, and a grey BP learning method, namely the GBP, is proposed. Furthermore, the GBP can effectively train the neural network.
28

Kim, Seong-Hee. "Intelligent information retrieval using an inductive learning algorithm and a back-propagation neural network." 1994. http://catalog.hathitrust.org/api/volumes/oclc/32620649.html.

Abstract:
Thesis (Ph. D.)--University of Wisconsin--Madison, 1994.
Typescript. Includes bibliographical references (leaves 173-189).
29

Lin, Jun-Shuw, and 林宗順. "Constructing the Wafer Yield Prediction Model Using Genetic Algorithm and Back-Propagation Neural Network." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/93330461139548193587.

30

Chang, Fu-Kai, and 張富凱. "Recurrent Fuzzy Neural System Design and Its Applications Using A Hybrid Algorithm of Electromagnetism-like and Back-propagation." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/58699938852425541579.

Abstract:
Master's thesis, Yuan Ze University, Department of Electrical Engineering, ROC year 96.
Based on the electromagnetism-like algorithm (EM), we propose two novel hybrid learning algorithms for recurrent fuzzy neural system design: the improved EM algorithm with the BP technique (IEMBP) and the improved EM algorithm with the GA technique (IEMGA). IEMBP and IEMGA are composed of initialization, local search, total force calculation, movement, and evaluation; they are hybridizations of EM with BP and of EM with GA, respectively. The EM algorithm is a population-based meta-heuristic originating from electromagnetism theory. For recurrent fuzzy neural system design, IEMBP and IEMGA simulate the "attraction" and "repulsion" of charged particles by considering each neural system parameter as an electrical charge. The modification of the EM algorithm is that the random neighborhood local search is replaced by BP or GA, and a competitive concept is adopted for training the recurrent fuzzy neural network (RFNN) system. IEMBP combines EM with BP to obtain fast convergence and low computational complexity; however, it needs the system gradient information for optimization. For gradient-information-free systems, IEMGA is proposed to handle the optimization problem; it combines EM with GA to reduce the computational complexity of EM. IEMBP and IEMGA are used to develop the update laws of the RFNN for nonlinear system identification and control. Finally, several illustrative examples are presented to show the performance and effectiveness of IEMBP and IEMGA.
31

Lei, Ho Chun, and 雷賀君. "A Rapid Diagnosis System for Anterior Cruciate Ligament Injury- Using Rough sets、Genetic Algorithm and Back Propagation Neural Network." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/25005222240908956730.

Abstract:
Master's thesis, Da-Yeh University, Department of Industrial Engineering, ROC year 92.
The technology of data mining can reduce the cost of maintaining data and increase the added value of data; most importantly, it can find benefits hidden behind the data. Data that are voluminous and disordered not only increase the difficulty of data mining but also lead to errors in data analysis because of incomplete information. For these reasons, this research combines Rough Sets and a Genetic Algorithm as the data mining tool, in order to handle uncertain data while keeping the accuracy of the classification rules. Another purpose is to establish a link among the data and use a neural network to learn the classification rules. Furthermore, this research takes the factors that cause anterior cruciate ligament injury as its case and, using data mining techniques different from the traditional methods in medicine, investigates the relationship between anterior cruciate ligament injury and other types of knee injury, as well as the key factors that cause anterior cruciate ligament injury, in order to develop a rapid diagnosis system. We hope to achieve the following goals: 1. Develop a data mining tool that combines Rough Sets and a Genetic Algorithm. 2. Establish rapid diagnosis rules for anterior cruciate ligament injury. 3. Help prevent anterior cruciate ligament injury.
32

Chang, Miao-Han, and 昌妙韓. "Off-Bed Model and Sensing Detection System for Human Body Using the Back-Propagation Neural Network Algorithm: Design and Implementation." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/60606134593793467642.

Abstract:
Master's thesis, National Pingtung University of Science and Technology, Department of Management Information Systems, ROC year 102.
As the elderly population grows quickly, healthcare systems based on state-of-the-art ICT technology become more and more important. According to the statistics of the Department of Health, Executive Yuan, falls are the second most common cause of accidental injury among the elderly, and about 30% of people will fall while in hospital. Most falls occur at the moments of getting out of or into bed. Although hospitals provide an emergency bell beside the bed, few patients use it when leaving the bed: no one thinks a fall will happen until it occurs, and most elderly people believe they can leave the bed safely by themselves. To address falling accidents, this project designs a smart sensing and detection system based on a triaxial accelerometer and the back-propagation neural network algorithm to detect abnormal body movement and achieve smart action awareness. The proposed system not only correctly detects off-bed actions and falls, but also detects the risk of a fall before it happens. Furthermore, since many elderly patients at high risk of falling fall within three to five steps of leaving the bed, the proposed system detects the action of leaving the bed and then sends alarm messages to the nursing station and the duty nurses, who can help the elderly person leave the bed and prevent accidental injury. This research firstly proposes formal models of the off-bed action. The proposed detection system uses the triaxial accelerations and the back-propagation neural network algorithm to improve the accuracy of action detection. The final purpose of this project is to assist medical professionals and caregivers in helping the elderly and preventing falls.
33

Huang, Chien-Yu, and 黃建裕. "Optimizing Time Series Related Factors for the Forecasting Model by Employing the Taguchi Method, Back-propagation Network and Genetic Algorithm." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/11440061245469905830.

Abstract:
Doctoral dissertation, National Cheng Kung University, Department of Industrial and Information Management, ROC year 94.
To satisfy the volatile nature of today's markets, businesses require a significant reduction in product development lead times. Consequently, the ability to develop product sales forecasts accurately is of fundamental importance to decision-makers. Over the years, many forecasting techniques of varying capabilities have been introduced. The precise extent of their influences, and the interactions between them, have never been fully clarified, though various forecasting factors have been explored in previous studies. Accordingly, this study adopts the Taguchi method to calibrate the controllable factors of a forecasting model. An inner orthogonal array was constructed for the time series related controllable factors. An experimental design was then performed to establish the appropriate levels for each factor. At the same time, an outer orthogonal array was used to incorporate the inherited parameters of the forecasting method as the noise factors of the Taguchi method. As to the forecasting method, a heuristic technique such as the genetic algorithm (GA) has been recognized as a potential method to establish the parameter and topology settings which optimize the back-propagation network (BPN) performance. However, there are too many undetermined parameters of the BPN and the genetic algorithm themselves, and the impact and interactions of these controllable factors have not been fully explored as they interact simultaneously. Hence, it is desirable to develop a more methodical approach to identifying these parameters' values. The solutions obtained using the proposed forecasting model, which is combined with the Taguchi method, are compared with the results presented in previous studies. Illustrative examples, employing data from a power company and chaotic time series, serve to demonstrate the thesis. The results show that the proposed model permits the construction of a better forecasting model through the suggested data collection method.
34

Fick, Machteld. "Neurale netwerke as moontlike woordafkappingstegniek vir Afrikaans." Diss., 2002. http://hdl.handle.net/10500/584.

Abstract:
Text in Afrikaans
Summaries in Afrikaans and English
In Afrikaans, as in Dutch and German, compound words are written as one word. New words are therefore created by simply joining words. Word hyphenation during typesetting by computer is a problem, because the source of reference changes all the time. Several algorithms and techniques for hyphenation exist, but the results are not satisfactory. Afrikaans words with correct syllabification were extracted from the electronic version of the Handwoordeboek van die Afrikaanse Taal (HAT). A neural network (feedforward backpropagation) was trained with about 5 000 of these words. The neural network was refined by heuristically finding a suitable training algorithm and transfer function for the problem, as well as determining the optimal number of layers and the number of neurons in each layer. The neural network was tested with 5 000 words not in the training data, and it classified 97,56% of the possible points in these words correctly as either valid or invalid hyphenation points. Furthermore, 510 words from magazine articles were tested with the neural network, and 98,75% of the possible positions were classified correctly.
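A hedged sketch of how such a hyphenation classifier can be framed: each candidate split position is encoded as a one-hot window of surrounding characters and labelled valid or invalid. The toy word list, window size and scikit-learn network are assumptions made for illustration, not the HAT data or the network of the dissertation.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy training words with hyphenation points marked by '-', standing in for
# the HAT-derived word list used in the dissertation.
words = ["re-ke-naar", "net-werk", "woor-de-boek", "af-kap-ping", "pro-gram"]

def boundaries(marked):
    """Return the plain word and the set of valid split positions."""
    plain, cuts, pos = "", set(), 0
    for ch in marked:
        if ch == "-":
            cuts.add(pos)
        else:
            plain += ch
            pos += 1
    return plain, cuts

def encode(plain, i, window=2):
    """One-hot encode a window of characters around candidate position i."""
    alphabet = "abcdefghijklmnopqrstuvwxyz_"
    chars = [(plain[i + k] if 0 <= i + k < len(plain) else "_")
             for k in range(-window, window)]
    vec = np.zeros(len(chars) * len(alphabet))
    for j, c in enumerate(chars):
        vec[j * len(alphabet) + alphabet.index(c)] = 1.0
    return vec

X, y = [], []
for marked in words:
    plain, cuts = boundaries(marked)
    for i in range(1, len(plain)):            # every internal position is a candidate
        X.append(encode(plain, i))
        y.append(1 if i in cuts else 0)       # 1 = valid hyphenation point

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
clf.fit(np.array(X), y)
print("training accuracy:", clf.score(np.array(X), y))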
Computing
M.Sc. (Operasionele Navorsing)
