Dissertations / Theses on the topic 'Neural Networks method'

Consult the top 50 dissertations / theses for your research on the topic 'Neural Networks method.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Dunn, Nathan A. "A Novel Neural Network Analysis Method Applied to Biological Neural Networks." Thesis, University of Oregon, 2006. http://proquest.umi.com/pqdweb?did=1251892251&sid=2&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 122- 131). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Youping. "Neural network approximation for linear fitting method." Ohio : Ohio University, 1992. http://www.ohiolink.edu/etd/view.cgi?ohiou1172243968.

Full text
3

CUNHA, JOAO MARCO BRAGA DA. "ESTIMATING ARTIFICIAL NEURAL NETWORKS WITH GENERALIZED METHOD OF MOMENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26922@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Artificial Neural Networks (ANNs) started being developed in the 1940s. However, it was during the 1980s that ANNs became widely relevant, pushed by the popularization and increasing power of computers. Also in the 1980s, there were two other academic developments closely related to the present work: (i) a large increase of interest in nonlinear models among econometricians, culminating in the econometric approaches for ANNs by the end of that decade; and (ii) the introduction of the Generalized Method of Moments (GMM) for parameter estimation in 1982. In econometric approaches for ANNs, estimation by Quasi Maximum Likelihood (QML) has always prevailed. Despite its good asymptotic properties, QML is very prone to an issue in finite-sample estimation known as overfitting. This thesis expands the state of the art in econometric approaches for ANNs by presenting an alternative to QML estimation that keeps its good asymptotic properties but is less prone to overfitting. The presented approach relies on GMM estimation. As a byproduct, GMM estimation allows the use of the so-called J test to verify the existence of neglected nonlinearity. The Monte Carlo studies performed indicate that the estimates from GMM are more accurate than those generated by QML in high-noise situations, especially in small samples. This result supports the hypothesis that GMM is less susceptible to overfitting. Exchange rate forecasting experiments reinforced these findings. A second Monte Carlo study revealed satisfactory finite-sample properties of the J test applied to neglected nonlinearity, compared with a widely known and used reference test. Overall, the results indicate that estimation by GMM is a recommendable alternative, especially for data with a high noise level.
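The core idea of the abstract above, fitting a network's parameters by matching sample moment conditions rather than by maximum likelihood, can be sketched in a few lines. This is a toy illustration, not the thesis's implementation: a one-neuron "network" y = b·tanh(a·x), instruments [1, x, x²] chosen for illustration, and a crude grid search with identity weighting in place of a real optimizer and the two-step optimal weighting matrix.

```python
import numpy as np

# Toy data from a one-neuron "network" y = b * tanh(a * x) + noise
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = 0.8 * np.tanh(1.5 * x) + 0.01 * rng.normal(size=500)

Z = np.column_stack([np.ones_like(x), x, x**2])   # instruments (an assumption here)

def moments(theta):
    """Sample moment conditions g(theta) = (1/n) * Z' (y - f(x; theta))."""
    resid = y - theta[1] * np.tanh(theta[0] * x)
    return Z.T @ resid / len(x)

def gmm_objective(theta):
    # Identity weighting matrix; with the optimal weighting matrix,
    # n * g'Wg at the optimum would give the J statistic mentioned above.
    g = moments(theta)
    return float(g @ g)

# Crude grid search over (a, b) instead of a numerical optimizer
grid_a = np.linspace(0.5, 2.5, 81)
grid_b = np.linspace(0.2, 1.4, 81)
best = min(((a, b) for a in grid_a for b in grid_b), key=gmm_objective)
```

With low noise, the GMM objective is minimized near the true parameters (1.5, 0.8).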
4

Bishop, Russell C. "A Method for Generating Robot Control Systems." Youngstown State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1222394834.

Full text
5

KAIMAL, VINOD GOPALKRISHNA. "A NEURAL METHOD OF COMPUTING OPTICAL FLOW BASED ON GEOMETRIC CONSTRAINTS." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1037632137.

Full text
6

Sung, Woong Je. "A neural network construction method for surrogate modeling of physics-based analysis." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43721.

Full text
Abstract:
A connectivity-adjusting learning algorithm, Optimal Brain Growth (OBG), was proposed. In contrast to conventional training methods for Artificial Neural Networks (ANNs), which focus on weight-only optimization, the OBG method trains both the weights and the connectivity of a network in a single training process. The standard Back-Propagation (BP) algorithm was extended to exploit the error-gradient information of latent connections whose current weight is zero. Based on this, the OBG algorithm makes a rational decision between further adjusting an existing connection weight and creating a new connection with zero weight. The training efficiency of a growing network is maintained by freezing stabilized connections during further optimization. A stabilized computational unit is also decomposed into two units, and a particular set of decomposition rules guarantees a seamless local re-initialization of the training trajectory. The OBG method was tested on multiple canonical regression and classification problems and on surrogate modeling of the pressure distribution on transonic airfoils. The OBG method showed improved learning capability, in a computationally efficient manner, compared to conventional weight-only training using connectivity-fixed Multilayer Perceptrons (MLPs).
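The "grow a connection using the gradient of latent zero weights" idea can be illustrated on a deliberately simplified linear model (this is a sketch of the growth principle only, not the thesis's OBG algorithm: no BP network, no connection freezing, no unit decomposition). A mask marks active connections; gradients are computed for all weights, but only active ones are updated, and periodically the latent connection with the largest gradient magnitude is activated.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
w_true = np.array([2.0, 0.0, -1.5, 0.0, 0.5, 0.0])   # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(6)
mask = np.zeros(6, dtype=bool)
mask[0] = True                                   # start from a single connection

for step in range(600):
    grad = -2.0 * X.T @ (y - X @ w) / len(y)     # gradient for ALL weights...
    w[mask] -= 0.05 * grad[mask]                 # ...but update active ones only
    if step % 100 == 99 and not mask.all():
        # "grow": activate the latent (zero-weight) connection whose
        # error gradient is largest in magnitude
        latent = np.flatnonzero(~mask)
        mask[latent[np.argmax(np.abs(grad[latent]))]] = True

mse = float(np.mean((y - X @ w) ** 2))
```

The relevant connections get activated early (their latent gradients are large), so the final fit recovers the sparse true weights.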
7

Chavali, Krishna Kumar. "Integration of statistical and neural network method for data analysis." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4749.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains viii, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 50-51).
8

Radhakrishnan, Kapilan. "A non-intrusive method to evaluate perceived voice quality of VoIP networks using random neural networks." Thesis, Glasgow Caledonian University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.547414.

Full text
9

Mohamed, Ibrahim. "A method for the analysis of the MDTF data using neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0032/MQ62402.pdf.

Full text
10

Rowlands, H. "Optimum design using the Taguchi method with neural networks and genetic algorithms." Thesis, Cardiff University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241701.

Full text
11

Stitson, Mark Oliver. "Design, implementation and applications of the Support Vector method and learning algorithm." Thesis, Royal Holloway, University of London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325368.

Full text
12

Bittner, Ray Albert. "Development and VLSI implementation of a new neural net generation method." Thesis, Virginia Tech, 1993. http://scholar.lib.vt.edu/theses/available/etd-12042009-020129/.

Full text
13

Butler, Martin A. "A Method of Structural Health Monitoring for Unpredicted Combinations of Damage." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1575967420002943.

Full text
14

Shen, Luou. "Freeway Travel Time Estimation and Prediction Using Dynamic Neural Networks." FIU Digital Commons, 2008. http://digitalcommons.fiu.edu/etd/17.

Full text
Abstract:
Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated from current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions. This dissertation investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on data collected by radar traffic detectors installed along a freeway corridor. DNNs comprise a class of neural networks that are particularly suitable for predicting variables like travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods of data imputation to account for the missing data usually encountered when collecting data with traffic detectors. It was also necessary to identify a method to estimate travel time on the freeway corridor based on data collected by point traffic detectors. A new travel time estimation method, referred to as the Piecewise Constant Acceleration Based (PCAB) method, was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) can work as well as the PCAB method, and both of them outperform other methods. This study also compared the travel time prediction performance of three DNN topologies with different memory setups.
The results show that one DNN topology (the time-delay neural network) outperforms the other two for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) topology that has been used in a number of previous studies for travel time prediction.
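The "memory setup" that distinguishes a time-delay network from a memoryless MLP is simply a sliding window of lagged inputs. A minimal sketch of that windowing, on synthetic data and with an ordinary least-squares predictor standing in for the neural network (the series, lag count and daily-cycle shape are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(400)
# Synthetic travel-time series: a daily cycle (96 intervals of 15 min) + noise
travel = 10.0 + 3.0 * np.sin(2 * np.pi * t / 96) + 0.1 * rng.normal(size=400)

def time_delay_inputs(series, n_lags):
    """Turn a series into (window, next value) pairs: the 'memory' a
    time-delay network sees at its input layer."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

X, y = time_delay_inputs(travel, 8)
Xb = np.column_stack([np.ones(len(X)), X])        # add a bias column
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)     # linear stand-in for the DNN
rmse = float(np.sqrt(np.mean((Xb @ coef - y) ** 2)))
```

Because the cycle is linearly predictable from a few lags, the one-step-ahead error drops to roughly the noise level.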
15

Gabrié, Marylou. "Towards an understanding of neural networks : mean-field incursions." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE035.

Full text
Abstract:
Machine learning algorithms relying on deep neural networks recently allowed a great leap forward in artificial intelligence. Despite the popularity of their applications, the efficiency of these algorithms remains largely unexplained from a theoretical point of view. The mathematical description of learning problems involves very large collections of interacting random variables, difficult to handle analytically as well as numerically. This complexity is precisely the object of study of statistical physics, whose mission, originally pointed towards natural systems, is to understand how macroscopic behaviors arise from microscopic laws. In this thesis we propose to take advantage of recent progress in mean-field methods from statistical physics to derive relevant approximations in this context. We exploit the equivalences and complementarities of message-passing algorithms, high-temperature expansions and the replica method. Following this strategy, we make practical contributions to the unsupervised learning of Boltzmann machines. We also make theoretical contributions by considering the teacher-student paradigm to model supervised learning problems. We develop a framework to characterize the evolution of information during training in these models. Additionally, we propose a research direction to generalize the analysis of Bayesian learning in shallow neural networks to their deep counterparts.
16

Khabou, Mohamed Ali. "Improving shared weight neural networks generalization using regularization theory and entropy maximization." University of Missouri, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9953870.

Full text
17

McFall, Kevin Stanley. "An artificial neural network method for solving boundary value problems with arbitrary irregular boundaries." Diss., Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-04052006-154934/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006.
George Vachtsevanos, Committee Member ; Nader Sadegh, Committee Member ; J. Robert Mahan, Committee Chair ; Ali Siadat, Committee Member ; Zhuomin Zhang, Committee Member.
18

Lout, Kapildev. "Development of a fault location method based on fault induced transients in distribution networks with wind farm connections." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.678845.

Full text
Abstract:
Electrical transmission and distribution networks are prone to short circuit faults since they span over long distances to deliver the electrical power from generating units to where the energy is required. These faults are usually caused by vegetation growing underneath bare overhead conductors, large birds short circuiting the phases, mechanical failure of pin-type insulators or even insulation failure of cables due to wear and tear, resulting in creepage current. Short circuit faults are highly undesirable for distribution network companies since they cause interruption of supply, thus affecting the reliability of their network, leading to a loss of revenue for the companies. Therefore, accurate offline fault location is required to quickly tackle the repair of permanent faults on the system so as to improve system reliability. Moreover, it also provides a tool to identify weak spots on the system following transient fault events such that these future potential sources of system failure can be checked during preventive maintenance. With these aims in mind, a novel fault location technique has been developed to accurately determine the location of short circuit faults in a distribution network consisting of feeders and spurs, using only the phase currents measured at the outgoing end of the feeder in the substation. These phase currents are analysed using the Discrete Wavelet Transform to identify distinct features for each type of fault. To achieve better accuracy and success, the scheme firstly uses these distinct features to train an Artificial Neural Network based algorithm to identify the type of fault on the system. Another Artificial Neural Network based algorithm dedicated to this type of fault then identifies the location of the fault on the feeder or spur. Finally, a series of Artificial Neural Network based algorithms estimate the distance to the point of fault along the feeder or spur. 
The impact of wind farm connections consisting of doubly-fed induction generators and permanent magnet synchronous generators on the accuracy of the developed algorithms has also been investigated using detailed models of these wind turbine generator types in Simulink. The results obtained showed that the developed scheme allows the accurate location of the short circuit faults in an active distribution network. Further sensitivity tests such as the change in fault inception angle, fault impedance, line length, wind farm capacity, network configuration and white noise confirm the robustness of the novel fault location technique in active distribution networks.
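The first step of the scheme described above, extracting distinct features from the phase currents with the Discrete Wavelet Transform, can be sketched with a hand-rolled one-level Haar DWT (the simplest wavelet; the thesis does not specify the mother wavelet, and the 50 Hz current, sampling rate and injected transient below are illustrative assumptions):

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

t = np.linspace(0.0, 1.0, 512, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)               # idealized phase current
faulty = healthy.copy()
faulty[200:210] += 3.0 * (-1.0) ** np.arange(10)   # fault-induced transient

_, d_healthy = haar_dwt(healthy)
_, d_faulty = haar_dwt(faulty)

def detail_energy(d):
    """Energy of the detail band: a simple feature for fault detection."""
    return float(np.sum(d ** 2))
```

The high-frequency transient concentrates in the detail band, so its energy separates faulty from healthy records; features like this are what the ANN classifiers would be trained on.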
19

Sequeira, Bernardo Pinto Machado Portugal. "American put option pricing : a comparison between neural networks and least-square Monte Carlo method." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19631.

Full text
Abstract:
Master's in Mathematical Finance
This thesis compares two methods for pricing American put options: the Least-Square Monte Carlo (LSM) method and Neural Networks, a machine learning method. Two different Neural Network models were developed: a simple one, Model 1, and a more complex one, Model 2. The study relies on market option prices for 4 large US companies, from December 2018 to March 2019. All methods show good accuracy; however, once calibrated, the Neural Networks show a much better execution time than the LSM. Both Neural Network models end up with a lower Root Mean Square Error (RMSE) than the LSM for options of different maturities and strikes. Model 2 substantially outperforms the other models, having an RMSE ca. 40% lower than that of the LSM. The lower RMSE is consistent across all companies, strike levels and maturities.
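The LSM benchmark used in this comparison is the Longstaff-Schwartz algorithm: simulate paths, then step backwards, regressing the discounted continuation value on functions of the current price for in-the-money paths to decide early exercise. A compact sketch with a quadratic regression basis (the basis, step count and parameter values below are illustrative choices, not the thesis's settings):

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, steps=50, paths=20000, seed=0):
    """Longstaff-Schwartz least-squares Monte Carlo for an American put."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.normal(size=(paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)          # exercise value at expiry
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                   # roll back one step
        itm = K - S[:, t] > 0.0                   # regress in-the-money paths only
        if itm.sum() > 3:
            A = np.column_stack([np.ones(itm.sum()), S[itm, t], S[itm, t] ** 2])
            beta, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            continuation = A @ beta
            exercise_now = (K - S[itm, t]) > continuation
            idx = np.flatnonzero(itm)[exercise_now]
            cash[idx] = K - S[idx, t]
    return float(np.exp(-r * dt) * cash.mean())

price = lsm_american_put(36.0, 40.0, 0.06, 0.2, 1.0)
```

For these classic parameters the American put is worth noticeably more than its European counterpart, reflecting the early-exercise premium.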
20

Tarullo, Viviana. "Artificial Neural Networks for classification of EMG data in hand myoelectric control." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19195/.

Full text
Abstract:
This thesis studies the state of the art in myoelectric control of active hand prostheses for people with trans-radial amputation, using pattern recognition and machine learning techniques. Our work is supported by Centro Protesi INAIL in Vigorso di Budrio (BO). We studied the control system developed by INAIL, which consists in acquiring EMG signals from amputee subjects and using pattern recognition methods to classify the acquired signals, associating them with specific gestures and consequently commanding the prosthesis. Our work consisted of improving the classification methods used in the learning phase. In particular, we proposed a classifier based on a neural network as a valid alternative to the INAIL one-versus-all approach to multiclass classification.
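The one-versus-all baseline mentioned above trains one binary classifier per gesture and picks the class with the highest score. A minimal sketch with logistic units on synthetic 2-D "EMG feature" clusters (the data, feature dimension and learning rate are assumptions for illustration, not INAIL's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)
# Three synthetic gesture classes as Gaussian clusters in feature space
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([c + rng.normal(size=(60, 2)) for c in centers])
y = np.repeat([0, 1, 2], 60)

def train_ova(X, y, n_classes, lr=0.1, epochs=300):
    """One binary logistic unit per class; highest score wins at prediction."""
    Xb = np.column_stack([np.ones(len(X)), X])
    W = np.zeros((n_classes, Xb.shape[1]))
    for c in range(n_classes):
        target = (y == c).astype(float)          # this class vs. all others
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-Xb @ W[c]))
            W[c] += lr * Xb.T @ (target - p) / len(X)
    return W

W = train_ova(X, y, 3)
scores = np.column_stack([np.ones(len(X)), X]) @ W.T
acc = float((np.argmax(scores, axis=1) == y).mean())
```

A single multiclass network, as proposed in the thesis, would instead share one model across all gestures rather than training the classes independently.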
21

MIGDADY, HAZEM MOH'D. "A FEATURES EXTRACTION WRAPPER METHOD FOR NEURAL NETWORKS WITH APPLICATION TO DATA MINING AND MACHINE LEARNING." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/dissertations/691.

Full text
Abstract:
This dissertation presents a novel feature selection wrapper method based on neural networks, named the Binary Wrapper for Features Selection Technique. The major aim of this method is to reduce the computation time consumed by feature selection and classifier optimization in the Heuristic for Variable Selection (HVS) method. The HVS technique is a neural network based feature selection technique that uses the weights of a well-trained neural network as a relevance index for each input feature with respect to the target. The HVS technique consumes a long computation time because it follows a sequential approach to discarding irrelevant, low-relevance, and redundant features: it discards only a single feature at each training session of the classifier. In order to reduce the computation time of the HVS technique, a threshold was produced and used to implement the feature selection process. In this dissertation, a new technique, named the replacement technique, was designed and implemented to produce an appropriate threshold that can be used to discard a group of features at once, instead of a single feature as is currently the case with the HVS technique. Since the distribution of the candidate features (i.e. relevant, low-relevance, redundant and irrelevant features) with respect to the target in a dataset is unknown, the replacement technique produces low-relevance features (i.e. probes) to generate a low-relevance threshold that is compared against the candidate features and used to detect low-relevance, irrelevant and redundant features. Moreover, the replacement technique overcomes a limitation of a similar technique known as the random shuffling technique, whose goal is likewise to produce features of low relevance (i.e. probes) in comparison with the relevance of the candidate features with respect to the target.
However, the random shuffling technique is not guaranteed to produce such features, whereas the replacement technique is. The binary wrapper for features selection technique was evaluated in a number of experiments using three datasets: Congressional Voting Records, Wave Forms, and Multiple Features, with 16, 40, and 649 features, respectively. The results of those experiments were compared to the results of the HVS method and other similar methods. The technique showed a substantial improvement in the time consumed for feature selection and classifier optimization, since its computation time was far less than that of the HVS method and other methods. The binary wrapper technique was able to save 0.889, 0.931, and 0.993 of the time consumed by the HVS method while producing identical results over the three datasets. This implies that the computation time saved by the binary wrapper technique, in comparison with the HVS method, increases with the number of features in a dataset. Regarding classification accuracy, the results showed that the binary wrapper technique was able to enhance the classification accuracy after discarding features, an advantage over the HVS method, which did not enhance classification accuracy after discarding features.
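The probe-threshold idea described above can be sketched in a few lines: inject features known to be irrelevant, score everything with the same relevance measure, and discard in one pass every candidate that does not beat the best probe. This is a simplified illustration (absolute correlation as the relevance index and Gaussian probes are assumptions; the dissertation's relevance index comes from trained network weights):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 6))
# Only features 0, 2 and 4 actually drive the target
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * X[:, 4] + 0.3 * rng.normal(size=n)

# Inject known-irrelevant probe features; their relevance scores give a
# data-driven threshold, so a whole group of features can be discarded at once
probes = rng.normal(size=(n, 3))
Xp = np.column_stack([X, probes])
relevance = np.array([abs(np.corrcoef(Xp[:, j], y)[0, 1])
                      for j in range(Xp.shape[1])])
threshold = relevance[6:].max()                  # best score among the probes
kept = [j for j in range(6) if relevance[j] > threshold]
```

Everything at or below the probe level is dropped in a single step, which is the source of the time savings over discarding one feature per training session.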
22

Khoshnoud, Farbod. "A novel modal analysis method based on fuzzy sets." Thesis, Brunel University, 2005. http://bura.brunel.ac.uk/handle/2438/380.

Full text
Abstract:
A novel method of vibration modelling is proposed in this thesis. The method involves estimating the mode shapes of a general structure and describing these shapes in terms of fuzzy membership functions. These estimations, or initial guesses, are based on the engineer's experience or physical insight into natural mode shapes, assisted by end and boundary conditions and some rules. The guessed mode shapes are referred to as Mode Shape Forms (MSFs). MSFs are approximate mode shapes, so there is uncertainty involved in their values, and this uncertainty is expressed by fuzzy sets. The deflection magnitudes of the mode shape forms are described with the Zero, Medium, and Large fuzzy linguistic terms and constructed using fuzzy membership functions and rules. Fuzzy rules are introduced for each MSF. In that respect, fuzzy membership functions provide a means of dealing with uncertainty in measured data and give access to the large repertoire of tools available in the fuzzy reasoning field. The second stage of the process addresses the issue of updating these curves with experimental data. This involves performing experimental modal analysis. The mode shapes derived from experimental FRFs collect a limited number of sampling points. When the fuzzy data is updated by experimental data, the method proposes that the points of the fuzzy data corresponding to the sampling points of the FRF be replaced by the experimental data. Doing this creates a new fuzzy curve that is the same as the previous one except at those points; in other words, a 'spiked' version of the original fuzzy curve is obtained. In the last stage of the process, a neural network is used to 'learn' the spiked curve. By controlling the learning process (preventing it from overtraining), an updated fuzzy curve is generated that is the final version of the mode shape.
Examples are presented to demonstrate the application of the proposed method to the modelling of beams, a plate and a structure (a three-beam frame). The method is extended to evaluate the error when a wrong MSF is assumed for the mode shape; in this case the method finds the correct MSF among the available guessed MSFs. A further extension is proposed for cases where no guess is available for the mode shape. In this situation the 'closest' MSF is selected among the available MSFs, and this MSF is modified by correcting the fuzzy rules used in constructing the fuzzy MSF. Using engineering experience, heuristic knowledge and the developed MSF rules gives this method capabilities that cannot be provided by a purely artificial intelligent system, an additional advantage relative to the vibration modelling approaches developed until now. The method therefore includes all aspects of an effective analysis: mixed artificial intelligence and experimental validation, plus human interface/intelligence. Another advantage is that the MSF rules provide a novel approach to vibration modelling that enables the method to start and operate with unknown input parameters, such as unknown material properties and imprecise structure dimensions; hence the classical computational procedures for obtaining the vibration behaviour of the system from these inputs are not used in this approach. As a result, this method avoids the time-consuming computations found in existing vibration modelling methods. The validation procedure, using experimental tests (modal testing), is the same accepted procedure used in other available methods, which demonstrates the accuracy of the method.
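The Zero/Medium/Large description of deflection magnitude can be sketched with triangular membership functions over a guessed MSF (a minimal illustration: the triangular shape, breakpoints and pinned-pinned sine guess below are assumptions, not the thesis's actual membership functions or rules):

```python
import numpy as np

def tri(u, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling to zero at c."""
    u = np.asarray(u, dtype=float)
    return np.clip(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0, 1.0)

x = np.linspace(0.0, 1.0, 101)        # normalized position along a beam
msf = np.sin(np.pi * x)               # guessed first-mode MSF (pinned-pinned)

# Membership of the deflection magnitude in each linguistic term
mu_zero = tri(np.abs(msf), -0.4, 0.0, 0.4)
mu_medium = tri(np.abs(msf), 0.1, 0.5, 0.9)
mu_large = tri(np.abs(msf), 0.6, 1.0, 1.4)
```

At the supports the deflection is fully "Zero", at mid-span fully "Large", with graded membership in between; experimental FRF samples would then overwrite the fuzzy curve at the measured points before the neural network smooths the result.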
23

Bazargan-Harandi, Hamid. "Neural network based simulation of sea-state sequences." Thesis, Brunel University, 2006. http://bura.brunel.ac.uk/handle/2438/379.

Full text
Abstract:
The present PhD study, in its first part, uses artificial neural networks (ANNs), an optimization technique called simulated annealing, and statistics to simulate the significant wave height (Hs) and mean zero-up-crossing period ( ) of 3-hourly sea-states of a location in the North East Pacific using a proposed distribution called hepta-parameter spline distribution for the conditional distribution of Hs or given some inputs. Two different seven- network sets of ANNs for the simulation and prediction of Hs and were trained using 20-year observed Hs’s and ’s. The preceding Hs’s and ’s were the most important inputs given to the networks, but the starting day of the simulated period was also necessary. However, the code replaced the day with the corresponding time and the season. The networks were trained by a simulated annealing algorithm and the outputs of the two sets of networks were used for calculating the parameters of the probability density function (pdf) of the proposed hepta-parameter distribution. After the calculation of the seven parameters of the pdf from the network outputs, the Hs and of the future sea-state is predicted by generating random numbers from the corresponding pdf. In another part of the thesis, vertical piles have been studied with the goal of identifying the range of sea-states suitable for the safe pile driving operation. Pile configuration including the non-linear foundation and the gap between the pile and the pile sleeve shims were modeled using the finite elements analysis facilities within ABAQUS. Dynamic analyses of the system for a sea-state characterized by Hs and and modeled as a combination of several wave components were performed. A table of safe and unsafe sea-states was generated by repeating the analysis for various sea-states. If the prediction for a particular sea-state is repeated N times of which n times prove to be safe, then it could be said that the predicted sea-state is safe with the probability of 100(n/N)%. 
The last part of the thesis deals with the Hs return values. The return value is a widely used measure of wave extremes having an important role in determining the design wave used in the design of maritime structures. In this part, Hs return value was calculated demonstrating another application of the above simulation of future 3-hourly Hs’s. The maxima method for calculating return values was applied in such a way that avoids the conventional need for unrealistic assumptions. The significant wave height return value has also been calculated using the convolution concept from a model presented by Anderson et al. (2001).
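The maxima approach to return values described above can be illustrated with a short sketch. Everything below is hypothetical: the Weibull-based `toy_simulator` merely stands in for the thesis's ANN-based sea-state simulator, and the 100-year return value is read off as a quantile of the simulated annual maxima.

```python
import numpy as np

rng = np.random.default_rng(0)

def annual_maxima_return_value(hs_simulator, n_years, return_period):
    # Collect one annual maximum per simulated year, then take the
    # quantile with non-exceedance probability 1 - 1/return_period.
    maxima = np.array([hs_simulator(year).max() for year in range(n_years)])
    return float(np.quantile(maxima, 1.0 - 1.0 / return_period))

# Stand-in simulator: 2920 three-hourly sea states per year drawn from a
# Weibull distribution (NOT the ANN-based simulator of the thesis).
def toy_simulator(year):
    return 3.0 * rng.weibull(1.5, size=2920)

rv100 = annual_maxima_return_value(toy_simulator, n_years=500, return_period=100)
```

The same quantile read-off applies unchanged if `toy_simulator` is replaced by a simulator driven by the conditional pdf of future sea-states.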
APA, Harvard, Vancouver, ISO, and other styles
24

Kim, Yong Yook. "Inverse Problems In Structural Damage Identification, Structural Optimization, And Optical Medical Imaging Using Artificial Neural Networks." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/11111.

Full text
Abstract:
The objective of this work was to employ artificial neural networks (NN) to solve inverse problems in different engineering fields, overcoming various obstacles in applying NN to different problems and benefiting from the experience of solving different types of inverse problems. The inverse problems investigated are: 1) damage detection in structures, 2) detection of an anomaly in a light-diffusive medium, such as human tissue using optical imaging, 3) structural optimization of fiber optic sensor design. All of these problems require solving highly complex inverse problems and the treatments benefit from employing neural networks which have strength in generalization, pattern recognition, and fault tolerance. Moreover, the neural networks for the three problems are similar, and a method found suitable for solving one type of problem can be applied for solving other types of problems. Solution of inverse problems using neural networks consists of two parts. The first is repeatedly solving the direct problem, obtaining the response of a system for known parameters and constructing the set of the solutions to be used as training sets for NN. The next step is training neural networks so that the trained neural networks can produce a set of parameters of interest for the response of the system. Mainly feed-forward backpropagation NN were used in this work. One of the obstacles in applying artificial neural networks is the need for solving the direct problem repeatedly and generating a large enough number of training sets. To reduce the time required in solving the direct problems of structural dynamics and photon transport in opaque tissue, the finite element method was used. To solve transient problems, which include some of the problems addressed here, and are computationally intensive, the modal superposition and the modal acceleration methods were employed. 
The need for generating a large enough number of training sets required by NN was fulfilled by automatically generating the training sets using a script program in the MATLAB environment. This program automatically generated finite element models with different parameters, and the program also included scripts that combined the whole solution processes in different engineering packages for the direct problem and the inverse problem using neural networks. Another obstacle in applying artificial neural networks in solving inverse problems is that the dimension and the size of the training sets required for the NN can be too large to use NN effectively with the available computational resources. To overcome this obstacle, Principal Component Analysis is used to reduce the dimension of the inputs for the NN without excessively impairing the integrity of the data. Orthogonal Arrays were also used to select a smaller number of training sets that can efficiently represent the given system.
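The dimension-reduction step mentioned above (Principal Component Analysis applied to NN training inputs) can be sketched as follows; the synthetic 50-feature data set is an assumption used purely for illustration.

```python
import numpy as np

def pca_reduce(X, k):
    # Center the data, run SVD, and keep the top-k right singular
    # vectors as loadings; new samples are projected the same way
    # before being fed to the network.
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:k]
    return (X - mean) @ components.T, components, mean

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 3))            # 3 true underlying factors
X = latent @ rng.normal(size=(3, 50))         # 50 correlated input features
X += 0.01 * rng.normal(size=X.shape)          # small measurement noise
Z, comps, mu = pca_reduce(X, k=3)

# Relative reconstruction error from keeping only 3 of 50 dimensions.
rel_err = np.linalg.norm((X - mu) - Z @ comps) / np.linalg.norm(X - mu)
```

Because the synthetic features are driven by three latent factors, three components reconstruct the data almost perfectly, which is the situation PCA exploits when shrinking NN input dimension.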
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Man, Hou Michael Mechanical &amp Manufacturing Engineering Faculty of Engineering UNSW. "Implicit coupled constitutive relations and an energy-based method for material modelling." Publisher:University of New South Wales. Mechanical & Manufacturing Engineering, 2009. http://handle.unsw.edu.au/1959.4/43652.

Full text
Abstract:
The contributions of this thesis are an implicit modelling method for the coupled constitutive relations and an energy-based method for material modelling. The two developed methods utilise implicit models to represent material constitutive relations without the requirement of physical parameters. The first method is developed to model coupled constitutive relations using state-space representation with neural networks. State-space representation is employed to express the desired relations in a compact fashion while simultaneously providing the capability of modelling rate- and/or path-dependent behaviour. The employment of neural networks with the generalised state-space representation results in a single implicit model that can be adapted for a broad range of constitutive behaviours. The performance and applicability of the method are highlighted through the applications for various constitutive behaviour of piezoelectric materials, including the effects of hysteresis and cyclic degradation. An energy-based method is subsequently developed for implicit constitutive modelling by utilising the energy principle on a deformed continuum. Two formulations of the proposed method are developed for the modelling of materials with varying nature in directional properties. The first formulation is based on an implicit strain energy density function, represented by a neural network with strain invariants as input, to derive the desired stress-strain relations. The second formulation consists of the derivation of an energy-based performance function for training a neural network that represents the stress-strain relations. The requirement of deriving stress is eliminated in both formulations and this facilitates the use of advanced experimental setup, such as multi-axial load tests or non-standard specimens, to produce the most information for constitutive modelling from a single experiment. 
A series of numerical studies -- including validation problems and practical cases with actual experimental setup -- have been conducted, the results of which demonstrate the applicability and effectiveness of the proposed method for constitutive modelling on a continuum basis.
APA, Harvard, Vancouver, ISO, and other styles
26

Sarlak, Nermin. "Evaluation And Modeling Of Streamflow Data: Entropy Method, Autoregressive Models With Asymmetric Innovations And Artificial Neural Networks." Phd thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12606135/index.pdf.

Full text
Abstract:
In the first part of this study, two entropy methods under different distribution assumptions are examined on a network of stream gauging stations located in Kizilirmak Basin to rank the stations according to their level of importance. The stations are ranked by using two different entropy methods under different distributions, the aim being to show the effect of the distribution type on both entropy methods. In the second part of this study, autoregressive models with asymmetric innovations and an artificial neural network model are introduced. Autoregressive (AR) models, as developed in hydrology, rest on several assumptions; the normality assumption for the innovations of AR models is investigated in this study. The main reason for making this assumption is the difficulty of finding the model parameters under distributions other than the normal. From this point of view, the study aims to introduce into hydrology the modified maximum likelihood procedure developed by Tiku et al. (1996) for estimating autoregressive model parameters when the residual series is non-normally distributed. It is also important to consider how autoregressive model parameters could be estimated under skewed distributions. Besides these autoregressive models, an artificial neural network (ANN) model was also constructed for annual and monthly hydrologic time series, owing to its advantages of requiring no statistical distribution or linearity assumptions. The models considered are applied to annual and monthly streamflow data obtained from five streamflow gauging stations in Kizilirmak Basin. It is shown that the AR(1) model with Weibull innovations provides the best solutions for annual series, and the AR(1) model with generalized logistic innovations provides the best solution for monthly series, as compared with the results of the artificial neural network models.
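A minimal sketch of the kind of model discussed above: an AR(1) process driven by (asymmetric) Weibull innovations, together with the lag-1 moment (Yule-Walker) estimate of the coefficient. The parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

def simulate_ar1(phi, n, innov, burn=200):
    # x_t = phi * x_{t-1} + e_t; the innovations are centred so the
    # series is stationary around zero, and `burn` discards the
    # start-up transient.
    e = innov(n + burn)
    e = e - e.mean()
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + e[t]
    return x[burn:]

rng = np.random.default_rng(2)
series = simulate_ar1(0.6, 5000, lambda m: rng.weibull(2.0, size=m))

# Lag-1 moment estimate of the AR coefficient.
x = series - series.mean()
phi_hat = float((x[1:] @ x[:-1]) / (x @ x))
```

Even with skewed (Weibull) innovations, the lag-1 autocovariance ratio recovers the coefficient; what changes under non-normality is the efficiency of estimation, which is the motivation for the modified maximum likelihood procedure.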
APA, Harvard, Vancouver, ISO, and other styles
27

Akkala, Arjun. "Development of Artificial Neural Networks Based Interpolation Techniques for the Modeling and Estimation of Radon Concentrations in Ohio." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1279315482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hluška, Milan. "Využití rychlého algoritmu kosoúhlého rovnání k optimalizaci procesu rovnání pomocí neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-418190.

Full text
Abstract:
This Master's thesis deals with a fast cross-roll bar straightening algorithm and its modifications, which allow for the automatic calculation of a large number of simulations and an arbitrary straightening machine configuration. The modified program is then verified using the original algorithm. The thesis also deals with the algorithm's application to straightening process optimization using neural networks.
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Feiyang. "Implementation and verification of the Information Bottleneck interpretation of deep neural networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235744.

Full text
Abstract:
Although deep neural networks (DNNs) have made remarkable achievements in various fields, there is still no matching practical theory able to explain DNNs' performance. Tishby (2015) proposed a new insight for analyzing DNNs via the Information Bottleneck (IB) method. By visualizing how much relevant information each layer contains about the input and output, he claimed that DNN training is composed of a fitting phase and a compression phase. The fitting phase is when DNNs learn information about both input and output, and the prediction accuracy rises during this process. Afterwards comes the compression phase, when information about the output is preserved while unrelated information about the input is thrown away in hidden layers. This is a tradeoff between network complexity (complicated DNNs lose less information about the input) and prediction accuracy, which is the same goal as the IB method. In this thesis, we verify this IB interpretation first by reimplementing Tishby's work, where the hidden-layer distribution is approximated by a histogram (binning). Additionally, we introduce various mutual information estimation methods, such as kernel density estimators. Based upon simulation results, we conclude that there exists an optimal bound on the mutual information between the hidden layers and the input and output. But the compression mainly occurs when the activation function is "double saturated", like the hyperbolic tangent function. Furthermore, we extend the work to a simulated wireless model where the data set is generated by a wireless system simulator. The results reveal that the IB interpretation is true, but that binning is not a correct tool for approximating hidden-layer distributions. The findings of this thesis reflect the information variations in each layer during training, which might contribute to selecting transmission parameter configurations in each frame in wireless communication systems.
Even though deep neural networks (DNNs) have made remarkable progress in various areas, there is still no matching practical theory that can explain DNNs' performance. Tishby (2015) proposed a new insight for analyzing DNNs via the Information Bottleneck (IB) method. By visualizing how much relevant information each layer contains about the input and output, he claimed that DNN training consists of a fitting phase and a compression phase. The fitting phase is when the DNN learns information about both input and output, and prediction accuracy increases during this process. Afterwards comes the compression phase, when information about the output is preserved while unrelated information about the input is discarded. This is a trade-off between network complexity (complicated DNNs lose less input information) and prediction accuracy, which is exactly the goal of the IB method. In this thesis we first verify this IB interpretation by reimplementing Tishby's work, where the hidden-layer distribution is approximated by a histogram (binning). In addition, we introduce various mutual information estimation methods, such as kernel density estimators. Based on the simulation results, we conclude that there is an optimal bound on the mutual information between the hidden layers and the input and output. However, the compression occurs mainly when the activation function is "double saturated", such as the hyperbolic tangent function. Furthermore, we extend the work to a simulated wireless model where the data set is generated by a wireless system simulator. The results show that the IB interpretation holds, but that binning is not a correct tool for approximating hidden-layer distributions. The findings of this thesis reflect the information variations in each layer, which may help in selecting transmission parameter configurations in each frame in wireless communication systems.
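The histogram (binning) estimator of mutual information used for the information-plane plots can be sketched as follows; the synthetic data and bin count are assumptions for illustration only.

```python
import numpy as np

def binned_mi(x, y, bins=30):
    # Histogram estimate of I(X;Y) in bits: discretise the joint
    # distribution, then sum p(x,y) * log2( p(x,y) / (p(x)p(y)) ).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
x = rng.normal(size=20000)
# A tanh "layer output" of x (strongly dependent) vs. an unrelated signal.
mi_dep = binned_mi(x, np.tanh(x) + 0.1 * rng.normal(size=x.size))
mi_indep = binned_mi(x, rng.normal(size=x.size))
```

Note the estimator's known weakness, which the thesis points out: independent variables still get a small positive value (binning bias), and the result depends on the bin count.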
APA, Harvard, Vancouver, ISO, and other styles
30

Flynn, Myles M. 1966. "A method of assessing near-view scenic beauty models: A comparison of neural networks and multiple linear regression." Thesis, The University of Arizona, 1997. http://hdl.handle.net/10150/292054.

Full text
Abstract:
With recent advances in artificial intelligence, new methods are being developed that provide faster and more consistent predictions for data in complex environments. In the field of landscape assessment, where an array of physical variables affects environmental perception, natural resource managers need tools to assist them in isolating the significant predictors critical for the protection and management of these resources. Recent studies that have utilized neural networks to develop predictive models of scenic beauty, a task that has typically relied on linear regression techniques, have found limited success. The goal of this research is to compare NNs with linear regression models to determine their efficiency and predictive capability for assessing near-view scenic beauty in the Cedar City District of the Dixie National Forest (DNF). Results of this study strongly indicate that neural networks are consistently better predictors of near-view scenic beauty in spruce/fir-dominated forests than hierarchical linear regression models.
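The comparison described above can be illustrated on synthetic data: a one-hidden-layer network trained by plain gradient descent versus ordinary least squares on a nonlinear target. This is a toy stand-in, not the study's scenic-beauty data or models.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=400)   # nonlinear target

# Linear regression baseline (ordinary least squares with intercept).
A = np.hstack([X, np.ones((400, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
lin_mse = float(np.mean((A @ w - y) ** 2))

# One-hidden-layer tanh network trained by full-batch gradient descent.
H = 16
W1 = rng.normal(size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H); b2 = 0.0
lr = 0.05
for _ in range(8000):
    Z = np.tanh(X @ W1 + b1)            # hidden activations (400, H)
    err = Z @ W2 + b2 - y               # prediction residual (400,)
    dZ = np.outer(err, W2) * (1 - Z ** 2)
    W2 -= lr * (Z.T @ err) / 400; b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dZ) / 400;  b1 -= lr * dZ.mean(axis=0)
nn_mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

On a target the linear model cannot represent, the network's fit error drops well below the regression baseline, which mirrors the pattern the study reports for near-view scenic beauty.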
APA, Harvard, Vancouver, ISO, and other styles
31

Barakat, Mustapha. "Fault diagnostic to adaptive classification schemes based on signal processing and using neural networks." Le Havre, 2011. http://www.theses.fr/2011LEHA0023.

Full text
Abstract:
Industrial fault detection and isolation (FDI) has become increasingly important with the growth of industrial automation. The significant increase in the complexity of industrial systems in recent years has made FDI a major step in all industrial processes. In this thesis, adaptive, intelligent techniques based on artificial neural networks combined with advanced signal processing tools are developed and applied for the systematic detection and diagnosis of faults in industrial systems. The proposed on-line classification techniques consist of three stages: (1) signal modelling and feature extraction, (2) feature classification and (3) output decision. In the first stage, our approach is based on the assumption that faults are reflected in the extracted features. For the feature classification algorithm, several techniques based on neural networks were used. A binary decision tree based on Support Vector Machine (SVM) classification was also applied; this technique selects the appropriate dynamic feature at each level (branch) and classifies that feature with a binary classifier. Another advanced classification technique is proposed, based on a mapping network algorithm that can extract features from historical data and requires prior knowledge about the process. The significance of this network lies in its ability to retain old data with equitable probabilities during the mapping process. A third contribution concerns the construction of a network whose nodes can activate in specific subspaces of the different classes.
The idea of this last method is to divide the fault space hierarchically into a number of smaller subspaces according to the activation zones of the grouped parameters. For each type of fault, a dedicated diagnosis agent is trained in a particular subspace. An advanced parameter selection is embedded in this algorithm to improve classification confidence. All contributions are applied to the detection and diagnosis of various industrial systems in the fields of mechanical and chemical engineering. The performance of our approaches is studied and compared with several existing neural network methods, and the accuracy of all methodologies is carefully examined and evaluated.
Industrial Fault Detection and Isolation (FDI) has become more essential in light of increased automation in industry. The significant increase in the complexity of systems and plants during the last decades has made FDI tasks major steps in all industrial processes. In this thesis, adaptive intelligent techniques based on artificial neural networks combined with advanced signal processing methods for systematic detection and diagnosis of faults in industrial systems are developed and put forward. The proposed on-line classification techniques consist of three main stages: (1) signal modeling and feature extraction, (2) feature classification and (3) output decision. In the first stage, our approach relies on the assumption that faults are reflected in the extracted features. For the feature classification algorithm, several techniques based on neural networks are proposed. A binary decision tree relying on a multiclass Support Vector Machine (SVM) algorithm is put forward. The technique selects the appropriate dynamic feature at each level (branch) and classifies it with a binary classifier. Another advanced classification technique is proposed, based on a mapping algorithm network that can extract features from historical data and requires prior knowledge about the process. The significance of this network lies in its ability to preserve old data with equitable probabilities during the mapping process. Each class of faults or disturbances is represented by a particular surface at the end of the mapping process. The third contribution focuses on building a network with nodes that can activate in specific subspaces of different classes. The concept behind this method is to divide the pattern space of faults hierarchically into smaller subspaces; in each particular sub-space, a special diagnosis agent is trained. An advanced parameter selection is embedded in this algorithm to improve the confidence of the classification diagnosis.
All contributions are applied to the fault detection and diagnosis of various industrial systems in the domains of mechanical or chemical engineering. The performances of our approaches are studied and compared with several existing neural network methods, and the accuracy of all methodologies is considered carefully and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
32

Jayasundara, Walpola Kankanamalage Nirmani. "Damage detection of arch bridges using vibration characteristics and artificial neural network." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/135524/1/Walpola%20Kankanamalage%20Nirmani_Jayasundara_Thesis.pdf.

Full text
Abstract:
This project developed a method to detect, locate and quantify damage in arch bridges using variations in their vibration characteristics and an artificial neural network. The method was successfully tested on several real-life arch bridges. The outcomes of this project will contribute towards the safety of our bridges.
APA, Harvard, Vancouver, ISO, and other styles
33

Tornstad, Magnus. "Evaluating the Practicality of Using a Kronecker-Factored Approximate Curvature Matrix in Newton's Method for Optimization in Neural Networks." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275741.

Full text
Abstract:
For a long time, second-order optimization methods have been regarded as computationally inefficient and intractable for solving the optimization problem associated with deep learning. However, proposed in recent research is an adaptation of Newton's method for optimization in which the Hessian is approximated by a Kronecker-factored approximate curvature matrix, known as KFAC. This work aims to assess its practicality for use in deep learning. Benchmarks were performed using abstract binary classification problems, as well as the real-world Boston Housing regression problem, and both deep and shallow network architectures were employed. KFAC was found to offer great savings in computational complexity compared to a naive approximate second-order implementation using the Gauss-Newton matrix. Comparing performance in deep and shallow networks, the loss convergence of both stochastic gradient descent (SGD) and KFAC showed a dependency upon network architecture, where KFAC tended to converge more quickly in deep networks, and SGD tended to converge more quickly in shallow networks. The study concludes that KFAC can perform well in deep learning, showing competitive loss minimization versus basic SGD, but that it can be sensitive to initial weights. This sensitivity could be remedied by allowing the first steps to be taken by SGD, in order to set KFAC on a favorable trajectory.
Second-order optimization methods have long been considered computationally inefficient for solving the optimization problem in deep machine learning. An alternative optimization strategy that uses a Kronecker-factored approximate Hessian (KFAC) in Newton's method for optimization has been proposed in earlier studies. This work aims to evaluate whether the method is practical to use in deep machine learning. Tests are run on abstract, binary classification problems, as well as a real regression problem: the Boston Housing data. The study found that KFAC offers large savings in time complexity compared with a more naive implementation using the Gauss-Newton matrix. Furthermore, the loss convergence of both stochastic gradient descent (SGD) and KFAC proved to depend on network architecture: KFAC tended to converge faster in deep networks, while SGD tended to converge faster in shallow networks. The study concludes that KFAC can perform well for deep machine learning compared with a basic variant of SGD. KFAC did, however, prove to be very sensitive to initial weights. This problem could be solved by letting the first steps be taken by SGD so that KFAC ended up on a favourable trajectory.
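The Kronecker-factored approximation at the heart of KFAC can be sketched directly: if per-sample layer gradients factor as outer products of activations and back-propagated errors, their second-moment matrix is approximated by a Kronecker product of two small factor matrices, which is far cheaper to invert. The dimensions and Gaussian data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 2000, 8, 6        # samples, layer input size, layer output size

# Per-sample weight gradients factor as outer products g_i = a_i d_i^T
# (activations times back-propagated errors); KFAC approximates the
# second moment of vec(g) by E[a a^T] (Kronecker product) E[d d^T],
# assuming a and d are independent.
A = rng.normal(size=(n, p))
D = rng.normal(size=(n, q))
G = np.einsum('ni,nj->nij', A, D).reshape(n, p * q)

full = G.T @ G / n                        # exact (pq x pq) second moment
Fa, Fd = A.T @ A / n, D.T @ D / n         # small (p x p) and (q x q) factors
kfac = np.kron(Fa, Fd)

rel_err = np.linalg.norm(full - kfac) / np.linalg.norm(full)
```

The computational saving comes from the identity kron(Fa, Fd)^-1 = kron(Fa^-1, Fd^-1): only the two small factors ever need to be inverted, never the pq-by-pq matrix.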
APA, Harvard, Vancouver, ISO, and other styles
34

Bhalala, Smita Ashesh 1966. "Modified Newton's method for supervised training of dynamical neural networks for applications in associative memory and nonlinear identification problems." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/277969.

Full text
Abstract:
There have been several innovative approaches towards realizing an intelligent architecture that utilizes artificial neural networks for applications in information processing. The development of supervised training rules for updating the adjustable parameters of neural networks has received extensive attention in the recent past. In this study, specific learning algorithms utilizing modified Newton's method for the optimization of the adjustable parameters of a dynamical neural network are developed. Computer simulation results show that the convergence performance of the proposed learning schemes match very closely that of the LMS learning algorithm for applications in the design of associative memories and nonlinear mapping problems. However, the implementation of the modified Newton's method is complex due to the computation of the slope of the nonlinear sigmoidal function, whereas, the LMS algorithm approximates the slope to be zero.
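Newton's method for a single sigmoid unit can be sketched as follows. Unlike an LMS-style rule that approximates the sigmoid slope as constant, the Hessian here carries the slope term p(1-p) for each sample. The logistic-regression setting and synthetic data are illustrative assumptions, not the thesis's dynamical networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_train(X, y, iters=10, ridge=1e-6):
    # Newton step w <- w - H^-1 g for a single sigmoid unit with
    # cross-entropy loss. The Hessian weights each sample by the
    # sigmoid slope p*(1-p), the term an LMS-style rule drops.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        g = X.T @ (p - y)
        H = (X * (p * (1 - p))[:, None]).T @ X + ridge * np.eye(X.shape[1])
        w -= np.linalg.solve(H, g)
    return w

rng = np.random.default_rng(6)
X = np.hstack([rng.normal(size=(300, 2)), np.ones((300, 1))])
true_w = np.array([1.5, -2.0, 0.5])
y = (sigmoid(X @ true_w) > rng.uniform(size=300)).astype(float)

w_hat = newton_train(X, y)
accuracy = float(np.mean((sigmoid(X @ w_hat) > 0.5) == (y > 0.5)))
```

A handful of Newton steps suffices here, illustrating the fast convergence that motivates second-order training rules, at the cost of computing the slope-dependent Hessian.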
APA, Harvard, Vancouver, ISO, and other styles
35

BUENO, ELAINE I. "Group Method of Data Handling (GMDH) e redes neurais na monitoração e detecção de falhas em sensores de centrais nucleares." reponame:Repositório Institucional do IPEN, 2011. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9982.

Full text
Abstract:
Made available in DSpace on 2014-10-09T12:33:30Z (GMT). No. of bitstreams: 0
Thesis (Doctorate)
IPEN/T
Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
36

Shedd, Stephen F. "Semantic and syntactic object correlation in the object-oriented method for interoperability." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FShedd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Cronley, Thomas J. "The use of neural networks as a method of correlating thermal fluid data to provide useful information on thermal systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA380226.

Full text
Abstract:
Thesis (M.S. in Mechanical Engineering)--Naval Postgraduate School, June 2000.
Thesis advisor(s): Kelleher, Matthew D. "June 2000." Includes bibliographical references (p. 43). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
38

Burger, Christiaan. "A novel method of improving EEG signals for BCI classification." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95984.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Muscular dystrophy, spinal cord injury, or amyotrophic lateral sclerosis (ALS) are injuries and disorders that disrupts the neuromuscular channels of the human body thus prohibiting the brain from controlling the body. Brain computer interface (BCI) allows individuals to bypass the neuromuscular channels and interact with the environment using the brain. The system relies on the user manipulating his neural activity in order to control an external device. Electroencephalography (EEG) is a cheap, non-invasive, real time acquisition device used in BCI applications to record neural activity. However, noise, known as artifacts, can contaminate the recording, thus distorting the true neural activity. Eye blinks are a common source of artifacts present in EEG recordings. Due to its large amplitude it greatly distorts the EEG data making it difficult to interpret data for BCI applications. This study proposes a new combination of techniques to detect and correct eye blink artifacts to improve the quality of EEG for BCI applications. Independent component analysis (ICA) is used to separate the EEG signals into independent source components. The source component containing eye blink artifacts are corrected by detecting each eye blink within the source component and using a trained wavelet neural network (WNN) to correct only a segment of the source component containing the eye blink artifact. Afterwards, the EEG is reconstructed without distorting or removing the source component. The results show a 91.1% detection rate and a 97.9% correction rate for all detected eye blinks. Furthermore for channels located over the frontal lobe, eye blink artifacts are corrected preserving the neural activity. The novel combination overall reduces EEG information lost, when compared to existing literature, and is a step towards improving EEG pre-processing in order to provide cleaner EEG data for BCI applications.
AFRIKAANSE OPSOMMING: Muscular dystrophy, a spinal cord injury, or amyotrophic lateral sclerosis (ALS) are injuries and disorders that disrupt the neuromuscular channels of the human body and thus prevent the brain from controlling the body. A brain-computer interface allows the neuromuscular channels to be bypassed so that the brain can act on the environment. The BCI system relies on the user manipulating his own neural activity in order to control an external device. Electroencephalography (EEG) is a cheap, non-invasive, real-time data acquisition device used in BCI applications. Not only neural activity but also noise, known as artifacts, is recorded, which distorts the true neural activity. Eye-blink artifacts are among the most common artifacts present in EEG recordings. The large amplitude of these artifacts distorts the EEG data, making it difficult to analyse for BCI applications. The study proposes a new combination of techniques that detects and corrects eye-blink artifacts in order to improve the quality of EEG for BCI applications. Independent component analysis (ICA) is used to separate the EEG signals into independent source components. The source component containing eye-blink artifacts is corrected by detecting each eye blink within the component and using a trained wavelet neural network to correct only the segment of the component containing the eye-blink artifact. The EEG is then reconstructed without distorting or removing the source component. The results show a 91.1% detection rate and a 97.9% correction rate for all detected eye blinks. Eye-blink artifacts in channels over the frontal lobe are corrected while preserving the neural activity, which improves the overall EEG quality for BCI applications.
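A highly simplified stand-in for the detection step can be sketched as a robust amplitude threshold on a single component, followed by interpolation over the flagged segment (in place of the trained wavelet neural network used in the thesis); the synthetic EEG and blink shapes below are assumptions.

```python
import numpy as np

def detect_blinks(x, thresh_sd=4.0):
    # Flag samples exceeding thresh_sd robust standard deviations
    # (median absolute deviation scaled to a Gaussian sigma), so the
    # threshold is not dragged up by the blinks themselves.
    med = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > thresh_sd * sigma

rng = np.random.default_rng(7)
t = np.arange(5000)
eeg = np.sin(2 * np.pi * t / 100) + 0.3 * rng.normal(size=t.size)
blink = np.zeros(t.size)
for start in (1000, 3000):                 # two synthetic eye blinks
    blink[start:start + 80] += 8.0 * np.hanning(80)
signal = eeg + blink

mask = detect_blinks(signal)
cleaned = signal.copy()
cleaned[mask] = np.interp(t[mask], t[~mask], signal[~mask])
```

Only the flagged segments are touched, mirroring the thesis's principle of correcting the blink segment rather than discarding the whole component.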
APA, Harvard, Vancouver, ISO, and other styles
39

Upadrasta, Bharat. "Boolean factor analysis a review of a novel method of matrix decomposition and neural network Boolean factor analysis /." Diss., Online access via UMI:, 2009.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
40

Max, Lindblad. "The impact of parsing methods on recurrent neural networks applied to event-based vehicular signal data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223966.

Full text
Abstract:
This thesis examines two different approaches to parsing event-based vehicular signal data to produce input to a neural network prediction model: event parsing, where the data is kept unevenly spaced over the temporal domain, and slice parsing, where the data is made to be evenly spaced over the temporal domain instead. The dataset used as a basis for these experiments consists of a number of vehicular signal logs taken at Scania AB. Comparisons between the parsing methods have been made by first training long short-term memory (LSTM) recurrent neural networks (RNN) on each of the parsed datasets and then measuring the output error and resource costs of each such model after having validated them on a number of shared validation sets. The results from these tests clearly show that slice parsing compares favourably to event parsing.
Denna avhandling jämför två olika tillvägagångssätt vad gäller parsningen av händelsebaserad signaldata från fordon för att producera indata till en förutsägelsemodell i form av ett neuronnät, nämligen händelseparsning, där datan förblir ojämnt fördelad över tidsdomänen, och skivparsning, där datan är omgjord till att istället vara jämnt fördelad över tidsdomänen. Det dataset som används för dessa experiment är ett antal signalloggar från fordon som kommer från Scania. Jämförelser mellan parsningsmetoderna gjordes genom att först träna ett lång korttidsminne (LSTM) återkommande neuronnät (RNN) på vardera av de skapade dataseten för att sedan mäta utmatningsfelet och resurskostnader för varje modell efter att de validerats på en delad uppsättning av valideringsdata. Resultaten från dessa tester visar tydligt på att skivparsning står sig väl mot händelseparsning.
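The two parsing strategies in this thesis differ only in how the event stream is laid out over time. A minimal sketch of the slice-parsing step (numpy only; the function name and the last-observation-carried-forward rule are my illustration, not code from the thesis):

```python
import numpy as np

def slice_parse(times, values, dt):
    """Resample unevenly spaced (time, value) events onto a uniform grid.

    Each slice holds the most recent event value (last observation
    carried forward), turning event-based logs into fixed-step input
    suitable for an LSTM.
    """
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    grid = np.arange(times[0], times[-1] + dt, dt)
    # Index of the latest event at or before each grid point.
    idx = np.searchsorted(times, grid, side="right") - 1
    return grid, values[idx]

# Events at irregular times are mapped onto a 1-second grid.
t, v = slice_parse([0.0, 0.4, 2.5], [10.0, 20.0, 30.0], 1.0)
```

Event parsing, by contrast, would feed the irregular `(times, values)` pairs to the network directly, leaving the model to cope with the uneven spacing.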
APA, Harvard, Vancouver, ISO, and other styles
41

Minasny, Budiman. "Efficient Methods for Predicting Soil Hydraulic Properties." University of Sydney. Land, Water & Crop Sciences, 2000. http://hdl.handle.net/2123/853.

Full text
Abstract:
Both empirical and process-simulation models are useful for evaluating the effects of management practices on environmental quality and crop yield. The use of these models is limited, however, because they need many soil property values as input. The first step towards modelling is the collection of input data. Soil properties can be highly variable spatially and temporally, and measuring them is time-consuming and expensive. Efficient methods for estimating soil hydraulic properties, which consider the uncertainty and cost of measurements, form the main thrust of this study. Hydraulic properties are affected by other soil physical and chemical properties, therefore it is possible to develop empirical relations to predict them. This idea, when quantified, is called a pedotransfer function. Such functions may be global or restricted to a country or region. The different classification of particle-size fractions used in Australia compared with other countries presents a problem for the immediate adoption of exotic pedotransfer functions. A database of Australian soil hydraulic properties has been compiled. Pedotransfer functions for estimating water retention and saturated hydraulic conductivity from particle size and bulk density for Australian soil are presented. Different approaches for deriving hydraulic transfer functions are presented and compared. Published pedotransfer functions were also evaluated; generally they provide a satisfactory estimation of water retention and saturated hydraulic conductivity, depending on the spatial scale and accuracy of prediction. Several pedotransfer functions were developed in this study to predict water retention and hydraulic conductivity. The pedotransfer functions developed here may predict adequately over large areas, but for site-specific applications local calibration is needed. There is much uncertainty in the input data, and consequently the transfer functions can produce varied outputs. 
Uncertainty analysis is therefore needed. A general approach to quantifying uncertainty is to use Monte Carlo methods. By sampling repeatedly from the assumed probability distributions of the input variables and evaluating the response of the model the statistical distribution of the outputs can be estimated. A modified Latin hypercube method is presented for sampling joint multivariate probability distributions. This method is applied to quantify the uncertainties in pedotransfer functions of soil hydraulic properties. Hydraulic properties predicted using pedotransfer functions developed in this study are also used in a field soil-water model to analyze the uncertainties in the prediction of dynamic soil-water regimes. The use of the disc permeameter in the field conventionally requires the placement of a layer of sand in order to provide good contact between the soil surface and disc supply membrane. The effect of sand on water infiltration into the soil and on the estimate of sorptivity was investigated. A numerical study and a field experiment on heavy clay were conducted. Placement of sand significantly increased the cumulative infiltration but showed small differences in the infiltration rate. Estimation of sorptivity based on the Philip's two term algebraic model using different methods was also examined. The field experiment revealed that the error in infiltration measurement was proportional to the cumulative infiltration curve. Infiltration without placement of sand was considerably smaller because of the poor contact between the disc and soil surface. An inverse method for predicting soil hydraulic parameters from disc permeameter data has been developed. A numerical study showed that the inverse method is quite robust in identifying the hydraulic parameters. However application to field data showed that the estimated water retention curve is generally smaller than the one obtained in laboratory measurements. 
Nevertheless, the estimated near-saturated hydraulic conductivity matched the analytical solution quite well. The author believes that the inverse method can give a reasonable estimate of soil hydraulic parameters. Some experimental and theoretical problems were identified and discussed. A formal analysis was carried out to evaluate the efficiency of the different methods in predicting water retention and hydraulic conductivity. The analysis identified the contribution of each individual source of measurement error to the overall uncertainty. For single measurements, the inverse disc-permeameter analysis is economically more efficient than using pedotransfer functions or measuring hydraulic properties in the laboratory. However, given the large amount of spatial variation of soil hydraulic properties it is perhaps not surprising that lots of cheap and imprecise measurements, e.g. by hand texturing, are more efficient than a few expensive precise ones.
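The Latin hypercube approach mentioned above stratifies each input's distribution so that even a small Monte Carlo run covers every marginal evenly. A minimal sketch of plain Latin hypercube sampling on the unit cube (my illustration; the thesis's modified variant for joint multivariate distributions is not reproduced):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Stratified Latin hypercube sample on the unit cube.

    Each variable's range is split into n_samples equal strata; one
    point is drawn per stratum, and the strata are randomly paired
    across variables, so every marginal is evenly covered.
    """
    rng = np.random.default_rng(rng)
    # One uniform draw inside each stratum, per variable.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    # Shuffle the strata independently for each variable.
    for j in range(n_vars):
        rng.shuffle(u[:, j])
    return u

# 100 samples of 3 inputs; these would be mapped to the input
# distributions and pushed through a pedotransfer function to
# propagate the input uncertainty.
x = latin_hypercube(100, 3, rng=0)
```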
APA, Harvard, Vancouver, ISO, and other styles
42

Mazumdar, Joy. "System and method for determining harmonic contributions from nonlinear loads in power systems." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/23215.

Full text
Abstract:
The objective of this research is to introduce a neural network based solution for the problem of measuring the actual amount of harmonic current injected into a power network by an individual nonlinear load. Harmonic currents from nonlinear loads propagate through the system and cause harmonic pollution. As a result, voltage at the point of common coupling (PCC) is rarely sinusoidal. The IEEE 519 harmonic standard provides customer and utility harmonic limits and many utilities are now requiring their customers to comply with IEEE 519. Measurements of the customer’s current at the PCC are expected to determine the customer’s compliance with IEEE 519. However, results in this research show that the current measurements at the PCC are not always reliable in that determination. In such a case, it may be necessary to determine what the customer’s true current harmonic distortions would be if the PCC voltage could be a pure sinusoidal voltage. However, establishing a pure sinusoidal voltage at the PCC may not be feasible since that would mean performing utility switching to reduce the system impedance. An alternative approach is to use a neural network that is able to learn the customer’s load admittance. Then, it is possible to predict the customer’s true current harmonic distortions based on mathematically applying a pure sinusoidal voltage to the learned load admittance. The proposed method is called load modeling. Load modeling predicts the true harmonic current that can be attributed to a customer regardless of whether a resonant condition exists on the utility power system. If a corrective action is taken by the customer, another important parameter of interest is the change in the voltage distortion level at the PCC due to the corrective action of the customer. This issue is also addressed by using the dual of the load modeling method. 
Topologies of the neural networks used in this research include multilayer perceptron neural networks and recurrent neural networks. The theory and implementation of a new neural network topology known as an Echo State Networks is also introduced. The proposed methods are verified on a number of different power electronic test circuits as well as field data. The main advantages of the proposed methods are that only waveforms of voltages and currents are required for their operation and they are applicable to both single and three phase systems. The proposed methods can be integrated into any existing power quality instrument or can be fabricated into a commercial standalone instrument that could be installed in substations of large customer loads, or used as a hand-held clip on instrument.
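The load-modelling idea described above (learn the customer's admittance from measured waveforms, then mathematically apply a pure sinusoidal voltage to the learned model) can be sketched with a polynomial stand-in for the neural network (my simplification; the thesis uses multilayer perceptron, recurrent and echo state networks, and the synthetic load below is arbitrary):

```python
import numpy as np

def fit_load_model(v, i, degree=5):
    """Least-squares polynomial stand-in for the neural load model:
    learns a memoryless nonlinear admittance i = f(v) from waveforms
    measured at the PCC."""
    return np.polynomial.polynomial.polyfit(v, i, degree)

def true_harmonics(coeffs, amplitude=1.0, n=1024):
    """Drive the learned model with a pure sinusoid and return the
    magnitude spectrum of the predicted current."""
    t = np.arange(n)
    v_pure = amplitude * np.sin(2 * np.pi * 8 * t / n)  # 8 cycles in the window
    i_pred = np.polynomial.polynomial.polyval(v_pure, coeffs)
    return np.abs(np.fft.rfft(i_pred)) / n

# Synthetic nonlinear load measured under a distorted supply voltage.
t = np.arange(1024)
v_meas = np.sin(2 * np.pi * 8 * t / 1024) + 0.1 * np.sin(2 * np.pi * 24 * t / 1024)
i_meas = v_meas + 0.3 * v_meas ** 3                     # cubic nonlinearity
spec = true_harmonics(fit_load_model(v_meas, i_meas))
```

Even with a pure sinusoidal voltage applied to the learned model, a third-harmonic current remains in `spec`: that residue is the distortion attributable to the load itself rather than to the polluted PCC voltage.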
APA, Harvard, Vancouver, ISO, and other styles
43

Cloyd, James Dale. "Data mining with Newton's method." [Johnson City, Tenn. : East Tennessee State University], 2002. http://etd-submit.etsu.edu/etd/theses/available/etd-1101102-081311/unrestricted/CloydJ111302a.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

POTIENS, JUNIOR ADEMAR J. "Aplicação de redes neurais artificiais na caracterização isotópica de tambores de rejeito radioativo." reponame:Repositório Institucional do IPEN, 2005. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11354.

Full text
Abstract:
Thesis (Doctorate)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
APA, Harvard, Vancouver, ISO, and other styles
45

Bueno, Elaine Inacio. "Group Method of Data Handling (GMDH) e redes neurais na monitoração e detecção de falhas em sensores de centrais nucleares." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/85/85133/tde-02092011-140535/.

Full text
Abstract:
A demanda crescente na complexidade, eficiência e confiabilidade nos sistemas industriais modernos têm estimulado os estudos da teoria de controle aplicada no desenvolvimento de sistemas de Monitoração e Detecção de Falhas. Neste trabalho foi desenvolvida uma metodologia inédita de Monitoração e Detecção de Falhas através do algoritmo GMDH e Redes Neurais Artificiais (RNA) que foi aplicada ao reator de pesquisas do IPEN, IEA-R1. O desenvolvimento deste trabalho foi dividido em duas etapas: sendo a primeira etapa dedicada ao pré-processamento das informações, realizada através do algoritmo GMDH; e a segunda o processamento das informações através de RNA. O algoritmo GMDH foi utilizado de duas maneiras diferentes: primeiramente, o algoritmo GMDH foi utilizado para gerar uma melhor estimativa da base de dados, tendo como resultado uma matriz denominada matriz_z, que foi utilizada no treinamento das RNA. Logo após, o GMDH foi utilizado no estudo das variáveis mais relevantes, sendo estas variáveis utilizadas no processamento das informações. Para realizar as simulações computacionais, foram propostos cinco modelos: Modelo 1 (Modelo Teórico) e Modelos 2, 3, 4 e 5 (Dados de operação do reator). Após a realização de um estudo exaustivo dedicado a Monitoração, iniciou-se a etapa de Detecção de Falhas em sensores, onde foram simuladas falhas na base de dados dos sensores. Para tanto as leituras dos sensores tiveram um acréscimo dos seguintes valores: 5%, 10%, 15% e 20%. Os resultados obtidos utilizando o algoritmo GMDH na escolha das melhores variáveis de entrada para as RNA foram melhores do que aqueles obtidos utilizando apenas RNA, o que viabiliza o uso da nova metodologia de Monitoração e Detecção de Falhas em sensores apresentada.
The increasing demand for complexity, efficiency and reliability in modern industrial systems has stimulated studies on control theory applied to the development of Monitoring and Fault Detection systems. In this work a new Monitoring and Fault Detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and Artificial Neural Networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The Monitoring and Fault Detection system was developed in two parts: the first dedicated to preprocessing the information, using the GMDH algorithm; and the second to processing the information using ANNs. The GMDH algorithm was used in two different ways: first, to generate a better estimate of the database, called matrix_z, which was used to train the ANNs; after that, to study the best set of variables with which to train the ANNs, resulting in a better estimate of the monitored variables. The methodology was developed and tested using five different models: one Theoretical Model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in the sensors was developed by simulating faults in the sensor database, adding offsets of 5%, 10%, 15% and 20% to the sensor readings. The results obtained using the GMDH algorithm to choose the best input variables for the ANNs were better than those obtained using only ANNs, thus making possible the use of these methods in the implementation of a new Monitoring and Fault Detection methodology applied to sensors.
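The fault-detection step (comparing each sensor reading against the model's estimate after injecting 5-20% offsets) can be sketched as a simple relative-residual test (my illustration; in the thesis the estimates come from GMDH-selected ANN models, and the numbers below are arbitrary):

```python
import numpy as np

def detect_fault(measured, predicted, threshold=0.03):
    """Flag readings whose relative residual against the monitoring
    model's estimate exceeds the threshold."""
    return np.abs(measured - predicted) / np.abs(predicted) > threshold

rng = np.random.default_rng(0)
predicted = 100.0 + rng.normal(0.0, 0.5, 200)     # model estimate of the variable
measured = predicted + rng.normal(0.0, 0.5, 200)  # healthy sensor readings
faulty = measured * 1.10                          # +10% bias, as simulated in the thesis

healthy_flags = detect_fault(measured, predicted)
faulty_flags = detect_fault(faulty, predicted)
```

With a 3% threshold the healthy channel raises no flags while the biased channel is flagged on every sample; the threshold trades false alarms against missed faults.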
APA, Harvard, Vancouver, ISO, and other styles
46

Torres, Cedillo Sergio Guillermo. "The identification of unbalance in a nonlinear squeeze-film damped system using an inverse method : a computational and experimental study." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/the-identification-of-unbalance-in-a-nonlinear-squeezefilm-damped-system-using-an-inverse-method--a-computational-and-experimental-study(045efa19-bf0f-40de-8657-cb187283a6c6).html.

Full text
Abstract:
Typical aero-engine assemblies have at least two nested rotors mounted within a flexible casing via squeeze-film damper (SFD) bearings. As a result, the flexible casing structures become highly sensitive to the vibration excitation arising from the high- and low-pressure rotors. Lowering vibrations at the aircraft engine casing can reduce harmful effects on the aircraft engine. Inverse problem techniques provide a means of solving the unbalance identification problem for a rotordynamic system supported by nonlinear SFD bearings, requiring prior knowledge of the structure and measurements of vibrations at the casing. This thesis presents two inverse solution techniques for the nonlinear rotordynamic inverse problem, focused on applications where the rotor is inaccessible under operating conditions, e.g. high-pressure rotors. Numerical and experimental validations under hitherto unconsidered conditions have been conducted to test the robustness of each technique. The main contributions of this thesis are:
• The development of a non-invasive inverse procedure for unbalance identification and balancing of a nonlinear SFD rotordynamic system. This method requires at least a linear connection to ensure a well-conditioned explicit relationship between the casing vibration and the rotor unbalance via frequency response functions. The method makes none of the simplifying assumptions found in previous research, e.g. neglect of gyroscopic effects, assumption of structural isotropy, restriction to one SFD, or circular centred orbits (CCOs) of the SFD.
• The identification and validation of the inverse dynamic model of the nonlinear SFD element, based on recurrent neural networks (RNNs) trained to reproduce the Cartesian displacements of the journal relative to the bearing housing when presented with given input time histories of the Cartesian SFD bearing forces.
• The empirical validation of an entirely novel approach to the solution of a nonlinear inverse rotor-bearing problem, one involving an identified empirical inverse SFD bearing model. This method is suitable for applications where there is no adequate linear connection between rotor and casing.
Both inverse solutions are formulated using the Receptance Harmonic Balance Method (RHBM) as the underpinning theory. The first inverse solution uses the RHBM to generate the backwards operator, where a linear connection is required to guarantee an explicit inverse solution. A least-squares solution yields the equivalent unbalance distribution in prescribed planes of the rotor, which is consequently used to balance it. This method is successfully validated on distinct rotordynamic systems, using simulated data that consider different practical scenarios of error sources, such as noisy data, model uncertainty and balancing errors. Focus then shifts to the second inverse solution, which is experimentally based. In contrast to the explicit inverse solution, the second alternative uses the inverse SFD model as an implicit inverse solution. Details of the SFD test rig and its set-up for empirical identification are presented. The empirical RNN training process for the inverse function of an SFD is presented and validated as part of a nonlinear inverse problem. Finally, it is shown that the RNN can serve as reliable virtual instrumentation for use within an inverse rotor-bearing problem.
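The inverse-identification idea behind the RNN-based SFD model (learn the map from force time histories back to journal displacements) can be sketched with a lagged least-squares stand-in, since a trained RNN is beyond a short example (the toy forward model and all names below are mine, not from the thesis):

```python
import numpy as np

def lagged_matrix(u, n_lags):
    """Stack rows [u[k], u[k-1], ..., u[k-n_lags+1]] for a time series."""
    return np.column_stack([np.roll(u, j) for j in range(n_lags)])[n_lags:]

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2000)        # journal displacement (training signal)

# Toy forward model: force depends on current and previous displacement.
f = 2.0 * x + 0.5 * np.roll(x, 1)
f[0] = 0.0                            # discard the wrap-around sample

# Identify the inverse map (force history -> displacement) by least squares.
n_lags = 4
A = lagged_matrix(f, n_lags)
w, *_ = np.linalg.lstsq(A, x[n_lags:], rcond=None)
x_hat = A @ w
```

The identified model recovers the displacement from force histories alone, which is the role the trained RNN plays inside the implicit inverse solution.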
APA, Harvard, Vancouver, ISO, and other styles
47

Sebastiani, Andrea. "Deep Plug-and-Play Gradient Method for Super-Resolution." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20619/.

Full text
Abstract:
Many fields require high-resolution images. By resolution we mean not only the spatial dimensions of the image in pixels but also the quality of the image itself, free of distortion and/or noise. Images of this kind can usually be obtained with acquisition equipment fitted with high-precision sensors capable of converting analogue data to digital. Often, because of physical and economic limitations, the quality the instruments achieve falls well short of what the various applications require. To address this problem, numerous techniques commonly known as Super-Resolution have been developed. The purpose of these methods is to reconstruct a high-resolution (HR) image from an image acquired at a lower resolution (LR). This thesis has two main objectives. The first is to study and evaluate how classical techniques can be combined with recent innovations in deep learning applied to image reconstruction. The second is to extend a class of so-called plug-and-play methods by introducing a regularizer on the derivatives. For these reasons, we decided to call the algorithm resulting from this research the Deep Plug-and-Play Gradient Method. We note that the proposed method can be used in many problems whose mathematical formulation is similar to the super-resolution problem. In this thesis we chose to focus on, and implement, only a version for Super-Resolution.
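A plug-and-play scheme of this family alternates a gradient step on the data-fidelity term with a denoising step that plays the role of the learned prior. A 1-D deblurring sketch with a hand-rolled smoother in place of the deep denoiser (my illustration; the thesis's derivative regularizer and trained denoiser are not reproduced):

```python
import numpy as np

def blur(x, k):
    """Forward operator A: convolution with a known kernel."""
    return np.convolve(x, k, mode="same")

def denoise(x, w=0.05):
    """Stand-in for the learned denoiser: mild local smoothing."""
    return (1 - w) * x + w * 0.5 * (np.roll(x, 1) + np.roll(x, -1))

def pnp_gradient(y, k, steps=300, alpha=0.4):
    """Plug-and-play gradient descent: a gradient step on the data
    term ||A x - y||^2 followed by a denoising (prior) step."""
    x = y.copy()
    for _ in range(steps):
        grad = blur(blur(x, k) - y, k[::-1])  # A^T (A x - y), A^T flips the kernel
        x = denoise(x - alpha * grad)
    return x

# Piecewise-constant signal blurred by a 3-tap kernel.
truth = np.zeros(64)
truth[20:40] = 1.0
k = np.array([0.25, 0.5, 0.25])
y = blur(truth, k)
x_rec = pnp_gradient(y, k)
```

Swapping `denoise` for any other (possibly deep) denoiser changes the prior without touching the data-fidelity machinery, which is exactly the appeal of the plug-and-play formulation.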
APA, Harvard, Vancouver, ISO, and other styles
48

Angelico, João Carlos 1971. "Desempenho das redes neurais artificiais na estimativa das variáveis físicas e químicas do solo /." Botucatu : [s.n.], 2005. http://hdl.handle.net/11449/101867.

Full text
Abstract:
Resumo: Métodos estatísticos de interpolação são freqüentemente utilizados para se obter as características dos solos em locais não amostrados, visando diminuir o número de amostras necessárias para um bom mapeamento do campo. Nesse trabalho, a estimativa da variabilidade espacial de atributos do solo foi realizada de duas maneiras: primeiramente utilizando-se os métodos estatísticos da krigagem e da co-krigagem e posteriormente as redes neurais artificiais. Os resultados obtidos pelos dois métodos foram comparados, com a finalidade de se verificar a eficiência das redes neurais artificiais na estimativa de atributos do solo. Os resultados mostraram que as redes neurais artificiais, em particular as redes Perceptron, com uma e com duas camadas de neurônios, são capazes de estimar as variabilidades espaciais dos solos com precisão maior do que os métodos estatísticos da krigagem e da co-krigagem. As redes neurais artificiais também se mostraram eficientes na estimativa de uma determinada variável do solo em função de sua classe textural.
Abstract: Statistical methods of interpolation are often used to obtain soil characteristics in non-sampled places in order to decrease the number of samples necessary for a good field mapping. In this project, the estimation of the spatial variability of soil attributes was done in two different ways: first using the statistical methods of kriging and cokriging, and then using artificial neural networks. The results computed by the two techniques were compared with each other in order to verify the efficiency of artificial neural networks in estimating soil attributes. The results indicated that artificial neural networks, especially Perceptron networks with one and with two layers of neurons, are able to estimate the spatial variability of soils more precisely than the kriging and cokriging methods. The artificial neural networks also proved efficient in estimating a given soil variable as a function of its textural class.
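The kriging baseline against which the neural estimates are compared amounts to solving a covariance system for interpolation weights. A minimal simple-kriging sketch in one dimension (the Gaussian covariance, length scale and nugget below are assumed for illustration, not fitted to a variogram as in practice):

```python
import numpy as np

def kriging_predict(xs, ys, xq, length=1.0, nugget=1e-6):
    """Simple kriging with a fixed Gaussian covariance model: solve
    the sample covariance system for weights, then combine the
    observed values at the query points."""
    K = np.exp(-((xs[:, None] - xs[None, :]) / length) ** 2)
    w = np.linalg.solve(K + nugget * np.eye(len(xs)), ys)
    Kq = np.exp(-((xq[:, None] - xs[None, :]) / length) ** 2)
    return Kq @ w

# Interpolate a soil attribute sampled at four locations.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.sin(xs)                       # stand-in for a measured attribute
pred = kriging_predict(xs, ys, np.array([1.5]))
```

A neural network would instead be trained on the same `(xs, ys)` pairs and queried at `1.5`, which is the comparison the thesis carries out on real soil data.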
Advisor: Ivan Nunes da Silva
Co-advisor: José Alfredo Covolan Ulson
Committee: Angelo Cataneo
Committee: Luiz Gonzaga Campos Porto
Committee: Gastão Moraes da Silveira
Committee: Casimiro Dias Gadanha Júnior
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
49

Marchesan, Gustavo. "Estimadores de frequência aplicados a sistemas elétricos de potência." Universidade Federal de Santa Maria, 2013. http://repositorio.ufsm.br/handle/1/8523.

Full text
Abstract:
Frequency estimation is a problem widely studied in many fields, including electric power systems. Several methods have been proposed for this purpose, and most of them perform well when the signal is not distorted by harmonics or noise. This work presents two new methods based on Artificial Neural Networks for frequency estimation. Both use Clarke's transform to generate a phasor that represents the three-phase signal of the system. In the first methodology this phasor is normalized and feeds a Generalized Regression Neural Network, which weights the values; the result is a phasor in which noise and harmonics are attenuated. The neural network output is then used to calculate the electrical system frequency. The second methodology uses the Adaptive Linear Neural Network. This work also tested, on electric power system signals, various frequency-estimation methodologies proposed in other fields of knowledge such as radar, sonar, communications, biomedicine and aviation. These methods are: Lavopa (proposed by Lavopa et al. 2007), Quinn (proposed by Quinn, 1994), Jacobsen (proposed by Jacobsen and Kootsookos, 2007), Candan (proposed by Candan, 2011), Macleod (proposed by Macleod, 1998), Aboutanios (proposed by Aboutanios, 2004), Mulgrew (proposed by Aboutanios and Mulgrew, 2005), Ferreira (proposed by Ferreira, 2001) and DPLL (proposed by Sithamparanathan, 2008). With the exception of the DPLL, these methods are based on the Discrete Fourier Transform and seek the peak of the frequency spectrum in order to find the fundamental frequency. The nine methodologies are compared with the proposed methods and with the techniques commonly used or studied for electric power systems. The tests include signals with noise, harmonics, sub-harmonics, frequency variations in step, ramp and sinusoidal form, and step variations in voltage and phase. The tests also include a simulated signal in which a load block is inserted into and immediately afterwards removed from the system. 
Finally, a comparison is made between the techniques, making it possible to point out the advantages and disadvantages of each and so identify the best methods to be applied to electric power systems.
A estimação de frequência é um problema muito estudado em diversas áreas, dentre elas a dos sistemas elétricos de potência. Inúmeras metodologias foram propostas para esse fim, sendo que a maioria delas apresenta bom desempenho quando o sinal não está distorcido por componentes harmônicas ou ruídos. Este trabalho propõe duas novas metodologias fundamentadas em Redes Neurais Artificiais, de modo a estimar a frequência. Elas utilizam a transformada de Clarck para gerar um fasor que representa o sinal trifásico do sistema. Na primeira metodologia, esse fasor é normalizado e alimenta a Rede Neural de Regressão Generalizada, que faz a ponderação dos valores. Ao final, obtém-se um fasor em que ruídos e harmônicas são atenuados. A saída da rede neural é, então, utilizada para o cálculo da frequência do sistema elétrico. A segunda metodologia utiliza a Rede Neural Linear Adaptativa. Neste trabalho, também são testadas, para uso em sistemas elétricos de potência, diversas metodologias propostas em outras áreas de conhecimento, tais como radar, sonar, comunicação, biomedicina e aviação. São elas: Lavopa (proposta por Lavopa et al. 2007), Quinn (proposta por Quinn, 1994), Jacobsen (proposta por Jacobsen e Kootsookos, 2007), Candan (proposta por Candan, 2011), Macleod (proposta por Macleod, 1998), Aboutanios (proposta por Aboutanios, 2004), Mulgrew (proposta por Aboutanios e Mulgrew, 2005), Ferreira (proposta por Ferreira 2001) e DPLL (proposta por Sithamparanathan, 2008). Com exceção da DPLL, os demais métodos são fundamentados na transformada discreta de Fourier e buscam encontrar o pico do espectro de frequências, para, então, encontrar a frequência fundamental. As nove metodologias são comparadas juntamente com os métodos propostos e as técnicas já comumente usadas ou estudadas para sistemas elétricos. Os testes incluem sinais com ruídos, harmônicas, sub-harmônicas, variações de frequência em degrau, rampa e senoidal, variações de fase e tensão em degrau. 
Os testes ainda incluem um sinal provindo de simulação em que um bloco de carga é inserido e logo após retirado do sistema. Ao final é realizada uma comparação entre as técnicas, sendo possível identificar as vantagens e desvantagens de cada uma e, assim, indicar as melhores a serem usadas em sistemas elétricos de potência.
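The front end shared by both proposed methods is Clarke's transform, which turns the three-phase signal into a rotating phasor whose angle tracks the system frequency. A minimal sketch (my illustration; the GRNN/ADALINE smoothing stage is replaced here by a plain mean over the phase increments):

```python
import numpy as np

def clarke(va, vb, vc):
    """Clarke transform: three phase signals -> one complex (alpha-beta)
    phasor that rotates at the system frequency for a balanced set."""
    alpha = (2 * va - vb - vc) / 3.0
    beta = (vb - vc) / np.sqrt(3.0)
    return alpha + 1j * beta

def estimate_frequency(va, vb, vc, fs):
    """Mean rotation rate of the Clarke phasor between samples."""
    p = clarke(va, vb, vc)
    dphi = np.angle(p[1:] * np.conj(p[:-1]))  # phase step per sample
    return fs * np.mean(dphi) / (2 * np.pi)

# Balanced 60 Hz three-phase signal sampled at 1 kHz.
fs, f0 = 1000.0, 60.0
t = np.arange(0.0, 0.2, 1 / fs)
va = np.sin(2 * np.pi * f0 * t)
vb = np.sin(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = np.sin(2 * np.pi * f0 * t + 2 * np.pi / 3)
f_hat = estimate_frequency(va, vb, vc, fs)
```

On a clean balanced signal this recovers the frequency almost exactly; the neural stages in the thesis exist precisely to keep the estimate stable when the phasor is corrupted by noise and harmonics.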
APA, Harvard, Vancouver, ISO, and other styles
50

Nagaoka, Marilda da Penha Teixeira [UNESP]. "Aplicação de redes neurais em análise de viabilidade econômica de co-geração de energia elétrica." Universidade Estadual Paulista (UNESP), 2005. http://hdl.handle.net/11449/101766.

Full text
Abstract:
Universidade Estadual Paulista (UNESP)
A co-geração de energia elétrica excedente por meio do aproveitamento do bagaço de cana-de-açúcar tem sido considerada uma alternativa importante na diversificação de fontes de geração de energia elétrica no Brasil, considerando-se as vantagens em relação à grande produção de matéria prima, menores custos de geração de energia e a possibilidade de reduzir o ônus dos investimentos em geração de energia do setor público. Apesar do grande potencial apresentado por esta fonte alternativa de energia, o mercado para a energia elétrica co-gerada está ainda hoje, sujeito a um ambiente de grande risco e incerteza, seja decorrente de condições do mercado de energia ou da produção. Este trabalho teve por objetivos analisar a viabilidade econômica de um projeto de investimento em co-geração de energia elétrica em uma usina sucroalcooleira na região Oeste do estado de São Paulo,com vistas à comercialização de excedentes, sob condições de risco, utilizando o algoritmo de Redes Neurais Artificiais. Procurou-se também testar a convergência dos resultados obtidos por este método com outro mais tradicionalmente utilizado em análise de risco para a determinação dos indicadores de viabilidade econômica do investimento. Os indicadores utilizados foram Valor Atual Líquido (VAL); Taxa Interna de Retorno (TIR); Relação Benefício - Custo (RBC); Payback Simples (PBS) e Payback Econômico (PBE). A análise foi realizada considerando seis cenários, considerando a possibilidade ou não de obtenção de financiamento e diferentes níveis de eficiência de queima do bagaço. No método de Redes Neurais Artificiais, as redes foram alimentadas com as seguintes variáveis de entrada: valor do investimento; despesas com juros e amortização; despesa com aquisição e transporte do bagaço e receita operacional, tendo como variável de saída o fluxo líquido de caixa.
The co-generation of surplus electrical energy by means of the use of sugar-cane bagasse has been considered an important alternative in the diversification of sources of electrical energy in Brazil, given the advantages associated with the large production of raw material: lower costs of energy generation and the possibility of reducing the burden of public-sector investments in energy generation. In spite of the great potential presented by this alternative source of energy, the market for co-generated electrical energy is still today subject to an environment of great risk and uncertainty, whether due to conditions of the energy market or of production. The objective of this research study was to analyze the economic viability of an investment project in co-generation of electrical energy in an alcohol and sugar mill in the western area of the state of São Paulo, with a view to the commercialization of surpluses, under risk conditions, using the algorithm of Artificial Neural Networks. The study also tested the convergence of the results obtained by this method with a method more traditionally used in risk analysis for determining the indicators of economic viability of the investment. The indicators used were Net Present Value (NPV); Internal Rate of Return (IRR); Benefit-Cost Ratio (BCR); Simple Payback (SPB) and Economic Payback (EPB). The analysis was performed for six different scenarios, taking into consideration the availability or not of financing and different levels of efficiency in the burning of bagasse. In the Artificial Neural Network method, the networks were supplied with input variables such as the value of the investment; expenses with interest and amortization; expenses with acquisition and transport of the bagasse; and operational revenue, with the net cash flow as the output variable.
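The risk analysis described above amounts to propagating uncertain cash flows through the viability indicators by Monte Carlo sampling. A minimal sketch for the NPV indicator (all figures below are arbitrary illustrations, not the thesis's data):

```python
import numpy as np

def npv(rate, cashflows):
    """Net present value of a cash-flow series (year 0 first)."""
    years = np.arange(len(cashflows))
    return np.sum(cashflows / (1 + rate) ** years)

rng = np.random.default_rng(0)
n = 10_000
investment = 10.0                                  # initial outlay (arbitrary units)
# Uncertain yearly net revenue over a 10-year horizon.
revenue = rng.normal(2.0, 0.5, size=(n, 10))
flows = np.hstack([np.full((n, 1), -investment), revenue])
npvs = np.array([npv(0.08, f) for f in flows])
p_loss = np.mean(npvs < 0)                         # probability of a negative NPV
```

The distribution of `npvs` (not just its mean) is what characterizes the project's risk; the same sampling loop applies to the IRR, BCR and payback indicators.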
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography