Dissertations / Theses on the topic 'Radial basis function (RBF) model'


Consult the top 34 dissertations / theses for your research on the topic 'Radial basis function (RBF) model.'


1

LACERDA, Estefane George Macedo de. "Model Selection of RBF Networks Via Genetic Algorithms." Universidade Federal de Pernambuco, 2003. https://repositorio.ufpe.br/handle/123456789/1845.

Full text
Abstract:
One of the main obstacles to the large-scale use of neural networks is the difficulty of defining values for their adjustable parameters. This work discusses how Radial Basis Function networks (RBF networks) can have their adjustable parameters set by genetic algorithms (GAs). To this end, it first presents a broad overview of the problems involved and the different approaches used to genetically optimize RBF networks. It also proposes a genetic algorithm for RBF networks with a non-redundant genetic encoding based on clustering methods. The work then addresses the problem of finding the adjustable parameters of a learning algorithm via GAs, also known as the model selection problem. Model selection techniques (e.g., cross-validation and bootstrap) are used as objective functions of the GA. The GA is adapted to this problem through heuristics such as Occam's razor and growing, among others. Some modifications exploit features of the GA, such as its ability to solve multi-objective optimization problems and to handle noisy objective functions. Experiments on a benchmark problem are carried out, and the results obtained with the proposed GA are compared with those of other approaches. The proposed techniques are generic and can also be applied to a wide range of learning algorithms.
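As a toy illustration of the combination this abstract describes (a GA whose objective function is a model selection criterion such as cross-validation), the sketch below evolves a single RBF shape parameter. The encoding, operators, and all parameter values are illustrative assumptions, not those of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_error(eps, X, y, k=5):
    """k-fold cross-validation error of a (lightly regularized) Gaussian RBF
    interpolant with shape parameter eps; this plays the GA's objective."""
    idx = np.random.default_rng(1).permutation(len(X))
    err = 0.0
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        A = np.exp(-(eps * np.abs(X[tr, None] - X[None, tr])) ** 2)
        # small ridge term stabilizes the near-singular flat-kernel regime
        w = np.linalg.solve(A + 1e-8 * np.eye(len(tr)), y[tr])
        pred = np.exp(-(eps * np.abs(X[fold, None] - X[None, tr])) ** 2) @ w
        err += np.sum((pred - y[fold]) ** 2)
    return err / len(X)

def ga_select_eps(X, y, pop=8, gens=10):
    """Toy GA over the shape parameter: truncation selection plus Gaussian
    mutation, with the (noisy) cross-validation error as fitness."""
    P = rng.uniform(0.5, 10.0, pop)
    for _ in range(gens):
        fit = np.array([cv_error(e, X, y) for e in P])
        parents = P[np.argsort(fit)[: pop // 2]]                  # keep best half
        children = parents + rng.normal(0.0, 0.3, parents.shape)  # mutate
        P = np.clip(np.concatenate([parents, children]), 0.1, 20.0)
    fit = np.array([cv_error(e, X, y) for e in P])
    return P[np.argmin(fit)]

X = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * X)
best_eps = ga_select_eps(X, y)
```

A real implementation would evolve the full parameter set (centers, widths, regularization) and use the thesis's heuristics; the skeleton above only shows cross-validation acting as the GA's fitness function.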
APA, Harvard, Vancouver, ISO, and other styles
2

Amouzgar, Kaveh. "Metamodel based multi-objective optimization." Licentiate thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Produktutveckling - Simulering och optimering, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-28432.

Full text
Abstract:
As a result of the increased accessibility of computational resources and the growth in computing power over the last two decades, designers are able to create computer models to simulate the behavior of complex products. To remain globally competitive, companies are forced to optimize their designs and products. Optimizing a design requires many runs of computationally expensive simulation models, so using metamodels as efficient and sufficiently accurate approximations of the simulation model is necessary. Radial basis functions (RBF) are one of several metamodeling methods found in the literature. The established approach is to add a bias to the RBF in order to obtain robust performance. The a posteriori bias is considered unknown at the outset and is determined by imposing extra orthogonality constraints. In this thesis, a new approach is proposed in which the RBF bias is set a priori by using the normal equation. The performance of the suggested approach is compared to the classic RBF with a posteriori bias. A further comprehensive comparison study covering several modeling criteria, such as problem dimension, sampling technique, and sample size, is conducted. The studies demonstrate that the suggested a priori approach is in general as good as RBF with a posteriori bias. Using the a priori RBF, it is clear that the global response is modeled with the bias and that the details are captured with the radial basis functions. Multi-objective optimization and the approaches used to solve such problems are briefly described in this thesis. One method that has proved efficient for multi-objective optimization problems (MOOP) is the strength Pareto evolutionary algorithm (SPEA2). Multi-objective optimization of the disc brake system of a heavy truck is performed using SPEA2 and RBF with a priori bias.
As a result, the weight of the system is found to be reducible without extensive compromise in the other objectives. Multi-objective optimization of the material model parameters of an adhesive layer, aimed at improving the results of a previous study, is also implemented; the result of the original study is improved and a clear insight into the nature of the problem is revealed.
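A minimal sketch of the a priori bias idea as summarized above: fit a global bias from the normal equations first, and let the radial basis functions capture the remaining detail. The linear bias, Gaussian kernel, and parameter values are assumptions for illustration, not the thesis's exact formulation:

```python
import numpy as np

def rbf_with_a_priori_bias(X, y, eps=3.0):
    """Fit the global trend first with a linear bias obtained from the normal
    equations, then capture the remaining detail with Gaussian RBFs."""
    P = np.hstack([np.ones((len(X), 1)), X])      # polynomial tail [1, x]
    beta = np.linalg.solve(P.T @ P, P.T @ y)      # normal equations: a priori bias
    resid = y - P @ beta                          # what is left for the RBFs
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(np.exp(-(eps * d) ** 2), resid)
    def predict(x):
        x = np.atleast_2d(x)
        r = np.linalg.norm(x[:, None, :] - X[None, :, :], axis=-1)
        return np.hstack([np.ones((len(x), 1)), x]) @ beta + np.exp(-(eps * r) ** 2) @ w
    return predict

# illustrative 1-D data: a linear trend (for the bias) plus a detail term
X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)
y = 2.0 + 3.0 * X.ravel() + np.sin(5.0 * X.ravel())
model = rbf_with_a_priori_bias(X, y)
```

Because the bias is fixed before the RBF weights are solved for, no orthogonality constraints on the weights are needed, which is the contrast with the classic a posteriori construction.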
3

Sze, Tiam Lin. "System identification using radial basis function networks." Thesis, University of Sheffield, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364232.

Full text
4

Du, Toit Wilna. "Radial basis function interpolation." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/2002.

Full text
Abstract:
Thesis (MSc (Applied Mathematics))--Stellenbosch University, 2008.
A popular method for interpolating multidimensional scattered data is radial basis functions. In this thesis we present the basic theory of radial basis function interpolation and also consider the solvability and stability of the method. Computing the interpolant directly has a high computational cost for large datasets, hence numerical methods to approximate the interpolant are necessary; we consider some recent numerical algorithms. Software to implement radial basis function interpolation and to display the resulting 3D interpolants is developed. We present results obtained from using our implementation on GIS and 3D face data, as well as in an image warping application.
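The basic construction the abstract refers to can be sketched as follows (a minimal Gaussian-RBF version solved directly, so only suitable for small datasets, which is exactly the limitation the thesis's numerical algorithms address):

```python
import numpy as np

def rbf_interpolate(centers, values, eps=1.0):
    """Build s(x) = sum_j w_j * phi(||x - x_j||) with Gaussian phi by solving
    the square, symmetric interpolation system A w = f."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.exp(-(eps * d) ** 2)        # A_ij = phi(||x_i - x_j||)
    w = np.linalg.solve(A, values)     # exact interpolation conditions
    def s(x):
        r = np.linalg.norm(np.atleast_2d(x)[:, None, :] - centers[None, :, :], axis=-1)
        return np.exp(-(eps * r) ** 2) @ w
    return s

# scattered 2-D data; the interpolant reproduces the data exactly at the nodes
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([0.0, 1.0, 1.0, 2.0, 1.0])
s = rbf_interpolate(pts, vals)
```

The direct solve costs O(n^3) and the matrix can become ill-conditioned, which is why approximate and localized algorithms are needed for large n.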
5

Shcherbakov, Victor. "Localised Radial Basis Function Methods for Partial Differential Equations." Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332715.

Full text
Abstract:
Radial basis function methods exhibit several very attractive properties, such as high-order convergence of the approximated solution and flexibility with respect to the domain geometry. However, the method in its classical formulation becomes impractical for problems with relatively large numbers of degrees of freedom due to the ill-conditioning and dense structure of the coefficient matrix. To overcome the latter issue we employ a localisation technique, namely a partition of unity method, while the former issue was previously addressed by several authors and is of less concern in this thesis. In this thesis we develop radial basis function partition of unity methods for partial differential equations arising in financial mathematics and glaciology. In the financial mathematics applications we focus on pricing multi-asset equity and credit derivatives whose models involve several stochastic factors. We demonstrate that localised radial basis function methods are very effective and well suited for financial applications thanks to their high-order approximation properties, which allow for a reduction of storage and computational requirements, crucial in multi-dimensional problems to cope with the curse of dimensionality. In the glaciology application we primarily make use of the meshfree nature of the methods and their flexibility with respect to the irregular geometries of ice sheets and glaciers. We also exploit the fact that radial basis function methods are stated in strong form, which is advantageous for approximating velocity fields of non-Newtonian viscous liquids such as ice, since it avoids a full reassembly of the coefficient matrix within the nonlinear iteration. In addition to the applied problems, we develop a least-squares radial basis function partition of unity method that is robust with respect to the node layout.
The method scales to problem sizes of a few hundred thousand nodes without encountering large condition numbers of the coefficient matrix. This property is enabled by the ability to control the condition number through the rate of oversampling and the mode of refinement.
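A minimal 1-D sketch of the partition of unity construction: solve a small Gaussian RBF system on each overlapping patch, then blend the local interpolants with normalized, compactly supported weights. All parameter choices here are illustrative, and the thesis's methods additionally handle PDE operators and least-squares oversampling:

```python
import numpy as np

def wendland_weight(x, c, rho):
    """Compactly supported Wendland C2 bump used as a partition of unity weight."""
    r = np.abs(x - c) / rho
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def rbf_pum_1d(xs, ys, centers, rho, eps=5.0):
    """Solve a small Gaussian RBF system per overlapping patch, then blend the
    local interpolants with normalized (Shepard) weights."""
    patches = []
    for c in centers:
        mask = np.abs(xs - c) < rho                 # nodes inside this patch
        xk, yk = xs[mask], ys[mask]
        A = np.exp(-(eps * (xk[:, None] - xk[None, :])) ** 2)
        patches.append((c, xk, np.linalg.solve(A, yk)))
    def s(x):
        num, den = 0.0, 0.0
        for c, xk, w in patches:
            wt = wendland_weight(x, c, rho)
            if wt > 0.0:
                num += wt * (np.exp(-(eps * (x - xk)) ** 2) @ w)
                den += wt
        return num / den
    return s

xs = np.linspace(0.0, 1.0, 11)
ys = np.sin(2.0 * np.pi * xs)
s = rbf_pum_1d(xs, ys, centers=[0.25, 0.75], rho=0.6)
```

The payoff is that each linear solve involves only the few nodes of one patch, so the global system stays sparse while the local high-order RBF accuracy is retained.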
6

Triastuti, Sugiyarto Endang. "Analysing rounding data using radial basis function neural networks model." Thesis, University of Northampton, 2007. http://nectar.northampton.ac.uk/2809/.

Full text
Abstract:
Unspecified counting practices used in data collection may create rounding to certain 'base' numbers, which can have serious consequences for data quality. Statistical methods for analysing missing data are commonly used to deal with the issue, but they can actually aggravate the problem: rounded data are not missing data; rather, some observations are systematically lumped onto certain base numbers, reflecting the rounding process or counting behaviour. A new method to analyse rounded data is therefore academically valuable. The neural network model developed in this study fills the gap and serves this purpose by complementing and enhancing conventional statistical methods. The model detects, analyses, and quantifies the existence of periodic structures in a data set caused by rounding. The robustness of the model is examined using simulated data sets containing specific rounding numbers at different levels. The model is also subjected to theoretical and numerical tests to confirm its validity before being used in real applications. Overall, the model performs very well, making it suitable for many applications. The assessment results show the importance of using the right best fit in rounding detection. The detection power and cut-off point estimation also depend on the data distribution and the rounding base numbers. Detecting rounding to prime numbers is easier than to non-prime numbers due to the unique characteristics of the former: the bigger the prime, the easier the detection. This is in complete contrast with non-prime numbers, where the bigger the number, the more 'factor' numbers distract the rounding detection. Using a uniform best fit on uniform data produces the best result and the lowest cut-off point; the consequence of using a wrong best fit on uniform data is, however, also the worst.
The model performs best on data containing 10-40% rounding levels, as lower or higher rounding levels produce an unclear rounding pattern or distort the rounding detection, respectively. The modulo-test method suffers from the same problem. Application to real religious census data confirms the modulo-test finding that the data contain rounding to base 5, while applications to data on cigarettes smoked and alcohol consumed show good detection results. The cigarette data appear to contain rounding to base 5, while the alcohol consumption data indicate no rounding patterns, which may be attributed to the ways the two data sets were collected. The modelling applications can be extended to other areas in which rounding is common and can have significant consequences. The model can be refined to include a data-smoothing process and made user-friendly as an online modelling tool, maximizing its potential use.
7

Toratti, Luiz Otávio. "Design de campos vetoriais em volumes usando RBF." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-22102018-170348/.

Full text
Abstract:
Vector fields are important to a wide range of applications in Computer Graphics, from texture synthesis and mapping to fluid animation, producing effects widely used in the entertainment industry. To produce such fields, design tools are preferred over numerical simulations, not only for their lower computational cost but mainly because they give the artist freedom in the creation process. Good methods for vector field design on the surfaces of three-dimensional objects exist in the literature; however, there are only a few studies on synthesizing vector fields in the interior of objects, and even fewer when specific properties of the field are required. This work presents a technique to synthesize vector fields with properties of incompressible fluid motion in the interior of objects. In a first step, the method interpolates control vectors with a certain desired property throughout the whole domain; the resulting field is then modified to properly fit the boundary geometry of the object.
8

Wang, Cong. "Evaluation of a least-squares radial basis function approximation method for solving the Black-Scholes equation for option pricing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-183042.

Full text
Abstract:
Radial basis function (RBF) approximation is an extremely powerful tool that is promising for high-dimensional problems, such as those arising from the pricing of basket options using the Black-Scholes partial differential equation. The main problem for RBF methods has been ill-conditioning as the RBF shape parameter becomes small, corresponding to flat RBFs. This thesis employs a recently developed method called the RBF-QR method to reduce computational cost by improving the conditioning, thereby allowing for the use of a wider range of shape parameter values. Numerical experiments for the one-dimensional case are presented and a MATLAB implementation is provided. In our experiments, the RBF-QR method performs better than the RBF-Direct method for small shape parameters. Using Chebyshev points instead of a standard uniform distribution can increase the accuracy through clustering of the nodes towards the boundary. The least-squares formulation for RBF methods is preferable to the collocation approach because it can result in smaller errors for the same number of basis functions.
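The ill-conditioning that motivates RBF-QR is easy to observe with the RBF-Direct formulation; a small sketch with illustrative node set and shape parameters (not the thesis's experiments):

```python
import numpy as np

def rbf_direct_condition(eps, x):
    """Condition number of the Gaussian RBF-Direct interpolation matrix
    A_ij = exp(-(eps * |x_i - x_j|)^2) on the nodes x."""
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.cond(A)

x = np.linspace(0.0, 1.0, 15)
# Flatter basis functions (smaller eps) make the matrix dramatically worse
# conditioned; this flat regime is where RBF-Direct fails and RBF-QR,
# by evaluating the basis in a numerically stable way, remains usable.
```

RBF-QR itself avoids forming this matrix directly; the sketch only shows why the direct approach breaks down for small shape parameters.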
9

Stephanson, Matthew B. "An Adaptive, Black-Box Model Order Reduction Algorithm Using Radial Basis Functions." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345226428.

Full text
10

Sjödin, Hällstrand Andreas. "Bilinear Gaussian Radial Basis Function Networks for classification of repeated measurements." Thesis, Linköpings universitet, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170850.

Full text
Abstract:
The Growth Curve Model is a bilinear statistical model which can be used to analyse several groups of repeated measurements. Normally the Growth Curve Model is defined in such a way that the permitted sampling frequency of the repeated measurements is limited by the number of observed individuals in the data set. In this thesis, we examine the possibilities of utilizing highly frequently sampled measurements to increase classification accuracy for real-world data; that is, we look at the case where the regular Growth Curve Model is not defined due to the relationship between the sampling frequency and the number of observed individuals. When working with this high-frequency data, we develop a new method of basis selection for the regression analysis which yields what we call a Bilinear Gaussian Radial Basis Function Network (BGRBFN), which we then compare to more conventional polynomial and trigonometric function bases. Finally, we examine whether Tikhonov regularization can be used to further increase the classification accuracy in the high-frequency data case. Our findings suggest that the BGRBFN performs better than the conventional methods in both classification accuracy and functional approximability. The results also suggest that both high-frequency data and, furthermore, Tikhonov regularization can be used to increase classification accuracy.
11

Cowley, Marlise Sunne. "Optimising pressure profiles in superplastic forming." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/61288.

Full text
Abstract:
Some metals, such as Ti-6Al-4V, have a high elongation to failure when strained at certain rates and temperatures. Superplastic forming exploits this property, and it can be used to form thin, geometrically complex components. Superplastic forming is a slow process, and this is one of the reasons why it is an expensive manufacturing process. Localised thinning occurs if the specimen is strained too quickly, and components with locally thin walls fail prematurely. The goal of this study is to find a technique that can be used to minimise the forming time while limiting the minimum final thickness. The superplastic forming process is investigated with the finite element method, which requires a material model describing the superplastic behaviour of the metal. Several material models are investigated in order to select one that can capture localised thinning at higher strain rates. The material models are calibrated with stress-strain data, grain size-time data and strain rate sensitivity-strain data. The digitised data from the literature are for Ti-6Al-4V with three different initial grain sizes strained at different rates at 927 °C. The optimisation of the forming time is done with an approximate optimisation algorithm: metamodels are fitted to simulated data and used to find the optimum instead of using the finite element model directly. One metamodel is fitted to the final forming time results, and another to the final minimum thickness results. A regressive radial basis function method is used to construct the metamodels; the interpolating radial basis function method proved unreliable at the design space boundaries due to non-smooth finite element results, the non-smoothness being due to the path dependence of the problem.
The final forming time of the superplastic forming of a rectangular box was successfully minimised while limiting the final minimum thickness. The metamodels predicted that allowing a 4% decrease in the minimum allowable thickness (1.0 mm to 0.96 mm) and a 1 mm gap between the sheet and the die corner decreases the forming time by 28.84%. The finite element verification indicates that the final minimum thickness is reduced by 3.8% and that the gap between the sheet and the die corner is less than 1 mm, with the forming time reduced by 28.81%.
Dissertation (MEng)--University of Pretoria, 2017.
Mechanical and Aeronautical Engineering
MEng
12

Gerace, Salvadore. "A MODEL INTEGRATED MESHLESS SOLVER (MIMS) FOR FLUID FLOW AND HEAT TRANSFER." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2371.

Full text
Abstract:
Numerical methods for solving partial differential equations are commonplace in the engineering community, and their popularity can be attributed to the rapid performance improvement of modern workstations and desktop computers. The ubiquity of computer technology has allowed all areas of engineering to access detailed thermal, stress, and fluid flow analysis packages capable of performing complex studies of current and future designs. The rapid pace of computer development, however, has begun to outstrip efforts to reduce analysis overhead. As such, most commercially available software packages are now limited by the human effort required to prepare, develop, and initialize the necessary computational models. Primarily due to the mesh-based analysis methods utilized in these software packages, the dependence on model preparation greatly limits the accessibility of these analysis tools. In response, the so-called meshless or mesh-free methods have seen considerable interest, as they promise to greatly reduce the necessary human interaction during model setup. However, despite the success of these methods in areas demanding high degrees of model adaptability (such as crack growth, multi-phase flow, and solid friction), meshless methods have yet to gain acceptance as a viable alternative to more traditional solution approaches in general solution domains. Although this may be due, at least in part, to the relative youth of the techniques, another potential cause is the lack of focus on developing robust methodologies. The failure to approach development from a practical perspective has prevented researchers from obtaining commercially relevant meshless methodologies that reach the full potential of the approach.
The primary goal of this research is to present a novel meshless approach called MIMS (Model Integrated Meshless Solver) which establishes the method as a generalized solution technique capable of competing with more traditional PDE methodologies (such as the finite element and finite volume methods). This was accomplished by developing a robust meshless technique as well as a comprehensive model generation procedure. By closely integrating the model generation process into the overall solution methodology, the presented techniques are able to fully exploit the strengths of the meshless approach to achieve levels of automation, stability, and accuracy currently unseen in the area of engineering analysis. Specifically, MIMS implements a blended meshless solution approach which utilizes a variety of shape functions to obtain a stable and accurate iteration process. This solution approach is then integrated with a newly developed, highly adaptive model generation process which employs a quaternary triangular surface discretization for the boundary, a binary-subdivision discretization for the interior, and a unique shadow layer discretization for near-boundary regions. Together, these discretization techniques are able to achieve directionally independent, automatic refinement of the underlying model, allowing the method to generate accurate solutions without need for intermediate human involvement. In addition, by coupling the model generation with the solution process, the presented method is able to address the issue of ill-constructed geometric input (small features, poorly formed faces, etc.) to provide an intuitive, yet powerful approach to solving modern engineering analysis problems.
Ph.D.
Department of Mechanical, Materials and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering PhD
13

Martínez, Brito Izacar Jesús. "Quantitative structure fate relationships for multimedia environmental analysis." Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8590.

Full text
Abstract:
Key physicochemical properties for a wide spectrum of chemical pollutants are unknown. This thesis analyses the prospect of assessing the environmental distribution of chemicals directly from supervised learning algorithms using molecular descriptors, rather than from multimedia environmental models (MEMs) using several physicochemical properties estimated from QSARs. Dimensionless compartmental mass ratios of 468 validation chemicals were compared, in logarithmic units, between: a) SimpleBox 3, a Level III MEM, propagating random property values within statistical distributions of widely recommended QSARs; and, b) Support Vector Regressions (SVRs), acting as Quantitative Structure-Fate Relationships (QSFRs), linking mass ratios to molecular weight and constituent counts (atoms, bonds, functional groups and rings) for training chemicals. Best predictions were obtained for test and validation chemicals optimally found to be within the domain of applicability of the QSFRs, evidenced by low MAE and high q2 values (in air, MAE≤0.54 and q2≥0.92; in water, MAE≤0.27 and q2≥0.92).
14

Guo, Zhihao. "Intelligent multiple objective proactive routing in MANET with predictions on delay, energy, and link lifetime." online version, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=case1195705509.

Full text
15

Kohram, Mojtaba. "Experiments with Support Vector Machines and Kernels." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1378112059.

Full text
16

Sarmah, Dipsikha. "Evaluation of Spatial Interpolation Techniques Built in the Geostatistical Analyst Using Indoor Radon Data for Ohio,USA." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1350048688.

Full text
17

Lee, Jun won. "Relationships Among Learning Algorithms and Tasks." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2478.

Full text
Abstract:
Metalearning aims to obtain knowledge of the relationship between the mechanism of learning and the concrete contexts in which that mechanism is applicable. As new mechanisms of learning are continually added to the pool of learning algorithms, the chances of encountering behavior similarity among algorithms increase. Understanding the relationships among algorithms and the interactions between algorithms and tasks helps to narrow down the space of algorithms to search for a given learning task. In addition, this process helps to disclose factors contributing to the similar behavior of different algorithms. We first study general characteristics of learning tasks and their correlation with the performance of algorithms, isolating two metafeatures whose values are fairly distinguishable between easy and hard tasks. We then devise a new metafeature that measures the difficulty of a learning task independently of the performance of learning algorithms on it. Building on these preliminary results, we investigate more formally how we might measure the behavior of algorithms at a finer-grained level than a simple dichotomy between easy and hard tasks. We prove that, among many possible candidates, the Classifier Output Difference (COD) measure is the only one possessing the properties of a metric necessary for further use in our proposed behavior-based clustering of learning algorithms. Finally, we cluster 21 algorithms based on COD and show the value of the clustering in 1) highlighting interesting behavior similarity among algorithms, which leads us to a thorough comparison of Naive Bayes and Radial Basis Function Network learning, and 2) designing more accurate algorithm selection models by predicting clusters rather than individual algorithms.
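The COD measure itself is simple to state: the fraction of instances on which two classifiers' predictions disagree. A minimal sketch, with two of the metric properties the abstract mentions (identity and symmetry) directly checkable:

```python
import numpy as np

def cod(preds_a, preds_b):
    """Classifier Output Difference: the fraction of test instances on which
    two classifiers' predictions disagree."""
    a, b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(a != b))
```

Because COD compares predictions rather than accuracies, two algorithms can have identical error rates yet a large COD, which is what makes it suitable as a distance for behavior-based clustering.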
18

Gao, Zhiyuan, and Likai Qi. "Predicting Stock Price Index." Thesis, Halmstad University, Applied Mathematics and Physics (CAMP), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-3784.

Full text
Abstract:

This study is based on three models: the Markov model, the Hidden Markov model and the Radial basis function neural network. A number of studies have applied these three models to the stock market, though individual researchers have developed their own techniques to design and test the Radial basis function neural network. This paper aims to show the different ways of applying these three models to predict price processes of the stock market, and the precision each achieves. By comparing the same group of data, the authors get different results: based on the Markov model, the authors find a tendency of the stock market in the future; the Hidden Markov model behaves better in the financial market; and when the fluctuation of the stock price index is not drastic, the Radial basis function neural network gives good predictions.

19

Rodríguez, Martínez Cecilia. "Software quality studies using analytical metric analysis." Thesis, KTH, Kommunikationssystem, CoS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-120325.

Full text
Abstract:
Today engineering companies expend a large amount of resources on the detection and correction of bugs (defects) in their software. These bugs are usually due to errors and mistakes made by programmers while writing the code or the specifications. No tool is able to detect all of these bugs, and some remain undetected despite testing of the code. For these reasons, many researchers have tried to find indicators in a software's source code that can be used to predict the presence of bugs. Every bug in the source code is a potential failure of the program to perform as expected. Therefore, programs are tested with many different cases in an attempt to cover all the possible paths through the program and detect all of these bugs. Early prediction of bugs informs the programmers about the location of the bugs in the code; thus, programmers can more carefully test the more error-prone files and save a lot of time by not testing error-free files. This thesis project created a tool that is able to predict error-prone source code written in C++. In order to achieve this, we have utilized one predictor which has been extremely well studied: software metrics. Many studies have demonstrated that there is a relationship between software metrics and the presence of bugs. In this project a Neuro-Fuzzy hybrid model based on Fuzzy c-means and a Radial Basis Neural Network has been used. The efficiency of the model has been tested in a software project at Ericsson. Testing of this model showed that the program does not achieve high accuracy due to the lack of independent samples in the data set. However, experiments did show that classification models provide better predictions than regression models. The thesis concludes by suggesting future work that could improve the performance of this program.
APA, Harvard, Vancouver, ISO, and other styles
20

Hinkle, Kurt Berlin. "An Automated Method for Optimizing Compressor Blade Tuning." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6230.

Full text
Abstract:
Because blades in jet engine compressors are subject to dynamic loads based on the engine's speed, it is essential that the blades are properly "tuned" to avoid resonance at those frequencies to ensure safe operation of the engine. The tuning process can be time consuming for designers because there are many parameters controlling the geometry of the blade and, therefore, its resonance frequencies. Humans cannot easily optimize design spaces consisting of multiple variables, but optimization algorithms can effectively optimize a design space with any number of design variables. Automated blade tuning can reduce design time while increasing the fidelity and robustness of the design. Using surrogate modeling techniques and gradient-free optimization algorithms, this thesis presents a method for automating the tuning process of an airfoil. Surrogate models are generated to relate airfoil geometry to the modal frequencies of the airfoil. These surrogates enable rapid exploration of the entire design space. The optimization algorithm uses a novel objective function that accounts for the contribution of every mode's value at a specific operating speed on a Campbell diagram. When the optimization converges on a solution, the new blade parameters are output to the designer for review. This optimization guarantees a feasible solution for tuning of a blade. With 21 geometric parameters controlling the shape of the blade, the geometry for an optimally tuned blade can be determined within 20 minutes.
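The surrogate-plus-search idea described above can be illustrated with a Gaussian RBF interpolant fitted to sampled "simulations" and a gradient-free search run on the cheap model instead of the simulator. This is a generic sketch; the thesis's 21 geometry parameters, modal solver, and Campbell-diagram objective are not reproduced, and the shape parameter and toy objective are assumptions.

```python
import numpy as np

def rbf_fit(X, y, eps=2.0):
    """Fit Gaussian RBF interpolation weights: solve (Phi + jitter) w = y."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2) + 1e-10 * np.eye(len(X))   # jitter for conditioning
    return np.linalg.solve(Phi, y)

def rbf_eval(centers, w, Xq, eps=2.0):
    d = np.linalg.norm(Xq[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

# stand-in for an expensive modal simulation over one geometry parameter
# (say, distance of a resonance frequency from an engine-order line)
f = lambda x: (x - 0.5) ** 2
X = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)    # sampled design points
w = rbf_fit(X, f(X[:, 0]))

# gradient-free search on the cheap surrogate instead of the simulator
cand = np.linspace(-2.0, 2.0, 801).reshape(-1, 1)
best = cand[np.argmin(rbf_eval(X, w, cand))][0]
```

Because evaluating the surrogate costs microseconds, the whole design space can be swept densely, which is what makes the "rapid exploration" in the abstract possible.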
APA, Harvard, Vancouver, ISO, and other styles
21

Benki, Aalae. "Méthodes efficaces de capture de front de pareto en conception mécanique multicritère : applications industrielles." Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00959099.

Full text
Abstract:
In structural shape optimization, reducing costs and improving products are permanent challenges. To meet them, the forming process must be optimized, which amounts to solving an optimization problem. This is generally a multi-criteria optimization problem that is very expensive in computing time, in which several cost functions must be minimized subject to a number of constraints. To solve this type of problem, we developed a robust, efficient and reliable algorithm. It couples a Pareto-front capture algorithm (NBI or NNCM) with a metamodel (RBF), i.e., approximations of the results of the expensive simulations. From the set of results obtained with this approach, it is worth emphasizing that Pareto-front capture generates a set of non-dominated solutions. To decide which ones to choose, if need be, selection algorithms are required, such as Nash and Kalai-Smorodinsky; these two approaches, drawn from game theory, were used in our work. All the algorithms are validated on two industrial cases proposed by our industrial partner. The first concerns a 2D model of a can bottom (elasto-plasticity) and the second a 3D model of a cross-member (linear elasticity). The results obtained confirm the efficiency of the algorithms developed.
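The non-dominated set mentioned above, i.e. what Pareto-front capture produces, can be computed for a finite list of candidate designs with a simple dominance filter. This is only an illustrative sketch; NBI and NNCM themselves solve structured sub-problems on the models and are not shown, and the toy objective values are assumptions.

```python
def pareto_front(points):
    """Keep the points not dominated by any other (minimisation in every objective).

    q dominates p if q <= p in every objective and q < p in at least one.
    """
    def dominates(q, p):
        return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# toy bi-objective values (e.g. mass vs. compliance of candidate shapes)
designs = [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1), (3, 4), (4, 4)]
front = pareto_front(designs)
```

Selection schemes such as the Nash or Kalai-Smorodinsky solutions then pick a single compromise point from a front like this.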
APA, Harvard, Vancouver, ISO, and other styles
22

Lamraoui, Mourad. "Surveillance des centres d'usinage grande vitesse par approche cyclostationnaire et vitesse instantanée." Phd thesis, Université Jean Monnet - Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-01001576.

Full text
Abstract:
In the mechanical manufacturing industry, and in particular for high-speed machining centres, knowledge of the dynamic properties of the spindle-tool-workpiece system in operation is of great importance. Increases in the performance of machine tools and cutting tools have driven the development of this competitive process. Countless studies have been carried out to increase performance, and remarkable advances in materials, cutting-tool coatings and lubricants have made it possible to increase cutting speeds considerably while improving the quality of the machined surface. However, rational use of this technology is still strongly penalized by gaps in our understanding of cutting, both at the microscopic level of the fine interactions between the tool and the cut material and at the macroscopic level covering the behaviour of the elementary machining cell, so that the dynamic behaviour during cutting still raises many open questions and demands from the user a good level of know-how, and sometimes empiricism, to make the best use of the production equipment. The operation of machining centres generates vibrations that are often the cause of malfunctions and accelerate the wear of mechanical components (bearings) and tools. These vibrations are an image of the internal forces of the system, hence the interest of analysing vibratory mechanical quantities such as vibration velocity or acceleration. These tools are essential for modern maintenance, whose objective is to reduce the costs associated with breakdowns.
APA, Harvard, Vancouver, ISO, and other styles
23

Duplex, Benjamin. "Transfert de déformations géométriques lors des couplages de codes de calcul - Application aux dispositifs expérimentaux du réacteur de recherche Jules Horowitz." Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2011. http://tel.archives-ouvertes.fr/tel-00679015.

Full text
Abstract:
The CEA develops and uses computation software, also called computation codes, in various physical disciplines to optimize the costs of its facilities and experiments. During a study, several physical phenomena interact, so coupling and data exchanges between several codes are necessary. Each code performs its computations on a geometry, generally represented as a mesh containing thousands or even millions of cells. This thesis focuses on the transfer of geometric deformations between the specific meshes of each of the coupled computation codes. To this end, it presents a method for coupling several codes, in which the deformation computation is carried out by one of them. It also deals with setting up a model common to the different codes of the study, gathering all the shared data. Finally, it addresses deformation transfers between meshes representing the same geometry or adjacent geometries. The geometric modifications are discrete in nature, since they are based on a mesh. To make them accessible to all the codes of the study and to allow their transfer, a continuous representation is computed. For this, two functions are developed: one with global support, the other with local support. Both combine a simplification method and a network of radial basis functions. A complete application case is treated within the framework of the Jules Horowitz reactor: the effect of differential expansions on the cooling of an experimental device is studied.
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Tsung-Hsien, and 吳宗憲. "Dynamic Point Rendering and Compact Representations for 3D Models with Multiple Radial Basis Function (RBF) Surfaces." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/v44dbr.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (MS/PhD Program)
Academic year 90 (ROC calendar)
This thesis describes a point-rendering system for complex 3D models. For a given 3D model, we build a compact representation of points that significantly compresses the model. In this representation, we exploit a 3D Haar wavelet to build a hierarchy of points that preserves the key features inherent in the original model. Our rendering algorithm can use this hierarchy to dynamically choose an appropriate resolution for display. In contrast to other well-known previous work such as QSplat [5] and POP [6], the proposed method has several key features: 1. A high data-compression ratio. 2. Points are dynamically added according to a novel camera-sampling field (CSF) to yield a smooth surface representation. 3. Point shapes are also dynamically adjusted by the CSF to yield a smooth silhouette. 4. Points are uniformly added layer by layer to avoid blurring. 5. 3D models are decomposed into many parts and each part is reconstructed by a radial basis function (RBF). At run time, our rendering algorithm takes frame coherence, view-frustum culling, back-face culling, and so on into account. Therefore, the proposed system can effectively render 3D models.
APA, Harvard, Vancouver, ISO, and other styles
25

Joseph, P. J. "Superscalar Processor Models Using Statistical Learning." Thesis, 2006. http://hdl.handle.net/2005/537.

Full text
Abstract:
Processor architectures are becoming increasingly complex, and hence architects have to evaluate a large design space consisting of several parameters, each with a number of potential settings. In order to assist in guiding design decisions we develop simple and accurate models of the superscalar processor design space using a detailed and validated superscalar processor simulator. Firstly, we obtain precise estimates of all significant micro-architectural parameters and their interactions by building linear regression models using simulation-based experiments. We obtain good approximate models at low simulation cost using an iterative process in which Akaike's Information Criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We use this procedure for model construction and show that it provides a cost-effective scheme to experiment with all relevant parameters. We also obtain accurate predictors of the processor's performance response across the entire design space by constructing radial basis function networks from sampled simulation experiments. We construct these models by simulating at a limited set of design points selected by Latin hypercube sampling, and then deriving the radial basis function networks from the results. We show that these predictors provide accurate approximations to the simulator's performance response, and hence provide a cheap alternative to simulation while searching for optimal processor design points.
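The Latin hypercube sampling used above to pick the limited set of design points can be sketched in a few lines: one sample per stratum in every dimension, with the strata shuffled independently per column. This is a generic implementation, not the authors' code, and the point count and dimensionality below are arbitrary.

```python
import numpy as np

def latin_hypercube(n, d, seed=0):
    """n points in [0,1)^d with exactly one point per 1/n stratum per dimension."""
    rng = np.random.default_rng(seed)
    # row i of u lands in stratum [i/n, (i+1)/n) in every dimension
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])       # decouple the columns
    return u

# e.g. 16 design points over 5 normalised micro-architectural parameters
pts = latin_hypercube(16, 5)
```

Compared with plain random sampling, every parameter's range is covered evenly even at small sample counts, which is why it suits expensive simulator runs.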
APA, Harvard, Vancouver, ISO, and other styles
26

Dittmar, Jörg. "Modellierung dynamischer Prozesse mit radialen Basisfunktionen." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B4DD-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Martins, Fernando Manuel Pires. "An implementation of flexible RBF neural networks." Master's thesis, 2009. http://hdl.handle.net/10451/5482.

Full text
Abstract:
Master's thesis, Informatics, Universidade de Lisboa, Faculdade de Ciências, 2009
Whenever research results in a new discovery, the scientific community, and the world at large, is enriched. But scientific discovery per se is not enough: for everyone's benefit, these innovations must be made accessible, easy to use, and open to improvement, thereby furthering scientific progress. A new approach to kernel modelling in Radial Basis Function (RBF) neural networks was proposed by Falcão et al. in Flexible Kernels for RBF Networks [14]. This approach defines a learning algorithm for classification that is innovative in the field of RBF neural network learning. The tests carried out showed results on a par with the best in the area, making it an obvious duty to the scientific community to make it openly available. In this context, the implementation of the flexible-kernel algorithm for RBF neural networks (FRBF) gained new shape, resulting in a set of well-defined goals: (i) integration, the FRBF should be integrated, or integrable, into a platform easily accessible to the scientific community; (ii) openness, the source code should be open in order to foster the expansion and improvement of the FRBF; (iii) documentation, indispensable for easy use and understanding; and (iv) improvements, refining the original algorithm in the distance-computation procedure and in the support of configuration parameters. It was with these goals in mind that the implementation of the FRBF began. The FRBF follows the traditional two-layer RBF neural network approach of learning algorithms for classification: the hidden layer, which contains the kernels, computes the distance between a point and a class, and the point is assigned to the class with the smallest distance.
The algorithm focuses on a parameter-adjustment method for a network of multivariate Gaussian functions with elliptical shapes, giving an extra degree of flexibility to the kernel structure. This flexibility is obtained through modifier functions applied to the distance-computation procedure, which is essential in evaluating the kernels. It is precisely in this flexibility, and in its approximation to the Bayes Optimal Classifier (BOC) with kernels independent of the classes, that the innovation of the algorithm lies. The FRBF is divided into two phases, learning and classification, both similar to traditional RBF neural networks. Learning takes place in two distinct steps. In the first step: (i) the number of kernels for each class is defined through the proportion of the variance of the training set associated with each class; (ii) the training set is split by class and the kernel centres are determined with the K-Means algorithm; and (iii) a spectral decomposition of the covariance matrix of each kernel is performed, yielding the matrix of eigenvectors and the corresponding eigenvalues. In the second step, the values of the spread-adjustment parameters for each kernel are found. After the learning phase, a neural network is obtained that represents a classification model for data from the same domain as the training set. Classification is quite simple: applying the model to the points to be classified yields the probability of each point belonging to a given class.
The improvements introduced to the original algorithm, defined after analysing the prototype, centre on: (i) parameterization, allowing more parameters to be specified, for example the algorithm used by K-Means; (ii) testing the values of the kernel spread-adjustment parameters, always testing the variations above and below; (iii) indicating whether or not scaling is used in the PCA; and (iv) the possibility of computing the distance to the centroid or to the class. The analysis of platforms for developing the FRBF and its improvements resulted in the choice of R. R is at once a programming language, a development platform, and an environment. R was selected for several reasons, notably: (i) openness and extensibility, allowing anyone to use and extend it; (ii) the CRAN repository, which allows the distribution of extension packages; and (iii) its wide use for statistical applications and data analysis, being the de facto standard in the statistical scientific community. Once the platform was chosen, implementation of the FRBF and its improvements began. One of the first challenges was the lack of development documentation, which required defining specific development practices and standards, such as documentation and variable naming. The development of the FRBF was divided into two main functions: frbf, which carries out the learning procedure and returns the model, and predict, a base R function that was redefined to support the generated model and is responsible for classification. The first versions of the FRBF ran slowly, but this was not initially considered worrying. However, some tests of the learning procedure took far too long, and execution speed became a critical problem.
To solve it, an analysis was carried out to identify the slow spots. It revealed that the object-manipulation procedures were quite slow, so the R functions and operators that perform such manipulation more efficiently were studied in depth. Applying this corrective action resulted in a drastic reduction of the execution time. The FRBF quality process involved three types of tests: (i) unit tests, verifying the functions individually; (ii) black-box tests, testing the learning and classification functions; and (iii) precision tests, gauging the quality of the results. Considering the complexity of the FRBF and the number of possible configurations, the results obtained were quite satisfactory, showing a solid implementation. Precision received special attention, and it was precisely here that satisfaction with the results was not complete, owing to discrepancies between the results of the FRBF and those of the prototype, with the comparison always favouring the latter. A careful analysis revealed that the divergence occurred in the PCA, which is performed differently. R itself offers different ways of obtaining the eigenvectors and eigenvalues, and these were tested, but none of them outperformed the prototype's results. Once the algorithm was certified, it was packaged and submitted to CRAN. This process involved writing documentation for the package and for the functions and classes involved. The package is distributed under the LGPL licence, allowing very free use of the FRBF and, it is hoped, fostering its exploration and innovation. The work carried out fully meets the goals initially defined: the original algorithm was improved and implemented on the standard platform used by the statistical scientific community.
Its availability as a CRAN package under an open-source licence allows its exploration and innovation. Nevertheless, the implementation of the FRBF does not end here: there is room for future work on reducing the execution time and improving the classification results.
This dissertation is focused on the implementation and improvement of the Flexible Radial Basis Function Neural Networks algorithm. It is a clustering algorithm that describes a method for adjusting the parameters of a Radial Basis Function neural network of multivariate Gaussians with ellipsoid shapes. This provides an extra degree of flexibility to the kernel structure through the usage of modifier functions applied to the distance-computation procedure. The focus of this work is the improvement and implementation of this clustering algorithm under an open-source license on a data-analysis platform. Hence, the algorithm was implemented on the R platform, the de facto open standard framework among statisticians, allowing the scientific community to use it and, hopefully, improve it. The implementation presented several challenges at various levels, such as inexistent development standards, the creation of a distributable package, and the profiling and tuning process. The enhancements introduced provide a slightly different learning process and extra configuration options to the end user, resulting in more tuning possibilities to be tried and tested during the learning phase. The tests performed show a robust implementation of the algorithm and its enhancements on the R platform. The resulting work has been made available as an R package under an open-source license, allowing everyone to use it and improve it. This contribution to the scientific community complies with the goals defined for this work.
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Yu-chen, and 楊淯程. "Using Radial Basis Function Networks to Model Multi-attribute Utility Functions." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/00316965235574973316.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Graduate Institute of Information Management
Academic year 92 (ROC calendar)
On-line negotiation and bargaining systems can work effectively on the Internet only if user utility functions are known while transactions are underway. However, this prerequisite is hard to meet due to the variety and anonymous nature of Internet surfing. Therefore, how to rapidly and precisely construct a user's utility function is an essential issue. This research proposes a radial basis function (RBF) network, a kind of neural network, to rapidly and precisely model a user's multi-attribute utility function. We verify the feasibility of the method through experiments, and compare the RBF network with the Multiple Regression (MR), SMARTS, and SMARTER methods in prediction performance, time expense, and subjects' perceptions. The results show that the RBF network method is feasible on these criteria: not only does the RBF network need less time to construct a user's utility function than the SMARTS method, but it can also model user utility functions more precisely than the MR, SMARTS, and SMARTER methods.
APA, Harvard, Vancouver, ISO, and other styles
29

Shen, Cheng-Hung, and 沈政泓. "Application of Local Radial Basis Function Refinement with Finite Element Model in Groundwater Studies." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/50349052175897181171.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Soil and Water Conservation
Academic year 98 (ROC calendar)
This study proposes a numerical procedure that combines the finite element method (FEM) and the local radial basis function collocation method (LRBFCM). The proposed model was developed to solve the groundwater equation with a complex point-seeding system. The procedure starts with an initial computation by FEM on a coarse mesh. Adopting the initial computational results as boundary conditions, we employ a meshless numerical technique, the LRBFCM, to further evaluate precise solutions on arbitrarily refined computational nodes in the local area. Converged solutions can be obtained with this procedure via repeated superposition. The proposed model is suitable for application to an investigated area for which a coarse mesh and partially measured topographic and geologic data are available. To avoid interpolation errors, the coordinates of the coarse mesh can be treated directly as the initial mesh, and the additionally measured or probed locations can be attached as refined computational nodes without additional mesh generation. In this thesis, we first solve 2D and 3D Poisson equations with the present numerical procedure; groundwater-level simulations for well-pumping problems are also tested. In order to verify the accuracy, stability and robustness of the proposed model, regularly and irregularly shaped computational domains were adopted. The steady and unsteady numerical solutions show good agreement with analytical solutions or reference data. Finally, two practical cases are introduced to demonstrate applications of this study. The simulation results of the proposed model are compared with measurement data or the numerical solutions of reference articles, and these applications also yield reasonable results. This demonstrates that the FE-LRBFCM is a satisfactory numerical tool for simulating groundwater problems.
APA, Harvard, Vancouver, ISO, and other styles
30

Schoelkopf, B., K. Sung, C. Burges, F. Girosi, P. Niyogi, T. Poggio, and V. Vapnik. "Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers." 1996. http://hdl.handle.net/1721.1/7180.

Full text
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold such as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
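The "classical approach" compared above, with centres found by k-means and output weights fitted afterwards, can be sketched as follows. The paper trains the weights by error backpropagation; this sketch swaps in a least-squares fit for brevity, and the toy two-blob data, centre count, and kernel width are assumptions.

```python
import numpy as np

def kmeans(X, k, rng, iters=20):
    """Plain Lloyd's algorithm: assign to nearest centre, recompute means."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lbl == j):
                C[j] = X[lbl == j].mean(0)
    return C

def rbf_weights(X, y, C, gamma):
    """Least-squares fit of the output weights over Gaussian features."""
    Phi = np.exp(-gamma * ((X[:, None] - C[None]) ** 2).sum(-1))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# toy two-class problem: two Gaussian blobs with +/-1 labels
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
C = kmeans(X, 4, rng)
w = rbf_weights(X, y, C, gamma=0.5)
Phi = np.exp(-0.5 * ((X[:, None] - C[None]) ** 2).sum(-1))
acc = np.mean(np.sign(Phi @ w) == y)
```

The SV machine differs precisely in that its "centres" (the support vectors) and weights fall out of one convex optimization rather than this two-stage pipeline.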
APA, Harvard, Vancouver, ISO, and other styles
31

Mosharrof, Faisal Tanveer. "Structural optimization using MATLAB partial differential equation toolbox and radial basis function based response surface model." 2008. http://hdl.handle.net/10106/1081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Y. "Intelligent control for surface vessels based on Kalman filter variants trained radial basis function neural networks." Thesis, 2018. https://eprints.utas.edu.au/28702/1/Wang_whole_thesis.pdf.

Full text
Abstract:
For decades, there has been a significant increase in the demand of using a ship’s autopilot for complicated manoeuvres, such as maritime underway replenishment and sailing in constrained waters. In order to achieve these applications even in the presence of severe sea conditions, new control algorithms are required for the autopilots to control the underactuated ships. The study detailed in this thesis investigates the development of Radial Basis Function Neural Networks (RBFNN) based autopilot to satisfy the functionalities of course keeping, rudder roll damping, and path tracking. Two novel Kalman Filter Variants (KFV) based training algorithms, namely Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF), were proposed to improve the performance of the autopilot in the aspects of compensating the effects of system nonlinearity and unpredictable external disturbances. The primary emphasis of this study is in the design of autopilots, analysis of their performances, verification and validation through the experimental and numerical investigations. Considering the better generalisation ability and faster converge performance, modified EKF and UKF were proposed as the alternatives of the Back-Propagation (BP) training method for RBFNN controller to approximate the control law of the ship’s motions. The research splits into four phases. In first two phases, the capabilities of the proposed controllers, i.e., course keeping and path tracking controllers incorporating with roll damping controllers, were validated by adopting the mathematical model of a full scale ship with environmental disturbances. In order to enable both the experimental and numerical studies of proposed autopilots, the third phase focused on the modelling of the free running scaled model ‘Hoorn’, which was newly developed by utilising the embedded open-source hardware and low-cost sensors. 
In the last phase, the performances of course keeping and path tracking were investigated by conducting experiments using the physical model on Trevallyn Lake (Tasmania, Australia) and simulations using the developed mathematical model. The simulation results of the full scale ship showed that both EKF RBFNN and UKF RBFNN based control schemes were feasible to maintain the ship advancing on desired course and trajectory while reducing the roll damping only use the rudder as the actuator. The free running tests and system identification were successfully implemented to develop the four Degree of Freedom mathematical model of ‘Hoorn’, which has been verified by the comparison between experimental data and simulation results. The following experimental and numerical studies showed that the presented signal processing methods were effectively employed to provide acceptable states estimation, while the KFV trained neural network controllers were adequately making the ship to follow the desired states in the presence of variable external disturbances. Consequently, the ship’s robustness and controllability in counteracting environmental disturbances were corroborated. Based on the above-mentioned investigations, it is concluded that the developed control schemes could effectively determine the deflections of rudder to fulfil the proposed functionalities. The experiment results also demonstrated that the developed autopilots were assisted in effectively tracking desired states and enhancing the ship’s controllability with unpredictable disturbances. Moreover, in comparison with the EKF RBFNN based autopilot, the advantages of UKF RBFNN based autopilot consisted in the fast learning rate and smooth control law output while making the ship to meet the predefined requirements. 
Additionally, the experimental and simulated results indicate that the developed control schemes have great potential for commercial use on marine vehicles, while the presented approach to developing a free running model provides a low-cost yet efficient way to investigate a ship's hydrodynamic characteristics and intelligent autopilots experimentally.
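The abstract above pairs an RBF network with Kalman-filter-style training. As a rough illustration of the idea only (not the thesis's actual controller or ship model), the sketch below updates the output weights of a Gaussian RBF model with a Kalman filter; with fixed centres the model is linear in its weights, so the "extended" filter reduces to the standard linear Kalman update. All names, noise settings, and the toy target function are invented for the example.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations for a single input vector x."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

class KalmanRBF:
    """RBF network whose output weights are adapted by a Kalman filter.

    With fixed centres the model is linear in the weights, so the EKF
    measurement Jacobian is just the feature vector itself.
    """
    def __init__(self, centers, width, q=1e-5, r=1e-2):
        self.centers = np.asarray(centers, dtype=float)
        self.width = width
        n = len(self.centers)
        self.w = np.zeros(n)      # output weights (filter state estimate)
        self.P = np.eye(n)        # weight covariance
        self.Q = q * np.eye(n)    # process noise
        self.R = r                # measurement noise

    def predict(self, x):
        return float(rbf_features(x, self.centers, self.width) @ self.w)

    def update(self, x, y):
        h = rbf_features(x, self.centers, self.width)  # measurement Jacobian
        self.P = self.P + self.Q
        s = h @ self.P @ h + self.R                    # innovation variance
        k = self.P @ h / s                             # Kalman gain
        self.w = self.w + k * (y - h @ self.w)
        self.P = self.P - np.outer(k, h) @ self.P

# Learn a 1-D toy mapping online, sample by sample.
rng = np.random.default_rng(0)
centers = np.linspace(-1, 1, 9).reshape(-1, 1)
net = KalmanRBF(centers, width=0.3)
for _ in range(2000):
    x = rng.uniform(-1, 1, size=1)
    net.update(x, np.sin(3 * x[0]))
err = abs(net.predict(np.array([0.5])) - np.sin(1.5))
```

In an actual autopilot the "measurement" would come from the ship's state errors rather than a known target function, and a UKF variant would propagate sigma points instead of the linear Jacobian.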
APA, Harvard, Vancouver, ISO, and other styles
33

Τρικοίλης, Ιωάννης. "Εύρεση γεωμετρικών χαρακτηριστικών ερυθρών αιμοσφαιρίων από εικόνες σκεδασμένου φωτός." Thesis, 2010. http://nemertes.lis.upatras.gr/jspui/handle/10889/3696.

Full text
Abstract:
In this diploma thesis, methods for identifying the geometrical characteristics of human red blood cells from simulated EM scattering images produced by a 632.8 nm He-Ne laser are studied and applied. The first chapter introduces the properties and characteristics of the erythrocyte and also presents various erythrocyte abnormalities and the detection methods used to date. The second chapter introduces the properties of EM radiation, describes the scattering phenomenon, and presents the direct problem of EM scattering from human erythrocytes. The third chapter consists of two parts. The first part gives an extensive analysis of artificial neural network theory and describes radial basis function (RBF) neural networks; it then presents the feature extraction methods, giving the theoretical and mathematical background of the techniques used, namely the Singular Value Decomposition (SVD) algorithm, the Angular Radial Transform (ART), and Gabor filters. The second part describes the solution of the inverse scattering problem: the methodology applies the SVD image compression algorithm, the ART shape descriptor, and the Gabor-filter texture descriptor to extract the geometrical characteristics, and an RBF neural network to classify the erythrocytes. The fourth and final chapter tests and evaluates the method and summarises the results and conclusions drawn during the course of this thesis.
In this thesis we study and implement methods of estimating the geometrical features of the human red blood cell from a set of simulated light scattering images produced by a He-Ne laser beam at 632.8 nm. In the first chapter an introduction to the properties and characteristics of red blood cells is presented. Furthermore, we describe various abnormalities of erythrocytes and the detection methods used to date. In the second chapter the properties of electromagnetic radiation and the light scattering problem of EM radiation from human erythrocytes are presented. The third chapter consists of two parts. In the first part we analyse the theory of neural networks and describe the radial basis function neural network. Then we describe the theoretical and mathematical background of the methods we use for feature extraction, which are Singular Value Decomposition (SVD), the Angular Radial Transform (ART), and Gabor filters. In the second part the solution of the inverse problem of light scattering is described. We present the methodology of the solution process, in which we implement a Singular Value Decomposition approach, a shape descriptor based on the Angular Radial Transform, and a homogeneous texture descriptor using Gabor filters for the estimation of the geometrical characteristics, together with an RBF neural network for the classification of the erythrocytes. In the fourth and last chapter the described methods are evaluated and we summarise the experimental results and conclusions that were drawn from this thesis.
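As a loose illustration of the pipeline shape described above (compressed image features feeding an RBF-style classifier), the sketch below reduces a small synthetic image to its leading singular values and assigns a class by Gaussian similarity to per-class prototypes. The synthetic data, sizes, and names are invented; the thesis's actual ART and Gabor descriptors and its trained RBF network are not reproduced here.

```python
import numpy as np

def svd_features(img, k=4):
    """Compress an image to its k largest singular values (a crude shape feature)."""
    s = np.linalg.svd(img, compute_uv=False)  # singular values, descending
    return s[:k]

def rbf_classify(f, prototypes, labels, width=1.0):
    """Assign the label whose prototype gives the largest Gaussian response."""
    resp = [np.exp(-np.sum((f - p) ** 2) / (2 * width ** 2)) for p in prototypes]
    return labels[int(np.argmax(resp))]

rng = np.random.default_rng(1)
# Two synthetic 16x16 "scattering image" classes: smooth low-rank vs noisy texture.
smooth = np.outer(np.hanning(16), np.hanning(16))
noisy = rng.standard_normal((16, 16)) * 0.2
protos = [svd_features(smooth), svd_features(noisy)]
# Classify a slightly perturbed smooth image.
query = smooth + 0.01 * rng.standard_normal((16, 16))
pred = rbf_classify(svd_features(query), protos, ["smooth", "textured"], width=0.5)
```

A real RBF classifier would have trained centres and weights per class; the prototype form above only conveys how compact SVD features make the Gaussian comparison tractable.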
34

Carrelli, David John. "Utilising Local Model Neural Network Jacobian Information in Neurocontrol." Thesis, 2006. http://hdl.handle.net/10539/1815.

Full text
Abstract:
Student Number : 8315331 - MSc dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment
In this dissertation an efficient algorithm to calculate the differential of the network output with respect to its inputs is derived for axis-orthogonal Local Model Networks (LMN) and Radial Basis Function (RBF) networks. A new recursive Singular Value Decomposition (SVD) adaptation algorithm, which attempts to circumvent many of the problems found in existing recursive adaptation algorithms, is also derived. Code listings and simulations are presented to demonstrate how the algorithms may be used in on-line adaptive neurocontrol systems; specifically, the control techniques known as series inverse neural control and instantaneous linearization are highlighted. The presented material illustrates how the approach enhances the flexibility of LMN networks, making them suitable for use in both direct and indirect adaptive control methods. By incorporating this ability into LMN networks, an important characteristic of Multi-Layer Perceptron (MLP) networks is obtained whilst retaining the desirable properties of the RBF and LMN approach.
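For a Gaussian RBF network, the differential of the output with respect to the inputs that the abstract refers to has a simple closed form. The sketch below (a generic illustration, not the dissertation's algorithm, and with all names and values invented) computes that input Jacobian analytically and checks it against central finite differences.

```python
import numpy as np

def rbf_forward(x, centers, widths, w):
    """Network output y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))."""
    d = x - centers                                        # (n_centers, dim)
    phi = np.exp(-np.sum(d ** 2, axis=1) / (2 * widths ** 2))
    return float(w @ phi)

def rbf_input_jacobian(x, centers, widths, w):
    """dy/dx in closed form: each Gaussian contributes -w_i*phi_i*(x - c_i)/s_i^2."""
    d = x - centers
    phi = np.exp(-np.sum(d ** 2, axis=1) / (2 * widths ** 2))
    return -(w * phi / widths ** 2) @ d                    # shape (dim,)

rng = np.random.default_rng(2)
centers = rng.standard_normal((5, 3))
widths = np.full(5, 0.8)
w = rng.standard_normal(5)
x = rng.standard_normal(3)

jac = rbf_input_jacobian(x, centers, widths, w)
# Central-difference verification of the analytic Jacobian.
eps = 1e-6
fd = np.array([(rbf_forward(x + eps * e, centers, widths, w)
                - rbf_forward(x - eps * e, centers, widths, w)) / (2 * eps)
               for e in np.eye(3)])
```

This input Jacobian is exactly the quantity needed for instantaneous linearization: it gives the local linear gain of the network around the current operating point.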

To the bibliography