Dissertations / Theses on the topic 'Function approximation'

To see the other types of publications on this topic, follow the link: Function approximation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Function approximation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

楊文聰 and Man-chung Yeung. "Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31209531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

伍卓仁 and Cheuk-yan Ng. "Pointwise Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

吳家樂 and Ka-lok Ng. "Relative Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Miranda, Brando. "Training hierarchical networks for function approximation." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113159.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-60).
In this work we investigate function approximation using hierarchical networks. We start off by investigating the theory proposed by Poggio et al. [2] that Deep Learning Convolutional Neural Networks (DCNs) can be equivalent to hierarchical kernel machines with Radial Basis Functions (RBFs). We investigate the difficulty of training RBF networks with stochastic gradient descent (SGD), as well as hierarchical RBFs. We discovered that training single-layer RBF networks can be quite simple with a good initialization and a good choice of standard deviation for the Gaussian. Training hierarchical RBFs remains an open question; however, we clearly identified the issues surrounding training hierarchical RBFs and potential methods to resolve them. We also compare standard DCN networks to hierarchical Radial Basis Functions in a task that has not been explored yet: the role of depth in learning compositional functions.
by Brando Miranda.
M. Eng.
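Since the abstract's central practical finding concerns training a single-layer Gaussian RBF network with SGD, here is a minimal sketch of that setup, assuming fixed centres drawn from the data and a hand-picked standard deviation; all names and values are illustrative, not taken from the thesis:

```python
import numpy as np

def rbf_features(X, centers, sigma):
    # Gaussian responses: exp(-||x - c||^2 / (2 sigma^2)) for each center c.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                                  # target function

centers = X[rng.choice(len(X), 20, replace=False)]   # initialize from the data
sigma = 0.5                                          # hand-chosen Gaussian std
w = np.zeros(len(centers))

for epoch in range(200):                             # plain SGD on squared error
    for i in rng.permutation(len(X)):
        phi = rbf_features(X[i:i+1], centers, sigma)[0]
        err = phi @ w - y[i]
        w -= 0.1 * err * phi

print("train MSE:", np.mean((rbf_features(X, centers, sigma) @ w - y) ** 2))
```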
APA, Harvard, Vancouver, ISO, and other styles
5

Ng, Ka-lok. "Relative Korovkin approximation in function spaces /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B17506074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ng, Cheuk-yan. "Pointwise Korovkin approximation in function spaces /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13474522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cheung, Ho Yin. "Function approximation with higher-order fuzzy systems /." Hong Kong University of Science and Technology, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20CHEUNG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ong, Wen Eng. "Some Basis Function Methods for Surface Approximation." Thesis, University of Canterbury. Mathematics and Statistics, 2012. http://hdl.handle.net/10092/7776.

Full text
Abstract:
This thesis considers issues in surface reconstruction such as identifying approximation methods that work well for certain applications and developing efficient methods to compute and manipulate these approximations. The first part of the thesis illustrates a new fast evaluation scheme to efficiently calculate thin-plate splines in two dimensions. In the fast multipole method scheme, exponential expansions/approximations are used as an intermediate step in converting far-field series to local polynomial approximations. The contributions here are the extension of the scheme to the thin-plate spline and a new error analysis. The error analysis covers the practically important case where truncated series are used throughout, and through off-line computation of error constants gives sharp error bounds. In the second part of this thesis, we investigate fitting a surface to an object using blobby models as a coarse-level approximation. The aim is to achieve a given quality of approximation with relatively few parameters. This process involves an optimization procedure where a number of blobs (ellipses or ellipsoids) are separately fitted to a cloud of points. The optimized blobs are then combined to yield an implicit surface approximating the cloud of points. The results for our test cases in 2 and 3 dimensions are very encouraging. For many applications, the coarse-level blobby model itself will be sufficient; for example, adding texture on top of the blobby surface can give a surprisingly realistic image. The last part of the thesis describes a method to reconstruct surfaces with known discontinuities. We fit a surface to the data points by performing scattered data interpolation using compactly supported RBFs with respect to a geodesic distance. Techniques from computational geometry such as the visibility graph are used to compute the shortest Euclidean distance between two points, avoiding any obstacles. Results show that discontinuities on the surface were clearly reconstructed.
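For context, a direct (dense) implementation of the 2-D thin-plate spline interpolant whose evaluation the first part of the thesis accelerates might look as follows. This O(n^3) solve and O(n) per-point evaluation are exactly what fast multipole schemes avoid at scale; the sketch below is illustrative, not the thesis's code:

```python
import numpy as np

def tps_kernel(r):
    # Thin-plate spline phi(r) = r^2 log r, with phi(0) = 0 by continuity.
    with np.errstate(divide="ignore", invalid="ignore"):
        k = r**2 * np.log(r)
    return np.where(r > 0, k, 0.0)

def tps_fit(pts, vals):
    # Solve the standard TPS system with an affine polynomial part.
    n = len(pts)
    r = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    A = tps_kernel(r)
    P = np.hstack([np.ones((n, 1)), pts])
    M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([vals, np.zeros(3)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]          # spline coefficients, polynomial coefficients

def tps_eval(x, pts, lam, c):
    r = np.linalg.norm(x[:, None] - pts[None, :], axis=-1)
    return tps_kernel(r) @ lam + c[0] + x @ c[1:]

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (50, 2))
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]
lam, c = tps_fit(pts, vals)
print(np.abs(tps_eval(pts, pts, lam, c) - vals).max())   # ~0: exact interpolation
```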
APA, Harvard, Vancouver, ISO, and other styles
9

Strand, Filip. "Latent Task Embeddings forFew-Shot Function Approximation." Thesis, KTH, Optimeringslära och systemteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-243832.

Full text
Abstract:
Approximating a function from a few data points is of great importance in fields where data is scarce, such as robotics applications. Recently, scalable and expressive parametric models like deep neural networks have demonstrated superior performance on a wide variety of function approximation tasks when plenty of data is available; however, these methods tend to perform considerably worse in low-data regimes, which calls for alternative approaches. One way to address such limitations is by leveraging prior information about the function class to be estimated, when such information is available. Sometimes this prior may be known in closed mathematical form, but in general it is not. This thesis is concerned with the more general case where the prior can only be sampled from, such as a black-box forward simulator. To this end, we propose a simple and scalable approach to learning a prior over functions by training a neural network on data from a distribution of related functions. This step amounts to building a so-called latent task embedding in which all related functions (tasks) reside and which can later be efficiently searched at task-inference time, a process called fine-tuning. The proposed method can be seen as a special type of auto-encoder and employs the same idea of encoding individual data points during training as the recently proposed Conditional Neural Processes. We extend this work by also incorporating an auxiliary task and by providing additional latent-space search methods for increased performance after the initial training step. The task-embedding framework makes finding the right function from a family of related functions quick and generally requires only a few informative data points from that function. We evaluate the method by regressing onto the harmonic family of curves and by applying it to two robotic systems with the aim of quickly identifying and controlling those systems.
APA, Harvard, Vancouver, ISO, and other styles
10

Hou, Jun. "Function Approximation and Classification with Perturbed Data." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618266875924225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Jackson, Ian Robert Hart. "Radial basis function methods for multivariable approximation." Thesis, University of Cambridge, 1988. https://www.repository.cam.ac.uk/handle/1810/270416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Hales, Stephen. "Approximation by translates of a radial basis function." Thesis, University of Leicester, 2000. http://hdl.handle.net/2381/30513.

Full text
Abstract:
The aim of this work is to investigate the properties of approximations obtained by translates of radial basis functions. A natural progression in the discussion starts with an iterative refinement scheme using a strictly positive definite inverse multiquadric. Error estimates for this method are greatly simplified if the inverse multiquadric is replaced by a strictly conditionally positive definite polyharmonic spline. Such error analysis is conducted in a native space generated by the Fourier transform of the basis function. This space can be restrictive when using very smooth basis functions. Some instances are discussed where the native space of the basis function can be enlarged by creating a strictly positive definite basis function with comparable approximating properties but a significantly different Fourier transform. Before such a construction is possible, however, strictly positive definite functions on R^d with compact support must be examined in some detail. It is demonstrated that the dimension in which a function is positive definite can be determined from its univariate Fourier transform. This work is biased towards the computational aspects of interpolation, and the theory is always given with a view to explaining observable phenomena.
APA, Harvard, Vancouver, ISO, and other styles
13

Steuding, Rasa, Jörn Steuding, Kohji Matsumoto, Antanas Laurinčikas, and Ramūnas Garunkštis. "Effective uniform approximation by the Riemann zeta-function." Department of Mathematics of the Universitat Autònoma de Barcelona, 2010. http://hdl.handle.net/2237/20429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Qi. "Multilevel adaptive radial basis function approximation using error indicators." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/38284.

Full text
Abstract:
In some approximation problems, sampling from the target function can be both expensive and time-consuming. It would be convenient to have a method for indicating where the approximation quality is poor, so that generation of new data provides the user with greater accuracy where needed. In this thesis, the author describes a new adaptive algorithm for Radial Basis Function (RBF) interpolation which aims to assess the local approximation quality and adds or removes points as required to improve the error in the specified region. For multiquadric and Gaussian approximations, one has the flexibility of a shape parameter which can be used to keep the condition number of the interpolation matrix to a moderate size. In this adaptive error indicator (AEI) method, an adaptive shape parameter is applied. Numerical results for test functions which appear in the literature are given in one, two, and three dimensions, to show that this method performs well. A turbine blade design problem from GE Power (Rugby, UK) is considered and the AEI method is applied to it. Moreover, a new multilevel approximation scheme is introduced in this thesis by coupling it with the adaptive error indicator. Preliminary numerical results from this Multilevel Adaptive Error Indicator (MAEI) approximation method are shown. These indicate that the MAEI is able to express the target function well; moreover, it provides highly efficient sampling.
APA, Harvard, Vancouver, ISO, and other styles
15

Mahadevan, Swaminathan. "Probabilistic linear function approximation for value-based reinforcement learning." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98759.

Full text
Abstract:
Reinforcement learning (RL) is a computational framework for learning sequential decision strategies from the interaction of an agent with an unknown dynamic environment. This thesis focuses on value-based reinforcement learning methods, which rely on computing utility values for different behavior strategies that can be adopted by the agent. Real-world complex problems involve very large discrete or continuous state spaces where the use of approximate methods is required. It has been observed that subtle differences in the approximate methods result in very different theoretical properties and empirical behavior. In this thesis, we propose a new framework for discussing many popular function approximation methods, called Probabilistic Linear Function Approximation. This allows us to highlight the key differences of several approximation algorithms used in RL.
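As a reference point for the linear value function approximation the abstract discusses, here is a minimal sketch of linear TD(0) on a toy random-walk chain, with one-hot features so the linear case reduces to the tabular one; the domain and constants are illustrative, not from the thesis:

```python
import numpy as np

# Linear value function approximation: V(s) ~ w . phi(s), updated by TD(0).
n_states, gamma, alpha = 5, 0.95, 0.1
phi = np.eye(n_states)            # one-hot features
w = np.zeros(n_states)
rng = np.random.default_rng(0)

for episode in range(500):
    s = 2                                     # start in the middle
    while True:
        s2 = s + rng.choice([-1, 1])          # uniformly random policy
        done = s2 < 0 or s2 >= n_states
        r = 1.0 if s2 >= n_states else 0.0    # reward only at the right exit
        target = r if done else r + gamma * phi[s2] @ w
        w += alpha * (target - phi[s] @ w) * phi[s]   # TD(0) update
        if done:
            break
        s = s2

print("estimated V:", np.round(w, 2))   # increases toward the rewarding exit
```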
APA, Harvard, Vancouver, ISO, and other styles
16

Van, Roy Benjamin. "Learning and value function approximation in complex decision processes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9960.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 127-133).
by Benjamin Van Roy.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
17

Swingler, Kevin. "Mixed order hyper-networks for function approximation and optimisation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/25349.

Full text
Abstract:
Many systems take inputs, which can be measured and sometimes controlled, and produce outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used either to model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or to find the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence. There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black box problem, which means their structure and the function they implement are opaque to the user. They also suffer from a propensity to become trapped in local minima or large plateaux in the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided and model complexity to be controlled is desirable. This thesis presents such a method: a single framework for both regression and optimisation, the mixed order hyper network (MOHN). A MOHN implements a function f : {-1,1}^n -> R to arbitrary precision. The structure of a MOHN makes the ways in which input variables interact to determine the function output explicit, which allows human insights and complexity control that are very difficult in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights from a sample of data are presented, along with a heuristic method for choosing which connections to include in a model. Several methods for searching a MOHN for inputs that lead to a desired output are compared. Experiments compare a MOHN to an MLP on regression tasks. The MOHN is found to achieve a comparable level of accuracy to an MLP but suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret and to combine into an ensemble. The trade-off between the fit of a model to its training data and to an independent set of test data is shown to be easier to control in a MOHN than in an MLP. A MOHN is also compared to a number of existing optimisation methods, including those using estimation of distribution algorithms, genetic algorithms and simulated annealing. The MOHN is able to find optimal solutions in far fewer function evaluations than these methods on tasks selected from the literature.
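To make the "explicit structure" point concrete, a function of this form over {-1,1}^n can be sketched as a dictionary mapping input subsets (connections) to weights: each weight's role, whether bias, single input, or higher-order interaction, is directly readable. A minimal illustration with made-up weights, not the thesis's implementation:

```python
import numpy as np

# One weight per chosen subset of inputs; the output on x in {-1,1}^n is the
# weighted sum of the corresponding input products. The empty tuple is a bias.
connections = {(): 0.5, (0,): 1.0, (1, 2): -2.0, (0, 1, 3): 0.25}

def mohn(x, connections):
    return sum(w * np.prod(x[list(idx)]) for idx, w in connections.items())

x = np.array([1, -1, 1, -1])
print(mohn(x, connections))   # 0.5 + 1.0*x0 - 2.0*x1*x2 + 0.25*x0*x1*x3
```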
APA, Harvard, Vancouver, ISO, and other styles
18

Skelly, Margaret Mary. "Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Valenzuela, Zaldy M. "Constant and power-of-2 segmentation algorithms for a high speed numerical function generator." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FValenzuela.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hed, Lisa. "Approximation and Subextension of Negative Plurisubharmonic Functions." Licentiate thesis, Umeå : Department of Mathematics and Mathematical Statistics, Umeå University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Burniston, J. D. "A neural network/rule-based architecture for continuous function approximation." Thesis, University of Nottingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Tham, Chen Khong. "Modular on-line function approximation for scaling up reinforcement learning." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309702.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Fischer, Manfred M. "Neural networks. A general framework for non-linear function approximation." Wiley-Blackwell, 2006. http://epub.wu.ac.at/5493/1/NeuralNetworks.pdf.

Full text
Abstract:
The focus of this paper is on the neural network modelling approach that has gained increasing recognition in GIScience in recent years. The novelty of neural networks lies in their ability to model non-linear processes with few, if any, a priori assumptions about the nature of the data-generating process. The paper discusses some important issues that are central for successful application development. The scope is limited to feedforward neural networks, the leading example of neural networks. It is argued that failures in applications can usually be attributed to inadequate learning and/or inadequate complexity of the network model. Parameter estimation and a suitably chosen number of hidden units are thus of crucial importance for the success of real-world neural network applications. The paper views network learning as an optimization problem, reviews two alternative approaches to network learning, and provides insights into current best practice for optimizing complexity so as to perform well on generalization tasks.
APA, Harvard, Vancouver, ISO, and other styles
24

Burton, Christina Marie. "Quadratic Spline Approximation of the Newsvendor Problem Optimal Cost Function." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3087.

Full text
Abstract:
We consider a single-product dynamic inventory problem where the demand distributions in each period are known and independent, each having a density. We assume the lead time and the fixed ordering cost are zero and that there are no capacity constraints. There is a holding cost and a backorder cost for unfulfilled demand, which is backlogged until it is filled by another order. The problem may be nonstationary, and in fact our spline approximation of the optimal cost function is most advantageous when demand falls suddenly. In this case the myopic policy, which is most often used in practice to calculate the optimal inventory level, would be very costly. Our algorithm uses quadratic splines to approximate the optimal cost function for this dynamic inventory problem and calculates the optimal inventory level and optimal cost.
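For contrast with the spline approach, the myopic policy mentioned in the abstract reduces in each period to a single-period newsvendor computation: order up to the b/(b+h) quantile of demand. A sketch under an assumed normal demand, with illustrative parameters:

```python
from scipy.stats import norm

# Myopic base-stock level: with unit holding cost h and backorder cost b,
# order up to the b/(b+h) quantile of the period's demand distribution.
# The thesis's spline method targets the dynamic case where this myopic
# level can be badly suboptimal, e.g. when demand drops suddenly.
h, b = 1.0, 9.0
mu, sd = 100.0, 20.0                     # assumed normal demand
critical_fractile = b / (b + h)          # 0.9
base_stock = norm.ppf(critical_fractile, loc=mu, scale=sd)
print(round(base_stock, 1))              # order-up-to level, ~125.6
```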
APA, Harvard, Vancouver, ISO, and other styles
25

Geva, Shlomo. "Exponential response artificial neurons for pattern classification and function approximation." Thesis, Queensland University of Technology, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
26

Dadashi, Shirin. "Modeling and Approximation of Nonlinear Dynamics of Flapping Flight." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78224.

Full text
Abstract:
The first and most imperative step when designing a biologically inspired robot is to identify the underlying mechanics of the system or animal of interest. Most commonly, perhaps, this process generates a set of coupled nonlinear ordinary or partial differential equations. For this class of systems, the models derived from the morphology of the skeleton are usually very high dimensional, nonlinear, and complex. This is particularly true if joint and link flexibility are included in the model. In addition to complexities that arise from the morphology of the animal, some of the external forces that influence the dynamics of animal motion are very hard to model. A well-established example of these forces is the unsteady aerodynamic forces applied to the wings and body of insects, birds, and bats. These forces result from the interaction of the flapping motion of the wing with the surrounding air. They generate lift and drag during the flapping flight regime, and as a result they play a significant role in the description of the physics that underlies such systems. In this research we focus on dynamic and kinematic models that govern the motion of ground-based robots that emulate flapping flight. The restriction to ground-based biologically inspired robotic systems is predicated on two observations. First, it has become increasingly popular to design and fabricate bio-inspired robots for wind tunnel studies. Second, by restricting the robotic systems to be anchored in an inertial frame, the robotic equations of motion are well understood, and we can focus attention on flapping wing aerodynamics for such nonlinear systems. We study nonlinear modeling, identification, and control problems that feature the above complexities. This document summarizes research progress and plans that focus on two key aspects of modeling, identification, and control of the nonlinear dynamics associated with flapping flight.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
27

Peña, Helena. "Affine Iterated Function Systems, invariant measures and their approximation." Greifswald : Universitätsbibliothek Greifswald, 2017. http://d-nb.info/113081551X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Pan, Minzhe. "Detection and approximation of function of two variables in high dimensions." Master's thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4556.

Full text
Abstract:
This thesis originates from the deterministic algorithm of DeVore, Petrova, and Wojtaszczyk for the detection and approximation of functions of one variable in high dimensions. We propose a deterministic algorithm for the detection and approximation of functions of two variables in high dimensions.
Thesis (M.S.)--University of Central Florida, 2010. Includes bibliographical references (p. 49).
M.S.
Masters
Department of Mathematics
Sciences
APA, Harvard, Vancouver, ISO, and other styles
29

Heinen, Milton Roberto. "A connectionist approach for incremental function approximation and on-line tasks." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/29015.

Full text
Abstract:
This work proposes IGMN (standing for Incremental Gaussian Mixture Network), a new connectionist approach for incremental function approximation and real-time tasks. It is inspired by recent theories about the brain, especially the Memory-Prediction Framework and Constructivist Artificial Intelligence, which endow it with some unique features that are not present in most ANN models such as MLP, RBF and GRNN. Moreover, IGMN is based on strong statistical principles (Gaussian mixture models) and asymptotically converges to the optimal regression surface as more training data arrive. The main advantages of IGMN over other ANN models are: (i) IGMN learns incrementally using a single scan over the training data (each training pattern can be immediately used and discarded); (ii) it can produce reasonable estimates based on few training data; (iii) the learning process can proceed perpetually as new training data arrive (there are no separate phases for learning and recalling); (iv) IGMN can handle the stability-plasticity dilemma and does not suffer from catastrophic interference; (v) the neural network topology is defined automatically and incrementally (new units are added whenever necessary); (vi) IGMN is not sensitive to initialization conditions (in fact there is no random initialization/decision in IGMN); (vii) the same neural network can be used to solve both forward and inverse problems (the information flow is bidirectional) even in regions where the target data are multi-valued; and (viii) IGMN can provide the confidence levels of its estimates. Another relevant contribution of this thesis is the use of IGMN in some important state-of-the-art machine learning and robotic tasks such as model identification, incremental concept formation, reinforcement learning, robotic mapping and time series prediction. In fact, the efficiency of IGMN and its representational power expand the set of potential tasks in which neural networks can be applied, thus opening new research directions in which important contributions can be made. Through several experiments using the proposed model it is demonstrated that IGMN is also robust to overfitting, does not require fine-tuning of its configuration parameters and has very good computational performance, thus allowing its use in real-time control applications. Therefore, IGMN is a very useful machine learning tool for incremental function approximation and on-line prediction.
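The single-scan, use-and-discard learning that the abstract emphasizes rests on incremental updates of Gaussian statistics. A minimal sketch of one such unit is below (a full IGMN also scores samples against every unit and spawns new units on novelty; this fragment only illustrates the online update, with illustrative initialization):

```python
import numpy as np

class GaussianUnit:
    # Incremental mean/covariance updates as each sample arrives, in the
    # spirit of one-scan learning: the sample is used once, then discarded.
    def __init__(self, x0, var0=1.0):
        d = len(x0)
        self.n, self.mean, self.cov = 1.0, x0.copy(), var0 * np.eye(d)

    def update(self, x):
        self.n += 1.0
        delta = x - self.mean
        self.mean += delta / self.n
        # Rank-one running-average covariance update.
        self.cov += (np.outer(delta, x - self.mean) - self.cov) / self.n

rng = np.random.default_rng(0)
unit = GaussianUnit(rng.normal(size=2))
for _ in range(5000):
    unit.update(rng.normal(loc=[1.0, -1.0], size=2))
print(np.round(unit.mean, 2), np.round(np.diag(unit.cov), 2))  # ~[1,-1], ~[1,1]
```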
APA, Harvard, Vancouver, ISO, and other styles
30

Kürschner, Patrick. "Two-sided Eigenvalue Algorithms for Modal Approximation." Master's thesis, Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201001082.

Full text
Abstract:
Large-scale linear time-invariant (LTI) systems arise in many physical and technical fields. An approximation of these large systems, e.g. with model order reduction techniques, is crucial for cost-efficient simulation. In this thesis we focus on a model order reduction method based on modal approximation, where the LTI system is projected onto the left and right eigenspaces corresponding to the dominant poles of the system. These dominant poles are related to the most dominant parts of the residue expansion of the transfer function and usually form a small subset of the eigenvalues of the system matrices. The computation of these dominant poles can be a formidable task, since they can lie anywhere inside the spectrum and the corresponding left eigenvectors have to be approximated as well. We investigate the subspace accelerated dominant pole algorithm and the two-sided and alternating Jacobi-Davidson methods for this modal truncation approach. These methods can be seen as subspace accelerated versions of certain Rayleigh quotient iterations. Several strategies that admit an efficient computation of several dominant poles of single-input single-output LTI systems are examined. Since dominant poles can lie in the interior of the spectrum, we also discuss harmonic subspace extraction approaches, which may improve the convergence of the methods. Extensions of the modal approximation approach and the applied eigenvalue solvers to multi-input multi-output systems are also examined. The discussed eigenvalue algorithms and the model order reduction approach are tested on several practically relevant LTI systems.
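A dense-algebra sketch of the modal approximation idea: expand the SISO transfer function H(s) = c^T (sI - A)^{-1} b in poles and residues, then keep the poles with the largest dominance measure |R_i| / |Re(lambda_i)|. The thesis targets large sparse systems where these few poles are computed iteratively; the toy system below is an illustration, not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = -np.diag(rng.uniform(0.1, 10.0, n)) + 0.1 * rng.standard_normal((n, n))
b, c = rng.standard_normal(n), rng.standard_normal(n)

lam, V = np.linalg.eig(A)
residues = (c @ V) * (np.linalg.inv(V) @ b)   # R_i = (c^T v_i)(w_i^H b), W^H = V^{-1}
dominance = np.abs(residues) / np.abs(lam.real)
keep = np.argsort(dominance)[-6:]             # six most dominant poles

def H_modal(s, idx):
    return sum(residues[i] / (s - lam[i]) for i in idx)

s = 1j * 2.0
full = c @ np.linalg.solve(s * np.eye(n) - A, b)
print(abs(full - H_modal(s, keep)))           # truncation error of the 6-pole model
```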
APA, Harvard, Vancouver, ISO, and other styles
31

Kathman, Steven Jay Jr. "Discrete Small Sample Asymptotics." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/30101.

Full text
Abstract:
Random variables defined on the natural numbers may often be approximated by Poisson variables. Just as normal approximations may be improved by saddlepoint methods, Poisson approximations may be substantially improved by tilting, expansion, and other related methods. This work will develop and examine the use of these methods, as well as present examples where such methods may be needed.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Mouadeb, Mark. "A study in using temporal abstraction and function approximation in reinforcement learning /." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97974.

Full text
Abstract:
The incorporation of temporally extended actions, or options, into the reinforcement learning framework has proven to be useful in planning and knowledge representation. Options provide a form of temporal abstraction, as they represent courses of action of variable duration. At the same time, agents operating in large domains often require the use of function approximation in order to generalize their experience over many states. Previous research has demonstrated the theoretical properties and usefulness of options in small domains where function approximation is not necessary. However, the theoretical and empirical properties of options with function approximation are still not well understood. In this thesis, we investigate the performance of options with the CMAC function approximator on an implementation of the game Asteroids (Atari, 1979). Our results demonstrate that options can be integrated successfully with function approximation. If good options are available, they can facilitate faster learning.
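A CMAC of the kind used here is tile coding: several offset grids, each contributing one active weight, whose sum is the approximate value. A minimal 2-D sketch with illustrative sizes and learning rate, not the thesis's configuration:

```python
import numpy as np

class CMAC:
    # Several offset tilings over [lo, hi)^2; each input activates one tile
    # per tiling, and the value is the sum of the active weights.
    def __init__(self, n_tilings=8, tiles=10, lo=0.0, hi=1.0):
        self.n_tilings, self.tiles = n_tilings, tiles
        self.lo, self.width = lo, (hi - lo) / tiles
        self.w = np.zeros((n_tilings, tiles, tiles))

    def active(self, x, y):
        for t in range(self.n_tilings):
            off = t / self.n_tilings * self.width   # each tiling is shifted
            i = min(int((x - self.lo + off) / self.width), self.tiles - 1)
            j = min(int((y - self.lo + off) / self.width), self.tiles - 1)
            yield t, i, j

    def value(self, x, y):
        return sum(self.w[t, i, j] for t, i, j in self.active(x, y))

    def train(self, x, y, target, alpha=0.1):
        err = target - self.value(x, y)
        for t, i, j in self.active(x, y):
            self.w[t, i, j] += alpha / self.n_tilings * err

cmac = CMAC()
rng = np.random.default_rng(0)
for _ in range(20000):
    x, y = rng.uniform(0, 1, 2)
    cmac.train(x, y, np.sin(2 * np.pi * x) + y)
print(round(cmac.value(0.25, 0.5) - (np.sin(np.pi / 2) + 0.5), 3))  # small error
```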
APA, Harvard, Vancouver, ISO, and other styles
33

Ebeigbe, Donald Ehima. "CONTROL OF RIGID ROBOTS WITH LARGE UNCERTAINTIES USING THE FUNCTION APPROXIMATION TECHNIQUE." Cleveland State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=csu1568034334694515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Desai, Dileep Reddy. "Analog Non-Linear Multi-Variable Function Evaluation By Piece-wise Linear Approximation." University of Akron / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=akron1280110386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Vail, Michelle Louise. "Error estimates for spaces arising from approximation by translates of a basic function." Thesis, University of Leicester, 2002. http://hdl.handle.net/2381/30519.

Full text
Abstract:
We look at aspects of error analysis for interpolation by translates of a basic function. In particular, we consider ideas of localisation and how they can be used to obtain improved error estimates. We shall consider certain seminorms and associated spaces of functions which arise in the study of such interpolation methods. These seminorms are naturally given in an indirect form, that is in terms of the Fourier Transform of the function rather than the function itself. Thus, they do not lend themselves to localisation. However, work by Levesley and Light [17] rewrites these seminorms in a direct form and thus gives a natural way of defining a local seminorm. Using this form of local seminorm we construct associated local spaces. We develop bounded, linear extension operators for these spaces and demonstrate how such extension operators can be used in developing improved error estimates. Specifically, we obtain improved L2 estimates for these spaces in terms of the spacing of the interpolation points. Finally, we begin a discussion of how this approach to localisation compares with alternatives.
APA, Harvard, Vancouver, ISO, and other styles
36

White, David A. (David Allan) 1966. "In-situ wafer uniformity estimation using principal component analysis and function approximation methods." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11456.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Hongyan. "Analysis of statistical learning algorithms in data dependent function spaces /." City University of Hong Kong, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ma-b23750534f.pdf.

Full text
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [87]-100)
APA, Harvard, Vancouver, ISO, and other styles
38

Murray, Andrew Gerard William. "Micro-net the parallel path artificial neuron." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20070423.121528.

Full text
Abstract:
A feed-forward architecture is suggested that increases the complexity of conventional neural network components through the implementation of a more complex scheme of interconnection. This is done with a view to increasing the range of application of the feed-forward paradigm. The uniqueness of this new network design is illustrated by developing an extended taxonomy of accepted published constructs specific and similar to the higher-order, product-kernel approximations achievable using "parallel paths". Network topologies from this taxonomy are then compared to each other and to the architectures containing parallel paths. In attempting this comparison, the context of the term "network topology" is reconsidered. The output of a "channel" in these parallel paths is the product of a conventional connection, as observed facilitating interconnection between two layers in a multilayered perceptron, and the output of a network processing unit, a "control element", that can assume the identity of a number of pre-existing processing paradigms. The inherent property of universal approximation is tested by existence proof, and the method is found to be inconclusive. In so doing, an argument is suggested to indicate that the parametric nature of the functions, as determined by conditions upon initialization, may lead only to conditional approximations; the property of universal approximation is neither confirmed nor denied. Universal approximation cannot be conclusively determined by the application of the Stone-Weierstrass theorem, as adopted from real analysis. This novel implementation requires modifications to component concepts and the training algorithm. The inspiration for these modifications is related back to previously published work that also provides the basis of a proof of concept. By achieving proof of concept, the appropriateness of considering network topology without assessing the impact of the method of training on this topology is considered and discussed in some detail. Results of limited testing are discussed, with an emphasis on visualising component contributions to the global network output.
APA, Harvard, Vancouver, ISO, and other styles
39

Zeriahi, Ahmed. "Fonctions plurisousharmoniques extremales, approximation et croissance des fonctions holomorphes sur des ensembles algebriques." Toulouse 3, 1986. http://www.theses.fr/1986TOU30105.

Full text
Abstract:
We study the extremal properties of the "equilibrium capacitary measure" associated with a "condenser" in a Stein space X of pure dimension. On a parabolic Stein space X we introduce the notion of the extremal function associated with a compact set K in X. We generalize the Siciak-Zaharyuta theory of extremal functions, giving a new approach based on complex potential theory for the complex Monge-Ampère operator. From this we deduce results on the theory of functions with controlled growth at infinity. We study the case of an algebraic subset of C^n. In this case we prove polynomial inequalities and a precise version of a Bernstein-Walsh-type approximation theorem, and we deduce the asymptotic behaviour of certain sequences of polynomials orthogonal with respect to the equilibrium measure. For a complete-intersection variety in C^n, we construct linear integral operators for best polynomial approximation on a compact set with precise control of the error, and we deduce results on the growth of entire functions on such varieties.
APA, Harvard, Vancouver, ISO, and other styles
40

Haider, Syed Shabbir. "Simplified neural networks algorithms for function approximation and regression boosting on discrete input spaces." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/simplified-neural-networks-algorithms-for-function-approximation-and-regression-boosting-on-discrete-input-spaces(566bb98e-1659-4169-af5f-12edb48067fd).html.

Full text
Abstract:
The function approximation capabilities of feedforward neural networks have been widely investigated over the past couple of decades, and quite a lot of work has been carried out to prove the 'universal approximation property' of these networks. Most of the work on the application of neural networks to function approximation has concentrated on problems where the input variables are continuous. However, there are many real-world examples around us in which the input variables take only discrete values, or in which a significant number of the input variables are discrete. Most of the learning algorithms proposed so far do not distinguish between the different features of continuous and discrete input spaces and treat them in more or less the same way. For this reason, the corresponding learning algorithms become unnecessarily complex and time-consuming, especially when dealing with inputs mainly consisting of discrete variables. More recently, it has been shown that by focusing on special features of discrete input spaces, more simplified and robust algorithms can be developed. The main objective of this work is to address the function approximation capabilities of artificial neural networks, with particular emphasis on the development, implementation, testing and analysis of new learning algorithms for the simplified neural network approximation scheme for functions defined on discrete input spaces. By developing the corresponding learning algorithms and testing with different benchmarking data sets, it is shown that, compared with conventional multilayer neural networks for approximating functions on discrete input spaces, the proposed simplified neural network architecture and algorithms can achieve similar or better approximation accuracy. This is particularly the case when dealing with high-dimension, low-sample cases, and with a much simpler architecture and fewer parameters. In order to investigate wider implications of simplified neural networks, their application has been extended to the regression boosting framework. By developing, implementing and testing with empirical data, it has been shown that these simplified neural network based algorithms also perform well in other neural network based ensembles.
APA, Harvard, Vancouver, ISO, and other styles
41

Seppälä, L. (Louna). "Diophantine perspectives to the exponential function and Euler’s factorial series." Doctoral thesis, University of Oulu, 2019. http://urn.fi/urn:isbn:9789529418237.

Full text
Abstract:
The focus of this thesis is on two functions: the exponential function and Euler's factorial series. By constructing explicit Padé approximations, we are able to improve lower bounds for linear forms in the values of these functions. In particular, the dependence on the height of the coefficients of the linear form will be sharpened in the lower bound. The first chapter contains some necessary definitions and auxiliary results needed in later chapters. We give precise definitions for a transcendence measure and Padé approximations of the second type. Siegel's lemma is introduced as a fundamental tool in Diophantine approximation. A brief excursion into exterior algebras shows how they can be used to prove determinant expansion formulas. The reader is also familiarised with valuations of number fields. In Chapter 2, a new transcendence measure for e is proved using type II Hermite-Padé approximations to the exponential function. An improvement on previous transcendence measures is achieved by estimating the common factors of the coefficients of the auxiliary polynomials. The exponential function is the underlying topic of the third chapter as well. There we study the common factors of the maximal minors of some large block matrices that appear when constructing Padé-type approximations to the exponential function. The factorisation of these minors is of interest both because of Bombieri and Vaaler's improved version of Siegel's lemma and because the minors are connected to finding explicit expressions for the approximation polynomials. At the beginning of Chapter 3, two general theorems concerning factors of Vandermonde-type block determinants are proved. In the final chapter, we concentrate on Euler's factorial series, which has a positive radius of convergence in p-adic fields. We establish some non-vanishing results for a linear form in the values of Euler's series at algebraic integer points, and a lower bound for this linear form is derived as well.
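For a concrete taste of the explicit Padé approximations the thesis builds on, the classical [m/n] Padé approximant to e^x admits closed-form integer coefficients (the common factor 1/(m+n)! cancels in the ratio). A short sketch of that standard formula, not the thesis's refined construction:

```python
from math import factorial, exp

def pade_exp(m, n):
    # Numerator and denominator coefficients of the [m/n] Pade approximant
    # to e^x: P(x) = sum_k (m+n-k)! m! / (k! (m-k)!) x^k, Q(x) = P(-x) with
    # m and n swapped; these are exact integers.
    p = [factorial(m + n - k) * factorial(m)
         // (factorial(k) * factorial(m - k)) for k in range(m + 1)]
    q = [factorial(m + n - k) * factorial(n)
         // (factorial(k) * factorial(n - k)) for k in range(n + 1)]
    P = lambda x: sum(c * x**k for k, c in enumerate(p))
    Q = lambda x: sum(c * (-x)**k for k, c in enumerate(q))
    return lambda x: P(x) / Q(x)

r = pade_exp(3, 3)               # [3/3] approximant: matches e^x to order x^7
print(abs(r(1.0) - exp(1.0)))    # ~3e-5 at x = 1
```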
APA, Harvard, Vancouver, ISO, and other styles
42

Wiryana, I. M. "Applications of fuzzy counterpropagation neural networks to non-linear function approximation and background noise elimination." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1994. https://ro.ecu.edu.au/theses/1107.

Full text
Abstract:
This research develops an adaptive filter which can operate in an unknown environment by performing a learning mechanism suitable for the speech enhancement process. It proposes a novel ANN model which incorporates the fuzzy set approach and which can perform non-linear function approximation; the model is used as the basic structure of an adaptive filter. The learning capability of the ANN is expected to reduce the development time and cost of designing adaptive filters based on the fuzzy set approach. A combination of both techniques may result in a learnable system that can tackle the vagueness problem of the changing environment where the adaptive filter operates. The proposed model is called the Fuzzy Counterpropagation Network (Fuzzy CPN). It has fast learning capability and a self-growing structure. The model is applied to non-linear function approximation, chaotic time series prediction and background noise elimination.
APA, Harvard, Vancouver, ISO, and other styles
43

Haro, Antonio. "Example Based Processing For Image And Video Synthesis." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5283.

Full text
Abstract:
The example based processing problem can be expressed as: "Given an example of an image or video before and after processing, apply a similar processing to a new image or video". Our thesis is that there are some problems where a single general algorithm can be used to create a variety of outputs, solely by presenting examples of what is desired to the algorithm. This is valuable if the algorithm to produce the output is non-obvious, e.g. an algorithm to emulate an example painting's style. We limit our investigations to example based processing of images, video, and 3D models, as these data types are easy to acquire and experiment with. We represent this problem first as a texture-synthesis-influenced sampling problem, where the idea is to form feature vectors representative of the data and then sample them coherently to synthesize a plausible output for the new image or video. Grounding the problem in this manner is useful, as both problems involve learning the structure of training data under some assumptions in order to sample it properly. We then reduce the problem to a labeling problem to perform example based processing in a more generalized and principled manner than earlier techniques. This allows us to perform a different estimation of what the output should be by approximating the optimal (and possibly not known) solution through a different approach.
APA, Harvard, Vancouver, ISO, and other styles
44

Marroquin, Jose L. "Measure Fields for Function Approximation." 1993. http://hdl.handle.net/1721.1/7211.

Full text
Abstract:
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes that consist of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for determining the optimal number of components.
APA, Harvard, Vancouver, ISO, and other styles
45

Hsu, Pei-Hsun, and 許珮薰. "Function Approximation Using Generalized Adalines." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/70276008756386302420.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Applied Mathematics
Academic year 91 (2002/2003)
This work explores the learning process of a network of generalized adalines for function approximation. A generalized adaline contains a receptive field and a multi-state activation function, of which the output is the normalized response of an embedded Gaussian array and the input is the feature extracted by the receptive field from a multi-component stimulus. The supervised learning process is modeled by a mathematical framework addressing the subjects of extracting the most effective feature, maximizing utilization of the Gaussian units, and fitting the criteria proposed by the training samples. The mathematical framework is a mixed integer and linear programming problem, and can be solved by a hybrid of mean field annealing and gradient descent methods. As a result, we have three sets of interactive dynamics for the new supervised learning process. Numerical simulations show that the learning process is able to generate essential internal representations for the mapping underlying the training samples.
APA, Harvard, Vancouver, ISO, and other styles
46

Taylor, Gavin. "Feature Selection for Value Function Approximation." Diss., 2011. http://hdl.handle.net/10161/3891.

Full text
Abstract:

The field of reinforcement learning concerns the question of automated action selection given past experiences. As an agent moves through the state space, it must recognize which state choices are best in terms of allowing it to reach its goal. This is quantified with value functions, which evaluate a state and return the sum of rewards the agent can expect to receive from that state. Given a good value function, the agent can choose the actions which maximize this sum of rewards. Value functions are often chosen from a linear space defined by a set of features; this method offers a concise structure, low computational effort, and resistance to overfitting. However, because the number of features is small, this method depends heavily on these few features being expressive and useful, making the selection of these features a core problem. This document discusses this selection.

Aside from a review of the field, contributions include a new understanding of the role approximate models play in value function approximation, leading to new methods for analyzing feature sets in an intuitive way, both using the linear and the related kernelized approximation architectures. Additionally, we present a new method for automatically choosing features during value function approximation which has a bounded approximation error and produces superior policies, even in extremely noisy domains.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
47

Yu, Shih-Chieh, and 游世杰. "Design of Special Function Unit with Dual-Precision Function Approximation." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/94dmzc.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Painter-Wakefield, Christopher Robert. "Sparse Value Function Approximation for Reinforcement Learning." Diss., 2013. http://hdl.handle.net/10161/7250.

Full text
Abstract:

A key component of many reinforcement learning (RL) algorithms is the approximation of the value function. The design and selection of features for approximation in RL is crucial, and an ongoing area of research. One approach to the problem of feature selection is to apply sparsity-inducing techniques in learning the value function approximation; such sparse methods tend to select relevant features and ignore irrelevant features, thus automating the feature selection process. This dissertation describes three contributions in the area of sparse value function approximation for reinforcement learning.

One method for obtaining sparse linear approximations is the inclusion in the objective function of a penalty on the sum of the absolute values of the approximation weights. This L1 regularization approach was first applied to temporal difference learning in the LARS-inspired, batch learning algorithm LARS-TD. In our first contribution, we define an iterative update equation which has as its fixed point the L1 regularized linear fixed point of LARS-TD. The iterative update gives rise naturally to an online stochastic approximation algorithm. We prove convergence of the online algorithm and show that the L1 regularized linear fixed point is an equilibrium fixed point of the algorithm. We demonstrate the ability of the algorithm to converge to the fixed point, yielding a sparse solution with modestly better performance than unregularized linear temporal difference learning.

Our second contribution extends LARS-TD to integrate policy optimization with sparse value learning. We extend the L1 regularized linear fixed point to include a maximum over policies, defining a new, "greedy" fixed point. The greedy fixed point adds a new invariant to the set which LARS-TD maintains as it traverses its homotopy path, giving rise to a new algorithm integrating sparse value learning and optimization. The new algorithm is demonstrated to be similar in performance with policy iteration using LARS-TD.

Finally, we consider another approach to sparse learning, that of using a simple algorithm that greedily adds new features. Such algorithms have many of the good properties of the L1 regularization methods, while also being extremely efficient and, in some cases, allowing theoretical guarantees on recovery of the true form of a sparse target function from sampled data. We consider variants of orthogonal matching pursuit (OMP) applied to RL. The resulting algorithms are analyzed and compared experimentally with existing L1 regularized approaches. We demonstrate that perhaps the most natural scenario in which one might hope to achieve sparse recovery fails; however, one variant provides promising theoretical guarantees under certain assumptions on the feature dictionary while another variant empirically outperforms prior methods both in approximation accuracy and efficiency on several benchmark problems.


Dissertation
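The orthogonal matching pursuit variants discussed in the final contribution build on the plain OMP loop for sparse regression: greedily add the feature most correlated with the residual, then refit by least squares on the selected support. A minimal sketch on synthetic data, illustrative rather than the dissertation's RL-specific variant:

```python
import numpy as np

def omp(Phi, y, k):
    # Orthogonal matching pursuit: select k features greedily.
    n, d = Phi.shape
    support, residual = [], y.copy()
    for _ in range(k):
        scores = np.abs(Phi.T @ residual)
        scores[support] = -np.inf                  # never pick a feature twice
        support.append(int(np.argmax(scores)))
        w_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ w_s       # refit, update residual
    w = np.zeros(d)
    w[support] = w_s
    return w

rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[[3, 17, 41]] = [2.0, -1.5, 1.0]             # sparse ground truth
y = Phi @ w_true + 0.01 * rng.standard_normal(200)
print(np.nonzero(omp(Phi, y, 3))[0])               # recovers {3, 17, 41}
```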
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Jack, and 李忠庭. "AN APPROXIMATION METHOD OF A PROBABILITY FUNCTION." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/92cbb4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lai, Jiun-Jao, and 賴俊兆. "Bilinear System Control Using Function Approximation Technique." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/25167568604284579800.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Bio-Industrial Mechatronics Engineering
Academic year 97 (2008/2009)
This study presents a new adaptive controller based on the function approximation technique for uncertain nonhomogeneous bilinear systems containing time-varying uncertainties with unknown bounds. To apply conventional robust strategies, the variation bounds of some of the uncertainties must be known; moreover, because these uncertainties are time-varying, traditional adaptive schemes cannot be adopted. This thesis solves the stabilization problem by using FAT (the Function Approximation Technique) to approximate the time-varying nonlinearity of the system, whose bounds are unknown, and bounded-error performance is obtained. Meanwhile, the singularity problem of the control input can be overcome if a bound on the input gain matrix is available. The proposed approach is based on a Lyapunov-like function with rigorous derivation. Computer simulation results are provided to verify the performance of the proposed method.
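To illustrate the function approximation technique in its simplest form, the sketch below tracks a reference with a scalar plant whose unknown time-varying term is expanded in a truncated Fourier basis with adaptively estimated weights. The plant, gains, and basis size are assumptions for illustration, not taken from the thesis:

```python
import numpy as np

# FAT idea on a scalar plant x' = f(t) + u: write f(t) ~ w^T z(t) with a
# truncated Fourier basis z and constant unknown weights w, estimate the
# weights with a Lyapunov-based adaptive law, and cancel the estimate.
dt, T = 1e-3, 20.0
k, gamma = 5.0, 20.0
f = lambda t: 1.5 * np.sin(2 * t) + 0.5 * np.cos(5 * t)   # "unknown" term

def basis(t):
    # 1, sin(i t), cos(i t) for i = 1..5 -- an 11-term truncated basis.
    z = [1.0]
    for i in range(1, 6):
        z += [np.sin(i * t), np.cos(i * t)]
    return np.array(z)

x, w_hat = 0.0, np.zeros(11)
for step in range(int(T / dt)):
    t = step * dt
    z = basis(t)
    e = x - np.sin(t)                   # tracking error, reference x_d = sin(t)
    u = np.cos(t) - k * e - w_hat @ z   # feedforward - feedback - f estimate
    w_hat += dt * gamma * e * z         # adaptive law from the Lyapunov design
    x += dt * (f(t) + u)                # Euler step of the plant
print("final |tracking error|:", abs(x - np.sin(T)))   # decays as weights converge
```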
APA, Harvard, Vancouver, ISO, and other styles