Dissertations / Theses on the topic 'Function approximation'
Consult the top 50 dissertations / theses for your research on the topic 'Function approximation.'
楊文聰 and Man-chung Yeung. "Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31209531.
伍卓仁 and Cheuk-yan Ng. "Pointwise Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210934.
吳家樂 and Ka-lok Ng. "Relative Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213479.
Miranda, Brando M. Eng Massachusetts Institute of Technology. "Training hierarchical networks for function approximation." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113159.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-60).
In this work we investigate function approximation using hierarchical networks. We start off by investigating the theory proposed by Poggio et al. [2] that deep convolutional neural networks (DCNs) can be equivalent to hierarchical kernel machines with radial basis functions (RBFs). We investigate the difficulty of training RBF networks with stochastic gradient descent (SGD), as well as hierarchical RBFs. We discovered that training single-layer RBF networks can be quite simple given a good initialization and a good choice of standard deviation for the Gaussian. Training hierarchical RBFs remains an open question; however, we clearly identified the issues surrounding training hierarchical RBFs and potential methods to resolve them. We also compare standard DCNs to hierarchical radial basis functions on a task that has not been explored yet: the role of depth in learning compositional functions.
by Brando Miranda.
M. Eng.
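The observation in the abstract above, that single-layer RBF training is easy given good centers and a good Gaussian width, can be sketched in a few lines (my own illustration with assumed values, not the thesis code). With the centers fixed, only the linear output weights remain, so training reduces to least squares:

```python
import math

def rbf_features(x, centers, sigma):
    # Gaussian responses of input x to each center
    return [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]

def solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fac * M[k][c]
    w = [0.0] * n
    for k in reversed(range(n)):
        w[k] = (M[k][-1] - sum(M[k][j] * w[j] for j in range(k + 1, n))) / M[k][k]
    return w

# target f(x) = sin(pi x) on [0, 1]; centers and sigma are the "good choices"
centers, sigma = [0.0, 0.25, 0.5, 0.75, 1.0], 0.25
xs = [i / 20 for i in range(21)]
ys = [math.sin(math.pi * x) for x in xs]

# fit the output weights by linear least squares (normal equations)
Phi = [rbf_features(x, centers, sigma) for x in xs]
m = len(centers)
AtA = [[sum(Phi[r][i] * Phi[r][j] for r in range(len(xs))) for j in range(m)] for i in range(m)]
Aty = [sum(Phi[r][i] * ys[r] for r in range(len(xs))) for i in range(m)]
w = solve(AtA, Aty)

def predict(x):
    return sum(wi * f for wi, f in zip(w, rbf_features(x, centers, sigma)))
```

Because only the output layer is learned, no SGD is needed at all here; the difficulty the thesis reports arises when the layers are stacked.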
Ng, Ka-lok. "Relative Korovkin approximation in function spaces /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B17506074.
Ng, Cheuk-yan. "Pointwise Korovkin approximation in function spaces /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13474522.
Cheung, Ho Yin. "Function approximation with higher-order fuzzy systems /." Hong Kong University of Science and Technology, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20CHEUNG.
Ong, Wen Eng. "Some Basis Function Methods for Surface Approximation." Thesis, University of Canterbury. Mathematics and Statistics, 2012. http://hdl.handle.net/10092/7776.
Strand, Filip. "Latent Task Embeddings for Few-Shot Function Approximation." Thesis, KTH, Optimeringslära och systemteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-243832.
Being able to quickly approximate a function from only a few data points is an important problem, especially in areas where the available data sets are relatively small, for example in parts of robotics. In recent years, flexible and scalable learning methods such as neural networks have shown excellent performance in scenarios where large amounts of data are available. However, these methods tend to perform considerably worse in low-data regimes, which motivates the search for alternative methods. One way to address this limitation is to exploit previous experience and assumptions (prior information) about the function class to be approximated, when such information is available. Sometimes this kind of information can be expressed in closed mathematical form, but in general that is not the case. This thesis focuses on the more general case, where we only assume that we can sample data points from a database of previous experience, for example from a simulator whose internal details are unknown. To this end, we propose a method for learning from this previous experience by pre-training on a larger data set comprising a family of related functions. In this step we build a latent task embedding that captures all variations of the functions in the training data and that can then be searched efficiently to find a specific function, a process we call fine-tuning. The proposed method can be regarded as a special case of an autoencoder and uses the same idea as the recently published Conditional Neural Processes, where individual data points are encoded separately and then aggregated. We extend that method by incorporating an auxiliary function and by proposing further ways to search the latent space after the initial training.
The proposed method makes it possible to find a specific function from typically only a few data points. We evaluate the method by studying its curve-fitting ability on sine curves and by applying it to two robotics problems, with the aim of quickly identifying and controlling these dynamical systems.
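The latent-space search described in this abstract can be caricatured in a few lines (a toy analog of my own, not the thesis model): here the task family is a*sin(x + p), the "latent code" is just the pair z = (a, p), and "fine-tuning" is gradient descent on z given a handful of observations:

```python
import math

# toy analog of few-shot fine-tuning in a latent task space
def decode(z, x):
    a, p = z
    return a * math.sin(x + p)

def fine_tune(points, steps=2000, lr=0.05):
    a, p = 1.0, 0.0                          # start from a neutral latent code
    for _ in range(steps):
        ga = gp = 0.0
        for x, y in points:
            e = decode((a, p), x) - y
            ga += 2 * e * math.sin(x + p)    # d(squared error)/da
            gp += 2 * e * a * math.cos(x + p)  # d(squared error)/dp
        a -= lr * ga / len(points)
        p -= lr * gp / len(points)
    return a, p

# three observations from the hidden task 2*sin(x + 0.5)
pts = [(xv, 2 * math.sin(xv + 0.5)) for xv in (0.0, 1.0, 2.0)]
a, p = fine_tune(pts)
```

The real method replaces the hand-built two-parameter code with a learned embedding, but the few-shot search step has the same shape.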
Hou, Jun. "Function Approximation and Classification with Perturbed Data." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618266875924225.
Jackson, Ian Robert Hart. "Radial basis function methods for multivariable approximation." Thesis, University of Cambridge, 1988. https://www.repository.cam.ac.uk/handle/1810/270416.
Hales, Stephen. "Approximation by translates of a radial basis function." Thesis, University of Leicester, 2000. http://hdl.handle.net/2381/30513.
Steuding, Rasa, Jörn Steuding, Kohji Matsumoto, Antanas Laurinčikas, and Ramūnas Garunkštis. "Effective uniform approximation by the Riemann zeta-function." Department of Mathematics of the Universitat Autònoma de Barcelona, 2010. http://hdl.handle.net/2237/20429.
Zhang, Qi. "Multilevel adaptive radial basis function approximation using error indicators." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/38284.
Mahadevan, Swaminathan. "Probabilistic linear function approximation for value-based reinforcement learning." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98759.
Van Roy, Benjamin. "Learning and value function approximation in complex decision processes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9960.
Includes bibliographical references (p. 127-133).
by Benjamin Van Roy.
Ph.D.
Swingler, Kevin. "Mixed order hyper-networks for function approximation and optimisation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/25349.
Skelly, Margaret Mary. "Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818.
Valenzuela, Zaldy M. "Constant and power-of-2 segmentation algorithms for a high speed numerical function generator." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FValenzuela.pdf.
Hed, Lisa. "Approximation and Subextension of Negative Plurisubharmonic Functions." Licentiate thesis, Umeå : Department of Mathematics and Mathematical Statistics, Umeå University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1799.
Burniston, J. D. "A neural network/rule-based architecture for continuous function approximation." Thesis, University of Nottingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387198.
Tham, Chen Khong. "Modular on-line function approximation for scaling up reinforcement learning." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309702.
Fischer, Manfred M. "Neural networks. A general framework for non-linear function approximation." Wiley-Blackwell, 2006. http://epub.wu.ac.at/5493/1/NeuralNetworks.pdf.
Burton, Christina Marie. "Quadratic Spline Approximation of the Newsvendor Problem Optimal Cost Function." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3087.
Geva, Shlomo. "Exponential response artificial neurons for pattern classification and function approximation." Thesis, Queensland University of Technology, 1992.
Dadashi, Shirin. "Modeling and Approximation of Nonlinear Dynamics of Flapping Flight." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78224.
Ph. D.
Peña, Helena [Verfasser]. "Affine Iterated Function Systems, invariant measures and their approximation / Helena Peña." Greifswald : Universitätsbibliothek Greifswald, 2017. http://d-nb.info/113081551X/34.
Pan, Minzhe. "Detection and approximation of function of two variables in high dimensions." Master's thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4556.
ID: 029050403; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (M.S.)--University of Central Florida, 2010.; Includes bibliographical references (p. 49).
M.S.
Masters
Department of Mathematics
Sciences
Heinen, Milton Roberto. "A connectionist approach for incremental function approximation and on-line tasks." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/29015.
This work proposes IGMN (Incremental Gaussian Mixture Network), a new connectionist approach for incremental function approximation and real-time tasks. It is inspired by recent theories about the brain, especially the Memory-Prediction Framework and Constructivist Artificial Intelligence, which endow it with some unique features that are not present in most ANN models such as MLP, RBF and GRNN. Moreover, IGMN is based on strong statistical principles (Gaussian mixture models) and asymptotically converges to the optimal regression surface as more training data arrive. The main advantages of IGMN over other ANN models are: (i) IGMN learns incrementally using a single scan over the training data (each training pattern can be immediately used and discarded); (ii) it can produce reasonable estimates based on few training data; (iii) the learning process can proceed perpetually as new training data arrive (there are no separate phases for learning and recall); (iv) IGMN can handle the stability-plasticity dilemma and does not suffer from catastrophic interference; (v) the neural network topology is defined automatically and incrementally (new units are added whenever necessary); (vi) IGMN is not sensitive to initialization conditions (in fact there is no random initialization/decision in IGMN); (vii) the same neural network can be used to solve both forward and inverse problems (the information flow is bidirectional), even in regions where the target data are multi-valued; and (viii) IGMN can provide confidence levels for its estimates. Another relevant contribution of this thesis is the use of IGMN in important state-of-the-art machine learning and robotic tasks such as model identification, incremental concept formation, reinforcement learning, robotic mapping and time series prediction.
In fact, the efficiency of IGMN and its representational power expand the set of potential tasks in which neural networks can be applied, thus opening new research directions in which important contributions can be made. Through several experiments using the proposed model, it is demonstrated that IGMN is also robust to overfitting, does not require fine-tuning of its configuration parameters, and has very good computational performance, thus allowing its use in real-time control applications. Therefore, IGMN is a very useful machine learning tool for incremental function approximation and on-line prediction.
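The single-scan, self-growing behaviour described above can be sketched minimally (a deliberate simplification for illustration, not Heinen's actual IGMN): each unit keeps joint statistics of (x, y), a novelty criterion grows the topology, and prediction is a Gaussian-weighted average of the units' output means:

```python
import math

class IncrementalMixture:
    def __init__(self, sigma=0.3, novelty=2.0):
        self.sigma, self.novelty = sigma, novelty
        self.units = []                      # each unit: [count, mean_x, mean_y]

    def learn(self, x, y):
        # single scan: each sample is used once and then discarded
        best = min(self.units, key=lambda u: abs(x - u[1]), default=None)
        if best is None or abs(x - best[1]) > self.novelty * self.sigma:
            self.units.append([1, x, y])     # new unit added when necessary
        else:                                # otherwise update the winning unit
            best[0] += 1
            best[1] += (x - best[1]) / best[0]
            best[2] += (y - best[2]) / best[0]

    def predict(self, x):
        ws = [u[0] * math.exp(-((x - u[1]) ** 2) / (2 * self.sigma ** 2))
              for u in self.units]
        return sum(w * u[2] for w, u in zip(ws, self.units)) / sum(ws)

# stream samples of y = x^2 once through the network
m = IncrementalMixture()
for i in range(41):
    xv = i * 0.05
    m.learn(xv, xv * xv)
pred = m.predict(1.0)
```

The real IGMN maintains full joint covariances and likelihood-based novelty tests, which is what turns this crude averaging into consistent regression.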
Kürschner, Patrick. "Two-sided Eigenvalue Algorithms for Modal Approximation." Master's thesis, Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201001082.
Kathman, Steven Jay Jr. "Discrete Small Sample Asymptotics." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/30101.
Ph. D.
Mouadeb, Mark. "A study in using temporal abstraction and function approximation in reinforcement learning /." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97974.
Ebeigbe, Donald Ehima. "CONTROL OF RIGID ROBOTS WITH LARGE UNCERTAINTIES USING THE FUNCTION APPROXIMATION TECHNIQUE." Cleveland State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=csu1568034334694515.
Desai, Dileep Reddy. "Analog Non-Linear Multi-Variable Function Evaluation By Piece-wise Linear Approximation." University of Akron / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=akron1280110386.
Vail, Michelle Louise. "Error estimates for spaces arising from approximation by translates of a basic function." Thesis, University of Leicester, 2002. http://hdl.handle.net/2381/30519.
White, David A. (David Allan) 1966. "In-situ wafer uniformity estimation using principal component analysis and function approximation methods." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11456.
Wang, Hongyan. "Analysis of statistical learning algorithms in data dependent function spaces /." City University of Hong Kong, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ma-b23750534f.pdf.
"Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [87]-100).
Murray, Andrew Gerard William. "Micro-net the parallel path artificial neuron." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20070423.121528.
Zeriahi, Ahmed. "Fonctions plurisousharmoniques extremales, approximation et croissance des fonctions holomorphes sur des ensembles algebriques." Toulouse 3, 1986. http://www.theses.fr/1986TOU30105.
Haider, Syed Shabbir. "Simplified neural networks algorithms for function approximation and regression boosting on discrete input spaces." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/simplified-neural-networks-algorithms-for-function-approximation-and-regression-boosting-on-discrete-input-spaces(566bb98e-1659-4169-af5f-12edb48067fd).html.
Seppälä, L. (Louna). "Diophantine perspectives to the exponential function and Euler’s factorial series." Doctoral thesis, University of Oulu, 2019. http://urn.fi/urn:isbn:9789529418237.
Wiryana, I. M. "Applications of fuzzy counterpropagation neural networks to non-linear function approximation and background noise elimination." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1994. https://ro.ecu.edu.au/theses/1107.
Haro, Antonio. "Example Based Processing For Image And Video Synthesis." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5283.
Marroquin, Jose L. "Measure Fields for Function Approximation." 1993. http://hdl.handle.net/1721.1/7211.
Hsu, Pei-Hsun, and 許珮薰. "Function Approximation Using Generalized Adalines." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/70276008756386302420.
Full text國立東華大學
應用數學系
91
This work explores the learning process of a network of generalized adalines for function approximation. A generalized adaline contains a receptive field and a multi-state activation function, whose output is the normalized response of an embedded Gaussian array and whose input is the feature extracted by the receptive field from a multi-component stimulus. The supervised learning process is modeled by a mathematical framework addressing the extraction of the most effective feature, the maximal utilization of the Gaussian units, and the fitting criteria proposed by the training samples. The mathematical framework is a mixed integer and linear programming problem, and can be solved by a hybrid of the mean-field annealing and gradient descent methods. As a result, we have three sets of interactive dynamics for the new supervised learning process. Numerical simulations show that the learning process is able to generate the essential internal representations for the mapping underlying the training samples.
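One concrete reading of the multi-state activation described above (an illustrative guess at the construction, not the thesis code): the unit's output is the response-normalized average over a grid of output states, each state responding to the input through a Gaussian unit:

```python
import math

def multi_state_activation(u, states, beta=20.0):
    # responses of the embedded Gaussian array to the input u
    resp = [math.exp(-beta * (u - s) ** 2) for s in states]
    total = sum(resp)
    # normalized response: a soft assignment to the discrete states
    return sum(r * s for r, s in zip(resp, states)) / total

states = [0.0, 0.25, 0.5, 0.75, 1.0]   # the unit's discrete output states
```

For large beta the unit snaps to the nearest state; for small beta it interpolates smoothly between states.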
Taylor, Gavin. "Feature Selection for Value Function Approximation." Diss., 2011. http://hdl.handle.net/10161/3891.
The field of reinforcement learning concerns the question of automated action selection given past experiences. As an agent moves through the state space, it must recognize which state choices are best in terms of allowing it to reach its goal. This is quantified with value functions, which evaluate a state and return the sum of rewards the agent can expect to receive from that state. Given a good value function, the agent can choose the actions which maximize this sum of rewards. Value functions are often chosen from a linear space defined by a set of features; this method offers a concise structure, low computational effort, and resistance to overfitting. However, because the number of features is small, this method depends heavily on these few features being expressive and useful, making the selection of these features a core problem. This document discusses this selection.
Aside from a review of the field, contributions include a new understanding of the role approximate models play in value function approximation, leading to new methods for analyzing feature sets in an intuitive way, using both the linear and the related kernelized approximation architectures. Additionally, we present a new method for automatically choosing features during value function approximation which has a bounded approximation error and produces superior policies, even in extremely noisy domains.
Dissertation
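The linear value-function setting discussed in the abstract above can be made concrete with a tiny worked example (a generic illustration, not from the dissertation): LSTD on a three-state deterministic cycle, where one-hot features are expressive enough to recover the exact values:

```python
gamma = 0.9
transitions = [(0, 1, 0.0), (1, 2, 0.0), (2, 0, 1.0)]  # (s, s', reward)

def phi(s):
    # one-hot feature vector for state s
    return [1.0 if i == s else 0.0 for i in range(3)]

# LSTD system: A w = b with A = sum phi(s)(phi(s) - gamma*phi(s'))^T, b = sum r*phi(s)
A = [[0.0] * 3 for _ in range(3)]
b = [0.0] * 3
for s, s2, r in transitions:
    f, f2 = phi(s), phi(s2)
    for i in range(3):
        b[i] += r * f[i]
        for j in range(3):
            A[i][j] += f[i] * (f[j] - gamma * f2[j])

def solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fac * M[k][c]
    w = [0.0] * n
    for k in reversed(range(n)):
        w[k] = (M[k][-1] - sum(M[k][j] * w[j] for j in range(k + 1, n))) / M[k][k]
    return w

w = solve(A, b)   # w[s] is the learned value of state s
```

With fewer or less expressive features than states, the same system yields only a projection of the true values, which is exactly why feature selection matters.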
Shih-Chieh Yu and 游世杰. "Design of Special Function Unit with Dual-Precision Function Approximation." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/94dmzc.
Painter-Wakefield, Christopher Robert. "Sparse Value Function Approximation for Reinforcement Learning." Diss., 2013. http://hdl.handle.net/10161/7250.
A key component of many reinforcement learning (RL) algorithms is the approximation of the value function. The design and selection of features for approximation in RL is crucial, and an ongoing area of research. One approach to the problem of feature selection is to apply sparsity-inducing techniques in learning the value function approximation; such sparse methods tend to select relevant features and ignore irrelevant features, thus automating the feature selection process. This dissertation describes three contributions in the area of sparse value function approximation for reinforcement learning.
One method for obtaining sparse linear approximations is the inclusion in the objective function of a penalty on the sum of the absolute values of the approximation weights.
Our second contribution extends LARS-TD to integrate policy optimization with sparse value learning.
Finally, we consider another approach to sparse learning, that of using a simple algorithm that greedily adds new features.
Dissertation
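The absolute-value penalty mentioned in the abstract above (an L1, LASSO-style penalty) can be demonstrated generically with proximal gradient descent (my own illustration, not the dissertation's LARS-TD code): the weight on an irrelevant feature is driven exactly to zero while the relevant weight is recovered:

```python
def soft(v, t):
    # soft-thresholding: the proximal operator of the L1 penalty
    return v - t if v > t else v + t if v < -t else 0.0

X = [[1.0, 0.3], [2.0, -0.4], [3.0, 0.1], [4.0, -0.2]]  # column 2 is noise
y = [2.0, 4.0, 6.0, 8.0]                                # y = 2 * x1 exactly
w = [0.0, 0.0]
lr, lam = 0.01, 0.1
for _ in range(5000):
    g = [0.0, 0.0]                                      # gradient of the MSE part
    for xi, yi in zip(X, y):
        e = sum(wj * xj for wj, xj in zip(w, xi)) - yi
        for j in range(2):
            g[j] += 2 * e * xi[j] / len(X)
    # gradient step on the loss, then shrink toward zero (ISTA)
    w = [soft(wj - lr * gj, lr * lam) for wj, gj in zip(w, g)]
```

The shrinkage step is what produces exact zeros, i.e. automatic feature selection, rather than merely small weights.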
Lee, Jack, and 李忠庭. "AN APPROXIMATION METHOD OF A PROBABILITY FUNCTION." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/92cbb4.
Lai, Jiun-Jao, and 賴俊兆. "Bilinear System Control Using Function Approximation Technique." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/25167568604284579800.
Full text國立臺灣大學
生物產業機電工程學研究所
97
This study presents a new adaptive controller based on the function approximation technique for uncertain nonhomogeneous bilinear systems containing time-varying uncertainties with unknown bounds. To apply conventional robust strategies, we must know the variation bounds of some of the uncertainties; moreover, due to the time-varying nature of these uncertainties, traditional adaptive schemes cannot be adopted. This thesis solves the stabilization problem by using FAT (Function Approximation Technique) to approximate the time-varying nonlinearities with unknown bounds, and bounded-error performance is obtained. Meanwhile, the singularity problem of the control input can be overcome if a bound on the input gain matrix is available. The proposed approach is based on a Lyapunov-like function with rigorous derivation. Computer simulation results are provided to verify the performance of the proposed method.
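The core of FAT as described above can be sketched on a scalar toy plant (an illustration under simplifying assumptions, not the thesis' bilinear design): the unknown time-varying term f(t) is expanded in a finite basis, and the weight estimates follow a Lyapunov-derived adaptive law:

```python
import math

k, gamma, dt = 5.0, 10.0, 0.001   # feedback gain, adaptation gain, Euler step
w_true = [1.0, 0.5, 0.0]          # unknown to the controller: f(t) = 1 + 0.5*sin(t)
w_hat = [0.0, 0.0, 0.0]           # FAT weight estimates for basis [1, sin t, cos t]
x, t = 1.0, 0.0                   # plant: dx/dt = f(t) + u
for _ in range(20000):            # 20 seconds of Euler integration
    phi = [1.0, math.sin(t), math.cos(t)]
    f = sum(a * b for a, b in zip(w_true, phi))
    f_hat = sum(a * b for a, b in zip(w_hat, phi))
    u = -k * x - f_hat            # cancel the estimated uncertainty
    # adaptive law dw/dt = gamma * x * phi, from V = x^2/2 + |w_err|^2/(2*gamma)
    w_hat = [wi + dt * gamma * x * p for wi, p in zip(w_hat, phi)]
    x += dt * (f + u)
    t += dt
```

With this law the Lyapunov derivative reduces to -k*x^2, so the state is driven to zero even though f(t) is time-varying with no known bound; the thesis applies the same idea to the much harder bilinear, matrix-gain case.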