Dissertations on the topic "Approximation of convex function"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for your research on the topic "Approximation of convex function".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Azimi, Roushan Tahere. "Inequalities related to norm and numerical radius of operators." Thesis, Pau, 2020. http://www.theses.fr/2020PAUU3002.
In this thesis, after presenting the necessary concepts and preliminaries, we investigate the Hermite-Hadamard inequality for geometrically convex functions. Then, by introducing operator geometrically convex functions, we extend the results and prove a Hermite-Hadamard type inequality for this class of functions. Next, by proving the log-convexity of some functions defined in terms of the unitarily invariant norm, and using the relation between geometrically convex functions and log-convex functions, we present several refinements of some well-known operator norm inequalities. We also prove operator versions of some numerical results obtained for approximating a class of convex functions; as an application, we refine the Hermite-Hadamard inequality for a class of operator convex functions. Finally, we discuss the numerical radius of an operator, which is equivalent to the operator norm, state some related results, and conclude the thesis by obtaining upper bounds for the Berezin number of an operator, which is contained in the numerical range of that operator.
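For context, the Hermite-Hadamard inequality discussed in this abstract states that a convex function f on [a, b] satisfies f((a+b)/2) ≤ (1/(b-a)) ∫ f(t) dt ≤ (f(a)+f(b))/2. A quick numerical illustration (the function and interval are arbitrary choices for this sketch, not taken from the thesis):

```python
import math

a, b = 1.0, 3.0
f = math.exp                      # exp is convex on [a, b]

# midpoint value, integral mean, and endpoint average
midpoint = f((a + b) / 2)
mean = (f(b) - f(a)) / (b - a)    # exact mean of exp over [a, b], since exp is its own antiderivative
endpoints = (f(a) + f(b)) / 2

# Hermite-Hadamard: midpoint <= integral mean <= endpoint average
print(midpoint <= mean <= endpoints)
```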
Висоцька, Марія Андріївна. "Модель оптимального податку". Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2021. https://ela.kpi.ua/handle/123456789/45204.
The diploma thesis contains 95 pages, 16 figures, 8 tables, 2 appendices, and 11 sources. Theme: an optimal tax model. Objective: analyze the existing optimal tax model, propose an alternative model, and carry out a study based on demonstration data. The result of this work is a software product with a user interface that finds the optimal tax model for given data and checks it for correctness. As input, it receives data on the tax amount, the tax rate, and their schedules.
Bose, Gibin. "Approximation H infini, interpolation analytique et optimisation convexe : application à l’adaptation d’impédance large bande." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4007.
The thesis makes an in-depth study of one of the classical problems in RF circuit design, the problem of impedance matching. The matching problem addresses the issue of transmitting the maximum available power from a source to a load within a frequency band. Antennas are one of the classical devices in which impedance matching plays an important role. The design of a matching circuit for a given load primarily amounts to finding a lossless scattering matrix which, when chained to the load, minimizes the reflection of power in the total system. In this work, both the theoretical aspects of the broadband matching problem and the practical applicability of the developed approaches are given due importance. Part I of the thesis covers two different yet closely related approaches to the matching problem, based on the classical approaches developed by Helton and Fano-Youla to study broadband matching problems. The framework established in the first approach entails finding the best H infinity approximation to an L infinity function, Փ, via Nehari's theory. This amounts to reducing the problem to a generalized eigenvalue problem for an operator defined on H2, the Hankel operator HՓ. The realizability of a given gain is governed by the constraint that the operator norm of HՓ be less than or equal to one. The second approach formulates the matching problem as a convex optimisation problem in which the gain profiles are given further flexibility compared to the previous approach. It is based on two rich theories, namely Fano-Youla matching theory and analytic interpolation. The realizability of a given gain rests on the Fano-Youla de-embedding conditions, which reduce to the positivity of a classical matrix in analytic interpolation theory, the Pick matrix. The concavity of the Pick matrix concerned allows the solution to be found by solving a non-linear semi-definite programming problem.
Most importantly, we estimate sharp lower bounds for the matching criterion for finite-degree matching circuits and furnish circuits attaining those bounds. Part II of the thesis aims at realizing the matching circuits as ladder networks consisting of inductors and capacitors, and discusses some important realizability constraints as well. Matching circuits are designed for several mismatched antennas, testing the robustness of the developed approach. The theory developed in the first part of the thesis provides an efficient way of comparing the matching criterion obtained to the theoretical limits.
Lopez, Mario A., Shlomo Reisner, and reisner@math.haifa.ac.il. "Linear Time Approximation of 3D Convex Polytopes." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1005.ps.
Fung, Ping-yuen, and 馮秉遠. "Approximation for minimum triangulations of convex polyhedra." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B29809964.
Fung, Ping-yuen. "Approximation for minimum triangulations of convex polyhedra." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23273197.
Verschueren, Robin [Verfasser], and Moritz [Akademischer Betreuer] Diehl. "Convex approximation methods for nonlinear model predictive control." Freiburg : Universität, 2018. http://d-nb.info/1192660641/34.
Boiger, Wolfgang Josef. "Stabilised finite element approximation for degenerate convex minimisation problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16790.
Infimising sequences of nonconvex variational problems often do not converge strongly in Sobolev spaces due to fine oscillations. These oscillations are physically meaningful; finite element approximations, however, fail to resolve them in general. Relaxation methods replace the nonconvex energy with its (semi)convex hull. This leads to a macroscopic model which is degenerate in the sense that it is not strictly convex and possibly admits multiple minimisers. The lack of control on the primal variable leads to difficulties in the a priori and a posteriori finite element error analysis, such as the reliability-efficiency gap and no strong convergence. To overcome these difficulties, stabilisation techniques add a discrete positive definite term to the relaxed energy. Bartels et al. (IFB, 2004) apply stabilisation to two-dimensional problems and thereby prove strong convergence of gradients. This result is restricted to smooth solutions and quasi-uniform meshes, which prohibit adaptive mesh refinements. This thesis concerns a modified stabilisation term and proves convergence of the stress and, for smooth solutions, strong convergence of gradients, even on unstructured meshes. Furthermore, the thesis derives the so-called flux error estimator and proves its reliability and efficiency. For interface problems with piecewise smooth solutions, a refined version of this error estimator is developed, which provides control of the error of the primal variable and its gradient and thus yields strong convergence of gradients. The refined error estimator converges faster than the flux error estimator and therefore narrows the reliability-efficiency gap. Numerical experiments with five benchmark examples from computational microstructure and topology optimisation complement and confirm the theoretical results.
Schulz, Henrik. "Polyhedral Surface Approximation of Non-Convex Voxel Sets and Improvements to the Convex Hull Computing Method." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-27865.
Schulz, Henrik. "Polyhedral Surface Approximation of Non-Convex Voxel Sets and Improvements to the Convex Hull Computing Method." Forschungszentrum Dresden-Rossendorf, 2009. https://hzdr.qucosa.de/id/qucosa%3A21613.
Повний текст джерелаLubin, Miles (Miles C. ). "Mixed-integer convex optimization : outer approximation algorithms and modeling power." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113434.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 137-143).
In this thesis, we study mixed-integer convex optimization, or mixed-integer convex programming (MICP), the class of optimization problems where one seeks to minimize a convex objective function subject to convex constraints and integrality restrictions on a subset of the variables. We focus on two broad and complementary questions on MICP. The first question we address is, "what are efficient methods for solving MICP problems?" The methodology we develop is based on outer approximation, which allows us, for example, to reduce MICP to a sequence of mixed-integer linear programming (MILP) problems. By viewing MICP from the conic perspective of modern convex optimization as defined by Ben-Tal and Nemirovski, we obtain significant computational advances over the state of the art, e.g., by automating extended formulations through disciplined convex programming. We develop the first finite-time outer approximation methods for problems in general mixed-integer conic form (which includes mixed-integer second-order-cone programming and mixed-integer semidefinite programming) and implement them in an open-source solver, Pajarito, obtaining competitive performance with the state of the art. The second question we address is, "which nonconvex constraints can be modeled with MICP?" This question is important for understanding both the modeling power gained in generalizing from MILP to MICP and the potential applicability of MICP to nonconvex optimization problems that may not be naturally represented with integer variables. Among our contributions, we completely characterize the case where the number of integer assignments is bounded (e.g., mixed-binary), and to address the more general case we develop the concept of "rationally unbounded" convex sets. We show that under this natural restriction, the projections of MICP feasible sets are well behaved and can be completely characterized in some settings.
by Miles Lubin.
Ph. D.
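The outer-approximation idea described in this abstract can be sketched in a few lines: gradient (tangent) cuts build a piecewise-linear underestimator of the convex objective, and each master problem minimizes that underestimator over the integer set; the loop stops when the lower bound from the cuts meets the true objective value. The toy problem below (a one-variable quadratic over the integers 0..10, with the master problem solved by enumeration instead of an MILP solver) is my own illustration, not Pajarito's implementation:

```python
def f(x):
    return (x - 2.6) ** 2          # convex objective (illustrative)

def fprime(x):
    return 2.0 * (x - 2.6)

candidates = range(11)             # stand-in for the MILP feasible set
cuts = []                          # tangent cuts stored as (slope, intercept)
x_k = 0                            # initial point

for _ in range(20):
    # add the gradient cut f(x) >= f(x_k) + f'(x_k) * (x - x_k)
    cuts.append((fprime(x_k), f(x_k) - fprime(x_k) * x_k))
    # master problem: minimize the piecewise-linear underestimator over the integers
    x_k = min(candidates, key=lambda x: max(a * x + b for a, b in cuts))
    lower = max(a * x_k + b for a, b in cuts)   # lower bound from the cut model
    if f(x_k) - lower <= 1e-9:                  # bounds meet: x_k is optimal
        break

print(x_k, f(x_k))
```

On this instance the loop converges to the integer minimizer x = 3 after a handful of cuts.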
楊文聰 and Man-chung Yeung. "Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31209531.
Wright, Stephen E. "Convergence and approximation for primal-dual methods in large-scale optimization /." Thesis, Connect to this title online; UW restricted, 1990. http://hdl.handle.net/1773/5751.
Kuhn, Daniel. "Generalized bounds for convex multistage stochastic programs /." Berlin [u.a.] : Springer, 2005. http://www.loc.gov/catdir/enhancements/fy0818/2004109705-d.html.
伍卓仁 and Cheuk-yan Ng. "Pointwise Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210934.
吳家樂 and Ka-lok Ng. "Relative Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213479.
Ng, Ka-lok. "Relative Korovkin approximation in function spaces /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B17506074.
Ng, Cheuk-yan. "Pointwise Korovkin approximation in function spaces /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13474522.
Miranda, Brando M. Eng Massachusetts Institute of Technology. "Training hierarchical networks for function approximation." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113159.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-60).
In this work we investigate function approximation using hierarchical networks. We start off by investigating the theory proposed by Poggio et al [2] that Deep Learning Convolutional Neural Networks (DCN) can be equivalent to hierarchical kernel machines with Radial Basis Functions (RBF). We investigate the difficulty of training RBF networks with stochastic gradient descent (SGD), as well as hierarchical RBF networks. We discovered that training single-layer RBF networks can be quite simple with a good initialization and a good choice of standard deviation for the Gaussian. Training hierarchical RBFs remains an open question; however, we clearly identified the issues surrounding training hierarchical RBFs and potential methods to resolve them. We also compare standard DCN networks to hierarchical Radial Basis Functions on a task that has not been explored yet: the role of depth in learning compositional functions.
by Brando Miranda.
M. Eng.
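The claim above that single-layer RBF training can be simple with a good width and initialization can be illustrated as follows: with fixed Gaussian centers and standard deviation, the output weights solve a plain linear least-squares problem. The target function, centers, and width below are my own choices for the sketch, not the setup used in the thesis:

```python
import numpy as np

# target function to approximate on a grid
xs = np.linspace(-3, 3, 200)
ys = np.sin(xs)

centers = np.linspace(-3, 3, 15)   # fixed Gaussian RBF centers
sigma = 0.7                        # standard deviation of each Gaussian unit

# design matrix of Gaussian radial basis functions, shape (200, 15)
Phi = np.exp(-((xs[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

# output weights by linear least squares (no gradient descent needed)
w, *_ = np.linalg.lstsq(Phi, ys, rcond=None)

pred = Phi @ w
print(float(np.max(np.abs(pred - ys))))   # worst-case approximation error
```

With this width the 15-unit network reproduces sin(x) to well below 1e-2 everywhere on the grid.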
Chan, Jor-ting, and 陳作庭. "Compact convex sets and their affine function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B30425840.
Chan, Jor-ting. "Compact convex sets and their affine function spaces /." [Hong Kong] : University of Hong Kong, 1987. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12344953.
Malek, Alaeddin. "The numerical approximation of surface area by surface triangulation /." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65498.
Pathak, Harsh Nilesh. "Parameter Continuation with Secant Approximation for Deep Neural Networks." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1256.
Trienis, Michael Joseph. "Computational convex analysis : from continuous deformation to finite convex integration." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/2799.
Ben, Daya Mohamed. "Barrier function algorithms for linear and convex quadratic programming." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/25502.
Cheung, Ho Yin. "Function approximation with higher-order fuzzy systems /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20CHEUNG.
Ong, Wen Eng. "Some Basis Function Methods for Surface Approximation." Thesis, University of Canterbury. Mathematics and Statistics, 2012. http://hdl.handle.net/10092/7776.
Strand, Filip. "Latent Task Embeddings for Few-Shot Function Approximation." Thesis, KTH, Optimeringslära och systemteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-243832.
Being able to quickly approximate a function from only a few data points is an important problem, especially in domains where the available data sets are relatively small, for example in parts of robotics. In recent years, flexible and scalable learning methods such as neural networks have shown outstanding performance in scenarios where large amounts of data are available. However, these methods tend to perform considerably worse in low-data regimes, which motivates the search for alternative methods. One way to address this limitation is to exploit prior information about the function class to be approximated when such information is available. Sometimes this information can be expressed in closed mathematical form, but in general it cannot. This thesis focuses on the more general case where we only assume that we can sample data points from a database of previous experience, for example from a simulator whose internal details are unknown. To this end, we propose a method for learning from this previous experience by pre-training on a larger data set comprising a family of related functions. In this step we build a latent task embedding that captures all variations of the functions in the training data and that can then be searched efficiently to identify a specific function, a process we call fine-tuning. The proposed method can be viewed as a special case of an auto-encoder and uses the same idea as the recently published Conditional Neural Processes method, where individual data points are encoded separately and then aggregated. We extend that method by incorporating an auxiliary function and by proposing additional ways of searching the latent space after the initial training. The proposed method typically allows a specific function to be identified from only a few data points. We evaluate the method by studying curve fitting on sine waves and by applying it to two robotics problems with the aim of quickly identifying and controlling these dynamical systems.
Jackson, Ian Robert Hart. "Radial basis function methods for multivariable approximation." Thesis, University of Cambridge, 1988. https://www.repository.cam.ac.uk/handle/1810/270416.
Hou, Jun. "Function Approximation and Classification with Perturbed Data." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618266875924225.
Karimi, Belhal. "Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks and sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aiming at speeding up the convergence of the estimated parameters.
Yaskina, Maryna. "Topics in functional analysis and convex geometry." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4346.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on March 1, 2007. Vita. Includes bibliographical references.
Ahmad, Nur Syazreen. "Convex methods for discrete-time constrained control." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/convex-methods-for-discretetime-constrained-control(ae161164-767c-41ec-8c03-75779ccc0699).html.
Bezushko, V. P. "Recognition system of flat convex figures by using disproportionate function." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/65234.
Steuding, Rasa, Jörn Steuding, Kohji Matsumoto, Antanas Laurinčikas, and Ramūnas Garunkštis. "Effective uniform approximation by the Riemann zeta-function." Department of Mathematics of the Universitat Autònoma de Barcelona, 2010. http://hdl.handle.net/2237/20429.
Hales, Stephen. "Approximation by translates of a radial basis function." Thesis, University of Leicester, 2000. http://hdl.handle.net/2381/30513.
Повний текст джерелаYaskin, Vladyslav. "Applications of the fourier transform to convex geometry." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4464.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on March 1, 2007. Vita. Includes bibliographical references.
Lin, Yier. "Dynamics of a continuum characterized by a non-convex energy function." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/87809.
Venema, Viktor. "Non-Convex Potential Function Boosting Versus Noise Peeling : A Comparative Study." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-302289.
Hodrea, Ioan Bogdan. "Farkas-type results for convex and non-convex inequality systems." Doctoral thesis, [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800075.
Mahadevan, Swaminathan. "Probabilistic linear function approximation for value-based reinforcement learning." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98759.
Van Roy, Benjamin. "Learning and value function approximation in complex decision processes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9960.
Includes bibliographical references (p. 127-133).
by Benjamin Van Roy.
Ph.D.
Swingler, Kevin. "Mixed order hyper-networks for function approximation and optimisation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/25349.
Skelly, Margaret Mary. "Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818.
Zhang, Qi. "Multilevel adaptive radial basis function approximation using error indicators." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/38284.
Повний текст джерелаDieuleveut, Aymeric. "Stochastic approximation in Hilbert spaces." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE059/document.
The goal of supervised machine learning is to infer relationships between a phenomenon one seeks to predict and "explanatory" variables. To that end, multiple occurrences of the phenomenon are observed, from which a prediction rule is constructed. The last two decades have witnessed the appearance of very large data sets, both in terms of the number of observations (e.g., in image analysis) and in terms of the number of explanatory variables (e.g., in genetics). This has raised two challenges: first, avoiding the pitfall of over-fitting, especially when the number of explanatory variables is much higher than the number of observations; and second, dealing with the computational constraints, such as when the mere resolution of a linear system becomes a difficulty of its own. Algorithms that take their roots in stochastic approximation methods tackle both of these difficulties simultaneously: these stochastic methods dramatically reduce the computational cost, without degrading the quality of the proposed prediction rule, and they can naturally avoid over-fitting. As a consequence, the core of this thesis will be the study of stochastic gradient methods. The popular parametric methods give predictors which are linear functions of a set of explanatory variables. However, they often result in an imprecise approximation of the underlying statistical structure. In the non-parametric setting, which is paramount in this thesis, this restriction is lifted. The class of functions from which the predictor is proposed depends on the observations. In practice, these methods have multiple purposes, and are essential for learning with non-vectorial data, which can be mapped onto a vector in a functional space using a positive definite kernel. This makes it possible to use algorithms designed for vectorial data, but requires the analysis to be made in the associated non-parametric space: the reproducing kernel Hilbert space.
Moreover, the analysis of non-parametric regression also sheds some light on the parametric setting when the number of predictors is much larger than the number of observations. The first contribution of this thesis is to provide a detailed analysis of stochastic approximation in the non-parametric setting, precisely in reproducing kernel Hilbert spaces. This analysis proves optimal convergence rates for the averaged stochastic gradient descent algorithm. As we take special care in using minimal assumptions, it applies to numerous situations, and covers both the settings in which the number of observations is known a priori, and situations in which the learning algorithm works in an on-line fashion. The second contribution is an algorithm based on acceleration, which converges at optimal speed, both from the optimization point of view and from the statistical one. In the non-parametric setting, this can improve the convergence rate up to optimality, even in particular regimes for which the first algorithm remains sub-optimal. Finally, the third contribution of the thesis consists of an extension of the framework beyond the least-square loss. The stochastic gradient descent algorithm is analyzed as a Markov chain. This point of view leads to an intuitive and insightful interpretation that outlines the differences between the quadratic setting and the more general setting. A simple method resulting in provable improvements in the convergence is then proposed.
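A minimal sketch of the averaged (Polyak-Ruppert) stochastic gradient descent analyzed in this abstract, on a one-dimensional noiseless least-squares problem; the model, step size, and data below are invented for illustration and are far simpler than the Hilbert-space setting of the thesis:

```python
import random

random.seed(0)
w_true = 3.0                       # hypothetical least-squares target

w, w_avg = 0.0, 0.0
n = 2000
for k in range(1, n + 1):
    x = random.uniform(-1.0, 1.0)  # explanatory variable
    y = w_true * x                 # noiseless observation, for clarity
    g = (w * x - y) * x            # stochastic gradient of the squared loss
    w -= 0.5 * g                   # constant step size, as in the averaged setting
    w_avg += (w - w_avg) / k       # running Polyak-Ruppert average of the iterates

print(w, w_avg)
```

Both the last iterate and the averaged iterate approach the target 3.0; in noisy settings it is the averaged iterate that achieves the optimal statistical rate.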
Hess, Roxana. "Some approximation schemes in polynomial optimization." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30129/document.
This thesis is dedicated to investigations of the moment-sums-of-squares hierarchy, a family of semidefinite programming problems in polynomial optimization, commonly called the Lasserre hierarchy. We examine different aspects of its properties and purposes. As applications of the hierarchy, we approximate some potentially complicated objects, namely the polynomial abscissa and optimal designs on semialgebraic domains. Applying the Lasserre hierarchy results in approximations by polynomials of fixed degree and hence bounded complexity. With regard to the complexity of the hierarchy itself, we construct a modification of it for which an improved convergence rate can be proved. An essential concept of the hierarchy is to use quadratic modules and their duals as a tractable characterization of the cone of positive polynomials and the moment cone, respectively. We further exploit this idea to construct tight approximations of semialgebraic sets with polynomial separators.
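For context (standard material on the hierarchy, not specific to this thesis): at level d, the Lasserre hierarchy bounds the minimum of a polynomial f over the set K = {x : g_1(x) ≥ 0, ..., g_m(x) ≥ 0} via the semidefinite program

```latex
\lambda_d \;=\; \sup\Bigl\{\, \lambda \in \mathbb{R} \;:\; f - \lambda = \sigma_0 + \sum_{j=1}^{m} \sigma_j g_j,\;
\sigma_j \in \Sigma[x],\; \deg(\sigma_0) \le 2d,\; \deg(\sigma_j g_j) \le 2d \,\Bigr\},
```

where Σ[x] denotes the cone of sum-of-squares polynomials; under an Archimedean condition on the quadratic module, λ_d increases to the true minimum as d grows.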
Burniston, J. D. "A neural network/rule-based architecture for continuous function approximation." Thesis, University of Nottingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387198.
Tham, Chen Khong. "Modular on-line function approximation for scaling up reinforcement learning." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309702.
Burton, Christina Marie. "Quadratic Spline Approximation of the Newsvendor Problem Optimal Cost Function." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3087.