Dissertations / Theses on the topic 'Approximation of convex function'

Consult the top 50 dissertations / theses for your research on the topic 'Approximation of convex function.'


1

Azimi, Roushan Tahere. "Inequalities related to norm and numerical radius of operators." Thesis, Pau, 2020. http://www.theses.fr/2020PAUU3002.

Full text
Abstract:
In this thesis, after presenting the necessary concepts and preliminaries, we investigate the Hermite-Hadamard inequality for geometrically convex functions. Then, by introducing operator geometrically convex functions, we extend the results and prove a Hermite-Hadamard type inequality for this kind of function. Next, by proving the log-convexity of some functions based on the unitarily invariant norm, and by considering the relation between geometrically convex functions and log-convex functions, we present several refinements of some well-known operator norm inequalities. We also prove operator versions of some numerical results obtained for approximating a class of convex functions and, as an application, we refine the Hermite-Hadamard inequality for a class of operator convex functions. Finally, we discuss the numerical radius of an operator, which is equivalent to the operator norm, and state some related results; we finish the thesis by obtaining upper bounds for the Berezin number of an operator, which is contained in the numerical range of that operator.
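For reference, the two notions at the heart of this work can be stated in standard form; the following is textbook material, not quoted from the thesis:

```latex
% Classical Hermite-Hadamard inequality for a convex f on [a,b]:
f\!\left(\frac{a+b}{2}\right)
  \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx
  \;\le\; \frac{f(a)+f(b)}{2}.
% Geometric (multiplicative) convexity of f : (0,\infty) \to (0,\infty):
f\big(x^{\lambda} y^{1-\lambda}\big) \;\le\; f(x)^{\lambda} f(y)^{1-\lambda},
\qquad x, y > 0,\ \lambda \in [0,1].
```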
2

Висоцька, Марія Андріївна. "Модель оптимального податку." Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2021. https://ela.kpi.ua/handle/123456789/45204.

Full text
Abstract:
The diploma thesis contains 95 pages, 16 figures, 8 tables, 2 appendices, and 11 sources. Theme: the optimal tax model. Objective: to analyze the existing optimal tax model, to propose an alternative model, and to carry out a study based on demonstration data. The result of this work is a software product with a user interface that helps to find the optimal tax model for given data and checks it for correctness. As input it receives data on the amount of tax and the tax rate, together with their plots.
3

Bose, Gibin. "Approximation H infini, interpolation analytique et optimisation convexe : application à l’adaptation d’impédance large bande." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4007.

Full text
Abstract:
The thesis makes an in-depth study of one of the classical problems in RF circuit design, the problem of impedance matching. The matching problem addresses the issue of transmitting the maximum available power from a source to a load within a frequency band. Antennas are one of the classical devices in which impedance matching plays an important role. The design of a matching circuit for a given load primarily amounts to finding a lossless scattering matrix which, when chained to the load, minimizes the reflection of power in the total system. In this work, both the theoretical aspects of the broadband matching problem and the practical applicability of the developed approaches are given due importance. Part I of the thesis covers two different yet closely related approaches to the matching problem, based on the classical frameworks developed by Helton and by Fano and Youla for studying broadband matching problems. The first approach entails finding the best H-infinity approximation to an L-infinity function Φ via Nehari's theory. This reduces the problem to a generalized eigenvalue problem based on an operator defined on H2, the Hankel operator HΦ. The realizability of a given gain is characterized by the constraint that the operator norm of HΦ be less than or equal to one. The second approach formulates the matching problem as a convex optimization problem in which greater flexibility is allowed in the gain profiles compared to the previous approach. It is based on two rich theories, namely Fano-Youla matching theory and analytic interpolation. The realizability of a given gain rests on the Fano-Youla de-embedding conditions, which reduce to the positivity of a classical matrix in analytic interpolation theory, the Pick matrix. The concavity of the concerned Pick matrix allows the solution to be found by solving a non-linear semi-definite programming problem. Most importantly, we estimate sharp lower bounds on the matching criterion for finite-degree matching circuits and furnish circuits attaining those bounds. Part II of the thesis aims at realizing the matching circuits as ladder networks consisting of inductors and capacitors, and discusses some important realizability constraints as well. Matching circuits are designed for several mismatched antennas, testing the robustness of the developed approach. The theory developed in the first part of the thesis provides an efficient way of comparing the matching criterion obtained to the theoretical limits.
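For context, the Nehari step in the first approach rests on a classical identity relating best H-infinity approximation to the Hankel operator; this is the standard statement, not a quotation from the thesis:

```latex
% Nehari's theorem: for \Phi \in L^\infty on the unit circle,
\operatorname{dist}_{L^\infty}\big(\Phi,\, H^\infty\big) \;=\; \|H_\Phi\|,
\qquad
H_\Phi : H^2 \to H^2_-, \quad H_\Phi f = P_-(\Phi f),
% so, in the framework above, a gain profile is realizable precisely when
% \|H_\Phi\| \le 1.
```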
4

Lopez, Mario A., and Shlomo Reisner. "Linear Time Approximation of 3D Convex Polytopes." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1005.ps.

Full text
5

Fung, Ping-yuen, and 馮秉遠. "Approximation for minimum triangulations of convex polyhedra." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B29809964.

Full text
6

Fung, Ping-yuen. "Approximation for minimum triangulations of convex polyhedra." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23273197.

Full text
7

Verschueren, Robin [author], and Moritz [academic supervisor] Diehl. "Convex approximation methods for nonlinear model predictive control." Freiburg : Universität, 2018. http://d-nb.info/1192660641/34.

Full text
8

Boiger, Wolfgang Josef. "Stabilised finite element approximation for degenerate convex minimisation problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16790.

Full text
Abstract:
Infimising sequences of nonconvex variational problems often do not converge strongly in Sobolev spaces due to fine oscillations. These oscillations are physically meaningful; finite element approximations, however, fail to resolve them in general. Relaxation methods replace the nonconvex energy with its (semi)convex hull. This leads to a macroscopic model which is degenerate in the sense that it is not strictly convex and possibly admits multiple minimisers. The lack of control on the primal variable leads to difficulties in the a priori and a posteriori finite element error analysis, such as the reliability-efficiency gap and no strong convergence. To overcome these difficulties, stabilisation techniques add a discrete positive definite term to the relaxed energy. Bartels et al. (IFB, 2004) apply stabilisation to two-dimensional problems and thereby prove strong convergence of gradients. This result is restricted to smooth solutions and quasi-uniform meshes, which prohibit adaptive mesh refinements. This thesis concerns a modified stabilisation term and proves convergence of the stress and, for smooth solutions, strong convergence of gradients, even on unstructured meshes. Furthermore, the thesis derives the so-called flux error estimator and proves its reliability and efficiency. For interface problems with piecewise smooth solutions, a refined version of this error estimator is developed, which provides control of the error of the primal variable and its gradient and thus yields strong convergence of gradients. The refined error estimator converges faster than the flux error estimator and therefore narrows the reliability-efficiency gap. Numerical experiments with five benchmark examples from computational microstructure and topology optimisation complement and confirm the theoretical results.
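Schematically, the stabilisation technique described above perturbs the relaxed minimisation problem as follows; this generic form is inferred from the abstract, and the precise term used in the thesis differs:

```latex
% Relaxed (degenerate convex) energy E over a finite element space V_h:
\min_{v_h \in V_h} E_h(v_h), \qquad
E_h(v_h) \;=\; E(v_h) + \tfrac{1}{2}\, s_h(v_h, v_h),
% where s_h is a discrete, positive definite bilinear form restoring the
% control of the primal variable that strict convexity would otherwise provide.
```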
9

Schulz, Henrik. "Polyhedral Surface Approximation of Non-Convex Voxel Sets and Improvements to the Convex Hull Computing Method." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-27865.

Full text
Abstract:
In this paper we introduce an algorithm for the creation of polyhedral approximations for objects represented as strongly connected sets of voxels in three-dimensional binary images. The algorithm generates the convex hull of a given object and modifies the hull afterwards by recursive repetitions of generating convex hulls of subsets of the given voxel set or subsets of the background voxels. The result of this method is a polyhedron which separates object voxels from background voxels. The objects processed by this algorithm and also the background voxel components inside the convex hull of the objects are restricted to have genus 0. The second aim of this paper is to present some improvements to our convex hull algorithm to reduce computation time.
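As a rough illustration of the first step of the method described above (only the initial convex-hull computation, not the recursive refinement), here is a minimal sketch using scipy; the voxel data are invented for the example:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical input: object voxels of a 3D binary image, as (n, 3) integer
# coordinates (here generated at random purely for illustration).
image = np.random.rand(20, 20, 20) > 0.7
voxels = np.argwhere(image).astype(float)

# Step 1 of the described method: the convex hull of the object voxels.
hull = ConvexHull(voxels)

# hull.simplices indexes the triangular facets of the hull surface; the full
# algorithm would now recursively re-hull subsets of object/background voxels
# to carve this hull down to a polyhedron separating object from background.
print(f"{len(hull.vertices)} hull vertices, {len(hull.simplices)} facets")
```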
10

Schulz, Henrik. "Polyhedral Surface Approximation of Non-Convex Voxel Sets and Improvements to the Convex Hull Computing Method." Forschungszentrum Dresden-Rossendorf, 2009. https://hzdr.qucosa.de/id/qucosa%3A21613.

Full text
Abstract:
In this paper we introduce an algorithm for the creation of polyhedral approximations for objects represented as strongly connected sets of voxels in three-dimensional binary images. The algorithm generates the convex hull of a given object and modifies the hull afterwards by recursive repetitions of generating convex hulls of subsets of the given voxel set or subsets of the background voxels. The result of this method is a polyhedron which separates object voxels from background voxels. The objects processed by this algorithm and also the background voxel components inside the convex hull of the objects are restricted to have genus 0. The second aim of this paper is to present some improvements to our convex hull algorithm to reduce computation time.
11

Lubin, Miles (Miles C. ). "Mixed-integer convex optimization : outer approximation algorithms and modeling power." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113434.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 137-143).
In this thesis, we study mixed-integer convex optimization, or mixed-integer convex programming (MICP), the class of optimization problems where one seeks to minimize a convex objective function subject to convex constraints and integrality restrictions on a subset of the variables. We focus on two broad and complementary questions on MICP. The first question we address is, "what are efficient methods for solving MICP problems?" The methodology we develop is based on outer approximation, which allows us, for example, to reduce MICP to a sequence of mixed-integer linear programming (MILP) problems. By viewing MICP from the conic perspective of modern convex optimization as defined by Ben-Tal and Nemirovski, we obtain significant computational advances over the state of the art, e.g., by automating extended formulations by using disciplined convex programming. We develop the first finite-time outer approximation methods for problems in general mixed-integer conic form (which includes mixed-integer second-order-cone programming and mixed-integer semidefinite programming) and implement them in an open-source solver, Pajarito, obtaining competitive performance with the state of the art. The second question we address is, "which nonconvex constraints can be modeled with MICP?" This question is important for understanding both the modeling power gained in generalizing from MILP to MICP and the potential applicability of MICP to nonconvex optimization problems that may not be naturally represented with integer variables. Among our contributions, we completely characterize the case where the number of integer assignments is bounded (e.g., mixed-binary), and to address the more general case we develop the concept of "rationally unbounded" convex sets. We show that under this natural restriction, the projections of MICP feasible sets are well behaved and can be completely characterized in some settings.
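The outer-approximation principle mentioned above reduces, in the smooth convex case, to collecting gradient cuts; the following is the textbook construction rather than the thesis's conic generalization:

```latex
% For convex, differentiable g, the constraint g(x) \le 0 implies, at any trial
% point x_k,
g(x_k) + \nabla g(x_k)^{\top}(x - x_k) \;\le\; g(x) \;\le\; 0,
% so each iteration adds the valid linear cut
g(x_k) + \nabla g(x_k)^{\top}(x - x_k) \;\le\; 0
% to an MILP relaxation, and the MICP is solved as a sequence of MILPs.
```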
by Miles Lubin.
Ph. D.
12

楊文聰 and Man-chung Yeung. "Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31209531.

Full text
13

Wright, Stephen E. "Convergence and approximation for primal-dual methods in large-scale optimization /." Thesis, Connect to this title online; UW restricted, 1990. http://hdl.handle.net/1773/5751.

Full text
14

Kuhn, Daniel. "Generalized bounds for convex multistage stochastic programs /." Berlin [u.a.] : Springer, 2005. http://www.loc.gov/catdir/enhancements/fy0818/2004109705-d.html.

Full text
15

伍卓仁 and Cheuk-yan Ng. "Pointwise Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210934.

Full text
16

吳家樂 and Ka-lok Ng. "Relative korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213479.

Full text
17

Ng, Ka-lok. "Relative korovkin approximation in function spaces /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B17506074.

Full text
18

Ng, Cheuk-yan. "Pointwise Korovkin approximation in function spaces /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13474522.

Full text
19

Miranda, Brando M. Eng Massachusetts Institute of Technology. "Training hierarchical networks for function approximation." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113159.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-60).
In this work we investigate function approximation using hierarchical networks. We start off by investigating the theory proposed by Poggio et al. [2] that deep learning convolutional neural networks (DCNs) can be equivalent to hierarchical kernel machines with radial basis functions (RBFs). We investigate the difficulty of training RBF networks with stochastic gradient descent (SGD), as well as hierarchical RBFs. We discovered that training single-layer RBF networks can be quite simple with a good initialization and a good choice of standard deviation for the Gaussian. Training hierarchical RBFs remains an open question; however, we clearly identified the issues surrounding the training of hierarchical RBFs and potential methods to resolve them. We also compare standard DCN networks to hierarchical radial basis functions in a task that has not been explored yet: the role of depth in learning compositional functions.
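As a rough sketch of the single-layer case that the abstract reports to be easy to train, the following fits a Gaussian RBF network by a least-squares solve for the output weights; the data, centre initialization, and bandwidth are invented for illustration:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

centers = X[rng.choice(len(X), 20, replace=False)]  # initialize centres on data
sigma = 0.3                                         # bandwidth choice matters
Phi = rbf_design(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # output weights in closed form

print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```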
by Brando Miranda.
M. Eng.
20

Chan, Jor-ting, and 陳作庭. "Compact convex sets and their affine function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B30425840.

Full text
21

Chan, Jor-ting. "Compact convex sets and their affine function spaces /." [Hong Kong : University of Hong Kong, 1987. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12344953.

Full text
22

Malek, Alaeddin. "The numerical approximation of surface area by surface triangulation /." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65498.

Full text
23

Pathak, Harsh Nilesh. "Parameter Continuation with Secant Approximation for Deep Neural Networks." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1256.

Full text
Abstract:
Non-convex optimization of deep neural networks is a well-researched problem. We present a novel application of continuation methods for deep learning optimization that can potentially arrive at a better solution. In our method, we first decompose the original optimization problem into a sequence of problems using a homotopy method. To achieve this in neural networks, we derive the Continuation (C)-Activation function. First, C-Activation is a homotopic formulation of existing activation functions such as Sigmoid, ReLU or Tanh. Second, we apply a method which is standard in the parameter continuation domain but, to the best of our knowledge, novel to the deep learning domain. In particular, we use Natural Parameter Continuation with Secant approximation (NPCS), an effective training strategy that may find a superior local minimum for a non-convex optimization problem. Additionally, we extend our work on Step-up GANs, a data continuation approach, by deriving a method called Continuous (C)-SMOTE, which is an extension of standard oversampling algorithms. We demonstrate the improvements made by our methods and establish a categorization of recent work done on continuation methods in the context of deep learning.
24

Trienis, Michael Joseph. "Computational convex analysis : from continuous deformation to finite convex integration." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/2799.

Full text
Abstract:
After introducing concepts from convex analysis, we study how to continuously transform one convex function into another. A natural choice is the arithmetic average, as it is pointwise continuous; however, this choice fails to average functions with different domains. In contrast, the proximal average is not only continuous (in the epi-topology) but can actually average functions with disjoint domains. In fact, the proximal average not only inherits strict convexity (like the arithmetic average) but also inherits smoothness and differentiability (unlike the arithmetic average). Then we introduce a computational framework for computer-aided convex analysis. Motivated by the proximal average, we notice that the class of piecewise linear-quadratic (PLQ) functions is closed under (positive) scalar multiplication, addition, Fenchel conjugation, and the Moreau envelope. As a result, the PLQ framework gives rise to linear-time and linear-space algorithms for convex PLQ functions. We extend this framework to nonconvex PLQ functions and present an explicit convex hull algorithm. Finally, we discuss a method to find primal-dual symmetric antiderivatives from cyclically monotone operators. As these antiderivatives depend on the minimal and maximal Rockafellar functions [5, Theorem 3.5, Corollary 3.10], it turns out that the minimal and maximal functions in [12, p. 132, p. 136] are indeed the same functions. Algorithms used to compute these antiderivatives can be formulated as shortest path problems.
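The proximal average mentioned above has a known closed form, as given in the computational convex analysis literature (stated here for reference, with q = ½‖·‖² and * denoting Fenchel conjugation):

```latex
% Proximal average of f_0, f_1 with parameter \lambda \in [0,1], q = \tfrac12\|\cdot\|^2:
\mathcal{P}(f_0, f_1; \lambda)
  \;=\; \Big( (1-\lambda)\,(f_0 + q)^{*} + \lambda\,(f_1 + q)^{*} \Big)^{*} - q.
% Unlike the arithmetic average, this is well defined (and continuous in \lambda
% in the epi-topology) even when dom f_0 and dom f_1 are disjoint.
```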
25

Ben, Daya Mohamed. "Barrier function algorithms for linear and convex quadratic programming." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/25502.

Full text
26

Cheung, Ho Yin. "Function approximation with higher-order fuzzy systems /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20CHEUNG.

Full text
27

Ong, Wen Eng. "Some Basis Function Methods for Surface Approximation." Thesis, University of Canterbury. Mathematics and Statistics, 2012. http://hdl.handle.net/10092/7776.

Full text
Abstract:
This thesis considers issues in surface reconstruction such as identifying approximation methods that work well for certain applications and developing efficient methods to compute and manipulate these approximations. The first part of the thesis illustrates a new fast evaluation scheme to efficiently calculate thin-plate splines in two dimensions. In the fast multipole method scheme, exponential expansions/approximations are used as an intermediate step in converting far-field series to local polynomial approximations. The contributions here are the extension of the scheme to the thin-plate spline and a new error analysis. The error analysis covers the practically important case where truncated series are used throughout, and, through off-line computation of error constants, gives sharp error bounds. In the second part of this thesis, we investigate fitting a surface to an object using blobby models as a coarse-level approximation. The aim is to achieve a given quality of approximation with relatively few parameters. This process involves an optimization procedure where a number of blobs (ellipses or ellipsoids) are separately fitted to a cloud of points. The optimized blobs are then combined to yield an implicit surface approximating the cloud of points. The results for our test cases in two and three dimensions are very encouraging. For many applications, the coarse-level blobby model itself will be sufficient. For example, adding texture on top of the blobby surface can give a surprisingly realistic image. The last part of the thesis describes a method to reconstruct surfaces with known discontinuities. We fit a surface to the data points by performing scattered data interpolation using compactly supported RBFs with respect to a geodesic distance. Techniques from computational geometry, such as the visibility graph, are used to compute the shortest Euclidean distance between two points while avoiding any obstacles. Results have shown that discontinuities on the surface were clearly reconstructed.
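A minimal sketch of the direct two-dimensional thin-plate-spline solve whose cost motivates the fast evaluation scheme above; the sample data and function names are invented for the example:

```python
import numpy as np

def tps_fit(X, y, eps=1e-12):
    """Solve for a thin-plate spline s(x) = sum_i a_i phi(|x - x_i|) + b0 + b.x,
    with phi(r) = r^2 log r and the usual polynomial side conditions."""
    n = len(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.where(r > 0, r**2 * np.log(r + eps), 0.0)   # phi(0) = 0
    P = np.hstack([np.ones((n, 1)), X])                # affine part [1, x, y]
    K = np.block([[A, P], [P.T, np.zeros((3, 3))]])    # dense (n+3) x (n+3) system
    rhs = np.concatenate([y, np.zeros(3)])
    coef = np.linalg.solve(K, rhs)
    return coef[:n], coef[n:]                          # RBF weights, affine weights

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(50, 2))
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1]
a, b = tps_fit(X, y)   # direct solve: the cost the fast multipole scheme avoids
```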
28

Strand, Filip. "Latent Task Embeddings for Few-Shot Function Approximation." Thesis, KTH, Optimeringslära och systemteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-243832.

Full text
Abstract:
Approximating a function from a few data points is of great importance in fields where data is scarce, for example in robotics applications. Recently, scalable and expressive parametric models like deep neural networks have demonstrated superior performance on a wide variety of function approximation tasks when plenty of data is available; however, these methods tend to perform considerably worse in low-data regimes, which calls for alternative approaches. One way to address such limitations is by leveraging prior information about the function class to be estimated when such information is available. Sometimes this prior may be known in closed mathematical form, but in general it is not. This thesis is concerned with the more general case where the prior can only be sampled from, such as a black-box forward simulator. To this end, we propose a simple and scalable approach to learning a prior over functions by training a neural network on data from a distribution of related functions. This step amounts to building a so-called latent task embedding in which all related functions (tasks) reside and which can later be efficiently searched at task-inference time, a process called fine-tuning. The proposed method can be seen as a special type of auto-encoder and employs the same idea of encoding individual data points during training as the recently proposed Conditional Neural Processes. We extend this work by also incorporating an auxiliary task and by providing additional latent space search methods for increased performance after the initial training step. The task-embedding framework makes finding the right function from a family of related functions quick, and generally requires only a few informative data points from that function. We evaluate the method by regressing onto the harmonic family of curves and also by applying it to two robotic systems with the aim of quickly identifying and controlling those systems.
29

Jackson, Ian Robert Hart. "Radial basis function methods for multivariable approximation." Thesis, University of Cambridge, 1988. https://www.repository.cam.ac.uk/handle/1810/270416.

Full text
30

Hou, Jun. "Function Approximation and Classification with Perturbed Data." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618266875924225.

Full text
31

Karimi, Belhal. "Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.

Full text
Abstract:
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks and sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aimed at speeding up the convergence of the estimated parameters.
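For reference, the EM iteration underlying the second part, in its standard batch form (the thesis's contributions concern its stochastic and incremental variants):

```latex
% E-step: form the surrogate using the current iterate \theta_t,
Q(\theta \mid \theta_t) \;=\; \mathbb{E}_{z \sim p(\cdot \mid x;\, \theta_t)}
  \big[ \log p(x, z;\, \theta) \big],
% M-step: maximize the surrogate,
\theta_{t+1} \;\in\; \arg\max_{\theta}\; Q(\theta \mid \theta_t),
% which yields monotone improvement of the (possibly non-convex) likelihood.
```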
32

Yaskina, Maryna. "Topics in functional analysis and convex geometry." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4346.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 1, 2007). Vita. Includes bibliographical references.
33

Ahmad, Nur Syazreen. "Convex methods for discrete-time constrained control." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/convex-methods-for-discretetime-constrained-control(ae161164-767c-41ec-8c03-75779ccc0699).html.

Full text
Abstract:
Feedback is used to control systems whose open-loop behaviour is uncertain. Over the last twenty years a mature theory of robust control has been developed for linear multivariable systems in continuous time. But most practical control systems have constraints, such as saturation limits on the actuators, which render the closed loop nonlinear. Most modern controllers are also implemented digitally using computers. The study of this research is divided into two directions: the stability analysis of discrete-time Lur’e systems and the synthesis of static discrete-time anti-windup schemes. With respect to stability analysis, the main contributions of this thesis are derivations of new LMI-based stability criteria for discrete-time Lur’e systems with monotonic, slope-restricted nonlinearities via the Lyapunov method. The criteria provide convex stability conditions via LMIs, which can be efficiently computed via convex optimization methods. They are also extended to the general case that includes non-diagonal MIMO nonlinearities. The importance of extending them to the general case is that they can eventually be applied to the stability analysis of several optimization-based controllers, such as input-constrained model predictive control (MPC), which is inherently discrete. With respect to synthesis, the contribution is the convex formulation of a static discrete-time anti-windup scheme via one of the Jury-Lee criteria (a discrete-time counterpart of the Popov criterion), which was previously conjectured to be unachievable. The result is also in the form of an LMI, and is extended to several existing static anti-windup schemes with open-loop stable plants.
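The discrete-time Lur’e structure analysed above is conventionally written as follows; the notation is assumed for illustration, not quoted from the thesis:

```latex
% Linear block in feedback with a memoryless nonlinearity \phi:
x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k, \qquad u_k = -\phi(y_k),
% with \phi monotone and slope-restricted,
0 \;\le\; \frac{\phi(a) - \phi(b)}{a - b} \;\le\; \delta
\quad \text{for all } a \ne b,
% and stability certified by LMIs derived from a Lyapunov function.
```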
34

Bezushko, V. P. "Recognition system of flat convex figures by using disproportionate function." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/65234.

Full text
Abstract:
With the development of technology and the optimization of technological processes, it has become necessary to make decisions without human intervention. It is rational to replace with machines those processes that consist of monotonous routine work or that pose a risk to humans. The construction of such machines is the first step towards building various recognition systems.
35

Steuding, Rasa, Jörn Steuding, Kohji Matsumoto, Antanas Laurinčikas, and Ramūnas Garunkštis. "Effective uniform approximation by the Riemann zeta-function." Department of Mathematics of the Universitat Autònoma de Barcelona, 2010. http://hdl.handle.net/2237/20429.

Full text
36

Hales, Stephen. "Approximation by translates of a radial basis function." Thesis, University of Leicester, 2000. http://hdl.handle.net/2381/30513.

Full text
Abstract:
The aim of this work is to investigate the properties of approximations obtained by translates of radial basis functions. A natural progression in the discussion starts with an iterative refinement scheme using a strictly positive definite inverse multiquadric. Error estimates for this method are greatly simplified if the inverse multiquadric is replaced by a strictly conditionally positive definite polyharmonic spline. Such error analysis is conducted in a native space generated by the Fourier transform of the basis function. This space can be restrictive when using very smooth basis functions. Some instances are discussed where the native space of a basis function can be enlarged by constructing a strictly positive definite basis function with comparable approximating properties but a significantly different Fourier transform. Before such a construction is possible, however, strictly positive definite functions with compact support on R^d, for finite d, must be examined in some detail. It is demonstrated that the dimension in which a function is positive definite can be determined from its univariate Fourier transform. This work is biased towards the computational aspects of interpolation, and the theory is always given with a view to explaining observable phenomena.
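The native space referred to above has a standard Fourier-transform description (stated up to normalization conventions, with a generic symbol φ standing in for the basis function):

```latex
% Native space norm induced by a (conditionally) positive definite \varphi:
\|f\|_{\mathcal{N}_\varphi}^2
  \;=\; (2\pi)^{-d/2} \int_{\mathbb{R}^d}
        \frac{|\hat f(\omega)|^2}{\hat\varphi(\omega)} \, d\omega,
% so a rapidly decaying \hat\varphi (a very smooth \varphi) forces \hat f to
% decay fast as well, which is why the native space can be restrictive.
```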
37

Yaskin, Vladyslav. "Applications of the fourier transform to convex geometry." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4464.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 1, 2007). Vita. Includes bibliographical references.
38

Lin, Yier. "Dynamics of a continuum characterized by a non-convex energy function." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/87809.

Full text
39

Venema, Viktor. "Non-Convex Potential Function Boosting Versus Noise Peeling: A Comparative Study." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-302289.

Full text
Abstract:
In recent decades, boosting methods have emerged as one of the leading ensemble learning techniques. Among the most popular boosting algorithms is AdaBoost, a highly influential algorithm that has been noted for its excellent performance in many tasks. One of the most explored weaknesses of AdaBoost and many other boosting algorithms is that they tend to overfit to label noise, and consequently several alternative algorithms that are more robust have been proposed. Among boosting algorithms which aim to accommodate noisy instances, the RobustBoost algorithm, which optimizes a non-convex potential function, has gained popularity following a recent result stating that all convex potential boosters can be misled by random noise. Contrasting this approach, Martinez and Gray (2016) propose a simple but reportedly effective way of remedying the noise problems inherent in the traditional AdaBoost algorithm by introducing peeling strategies in relation to boosting. This thesis evaluates the robustness of these two alternatives on empirical and synthetic data sets in the case of binary classification. The results indicate that the two methods are able to enhance robustness compared to traditional convex potential function boosting algorithms, but not to a significant extent.
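The noise sensitivity discussed above is usually traced to AdaBoost's convex potential; in standard form:

```latex
% AdaBoost minimizes the convex exponential potential of the margin,
\Phi(F) \;=\; \sum_{i=1}^{n} \exp\big(-y_i F(x_i)\big),
% so mislabeled points (large negative margin) receive exponentially large
% weight; RobustBoost instead optimizes a non-convex potential that caps
% the influence of such points.
```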
40

Hodrea, Ioan Bogdan. "Farkas - type results for convex and non - convex inequality systems." Doctoral thesis, [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800075.

Full text
41

Mahadevan, Swaminathan. "Probabilistic linear function approximation for value-based reinforcement learning." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98759.

Full text
Abstract:
Reinforcement learning (RL) is a computational framework for learning sequential decision strategies from the interaction of an agent with an unknown dynamic environment. This thesis focuses on value-based reinforcement learning methods, which rely on computing utility values for different behavior strategies that can be adopted by the agent. Real-world complex problems involve very large discrete or continuous state spaces where the use of approximate methods is required. It has been observed that subtle differences in the approximate methods result in very different theoretical properties and empirical behavior. In this thesis, we propose a new framework for discussing many popular function approximation methods, called Probabilistic Linear Function Approximation. This allows us to highlight the key differences of several approximation algorithms used in RL.
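The value-based, linear approximation setting discussed above is conventionally written as follows (standard TD(0) form; the thesis's probabilistic framework builds on this, but the notation here is generic):

```latex
% Linear value function approximation with features \phi(s):
V_\theta(s) \;=\; \theta^{\top} \phi(s),
% temporal-difference (TD(0)) update after observing transition (s, r, s'):
\theta \;\leftarrow\; \theta + \alpha \big( r + \gamma\, \theta^{\top}\phi(s')
      - \theta^{\top}\phi(s) \big)\, \phi(s).
```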
42

Van, Roy Benjamin. "Learning and value function approximation in complex decision processes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9960.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 127-133).
by Benjamin Van Roy.
Ph.D.
43

Swingler, Kevin. "Mixed order hyper-networks for function approximation and optimisation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/25349.

Full text
Abstract:
Many systems take inputs, which can be measured and sometimes controlled, and produce outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used either to model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or to find the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence. There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black box problem, which means their structure and the function they implement are opaque to the user. They also suffer from a propensity to become trapped in local minima or large plateaux in the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided and model complexity to be controlled is desirable. This thesis presents such a method: a single framework for both regression and optimisation, the mixed order hyper network (MOHN). A MOHN implements a function f : {-1,1}^n → R to arbitrary precision. The structure of a MOHN makes explicit the ways in which input variables interact to determine the function output, which allows human insight and complexity control that are very difficult in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights based on a sample of data are presented, along with a heuristic method for choosing which connections to include in a model. Several methods for searching a MOHN for inputs that lead to a desired output are compared. Experiments compare a MOHN to an MLP on regression tasks. The MOHN is found to achieve a comparable level of accuracy to an MLP but suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret and to combine into an ensemble. The trade-off between the fit of a model to its training data and its fit to an independent set of test data is shown to be easier to control in a MOHN than in an MLP. A MOHN is also compared to a number of existing optimisation methods, including those using estimation of distribution algorithms, genetic algorithms and simulated annealing. The MOHN is able to find optimal solutions in far fewer function evaluations than these methods on tasks selected from the literature.
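The abstract's description of explicit input interactions suggests the following polynomial (Walsh-style) form over {-1,1}^n; this is an interpretation of the model structure, not a quotation:

```latex
% A mixed order hyper network represents f : \{-1,1\}^n \to \mathbb{R} as
f(x) \;=\; w_0 + \sum_i w_i x_i + \sum_{i<j} w_{ij}\, x_i x_j
        + \sum_{i<j<k} w_{ijk}\, x_i x_j x_k + \dots,
% so each weight names an explicit interaction between inputs, which is what
% makes the model interpretable and directly searchable.
```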
44

Skelly, Margaret Mary. "Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818.

Full text
45

Zhang, Qi. "Multilevel adaptive radial basis function approximation using error indicators." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/38284.

Full text
Abstract:
In some approximation problems, sampling from the target function can be both expensive and time-consuming. It would be convenient to have a method for indicating where the approximation quality is poor, so that generation of new data provides the user with greater accuracy where needed. In this thesis, the author describes a new adaptive algorithm for radial basis function (RBF) interpolation which aims to assess the local approximation quality and adds or removes points as required to improve the error in the specified region. For multiquadric and Gaussian approximations, one has the flexibility of a shape parameter, which can be used to keep the condition number of the interpolation matrix at a moderate size. In this adaptive error indicator (AEI) method, an adaptive shape parameter is applied. Numerical results for test functions which appear in the literature are given in one, two, and three dimensions, to show that this method performs well. A turbine blade design problem from GE Power (Rugby, UK) is considered and the AEI method is applied to it. Moreover, a new multilevel approximation scheme is introduced in this thesis by coupling it with the adaptive error indicator. Preliminary numerical results from this Multilevel Adaptive Error Indicator (MAEI) approximation method are shown. These indicate that the MAEI is able to represent the target function well, and it provides highly efficient sampling.
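The two kernels with a tunable shape parameter mentioned above take their usual forms:

```latex
% Multiquadric and Gaussian radial basis functions with shape parameters c, \varepsilon:
\varphi_{\mathrm{MQ}}(r) = \sqrt{r^2 + c^2},
\qquad
\varphi_{\mathrm{G}}(r) = e^{-(\varepsilon r)^2};
% flatter kernels (small \varepsilon, large c) are more accurate but make the
% interpolation matrix ill-conditioned, which motivates an adaptive choice.
```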
46

Dieuleveut, Aymeric. "Stochastic approximation in Hilbert spaces." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE059/document.

Full text
Abstract:
The goal of supervised machine learning is to infer relationships between a phenomenon one seeks to predict and “explanatory” variables. To that end, multiple occurrences of the phenomenon are observed, from which a prediction rule is constructed. The last two decades have witnessed the appearance of very large datasets, both in terms of the number of observations (e.g., in image analysis) and in terms of the number of explanatory variables (e.g., in genetics). This has raised two challenges: first, avoiding the pitfall of over-fitting, especially when the number of explanatory variables is much higher than the number of observations; and second, dealing with computational constraints, such as when the mere resolution of a linear system becomes a difficulty of its own. Algorithms rooted in stochastic approximation methods tackle both of these difficulties simultaneously: these stochastic methods dramatically reduce the computational cost, without degrading the quality of the proposed prediction rule, and they can naturally avoid over-fitting. As a consequence, the core of this thesis is the study of stochastic gradient methods. The popular parametric methods give predictors which are linear functions of a set of explanatory variables. However, they often result in an imprecise approximation of the underlying statistical structure. In the non-parametric setting, which is paramount in this thesis, this restriction is lifted: the class of functions from which the predictor is proposed depends on the observations themselves. In practice, these methods have multiple purposes, and are essential for learning with non-vectorial data, which can be mapped onto a vector in a functional space using a positive definite kernel. This allows the use of algorithms designed for vectorial data, but requires the analysis to be carried out in the associated non-parametric space: the reproducing kernel Hilbert space. Moreover, the analysis of non-parametric regression also sheds some light on the parametric setting when the number of predictors is much larger than the number of observations. The first contribution of this thesis is a detailed analysis of stochastic approximation in the non-parametric setting, specifically in reproducing kernel Hilbert spaces. This analysis proves optimal convergence rates for the averaged stochastic gradient descent algorithm. As special care is taken to use minimal assumptions, it applies to numerous situations and covers both settings in which the number of observations is known a priori and settings in which the learning algorithm works in an on-line fashion. The second contribution is an algorithm based on acceleration, which converges at the optimal speed both from the optimization point of view and from the statistical one. In the non-parametric setting, this can improve the convergence rate up to optimality, even in particular regimes for which the first algorithm remains sub-optimal. Finally, the third contribution of the thesis consists in an extension of the framework beyond the least-squares loss: the stochastic gradient descent algorithm is analyzed as a Markov chain. This point of view leads to an intuitive and insightful interpretation that outlines the differences between the quadratic setting and the more general one. A simple method resulting in provable improvements in the convergence is then proposed.
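As a rough illustration of the first contribution's main object, averaged stochastic gradient descent for a least-squares objective, the following minimal Python sketch shows the Polyak-Ruppert averaging idea. The linear model, the synthetic data, the step size, and the function name averaged_sgd_least_squares are illustrative assumptions, not the thesis's exact algorithm or tuning:

import numpy as np

def averaged_sgd_least_squares(X, y, step_size):
    # Single-pass SGD on the quadratic loss (1/2)(y_i - <w, x_i>)^2,
    # returning the running (Polyak-Ruppert) average of the iterates.
    n, d = X.shape
    w = np.zeros(d)       # current iterate
    w_bar = np.zeros(d)   # online average of all iterates so far
    for i in range(n):
        grad = (X[i] @ w - y[i]) * X[i]   # stochastic gradient at observation i
        w = w - step_size * grad
        w_bar += (w - w_bar) / (i + 1)    # incremental averaging
    return w_bar

# Illustrative usage on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.1 * rng.standard_normal(1000)
print(np.linalg.norm(averaged_sgd_least_squares(X, y, step_size=0.05) - w_true))

The averaging step is the point of interest here: per the abstract, it is the average of the iterates, rather than the last iterate, for which optimal convergence rates are proved in the quadratic case.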
47

Hess, Roxana. "Some approximation schemes in polynomial optimization." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30129/document.

Full text
Abstract:
This thesis is dedicated to investigations of the moment-sums-of-squares hierarchy, a family of semidefinite programming problems in polynomial optimization, commonly called the Lasserre hierarchy. We examine different aspects of its properties and applications. As applications of the hierarchy, we approximate some potentially complicated objects, namely the polynomial abscissa and optimal designs on semialgebraic domains. Applying the Lasserre hierarchy results in approximations by polynomials of fixed degree and hence bounded complexity. With regard to the complexity of the hierarchy itself, we construct a modification of it for which an improved convergence rate can be proved. An essential concept of the hierarchy is to use quadratic modules and their duals as a tractable characterization of the cone of positive polynomials and the moment cone, respectively. We exploit this idea further to construct tight approximations of semialgebraic sets with polynomial separators.
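For orientation, the hierarchy the abstract refers to can be sketched in its standard textbook form: to minimize a polynomial f over a semialgebraic set K = {x : g_1(x) ≥ 0, ..., g_m(x) ≥ 0}, the level-d sums-of-squares strengthening (generic notation, not the thesis's specific modification) is

\[
\rho_d \;=\; \sup_{\lambda,\,\sigma_0,\dots,\sigma_m} \Bigl\{\, \lambda \in \mathbb{R} \;:\; f - \lambda = \sigma_0 + \sum_{j=1}^{m} \sigma_j g_j,\ \ \sigma_j \in \Sigma[x],\ \deg \sigma_0 \le 2d,\ \deg(\sigma_j g_j) \le 2d \,\Bigr\},
\]

where \Sigma[x] denotes the sums of squares of polynomials. Each level is a semidefinite program, and \rho_d increases to the true minimum of f on K as d grows whenever the quadratic module generated by g_1, ..., g_m is Archimedean (Putinar's Positivstellensatz); the dual semidefinite programs act on the moment cone, which is the duality the abstract mentions.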
48

Burniston, J. D. "A neural network/rule-based architecture for continuous function approximation." Thesis, University of Nottingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387198.

Full text
49

Tham, Chen Khong. "Modular on-line function approximation for scaling up reinforcement learning." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309702.

Full text
50

Burton, Christina Marie. "Quadratic Spline Approximation of the Newsvendor Problem Optimal Cost Function." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3087.

Full text
Abstract:
We consider a single-product dynamic inventory problem where the demand distributions in each period are known and independent, each having a density. We assume the lead time and the fixed ordering cost are zero and that there are no capacity constraints. There is a holding cost and a backorder cost for unfulfilled demand, which is backlogged until it is filled by another order. The problem may be nonstationary, and in fact our spline approximation of the optimal cost function is most advantageous when demand falls suddenly. In that case the myopic policy, which is most often used in practice to calculate the optimal inventory level, would be very costly. Our algorithm uses quadratic splines to approximate the optimal cost function for this dynamic inventory problem and calculates the optimal inventory level and optimal cost.
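To make the single-period building block concrete, here is a minimal Python sketch: it evaluates the expected newsvendor cost under an assumed normal demand, fits a quadratic (degree-2) spline through sampled cost values, and compares the spline against the exact cost at the critical-fractile optimum. The normal demand, the parameter values, and the helper name newsvendor_cost are illustrative assumptions; the thesis itself treats the dynamic, possibly nonstationary problem for general demand densities:

import numpy as np
from scipy.stats import norm
from scipy.interpolate import UnivariateSpline

def newsvendor_cost(y, mu, sigma, h, p):
    # Expected one-period cost h*E[(y - D)^+] + p*E[(D - y)^+] for D ~ Normal(mu, sigma).
    z = (y - mu) / sigma
    expected_backorder = sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))  # E[(D - y)^+]
    expected_leftover = (y - mu) + expected_backorder                     # E[(y - D)^+]
    return h * expected_leftover + p * expected_backorder

mu, sigma, h, p = 100.0, 20.0, 1.0, 4.0
grid = np.linspace(mu - 3 * sigma, mu + 3 * sigma, 25)
spline = UnivariateSpline(grid, newsvendor_cost(grid, mu, sigma, h, p), k=2, s=0)

# Critical-fractile optimum F(y*) = p / (p + h) for the single-period problem
y_star = norm.ppf(p / (p + h), loc=mu, scale=sigma)
print(float(spline(y_star)), newsvendor_cost(y_star, mu, sigma, h, p))

A quadratic spline is a natural fit here because the expected cost is smooth and convex in the order-up-to level, so piecewise-quadratic pieces can track both its value and its curvature with few knots.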
