Doctoral dissertations on the topic "Convex optimization"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the 50 best doctoral dissertations on the topic "Convex optimization".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these details are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and build appropriate bibliographies.
Joulin, Armand. "Convex optimization for cosegmentation". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00826236.
Rätsch, Gunnar. "Robust boosting via convex optimization". Phd thesis, Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2005/39/.
The thesis addresses the following topics:
o The statistical learning theory suited to the analysis of boosting methods. We study learning-theoretic guarantees for estimating the prediction quality on unseen examples. Recently, so-called large-margin classification techniques have emerged as a practical outcome of this theory, in particular boosting and support vector machines. A large margin implies a high prediction quality of the decision rule. We therefore analyze how large the margin is in boosting and propose an improved algorithm that efficiently generates rules with maximum margin.
o What is the connection between boosting and techniques from convex optimization?
To analyze the properties of the resulting classification or regression rules, it is very important to understand whether and under which conditions iterative algorithms such as boosting converge. We show that such algorithms can be used to solve very large constrained optimization problems whose solutions can be characterized well. To this end, connections to the field of convex optimization are identified and exploited to give convergence guarantees for a large family of boosting-like algorithms.
o Can boosting be made robust against measurement errors and outliers in the data?
A problem of previous boosting methods is their relatively high sensitivity to measurement inaccuracies and errors in the training data. To address this, the so-called 'soft margin' idea, already used in support vector learning, is carried over to boosting. This leads to theoretically well-motivated, regularized algorithms that exhibit a high degree of robustness.
o How can the applicability of boosting be extended to regression problems?
Boosting methods were originally developed for classification problems. To extend their applicability to regression problems, the earlier convergence results are used and new boosting-like algorithms for regression are developed. We show that these algorithms have good theoretical and practical properties.
o Is boosting applicable in practice?
The theoretical results presented are accompanied by simulation results, either to illustrate particular properties of the algorithms or to show that they work well in practice and can be applied directly. The practical relevance of the developed methods is illustrated by the analysis of chaotic time series and by industrial applications such as a power-consumption monitoring system and drug discovery.
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:
o The statistical learning theory framework for analyzing boosting methods.
We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.
o How can boosting methods be related to mathematical optimization techniques?
To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.
o How to make Boosting noise robust?
One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.
o How to adapt boosting to regression problems?
Boosting methods are originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties.
o Can boosting techniques be useful in practice?
The presented theoretical results are guided by simulation results either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis and a drug discovery process.
---
Note:
The author is the recipient of the Michelson Prize awarded by the Faculty of Mathematics and Natural Sciences of the Universität Potsdam for the best doctoral dissertation of the year 2001/2002.
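The abstracts above revolve around finding a weighted combination of base hypotheses with a large margin. As a rough illustration of that underlying idea only (not of the iterative boosting algorithms developed in the thesis), the following sketch computes the maximum-margin weighting of a small, fixed set of hypothetical decision stumps by solving a linear program; the data, the stumps, and all sizes are made-up placeholders.

```python
# Illustrative sketch only: max-margin weighting of a FIXED set of base
# hypotheses via a linear program. The thesis studies iterative (boosting)
# schemes; this shows only the margin-maximization idea behind them.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 200, 5                       # examples, base hypotheses (hypothetical sizes)
X = rng.normal(size=(n, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

# Hypothetical "base hypotheses": simple coordinate-threshold decision stumps.
thresholds = rng.normal(size=m)
H = np.column_stack([np.sign(X[:, j % 2] - t) for j, t in enumerate(thresholds)])

# Variables (w_1..w_m, rho). Maximize rho subject to
#   sum_j w_j * y_i * H[i, j] >= rho  for all i,   w >= 0,  sum_j w_j = 1.
c = np.zeros(m + 1); c[-1] = -1.0                        # minimize -rho
A_ub = np.hstack([-(y[:, None] * H), np.ones((n, 1))])   # -y_i (H w)_i + rho <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, rho = res.x[:m], res.x[-1]
print("weights:", np.round(w, 3), "margin:", round(rho, 3))
```

A negative margin here simply means the fixed stump set cannot separate the toy data; boosting-style methods instead grow the hypothesis set iteratively.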
Nekooie, Batool. "Convex optimization involving matrix inequalities". Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13880.
Jangam, Ravindra nath vijay kumar. "BEAMFORMING TECHNIQUES USING CONVEX OPTIMIZATION". Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-33934.
Saunderson, James (James Francis). "Subspace identification via convex optimization". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66475.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 88-92).
In this thesis we consider convex optimization-based approaches to the classical problem of identifying a subspace from noisy measurements of a random process taking values in the subspace. We focus on the case where the measurement noise is component-wise independent, known as the factor analysis model in statistics. We develop a new analysis of an existing convex optimization-based heuristic for this problem. Our analysis indicates that in high-dimensional settings, where both the ambient dimension and the dimension of the subspace to be identified are large, the convex heuristic, minimum trace factor analysis, is often very successful. We provide simple deterministic conditions on the underlying 'true' subspace under which the convex heuristic provably identifies the correct subspace. We also consider the performance of minimum trace factor analysis on 'typical' subspace identification problems, that is problems where the underlying subspace is chosen randomly from subspaces of a particular dimension. In this setting we establish conditions on the ambient dimension and the dimension of the underlying subspace under which the convex heuristic identifies the subspace correctly with high probability. We then consider a refinement of the subspace identification problem where we aim to identify a class of structured subspaces arising from Gaussian latent tree models. More precisely, given the covariance at the finest scale of a Gaussian latent tree model, and the tree that indexes the model, we aim to learn the parameters of the model, including the state dimensions of each of the latent variables. We do so by extending the convex heuristic, and our analysis, from the factor analysis setting to the setting of Gaussian latent tree models. We again provide deterministic conditions on the underlying latent tree model that ensure our convex optimization-based heuristic successfully identifies the parameters and state dimensions of the model.
by James Saunderson.
S.M.
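The Saunderson abstract above refers to the minimum trace factor analysis heuristic: decompose a covariance into a low-rank positive semidefinite part plus a diagonal part, minimizing the trace of the former. The following is a minimal sketch of that convex program on synthetic data, using the cvxpy modeling library as an illustrative choice; the dimensions, random data, and rank threshold are assumptions, and this is not the thesis' code.

```python
# Hedged sketch of minimum trace factor analysis on a synthetic covariance
# Sigma = (low-rank PSD) + (diagonal). Toy sizes only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 8, 2                                # ambient dim., true subspace dim. (made up)
U = rng.normal(size=(n, r))
Sigma = U @ U.T + np.diag(rng.uniform(0.1, 0.5, size=n))

L = cp.Variable((n, n), PSD=True)          # subspace (low-rank) part
d = cp.Variable(n, nonneg=True)            # per-coordinate noise variances
prob = cp.Problem(cp.Minimize(cp.trace(L)), [L + cp.diag(d) == Sigma])
prob.solve()

eigvals = np.linalg.eigvalsh(L.value)
print("recovered rank (eigenvalues > 1e-6):", int((eigvals > 1e-6).sum()))
```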
Shewchun, John Marc 1972. "Constrained control using convex optimization". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/46471.
Boţ, Radu Ioan. "Conjugate duality in convex optimization". Berlin [u.a.] Springer, 2010. http://dx.doi.org/10.1007/978-3-642-04900-2.
Aggarwal, Varun. "Analog circuit optimization using evolutionary algorithms and convex optimization". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40525.
Includes bibliographical references (p. 83-88).
In this thesis, we analyze state-of-the-art techniques for analog circuit sizing and compare them on various metrics. We ascertain that a methodology which improves the accuracy of sizing without increasing the run time or the designer effort is a worthwhile contribution. We argue that the accuracy of geometric programming can be improved without adversely influencing the run time or increasing the designer's effort. This is facilitated by decomposing geometric programming modeling into two steps, which decouples the accuracy of the models from the run time of geometric programming. We design a new algorithm for producing accurate posynomial models for MOS transistor parameters, which is the first step of the decomposition. The new algorithm can generate posynomial models with a variable number of terms and real-valued exponents. The algorithm is a hybrid of a genetic algorithm and a convex optimization technique. We study the performance of the algorithm on artificially created benchmark problems. We show that the accuracy of posynomial models of MOS parameters is improved by a considerable amount by using the new algorithm. The new posynomial modeling algorithm can be used in any application of geometric programming and is not limited to MOS parameter modeling. In the last chapter, we discuss various ideas to improve the state of the art in circuit sizing.
by Varun Aggarwal.
S.M.
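The abstract above builds multi-term posynomial device models with a hybrid genetic/convex algorithm. As a much simpler illustration of the convex building block behind such modeling, the sketch below fits a single-term monomial model, which becomes linear in log-space and can be fit by ordinary least squares; the "transistor" data, variable names, and exponents are all invented for the example.

```python
# Hedged sketch: fit a one-term monomial model  gm ≈ c * w^a * l^b  by least
# squares in log-space. The thesis' posynomial models are multi-term and are
# fit with a GA/convex hybrid; this is only the simplest convex special case.
import numpy as np

rng = np.random.default_rng(2)
w = rng.uniform(0.2, 5.0, size=200)       # e.g. device width (made-up data)
l = rng.uniform(0.1, 1.0, size=200)       # e.g. device length (made-up data)
gm = 3.0 * w**0.9 * l**-0.4 * np.exp(0.05 * rng.normal(size=200))

# log(gm) = log(c) + a*log(w) + b*log(l): ordinary least squares in log-space.
A = np.column_stack([np.ones_like(w), np.log(w), np.log(l)])
coef, *_ = np.linalg.lstsq(A, np.log(gm), rcond=None)
c, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"fitted monomial: gm ≈ {c:.2f} * w^{a:.2f} * l^{b:.2f}")
```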
van den Berg, Ewout. "Convex optimization for generalized sparse recovery". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/16646.
Lin, Chin-Yee. "Interior point methods for convex optimization". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15044.
Niewoehner, Robert Jay. "Plant/controller optimization by convex methods". Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/28453.
This report presents the results of a three-phase effort to demonstrate the use of convex control design techniques in aeronautical applications. The first phase was the demonstration of a methodology by which classical aircraft controller design requirements could be translated into the weighting matrices for H infinity controller synthesis. The second phase extended that methodology to the design of mixed H2 / H infinity controllers. The third phase considered the problem of minimizing the size of aircraft control surfaces while meeting closed-loop dynamic performance requirements. Control sizing is a critical element in the design of Reduced Static Stability (RSS) aircraft. Inadequate control power places the vehicle in peril, while too much control power forfeits the benefits of RSS, resulting in poorer performance, increased weight, increased cost, increased drag, and increased observability. Non-heuristic methods have been needed by which the physical configuration and the accompanying controller can be designed directly from the flying qualities specifications. The optimization of the surfaces should be done while searching over the set of all controllers which, together in closed loop, satisfy the flying qualities requirements. This report presents a methodology which simultaneously optimizes both the physical configuration and the control system of a rigid body, using performance requirements which can be posed as Linear Matrix Inequalities.
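The Niewoehner report above poses flying-qualities requirements as Linear Matrix Inequalities. Purely as an illustration of what an LMI feasibility problem looks like (a toy Lyapunov stability certificate, not the report's aircraft model), here is a minimal sketch using the cvxpy modeling library; the dynamics matrix and the margin `eps` are made up.

```python
# Hedged LMI illustration: certify stability of xdot = A x by finding P > 0
# with A' P + P A < 0. Toy data only.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])               # made-up stable dynamics matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible (system certified stable):", prob.status == cp.OPTIMAL)
```

Design and performance constraints of the kind the report describes add further matrix inequalities to the same feasibility framework.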
Dautbegovic, Dino. "Convex Optimization Methods for System Identification". Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-34808.
Sou, Kin Cheong 1979. "Convex optimization methods for model reduction". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45872.
Includes bibliographical references (p. 153-161).
Model reduction and convex optimization are prevalent in science and engineering applications. In this thesis, convex optimization solution techniques to three different model reduction problems are studied. Parameterized reduced order modeling is important for rapid design and optimization of systems containing parameter dependent reducible sub-circuits such as interconnects and RF inductors. The first part of the thesis presents a quasi-convex optimization approach to solve the parameterized model order reduction problem for linear time-invariant systems. Formulation of the model reduction problem as a quasi-convex program allows the flexibility to enforce constraints such as stability and passivity in both non-parameterized and parameterized cases. Numerical results including the parameterized reduced modeling of a large RF inductor are given to demonstrate the practical value of the proposed algorithm. A majority of nonlinear model reduction techniques can be regarded as a two step procedure as follows. First the state dimension is reduced through a projection, and then the vector field of the reduced state is approximated for improved computation efficiency. Neither of the above steps has been thoroughly studied. The second part of this thesis presents a solution to a particular problem in the second step above, namely, finding an upper bound of the system input/output error due to nonlinear vector field approximation. The system error upper bounding problem is formulated as an L2 gain upper bounding problem of some feedback interconnection, to which the small gain theorem can be applied. A numerical procedure based on integral quadratic constraint analysis and a theoretical statement based on L2 gain analysis are given to provide the solution to the error bounding problem. The numerical procedure is applied to analyze the vector field approximation quality of a transmission line with diodes.
The application of Volterra series to the reduced modeling of nonlinear systems is hampered by the rapidly increasing computation cost with respect to the degrees of the polynomials used. On the other hand, while it is less general than the Volterra series model, the Wiener-Hammerstein model has been shown to be useful for accurate and compact modeling of certain nonlinear sub-circuits such as power amplifiers. The third part of the thesis presents a convex optimization solution technique to the reduction/identification of the Wiener-Hammerstein system. The identification problem is formulated as a non-convex quadratic program, which is solved by a semidefinite programming relaxation technique. It is demonstrated in the thesis that the formulation is robust with respect to noisy measurement, and the relaxation technique is oftentimes sufficient to provide good solutions. Simple examples are provided to demonstrate the use of the proposed identification algorithm.
by Kin Cheong Sou.
Ph.D.
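The abstract above mentions solving a non-convex quadratic program through a semidefinite programming relaxation. The sketch below shows that general technique on a deliberately generic problem (minimizing a quadratic form over sign vectors), not the thesis' Wiener-Hammerstein formulation; the data, sizes, and the rounding heuristic are illustrative assumptions, again using cvxpy.

```python
# Hedged, generic SDP-relaxation sketch: relax  min x'Qx  with x_i in {-1,+1}
# by lifting to X = x x', keeping X PSD with unit diagonal, and dropping rank-1.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n = 6
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2     # made-up symmetric objective

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), [cp.diag(X) == 1])
prob.solve()
print("SDP lower bound on the non-convex QP:", round(prob.value, 3))

# Heuristic feasible point: sign-round the leading eigenvector of X.
w, V = np.linalg.eigh(X.value)
x = np.sign(V[:, -1])
print("rounded objective value:", round(float(x @ Q @ x), 3))
```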
Modiba, Jacob Mantjitji. "An overview of sparse convex optimization". Diss., University of Pretoria, 2018. http://hdl.handle.net/2263/64352.
Dissertation (MSc)--University of Pretoria, 2018.
CAIR and STATOMET
Statistics
MSc
Unrestricted
Lewis, Naama. "ITEM RESPONSE MODELS AND CONVEX OPTIMIZATION". OpenSIUC, 2020. https://opensiuc.lib.siu.edu/dissertations/1782.
Fontaine, Xavier. "Sequential learning and stochastic optimization of convex functions". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM024.
In this thesis we study several machine learning problems that are all linked with the minimization of a noisy function, which will often be convex. Inspired by real-life applications, we focus on sequential learning problems, which consist in treating the data "on the fly", or in an online manner. The first part of this thesis is thus devoted to the study of three different sequential learning problems which all face the classical "exploration vs. exploitation" trade-off. Each of these problems consists in a situation where a decision maker has to take actions in order to maximize a reward or to evaluate a parameter under uncertainty, meaning that the rewards or the feedback of the possible actions are unknown and noisy. We demonstrate that all of these problems can be studied under the scope of stochastic convex optimization, and we propose and analyze algorithms to solve them. In the second part of this thesis we focus on the analysis of the Stochastic Gradient Descent algorithm, which is likely one of the most used stochastic optimization algorithms in machine learning. We provide an exhaustive analysis in the convex setting and in some non-convex situations by studying the associated continuous-time model, and obtain new optimal convergence results.
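For readers unfamiliar with the object analyzed in the second part of the abstract above, here is a minimal, self-contained sketch of stochastic gradient descent on a noisy convex (quadratic) function. The objective, noise level, and the classical 1/t step size are illustrative choices, not the thesis' setting.

```python
# Minimal SGD sketch on f(x) = 0.5 * ||x - x_star||^2 observed through noisy gradients.
import numpy as np

rng = np.random.default_rng(4)
d = 10
x_star = rng.normal(size=d)                  # unknown minimizer (made up)

x = np.zeros(d)
for t in range(1, 2001):
    g = (x - x_star) + 0.5 * rng.normal(size=d)   # unbiased noisy gradient
    x -= (1.0 / t) * g                            # decreasing 1/t step size
print("final error ||x - x*||:", round(float(np.linalg.norm(x - x_star)), 4))
```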
Inacio, Helder. "Convex relaxations for cubic polynomial problems". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47563.
Guzman, Paredes Cristobal. "Information, complexity and structure in convex optimization". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53577.
Hendrich, Christopher. "Proximal Splitting Methods in Nonsmooth Convex Optimization". Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.
Soheili, Azad Negar. "Elementary Algorithms for Solving Convex Optimization Problems". Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/353.
Grigas, Paul (Paul Edward). "Methods for convex optimization and statistical learning". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106683.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 219-225).
We present several contributions at the interface of first-order methods for convex optimization and problems in statistical machine learning. In the first part of this thesis, we present new results for the Frank-Wolfe method, with a particular focus on: (i) novel computational guarantees that apply for any step-size sequence, (ii) a novel adjustment to the basic algorithm to better account for warm-start information, and (iii) extensions of the computational guarantees that hold in the presence of approximate subproblem and/or gradient computations. In the second part of the thesis, we present a unifying framework for interpreting "greedy" first-order methods -- namely Frank-Wolfe and greedy coordinate descent -- as instantiations of the dual averaging method of Nesterov, and we discuss the implications thereof. In the third part of the thesis, we present an extension of the Frank-Wolfe method that is designed to induce near-optimal low-rank solutions for nuclear norm regularized matrix completion and, for more general problems, induces near-optimal "well-structured" solutions. We establish computational guarantees that trade off efficiency in computing near-optimal solutions with upper bounds on the rank of iterates. We then present extensive computational results that show significant computational advantages over existing related approaches, in terms of delivering low rank and low run-time to compute a target optimality gap. In the fourth part of the thesis, we analyze boosting algorithms in linear regression from the perspective of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression can be viewed as subgradient descent to minimize the maximum absolute correlation between features and residuals. We also propose a slightly modified boosting algorithm that yields an algorithm for the Lasso, and that computes the Lasso path. Our perspective leads to first-ever comprehensive computational guarantees for all of these boosting algorithms, which provide a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm, for any dataset. In the fifth and final part of the thesis, we present several related results in the contexts of boosting algorithms for logistic regression and the AdaBoost algorithm.
by Paul Grigas.
Ph. D.
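The first part of the Grigas abstract concerns the Frank-Wolfe (conditional gradient) method. As background, here is a textbook sketch of the classical algorithm minimizing a least-squares objective over the unit simplex; the problem data and the standard 2/(k+2) step size are generic illustrative choices, not results from the thesis.

```python
# Hedged sketch of classical Frank-Wolfe over the unit simplex.
import numpy as np

rng = np.random.default_rng(5)
m, n = 30, 15
A, b = rng.normal(size=(m, n)), rng.normal(size=m)

x = np.ones(n) / n                           # start inside the simplex
for k in range(200):
    grad = A.T @ (A @ x - b)
    s = np.zeros(n); s[np.argmin(grad)] = 1.0    # linear minimization oracle on the simplex
    gamma = 2.0 / (k + 2.0)                      # standard open-loop step size
    x = (1 - gamma) * x + gamma * s              # convex combination keeps feasibility
print("objective:", round(0.5 * float(np.linalg.norm(A @ x - b) ** 2), 4))
```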
Altschuler, Jason (Jason M. ). "Greed, hedging, and acceleration in convex optimization". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120409.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 153-156).
This thesis revisits the well-studied and practically motivated problem of minimizing a strongly convex, smooth function with first-order information. The first main message of the thesis is that, surprisingly, algorithms which are individually suboptimal can be combined to achieve accelerated convergence rates. This phenomenon can be intuitively understood as "hedging" between safe strategies (e.g. slowly converging algorithms) and aggressive strategies (e.g. divergent algorithms) since bad cases for the former are good cases for the latter, and vice versa. Concretely, we implement the optimal hedging by simply running Gradient Descent (GD) with prudently chosen stepsizes. This result goes against the conventional wisdom that acceleration is impossible without momentum. The second main message is a universality result for quadratic optimization. We show that, roughly speaking, "most" Krylov-subspace algorithms are asymptotically optimal (in the worst-case) and "most" quadratic functions are asymptotically worst-case functions (for all algorithms). From an algorithmic perspective, this goes against the conventional wisdom that accelerated algorithms require extremely careful parameter tuning. From a lower-bound perspective, this goes against the conventional wisdom that there are relatively few "worst functions in the world" and they have lots of structure. It also goes against the conventional wisdom that a quadratic function is easier to optimize when the initialization error is more concentrated on certain eigenspaces - counterintuitively, we show that so long as this concentration is not "pathologically" extreme, this only leads to faster convergence in the beginning iterations and is irrelevant asymptotically. Part I of the thesis shows the algorithmic side of this universality by leveraging tools from potential theory and harmonic analysis. The main result is a characterization of non-adaptive randomized Krylov-subspace algorithms which asymptotically achieve the so-called "accelerated rate" in the worst case. As a special case, this recovers the known fact that GD accelerates when inverse stepsizes are i.i.d. from the Arcsine distribution. This distribution has a remarkable "equalizing" property: every quadratic function is equally easy to optimize. We interpret this as "optimal hedging" since there is no worst-case function. Leveraging the equalizing property also provides other new insights including asymptotic isotropy of the iterates around the optimum, and uniform convergence guarantees for extending our analysis to l2. Part II of the thesis shows the lower-bound side of this universality by connecting quadratic optimization to the universality of orthogonal polynomials. We also characterize, for every finite number of iterations n, all worst-case quadratic functions for n iterations of any Krylov-subspace algorithm. Previously no tight constructions were known. (Note the classical construction of [Nemirovskii and Yudin, 1983] is only tight asymptotically.) As a corollary, this result also proves that randomness does not help Krylov-subspace algorithms. Combining the results in Parts I and II uncovers a duality between optimal Krylov-subspace algorithms and worst-case quadratic functions. It also shows new close connections between quadratic optimization, orthogonal polynomials, Gaussian quadrature, Jacobi operators, and their spectral measures. Part III of the thesis extends the algorithmic techniques in Part I to convex optimization.
We first show that running the aforementioned random GD algorithm accelerates on separable convex functions. This is the first convergence rate that exactly matches the classical quadratic-optimization lower bound of [Nemirovskii and Yudin, 1983] on any class of convex functions richer than quadratics. This provides partial evidence suggesting that convex optimization might be no harder than quadratic optimization. However, these techniques (provably) do not extend to general convex functions. This is roughly because they do not require all observed data to be consistent with a single valid function - we call this "stitching." We turn to a semidefinite programming formulation of worst-case rate from [Taylor et al., 2017] that ensures stitching. Using this we compute the optimal GD stepsize schedules for 1, 2, and 3 iterations, and show that they partially accelerate on general convex functions. These optimal schedules for convex optimization are remarkably different from the optimal schedules for quadratic optimization. The rate improves as the number of iterations increases, but the algebraic systems become increasingly complicated to solve and the general case eludes us.
by Jason Altschuler.
S.M.
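The abstract above states that plain gradient descent accelerates on quadratics when the inverse step sizes are drawn i.i.d. from the Arcsine distribution over the eigenvalue range. The sketch below is a toy reconstruction of that stated setup, not code from the thesis: it builds a random quadratic with spectrum in [mu, L], samples Arcsine-distributed inverse step sizes on that interval, and runs gradient descent. All sizes and the seed are assumptions, and individual iterations may temporarily overshoot (that is part of the "hedging" idea).

```python
# Toy reconstruction of "GD with Arcsine-distributed inverse stepsizes" on a quadratic.
import numpy as np

rng = np.random.default_rng(6)
d, mu, L = 30, 1.0, 10.0
eigs = np.linspace(mu, L, d)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
H = Q @ np.diag(eigs) @ Q.T                 # f(x) = 0.5 x'Hx, minimizer x* = 0
x = rng.normal(size=d)

n_iter = 100
u = rng.uniform(size=n_iter)
inv_steps = mu + (L - mu) * np.sin(np.pi * u / 2.0) ** 2   # Arcsine samples on [mu, L]
for lam in inv_steps:
    x = x - (1.0 / lam) * (H @ x)           # gradient step with step size 1/lam
# Expect the error to be many orders of magnitude below its initial value (~sqrt(d)).
print("final distance to the minimizer:", float(np.linalg.norm(x)))
```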
Lee, Yin Tat. "Faster algorithms for convex and combinatorial optimization". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104467.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 443-458).
In this thesis, we revisit three algorithmic techniques: sparsification, cutting and collapsing. We use them to obtain the following results on convex and combinatorial optimization:
--Linear Programming: We obtain the first improvement to the running time for linear programming in 25 years. The convergence rate of this randomized algorithm nearly matches the universal barrier for interior point methods. As a corollary, we obtain the first ... time randomized algorithm for solving the maximum flow problem on directed graphs with m edges and n vertices. This improves upon the previous fastest running time of achieved over 15 years ago by Goldberg and Rao.
--Maximum Flow Problem: We obtain one of the first almost-linear time randomized algorithms for approximating the maximum flow in undirected graphs. As a corollary, we improve the running time of a wide range of algorithms that use the computation of maximum flows as a subroutine.
--Non-Smooth Convex Optimization: We obtain the first nearly-cubic time randomized algorithm for convex problems under the black box model. As a corollary, this implies a polynomially faster algorithm for three fundamental problems in computer science: submodular function minimization, matroid intersection, and semidefinite programming.
--Graph Sparsification: We obtain the first almost-linear time randomized algorithm for spectrally approximating any graph by one with just a linear number of edges. This sparse graph approximately preserves all cut values of the original graph and is useful for solving a wide range of combinatorial problems. This algorithm improves all previous linear-sized constructions, which required at least quadratic time.
--Numerical Linear Algebra: Multigrid is an efficient method for solving large-scale linear systems which arise from graphs in low dimensions. It has been used extensively for 30 years in scientific computing. Unlike the previous approaches that make assumptions on the graphs, we give the first generalization of the multigrid that provably solves Laplacian systems of any graphs in nearly-linear expected time.
by Yin Tat Lee.
Ph. D.
Gupta, Swati Ph D. Massachusetts Institute of Technology. "Combinatorial structures in online and convex optimization". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 157-163).
Motivated by bottlenecks in algorithms across online and convex optimization, we consider three fundamental questions over combinatorial polytopes. First, we study the minimization of separable strictly convex functions over polyhedra. This problem is motivated by first-order optimization methods whose bottleneck relies on the minimization of an (often) separable, convex metric, known as the Bregman divergence. We provide a conceptually simple algorithm, Inc-Fix, in the case of submodular base polyhedra. For cardinality-based submodular polytopes, we show that Inc-Fix can be sped up to be the state-of-the-art method for minimizing uniform divergences. We show that the running time of Inc-Fix is independent of the convexity parameters of the objective function. The second question is concerned with the complexity of the parametric line search problem in the extended submodular polytope P: starting from a point inside P, how far can one move along a given direction while maintaining feasibility. This problem arises as a bottleneck in many algorithmic applications like the above-mentioned Inc-Fix algorithm and variants of the Frank-Wolfe method. One of the most natural approaches is to use the discrete Newton's method; however, no upper bound on the number of iterations for this method was known. We show a quadratic bound, resulting in a factor of n^6 reduction in the worst-case running time from the previous state-of-the-art. The analysis leads to interesting extremal questions on set systems and submodular functions. Next, we develop a general framework to simulate the well-known multiplicative weights update algorithm for online linear optimization over combinatorial strategies U in time polynomial in log |U|, using efficient approximate general counting oracles. We further show that efficient counting over the vertex set of any 0/1 polytope P implies efficient convex minimization over P. As a byproduct of this result, we can approximately decompose any point in a 0/1 polytope into a product distribution over its vertices. Finally, we compare the applicability and limitations of the above results in the context of finding Nash-equilibria in combinatorial two-player zero-sum games with bilinear loss functions. We prove structural results that can be used to find certain Nash-equilibria with a single separable convex minimization.
by Swati Gupta.
Ph. D.
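The abstract above builds on the multiplicative weights update algorithm for online linear optimization. For orientation, here is the rule in its simplest "experts" form over a small explicit set of strategies; the thesis' contribution is simulating it over exponentially large combinatorial sets via counting oracles, which this toy does not attempt. The number of strategies, rounds, and the learning rate are made-up parameters.

```python
# Hedged sketch of the multiplicative weights update (MWU) rule over N explicit strategies.
import numpy as np

rng = np.random.default_rng(7)
N, T, eta = 10, 500, 0.1                     # strategies, rounds, learning rate (made up)
weights = np.ones(N)
total_loss, cumulative = 0.0, np.zeros(N)

for t in range(T):
    p = weights / weights.sum()              # play the mixed strategy p
    losses = rng.uniform(size=N)             # losses revealed after playing
    total_loss += p @ losses
    cumulative += losses
    weights *= np.exp(-eta * losses)         # multiplicative update

print("algorithm loss:", round(total_loss, 1),
      "| best fixed strategy in hindsight:", round(float(cumulative.min()), 1))
```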
Li, Xinxin. "Some operator splitting methods for convex optimization". HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/43.
Lan, Guanghui. "Convex optimization under inexact first-order information". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29732.
Committee Chair: Arkadi Nemirovski; Committee Co-Chair: Alexander Shapiro; Committee Co-Chair: Renato D. C. Monteiro; Committee Member: Anatoli Jouditski; Committee Member: Shabbir Ahmed. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Kůdela, Jakub. "Advanced Decomposition Methods in Stochastic Convex Optimization". Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-403864.
Merckx, Keno. "Optimization and Realizability Problems for Convex Geometries". Doctoral thesis, Universite Libre de Bruxelles, 2019. https://dipot.ulb.ac.be/dspace/bitstream/2013/288673/4/TOC.pdf.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
Zhang, Hongyi Ph D. Massachusetts Institute of Technology. "Topics in non-convex optimization and learning". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121830.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 165-186).
Non-convex optimization and learning play an important role in data science and machine learning, yet so far they still elude our understanding in many aspects. In this thesis, I study two important aspects of non-convex optimization and learning: Riemannian optimization and deep neural networks. In the first part, I develop iteration complexity analysis for Riemannian optimization, i.e., optimization problems defined on Riemannian manifolds. Through bounding the distortion introduced by the metric curvature, iteration complexity of Riemannian (stochastic) gradient descent methods is derived. I also show that some fast first-order methods in Euclidean space, such as Nesterov's accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG), have Riemannian counterparts that are also fast under certain conditions. In the second part, I challenge two common practices in deep learning, namely empirical risk minimization (ERM) and normalization. Specifically, I show (1) training on convex combinations of samples improves model robustness and generalization, and (2) a good initialization is sufficient for training deep residual networks without normalization. The method in (1), called mixup, is motivated by a data-dependent Lipschitzness regularization of the network. The method in (2), called ZeroInit, makes the network update scale invariant to its depth at initialization.
by Hongyi Zhang.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences
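The "training on convex combinations of samples" idea mentioned in the abstract above (mixup) has a very small core. The following framework-agnostic sketch shows that core operation on a toy batch; the Beta-distributed mixing coefficient follows the usual recipe, and the helper name, batch contents, and `alpha` value are illustrative assumptions rather than the thesis' implementation.

```python
# Hedged sketch of the mixup batch transformation: replace a batch by random
# convex combinations of its examples and of their (one-hot) labels.
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return convex combinations of (x, y) with a shuffled copy of itself."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)             # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]    # labels assumed one-hot / continuous
    return x_mix, y_mix

x = np.arange(12, dtype=float).reshape(4, 3)     # toy batch of 4 examples
y = np.eye(2)[[0, 1, 0, 1]]                      # toy one-hot labels
print(mixup_batch(x, y, rng=np.random.default_rng(8)))
```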
Chen, Jieqiu. "Convex relaxations in nonconvex and applied optimization". Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/654.
Rätsch, Gunnar. "Robust boosting via convex optimization theory and applications /". [S.l.] : [s.n.], 2001. http://pub.ub.uni-potsdam.de/2002/0008/raetsch.ps.
Liberti, Leo Sergio. "Reformulation and convex relaxation techniques for global optimization". Thesis, Imperial College London, 2004. http://hdl.handle.net/10044/1/11807.
Lin, Tim Tai-Yi. "Primary estimation with sparsity-promoting bi-convex optimization". Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/55900.
Science, Faculty of
Earth, Ocean and Atmospheric Sciences, Department of
Graduate
Noel, Adam Josiah Gerald. "Convex sensing-reporting optimization for cooperative spectrum sensing". Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/36838.
Herr, Katrin [Verfasser]. "Core Sets and Symmetric Convex Optimization / Katrin Herr". München : Verlag Dr. Hut, 2013. http://d-nb.info/1045125504/34.
Chandrasekaran, Venkat. "Convex optimization methods for graphs and statistical modeling". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66002.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 209-220).
An outstanding challenge in many problems throughout science and engineering is to succinctly characterize the relationships among a large number of interacting entities. Models based on graphs form one major thrust in this thesis, as graphs often provide a concise representation of the interactions among a large set of variables. A second major emphasis of this thesis is classes of structured models that satisfy certain algebraic constraints. The common theme underlying these approaches is the development of computational methods based on convex optimization, which are in turn useful in a broad array of problems in signal processing and machine learning. The specific contributions are as follows:
-- We propose a convex optimization method for decomposing the sum of a sparse matrix and a low-rank matrix into the individual components. Based on new rank-sparsity uncertainty principles, we give conditions under which the convex program exactly recovers the underlying components.
-- Building on the previous point, we describe a convex optimization approach to latent variable Gaussian graphical model selection. We provide theoretical guarantees of the statistical consistency of this convex program in the high-dimensional scaling regime in which the number of latent/observed variables grows with the number of samples of the observed variables. The algebraic varieties of sparse and low-rank matrices play a prominent role in this analysis.
-- We present a general convex optimization formulation for linear inverse problems, in which we have limited measurements in the form of linear functionals of a signal or model of interest. When these underlying models have algebraic structure, the resulting convex programs can be solved exactly or approximately via semidefinite programming. We provide sharp estimates (based on computing certain Gaussian statistics related to the underlying model geometry) of the number of generic linear measurements required for exact and robust recovery in a variety of settings.
-- We present convex graph invariants, which are invariants of a graph that are convex functions of the underlying adjacency matrix. Graph invariants characterize structural properties of a graph that do not depend on the labeling of the nodes; convex graph invariants constitute an important subclass, and they provide a systematic and unified computational framework based on convex optimization for solving a number of interesting graph problems.
We emphasize a unified view of the underlying convex geometry common to these different frameworks. We describe applications of both these methods to problems in financial modeling and network analysis, and conclude with a discussion of directions for future research.
by Venkat Chandrasekaran.
Ph.D.
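The first contribution listed in the abstract above is the convex decomposition of a matrix into sparse and low-rank parts. The sketch below shows the standard convex program of that type (elementwise l1 norm plus weighted nuclear norm) on synthetic data using cvxpy; the weight, sizes, and thresholds are assumptions, and the exact-recovery conditions studied in the thesis are not checked here.

```python
# Hedged sketch: split M into sparse S plus low-rank L by convex optimization.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(9)
n, r = 15, 2
L_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))                 # rank-2 part
S_true = (rng.uniform(size=(n, n)) < 0.05) * rng.normal(size=(n, n)) * 5   # sparse part
M = L_true + S_true

S = cp.Variable((n, n))
L = cp.Variable((n, n))
lam = 1.0 / np.sqrt(n)                        # common heuristic weight (an assumption here)
objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
prob = cp.Problem(objective, [S + L == M])
prob.solve()

print("recovered rank:", np.linalg.matrix_rank(L.value, tol=1e-3),
      "| nonzeros in S:", int((np.abs(S.value) > 1e-3).sum()))
```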
Lintereur, Beau V. (Beau Vincent) 1973. "Constrained H̳₂ design via convex optimization with applications". Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50628.
In title on t.p., double-underscored "H" appears in script.
Includes bibliographical references (p. 133-138).
A convex optimization controller design method is presented which minimizes the closed-loop H2 norm, subject to constraints on the magnitude of closed-loop transfer functions and transient responses due to specified inputs. This method uses direct parameter optimization of the closed-loop Youla or Q-parameter where the variables are the coefficients of a stable orthogonal basis. The basis is constructed using the recently rediscovered Generalized Orthonormal Basis Functions (GOBF) that have found application in system identification. It is proposed that many typical control specifications including robustness to modeling error and gain and phase margins can be posed with two simple constraints in the frequency and time domain. With some approximation, this formulation allows the controller design problem to be cast as a quadratic program. Two example applications demonstrate the practical utility of this method for real systems. First this method is applied to the roll axis of the EOS-AM1 spacecraft attitude control system, with a set of performance and robustness specifications. The constrained H2 controller simultaneously meets the specifications where previous model-based control studies failed. Then a constrained H2 controller is designed for an active vibration isolation system for a spaceborne optical technology demonstration test stand. Mixed specifications are successfully incorporated into the design and the results are verified with experimental frequency data.
by Beau V. Lintereur.
S.M.
Anderson, James David. "Dynamical system decomposition and analysis using convex optimization". Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:624001be-28d5-4837-a7d8-2222e270e658.
Wytock, Matt. "Optimizing Optimization: Scalable Convex Programming with Proximal Operators". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/785.
Goulet, Vincent. "Four-bar linkage synthesis using non-convex optimization". Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27721.
This thesis presents a method to automatically synthesize four-bar linkages. Software implementing the method was developed as part of a generative design initiative at Autodesk. The software takes a path as input and computes the parameters of a four-bar linkage able to replicate the same path. This path generation problem is solved using non-convex optimization. The problem is modeled with quadratic constraints and real variables. A special redundant constraint greatly improves the performance of the method. Experiments show that the software is faster and more precise than existing approaches.
Lin, Beldon Chi. "Integrated vehicle and mission design using convex optimization". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127070.
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 177-183).
Convex optimization is used to solve the simultaneous vehicle and mission design problem. The objective of this work is to develop convex optimization architectures that allow both the vehicle and the mission to be designed together. They allow the problem to be solved very quickly while maintaining fidelity similar to comparable methods. Multiple architectures are formulated, implemented, and evaluated for a sounding rocket design problem and a hydrogen aircraft design problem. The methodology proves successful in designing the sounding rocket while taking into account the optimal trajectory and control strategy, and it extends to a multi-mission design case. The hydrogen aircraft was also successfully designed, allowing the cryogenic tank design to be chosen in conjunction with the mission profile. For the rocket design problem, the vehicle and mission designs can be combined through either an alternating or a fully integrated approach; the integrated architecture converges to a solution in 50% of the computation time while reaching a similar solution. For the hydrogen aircraft case, a decrease of more than 50% in fuel burn was achieved compared to regular kerosene with an integrated optimization approach. Future work includes studying the convergence properties as well as increasing the robustness of the architectures.
by Beldon Chi Lin.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
Sano, Yoshio. "Matroids on convex geometries: Subclasses, operations, and optimization". 京都大学 (Kyoto University), 2010. http://hdl.handle.net/2433/120626.
Li, Jueyou. "Distributed and parallel methods for structural convex optimization". Thesis, Federation University Australia, 2014. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/81614.
Doctor of Philosophy
Bengtsson, Gabriel, and Jesper Larsson. "Source Localization by Inverse Diffusion and Convex Optimization". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230738.
The purpose of this project was to implement an inverse diffusion algorithm to locate the sources of an emitted substance. This algorithm has yielded successful results when applied to biological cell detection, and it has been suggested that the run time could be greatly reduced if adaptations for a computation graph framework are made. This would automate calculations of gradients and allow for faster execution on a graphics processing unit. The algorithm implementation was realized in TensorFlow, which is primarily a machine learning oriented programming library. Computer-generated biological test images were then used to evaluate the performance using regular image analysis software and accuracy metrics. Comparisons reveal that the TensorFlow implementation of the algorithm can match the accuracy metrics of traditional implementations of the same algorithm. Viewed in a broader scope, this serves as an example highlighting the possibility of using computation graph frameworks to solve large-scale optimization problems, and more specifically inverse problems.
Nguyen, Thanh Tan. "Selected non-convex optimization problems in machine learning". Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/200748/1/Thanh_Nguyen_Thesis.pdf.
Cho, Myung. "Convex and non-convex optimizations for recovering structured data: algorithms and analysis". Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5922.
Yang, Yi. "Sequential convex approximations of chance constrained programming /". View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?IELM%202008%20YANG.
Banerjee, Nirjhar. "Random obtuse triangles and convex quadrilaterals". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54212.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 83-85).
We intend to discuss in detail two well-known geometrical probability problems. The first one deals with finding the probability that a random triangle is obtuse in nature. We initially discuss the various ways of choosing a random triangle. The problem is at first analyzed based on random angles (adding to 180 degrees) and random sides (obeying the triangle inequality), which is a direct modification of the Broken Stick Problem. We then study the effect of shape on the probability that when three random points are chosen inside a figure of that shape they will form an obtuse triangle. Literature survey reveals the existence of analytical formulae only in the cases of the square, circle and rectangle. We used Monte Carlo simulation to solve this problem in various shapes. We intend to show by means of simulation that the given probability will reach its minimum value when the random points are taken inside a circle. We then introduce the concept of Random Walk in Triangles and show that the probability that a triangle formed during the process is obtuse is itself random. We also propose the idea of Differential Equation in Triangle Space and study the variation of angles during this dynamic process. We then propose to extend this to the problem of calculating the probability that the quadrilateral formed by four random points is convex. The effects of shape are distinctly different from those obtained in the random triangle problem. The effect of true random numbers and normally generated pseudorandom numbers is also compared for both the problems considered.
by Nirjhar Banerjee.
S.M.
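The abstract above relies on Monte Carlo simulation to estimate the probability that three random points form an obtuse triangle. A minimal sketch of that kind of experiment, shown here for the unit square only (the thesis compares several shapes), is given below; the sample size and seed are arbitrary choices, and this is not the author's code.

```python
# Hedged Monte Carlo sketch: P(three uniform points in the unit square form an obtuse triangle).
import numpy as np

def obtuse_probability(n_trials=200_000, rng=np.random.default_rng(10)):
    pts = rng.uniform(size=(n_trials, 3, 2))          # three random points per trial
    a = np.linalg.norm(pts[:, 0] - pts[:, 1], axis=1)
    b = np.linalg.norm(pts[:, 1] - pts[:, 2], axis=1)
    c = np.linalg.norm(pts[:, 2] - pts[:, 0], axis=1)
    sides = np.sort(np.stack([a, b, c], axis=1), axis=1)
    # Obtuse iff the squared longest side exceeds the sum of squares of the other two.
    return np.mean(sides[:, 2] ** 2 > sides[:, 0] ** 2 + sides[:, 1] ** 2)

print("P(obtuse | unit square) ≈", round(float(obtuse_probability()), 4))
```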
Umenberger, Jack. "Convex Identification of Stable Dynamical Systems". Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17321.
Andramonov, Mikhail. "Global minimization of some classes of generalized convex functions". Thesis, Federation University Australia, 2001. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/164850.