Dissertations / Theses on the topic 'Convex optimization'

Consult the top 50 dissertations / theses for your research on the topic 'Convex optimization.'


1

Joulin, Armand. "Convex optimization for cosegmentation." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00826236.

Full text
Abstract:
The apparent ease with which humans perceive their surroundings suggests that the process involved is partly mechanical and therefore does not require a high degree of thought. This observation suggests that our visual perception of the world can be simulated on a computer. Computer vision is the research field devoted to the problem of creating a form of visual perception for computers. The computing power of the machines of the 1950s did not allow the processing and analysis of the visual data required to build such a virtual visual perception; only recently have computing power and storage capacity allowed this field to truly emerge. In two decades, computer vision has made it possible to address practical and industrial problems such as detecting faces, spotting suspicious behaviour in a crowd, or finding manufacturing defects on production lines. By contrast, little progress has been made towards a virtual visual perception that is not specific to a given task, and the community still faces fundamental problems. One of these problems is to segment a visual stimulus or an image into meaningful regions, objects or actions. Scene segmentation is natural for humans and essential for fully understanding one's environment, yet it is extremely difficult to reproduce on a computer because there is no clear definition of a "meaningful" region: depending on the scene or the situation, a region may have different interpretations. In a street scene, distinguishing a pedestrian may be considered important, while the pedestrian's clothes need not be; in a scene from a fashion show, a garment becomes an important element and hence a meaningful region. Here we focus on this segmentation problem and approach it from a particular angle in order to avoid this fundamental difficulty. We treat segmentation as a weakly supervised learning problem: instead of segmenting images according to some predefined notion of "meaningful" regions, we develop methods that simultaneously segment a set of images into regions that appear regularly. We thus define a "meaningful" region from a statistical point of view: the regions that appear regularly across the given set of images. To this end we design models whose scope goes beyond the application to vision. Our approach is rooted in statistical machine learning, whose goal is to design efficient methods for extracting and/or learning recurring patterns in data sets, a field that has recently gained great popularity with the growth in the number and size of available databases. We focus on methods designed to discover the "hidden" information in a data set from incomplete or missing annotations. Finally, our work draws on numerical optimization in order to design efficient algorithms suited to our problems.
In particular, we use and adapt recently developed tools to relax complex combinatorial problems into convex problems for which the optimal solution is guaranteed to be found. We also illustrate the quality of our formulations and algorithms on problems drawn from fields other than computer vision; in particular, we show that our work can be applied to text classification and to cell biology.
2

Rätsch, Gunnar. "Robust boosting via convex optimization." Phd thesis, Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2005/39/.

Full text
Abstract:
In this thesis we consider statistical learning problems. Learning machines extract information from a given set of training examples so that they are able to predict properties of previously unseen examples, for instance a class label. We consider the case in which the resulting classification or regression rule is composed of simple rules, the base hypotheses. The so-called boosting algorithms iteratively generate a weighted sum of base hypotheses that predicts well on unseen examples.
The thesis addresses the following issues:

o The statistical learning theory suited to the analysis of boosting methods. We study learning-theoretic guarantees for bounding the prediction quality on unseen examples. Recently, so-called large-margin classification techniques, in particular boosting and support vector machines, have emerged as a practical result of this theory. A large margin implies a high prediction quality of the decision rule. We therefore analyze how large the margin is in boosting and propose an improved algorithm that efficiently generates rules with maximum margin.

o What is the connection between boosting and techniques from convex optimization?
To analyze the properties of the resulting classification or regression rules, it is very important to understand whether and under which conditions iterative algorithms such as boosting converge. We show that such algorithms can be used to solve very large constrained optimization problems whose solutions can be characterized well. To this end, connections to the field of convex optimization are identified and exploited to give convergence guarantees for a large family of boosting-like algorithms.

o Can boosting be made robust against measurement errors and outliers in the data?
One problem of previous boosting methods is their relatively high sensitivity to measurement inaccuracies and errors in the training data. To remedy this, the so-called 'soft margin' idea, already used in support vector learning, is carried over to boosting. This leads to theoretically well-motivated, regularized algorithms that exhibit a high degree of robustness.

o How can the applicability of boosting be extended to regression problems?
Boosting methods were originally developed for classification problems. To extend their applicability to regression problems, the previous convergence results are used and new boosting-like algorithms for regression are developed. We show that these algorithms have good theoretical and practical properties.

o Is boosting applicable in practice?
The theoretical results presented are accompanied by simulation results, either to illustrate particular properties of the algorithms or to show that they indeed work well in practice and can be applied directly. The practical relevance of the developed methods is illustrated by the analysis of chaotic time series and by industrial applications such as a power consumption monitoring system and the development of new drugs.
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:

o The statistical learning theory framework for analyzing boosting methods.
We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.

o How can boosting methods be related to mathematical optimization techniques?
To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.

o How to make Boosting noise robust?
One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.

o How to adapt boosting to regression problems?
Boosting methods are originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties.

o Can boosting techniques be useful in practice?
The presented theoretical results are guided by simulation results either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis and a drug discovery process.

---
Note:
The author was awarded the Michelson Prize of the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best doctoral thesis of 2001/2002.
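A minimal, self-contained sketch of the maximum-margin linear program behind the boosting view described in this abstract is given below; the base-hypothesis outputs, labels and solver are illustrative assumptions, and the soft-margin and column-generation machinery developed in the thesis is omitted.

    import numpy as np
    from scipy.optimize import linprog

    # Illustrative sketch only (not code from the thesis): the maximum-margin linear
    # program for a small, fixed set of base hypotheses. H[i, j] is the +/-1 output of
    # hypothesis j on example i; data, labels and sizes are synthetic assumptions.
    rng = np.random.default_rng(0)
    n_samples, n_hyp = 40, 6
    H = np.sign(rng.standard_normal((n_samples, n_hyp)))
    y = np.sign(H[:, :3].sum(axis=1) + 0.1)               # labels correlated with the first hypotheses

    # variables: [alpha_1 .. alpha_m, rho]; maximize the margin rho <=> minimize -rho
    c = np.r_[np.zeros(n_hyp), -1.0]
    A_ub = np.c_[-(y[:, None] * H), np.ones(n_samples)]   # rho - y_i * (H alpha)_i <= 0
    b_ub = np.zeros(n_samples)
    A_eq = np.r_[np.ones(n_hyp), 0.0][None, :]            # the weights alpha sum to one
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_hyp + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("maximum achievable margin:", -res.fun)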
3

Nekooie, Batool. "Convex optimization involving matrix inequalities." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13880.

Full text
4

Jangam, Ravindra nath vijay kumar. "BEAMFORMING TECHNIQUES USING CONVEX OPTIMIZATION." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-33934.

Full text
Abstract:
The thesis analyses and validates beamforming methods using convex optimization. CVX, a Matlab-supported tool for convex optimization, has been used to develop the concept. An algorithm is designed that identifies an appropriate system by varying parameters such as the number of antennas and the passband and stopband widths of the beamformer. The beamformer is evaluated by minimizing the design error in the least-squares and infinity norms, and a plot of the optimal values obtained with the two norms reveals a trade-off between them. Convex optimization is also applied to a beamformer with a double passband, which demonstrates the flexibility of the approach. As an extension, a filter with an arbitrary stopband is designed: a constraint forces the stopband response to stay below an upper bounding line, specified in dB, that varies over the stopband. The feasibility of the beamformer is then examined by varying parameters such as the number of antennas, the arbitrary upper bounds, and the stopband and passband widths, showing that a beamformer can be designed flexibly, as desired.
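The norm-minimization designs described above can be prototyped in a few lines with a modeling tool. The following sketch uses Python with CVXPY rather than the Matlab CVX package used in the thesis and assumes an illustrative ten-element, half-wavelength-spaced array: it minimizes the worst-case stopband response subject to unit gain in the look direction.

    import numpy as np
    import cvxpy as cp

    # Illustrative sketch (Python/CVXPY instead of the Matlab CVX used in the thesis):
    # minimax beamformer weights for an assumed 10-element, half-wavelength-spaced array.
    n = 10
    theta_pass = 0.0                                            # look direction, degrees
    theta_stop = np.r_[np.arange(-90, -20), np.arange(20, 91)]  # assumed stopband angles

    def steering(theta_deg):
        k = np.arange(n)
        return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

    A_stop = np.array([steering(t) for t in theta_stop])
    w = cp.Variable(n, complex=True)
    objective = cp.Minimize(cp.max(cp.abs(A_stop @ w)))       # worst-case stopband response
    constraints = [steering(theta_pass) @ w == 1]              # unit gain in the look direction
    prob = cp.Problem(objective, constraints)
    prob.solve()
    print("peak stopband level (dB):", 20 * np.log10(prob.value))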
5

Saunderson, James (James Francis). "Subspace identification via convex optimization." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66475.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 88-92).
In this thesis we consider convex optimization-based approaches to the classical problem of identifying a subspace from noisy measurements of a random process taking values in the subspace. We focus on the case where the measurement noise is component-wise independent, known as the factor analysis model in statistics. We develop a new analysis of an existing convex optimization-based heuristic for this problem. Our analysis indicates that in high-dimensional settings, where both the ambient dimension and the dimension of the subspace to be identified are large, the convex heuristic, minimum trace factor analysis, is often very successful. We provide simple deterministic conditions on the underlying 'true' subspace under which the convex heuristic provably identifies the correct subspace. We also consider the performance of minimum trace factor analysis on 'typical' subspace identification problems, that is problems where the underlying subspace is chosen randomly from subspaces of a particular dimension. In this setting we establish conditions on the ambient dimension and the dimension of the underlying subspace under which the convex heuristic identifies the subspace correctly with high probability. We then consider a refinement of the subspace identification problem where we aim to identify a class of structured subspaces arising from Gaussian latent tree models. More precisely, given the covariance at the finest scale of a Gaussian latent tree model, and the tree that indexes the model, we aim to learn the parameters of the model, including the state dimensions of each of the latent variables. We do so by extending the convex heuristic, and our analysis, from the factor analysis setting to the setting of Gaussian latent tree models. We again provide deterministic conditions on the underlying latent tree model that ensure our convex optimization-based heuristic successfully identifies the parameters and state dimensions of the model.
by James Saunderson.
S.M.
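A minimal sketch of the minimum trace factor analysis heuristic discussed above, written with CVXPY on synthetic data (the dimensions, data and solver are assumptions, not material from the thesis):

    import numpy as np
    import cvxpy as cp

    # Illustrative sketch of minimum trace factor analysis on synthetic data: split a
    # covariance into a low-rank positive semidefinite part plus a nonnegative diagonal part.
    rng = np.random.default_rng(0)
    n, r = 30, 3
    U = rng.standard_normal((n, r))
    Sigma = U @ U.T + np.diag(rng.uniform(0.1, 1.0, n))   # true low-rank plus diagonal

    L = cp.Variable((n, n), PSD=True)
    d = cp.Variable(n, nonneg=True)
    prob = cp.Problem(cp.Minimize(cp.trace(L)), [L + cp.diag(d) == Sigma])
    prob.solve()
    print("approximate rank of the recovered L:", np.linalg.matrix_rank(L.value, tol=1e-3))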
6

Shewchun, John Marc 1972. "Constrained control using convex optimization." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/46471.

Full text
7

Boţ, Radu Ioan. "Conjugate duality in convex optimization." Berlin [u.a.] Springer, 2010. http://dx.doi.org/10.1007/978-3-642-04900-2.

Full text
8

Aggarwal, Varun. "Analog circuit optimization using evolutionary algorithms and convex optimization." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40525.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 83-88).
In this thesis, we analyze state-of-the-art techniques for analog circuit sizing and compare them on various metrics. We ascertain that a methodology which improves the accuracy of sizing without increasing the run time or the designer effort is a contribution. We argue that the accuracy of geometric programming can be improved without adversely influencing the run time or increasing the designer's effort. This is facilitated by decomposition of geometric programming modeling into two steps, which decouples the accuracy of the models from the run time of geometric programming. We design a new algorithm for producing accurate posynomial models for MOS transistor parameters, which is the first step of the decomposition. The new algorithm can generate posynomial models with a variable number of terms and real-valued exponents. The algorithm is a hybrid of a genetic algorithm and a convex optimization technique. We study the performance of the algorithm on artificially created benchmark problems. We show that the accuracy of posynomial models of MOS parameters is improved by a considerable amount by using the new algorithm. The new posynomial modeling algorithm can be used in any application of geometric programming and is not limited to MOS parameter modeling. In the last chapter, we discuss various ideas to improve the state-of-the-art in circuit sizing.
by Varun Aggarwal.
S.M.
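For orientation, a minimal geometric program solved in log-log convex form with CVXPY is sketched below; the variables and constraints are purely illustrative and unrelated to the thesis's transistor sizing models.

    import cvxpy as cp

    # Illustrative geometric program (unrelated to the thesis's transistor models):
    # minimize a posynomial subject to a posynomial inequality and a monomial equality.
    x = cp.Variable(pos=True)
    y = cp.Variable(pos=True)
    z = cp.Variable(pos=True)

    objective = cp.Minimize(x / y + 2.0 * y * z)          # posynomial objective
    constraints = [2.0 * x * z + x / (y * z) <= 1.0,      # posynomial <= 1
                   x * y == 0.5]                          # monomial equality
    prob = cp.Problem(objective, constraints)
    prob.solve(gp=True)                                   # solved in log-log convex form
    print(x.value, y.value, z.value)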
9

van, den Berg Ewout. "Convex optimization for generalized sparse recovery." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/16646.

Full text
Abstract:
The past decade has witnessed the emergence of compressed sensing as a way of acquiring sparsely representable signals in a compressed form. These developments have greatly motivated research in sparse signal recovery, which lies at the heart of compressed sensing, and which has recently found its use in altogether new applications. In the first part of this thesis we study the theoretical aspects of joint-sparse recovery by means of sum-of-norms minimization, and the ReMBo-l₁ algorithm, which combines boosting techniques with l₁-minimization. For the sum-of-norms approach we derive necessary and sufficient conditions for recovery, by extending existing results to the joint-sparse setting. We focus in particular on minimization of the sum of l₁, and l₂ norms, and give concrete examples where recovery succeeds with one formulation but not with the other. We base our analysis of ReMBo-l₁ on its geometrical interpretation, which leads to a study of orthant intersections with randomly oriented subspaces. This work establishes a clear picture of the mechanics behind the method, and explains the different aspects of its performance. The second part and main contribution of this thesis is the development of a framework for solving a wide class of convex optimization problems for sparse recovery. We provide a detailed account of the application of the framework on several problems, but also consider its limitations. The framework has been implemented in the SPGL1 algorithm, which is already well established as an effective solver. Numerical results show that our algorithm is state-of-the-art, and compares favorably even with solvers for the easier---but less natural---Lagrangian formulations. The last part of this thesis discusses two supporting software packages: Sparco, which provides a suite of test problems for sparse recovery, and Spot, a Matlab toolbox for the creation and manipulation of linear operators. Spot greatly facilitates rapid prototyping in sparse recovery and compressed sensing, where linear operators form the elementary building blocks. Following the practice of reproducible research, all code used for the experiments and generation of figures is available online at http://www.cs.ubc.ca/labs/scl/thesis/09vandenBerg/.
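The basis pursuit denoise formulation at the heart of SPGL1, minimize ||x||_1 subject to ||Ax - b||_2 <= sigma, can be written down directly. The sketch below sets it up on synthetic data with a generic modeling tool purely for illustration; it does not use or reproduce the SPGL1 algorithm itself.

    import numpy as np
    import cvxpy as cp

    # Illustrative basis pursuit denoise problem on synthetic data, solved with a generic
    # modeling tool; this is only the formulation SPGL1 targets, not the SPGL1 method itself.
    rng = np.random.default_rng(1)
    m, n, k = 50, 200, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true + 0.01 * rng.standard_normal(m)
    sigma = 0.1                                           # assumed noise-level parameter

    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm2(A @ x - b) <= sigma])
    prob.solve()
    print("recovered support:", np.flatnonzero(np.abs(x.value) > 1e-3))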
10

Lin, Chin-Yee. "Interior point methods for convex optimization." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15044.

Full text
11

Niewoehner, Robert Jay. "Plant/controller optimization by convex methods." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/28453.

Full text
Abstract:
Approved for public release; distribution is unlimited
This report presents the results of a three-phase effort to demonstrate the use of convex control design techniques in aeronautical applications. The first phase was the demonstration of a methodology by which classical aircraft controller design requirements could be translated into the weighting matrices for H-infinity controller synthesis. The second phase extended that methodology to the design of mixed H2/H-infinity controllers. The third phase considered the problem of minimizing the size of aircraft control surfaces while meeting closed-loop dynamic performance requirements. Control sizing is a critical element in the design of Reduced Static Stability (RSS) aircraft. Inadequate control power places the vehicle in peril, while too much control power forfeits the benefits of RSS, resulting in poorer performance, increased weight, increased cost, increased drag, and increased observability. Non-heuristic methods are required by which the physical configuration and the accompanying controller can be designed directly from the flying-qualities specifications. The optimization of the surfaces should be done while searching over the set of all controllers which, together in closed loop, satisfy the flying-qualities requirements. This report presents a methodology which simultaneously optimizes both the physical configuration and the control system of a rigid body, using performance requirements which can be posed as Linear Matrix Inequalities.
12

Dautbegovic, Dino. "Convex Optimization Methods for System Identification." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-34808.

Full text
Abstract:
The extensive use of a least-squares problem formulation in many fields is partly motivated by the existence of an analytic solution formula, which makes the theory comprehensible, readily applicable, and easily embedded in computer-aided design or analysis tools. While the mathematics behind convex optimization has been studied for about a century, several recent research developments have stimulated new interest in the topic. Convex optimization, being a special class of mathematical optimization problems, can be considered a generalization of both least-squares and linear programming. As in the case of a linear programming problem, there is in general no simple analytical formula for the solution of a convex optimization problem. There exist, however, efficient methods and software implementations for solving a large class of convex problems. The challenge, and the state of the art in using convex optimization, lies in recognizing and formulating the problem. The main goal of this thesis is to investigate the potential advantages and benefits of convex optimization techniques in the field of system identification. The primary work focuses on parametric discrete-time system identification models in which we assume or choose a specific model structure and try to estimate the model parameters for best fit using experimental input-output (IO) data. Developing a working knowledge of convex optimization and treating the system identification problem as a convex optimization problem allows us to reduce the uncertainties in the parameter estimation. This is achieved by reflecting prior knowledge about the system in terms of constraint functions in the least-squares formulation.
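A minimal sketch of the idea described above, namely least-squares identification with prior knowledge expressed as constraints, follows; the FIR model structure, the nonnegativity and monotone-decay constraints, and the synthetic data are illustrative assumptions rather than the models treated in the thesis.

    import numpy as np
    import cvxpy as cp

    # Illustrative sketch: identify an FIR model by least squares while encoding prior
    # knowledge (nonnegative, monotonically decaying impulse response) as constraints.
    # The model structure, constraints and data are assumptions, not the thesis's models.
    rng = np.random.default_rng(0)
    N, n = 400, 10
    u = rng.standard_normal(N)
    h_true = np.exp(-0.5 * np.arange(n))
    y = np.convolve(u, h_true)[:N] + 0.05 * rng.standard_normal(N)

    Phi = np.column_stack([np.r_[np.zeros(i), u[:N - i]] for i in range(n)])  # y ~ Phi @ h
    h = cp.Variable(n)
    constraints = [h >= 0, h[1:] <= h[:-1]]
    cp.Problem(cp.Minimize(cp.sum_squares(Phi @ h - y)), constraints).solve()
    print("estimated impulse response:", np.round(h.value, 3))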
13

Sou, Kin Cheong 1979. "Convex optimization methods for model reduction." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45872.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 153-161).
Model reduction and convex optimization are prevalent in science and engineering applications. In this thesis, convex optimization solution techniques to three different model reduction problems are studied. Parameterized reduced order modeling is important for rapid design and optimization of systems containing parameter dependent reducible sub-circuits such as interconnects and RF inductors. The first part of the thesis presents a quasi-convex optimization approach to solve the parameterized model order reduction problem for linear time-invariant systems. Formulation of the model reduction problem as a quasi-convex program allows the flexibility to enforce constraints such as stability and passivity in both non-parameterized and parameterized cases. Numerical results including the parameterized reduced modeling of a large RF inductor are given to demonstrate the practical value of the proposed algorithm. A majority of nonlinear model reduction techniques can be regarded as a two step procedure as follows. First the state dimension is reduced through a projection, and then the vector field of the reduced state is approximated for improved computational efficiency. Neither of the above steps has been thoroughly studied. The second part of this thesis presents a solution to a particular problem in the second step above, namely, finding an upper bound of the system input/output error due to nonlinear vector field approximation. The system error upper bounding problem is formulated as an L2 gain upper bounding problem of some feedback interconnection, to which the small gain theorem can be applied. A numerical procedure based on integral quadratic constraint analysis and a theoretical statement based on L2 gain analysis are given to provide the solution to the error bounding problem. The numerical procedure is applied to analyze the vector field approximation quality of a transmission line with diodes.
The application of Volterra series to the reduced modeling of nonlinear systems is hampered by the rapidly increasing computation cost with respect to the degrees of the polynomials used. On the other hand, while it is less general than the Volterra series model, the Wiener-Hammerstein model has been shown to be useful for accurate and compact modeling of certain nonlinear sub-circuits such as power amplifiers. The third part of the thesis presents a convex optimization solution technique to the reduction/identification of the Wiener-Hammerstein system. The identification problem is formulated as a non-convex quadratic program, which is solved by a semidefinite programming relaxation technique. It is demonstrated in the thesis that the formulation is robust with respect to noisy measurement, and the relaxation technique is oftentimes sufficient to provide good solutions. Simple examples are provided to demonstrate the use of the proposed identification algorithm.
by Kin Cheong Sou.
Ph.D.
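The semidefinite programming relaxation mentioned in the last part of the abstract can be illustrated on a generic non-convex quadratic program. The sketch below is the standard Shor relaxation on small random data, not the thesis's Wiener-Hammerstein identification formulation.

    import numpy as np
    import cvxpy as cp

    # Illustrative Shor (semidefinite) relaxation of a small non-convex quadratic program
    # with box-type constraints; the data are random and unrelated to the thesis's
    # Wiener-Hammerstein identification problem.
    rng = np.random.default_rng(0)
    n = 4
    Q = rng.standard_normal((n, n))
    Q = (Q + Q.T) / 2                                     # generally indefinite
    c = rng.standard_normal(n)

    # Non-convex QP: minimize x'Qx + c'x subject to x_i^2 <= 1.  Relaxation: introduce
    # X standing in for xx' and require [[X, x], [x', 1]] to be positive semidefinite.
    X = cp.Variable((n, n), symmetric=True)
    x = cp.Variable(n)
    M = cp.bmat([[X, cp.reshape(x, (n, 1))],
                 [cp.reshape(x, (1, n)), np.ones((1, 1))]])
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), [M >> 0, cp.diag(X) <= 1])
    prob.solve()
    print("lower bound from the relaxation:", prob.value)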
14

Modiba, Jacob Mantjitji. "An overview of sparse convex optimization." Diss., University of Pretoria, 2018. http://hdl.handle.net/2263/64352.

Full text
Abstract:
Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. Optimization seeks values of a variable that lead to an optimal value of the function being optimized. Suppose we have a system of equations with more unknowns than equations; such a system has infinitely many solutions. If one has prior knowledge that the solution is sparse, the problem can be treated as an optimization problem. In this mini-dissertation we discuss convex algorithms for finding sparse solutions; convex algorithms are chosen because they are relatively easy to implement. The classes of methods we discuss are convex relaxation, greedy algorithms and iterative thresholding. We then compare these algorithms by applying them to a Sudoku problem.
Dissertation (MSc)--University of Pretoria, 2018.
CAIR and STATOMET
Statistics
MSc
Unrestricted
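Of the three algorithm families mentioned above, iterative thresholding is the simplest to write down. The following is a minimal iterative soft-thresholding (ISTA) sketch for the l1-regularized least-squares problem on synthetic data; the step size, regularization weight and iteration count are illustrative choices.

    import numpy as np

    # Illustrative iterative soft-thresholding (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1
    # on synthetic data; the step size, weight lam and iteration count are assumed values.
    rng = np.random.default_rng(0)
    m, n, k = 80, 200, 6
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(m)

    lam = 0.05
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, with L the gradient Lipschitz constant
    x = np.zeros(n)
    for _ in range(500):
        z = x - step * (A.T @ (A @ x - b))            # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding step
    print("nonzeros found:", np.flatnonzero(np.abs(x) > 1e-3))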
15

Lewis, Naama. "ITEM RESPONSE MODELS AND CONVEX OPTIMIZATION." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/dissertations/1782.

Full text
Abstract:
Item Response Theory (IRT) models, such as the one-parameter, two-parameter, or normal ogive models, have been discussed for many years. These models represent a rich area of investigation due to their complexity as well as the large amount of data collected in relation to model parameter estimation. Here we propose a new way of looking at IRT models using I-projections and duality. We use convex optimization methods to derive these models. The Kullback-Leibler divergence is used as a metric and specific constraints are proposed for the various models. With this approach, the dual problem is shown to be much easier to solve than the primal problem. In particular, when there are many constraints, we propose the application of a projection algorithm for solving these types of problems. We also consider re-framing the problem and utilizing a decomposition algorithm to solve for parameters as well. Both of these methods are compared to the Rasch and 2-parameter logistic models using established computer software, where estimation of model parameters is done under a maximum likelihood estimation framework. We also compare the appropriateness of these techniques on multidimensional item response data sets and propose new models with the use of I-projections.
16

Fontaine, Xavier. "Sequential learning and stochastic optimization of convex functions." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM024.

Full text
Abstract:
In this thesis we study several machine learning problems that are all linked to the minimization of a noisy function, which will often be convex. Because of their numerous applications, we focus on sequential learning problems, which consist in processing data "on the fly", or online. The first part of this thesis is devoted to the study of three different sequential learning problems, in all of which we face the classical "exploration vs. exploitation" trade-off. In each of these problems an agent must take decisions in order to maximize a reward or to estimate a parameter in an uncertain environment, in the sense that the rewards or the outcomes of the different actions are unknown and noisy. We study all of these problems using techniques from stochastic convex optimization, and we propose and analyze algorithms to solve them. In the second part of this thesis we focus on the analysis of the stochastic gradient descent algorithm, which is probably one of the most widely used stochastic optimization algorithms in machine learning. We give a complete analysis of it in the convex case, as well as in some non-convex situations, by studying the associated continuous-time model, and we obtain new optimal convergence results.
In this thesis we study several machine learning problems that are all linked with the minimization of a noisy function, which will often be convex.Inspired by real-life applications we focus on sequential learning problems which consist in treating the data ``on the fly'', or in an online manner.The first part of this thesis is thus devoted to the study of three different sequential learning problems which all face the classical ``exploration vs. exploitation'' trade-off.Each of these problems consists in a situation where a decision maker has to take actions in order to maximize a reward or to evaluate a parameter under uncertainty, meaning that the rewards or the feedback of the possible actions are unknown and noisy.We demonstrate that all of these problems can be studied under the scope of stochastic convex optimization, and we propose and analyze algorithms to solve them.In the second part of this thesis we focus on the analysis of the Stochastic Gradient Descent algorithm, which is likely one of the most used stochastic optimization algorithms in machine learning.We provide an exhaustive analysis in the convex setting and in some non-convex situations by studying the associated continuous-time model, and obtain new optimal convergence results
17

Inacio, Helder. "Convex relaxations for cubic polynomial problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47563.

Full text
Abstract:
This dissertation addresses optimization of cubic polynomial problems. Heuristics for finding good quality feasible solutions and for improving on existing feasible solutions for a complex industrial problem, involving cubic and pooling constraints among other complicating constraints, have been developed. The heuristics for finding feasible solutions are developed based on linear approximations to the original problem that enforce a subset of the original problem constraints while it tries to provide good approximations for the remaining constraints, obtaining in this way nearly feasible solutions. The performance of these heuristics has been tested by using industrial case studies that are of appropriate size, scale and structure. Furthermore, the quality of the solutions can be quantified by comparing the obtained feasible solutions against upper bounds on the value of the problem. In order to obtain these upper bounds we have extended efficient existing techniques for bilinear problems for this class of cubic polynomial problems. Despite the efficiency of the upper bound techniques good upper bounds for the industrial case problem could not be computed efficiently within a reasonable time limit (one hour). We have applied the same techniques to subproblems with the same structure but about one fifth of the size and in this case, on average, the gap between the obtained solutions and the computed upper bounds is about 3%. In the remaining part of the thesis we look at global optimization of cubic polynomial problems with non-negative bounded variables via branch and bound. A theoretical study on the properties of convex underestimators for non-linear terms which are quadratic in one of the variables and linear on the other variable is presented. A new underestimator is introduced for this class of terms. The final part of the thesis describes the numerical testing of the previously mentioned underestimators together with approximations obtained by considering lifted approximations of the convex hull of the (x x y) terms. Two sets of instances are generated for this test and the descriptions of the procedures to generate the instances are detailed here. By analyzing the numerical results we can conclude that our proposed underestimator has the best behavior in the family of instances where the only non-linear terms present are of the form (x x y). Problems originating from least squares are much harder to solve than the other class of problems. In this class of problems the efficiency of linear programming solvers plays a big role and on average the methods that use these solvers perform better than the others.
18

Guzman, Paredes Cristobal. "Information, complexity and structure in convex optimization." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53577.

Full text
Abstract:
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. Classical theory of oracle complexity, first proposed by Nemirovski and Yudin in 1983, successfully established the worst-case behavior of methods based on local oracles (a generalization of first-order oracle for smooth functions) for nonsmooth convex minimization, both in the large-scale and low-scale regimes; and the complexity of approximately solving linear systems of equations (equivalent to convex quadratic minimization) over Euclidean balls, under a matrix-vector multiplication oracle. Our work extends the applicability of lower bounds in two directions: Worst-Case Complexity of Large-Scale Smooth Convex Optimization: We generalize lower bounds on the complexity of first-order methods for convex optimization, considering classes of convex functions with Hölder continuous gradients. Our technique relies on the existence of a smoothing kernel, which defines a smooth approximation for any convex function via infimal convolution. As a consequence, we derive lower bounds for \ell_p/\ell_q-setups, where 1\leq p,q\leq \infty, and extend to its matrix analogue: Smooth convex minimization (with respect to the Schatten q-norm) over matrices with bounded Schatten p-norm. The major consequences of this result are the near-optimality of the Conditional Gradient method over box-type domains (p=q=\infty), and the near-optimality of Nesterov's accelerated method over the cross-polytope (p=q=1). Distributional Complexity of Nonsmooth Convex Optimization: In this work, we prove average-case lower bounds for the complexity of nonsmooth convex optimization. We introduce an information-theoretic method to analyze the complexity of oracle-based algorithms solving a random instance, based on the reconstruction principle. Our technique shows that all known lower bounds for nonsmooth convex optimization can be derived by an emulation procedure from a common String-Guessing Problem, which is combinatorial in nature. The derived average-case lower bounds extend to hold with high probability, and for algorithms with bounded probability error, via Fano's inequality. Finally, from the proposed technique we establish the equivalence (up to constant factors) of distributional, randomized, and worst-case complexity for black-box convex optimization. In particular, there is no gain from randomization in this setup.
19

Hendrich, Christopher. "Proximal Splitting Methods in Nonsmooth Convex Optimization." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.

Full text
Abstract:
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we are also making use of monotone operator theory as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notations and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach affects the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they are providing a complete splitting philosophy in that the resolvents, which arise in the iterative process, are only taken separately from each maximally monotone operator occurring in the problem description. We firstly analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy by reformulating the corresponding inclusion problem reasonably such that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with different numerical experiments where we intend to compare our methods against algorithms from the literature. The problems which arise in this part are manifold and they reflect the importance of this field of research as convex optimization problems appear in lots of applications of interest.
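As a small, concrete instance of the splitting idea discussed above, the sketch below runs plain Douglas-Rachford splitting on the sum of a smooth least-squares term and an l1 term with synthetic data; the thesis itself develops considerably more general primal-dual and smoothing variants for monotone inclusions.

    import numpy as np

    # Illustrative Douglas-Rachford splitting for min f(x) + g(x) with f(x) = 0.5*||Ax - b||^2
    # and g(x) = lam*||x||_1; data and parameters are assumed, and this plain form omits the
    # primal-dual and smoothing refinements developed in the thesis.
    rng = np.random.default_rng(0)
    m, n = 60, 120
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    b = A @ np.r_[np.ones(5), np.zeros(n - 5)] + 0.01 * rng.standard_normal(m)
    lam, t = 0.05, 1.0

    M = np.linalg.inv(np.eye(n) + t * (A.T @ A))      # prox of t*f is a linear solve
    prox_f = lambda v: M @ (v + t * (A.T @ b))
    prox_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

    z = np.zeros(n)
    for _ in range(300):
        x = prox_f(z)
        y = prox_g(2 * x - z)
        z = z + y - x                                 # Douglas-Rachford update
    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())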
20

Soheili, Azad Negar. "Elementary Algorithms for Solving Convex Optimization Problems." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/353.

Full text
Abstract:
The rapid growth in data availability has led to modern large scale convex optimization problems that pose new practical and theoretical challenges. Examples include classification problems such as customer segmentation in retail and credit scoring in insurance. Classical optimization and machine learning techniques are typically inadequate to solve these large optimization problems because of high memory requirements and slow convergence guarantees. This thesis develops two research threads to address these issues. The first involves improving the effectiveness of a class of algorithms with simple computational steps for solving convex optimization problems arising in machine learning, data mining, and decision making. The second involves the refinement of conditioning and geometry of convex optimization problems via preconditioning. I elaborate on these two threads below. 1. The main theme of this thesis focuses on the class of elementary algorithms for solving convex optimization problems. These algorithms only involve simple operations such as matrix-vector multiplications, vector updates, and separation oracles. This simplicity makes the computational cost per iteration and memory requirement of these algorithms low. Thus, elementary algorithms are promising for solving emerging big data applications in areas such as classification, pattern recognition, and online learning. A major hurdle that needs to be overcome is the slow convergence of these algorithms. We develop new elementary algorithms that are enhanced via smoothing and dilation techniques. These enhancements yield algorithms that retain the attractive simplicity of the elementary algorithms while achieving substantially improved convergence rates. Thus, these enhanced algorithms are better suited for solving modern large scale convex optimization problems. 2. A significant difficulty when solving large convex optimization problems is poor conditioning caused by the existence of flat and nearly flat geometries. This thesis shows that a combination of two simple preprocessing steps generally improves the geometric structure of problem instances. We improve instances' geometric structure by reconciling the properties of three different but related notions of conditioning. More precisely, when one of these measures is large, in which case the problem instance is certainly poorly conditioned, our procedure reduces it without making the remaining measures worse. Our preconditioning procedures can potentially improve the convergence properties of a large class of iterative methods without changing their ultimate solution.
21

Grigas, Paul (Paul Edward). "Methods for convex optimization and statistical learning." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106683.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 219-225).
We present several contributions at the interface of first-order methods for convex optimization and problems in statistical machine learning. In the first part of this thesis, we present new results for the Frank-Wolfe method, with a particular focus on: (i) novel computational guarantees that apply for any step-size sequence, (ii) a novel adjustment to the basic algorithm to better account for warm-start information, and (iii) extensions of the computational guarantees that hold in the presence of approximate subproblem and/or gradient computations. In the second part of the thesis, we present a unifying framework for interpreting "greedy" first-order methods -- namely Frank-Wolfe and greedy coordinate descent -- as instantiations of the dual averaging method of Nesterov, and we discuss the implications thereof. In the third part of the thesis, we present an extension of the Frank-Wolfe method that is designed to induce near-optimal low-rank solutions for nuclear norm regularized matrix completion and, for more general problems, induces near-optimal "well-structured" solutions. We establish computational guarantees that trade off efficiency in computing near-optimal solutions with upper bounds on the rank of iterates. We then present extensive computational results that show significant computational advantages over existing related approaches, in terms of delivering low rank and low run-time to compute a target optimality gap. In the fourth part of the thesis, we analyze boosting algorithms in linear regression from the perspective of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression can be viewed as subgradient descent to minimize the maximum absolute correlation between features and residuals. We also propose a slightly modified boosting algorithm that yields an algorithm for the Lasso, and that computes the Lasso path. Our perspective leads to first-ever comprehensive computational guarantees for all of these boosting algorithms, which provide a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm, for any dataset. In the fifth and final part of the thesis, we present several related results in the contexts of boosting algorithms for logistic regression and the AdaBoost algorithm.
by Paul Grigas.
Ph. D.
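A minimal Frank-Wolfe iteration may help fix ideas: it minimizes a least-squares objective over an l1 ball, where the linear minimization step has a closed-form solution. The data, radius and classical step-size rule below are illustrative choices, not specifics from the thesis.

    import numpy as np

    # Illustrative Frank-Wolfe iteration for min 0.5*||Ax - b||^2 over the l1 ball of radius tau;
    # the data, radius and classical 2/(k + 2) step size are assumed, illustrative choices.
    rng = np.random.default_rng(0)
    m, n, tau = 60, 150, 3.0
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    b = rng.standard_normal(m)

    x = np.zeros(n)
    for k in range(200):
        grad = A.T @ (A @ x - b)
        i = np.argmax(np.abs(grad))                   # linear minimization oracle over the l1 ball
        s = np.zeros(n)
        s[i] = -tau * np.sign(grad[i])
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * s
    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)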
22

Altschuler, Jason (Jason M. ). "Greed, hedging, and acceleration in convex optimization." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120409.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 153-156).
This thesis revisits the well-studied and practically motivated problem of minimizing a strongly convex, smooth function with first-order information. The first main message of the thesis is that, surprisingly, algorithms which are individually suboptimal can be combined to achieve accelerated convergence rates. This phenomenon can be intuitively understood as "hedging" between safe strategies (e.g. slowly converging algorithms) and aggressive strategies (e.g. divergent algorithms) since bad cases for the former are good cases for the latter, and vice versa. Concretely, we implement the optimal hedging by simply running Gradient Descent (GD) with prudently chosen stepsizes. This result goes against the conventional wisdom that acceleration is impossible without momentum. The second main message is a universality result for quadratic optimization. We show that, roughly speaking, "most" Krylov-subspace algorithms are asymptotically optimal (in the worst-case) and "most" quadratic functions are asymptotically worst-case functions (for all algorithms). From an algorithmic perspective, this goes against the conventional wisdom that accelerated algorithms require extremely careful parameter tuning. From a lower-bound perspective, this goes against the conventional wisdom that there are relatively few "worst functions in the world" and they have lots of structure. It also goes against the conventional wisdom that a quadratic function is easier to optimize when the initialization error is more concentrated on certain eigenspaces - counterintuitively, we show that so long as this concentration is not "pathologically" extreme, this only leads to faster convergence in the beginning iterations and is irrelevant asymptotically. Part I of the thesis shows the algorithmic side of this universality by leveraging tools from potential theory and harmonic analysis. The main result is a characterization of non-adaptive randomized Krylov-subspace algorithms which asymptotically achieve the so-called "accelerated rate" in the worst case. As a special case, this recovers the known fact that GD accelerates when inverse stepsizes are i.i.d. from the Arcsine distribution. This distribution has a remarkable "equalizing" property: every quadratic function is equally easy to optimize. We interpret this as "optimal hedging" since there is no worst-case function. Leveraging the equalizing property also provides other new insights including asymptotic isotropy of the iterates around the optimum, and uniform convergence guarantees for extending our analysis to l2. Part II of the thesis shows the lower-bound side of this universality by connecting quadratic optimization to the universality of orthogonal polynomials. We also characterize, for every finite number of iterations n, all worst-case quadratic functions for n iterations of any Krylov-subspace algorithm. Previously no tight constructions were known. (Note the classical construction of [Nemirovskii and Yudin, 1983] is only tight asymptotically.) As a corollary, this result also proves that randomness does not help Krylov-subspace algorithms. Combining the results in Parts I and II uncovers a duality between optimal Krylov-subspace algorithms and worst-case quadratic functions. It also shows new close connections between quadratic optimization, orthogonal polynomials, Gaussian quadrature, Jacobi operators, and their spectral measures. Part III of the thesis extends the algorithmic techniques in Part I to convex optimization. 
We first show that running the aforementioned random GD algorithm accelerates on separable convex functions. This is the first convergence rate that exactly matches the classical quadratic-optimization lower bound of [Nemirovskii and Yudin, 1983] on any class of convex functions richer than quadratics. This provides partial evidence suggesting that convex optimization might be no harder than quadratic optimization. However, these techniques (provably) do not extend to general convex functions. This is roughly because they do not require all observed data to be consistent with a single valid function - we call this "stitching." We turn to a semidefinite programming formulation of worst-case rate from [Taylor et al., 2017] that ensures stitching. Using this we compute the optimal GD stepsize schedules for 1, 2, and 3 iterations, and show that they partially accelerate on general convex functions. These optimal schedules for convex optimization are remarkably different from the optimal schedules for quadratic optimization. The rate improves as the number of iterations increases, but the algebraic systems become increasingly complicated to solve and the general case eludes us.
by Jason Altschuler.
S.M.
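The random step-size idea mentioned above can be sketched directly: run gradient descent on a synthetic quadratic with inverse step sizes drawn i.i.d. from the Arcsine distribution on [mu, L], here generated as a rescaled Beta(1/2, 1/2); all problem data below are assumptions made for illustration.

    import numpy as np

    # Illustrative sketch of the random step-size idea described above: gradient descent on a
    # synthetic quadratic 0.5*x'Hx whose inverse step sizes are drawn i.i.d. from the Arcsine
    # distribution on [mu, L] (a rescaled Beta(1/2, 1/2)); all problem data are assumptions.
    rng = np.random.default_rng(0)
    n, mu, L = 50, 1.0, 100.0
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    H = Q @ np.diag(np.linspace(mu, L, n)) @ Q.T          # spectrum contained in [mu, L]

    x = rng.standard_normal(n)                            # minimizer of 0.5*x'Hx is the origin
    for _ in range(300):
        inv_step = mu + (L - mu) * rng.beta(0.5, 0.5)     # Arcsine-distributed inverse step size
        x = x - (1.0 / inv_step) * (H @ x)
    print("distance to the optimum:", np.linalg.norm(x))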
23

Lee, Yin Tat. "Faster algorithms for convex and combinatorial optimization." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104467.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 443-458).
In this thesis, we revisit three algorithmic techniques: sparsification, cutting and collapsing. We use them to obtain the following results on convex and combinatorial optimization: --Linear Programming: We obtain the first improvement to the running time for linear programming in 25 years. The convergence rate of this randomized algorithm nearly matches the universal barrier for interior point methods. As a corollary, we obtain the first ... time randomized algorithm for solving the maximum flow problem on directed graphs with m edges and n vertices. This improves upon the previous fastest running time of achieved over 15 years ago by Goldberg and Rao. --Maximum Flow Problem: We obtain one of the first almost-linear time randomized algorithms for approximating the maximum flow in undirected graphs. As a corollary, we improve the running time of a wide range of algorithms that use the computation of maximum flows as a subroutine. --Non-Smooth Convex Optimization: We obtain the first nearly-cubic time randomized algorithm for convex problems under the black box model. As a corollary, this implies a polynomially faster algorithm for three fundamental problems in computer science: submodular function minimization, matroid intersection, and semidefinite programming. --Graph Sparsification: We obtain the first almost-linear time randomized algorithm for spectrally approximating any graph by one with just a linear number of edges. This sparse graph approximately preserves all cut values of the original graph and is useful for solving a wide range of combinatorial problems. This algorithm improves all previous linear-sized constructions, which required at least quadratic time. --Numerical Linear Algebra: Multigrid is an efficient method for solving large-scale linear systems which arise from graphs in low dimensions. It has been used extensively for 30 years in scientific computing. Unlike the previous approaches that make assumptions on the graphs, we give the first generalization of the multigrid that provably solves Laplacian systems of any graphs in nearly-linear expected time.
by Yin Tat Lee.
Ph. D.
24

Gupta, Swati Ph D. Massachusetts Institute of Technology. "Combinatorial structures in online and convex optimization." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112014.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 157-163).
Motivated by bottlenecks in algorithms across online and convex optimization, we consider three fundamental questions over combinatorial polytopes. First, we study the minimization of separable strictly convex functions over polyhedra. This problem is motivated by first-order optimization methods whose bottleneck relies on the minimization of an (often) separable, convex metric, known as the Bregman divergence. We provide a conceptually simple algorithm, Inc-Fix, in the case of submodular base polyhedra. For cardinality-based submodular polytopes, we show that Inc-Fix can be sped up to be the state-of-the-art method for minimizing uniform divergences. We show that the running time of Inc-Fix is independent of the convexity parameters of the objective function. The second question is concerned with the complexity of the parametric line search problem in the extended submodular polytope P: starting from a point inside P, how far can one move along a given direction while maintaining feasibility. This problem arises as a bottleneck in many algorithmic applications like the above-mentioned Inc-Fix algorithm and variants of the Frank-Wolfe method. One of the most natural approaches is to use the discrete Newton's method; however, no upper bound on the number of iterations for this method was known. We show a quadratic bound resulting in a factor of n^6 reduction in the worst-case running time from the previous state-of-the-art. The analysis leads to interesting extremal questions on set systems and submodular functions. Next, we develop a general framework to simulate the well-known multiplicative weights update algorithm for online linear optimization over combinatorial strategies U in time polynomial in log |U|, using efficient approximate general counting oracles. We further show that efficient counting over the vertex set of any 0/1 polytope P implies efficient convex minimization over P. As a byproduct of this result, we can approximately decompose any point in a 0/1 polytope into a product distribution over its vertices. Finally, we compare the applicability and limitations of the above results in the context of finding Nash-equilibria in combinatorial two-player zero-sum games with bilinear loss functions. We prove structural results that can be used to find certain Nash-equilibria with a single separable convex minimization.
by Swati Gupta.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Xinxin. "Some operator splitting methods for convex optimization." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/43.

Full text
Abstract:
Many applications arising in various areas can be well modeled as convex optimization models with separable objective functions and linear coupling constraints. Such areas include signal processing, image processing, statistical learning, wireless networks, etc. If these well-structured convex models are treated as generic models and their separable structures are ignored in algorithmic design, then it is hard to effectively exploit the favorable properties that the objective functions possibly have. Therefore, some operator splitting methods have regained much attention from different areas for solving convex optimization models with separable structures in different contexts. In this thesis, some new operator splitting methods are proposed for convex optimization models with separable structures. We first propose combining the alternating direction method of multipliers with the logarithmic-quadratic proximal regularization for a separable monotone variational inequality with positive orthant constraints and propose a new operator splitting method. Then, we propose a proximal version of the strictly contractive Peaceman-Rachford splitting method, which was recently proposed for the convex minimization model with linear constraints and an objective function in the form of the sum of two functions without coupled variables. After that, an operator splitting method suitable for parallel computation is proposed for a convex model whose objective function is the sum of three functions. For the new algorithms, we establish their convergence and estimate their convergence rates measured by the iteration complexity. We also apply the new algorithms to solve some applications arising in the image processing area; and report some preliminary numerical results. Last, we will discuss a particular video processing application and propose a series of new models for background extraction in different scenarios; to which some of the new methods are applicable. Keywords: Convex optimization, Operator splitting method, Alternating direction method of multipliers, Peaceman-Rachford splitting method, Image processing
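
As a concrete, if standard, instance of operator splitting for a sum of two functions, the sketch below runs ADMM on the lasso problem min ½‖Ax−b‖² + λ‖z‖₁ subject to x = z. It is a generic textbook illustration with made-up data and parameters, not one of the new splitting methods proposed in the thesis, and it assumes numpy.

    import numpy as np

    def admm_lasso(A, b, lam, rho=1.0, iters=200):
        """Alternating direction method of multipliers for the lasso."""
        m, n = A.shape
        x = z = u = np.zeros(n)
        M = A.T @ A + rho * np.eye(n)     # formed once, reused in every x-update
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(M, Atb + rho * (z - u))                      # smooth block
            z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
            u = u + x - z                                                    # scaled dual update
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    print(np.nonzero(admm_lasso(A, b, lam=0.5))[0])   # support of the recovered vector
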
APA, Harvard, Vancouver, ISO, and other styles
26

Lan, Guanghui. "Convex optimization under inexact first-order information." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29732.

Full text
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Arkadi Nemirovski; Committee Co-Chair: Alexander Shapiro; Committee Co-Chair: Renato D. C. Monteiro; Committee Member: Anatoli Jouditski; Committee Member: Shabbir Ahmed. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
27

Kůdela, Jakub. "Advanced Decomposition Methods in Stochastic Convex Optimization." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-403864.

Full text
Abstract:
When working with stochastic programming problems, we frequently encounter optimization problems that are too large to be handled by routine mathematical programming methods. In some cases, however, these problems have a suitable structure that permits the use of specialized decomposition methods for solving large-scale optimization problems. This thesis deals with two classes of stochastic programming problems that possess such special structure, namely two-stage stochastic problems and chance-constrained problems, and with advanced decomposition methods that can be used to solve problems in these two classes. We describe a new method for constructing warm-start cuts for the Generalized Benders Decomposition, which is used for solving two-stage stochastic problems. For the class of chance-constrained problems, we present an original decomposition method, which we have named the Pool & Discard algorithm. The usefulness of the described decomposition methods is demonstrated on several examples and engineering applications.
APA, Harvard, Vancouver, ISO, and other styles
28

Merckx, Keno. "Optimization and Realizability Problems for Convex Geometries." Doctoral thesis, Universite Libre de Bruxelles, 2019. https://dipot.ulb.ac.be/dspace/bitstream/2013/288673/4/TOC.pdf.

Full text
Abstract:
Convex geometries are combinatorial structures; they capture in an abstract way the essential features of convexity in Euclidean space, graphs or posets for instance. A convex geometry consists of a finite ground set plus a collection of subsets, called the convex sets and satisfying certain axioms. In this work, we study two natural problems on convex geometries. First, we consider the maximum-weight convex set problem. After proving a hardness result for the problem, we study a special family of convex geometries built on split graphs. We show that the convex sets of such a convex geometry relate to poset convex geometries constructed from the split graph. We discuss a few consequences, obtaining a simple polynomial-time algorithm to solve the problem on split graphs. Next, we generalize those results and design the first polynomial-time algorithm for the maximum-weight convex set problem in chordal graphs. Second, we consider the realizability problem. We show that deciding if a given convex geometry (encoded by its copoints) results from a point set in the plane is ∃R-hard. We complete our text with a brief discussion of potential further work.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
29

Zhang, Hongyi Ph D. Massachusetts Institute of Technology. "Topics in non-convex optimization and learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121830.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 165-186).
Non-convex optimization and learning play an important role in data science and machine learning, yet so far they still elude our understanding in many aspects. In this thesis, I study two important aspects of non-convex optimization and learning: Riemannian optimization and deep neural networks. In the first part, I develop iteration complexity analysis for Riemannian optimization, i.e., optimization problems defined on Riemannian manifolds. Through bounding the distortion introduced by the metric curvature, iteration complexity of Riemannian (stochastic) gradient descent methods is derived. I also show that some fast first-order methods in Euclidean space, such as Nesterov's accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG), have Riemannian counterparts that are also fast under certain conditions. In the second part, I challenge two common practices in deep learning, namely empirical risk minimization (ERM) and normalization. Specifically, I show (1) training on convex combinations of samples improves model robustness and generalization, and (2) a good initialization is sufficient for training deep residual networks without normalization. The method in (1), called mixup, is motivated by a data-dependent Lipschitzness regularization of the network. The method in (2), called ZeroInit, makes the network update scale invariant to its depth at initialization.
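
The mixup idea referred to in the abstract, training on convex combinations of pairs of samples and their labels, is compact enough to state in code. Below is a minimal framework-agnostic sketch assuming numpy and the usual Beta(α, α) mixing coefficient; the batch and the α value are illustrative only.

    import numpy as np

    def mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng(0)):
        """Mix a batch with a random permutation of itself.

        x: inputs of shape (batch, ...); y: one-hot labels of shape (batch, classes).
        """
        lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
        perm = rng.permutation(len(x))     # partner example for each sample
        return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

    x = np.arange(8.0).reshape(4, 2)       # toy batch of 4 samples
    y = np.eye(3)[[0, 1, 2, 0]]            # one-hot labels over 3 classes
    x_mix, y_mix = mixup_batch(x, y)
    print(x_mix, y_mix, sep="\n")
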
by Hongyi Zhang.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Jieqiu. "Convex relaxations in nonconvex and applied optimization." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/654.

Full text
Abstract:
Traditionally, linear programming (LP) has been used to construct convex relaxations in the context of branch and bound for determining global optimal solutions to nonconvex optimization problems. As second-order cone programming (SOCP) and semidefinite programming (SDP) become better understood by optimization researchers, they become alternative choices for obtaining convex relaxations and producing bounds on the optimal values. In this thesis, we study the use of these convex optimization tools in constructing strong relaxations for several nonconvex problems, including 0-1 integer programming, nonconvex box-constrained quadratic programming (BoxQP), and general quadratic programming (QP). We first study an SOCP relaxation for 0-1 integer programs and a sequential relaxation technique based on this SOCP relaxation. We present desirable properties of this SOCP relaxation; for example, it cuts off all fractional extreme points of the regular LP relaxation. We further prove that the sequential relaxation technique generates the convex hull of 0-1 solutions asymptotically. We next explore nonconvex quadratic programming. We propose an SDP relaxation for BoxQP based on relaxing the first- and second-order KKT conditions, where the difficulty and contribution lie in relaxing the second-order KKT condition. We show that, although the relaxation we obtain this way is equivalent to an existing SDP relaxation at the root node, it is significantly stronger on the children nodes in a branch-and-bound setting. Recent advances in optimization theory allow one to express QP as optimizing a linear function over the convex cone of completely positive matrices subject to linear constraints, referred to as completely positive programming (CPP). CPP naturally admits strong semidefinite relaxations. We incorporate the first-order KKT conditions of QP into the constraints of QP, and then pose it in the form of CPP to obtain a strong relaxation. We employ the resulting SDP relaxation inside a finite branch-and-bound algorithm to solve the QP. Comparison of our algorithm with commercial global solvers shows potential as well as room for improvement. The remainder is devoted to new techniques for solving a class of large-scale linear programming problems. First-order methods, although not as fast as second-order methods, are extremely memory efficient. We develop a first-order method based on Nesterov's smoothing technique and demonstrate the effectiveness of our method on two machine learning problems.
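
To make the SDP-relaxation idea concrete, here is a minimal Shor-type relaxation of a box-constrained QP written with cvxpy (assumed installed, together with a conic solver such as SCS). This is the generic lifting used for illustration, not the strengthened KKT-based relaxation developed in the thesis, and the problem data are random.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n = 5
    Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # indefinite in general
    c = rng.standard_normal(n)

    # Lift (1, x)(1, x)^T into a PSD matrix Y, with x = Y[0, 1:] and X = Y[1:, 1:].
    Y = cp.Variable((n + 1, n + 1), symmetric=True)
    x, X = Y[0, 1:], Y[1:, 1:]
    constraints = [Y >> 0, Y[0, 0] == 1,
                   x >= 0, x <= 1,
                   cp.diag(X) <= x]        # valid because x_i^2 <= x_i on [0, 1]
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), constraints)
    prob.solve(solver=cp.SCS)
    print("lower bound on the BoxQP optimum:", prob.value)
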
APA, Harvard, Vancouver, ISO, and other styles
31

Rätsch, Gunnar. "Robust boosting via convex optimization theory and applications /." [S.l.] : [s.n.], 2001. http://pub.ub.uni-potsdam.de/2002/0008/raetsch.ps.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Liberti, Leo Sergio. "Reformulation and convex relaxation techniques for global optimization." Thesis, Imperial College London, 2004. http://hdl.handle.net/10044/1/11807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lin, Tim Tai-Yi. "Primary estimation with sparsity-promoting bi-convex optimization." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/55900.

Full text
Abstract:
This thesis establishes a novel inversion methodology for the surface-related primaries from a given recorded seismic wavefield, called the Robust Estimation of Primaries by Sparse Inversion (Robust EPSI, or REPSI). Surface-related multiples are a major source of coherent noise in seismic data, and inferring fine geological structures from active-source seismic recordings typically first necessitates their removal or mitigation. For this task, current practice calls for data-driven approaches which produce only approximate multiple models that must be non-linearly subtracted from the data, often distorting weak primary events in the process. A recently proposed method called Estimation of Primaries by Sparse Inversion (EPSI) avoids this adaptive subtraction by directly inverting for a discrete representation of the underlying multiple-free subsurface impulse response as a set of band-limited spikes. However, in its original form, the EPSI algorithm exhibits a few notable shortcomings that impede adoption. Although it was shown that the correct impulse response can be obtained through a sparsest-solution criterion, the current EPSI algorithm is not designed to take advantage of this finding, but instead approximates a sparse solution in an ad-hoc manner that requires practitioners to decide on a multitude of inversion parameters. The Robust EPSI method introduced in this thesis reformulates the original EPSI problem as a formal bi-convex optimization problem that makes obtaining the sparsest solution an explicit goal, while also reliably admitting satisfactory solutions using contemporary self-tuning gradient methods commonly seen in large-scale machine learning communities. I show that the Robust EPSI algorithm is able to operate successfully on a variety of datasets with minimal user input, while also producing a more accurate model of the subsurface impulse response when compared to the original algorithm. Furthermore, this thesis makes several contributions that improve the capability and practicality of EPSI: a novel scattering-based multiple prediction model that allows Robust EPSI to deal with wider near-offset receiver gaps than previously demonstrated for EPSI, as well as a multigrid-inspired continuation strategy that significantly reduces the computation time needed to solve EPSI-type problems. These additions are enabled by and built upon the formalism of the Robust EPSI as developed in this thesis.
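
Bi-convex problems of the kind described above are commonly attacked by alternating over the two blocks, each step being a convex subproblem. The toy sketch below applies that pattern to a sparse rank-one factorization; it illustrates only the alternating bi-convex structure, not the Robust EPSI objective itself, and assumes numpy with made-up data.

    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def biconvex_rank1(M, lam=0.1, iters=100):
        """Alternate over the two convex blocks of min_{g,q} ||M - g q^T||_F^2 + lam*||g||_1."""
        q = np.linalg.svd(M, full_matrices=False)[2][0]   # warm start at the top right singular vector
        for _ in range(iters):
            g = soft(M @ q, lam / 2) / (q @ q)            # g-step: per-entry soft-thresholding
            q = M.T @ g / (g @ g)                         # q-step: ordinary least squares
        return g, q

    rng = np.random.default_rng(0)
    g0 = np.zeros(30); g0[[3, 7]] = [2.0, -1.0]
    M = np.outer(g0, rng.standard_normal(20)) + 0.01 * rng.standard_normal((30, 20))
    g, q = biconvex_rank1(M)
    print(np.nonzero(np.abs(g) > 1e-3)[0])                # sparse support of the g factor
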
Science, Faculty of
Earth, Ocean and Atmospheric Sciences, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
34

Noel, Adam Josiah Gerald. "Convex sensing-reporting optimization for cooperative spectrum sensing." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/36838.

Full text
Abstract:
In this thesis, we consider the cooperative spectrum sensing problem in cognitive radio with energy detection. Secondary users with non-identical, independent sensing channels make 1-bit sensing decisions and report their decisions to the secondary base station over orthogonal noisy fading channels. The base station has knowledge of the reporting channel coefficients and acts as a fusion center by combining the decisions with an M-out-of-K rule. We allow the secondary users to trade sensing time slots for additional reporting time slots to increase the signal-to-noise ratios of the reporting channels. We derive the corresponding false alarm and missed detection probabilities, which are functions of the secondary sensor decision thresholds and the durations for sensing and reporting. Furthermore, we bound these probabilities and impose a practical convex region to enable the application of convex optimization for minimization of the false alarm probability for a target missed detection probability. We consider the two cases where the instantaneous and the average reporting channels are known for optimization. Allowing secondary users to trade sensing time slots for additional reporting time slots is shown to significantly improve sensing performance in both cases, even with poor sensing channels and a small number of secondary users.
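
The M-out-of-K fusion rule mentioned above has a simple closed form when, purely for illustration, all K local detectors are assumed to share the same false-alarm probability p (the thesis treats non-identical channels and noisy reporting). A sketch using only the Python standard library:

    from math import comb

    def fusion_false_alarm(p, K, M):
        """P(at least M of K independent detectors raise a false alarm),
        assuming each does so independently with probability p."""
        return sum(comb(K, j) * p**j * (1 - p)**(K - j) for j in range(M, K + 1))

    # Example: 10 secondary users, each with a 5% local false-alarm probability.
    for M in (1, 3, 5):
        print(M, round(fusion_false_alarm(0.05, 10, M), 6))
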
APA, Harvard, Vancouver, ISO, and other styles
35

Herr, Katrin [Verfasser]. "Core Sets and Symmetric Convex Optimization / Katrin Herr." München : Verlag Dr. Hut, 2013. http://d-nb.info/1045125504/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Chandrasekaran, Venkat. "Convex optimization methods for graphs and statistical modeling." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66002.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 209-220).
An outstanding challenge in many problems throughout science and engineering is to succinctly characterize the relationships among a large number of interacting entities. Models based on graphs form one major thrust in this thesis, as graphs often provide a concise representation of the interactions among a large set of variables. A second major emphasis of this thesis are classes of structured models that satisfy certain algebraic constraints. The common theme underlying these approaches is the development of computational methods based on convex optimization, which are in turn useful in a broad array of problems in signal processing and machine learning. The specific contributions are as follows: -- We propose a convex optimization method for decomposing the sum of a sparse matrix and a low-rank matrix into the individual components. Based on new rank-sparsity uncertainty principles, we give conditions under which the convex program exactly recovers the underlying components. -- Building on the previous point, we describe a convex optimization approach to latent variable Gaussian graphical model selection. We provide theoretical guarantees of the statistical consistency of this convex program in the high-dimensional scaling regime in which the number of latent/observed variables grows with the number of samples of the observed variables. The algebraic varieties of sparse and low-rank matrices play a prominent role in this analysis. -- We present a general convex optimization formulation for linear inverse problems, in which we have limited measurements in the form of linear functionals of a signal or model of interest. When these underlying models have algebraic structure, the resulting convex programs can be solved exactly or approximately via semidefinite programming. We provide sharp estimates (based on computing certain Gaussian statistics related to the underlying model geometry) of the number of generic linear measurements required for exact and robust recovery in a variety of settings. -- We present convex graph invariants, which are invariants of a graph that are convex functions of the underlying adjacency matrix. Graph invariants characterize structural properties of a graph that do not depend on the labeling of the nodes; convex graph invariants constitute an important subclass, and they provide a systematic and unified computational framework based on convex optimization for solving a number of interesting graph problems. We emphasize a unified view of the underlying convex geometry common to these different frameworks. We describe applications of both these methods to problems in financial modeling and network analysis, and conclude with a discussion of directions for future research.
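
The first contribution listed above, splitting a matrix into sparse plus low-rank parts, has a well-known convex formulation that can be written in a few lines of cvxpy (assumed installed). The data, the weight gamma, and the solver choice below are illustrative only, and no claim of exact recovery is made for this particular instance.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n = 20
    low_rank = np.outer(rng.standard_normal(n), rng.standard_normal(n))      # rank one
    sparse = np.zeros((n, n)); sparse[rng.integers(0, n, 15), rng.integers(0, n, 15)] = 5.0
    M = low_rank + sparse

    S = cp.Variable((n, n))
    L = cp.Variable((n, n))
    gamma = 0.25                                # trade-off between sparsity and rank
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(S)) + gamma * cp.normNuc(L)),
                      [S + L == M])
    prob.solve(solver=cp.SCS)
    print(np.round(np.linalg.svd(L.value, compute_uv=False)[:5], 3))   # leading singular values of L
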
by Venkat Chandrasekaran.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
37

Lintereur, Beau V. (Beau Vincent) 1973. "Constrained H̳₂ design via convex optimization with applications." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50628.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1998.
In title on t.p., double-underscored "H" appears in script.
Includes bibliographical references (p. 133-138).
A convex optimization controller design method is presented which minimizes the closed-loop H2 norm, subject to constraints on the magnitude of closed-loop transfer functions and transient responses due to specified inputs. This method uses direct parameter optimization of the closed-loop Youla or Q-parameter where the variables are the coefficients of a stable orthogonal basis. The basis is constructed using the recently rediscovered Generalized Orthonormal Basis Functions (GOBF) that have found application in system identification. It is proposed that many typical control specifications including robustness to modeling error and gain and phase margins can be posed with two simple constraints in the frequency and time domain. With some approximation, this formulation allows the controller design problem to be cast as a quadratic program. Two example applications demonstrate the practical utility of this method for real systems. First this method is applied to the roll axis of the EOS-AM1 spacecraft attitude control system, with a set of performance and robustness specifications. The constrained H2 controller simultaneously meets the specifications where previous model-based control studies failed. Then a constrained H2 controller is designed for an active vibration isolation system for a spaceborne optical technology demonstration test stand. Mixed specifications are successfully incorporated into the design and the results are verified with experimental frequency data.
by Beau V. Lintereur.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
38

Anderson, James David. "Dynamical system decomposition and analysis using convex optimization." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:624001be-28d5-4837-a7d8-2222e270e658.

Full text
Abstract:
This thesis is concerned with investigating new methods for the analysis of large-scale dynamical systems using convex optimization. The proposed methodology is based on composite Lyapunov theory and is computationally implemented using polynomial programming techniques. The main result of this work is the development of a system decomposition framework that makes it possible to analyze systems that are of such a scale that traditional methods cannot cope with. We begin by addressing the problem of model invalidation. A barrier certificate method for invalidating models in the presence of uncertain data is presented for both continuous and discrete time models. It is shown how a re-parameterization of the time dependent variables can improve the numerical conditioning of the underlying optimization problem. The main contribution of this thesis is the development of an automated dynamical system decomposition framework that permits us to verify the stability of systems that typically have a state dimension large enough to render traditional computational methods intractable. The underlying idea is to decompose a system into a set of lower order subsystems connected in feedback in such a manner that composite methods for stability verification may be employed. What is unique about the algorithm presented is that it takes into account both dynamics and the topology of the interconnection graph. In the first instance we illustrate the methodology with an ecological network and primal Internet congestion control scheme. The versatility of the decomposition framework is also highlighted when it is shown that when applied to a model of the EGF-MAPK signaling pathway it is capable of identifying biologically relevant subsystems in addition to stability verification. Finally we introduce stability metrics for interconnected dynamical systems based on the theory of dissipativity. We conclude by outlining a clustering based decomposition algorithm that explicitly takes into account the input and output dynamics when determining the system decomposition.
APA, Harvard, Vancouver, ISO, and other styles
39

Wytock, Matt. "Optimizing Optimization: Scalable Convex Programming with Proximal Operators." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/785.

Full text
Abstract:
Convex optimization has developed a wide variety of useful tools critical to many applications in machine learning. However, unlike linear and quadratic programming, general convex solvers have not yet reached sufficient maturity to fully decouple the convex programming model from the numerical algorithms required for implementation. Especially as datasets grow in size, there is a significant gap in speed and scalability between general solvers and specialized algorithms. This thesis addresses this gap with a new model for convex programming based on an intermediate representation of convex problems as a sum of functions with efficient proximal operators. This representation serves two purposes: 1) many problems can be expressed in terms of functions with simple proximal operators, and 2) the proximal operator form serves as a general interface to any specialized algorithm that can incorporate additional ℓ2-regularization. On a single CPU core, numerical results demonstrate that the prox-affine form results in significantly faster algorithms than existing general solvers based on conic forms. In addition, splitting problems into separable sums is attractive from the perspective of distributing solver work amongst multiple cores and machines. We apply large-scale convex programming to several problems arising from building the next-generation, information-enabled electrical grid. In these problems (as is common in many domains) large, high-dimensional datasets present opportunities for novel data-driven solutions. We present approaches based on convex models for several problems: probabilistic forecasting of electricity generation and demand, preventing failures in microgrids and source separation for whole-home energy disaggregation.
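
The central abstraction in the abstract above is the proximal operator. A minimal sketch of the ℓ1 proximal operator and the corresponding proximal-gradient (forward-backward) iteration is given below; it is a generic illustration with made-up data, not the prox-affine solver described in the thesis, and assumes numpy.

    import numpy as np

    def prox_l1(v, t):
        """argmin_x  t*||x||_1 + 0.5*||x - v||^2, i.e. entrywise soft-thresholding."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def proximal_gradient(A, b, lam, step, iters=300):
        """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting."""
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)                    # forward (gradient) step on the smooth part
            x = prox_l1(x - step * grad, step * lam)    # backward (proximal) step on the l1 part
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 120))
    x_true = np.zeros(120); x_true[:4] = 1.0
    b = A @ x_true
    step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1/L for the quadratic term
    print(np.nonzero(proximal_gradient(A, b, lam=0.1, step=step))[0])
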
APA, Harvard, Vancouver, ISO, and other styles
40

Goulet, Vincent. "Four-bar linkage synthesis using non-convex optimization." Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27721.

Full text
Abstract:
This thesis presents a method for automatically synthesizing four-bar linkages. Software implementing this method was developed as part of an Autodesk Research initiative on generative design. The software takes a trajectory as input and computes the parameters of a four-bar linkage capable of reproducing the same trajectory. This path generation problem is solved by non-convex optimization. The problem is modeled with quadratic constraints and real variables. A special redundant constraint greatly improves the performance of the method. The experiments presented show that the software is faster and more precise than existing approaches.
This thesis presents a method to automatically synthesize four-bar linkages. A software implementing the method was developed in the scope of a generative design initiative at Autodesk. The software takes a path as input and computes the parameters of a four-bar linkage able to replicate the same path. This path generation problem is solved using non-convex optimization. The problem is modeled with quadratic constraints and real variables. A special redundant constraint greatly improves the performance of the method. Experiments show that the software is faster and more precise than existing approaches.
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Beldon Chi. "Integrated vehicle and mission design using convex optimization." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127070.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 177-183).
Convex optimization is used to solve the simultaneous vehicle and mission design problem. The objective of this work is to develop convex optimization architectures that allow both the vehicle and mission to be designed together. They allow the problem to be solved very quickly while maintaining similar fidelity to comparable methods. Multiple architectures are formulated, and the architectures are implemented and evaluated for a sounding rocket design problem and a hydrogen aircraft design problem. The methodology proves successful in designing the sounding rocket while taking into account the optimal trajectory and control strategy, and is extended to a multi-mission design case. The hydrogen aircraft was successfully designed, allowing the cryogenic tank design to be chosen in conjunction with the mission profile. For the rocket design problem, the vehicle and mission design problems can be combined in either an alternating or an integrated approach, and the integrated architecture converges to a solution in 50% of the computation time while reaching a similar solution. For the hydrogen aircraft case, a decrease in fuel burn of more than 50% was achieved compared to regular kerosene with an integrated optimization approach. Future work includes studying the convergence properties as well as increasing the robustness of the architectures.
by Beldon Chi Lin.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
APA, Harvard, Vancouver, ISO, and other styles
42

Sano, Yoshio. "Matroids on convex geometries: Subclasses, operations, and optimization." 京都大学 (Kyoto University), 2010. http://hdl.handle.net/2433/120626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Jueyou. "Distributed and parallel methods for structural convex optimization." Thesis, Federation University Australia, 2014. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/81614.

Full text
Abstract:
There has been considerable recent interest in optimization methods associated with a multi-agent network. The goal is to optimize a global objective function which is a sum of local objective functions only known by the agents through the network. The focus of this dissertation is the development of optimization algorithms for the special class when the optimization problem of interest has an additive or separable structure. Specifically, we are concerned with two classes of convex optimization problems. The first, called multi-agent convex problems, arises in many network applications, including in-network estimation, machine learning and signal processing. The second, termed separable convex problems, arises in diverse applications, including network resource allocation, and distributed model prediction and control. Due to the structure of problems and privacy of local objective functions, special optimization methods are always desirable, especially for large-scale structured problems. For the multi-agent convex problems with simple constraints, we develop gradient-free distributed methods based on the incremental and consensus strategies. The convergence analysis and convergence rates of proposed methods are provided. By comparison, existing distributed algorithms require first-order information of objective functions, but our methods only involve estimates of the objective function value. Therefore, the proposed methods are suitable to solve more general problems even when the first-order information of problems is unavailable or costly to compute. In practical applications a wide variety of problems are formulated as multi-agent optimization problems subject to equality and (or) inequality constraints. Methods available for solving this type of problem are still limited in the literature. Most of them are based on the Lagrangian duality, and there are no estimates of the convergence rate. In the thesis, we develop a distributed proximal-gradient method to solve multi-agent convex problems under global inequality constraints. Moreover, we provide the convergence analysis of the proposed method and obtain explicit estimates of the convergence rate. Our method relies on the exact penalty function method and multi-consensus averaging, not involving the Lagrangian multipliers. For the separable convex problems with linear constraints, on the framework of Lagrangian dual decomposition, we develop fast gradient-based optimization methods, including a fast dual gradient-projection method and a fast dual gradient method. In addition to parallel implementation of the algorithm, our focus is that the algorithm has a faster convergence rate, since existing dual subgradient-based algorithms suffer from a slow convergence rate. Our proposed algorithms are based on Nesterov's smoothing technique and several fast gradient schemes. The explicit convergence rates of the proposed algorithms are obtained, which are superior to those obtained by subgradient-based algorithms. The proposed algorithms are applied to a real-time pricing problem in the smart grid and a network utility maximization problem. Dual decomposition methods often involve finding the exact solution of an inner subproblem at each iteration. However, from a practical point of view, the subproblem is never solved exactly. Hence, we extend the proposed fast dual gradient-projection method to the inexact setting. Although the inner subproblem is solved only up to certain precision, we provide a complete analysis of computational complexity on the generated approximate solutions. Thus, our inexact version has the attractive computational advantage that the subproblem only needs to be solved with certain accuracy while still maintaining the same iteration complexity as the exact counterpart.
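
The Lagrangian dual decomposition setting described above, a sum of local objectives coupled only through a linear constraint, can be illustrated with plain dual gradient ascent on a toy quadratic resource-allocation problem. This is the baseline scheme, not the accelerated methods proposed in the thesis; the data and step size are illustrative, and numpy is assumed.

    import numpy as np

    def dual_decomposition(C, r, step=0.2, iters=200):
        """min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i x_i = r, solved through the dual.

        C has shape (N, d) with row i holding the local target c_i; r is the shared resource.
        Each x_i-update uses only local data, so it could run at agent i.
        """
        lam = np.zeros_like(r)
        for _ in range(iters):
            X = C - lam                                  # closed-form local minimizers
            lam = lam + step * (X.sum(axis=0) - r)       # dual gradient ascent on the coupling
        return X, lam

    rng = np.random.default_rng(0)
    X, lam = dual_decomposition(rng.standard_normal((5, 3)), np.ones(3))
    print(np.abs(X.sum(axis=0) - np.ones(3)).max())      # coupling constraint residual
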
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
44

Bengtsson, Gabriel, and Jesper Larsson. "Source Localization by Inverse Diffusion and Convex Optimization." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230738.

Full text
Abstract:
The goal of this report has been to locate the sources of an emitted substance using an inverse diffusion algorithm. This algorithm has given successful results when applied to biological cell detection, and it has further been suggested that the running time could be reduced considerably if the algorithm is implemented as a computation graph. This would automate gradient computations and allow faster execution on a graphics processing unit. The algorithm was implemented in TensorFlow, which is primarily a programming library aimed at machine learning. Computer-generated test images of biological samples were then used to evaluate the results using image analysis software and performance metrics. Comparisons show that the TensorFlow implementation gives results consistent with the traditional implementation of the same algorithm. Viewed from a broader perspective, this demonstrates the possibility of using computation graphs to solve large-scale optimization problems, and more specifically inverse problems.
The purpose of this project was to implement an inverse diffusion algorithm to locate the sources of an emitted substance. This algorithm has yielded successful results when applied to biological cell detection, and it has been suggested that the run time could be greatly reduced if adaptions for a computation graph framework are made. This would automate calculations of gradients and allow for faster execution on a graphics processing unit. The algorithm implementation was realized in TensorFlow, which is primarily a machine learning oriented programming library. Computer-generated biological test images were then used to evaluate the performance using regular image analysis software and accuracy metrics. Comparisons reveal that the TensorFlow implementation of the algorithm can match the accuracy metrics of traditional implementations of the same algorithm. Viewed in a broader scope this serves as an example to highlight the possibility of using computation graph frameworks to solve large scale optimization problems, and more specifically inverse problems.
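
The report describes casting an inverse problem as a computation graph so that gradients are produced automatically. A much-simplified sketch of that pattern in TensorFlow is shown below: a generic linear forward model with a sparsity penalty, minimized with automatic differentiation. It is not the report's actual inverse-diffusion model; the forward matrix, penalty, and optimizer settings are illustrative, and tensorflow is assumed to be installed.

    import numpy as np
    import tensorflow as tf

    # Toy forward model: observations = A @ sources + noise.
    rng = np.random.default_rng(0)
    A = tf.constant(rng.standard_normal((200, 100)), dtype=tf.float32)
    src = np.zeros(100, dtype=np.float32); src[[10, 40, 77]] = [1.0, 2.0, 1.5]
    b = tf.constant(A.numpy() @ src + 0.01 * rng.standard_normal(200).astype(np.float32))

    x = tf.Variable(tf.zeros(100))                 # unknown source intensities
    opt = tf.keras.optimizers.Adam(learning_rate=0.05)
    lam = 0.05

    for _ in range(500):
        with tf.GradientTape() as tape:            # gradients come from the graph, not by hand
            residual = tf.linalg.matvec(A, x) - b
            loss = tf.reduce_sum(residual ** 2) + lam * tf.reduce_sum(tf.abs(x))
        opt.apply_gradients(zip(tape.gradient(loss, [x]), [x]))

    print(np.argsort(-x.numpy())[:3])              # indices of the strongest recovered sources
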
APA, Harvard, Vancouver, ISO, and other styles
45

Nguyen, Thanh Tan. "Selected non-convex optimization problems in machine learning." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/200748/1/Thanh_Nguyen_Thesis.pdf.

Full text
Abstract:
Non-convex optimization is an important and rapidly growing research area. It is tied to the latest success of deep learning, reinforcement learning, matrix factorization, and more. As a contribution to this area, this thesis provides analyses and algorithms for three important problems. The first one is optimization of noisy functions defined on a large graph, which is useful for A/B testing and digital marketing. The second one is learning a convex ensemble of basis models, with applications in regression and classification. The last one is optimization of ResNet with restricted residual modules, which leads to better performance than standard ResNet.
APA, Harvard, Vancouver, ISO, and other styles
46

Cho, Myung. "Convex and non-convex optimizations for recovering structured data: algorithms and analysis." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5922.

Full text
Abstract:
Optimization theories and algorithms are used to efficiently find optimal solutions under constraints. In the era of “Big Data”, the amount of data is skyrocketing, and this overwhelms conventional techniques used to solve large scale and distributed optimization problems. By taking advantage of structural information in data representations, this thesis offers convex and non-convex optimization solutions to various large scale optimization problems such as super-resolution, sparse signal processing, hypothesis testing, machine learning, and treatment planning for brachytherapy. Super-resolution: Super-resolution aims to recover a signal expressed as a sum of a few Dirac delta functions in the time domain from measurements in the frequency domain. The challenge is that the possible locations of the delta functions are in the continuous domain [0,1). To enhance recovery performance, we considered deterministic and probabilistic prior information for the locations of the delta functions and provided novel semidefinite programming formulations under the information. We also proposed block iterative reweighted methods to improve recovery performance without prior information. We further considered phaseless measurements, motivated by applications in optic microscopy and x-ray crystallography. By using the lifting method and introducing the squared atomic norm minimization, we can achieve super-resolution using only low frequency magnitude information. Finally, we proposed non-convex algorithms using structured matrix completion. Sparse signal processing: L1 minimization is well known for promoting sparse structures in recovered signals. The Null Space Condition (NSC) for L1 minimization is a necessary and sufficient condition on sensing matrices such that a sparse signal can be uniquely recovered via L1 minimization. However, verifying NSC is a non-convex problem and known to be NP-hard. We proposed enumeration-based polynomial-time algorithms to provide performance bounds on NSC, and efficient algorithms to verify NSC precisely by using the branch and bound method. Hypothesis testing: Recovering statistical structures of random variables is important in some applications such as cognitive radio. Our goal is distinguishing two different types of random variables among n>>1 random variables. Distinguishing them via experiments for each random variable one by one takes a lot of time and effort. Hence, we proposed hypothesis testing using mixed measurements to reduce sample complexity. We also designed efficient algorithms to solve large scale problems. Machine learning: When feature data are stored in a tree structured network having time delay in communication, quickly finding an optimal solution to the regularized loss minimization is challenging. In this scenario, we studied a communication-efficient stochastic dual coordinate ascent and its convergence analysis. Treatment planning: In the Rotating-Shield Brachytherapy (RSBT) for cancer treatment, there is a compelling need to quickly obtain optimal treatment plans to enable clinical usage. However, due to the degrees of freedom in RSBT, finding optimal treatment planning is difficult. For this, we designed a first order dose optimization method based on the alternating direction method of multipliers, and reduced the execution time around 18 times compared to the previous research.
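
Several threads in the abstract above (super-resolution, sparse recovery, the null space condition) revolve around ℓ1 minimization. For reference, the basic basis-pursuit program itself is a few lines of cvxpy (assumed installed); this is the standard formulation on random data, not the thesis's verification or super-resolution algorithms.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    m, n, k = 30, 80, 4
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true

    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b])   # basis pursuit
    prob.solve()
    print("recovery error:", np.linalg.norm(x.value - x_true))
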
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Yi. "Sequential convex approximations of chance constrained programming /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?IELM%202008%20YANG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Banerjee, Nirjhar. "Random obtuse triangles and convex quadrilaterals." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54212.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 83-85).
We intend to discuss in detail two well known geometrical probability problems. The first one deals with finding the probability that a random triangle is obtuse in nature. We initially discuss the various ways of choosing a random triangle. The problem is at first analyzed based on random angles (adding to 180 degrees) and random sides (obeying the triangle inequality) which is a direct modification of the Broken Stick Problem. We then study the effect of shape on the probability that when three random points are chosen inside a figure of that shape they will form an obtuse triangle. A literature survey reveals the existence of analytical formulae only in the cases of the square, circle and rectangle. We used Monte Carlo simulation to solve this problem in various shapes. We intend to show by means of simulation that the given probability will reach its minimum value when the random points are taken inside a circle. We then introduce the concept of Random Walk in Triangles and show that the probability that a triangle formed during the process is obtuse is itself random. We also propose the idea of Differential Equation in Triangle Space and study the variation of angles during this dynamic process. We then propose to extend this to the problem of calculating the probability that the quadrilateral formed by four random points is convex. The effects of shape are distinctly different from those obtained in the random triangle problem. The effects of true random numbers and normally generated pseudorandom numbers are also compared for both of the problems considered.
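
The Monte Carlo experiment described above is easy to reproduce in outline: draw three points inside a given shape and test obtuseness from the squared side lengths. A sketch assuming numpy, with the unit square and the unit disc as example shapes; the trial count is illustrative.

    import numpy as np

    def is_obtuse(p, q, r):
        """A triangle is obtuse iff its largest squared side exceeds the sum of the other two."""
        sides = sorted((np.sum((q - r)**2), np.sum((p - r)**2), np.sum((p - q)**2)))
        return sides[2] > sides[0] + sides[1]

    def obtuse_probability(sample, trials=50_000, rng=np.random.default_rng(0)):
        hits = sum(is_obtuse(sample(rng), sample(rng), sample(rng)) for _ in range(trials))
        return hits / trials

    def in_square(rng):
        return rng.random(2)

    def in_disc(rng):                        # rejection sampling inside the unit disc
        while True:
            p = 2 * rng.random(2) - 1
            if p @ p <= 1:
                return p

    print("square:", obtuse_probability(in_square))   # known value is roughly 0.725
    print("disc:  ", obtuse_probability(in_disc))     # known value is roughly 0.720
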
by Nirjhar Banerjee.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
49

Umenberger, Jack. "Convex Identification of Stable Dynamical Systems." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17321.

Full text
Abstract:
This thesis concerns the scalable application of convex optimization to data-driven modeling of dynamical systems, termed system identification in the control community. Two problems commonly arising in system identification are model instability (e.g. unreliability of long-term, open-loop predictions), and nonconvexity of quality-of-fit criteria, such as simulation error (a.k.a. output error). To address these problems, this thesis presents convex parametrizations of stable dynamical systems, convex quality-of-fit criteria, and efficient algorithms to optimize the latter over the former. In particular, this thesis makes extensive use of Lagrangian relaxation, a technique for generating convex approximations to nonconvex optimization problems. Recently, Lagrangian relaxation has been used to approximate simulation error and guarantee nonlinear model stability via semidefinite programming (SDP), however, the resulting SDPs have large dimension, limiting their practical utility. The first contribution of this thesis is a custom interior point algorithm that exploits structure in the problem to significantly reduce computational complexity. The new algorithm enables empirical comparisons to established methods including Nonlinear ARX, in which superior generalization to new data is demonstrated. Equipped with this algorithmic machinery, the second contribution of this thesis is the incorporation of model stability constraints into the maximum likelihood framework. Specifically, Lagrangian relaxation is combined with the expectation maximization (EM) algorithm to derive tight bounds on the likelihood function, that can be optimized over a convex parametrization of all stable linear dynamical systems. Two different formulations are presented, one of which gives higher fidelity bounds when disturbances (a.k.a. process noise) dominate measurement noise, and vice versa. Finally, identification of positive systems is considered. Such systems enjoy substantially simpler stability and performance analysis compared to the general linear time-invariant (LTI) case, and appear frequently in applications where physical constraints imply nonnegativity of the quantities of interest. Lagrangian relaxation is used to derive new convex parametrizations of stable positive systems and quality-of-fit criteria, and substantial improvements in accuracy of the identified models, compared to existing approaches based on weighted equation error, are demonstrated. Furthermore, the convex parametrizations of stable systems based on linear Lyapunov functions are shown to be amenable to distributed optimization, which is useful for identification of large-scale networked dynamical systems.
APA, Harvard, Vancouver, ISO, and other styles
50

Andramonov, Mikhail. "Global minimization of some classes of generalized convex functions." Thesis, Federation University Australia, 2001. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/164850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
