Dissertations / Theses on the topic 'Second-Order Cone Programming'

Consult the top 23 dissertations / theses for your research on the topic 'Second-Order Cone Programming.'


1

Okuno, Takayuki. "Studies on Algorithms for Solving Generalized Second-Order Cone Programming Problems." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/174846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Jein-Shan. "Merit functions and nonsmooth functions for the second-order cone complementarity problem /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/5782.

3

Camargo, Julia de Toledo. "Three Dimensional Limit Analysis Using Second Order Cone Programming Applied to Slope Stability." Pontifícia Universidade Católica do Rio de Janeiro, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26876@1.

Abstract:
Numerical limit analysis was studied in order to evaluate an effective numerical procedure for solving three-dimensional slope stability problems. This numerical approach combines the finite element method with mathematical programming. Mathematical programming is needed because the plasticity theorems, the basic theorems of limit analysis, can be cast as optimization problems. The lower bound theorem consists of finding the maximum collapse multiplier that brings the soil to the imminence of collapse, while the soil remains restricted to equilibrium conditions and the yield criterion is satisfied everywhere. Drucker-Prager was the yield criterion chosen. In this thesis, the optimization problem is reformulated as a second-order cone program (SOCP). SOCP is known to solve large-scale problems with great computational efficiency; we used the solver Mosek. The model calculates not only the collapse multiplier but also the safety factor of the slope. A strength reduction scheme was proposed, based on the Newton-Raphson method. For cases with complex geometries, a novel formulation was developed: a fictitious horizontal force was added to the equilibrium equation, and only this force was scaled by the collapse multiplier. It was only through this reformulation that it was possible to assess the stability of slopes subjected to pore-pressure effects. The groundwater flow was simulated separately in a finite element program developed by Miqueletto (2007). The soil strength depends on pore-pressure values, which characterize soils as saturated or unsaturated.
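The Drucker-Prager criterion mentioned in this abstract is exactly what makes the SOCP reformulation possible: the yield function bounds a norm of the deviatoric stress by an affine function of the stress invariants, which is a second-order cone constraint. A minimal sketch in Python (the constants `a`, `b` and the sign convention are our own illustrative choices, not the thesis's):

```python
import math

def drucker_prager_ok(sig, a, b):
    """Check a Drucker-Prager yield condition of the form sqrt(J2) <= a - b*I1
    for a stress state given by its principal stresses (sig1, sig2, sig3).
    Since sqrt(J2) is (up to a constant) the Euclidean norm of the deviatoric
    stress, this is a second-order cone constraint on the stress variables."""
    s1, s2, s3 = sig
    I1 = s1 + s2 + s3                      # first stress invariant
    m = I1 / 3.0                           # mean (hydrostatic) stress
    d = (s1 - m, s2 - m, s3 - m)           # deviatoric principal stresses
    J2 = 0.5 * sum(x * x for x in d)       # second deviatoric invariant
    return math.sqrt(J2) <= a - b * I1

# Hydrostatic compression (zero deviator) lies safely inside the cone:
print(drucker_prager_ok((-1.0, -1.0, -1.0), a=1.0, b=0.1))  # True
# A strongly sheared state violates the criterion:
print(drucker_prager_ok((3.0, 0.0, -3.0), a=1.0, b=0.1))    # False
```

In an actual limit analysis model this constraint is imposed at every stress evaluation point and handed to a conic solver such as Mosek.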
4

Ciria, Suárez Héctor 1979. "Computation of upper and lower bounds in limit analysis using second-order cone programming and mesh adaptivity." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/16655.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004.
Includes bibliographical references (p. 109-111).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Limit analysis is relevant in many practical engineering areas such as the design of mechanical structures or the analysis of soil mechanics. Assuming a rigid, perfectly-plastic solid subject to a static load distribution, the problem of limit analysis consists of finding the minimum multiple of this load distribution that will cause the body to collapse. This collapse multiplier results from solving an infinite-dimensional saddle point problem, where the internal work rate is maximized over an admissible set of stresses (defined by a yield condition) and minimized over the linear space of kinematically admissible velocities for which the external work rate equals unity. When strong duality is applied to this saddle point problem, the well-known convex (and equivalent) static and kinematic principles of limit analysis arise. In this thesis, an efficient procedure to compute strict upper and lower bounds for the exact collapse multiplier is presented, with a formulation that explicitly considers the exact convex yield condition. The approach consists of two main steps. First, the continuous problem, in the form of the static principle, is discretized twice (once per bound) by means of different combinations of finite element spaces for the stresses and velocities. For each discretization, the interpolation spaces are chosen so that the attainment of an upper or a lower bound is guaranteed. The second step consists of solving the resulting discrete nonlinear optimization problems. Towards this end, they are reformulated into the canonical form of second-order cone programs, which allows for the use of primal-dual interior point methods that optimally exploit the convexity and duality properties of the limit analysis model and guarantee global convergence to the optimal solutions. To exploit the fact that collapse mechanisms are typically highly localized, a novel method for adaptive meshing is introduced, based on local bound gap measures and not on heuristic estimates. The method decomposes the total bound gap as the sum of positive elemental contributions from each element in the mesh, and refines only those elements which are responsible for the majority of the numerical error. Finally, stand-alone computational certificates that allow the bounds to be verified independently, without recourse to the original computer program, are also provided. This removes the uncertainty about the reliability of the results, which frequently undermines the utility of computational simulations. The efficiency of the proposed methodology is illustrated with several applications in plane stress and plane strain, demonstrating that it can be used in complex, realistic problems as a supplement to other models.
by Héctor Ciria Suárez.
S.M.
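The bound-gap driven refinement this abstract describes can be sketched in a few lines: decompose the total upper/lower bound gap into positive elemental contributions and refine only the elements carrying most of it. This is an illustrative selection rule under our own assumptions (the gap values and the 70% threshold are invented), not the thesis's actual implementation:

```python
def elements_to_refine(gap, fraction=0.7):
    """Given positive elemental bound-gap contributions, return the indices
    of the fewest elements that together account for at least `fraction`
    of the total bound gap. These are the elements a bound-gap driven
    adaptive meshing scheme would mark for refinement."""
    total = sum(gap)
    order = sorted(range(len(gap)), key=lambda i: gap[i], reverse=True)
    picked, acc = [], 0.0
    for i in order:
        picked.append(i)
        acc += gap[i]
        if acc >= fraction * total:
            break
    return sorted(picked)

gaps = [0.02, 0.50, 0.05, 0.30, 0.08, 0.05]   # per-element gaps, sum = 1.0
print(elements_to_refine(gaps))  # [1, 3] -- two elements carry 80% of the gap
```

The point of such a rule is that refinement effort concentrates where the collapse mechanism localizes, rather than being spread uniformly by a heuristic error estimate.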
5

Mohammad, Salimian. "A Mixed Integer Second Order Cone Programming Reformulation For A Congested Location And Capacity Allocation Problem On A Supply Chain Network." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615407/index.pdf.

Abstract:
Supply chain network design involves location decisions for production facilities and distribution centers. We consider a make-to-order supply chain environment where distribution centers serve as cross-docking terminals. Long waiting times may occur at a cross-docking terminal unless sufficient handling capacity is installed. In this study, we deal with a facility location problem with congestion effects at distribution centers. Along with location decisions, we make capacity allocation (service rate) and demand allocation decisions so that the total cost, including facility opening, transportation, and congestion costs, is minimized. Response time to customer orders is a critical performance measure for a supply chain network. Decisions such as where the plants and distribution centers are located affect the response time of the system, and response time is especially sensitive to these decisions in a make-to-order business environment. In a distribution network where distribution centers function as cross-docking terminals, capacity (service rate) decisions also affect the response time performance. This study is closely related to a recent work by Vidyarthi et al. (2009), which models distribution centers as M/G/1 queuing systems and uses the average waiting time formula of the M/G/1 queuing model. Thus, the average waiting time at a distribution center is a nonlinear function of the demand rate allocated to, and the service rate available at, the distribution center. Vidyarthi et al. (2009) propose a linear approximation approach and a Lagrangian-based heuristic for the problem. Differently from their solution approach, we propose a closed-form formulation: in particular, we show that the waiting time function derived from the M/G/1 queuing model can be represented via second-order conic inequalities. The problem then becomes a mixed integer second-order cone programming problem which can be solved using commercial branch-and-bound software such as IBM ILOG CPLEX. Our computational tests show that the proposed reformulation can be solved in reasonable CPU times for practical-size instances.
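The conic-representability claim at the heart of this abstract rests on a standard identity: a hyperbolic constraint x² ≤ yz with y, z ≥ 0 (which covers M/G/1 waiting-time terms such as λ²/(μ − λ)) is equivalent to a single second-order cone constraint. A small numerical check of the equivalence:

```python
import math
import itertools

def hyperbolic(x, y, z):
    """x^2 <= y*z with y, z >= 0 -- the 'rotated' second-order cone."""
    return y >= 0 and z >= 0 and x * x <= y * z

def soc(x, y, z):
    """The equivalent standard second-order cone constraint:
    ||(2x, y - z)||_2 <= y + z."""
    return math.hypot(2 * x, y - z) <= y + z

# This equivalence is what lets a nonlinear M/G/1 term like lam^2/(mu - lam)
# be written with second-order conic inequalities inside a MISOCP.
vals = [0.0, 0.5, 1.0, 2.0, 3.0]
assert all(hyperbolic(x, y, z) == soc(x, y, z)
           for x, y, z in itertools.product(vals, repeat=3))
print("rotated-cone equivalence verified")
```

Squaring both sides shows why: 4x² + (y − z)² ≤ (y + z)² simplifies to 4x² ≤ 4yz.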
6

Bornhorst, Nils. "Energy-Efficient Distributed Multicast Beamforming Using Iterative Second-Order Cone Programming." Supervisors: Marius Pesavento, Martin Haardt, Anja Klein, Sebastian Schöps. Darmstadt: Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1110980922/34.

7

Chen, Jieqiu. "Convex relaxations in nonconvex and applied optimization." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/654.

Abstract:
Traditionally, linear programming (LP) has been used to construct convex relaxations in the context of branch and bound for determining global optimal solutions to nonconvex optimization problems. As second-order cone programming (SOCP) and semidefinite programming (SDP) have become better understood by optimization researchers, they have become alternative choices for obtaining convex relaxations and producing bounds on the optimal values. In this thesis, we study the use of these convex optimization tools in constructing strong relaxations for several nonconvex problems, including 0-1 integer programming, nonconvex box-constrained quadratic programming (BoxQP), and general quadratic programming (QP). We first study an SOCP relaxation for 0-1 integer programs and a sequential relaxation technique based on this SOCP relaxation. We present desirable properties of this SOCP relaxation; for example, it cuts off all fractional extreme points of the regular LP relaxation. We further prove that the sequential relaxation technique generates the convex hull of 0-1 solutions asymptotically. We next explore nonconvex quadratic programming. We propose an SDP relaxation for BoxQP based on relaxing the first- and second-order KKT conditions, where the difficulty and contribution lie in relaxing the second-order KKT condition. We show that, although the relaxation we obtain this way is equivalent to an existing SDP relaxation at the root node, it is significantly stronger at the child nodes in a branch-and-bound setting. A recent advance in optimization theory allows one to express QP as optimizing a linear function over the convex cone of completely positive matrices subject to linear constraints, referred to as completely positive programming (CPP). CPP naturally admits strong semidefinite relaxations. We incorporate the first-order KKT conditions of QP into its constraints, and then pose it in the form of CPP to obtain a strong relaxation. We employ the resulting SDP relaxation inside a finite branch-and-bound algorithm to solve the QP. Comparison of our algorithm with commercial global solvers shows potential as well as room for improvement. The remainder is devoted to new techniques for solving a class of large-scale linear programming problems. First-order methods, although not as fast as second-order methods, are extremely memory efficient. We develop a first-order method based on Nesterov's smoothing technique and demonstrate its effectiveness on two machine learning problems.
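A basic device behind SOCP relaxations of quadratic problems like those above is that any convex quadratic form is second-order cone representable through a Cholesky factor: xᵀQx ≤ t becomes ‖Lᵀx‖² ≤ t with Q = LLᵀ. A minimal sketch with illustrative numbers of our own (a hand-rolled 2×2 Cholesky, not any thesis code):

```python
import math

def chol2(q11, q12, q22):
    """Cholesky factor L (lower triangular) of a 2x2 SPD matrix Q = L L^T."""
    l11 = math.sqrt(q11)
    l21 = q12 / l11
    l22 = math.sqrt(q22 - l21 * l21)
    return l11, l21, l22

# A convex quadratic constraint x^T Q x <= t is SOC-representable as
# ||L^T x||^2 <= t, i.e. a rotated second-order cone on (L^T x, t, 1).
q11, q12, q22 = 4.0, 2.0, 3.0
l11, l21, l22 = chol2(q11, q12, q22)
x1, x2 = 1.0, -2.0
quad = q11*x1*x1 + 2*q12*x1*x2 + q22*x2*x2   # x^T Q x directly
y1 = l11*x1 + l21*x2                          # (L^T x)_1
y2 = l22*x2                                   # (L^T x)_2
assert abs(quad - (y1*y1 + y2*y2)) < 1e-9     # the two agree
print(quad)  # 8.0
```

Once the quadratic is in norm form, any conic interior-point solver can handle it alongside linear constraints.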
8

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras which can be solved using our proposed modified SVD method. This is the first study finding linear relations for certain types of generalized cameras and performing experiments using our proposed linear method. Third, we present a linear 6-point method (5 points from one camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where the cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast searching method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an earlier stage, we reduce the computation time of solving the LPs.

We tested our proposed methods by performing experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimating the translation of omnidirectional cameras and in estimating the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution using the L∞ norm to estimate the relative motion of multi-camera systems could be achieved.
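L∞-norm formulations like those in the fourth and fifth contributions are typically solved by bisection on the error level γ: for a fixed γ the problem "is every residual ≤ γ?" is a convex feasibility problem (an SOCP, or an LP after relaxation). A one-dimensional toy version of our own, with interval intersection standing in for the cone feasibility test:

```python
def minimax_residual(a, b, tol=1e-9):
    """Minimize max_i |a_i * x - b_i| over a scalar x by bisection on the
    error level g. Level-g feasibility here reduces to intersecting the
    intervals [(b_i - g)/a_i, (b_i + g)/a_i] (assuming a_i > 0); in the
    multi-camera setting the analogous test is an SOCP/LP feasibility check."""
    def feasible(g):
        lo = max((bi - g) / ai for ai, bi in zip(a, b))
        hi = min((bi + g) / ai for ai, bi in zip(a, b))
        return lo <= hi
    glo, ghi = 0.0, max(abs(bi) for bi in b) + 1.0
    while ghi - glo > tol:
        mid = 0.5 * (glo + ghi)
        if feasible(mid):
            ghi = mid
        else:
            glo = mid
    return ghi

# Residuals |x| and |x - 2|: the optimal point is x = 1 with error 1.
print(round(minimax_residual([1.0, 1.0], [0.0, 2.0]), 6))  # 1.0
```

Because each feasibility test is convex, the bisection converges to the global optimum of the quasi-convex L∞ objective, which is the key property exploited in the thesis.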
9

Terrade, Benjamin. "Evaluation structurale des murs de soutènement en maçonnerie." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1203/document.

Abstract:
Wherever stone is readily available, we encounter stone masonry buildings. Depending on customs and intended use, the blocks are used raw, lightly faced, or perfectly cut, with or without mortar. Although concrete has replaced masonry in new construction for some decades, the better part of the French built heritage is made of masonry, a heritage we are responsible for. This work aims at contributing to a reliable scientific framework for that purpose. The thesis uses the yield design theory together with homogenisation techniques to study the stability of stone masonry earth-retaining walls. First, we provide an analytical tool suitable for designing new structures or assessing the stability of existing ones that are still in good shape. Should it be needed, this tool allows for the design of a strengthening solution based on soil nailing. Then, we implement it in a finite element code to give it the versatility required to study unconventional or badly damaged structures. Finally, we present several experimental campaigns aimed at validating the proposed tools.
10

Coutinho, Walton Pereira. "Um algoritmo branch-and-bound para o problema do caixeiro viajante suficientemente próximo." Universidade Federal da Paraíba, 2014. http://tede.biblioteca.ufpb.br:8080/handle/tede/5268.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This research deals with the Close-Enough Traveling Salesman Problem, a variant of the Traveling Salesman Problem which has several applications in logistics. In the Close-Enough Traveling Salesman Problem, rather than visiting the vertex (customer) itself, the salesman must visit a specific region containing that vertex. To solve this problem, we propose a simple yet effective exact algorithm based on branch-and-bound and Second-Order Cone Programming. The proposed algorithm was tested on 824 instances suggested in the literature. Optimal solutions are obtained for open problems with up to a thousand vertices. We consider instances in both two- and three-dimensional space.
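For intuition on how branch-and-bound interacts with SOCP here: once a visiting order of the disk-shaped regions is fixed, the shortest tour touching each disk is itself a small SOCP; cheap geometric bounds prune the search over orders. The sketch below shows one such simple lower bound (illustrative only, not necessarily the thesis's exact bounding scheme):

```python
import math

def cetsp_order_lower_bound(centers, radii):
    """A simple lower bound on the length of a closed tour that must touch
    each disk (center, radius) in the given cyclic order: each consecutive
    pair of disks can shorten the center-to-center leg by at most the two
    radii. The exact per-order touring problem, min sum ||x_{i+1} - x_i||
    s.t. ||x_i - c_i|| <= r_i, is a second-order cone program."""
    n = len(centers)
    total = 0.0
    for i in range(n):
        (x1, y1), (x2, y2) = centers[i], centers[(i + 1) % n]
        gap = math.hypot(x2 - x1, y2 - y1) - radii[i] - radii[(i + 1) % n]
        total += max(0.0, gap)
    return total

centers = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
radii = [1.0, 1.0, 1.0]
print(round(cetsp_order_lower_bound(centers, radii), 4))  # 28.1421
```

A branch-and-bound scheme can discard an order whenever such a bound already exceeds the best tour found so far, reserving the SOCP solve for promising orders.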
11

Pham, Anh Tu. "Détermination numérique des propriétés de résistance de roches argileuses." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1237/document.

Abstract:
The strength capacities of the Callovo-Oxfordian (COx) argillite, a potential host rock for the deep underground repository of high-level radioactive waste in France, are investigated. At the micro-scale, micro-pores can be observed in the matrix, and a first strength homogenization step has been performed in order to evaluate the matrix strength criterion. The microstructure analysis of this material at the scale of some hundreds of micrometres, referred to as the meso-scale, shows a clay matrix and a random distribution of mineral inclusions (quartz and calcite). Aiming at the determination of the COx argillite strength domain, an FEM numerical tool has been developed in the context of the elastoplastic behavior of the matrix. Several morphological patterns of the representative elementary volume have been considered and subjected to an incremental loading in periodic conditions until collapse occurs. As a result of such an elastoplastic calculation, one point of the boundary of the strength domain is obtained; the full boundary can then be reached by successive elastoplastic calculations. As an alternative to direct elastoplastic simulations, kinematic and static approaches of limit analysis are performed. The stress-based (static approach) and the velocity-based (kinematic approach) finite element methods are used to develop a numerical tool able to derive a lower bound and an upper bound of the strength domain, respectively.
12

Cheng, Jianqiang. "Stochastic Combinatorial Optimization." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112261.

Abstract:
In this thesis, we studied three types of stochastic problems: chance constrained problems, distributionally robust problems as well as the simple recourse problems. For the stochastic programming problems, there are two main difficulties. One is that feasible sets of stochastic problems is not convex in general. The other main challenge arises from the need to calculate conditional expectation or probability both of which are involving multi-dimensional integrations. Due to the two major difficulties, for all three studied problems, we solved them with approximation approaches.We first study two types of chance constrained problems: linear program with joint chance constraints problem (LPPC) as well as maximum probability problem (MPP). For both problems, we assume that the random matrix is normally distributed and its vector rows are independent. We first dealt with LPPC which is generally not convex. We approximate it with two second-order cone programming (SOCP) problems. Furthermore under mild conditions, the optimal values of the two SOCP problems are a lower and upper bounds of the original problem respectively. For the second problem, we studied a variant of stochastic resource constrained shortest path problem (called SRCSP for short), which is to maximize probability of resource constraints. To solve the problem, we proposed to use a branch-and-bound framework to come up with the optimal solution. As its corresponding linear relaxation is generally not convex, we give a convex approximation. Finally, numerical tests on the random instances were conducted for both problems. With respect to LPPC, the numerical results showed that the approach we proposed outperforms Bonferroni and Jagannathan approximations. 
While for the MPP, the numerical results on generated instances substantiated that the convex approximation outperforms the individual approximation method.Then we study a distributionally robust stochastic quadratic knapsack problems, where we only know part of information about the random variables, such as its first and second moments. We proved that the single knapsack problem (SKP) is a semedefinite problem (SDP) after applying the SDP relaxation scheme to the binary constraints. Despite the fact that it is not the case for the multidimensional knapsack problem (MKP), two good approximations of the relaxed version of the problem are provided which obtain upper and lower bounds that appear numerically close to each other for a range of problem instances. Our numerical experiments also indicated that our proposed lower bounding approximation outperforms the approximations that are based on Bonferroni's inequality and the work by Zymler et al.. Besides, an extensive set of experiments were conducted to illustrate how the conservativeness of the robust solutions does pay off in terms of ensuring the chance constraint is satisfied (or nearly satisfied) under a wide range of distribution fluctuations. Moreover, our approach can be applied to a large number of stochastic optimization problems with binary variables.Finally, a stochastic version of the shortest path problem is studied. We proved that in some cases the stochastic shortest path problem can be greatly simplified by reformulating it as the classic shortest path problem, which can be solved in polynomial time. To solve the general problem, we proposed to use a branch-and-bound framework to search the set of feasible paths. Lower bounds are obtained by solving the corresponding linear relaxation which in turn is done using a Stochastic Projected Gradient algorithm involving an active set method. Meanwhile, numerical examples were conducted to illustrate the effectiveness of the obtained algorithm. 
Concerning the resolution of the continuous relaxation, our Stochastic Projected Gradient algorithm clearly outperforms the Matlab optimization toolbox on large graphs.
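As background to the SOCP approximations described in this abstract: an individual Gaussian chance constraint P(aᵀx ≤ b) ≥ p admits a well-known deterministic second-order cone form. The sketch below is a generic illustration in pure Python, not code from the thesis; the function name and the list-based linear algebra are our own assumptions.

```python
from math import sqrt
from statistics import NormalDist

def socp_chance_constraint_ok(mu, L, b, p, x):
    """Check the deterministic SOCP form of an individual chance constraint
    P(a^T x <= b) >= p, where a ~ N(mu, L L^T):
        mu^T x + z_p * ||L^T x||_2 <= b,   z_p = Phi^{-1}(p).
    mu and x are lists; L is a covariance factor given as a list of rows."""
    z_p = NormalDist().inv_cdf(p)          # standard normal quantile
    mean_term = sum(m * xi for m, xi in zip(mu, x))
    n = len(x)
    # ||L^T x||_2: component j of L^T x is sum_i L[i][j] * x[i]
    norm = sqrt(sum(sum(L[i][j] * x[i] for i in range(n)) ** 2
                    for j in range(n)))
    return mean_term + z_p * norm <= b
```

For p ≥ 0.5 the quantile z_p is nonnegative, so the reformulated constraint is convex; this is the regime in which SOCP bounds for joint chance constraints are typically built.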
APA, Harvard, Vancouver, ISO, and other styles
13

Talina, Bernardo Júdice Franqueira Cotrim. "Quadratic programming versus second order cone programming in portfolio optimization." Master's thesis, 2016. http://hdl.handle.net/10362/16803.

Full text
Abstract:
Despite the extensive literature on finding new models to replace the Markowitz model or on increasing the accuracy of its input estimations, there are fewer studies about the impact of using different optimization algorithms on the results. This paper aims to add to this field by comparing the performance of two optimization algorithms in drawing the Markowitz efficient frontier and in real-world investment strategies. Second-order cone programming is faster and appears to be more efficient, but it is impossible to assert which algorithm is better: quadratic programming often shows superior performance in real investment strategies.
APA, Harvard, Vancouver, ISO, and other styles
14

Bornhorst, Nils. "Energy-Efficient Distributed Multicast Beamforming Using Iterative Second-Order Cone Programming." Phd thesis, 2015. https://tuprints.ulb.tu-darmstadt.de/4387/1/Dissertation_Bornhorst_Nils.pdf.

Full text
Abstract:
In multi-user (MU) downlink beamforming, a high spectral efficiency along with a low transmit power is achieved by separating multiple users in space rather than in time or frequency using spatially selective transmit beams. For streaming media applications, multi-group multicast (MGM) downlink beamforming is a promising approach to exploit the broadcasting property of the wireless medium to transmit the same information to a group of users. To limit inter-group interference, the individual streams intended for different multicast groups are spatially separated using MGM downlink beamforming. Spatially selective downlink beamforming requires the employment of an array of multiple antennas at the base station (BS). The hardware costs associated with the use of multiple antennas may be prohibitive in practice. A way to avoid the expensive employment of multiple antennas at the BS is to exploit user cooperation in wireless networks where multiple single-antenna users act as relays and take over the role of antenna elements in a virtual distributed antenna array. In both downlink beamforming and distributed relay beamforming, several approaches to design the transmit beams exist. In many scenarios, minimizing the transmit power has a higher priority than maximizing throughput. This is the case, e.g., in downlink beamforming, when operators aim at reducing their operational expenditures and their carbon footprint, or in distributed beamforming when users are only willing to cooperate when the battery consumption of their devices is kept at a minimum level. Then, the transmit beams are designed as the solution of a quality-of-service-constrained (QoS-constrained) power minimization problem. The goal of this optimization approach is to minimize the total transmitted power (at the BS or at the relays) while guaranteeing a predefined QoS at the destination users. 
In many scenarios of high practical relevance, such as MGM downlink beamforming and MU peer-to-peer (MUP2P) relay beamforming, the QoS-constrained power minimization problem is a nonconvex optimization problem which cannot be solved optimally in polynomial time. In state-of-the-art methods, the nonconvex problem is therefore approximated by a convex one which is easier to solve. As the number of users increases, however, these approximations become more and more inaccurate, leading to severe performance degradation and problem infeasibility. In this dissertation, we therefore propose a novel framework of iterative optimization schemes where a convex approximation of the original problem is successively improved. In each iteration of the proposed schemes, a convex approximation of the original problem is solved and then adapted to the solution obtained in this iteration. We prove that, by this means, the approximate solution is improved from iteration to iteration. In this way, an approximate solution which is much closer to the optimal solution of the original problem than in state-of-the-art methods is usually obtained. Moreover, the convex approximation may be infeasible in state-of-the-art methods even if the original problem is feasible. The proposed iterative scheme is therefore extended such that it can also be used to find a feasible initial approximation of the original problem. Simulation results reveal that in relay beamforming scenarios with a moderate to large number of destination users, the proposed scheme substantially outperforms state-of-the-art methods in terms of total transmitted relay power and problem feasibility. In MGM downlink beamforming scenarios with a large number of users, the proposed technique achieves the same performance as the state-of-the-art methods at a substantially reduced computational complexity.
Several extensions of this basic iterative convex approximation technique are proposed in this dissertation: The feasibility search is extended to an admission control scheme where a minimum number of destination users is determined for which service has to be denied in case of a shortage of spectral resources. Furthermore, a non-trivial extension of our iterative method is proposed to maintain its applicability in the case where only limited information about the state of the channels in the network is available. Finally, a complexity reduction is proposed to turn the proposed algorithms into computationally efficient ones which are suitable for real-time application in practice. Due to its wide-ranging applicability, we also consider filter-and-forward (FF) relay beamforming, where the relays are equipped with finite impulse response (FIR) filters and single-carrier (SC) transmission over frequency selective channels is performed. In this scenario, the signal received at the destination nodes is contaminated by inter-symbol interference (ISI). Using the FIR filters at the relays, the frequency selective channels are equalized to some extent such that the ISI at the destination nodes can be mitigated. In previous works, FF relay networks have been assumed to be perfectly synchronized. Perfect timing synchronization is, however, not possible in MU FF relay networks as there exists no single point of reference. This synchronization issue has barely been addressed in the literature. In this dissertation, we consider FF beamforming in asynchronous MUP2P and MGM relay networks and extend the FF scheme by incorporating individual adaptive decoding delays at the destinations. Using these decoding delays, the destination users can individually adapt to the different link delays in the asynchronous network and thereby mitigate residual ISI.
We again consider the QoS-constrained power minimization problem where now the filter coefficients at the relays, which are continuous variables, and the individual decoding delays at the destinations, which are discrete variables, are jointly optimized. This optimization problem is a nonconvex mixed-integer programming (MIP) problem which is generally hard to solve optimally. We propose (i) a customized branch-and-cut (BnC) method yielding a lower bound on the total transmitted relay power as a benchmark and (ii) a low-complexity deflation algorithm providing an approximate solution for implementation in practice. The deflation method is based on the proposed iterative convex approximation approach mentioned above. Our simulation results show that the approximate solution obtained using the proposed deflation scheme is close to optimal and outperforms the solutions obtained by state-of-the-art schemes which do not make use of adaptive decoding delays.
APA, Harvard, Vancouver, ISO, and other styles
15

Chen, I.-ching, and 陳怡靜. "Support vector regression with noise by using second-order cone programming." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/11385402094271423959.

Full text
Abstract:
Master's thesis
I-Shou University
Master's Program, Department of Electrical Engineering
2007 (ROC year 96)
In this thesis, a regression model is established using support vector machines, and a second-order cone programming (SOCP) algorithm is used to solve the support vector regression problem with noisy data. The support vector regression problem with noisy data can be transformed into an SOCP problem through mathematical manipulation. After the transformation, a merit function is applied to solve the SOCP problem. The SOCP algorithm can be applied to many engineering problems, such as filter design, robust control, and other optimization problems. Computational results are presented on both synthetic and real-world problems, in which we discuss the uncertainty and the mean square error values. The results show that increasing the uncertainty also degrades the accuracy of the prediction.
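Many SOCP and merit-function methods of the kind used in this thesis rely on the Euclidean projection onto a single second-order cone, which has a simple closed form. The sketch below is our own generic illustration, not the thesis implementation:

```python
from math import sqrt

def project_soc(t, x):
    """Euclidean projection of a point (t, x) onto the second-order cone
    K = {(t, x) : ||x||_2 <= t}, using the standard closed form."""
    nx = sqrt(sum(xi * xi for xi in x))
    if nx <= t:                       # already inside the cone
        return t, list(x)
    if nx <= -t:                      # inside the polar cone: project to 0
        return 0.0, [0.0] * len(x)
    alpha = (t + nx) / 2.0            # otherwise project onto the boundary
    return alpha, [alpha * xi / nx for xi in x]
```

For example, the point (0, (1, 0)) sits outside the cone and projects onto the boundary point (0.5, (0.5, 0)).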
APA, Harvard, Vancouver, ISO, and other styles
16

Toh, Kim Chuan, Zhi Cai, and Robert M. Freund. "Solving symmetric indefinite systems in an interior-point method for second order cone programming." 2002. http://hdl.handle.net/1721.1/4016.

Full text
Abstract:
Many optimization problems can be formulated as second order cone programming (SOCP) problems. Theoretical results show that applying an interior-point method (IPM) to SOCP yields global polynomial convergence. However, various stability issues arise in the implementation of an IPM. The standard normal-equation-based implementation of an IPM encounters stability problems in the computation of the search direction. In this paper, an augmented system approach is proposed to overcome these stability problems. Numerical experiments show that the new approach improves stability.
Singapore-MIT Alliance (SMA)
APA, Harvard, Vancouver, ISO, and other styles
17

Wu, Xiao-Ren, and 吳孝仁. "Neural Network Approach for Nonlinear Complementarity Problem and Quadratic Programming with Second-Order Cone Constraints." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/ea44k2.

Full text
Abstract:
Doctoral dissertation
National Taiwan Normal University
Department of Mathematics
2016 (ROC year 105)
This dissertation focuses on two types of optimization problems: the nonlinear complementarity problem (NCP for short) and quadratic programming with second-order cone constraints (SOCQP for short). Based on NCP-functions and SOC-complementarity functions, we propose suitable neural networks for each of them. We propose a new NCP-function that generalizes the natural residual function for the NCP: a discrete generalization of the natural residual function phinr, denoted phinrp. Besides showing that it is an NCP-function, we establish its twice differentiability and present a geometric view. In addition, we utilize the neural network approach to solve nonlinear complementarity problems and quadratic programs with second-order cone constraints, building neural networks based on different families of smooth NCP- or SOCCP-functions. Our goal is to study the stability of the equilibrium with respect to different neural network models. Asymptotic stability is established for most neural network models, and under suitable conditions we show that the equilibrium is exponentially stable. Finally, simulation results are reported to demonstrate the effectiveness of the proposed neural networks.
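The two classical NCP-functions underlying this line of work, the natural residual and the Fischer-Burmeister function, can be stated in a few lines; both vanish exactly at points satisfying the complementarity conditions a ≥ 0, b ≥ 0, ab = 0. The sketch below illustrates these standard functions only, not the dissertation's discrete generalization phinrp:

```python
from math import sqrt

def phi_nr(a, b):
    """Natural residual NCP-function: a - max(a - b, 0) = min(a, b).
    phi_nr(a, b) = 0 exactly when a >= 0, b >= 0 and a * b = 0."""
    return min(a, b)

def phi_fb(a, b):
    """Fischer-Burmeister NCP-function: sqrt(a^2 + b^2) - a - b.
    It has the same zero set as phi_nr but is smooth away from the origin."""
    return sqrt(a * a + b * b) - a - b
```

Squaring or smoothing such functions yields the merit functions on which NCP neural network models are typically built.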
APA, Harvard, Vancouver, ISO, and other styles
18

Han, Deren. "Global Optimization with Polynomials." 2003. http://hdl.handle.net/1721.1/3883.

Full text
Abstract:
The class of POP (Polynomial Optimization Problems) covers a wide range of optimization problems such as 0-1 integer linear and quadratic programs, nonconvex quadratic programs, and bilinear matrix inequalities. In this paper, we review some methods for solving the unconstrained case, minimizing a real-valued polynomial p(x) : Rn → R, as well as the constrained case, minimizing p(x) on a semialgebraic set K, i.e., a set defined by polynomial equalities and inequalities. We also summarize some questions that we are currently considering.
Singapore-MIT Alliance (SMA)
APA, Harvard, Vancouver, ISO, and other styles
19

Zhang, Xue. "Particle finite element method in geomechanics." Thesis, 2014. http://hdl.handle.net/1959.13/1055070.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
Despite the wide application of the finite element method (FEM) in geotechnical engineering, numerical analysis usually stops at the point when soil flow occurs and results in overall 'failure'. In many cases, the so-called failure only represents a specific time point of the deformation process, and the soil flow itself is of interest as well. A typical example is a landslide, in which the soil behaviour undergoes a transition from solid-like to liquid-like, and then back to solid-like. For such problems, a correct understanding of the triggering mechanism is important. However, the prediction of the sliding process as well as the estimation of the final deposit are also of great concern. Unfortunately, the traditional Lagrangian FEM cannot handle problems involving both solid-like and liquid-like behaviour. This is, to a large extent, due to the following two issues: (1) severe mesh distortion and boundary evolution as a result of large changes in geometry; (2) difficulties in solving the highly nonlinear and non-smooth discrete governing equations in an efficient and robust manner. In this thesis, a new continuum approach that addresses the above two issues explicitly is proposed for handling problems involving solid-liquid transitional behaviour in geomechanics. More specifically, the first issue is solved via the so-called Particle Finite Element Method (PFEM), originally proposed for the solution of fluid dynamics problems involving free surfaces. The key feature of the PFEM is that mesh nodes are treated as a cloud of particles which can move freely and even separate from the domain to which they originally belong. At each time step, the computational domain is detected based on those particles; then, the conventional FEM is used to solve the problem on the identified domain.
Regarding the second issue, mathematical programming formulations for the dynamic analysis of elastoplastic behaviour are developed with a wide utilisation of the Hellinger-Reissner variational theorem. The resulting formulations can be cast as a second-order cone program and solved via appropriate optimization methods. Unlike the conventional Newton-Raphson based FE scheme, the convergence of the solution of the scheme developed is guaranteed regardless of the quality of the initial solution. Formulations for both plane strain and axisymmetric problems are developed. Moreover, the contact between deformable bodies and rigid boundaries is also taken into account. A number of challenging problems in plane strain cases are solved successfully, which demonstrates the capabilities of the proposed approach. Furthermore, the approach is used to reproduce laboratory tests involving the collapse of axisymmetric granular columns. A quantitative comparison between the simulated results and the existing experimental data is conducted. Finally, an actual natural disaster event, the Yangbaodi landslide, is considered and analysed in some detail.
APA, Harvard, Vancouver, ISO, and other styles
20

(5930891), Benjamin M. Tackett. "REAL-TIME TRAJECTORY OPTIMIZATION BY SEQUENTIAL CONVEX PROGRAMMING FOR ONBOARD OPTIMAL CONTROL." Thesis, 2021.

Find full text
Abstract:
Optimization of atmospheric flight control has long been performed on the ground, prior to mission flight, due to the large computational requirements of solving nonlinear programming problems. Onboard trajectory optimization enables the creation of new reference trajectories and updates to guidance coefficients in real time. This thesis summarizes the methods involved in solving optimal control problems in real time using convexification and Sequential Convex Programming (SCP). The following investigation provides insight into the use of state-of-the-art SCP optimization architectures and the convexification of the hypersonic equations of motion [1]–[3] with different control schemes, for the purpose of enabling onboard trajectory optimization capabilities.
An architecture was constructed to solve convexified optimal control problems using direct population of sparse matrices in triplet form and an embedded conic solver, enabling rapid turnaround of optimized trajectories. The results show that convexified optimal control problems can be solved quickly and efficiently, which holds promise for autonomous trajectory design that can better overcome unexpected environments and mission parameter changes. It was observed that angle-of-attack control problems can be successfully convexified and solved using SCP methods. However, the use of multiple coupled controls is not guaranteed to be successful with this method when they act in the same plane as one another. The results of this thesis demonstrate that state-of-the-art SCP methods have the capacity to enable onboard trajectory optimization with both angle-of-attack and bank angle control schemes.
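The SCP idea summarized above can be shown on the smallest possible example: linearize the nonconvex part about the current iterate, solve the resulting convex subproblem inside a trust region, and repeat. The toy problem and function below are our own illustration of that loop, not the thesis architecture (which populates sparse matrices and calls an embedded conic solver):

```python
def scp_minimize(x0, radius=0.5, iters=50, tol=1e-12):
    """Sequential convex programming on a toy problem: minimize
    f(x) = (x^2 - 1)^2 by linearizing g(x) = x^2 about the iterate x_k
    and minimizing the convex surrogate (g(x_k) + g'(x_k)(x - x_k) - 1)^2
    inside the trust region |x - x_k| <= radius."""
    x = x0
    for _ in range(iters):
        if x == 0.0:            # degenerate linearization: nudge off zero
            x = radius
            continue
        # The surrogate is minimized where the linearized residual vanishes.
        step = (1.0 - x * x) / (2.0 * x)
        step = max(-radius, min(radius, step))   # enforce the trust region
        if abs(step) < tol:
            break
        x += step
    return x
```

For this scalar toy problem the convexified step coincides with a Newton step on x² − 1 = 0, so once the trust region becomes inactive the iteration converges rapidly to a minimizer at ±1.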

APA, Harvard, Vancouver, ISO, and other styles
21

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." Phd thesis, 2008. http://hdl.handle.net/1885/49364.

Full text
Abstract:
The estimation of the motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, several issues have been raised about general camera models and multi-camera systems. The use of many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.
In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. ...
APA, Harvard, Vancouver, ISO, and other styles
22

Ismailova, Darya. "Localization algorithms for passive sensor networks." Thesis, 2016. http://hdl.handle.net/1828/7747.

Full text
Abstract:
Locating a radiating source based on range or range-difference measurements obtained from a network of passive sensors has been a subject of research over the past two decades, due to the problem's importance in applications in wireless communications, surveillance, navigation, geosciences, and several other fields. In this thesis, we develop new solution methods for the problem of localizing a single radiating source based on range and range-difference measurements. Iterative re-weighting algorithms are developed for both range-based and range-difference-based least squares localization. We then propose a penalty convex-concave procedure for finding an approximate solution to nonlinear least squares problems related to the range measurements. Finally, sequential convex relaxation procedures are proposed to obtain the nonlinear least squares estimate of the source coordinates. Localization in wireless sensor networks, where RF signals are used to derive the ranging measurements, is the primary application area of this work. However, the proposed solution methods are general and can be applied to range and range-difference measurements derived from other types of signals.
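The range-based least squares criterion at the heart of this problem, minimizing the sum of squared range residuals (‖x − aᵢ‖ − rᵢ)², can be attacked with a simple fixed-point iteration of Weiszfeld/SMACOF type. The sketch below is a generic 2-D illustration under exact range measurements, not one of the thesis's re-weighting or convex-concave algorithms; the function name and centroid initialization are our own choices.

```python
from math import hypot

def locate_source(anchors, ranges, iters=200):
    """Fixed-point (Weiszfeld/SMACOF-style) iteration for the range-based
    least squares source estimate: minimize sum_i (||x - a_i|| - r_i)^2
    over x in R^2, starting from the centroid of the anchors."""
    n = len(anchors)
    x = sum(a[0] for a in anchors) / n
    y = sum(a[1] for a in anchors) / n
    for _ in range(iters):
        sx = sy = 0.0
        for (ax, ay), r in zip(anchors, ranges):
            dx, dy = x - ax, y - ay
            d = hypot(dx, dy) or 1e-12      # guard against zero distance
            # target point at distance r from anchor i, toward the estimate
            sx += ax + r * dx / d
            sy += ay + r * dy / d
        x, y = sx / n, sy / n
    return x, y
```

Each update averages the points lying at the measured distance from every anchor along the current direction; with noise-free ranges the true source is a fixed point of this map, while noisy measurements lead to a local least squares solution.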
APA, Harvard, Vancouver, ISO, and other styles
23

Jagarlapudi, Saketha Nath. "Learning Algorithms Using Chance-Constrained Programs." Thesis, 2008. http://hdl.handle.net/2005/733.

Full text
Abstract:
This thesis explores Chance-Constrained Programming (CCP) in the context of learning. It is shown that chance-constraint approaches lead to improved algorithms for three important learning problems: classification with specified error rates, large dataset classification, and Ordinal Regression (OR). Using moments of the training data, the CCPs are posed as Second Order Cone Programs (SOCPs). Novel iterative algorithms for solving the resulting SOCPs are also derived. Borrowing ideas from robust optimization theory, the proposed formulations are made robust to moment estimation errors. A maximum margin classifier with specified false positive and false negative rates is derived. The key idea is to employ chance-constraints for each class which imply that the actual misclassification rates do not exceed the specified rates. The formulation is applied to the case of biased classification. The problems of large dataset classification and ordinal regression are addressed by deriving formulations which employ chance-constraints for clusters in the training data rather than for each data point. Since the number of clusters can be substantially smaller than the number of data points, the resulting formulation size and number of inequalities are very small; hence the formulations scale well to large datasets. The scalable classification and OR formulations are extended to feature spaces, and the kernelized duals turn out to be instances of SOCPs with a single cone constraint. Exploiting this special structure, fast iterative solvers which outperform generic SOCP solvers are proposed. Compared to state-of-the-art learners, the proposed algorithms achieve a speed-up as high as 10000 times when the specialized SOCP solvers are employed. The proposed formulations involve second order moments of the data and hence are susceptible to moment estimation errors. A generic way of making the formulations robust to such estimation errors is illustrated.
Two novel confidence sets for moments are derived, and it is shown that when either confidence set is employed, the robust formulations also yield SOCPs.
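Moment-based chance constraints of the kind used in this thesis typically reduce, via the multivariate Chebyshev bound, to a single second-order cone constraint on the classifier parameters. The feasibility check below is our own sketch of that standard reduction; the function name and the factor-based covariance input are assumptions, not the thesis formulation.

```python
from math import sqrt

def chance_margin_ok(w, b, mu, L, eta):
    """Feasibility check for the Chebyshev-based chance constraint
    P(w^T x + b >= 0) >= eta over all distributions with mean mu and
    covariance L L^T, which reduces to the second-order cone constraint
        w^T mu + b >= kappa * ||L^T w||_2,  kappa = sqrt(eta / (1 - eta))."""
    kappa = sqrt(eta / (1.0 - eta))
    margin = sum(wi * mi for wi, mi in zip(w, mu)) + b
    n = len(w)
    norm = sqrt(sum(sum(L[i][j] * w[i] for i in range(n)) ** 2
                    for j in range(n)))
    return margin >= kappa * norm
```

Raising the required probability eta inflates kappa, so the same mean margin can become infeasible, which is how specified error rates translate into cone constraints per class (or per cluster, in the scalable formulations).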
APA, Harvard, Vancouver, ISO, and other styles
