Dissertations / Theses on the topic 'Optimization algorithms'


Consult the top 50 dissertations / theses for your research on the topic 'Optimization algorithms.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Astete Morales, Sandra. "Contributions to Convergence Analysis of Noisy Optimization Algorithms." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS327/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents contributions to the analysis of algorithms for the optimization of noisy functions. Convergence rates, in terms of simple regret and cumulative regret, are analyzed for line-search algorithms as well as for random search algorithms. We prove that Hessian-based algorithms can reach the same results as some optimal algorithms in the literature when their parameters are tuned correctly. We also analyze the convergence order of evolution strategies when solving noisy functions, deducing log-log convergence, and we prove a lower bound on the convergence rate of evolution strategies. We extend the work on reevaluation mechanisms by applying them to the discrete setting. Finally, we analyze the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results when different optimization methods are evaluated.
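The reevaluation mechanism the thesis extends to the discrete case can be illustrated with a small sketch: averaging r repeated noisy evaluations of the same candidate shrinks the noise variance by a factor of r, making comparisons between candidates more trustworthy. Everything below (the quadratic objective, the noise level, the evaluation budget) is an invented toy, not the thesis's setup.

```python
import random

def noisy_f(x, rng):
    """Toy noisy objective: true value x^2 plus Gaussian evaluation noise."""
    return x * x + rng.gauss(0.0, 1.0)

def random_search(r, budget=2000, seed=0):
    """Uniform random search that reevaluates each candidate r times and
    averages, cutting the noise variance on each comparison by a factor r."""
    rng = random.Random(seed)
    best_x, best_val = 0.0, float("inf")
    for _ in range(budget // r):
        x = rng.uniform(-2.0, 2.0)
        val = sum(noisy_f(x, rng) for _ in range(r)) / r
        if val < best_val:
            best_x, best_val = x, val
    return best_x

def simple_regret(x_hat):
    """Simple regret: true objective at the returned point minus the optimum (0)."""
    return x_hat * x_hat

x1 = random_search(r=1)    # no reevaluation: noise often fools the ranking
x16 = random_search(r=16)  # 16-fold reevaluation: fewer but more reliable comparisons
```

With the same total budget, the r = 16 run tries fewer candidates but ranks them far more reliably; which trade-off wins depends on the problem, which is exactly the kind of question the convergence-rate analysis addresses.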
2

Reimann, Axel. "Evolutionary algorithms and optimization." Doctoral thesis, [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=969093497.

3

Parpas, Panayiotis. "Algorithms for stochastic optimization." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434980.

4

Johnson, Jared. "Algorithms for Rendering Optimization." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5329.

Abstract:
This dissertation explores algorithms for rendering optimization realizable within a modern, complex rendering engine. The first part contains optimized rendering algorithms for ray tracing. Ray tracing algorithms typically provide properties of simplicity and robustness that are highly desirable in computer graphics. We offer several novel contributions to the problem of interactive ray tracing of complex lighting environments. We focus on the problem of maintaining interactivity as both geometric and lighting complexity grows without affecting the simplicity or robustness of ray tracing. First, we present a new algorithm called occlusion caching for accelerating the calculation of direct lighting from many light sources. We cache light visibility information sparsely across a scene. When rendering direct lighting for all pixels in a frame, we combine cached lighting information to determine whether or not shadow rays are needed. Since light visibility and scene location are highly correlated, our approach precludes the need for most shadow rays. Second, we present improvements to the irradiance caching algorithm. Here we demonstrate a new elliptical cache point spacing heuristic that reduces the number of cache points required by taking into account the direction of irradiance gradients. We also accelerate irradiance caching by efficiently and intuitively coupling it with occlusion caching. In the second part of this dissertation, we present optimizations to rendering algorithms for participating media. Specifically, we explore the implementation and use of photon beams as an efficient, intuitive artistic primitive. We detail our implementation of the photon beams algorithm in PhotoRealistic RenderMan (PRMan). We show how our implementation maintains the benefits of the industry standard Reyes rendering pipeline, with proper motion blur and depth of field. We detail an automatic photon beam generation algorithm, utilizing PRMan shadow maps.
We accelerate the rendering of camera-facing photon beams by utilizing Gaussian quadrature for path integrals in place of ray marching. Our optimized implementation allows for incredible versatility and intuitiveness in artistic control of volumetric lighting effects. Finally, we demonstrate the usefulness of photon beams as artistic primitives by detailing their use in a feature-length animated film.
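The substitution of Gaussian quadrature for ray marching can be illustrated on a one-dimensional beam attenuation integral with a known closed form. This is a hedged toy sketch, not PRMan code: the integrand and interval are made up, and a standard 5-point Gauss-Legendre rule stands in for whatever rule the dissertation actually uses.

```python
import math

def ray_march(f, a, b, steps):
    """Midpoint ray marching: one f evaluation per step along the ray."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# Standard 5-point Gauss-Legendre nodes and weights on [-1, 1].
GL5 = [(0.0, 0.5688888888888889),
       (0.5384693101056831, 0.4786286704993665),
       (-0.5384693101056831, 0.4786286704993665),
       (0.9061798459386640, 0.2369268850561891),
       (-0.9061798459386640, 0.2369268850561891)]

def gauss_quad(f, a, b):
    """Gaussian quadrature: only 5 evaluations of f, exact for
    polynomials up to degree 9 after mapping [a, b] to [-1, 1]."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in GL5)

# Toy beam attenuation integral with a closed form for checking:
# integral of exp(-t) over [0, 4] equals 1 - exp(-4).
f = lambda t: math.exp(-t)
exact = 1.0 - math.exp(-4.0)
```

On this integrand, the 5-point rule is already far more accurate than midpoint ray marching using the same 5 evaluations, which is where the speedup comes from when each sample of the medium is expensive.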
Ph.D. (Doctorate), Computer Science, Engineering and Computer Science
5

Cesari, Tommaso Renato. "Algorithms, Learning, and Optimization." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/699354.

Abstract:
This thesis covers some algorithmic aspects of online machine learning and optimization. In Chapter 1 we design algorithms with state-of-the-art regret guarantees for the problem of dynamic pricing. In Chapter 2 we move on to an asynchronous online learning setting in which only some of the agents in the network are active at each time step. We show that when information is shared among neighbors, knowledge about the graph structure might have a significantly different impact on learning rates depending on how agents are activated. In Chapter 3 we investigate the online problem of multivariate non-concave maximization under weak assumptions on the regularity of the objective function. In Chapter 4 we introduce a new performance measure and design an efficient algorithm to learn optimal policies in repeated A/B testing.
6

Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.

Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
7

Rafique, Abid. "Communication optimization in iterative numerical algorithms : an algorithm-architecture interaction." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/17837.

Abstract:
Trading communication for redundant computation can increase the silicon efficiency of common hardware accelerators like FPGAs and GPUs when accelerating sparse iterative numerical algorithms. While iterative numerical algorithms are extensively used to solve large-scale sparse linear systems of equations and eigenvalue problems, they are challenging to accelerate because they spend most of their time in communication-bound operations, like the sparse matrix-vector multiply (SpMV) and vector-vector operations. Communication is used in a general sense to mean moving the matrix and the vectors within the custom memory hierarchy of the FPGA and between processors in the GPU; for technological reasons its cost is much higher than that of the actual computation. Additionally, the dependency between the operations hinders overlapping computation with communication. As a result, although GPUs and FPGAs offer large peak floating-point performance, their sustained performance is nonetheless very low due to high communication costs, leading to poor silicon efficiency. In this thesis, we provide a systematic study of how to minimize the communication cost and thereby increase silicon efficiency. For small-to-medium datasets, we exploit the large on-chip memory of the FPGA to load the matrix only once and then use explicit blocking to perform all iterations at the communication cost of a single iteration. For large sparse datasets, it is now a well-known idea to unroll k iterations using a matrix powers kernel, which replaces SpMV, together with two additional kernels, TSQR and BGS, which replace the vector-vector operations. While this approach can provide a Θ(k) reduction in the communication cost, the extent of the unrolling depends on the growth in redundant computation, the underlying architecture, and the memory model. In this work, we show how to select the unroll factor k in an architecture-agnostic manner to provide a communication-computation tradeoff on the FPGA and the GPU.
To this end, we exploit the inverse memory hierarchy of the GPU to map the matrix powers kernel, and we present a new algorithm for FPGAs that matches their strengths, reducing redundant computation to allow large k and hence higher speedups. We provide predictive models of the matrix powers kernel to understand the communication-computation tradeoff on the GPU and the FPGA. We highlight the extremely low efficiency of the GPU in TSQR due to off-chip sharing of data across different building blocks, and we show how the on-chip memory of the FPGA can be used to eliminate this off-chip access and hence achieve better efficiency. Finally, we demonstrate how to compose all the kernels using a unified architecture and exploit the on-chip memory of the FPGA to share data across these kernels. Using the Lanczos iteration as a case study for solving the symmetric extremal eigenvalue problem, we show that the efficiency of FPGAs can be increased from 1.8% to 38% for small-to-medium-scale dense matrices, and up to 7.8% for large-scale structured banded matrices. We show that although the GPU exhibits better efficiency for certain kernels, like the matrix powers kernel, its overall efficiency is even lower due to the increase in communication cost when sharing data across different kernels through off-chip memory. As the Lanczos iteration is at the heart of all modern iterative numerical algorithms, our results are applicable to a broad class of iterative numerical algorithms.
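The matrix powers kernel mentioned above can be sketched in a few lines: instead of k separate SpMV calls that each re-read the matrix, the kernel reads A once and emits the whole block [x, Ax, A²x, ..., Aᵏx]. The dense 2x2 example below is purely illustrative; real implementations block a sparse matrix across the memory hierarchy, which is where the Θ(k) communication saving comes from.

```python
def matvec(A, x):
    """Dense matrix-vector product (stand-in for a blocked SpMV)."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def matrix_powers(A, x, k):
    """Matrix powers kernel: compute [x, Ax, A^2 x, ..., A^k x].
    Communication-avoiding solvers produce all k+1 vectors per read
    of A, instead of k separate SpMV calls that each re-fetch A."""
    basis = [x]
    for _ in range(k):
        basis.append(matvec(A, basis[-1]))
    return basis

A = [[2.0, 1.0],
     [0.0, 3.0]]
V = matrix_powers(A, [1.0, 1.0], 3)  # x, Ax, A^2 x, A^3 x
```

The k+1 vectors returned here are exactly the raw Krylov block that TSQR then orthogonalizes in the communication-avoiding formulation.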
8

Dost, Banu. "Optimization algorithms for biological data." Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3397170.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed March 23, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 149-159).
9

Xiong, Xiaoping. "Stochastic optimization algorithms and convergence /." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2360.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Business and Management. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
10

Quttineh, Nils-Hassan. "Algorithms for Costly Global Optimization." Licentiate thesis, Mälardalen University, School of Education, Culture and Communication, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-5970.

Abstract:

There exist many applications with so-called costly problems, which means that the objective function you want to maximize or minimize cannot be described using standard functions and expressions. Instead one considers these objective functions as a "black box" into which the parameter values are sent and from which a function value is returned. This implies in particular that no derivative information is available.

The reason for describing these problems as expensive is that it may take a long time to calculate a single function value. The black box could, for example, solve a large system of differential equations or carry out a heavy simulation, which can take anywhere from several minutes to several hours.

These very special conditions therefore require customized algorithms. Common optimization algorithms are based on calculating function values every now and then, which usually can be done instantly. But with an expensive problem, it may take several hours to compute a single function value. Our main objective is therefore to create algorithms that exploit all available information to the limit before a new function value is calculated. In other words, we want to find the optimal solution using as few function evaluations as possible.

A good example of a real-life application comes from the automotive industry, where the development of new engines utilizes advanced models governed by a dozen key parameters. The goal is to optimize the model by changing the parameters in such a way that the engine becomes as energy efficient as possible, while still meeting all sorts of demands on strength and external constraints.
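The "exploit all available information before a new function value is calculated" idea can be sketched with a minimal surrogate loop: fit a cheap quadratic model through the best points sampled so far and spend the next costly evaluation at the model's minimizer. This is a one-dimensional illustration under invented assumptions (the objective, bounds, and budget are made up), not the algorithms of the thesis.

```python
def quadratic_min(p1, p2, p3):
    """x-coordinate of the vertex of the parabola through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1) + x1 * x1 * (y2 - y3)) / denom
    return -b / (2.0 * a)

def surrogate_search(f, x_lo, x_hi, n_evals=6):
    """Spend every costly evaluation of f at the minimizer of a quadratic
    surrogate fitted to the three best points seen so far."""
    pts = sorted((f(x), x) for x in (x_lo, 0.5 * (x_lo + x_hi), x_hi))
    for _ in range(n_evals - 3):
        (y1, x1), (y2, x2), (y3, x3) = pts[0], pts[1], pts[2]
        x_new = quadratic_min((x1, y1), (x2, y2), (x3, y3))
        x_new = min(max(x_new, x_lo), x_hi)            # stay in the box
        if any(abs(x_new - x) < 1e-9 for _, x in pts):
            break                                      # surrogate has converged
        pts.append((f(x_new), x_new))
        pts.sort()
    return pts[0][1]

# One costly "simulation": a smooth function with its minimum at x = 1.3.
best = surrogate_search(lambda x: (x - 1.3) ** 2 + 2.0, -2.0, 4.0)
```

On this convex toy the surrogate pinpoints the minimizer after a single model fit; on multimodal or noisy black boxes one needs the safeguards (space-filling designs, model validation) that full surrogate methods such as RBF or kriging approaches provide.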

11

Zhou, Yi. "Optimization Algorithms for Clique Problems." Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0013/document.

Abstract:
This thesis considers four clique problems: the maximum vertex weight clique problem (MVWCP), the maximum s-plex problem (MsPlex), the maximum balanced biclique problem (MBBP), and the clique partitioning problem (CPP). The first three are generalizations or relaxations of the classic maximum clique problem (MCP), while the last is a clique grouping (covering) problem. These combinatorial problems have numerous practical applications. Since they all belong to the NP-hard family, it is computationally difficult to solve them in the general case. For this reason, this thesis is devoted to developing effective algorithms to tackle these challenging problems. Specifically, we propose two restart tabu search algorithms based on a generalized PUSH operator for MVWCP, a frequency-driven local search algorithm for MsPlex, a graph-reduction-based tabu search as well as effective exact branch-and-bound algorithms for MBBP, and lastly a three-phase local search algorithm for CPP. In addition to the design of efficient move operators for local search algorithms, we also integrate components like graph reduction or upper bound propagation in order to deal with very large real-life networks. Experimental tests on a wide range of instances show that our algorithms compete favorably with the main state-of-the-art algorithms.
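The flavor of the tabu search components described above can be conveyed by a toy maximum-clique search: add any allowed vertex, drop one when stuck, and forbid recently dropped vertices for a few iterations so the search does not immediately undo its own moves. The graph, tenure, and move rules below are invented for illustration and are much simpler than the thesis's operators.

```python
import random

def tabu_max_clique(adj, iters=500, tabu_tenure=7, seed=0):
    """Toy tabu search for maximum clique: greedily add non-tabu vertices
    adjacent to every clique member; when none exists, drop a vertex and
    make it tabu for tabu_tenure iterations."""
    rng = random.Random(seed)
    n = len(adj)
    clique, best = set(), set()
    tabu = {}  # vertex -> iteration until which it is tabu
    for it in range(iters):
        cand = [v for v in range(n)
                if v not in clique
                and all(u in adj[v] for u in clique)
                and tabu.get(v, -1) < it]
        if cand:
            clique.add(rng.choice(cand))
        elif clique:
            v = rng.choice(sorted(clique))
            clique.remove(v)
            tabu[v] = it + tabu_tenure
        if len(clique) > len(best):
            best = set(clique)
    return best

# Invented graph: a triangle {0, 1, 2} plus the isolated edge {3, 4}.
adj = [{1, 2}, {0, 2}, {0, 1}, {4}, {3}]
best = tabu_max_clique(adj)
```

The tabu list is what lets the search escape the {3, 4} edge: after both of its vertices are dropped and forbidden, the only admissible moves lead into the triangle.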
12

Liu, Qiuliang. "Image mosaic algorithms and optimization." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 58 p, 2007. http://proquest.umi.com/pqdweb?did=1400965051&sid=9&Fmt=2&clientId=8331&RQT=309&VName=PQD.

13

Lee, Kwok-shing (李國誠). "Convergences of stochastic optimization algorithms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B3025632X.

14

Roberts, Christopher. "Genetic algorithms for cluster optimization." Thesis, University of Birmingham, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368792.

15

Caramanis, Constantine (Constantine Michael) 1977. "Adaptable optimization : theory and algorithms." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38301.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 189-200).
Optimization under uncertainty is a central ingredient for analyzing and designing systems with incomplete information. This thesis addresses uncertainty in optimization, in a dynamic framework where information is revealed sequentially, and future decisions are adaptable, i.e., they depend functionally on the information revealed in the past. Such problems arise in applications where actions are repeated over a time horizon (e.g., portfolio management, or dynamic scheduling problems), or that have multiple planning stages (e.g., network design). The first part of the thesis focuses on the robust optimization approach to systems with uncertainty. Unlike the probability-driven stochastic programming approach, robust optimization is built on deterministic set-based formulations of uncertainty. This thesis seeks to place Robust Optimization within a dynamic framework. In particular, we introduce the notion of finite adaptability. Using geometric results, we characterize the benefits of adaptability, and use these theoretical results to design efficient algorithms for finding near-optimal protocols. Among the novel contributions of the work are the capacity to accommodate discrete variables, and the development of a hierarchy of adaptability.
The second part of the thesis takes a data-driven view of uncertainty. The central questions are (a) how can we construct adaptability in multi-stage optimization problems given only data, and (b) what feasibility guarantees can we provide. Multi-stage stochastic optimization typically requires exponentially many data points. Robust optimization, on the other hand, has a very limited ability to address multi-stage optimization in an adaptable manner. We present a hybrid sample-based robust optimization methodology for constructing adaptability in multi-stage optimization problems that is both tractable and flexible, offering a hierarchy of adaptability. We prove polynomial upper bounds on sample complexity. We further extend our results to multi-stage problems with integer variables in the future stages. We illustrate the ideas above on several problems in network design and portfolio optimization. The last part of the thesis focuses on an application of adaptability, in particular the ideas of finite adaptability from the first part of the thesis, to the problem of air traffic control. The main problem is to sequentially schedule the departures, routes, ground-holding, and air-holding for every flight over the national air space (NAS).

The schedule seeks to minimize the aggregate delay incurred while satisfying capacity constraints that specify the maximum number of flights that can take off or land at a particular airport, or fly over the same sector of the NAS, at any given time. These capacities are impacted by the weather conditions. Since we receive an initial weather forecast and then updates throughout the day, we naturally have a multistage optimization problem with sequentially revealed uncertainty. We show that finite adaptability is natural, since the scheduling problem is inherently finite and the uncertainty set is low-dimensional. We illustrate both the applicability of finite adaptability and its effectiveness through several examples.
by Constantine Caramanis.
Ph.D.
16

Hu, Liujia. "Convergent algorithms in simulation optimization." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54883.

Abstract:
It is frequently the case that deterministic optimization models could be made more practical by explicitly incorporating uncertainty. The resulting stochastic optimization problems are in general more difficult to solve than their deterministic counterparts, because the objective function cannot be evaluated exactly and/or because there is no explicit relation between the objective function and the corresponding decision variables. This thesis develops random search algorithms for solving optimization problems with continuous decision variables when the objective function values can be estimated with some noise via simulation. Our algorithms will maintain a set of sampled solutions, and use simulation results at these solutions to guide the search for better solutions. In the first part of the thesis, we propose an Adaptive Search with Resampling and Discarding (ASRD) approach for solving continuous stochastic optimization problems. Our ASRD approach is a framework for designing provably convergent algorithms that are adaptive both in seeking new solutions and in keeping or discarding already sampled solutions. The framework is an improvement over the Adaptive Search with Resampling (ASR) method of Andradottir and Prudius in that it spends less effort on inferior solutions (the ASR method does not discard already sampled solutions). We present conditions under which the ASRD method is convergent almost surely and carry out numerical studies aimed at comparing the algorithms. Moreover, we show that whether it is beneficial to resample or not depends on the problem, and analyze when resampling is desirable. Our numerical results show that the ASRD approach makes substantial improvements on ASR, especially for difficult problems with large numbers of local optima. In traditional simulation optimization problems, noise is only involved in the objective functions. However, many real world problems involve stochastic constraints. 
Such problems are more difficult to solve because of the added uncertainty about feasibility. The second part of the thesis presents an Adaptive Search with Discarding and Penalization (ASDP) method for solving continuous simulation optimization problems involving stochastic constraints. Rather than addressing feasibility separately, ASDP utilizes the penalty function method from deterministic optimization to convert the original problem into a series of simulation optimization problems without stochastic constraints. We present conditions under which the ASDP algorithm converges almost surely from inside the feasible region, and under which it converges to the optimal solution but without feasibility guarantee. We also conduct numerical studies aimed at assessing the efficiency and tradeoff under the two different convergence modes. Finally, in the third part of the thesis, we propose a random search method named Gaussian Search with Resampling and Discarding (GSRD) for solving simulation optimization problems with continuous decision spaces. The method combines the ASRD framework with a sampling distribution based on a Gaussian process that not only utilizes the current best estimate of the optimal solution but also learns from past sampled solutions and their objective function observations. We prove that our GSRD algorithm converges almost surely, and carry out numerical studies aimed at studying the effects of utilizing the Gaussian sampling strategy. Our numerical results show that the GSRD framework performs well when the underlying objective function is multi-modal. However, it takes much longer to sample solutions, especially in higher dimensions.
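The penalty-function conversion used by ASDP can be sketched as follows: a stochastic constraint E[g(x)] ≤ 0 is folded into the objective as a penalty on the sample-average constraint violation, after which an unconstrained search applies. The toy problem, grid search, penalty weight, and sample sizes below are invented stand-ins for the adaptive random search of the thesis.

```python
import random

def penalized_objective(f_noisy, g_noisy, x, penalty, n_samples, rng):
    """Sample-average estimate of f(x) + penalty * max(0, g_bar)^2, the
    penalty conversion that folds E[g(x)] <= 0 into the objective."""
    f_bar = sum(f_noisy(x, rng) for _ in range(n_samples)) / n_samples
    g_bar = sum(g_noisy(x, rng) for _ in range(n_samples)) / n_samples
    return f_bar + penalty * max(0.0, g_bar) ** 2

# Toy problem: minimize E[x^2 + noise] subject to E[1 - x + noise] <= 0,
# i.e. x >= 1; the unconstrained minimizer x = 0 is infeasible.
def f_noisy(x, rng):
    return x * x + rng.gauss(0.0, 0.1)

def g_noisy(x, rng):
    return 1.0 - x + rng.gauss(0.0, 0.1)

def solve(penalty, seed=1):
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for i in range(201):              # crude grid search standing in for
        x = i * 0.01                  # the adaptive random search of the thesis
        val = penalized_objective(f_noisy, g_noisy, x, penalty, 200, rng)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

x_star = solve(100.0)
```

Note that with a finite penalty weight the minimizer sits slightly on the infeasible side of the boundary (near x ≈ 0.99 rather than x ≥ 1), echoing the thesis's distinction between convergence from inside the feasible region and convergence without a feasibility guarantee.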
17

Collether, John. "Portfolio optimization by heuristic algorithms." Thesis, University of Essex, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.635985.

Abstract:
Portfolio optimization is a major activity in business and is intensively studied by researchers. Conventional portfolio optimization research has made simplifying assumptions: for example, that there is no constraint on how many assets one holds (a cardinality constraint) and no minimum or maximum holding sizes (holding size constraints). Once these assumptions are relaxed, conventional methods become inapplicable and new methods are demanded. Threshold Accepting is an established algorithm for the extended portfolio optimization problem.
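Threshold Accepting itself is simple to state: accept any neighboring solution that is better, or worse by less than a threshold, and let the thresholds shrink to zero. Below is a minimal sketch on an invented cardinality-constrained toy (hold exactly 3 of 8 assets with a made-up per-asset risk score); real portfolio objectives use return and covariance data.

```python
import random

def threshold_accepting(cost, neighbor, x0, thresholds, steps_per_round, seed=0):
    """Threshold Accepting: accept any move whose deterioration is below
    the current threshold; thresholds shrink to zero over the rounds."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    for tau in thresholds:
        for _ in range(steps_per_round):
            y = neighbor(x, rng)
            cy = cost(y)
            if cy - c < tau:            # better, or only slightly worse
                x, c = y, cy
                if c < best_c:
                    best_x, best_c = x, c
    return best_x, best_c

# Invented instance: pick exactly 3 of 8 assets, minimizing total risk score.
risk = [5.0, 1.0, 4.0, 2.0, 9.0, 3.0, 7.0, 1.5]

def cost(sel):
    return sum(risk[i] for i in sel)

def neighbor(sel, rng):
    """Swap one held asset for one not held, preserving cardinality."""
    out = rng.choice(sorted(sel))
    inn = rng.choice([i for i in range(len(risk)) if i not in sel])
    return (sel - {out}) | {inn}

best_sel, best_cost = threshold_accepting(
    cost, neighbor, frozenset({0, 4, 6}),
    thresholds=[2.0, 1.0, 0.5, 0.0], steps_per_round=50)
```

Because the swap move preserves cardinality, the constraint is enforced by construction, and since the final threshold is zero the last round is pure descent; tracking the best-so-far pair guarantees the result is never worse than the starting portfolio.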
18

Arvidsson, Elisabeth. "Optimization algorithms for Quantum Annealing." Thesis, KTH, Fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279447.

19

Viitanen, Sami. "Some new global optimization algorithms /." Åbo : Åbo Akad. Förl, 1997. http://www.gbv.de/dms/goettingen/239457927.pdf.

20

Naji Azimi, Zahra <1982>. "Algorithms for Combinatorial Optimization Problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2695/1/Naji_Azimi_Zahra_Tesi.pdf.

21

Naji Azimi, Zahra <1982>. "Algorithms for Combinatorial Optimization Problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2695/.

22

Harris, Steven C. "A genetic algorithm for robust simulation optimization." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178645751.

23

Samek, Michal. "Optimization of Aircraft Tracker Parameters." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234937.

Abstract:
This master's thesis deals with the optimization of an aircraft tracking system used for air traffic control. It describes a methodology for evaluating the accuracy of the tracking system and surveys relevant object tracking algorithms. Three approaches to the problem are then proposed. The first attempts to identify the parameters of the filtering algorithms with the Expectation-Maximisation algorithm, an implementation of the maximum likelihood method. The second approach is based on direct estimates of the parameters of a normal distribution from the measured and reference data. Finally, the use of the Evolution Strategies optimization algorithm is investigated. The final evaluation shows that the third approach is the most suitable for the given problem.
24

Aggarwal, Varun. "Analog circuit optimization using evolutionary algorithms and convex optimization." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40525.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 83-88).
In this thesis, we analyze state-of-the-art techniques for analog circuit sizing and compare them on various metrics. We ascertain that a methodology which improves the accuracy of sizing without increasing the run time or the designer effort is a contribution. We argue that the accuracy of geometric programming can be improved without adversely influencing the run time or increasing the designer's effort. This is facilitated by decomposing geometric programming modeling into two steps, which decouples the accuracy of the models from the run time of geometric programming. We design a new algorithm for producing accurate posynomial models for MOS transistor parameters, which is the first step of the decomposition. The new algorithm can generate posynomial models with a variable number of terms and real-valued exponents. The algorithm is a hybrid of a genetic algorithm and a convex optimization technique. We study the performance of the algorithm on artificially created benchmark problems. We show that the accuracy of posynomial models of MOS parameters is improved by a considerable amount by using the new algorithm. The new posynomial modeling algorithm can be used in any application of geometric programming and is not limited to MOS parameter modeling. In the last chapter, we discuss various ideas to improve the state-of-the-art in circuit sizing.
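The modeling step can be illustrated in its simplest special case: a one-term monomial y = c·xᵃ becomes linear after taking logs, so ordinary least squares recovers c and a exactly from noiseless data. The thesis's hybrid genetic/convex algorithm targets the much harder multi-term, multi-variable posynomial case; the data below are synthetic.

```python
import math

def fit_monomial(xs, ys):
    """Least-squares fit of y ~ c * x**a (the single-term special case of
    a posynomial) by linear regression in log-log space, where the model
    log y = log c + a * log x is linear in (log c, a)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    c = math.exp(my - a * mx)
    return c, a

# Synthetic monomial data: y = 3 * x^1.5, so the fit should recover c = 3, a = 1.5.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [3.0 * x ** 1.5 for x in xs]
c, a = fit_monomial(xs, ys)
```

Real-valued exponents fall out of the regression for free, which is one reason log-space fitting is a natural building block for posynomial modeling.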
by Varun Aggarwal.
S.M.
25

Pelikan, Martin. "Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms /." Berlin [u.a.] : Springer, 2005. http://www.loc.gov/catdir/toc/fy053/2004116659.html.

26

Corbineau, Marie-Caroline. "Proximal and interior point optimization strategies in image recovery." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC085/document.

Abstract:
Inverse problems in image processing can be solved by diverse techniques, such as classical variational methods, recent deep learning approaches, or Bayesian strategies. Although relying on different principles, these methods all require efficient optimization algorithms. The proximity operator appears as a crucial tool in many iterative solvers for nonsmooth optimization problems. In this thesis, we illustrate the versatility of proximal algorithms by incorporating them within each one of the aforementioned resolution methods.First, we consider a variational formulation including a set of constraints and a composite objective function. We present PIPA, a novel proximal interior point algorithm for solving the considered optimization problem. This algorithm includes variable metrics for acceleration purposes. We derive convergence guarantees for PIPA and show in numerical experiments that it compares favorably with state-of-the-art algorithms in two challenging image processing applications.In a second part, we investigate a neural network architecture called iRestNet, obtained by unfolding a proximal interior point algorithm over a fixed number of iterations. iRestNet requires the expression of the logarithmic barrier proximity operator and of its first derivatives, which we provide for three useful types of constraints. Then, we derive conditions under which this optimization-inspired architecture is robust to an input perturbation. We conduct several image deblurring experiments, in which iRestNet performs well with respect to a variational approach and to state-of-the-art deep learning methods.The last part of this thesis focuses on a stochastic sampling method for solving inverse problems in a Bayesian setting. We present an accelerated proximal unadjusted Langevin algorithm called PP-ULA. This scheme is incorporated into a hybrid Gibbs sampler used to perform joint deconvolution and segmentation of ultrasound images. 
PP-ULA employs the majorize-minimize principle to address non-log-concave priors. As shown in numerical experiments, PP-ULA leads to a significant time reduction and to very satisfactory deconvolution and segmentation results on both simulated and real ultrasound data.
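For readers unfamiliar with the proximity operator that this work builds on, it has a simple closed form for many common regularizers. The sketch below (a generic illustration, not code from the thesis; function names are invented) shows the soft-thresholding operator, which is the proximity operator of the ℓ1 norm, and a single proximal-gradient step of the kind that unrolled architectures such as iRestNet iterate:

```python
import numpy as np

def prox_l1(v, lam):
    """Proximity operator of lam * ||x||_1, i.e. the minimizer of
    0.5 * ||x - v||^2 + lam * ||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_gradient_step(x, A, b, lam, step):
    """One proximal-gradient step for 0.5 * ||Ax - b||^2 + lam * ||x||_1:
    gradient step on the smooth term, then prox of the nonsmooth term."""
    grad = A.T @ (A @ x - b)
    return prox_l1(x - step * grad, step * lam)
```

Each coordinate is shrunk toward zero by `lam` and clipped at zero, which is why the ℓ1 prox promotes sparse solutions.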
27

Cruz, Mencía Francisco. "Enhancing performance on combinatorial optimization algorithms." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/665199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Combinatorial optimization is a specific type of mathematical optimization in which the variables' domain is discrete. These optimization problems have enormous applicability due to their ability to optimize over unitary, non-divisible objects. Beyond generic algorithms, the research community is very active in proposing algorithms able to tackle specific combinatorial optimization problems. The goal of this thesis is to investigate how to widen the applicability of combinatorial optimization algorithms that exploit the structure of the problems to solve. We do so from a computer-hardware perspective, pursuing the full exploitation of the computational resources offered by today's hardware. For the sake of generality we work on three different problems. First we tackle the Coalition Structure Generation Problem (CSGP), for which the state-of-the-art algorithm is IDP. We propose an optimized, parallel algorithm able to solve the CSGP, achieved by defining a novel method to perform the most critical operation (splitting) as well as a novel method to divide the algorithm's operation into threads. Then, we study the Winner Determination Problem (WDP) for Combinatorial Auctions (CA), which is closely related to the CSGP. We find that state-of-the-art solvers' scalability is limited. More specifically, we show how to improve results when solving LP relaxations of the WDP in large-scale combinatorial auctions by applying the AD³ algorithm. We then contribute an optimized version of AD³ that can also run in a shared-memory parallel setting. Finally, we study the application of AD³ to the LP relaxations of a more CPU-demanding problem: Side-Chain Prediction (SCP). We present an optimized way to solve its most critical operation, a quadratic problem over an arbitrary factor. In all cases we propose optimized algorithms that scale in parallel and significantly improve on the state of the art: a three-orders-of-magnitude speedup for IDP and one order of magnitude for AD³. The ultimate purpose of this work is to demonstrate how hardware-aware algorithmic design can lead to significant speedups. We show strategies that are exportable to similar combinatorial optimization algorithms and that will help algorithm designers achieve more efficiency on modern CPUs.
28

Ericsson, Kenneth, and Robert Grann. "Image optimization algorithms on an FPGA." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-5727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis a method to compensate for camera distortion is developed for an FPGA platform as part of a complete vision system. Several methods and models are presented and described to give a good introduction to the complexity of the problems that are overcome by the developed method. The solution to the core problem is shown to achieve good precision at the sub-pixel level.
29

Amouzgar, Kaveh. "Multi-objective optimization using Genetic Algorithms." Thesis, Högskolan i Jönköping, Tekniska Högskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-19851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, the basic principles and concepts of single- and multi-objective Genetic Algorithms (GA) are reviewed. Two algorithms, one for single-objective and the other for multi-objective problems, which are believed to be more efficient, are described in detail. The algorithms are coded in MATLAB and applied to several test functions. The results are compared with existing solutions in the literature and are promising: the obtained Pareto fronts match the true Pareto fronts, with a good spread of solutions throughout the optimal region. Constraint-handling techniques are studied and applied in the two algorithms. Constrained benchmarks are optimized, and the outcomes show the ability of the algorithms to maintain solutions across the entire Pareto-optimal region. Finally, a hybrid method based on the combination of the two algorithms is introduced and its performance discussed. It is concluded that no significant strength is observed in this approach and more research on the topic is required. For further investigation of the performance of the proposed techniques, implementation on real-world engineering applications is recommended.
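The Pareto-front comparisons described above rest on the dominance relation between objective vectors, which can be sketched in a few lines (a generic illustration for minimization problems, not the thesis's MATLAB implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the points (1, 5), (2, 2), (5, 1), (3, 3), (4, 4), the first three are mutually non-dominated and form the front, while (3, 3) and (4, 4) are dominated by (2, 2).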
30

Tian, Zhong Huan. "Gender based meta-heuristic optimization algorithms." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Weicker, Karsten. "Evolutionary algorithms and dynamic optimization problems /." Osnabrück : Der Andere Verl, 2003. http://www.gbv.de/dms/ilmenau/toc/365163716weick.PDF.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Raj, Ashish. "Evolutionary Optimization Algorithms for Nonlinear Systems." DigitalCommons@USU, 2013. http://digitalcommons.usu.edu/etd/1520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Many real-world problems in science and engineering can be treated as optimization problems with multiple objectives or criteria, and the demand for fast and robust stochastic algorithms to meet these optimization needs is correspondingly high. When the cost function is nonlinear and non-differentiable, direct search approaches are the methods of choice. Many such approaches use the greedy criterion, accepting a new parameter vector only if it reduces the value of the cost function. This can result in fast convergence, but also in misconvergence, with vectors trapped in local minima. Parallel search techniques are inherently more exploratory: they discourage premature convergence, and consequently some candidate solution vectors never settle on the global minimum but instead constantly explore the whole search space for other possible solutions. In this thesis, we benchmark three popular algorithms: the Real-valued Genetic Algorithm (RGA), Particle Swarm Optimization (PSO), and Differential Evolution (DE). The DE algorithm is found to outperform the others in fast convergence and in attaining low cost-function values. DE is therefore selected and used to build a model for forecasting auroral oval boundaries during a solar storm event, which is compared against an established model by Feldstein and Starkov. As an extended study, the ability of DE is further put to the test in another nonlinear system study: the analysis and design of phase-locked loop circuits. In particular, the algorithm is used to obtain circuit parameters when frequency steps are applied at the input at particular instances.
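The greedy selection and parallel population search described above are the core of DE. As a hedged sketch (a generic DE/rand/1/bin implementation with illustrative default parameters, not the thesis's code):

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimize cost over a box via DE/rand/1/bin with greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct vectors other than the target
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            tc = cost(trial)
            if tc <= costs[i]:  # greedy criterion: keep the trial only if no worse
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

On a smooth unimodal test function such as the 2-D sphere, this sketch converges quickly to the global minimum; the greedy criterion guarantees the population's costs never increase.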
33

Carlson, Susan Elizabeth. "Component selection optimization using genetic algorithms." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Farah, Abdulkadir. "Differential evolution algorithms for network optimization." Thesis, University of Reading, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.602400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Many real-world optimization problems are difficult to solve; it is therefore wise to employ efficient optimization algorithms capable of handling the problem complexities and of finding optimal or near-optimal solutions within a reasonable time and without excessive computational resources. The objective of this research is to develop Differential Evolution (DE) algorithms with improved performance, capable of solving difficult and challenging constrained and unconstrained global optimization problems, and to extend the application of these algorithms to real-world optimization problems, particularly wireless broadband network placement and deployment. The adaptation of DE control parameters is also investigated, and a novel method using Mann iteration and tournament scoring is proposed to improve the performance of the algorithm. A novel constraint-handling technique, the neighborhood constraint handling (NCR) method, is also proposed. A set of experiments comprehensively tests the performance of the proposed DE algorithms for global optimization; numerical results on well-known global optimization test problems demonstrate their performance. In addition, a novel wireless network test point (TP) reduction algorithm (TPR) is presented. The TPR algorithm and the proposed DE algorithms are applied to the optimal network placement problem. To exploit the value of flexibility, a novel value-optimization formulation for network deployment is presented, integrating state-of-the-art cash flow (CF) analysis and real option analysis (ROA), and using the proposed DE algorithms to obtain the optimal roll-out sequence that maximizes the value of the wireless network deployment.
A numerical experimentation, based on a case study scenario of an optimal network placement and deployment for wireless broadband access network, has been conducted to confirm the efficiency of these algorithms.
35

Fathi, Bayda. "Gradient optimization algorithms with fast convergence." Thesis, Cardiff University, 2013. http://orca.cf.ac.uk/50398/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The most common class of methods for solving linear systems is the class of gradient algorithms, the most famous of which is the steepest descent (SD) algorithm. Such linear systems are equivalent to quadratic optimization problems in which the coefficient matrix of the linear system is the Hessian of the corresponding quadratic function. In recent years, the development of a particular gradient algorithm, the Barzilai-Borwein (BB) algorithm, has instigated a great deal of research in optimization, and many algorithms now exist with faster rates of convergence than the steepest descent algorithm. In this thesis, some well-known gradient algorithms (originally developed for symmetric linear problems) are applied to non-symmetric systems of equations. Their efficiency is demonstrated through an implementation on a constructed class of random, slightly non-symmetric matrices E (with positive real eigenvalues) as well as on the symmetric part of E, E_S = (E + E^T)/2, and the performance of the algorithms is investigated. Several gradient algorithms are also created that outperform the Cauchy-Barzilai-Borwein (CBB) method on non-symmetric problems when applied to another constructed class of random, slightly non-symmetric matrices A (having eigenvalues with positive real part). Numerically, it is established that the asymptotic rate of convergence of these algorithms is faster when the roots of their polynomials follow a Beta(2,2) distribution rather than a uniform one; indeed, the numerical results show a strong dependence of the asymptotic rate of convergence on the distribution of the spectrum. Most of the created algorithms achieve faster convergence than other methods, especially for non-clustered spectra.
The modified Cauchy-Barzilai-Borwein (MCBB) algorithm is the first created algorithm to split the two equal polynomial roots of the CBB algorithm into two different roots at each iteration, moving the roots of the CBB polynomial from the real axis into the complex plane, where the eigenvalues of the objective matrix lie. To attain further improvement in convergence rates, another gradient algorithm is proposed that uses either the MCBB or the CBB method at each iteration, depending on two parameters, g and x. Furthermore, several other methods are created by utilizing different step sizes from gradient algorithms that already exist for symmetric problems. The effectiveness of the constructed algorithms, which depend on several parameters, is verified by various numerical experiments and comparisons with other approaches. The motivation for considering different gradient methods stems from the fact that simple examples can be constructed in which each of these methods shows superiority over the others. Most of these algorithms have faster rates of convergence than CBB for non-symmetric matrices, but slower for symmetric matrices. Finally, a new efficient gradient-based method to solve non-symmetric linear systems of equations is presented. The simplicity of the new method, its guaranteed convergence, and its minimal memory requirements (it uses almost the same number of gradient evaluations per iteration as the CBB method) make it very attractive compared to alternative methods for slightly non-symmetric linear problems. The efficiency and feasibility of the presented method are supported by numerical experiments.
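The Barzilai-Borwein step size that this line of work builds on can be stated in a few lines. Below is a hedged sketch of the BB1 iteration on a symmetric positive definite system Ax = b (a generic illustration only; the thesis's extensions to slightly non-symmetric matrices are not reproduced here):

```python
import numpy as np

def bb_gradient(A, b, x0, iters=60):
    """BB1 gradient iteration for Ax = b with A symmetric positive definite,
    i.e. for minimizing 0.5 * x'Ax - b'x."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    # Frobenius norm upper-bounds the spectral norm, so this first step is safely short.
    step = 1.0 / np.linalg.norm(A)
    for _ in range(iters):
        x_new = x - step * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        denom = s @ y
        if denom > 0:
            step = (s @ s) / denom  # BB1 step: a Rayleigh-quotient-like estimate of 1/lambda
        x, g = x_new, g_new
    return x
```

Unlike steepest descent, the BB iteration is nonmonotone (the residual can temporarily grow), yet it typically converges far faster on ill-conditioned quadratics.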
36

Chen, Qin, and 陈琴. "Algorithms for some combinatorial optimization problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46589302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mota, Joao F. C. "Communication-Efficient Algorithms For Distributed Optimization." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is concerned with the design of distributed algorithms for solving optimization problems. The particular scenario we consider is a network with P compute nodes, where each node p has exclusive access to a cost function fp. We design algorithms in which all the nodes cooperate to find the minimum of the sum of all the cost functions, f1 + · · · + fP . Several problems in signal processing, control, and machine learning can be posed as such optimization problems. Given that communication is often the most energy-consuming operation in networks and, many times, also the slowest one, it is important to design distributed algorithms with low communication requirements, that is, communication-efficient algorithms. The two main contributions of this thesis are a classification scheme for distributed optimization problems of the kind explained above and a set of corresponding communication-efficient algorithms. The class of optimization problems we consider is quite general, since we allow that each function may depend on arbitrary components of the optimization variable, and not necessarily on all of them. In doing so, we go beyond the commonly used assumption in distributed optimization and create additional structure that can be explored to reduce the total number of communications. This structure is captured by our classification scheme, which identifies particular instances of the problem that are easier to solve. One example is the standard distributed optimization problem, in which all the functions depend on all the components of the variable. All our algorithms are distributed in the sense that no central node coordinates the network, all the communications occur exclusively between neighboring nodes, and the data associated with each node is always processed locally. We show several applications of our algorithms, including average consensus, support vector machines, network flows, and several distributed scenarios for compressed sensing. 
We also propose a new framework for distributed model predictive control, which can be solved with our algorithms. Through extensive numerical experiments, we show that our algorithms outperform prior distributed algorithms in terms of communication-efficiency, even some that were specifically designed for a particular application.
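As a rough illustration of the distributed setting described above (plain decentralized gradient descent, not one of the communication-efficient algorithms this thesis proposes), each node alternates a mixing round with its neighbors, encoded by a doubly stochastic weight matrix W, with a local gradient step:

```python
import numpy as np

def decentralized_gradient_descent(grads, W, x0, step=0.5, iters=300):
    """Each node p holds only its local gradient oracle grads[p]; nodes
    alternate averaging with neighbors (X = W @ X, W doubly stochastic,
    nonzero only on network edges) with local diminishing-step gradient steps."""
    P = len(grads)
    X = np.tile(np.asarray(x0, dtype=float), (P, 1))  # one row per node
    for t in range(iters):
        X = W @ X                    # communication round
        mu = step / (t + 1)          # diminishing step size
        for p in range(P):
            X[p] = X[p] - mu * grads[p](X[p])  # local computation
    return X
```

With local costs f_p(x) = (x - c_p)^2 on a three-node network, all rows of X approach the minimizer of the sum, the mean of the c_p, even though no node ever sees the others' cost functions.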
38

ABOUSABEA, Emad Mohamed Abd Elrahman. "Optimization algorithms for video service delivery." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00762636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this thesis is to provide optimization algorithms for accessing video services in either unmanaged or managed settings. We study recent statistics on unmanaged video services such as YouTube and propose suitable optimization techniques that could enhance file access and reduce access costs. This cost analysis plays an important role in decisions about caching and hosting periods for video files on servers. For managed video services (IPTV), we conducted experiments on an open-IPTV collaborative architecture between different operators. This model is analyzed in terms of CAPEX and OPEX costs inside the domestic sphere. Moreover, we introduce a dynamic way of optimizing the Minimum Spanning Tree (MST) for the multicast IPTV service: under nomadic access, static trees may be unable to provide the service efficiently as bandwidth utilization increases towards the streaming points (the roots of the topologies). Finally, we study reliable security measures in video streaming based on the hash-chain methodology and propose a new algorithm, comparing different ways of achieving hash-chain reliability based on generic classifications.
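The hash-chain idea underlying that last contribution can be illustrated in a few lines (a generic SHA-256 sketch; the thesis's actual algorithm and reliability comparison are not reproduced here):

```python
import hashlib

def hash_chain(seed, n):
    """Build a chain h_1 = H(seed), h_{i+1} = H(h_i). The last element
    (the anchor) is published; earlier elements are released in reverse order."""
    h = hashlib.sha256(seed).digest()
    chain = [h]
    for _ in range(n - 1):
        h = hashlib.sha256(h).digest()
        chain.append(h)
    return chain

def verify(token, anchor, steps):
    """Check that hashing token 'steps' times reproduces the trusted anchor."""
    h = token
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h == anchor
```

Because the hash is one-way, a receiver holding only the anchor can authenticate each newly released element cheaply, while a forger cannot produce valid earlier elements.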
39

CARBONO, ALONSO JOAQUIN JUVINAO. "MOORING PATTERN OPTIMIZATION USING GENETIC ALGORITHMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8242@1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
GRUPO DE TECNOLOGIA DE COMPUTAÇÃO GRÁFICA - PUC-RIO
With the increasing demand for oil, oil companies have been forced to exploit new fields in deep waters. Due to the high cost of oil exploitation operations, the development of technologies capable of increasing efficiency and reducing costs is crucial. In this context, the use of floating units in deep waters has become more frequent. The positioning of the floating units during oil exploitation operations is maintained by mooring lines, which are flexible structures usually made of steel wire, steel chain and/or synthetic cables. This work presents the development of a Genetic Algorithm (GA) procedure to solve the mooring-pattern problem of floating units used in oil exploitation operations. The distribution of mooring lines is one of the factors that directly influence the displacements (offsets) suffered by floating units when subjected to environmental conditions such as winds, waves and currents. Thus, the GA seeks an optimum distribution of the mooring lines whose final goal is to minimize the unit's displacements. The basic operators used in this algorithm are mutation, crossover and selection. In the present work, a steady-state GA has been implemented, which substitutes only one or two individuals per generation. The computation of the floating unit's static equilibrium position is accomplished by applying the catenary equilibrium equation to each mooring line to obtain the out-of-balance forces on the unit, and by using an iterative process to compute the final equilibrium position.
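A steady-state GA of the kind described, replacing a single individual per generation, can be sketched as follows (a generic maximization loop for illustration only; the thesis couples such a loop to a catenary-based equilibrium solver, which is omitted here):

```python
import random

def steady_state_ga(fitness, dim, lo, hi, pop_size=30, gens=500, pm=0.2, seed=1):
    """Maximize fitness with a steady-state GA: binary tournament selection,
    arithmetic crossover, Gaussian mutation, and worst-individual replacement."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(ind) for ind in pop]

    def tournament():
        i, j = rng.randrange(pop_size), rng.randrange(pop_size)
        return pop[i] if fit[i] >= fit[j] else pop[j]

    for _ in range(gens):
        p1, p2 = tournament(), tournament()
        alpha = rng.random()  # arithmetic crossover: random convex combination
        child = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
        child = [g + (rng.gauss(0.0, 0.1) if rng.random() < pm else 0.0)
                 for g in child]                      # Gaussian mutation
        child = [min(max(g, lo), hi) for g in child]  # keep within bounds
        worst = min(range(pop_size), key=fit.__getitem__)
        pop[worst], fit[worst] = child, fitness(child)  # steady-state replacement

    best = max(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Replacing only the worst individual per generation keeps the rest of the population, and hence any expensive per-individual evaluations (such as an equilibrium computation), intact between generations.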
40

Tang, Wang-Rei 1975. "The optimization of data compression algorithms." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chicoisne, Renaud Pierre. "Efficient algorithms for risk averse optimization." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/135193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Doctor en Sistemas de Ingeniería
Many industrial and logistics decision problems can be viewed as optimization problems, and for many of them it is unreasonable to rely on deterministic data. As we will see in this work, in the context of emergency dispatch and security planning, the true conditions are unknown, and making decisions without accounting for this uncertainty can lead to catastrophic results. The theory and application of optimization under uncertainty has generated a broad area of research. Nevertheless, large gaps in complexity remain between deterministic optimization problems and their uncertain counterparts. This thesis studies several risk-averse optimization problems, with particular emphasis on the risk-averse shortest path problem (RASP), stochastic network problems in general, and Stackelberg security games. To obtain accurate travel-time distributions over a road network from GPS data of the transit system, we review existing methods for projecting GPS trajectories and define two new algorithms: one that computes the optimal projection of the data with respect to a suitably defined error measure (MOE), and a fast heuristic that can project large amounts of data in a streaming fashion (MMH). Computational results are reported on large real and generated networks. We then develop efficient algorithms for risk-averse routing problems using sample average approximation methods, linearization techniques, and decomposition methods. We study the entropic risk measure and the conditional value-at-risk, taking correlations between the random variables into account. Promising computational results are reported on medium-sized generated instances. However, the combinatorial nature of these problems quickly renders them intractable as the problem size grows.
To address this computational difficulty, we present new formulations for hard network problems that use fewer integer variables. These formulations lead to branching schemes that exploit their special structure. We show how to apply these ideas to the sets of simple paths and Hamiltonian circuits in general networks, as well as to the sets of simple paths and cuts in directed acyclic graphs (DAGs). This preliminary work offers promising ideas for solving hard problems. Finally, we explore the implications of the algorithmic methods and formulations developed for RASP in a different area. We present new formulations and solution approaches for Stackelberg security games in which the defender is risk-averse with respect to the attacker's strategy. The problem can be solved in polynomial time when facing a single adversary type, and by solving a convex integer optimization problem when the defender faces several adversary types.
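The conditional value-at-risk studied above can be estimated from samples via sample average approximation. A minimal sketch of the empirical estimator (the travel-time samples and confidence level below are illustrative assumptions, not data from the thesis):

```python
def cvar(samples, alpha):
    """Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of costs.

    Estimates conditional value-at-risk from i.i.d. samples of a travel-time
    (or cost) distribution, as done in a sample average approximation scheme.
    """
    tail = sorted(samples, reverse=True)          # worst outcomes first
    k = max(1, int(round(len(samples) * (1 - alpha))))
    return sum(tail[:k]) / k

# Toy example: travel-time samples for one path; CVaR_0.9 averages the worst 10%.
times = [10, 12, 11, 9, 30, 10, 11, 12, 10, 50]
print(cvar(times, 0.9))  # mean of the single worst sample -> 50.0
```

A risk-averse routing algorithm would compare paths by this tail average rather than by mean travel time, which is what makes the problem harder than its deterministic counterpart.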
42

Lovestead, Raymond L. "Helical Antenna Optimization Using Genetic Algorithms." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/35295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The genetic algorithm (GA) is used to design helical antennas that provide a significantly larger bandwidth than conventional helices of the same size. Over the bandwidth of operation, the GA-optimized helix offers considerably smaller axial ratio and slightly higher gain than the conventional helix. Also, the input resistance remains relatively constant over the bandwidth. On the other hand, for nearly the same bandwidth and gain, the GA-optimized helix offers a size reduction of 2:1 relative to the conventional helix. The optimization is achieved by allowing the genetic algorithm to control a polynomial that defines the envelope around which the helix is wrapped. The fitness level is defined as a combination of gain, bandwidth, and axial ratio as determined by an analysis of the helix using NEC-2. To experimentally verify the optimization results, a prototype 12-turn, two-wavelength-high GA-helix is built and tested on the Virginia Tech outdoor antenna range. Far-field radiation patterns are measured over a wide frequency range. The axial-ratio information is extracted from the measured pattern data. Comparison of measured and NEC-2 computed radiation patterns shows excellent agreement. The agreement between the measured and calculated axial ratios is reasonable. The prototype GA-helix provides a peak gain of more than 13 dB and an upper-to-lower frequency ratio of 1.89. The 3-dB bandwidth of the antenna is 1.27 GHz (1.435 GHz - 2.705 GHz). Over this bandwidth the computed gain varies less than 3 dB and the axial ratio remains below 3 dB.
Master of Science
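A genetic algorithm of this general shape can be sketched as follows. In the thesis the fitness came from NEC-2 simulations of the helix; here that evaluation is replaced by a hypothetical toy function over the envelope-polynomial coefficients, so the target values and GA parameters are illustrative assumptions only:

```python
import random

def ga_optimize(fitness, n_genes=4, pop_size=20, generations=50, seed=1):
    """Minimal real-coded GA: elitism, midpoint crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]           # keep the better half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)           # blend two elite parents
            children.append([(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=fitness)

# Toy stand-in for the NEC-2 based fitness (gain/bandwidth/axial-ratio combination):
# reward envelope-polynomial coefficients close to a made-up target envelope.
target = [0.5, -0.2, 0.1, 0.3]
toy_fitness = lambda c: -sum((x - t) ** 2 for x, t in zip(c, target))
best = ga_optimize(toy_fitness)
```

Because the elite half carries over unchanged, the best fitness never degrades between generations, which is the property that makes such a loop safe to wrap around an expensive simulator.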
43

Shi, Yi. "Algorithms and Optimization for Wireless Networks." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recently, many new types of wireless networks have emerged for both civil and military applications, such as wireless sensor networks and ad hoc networks. To improve the performance of these wireless networks, many advanced communication techniques have been developed at the physical layer. For both theoretical and practical purposes, it is important for a network researcher to understand the performance limits of these new wireless networks. Such performance limits are important not only for theoretical understanding, but also because they can be used as benchmarks for the design of distributed algorithms and protocols. However, due to some unique characteristics of these networks, existing analytical techniques may not apply directly. As a result, new theoretical results, along with new mathematical techniques, need to be developed. In this dissertation, we focus on the design of new algorithms and optimization techniques to study the theoretical performance limits of these new wireless networks, mainly sensor networks and ad hoc networks. Wireless sensor networks consist of battery-powered nodes that are endowed with a multitude of sensing modalities. A wireless sensor network can provide in-situ, unattended, high-precision, and real-time observation over a vast area. Wireless ad hoc networks are characterized by the absence of infrastructure support. Nodes in an ad hoc network are able to organize themselves into a multi-hop network. An ad hoc network can operate in a stand-alone fashion or can be connected to a larger network such as the Internet (the latter case is also known as a mesh network). For these new wireless networks, a number of advanced physical layer techniques, e.g., ultra-wideband (UWB), multiple-input multiple-output (MIMO), and cognitive radio (CR), have been employed. These new physical layer technologies have the potential to improve network performance.
However, they also introduce some unique design challenges. For example, CR is capable of reconfiguring its RF front-end on the fly and switching to newly-selected frequency bands. It is much more advanced than the current multi-channel multi-radio (MC-MR) technology. MC-MR remains a hardware-based radio technology: each radio can only operate on a single channel at a time, and the number of concurrent channels that can be used at a wireless node is limited by the number of radio interfaces, while a CR can use multiple bands at the same time. In addition, an MC-MR based wireless network typically assumes there is a set of "common channels" available for all nodes in the network, whereas in CR networks each node may have a different set of frequency bands based on its particular location. These important differences between MC-MR and CR mean that the algorithmic design for a CR network is substantially more complex than that under MC-MR. Due to the unique characteristics of these new wireless networks, it is necessary to consider models and constraints at multiple layers (e.g., physical, link, and network) when we explore network performance limits. The formulations of these cross-layer problems are usually in very complex forms and are mathematically challenging. We aim to develop novel algorithmic design and optimization techniques that provide optimal or near-optimal solutions. The main contributions of this dissertation are summarized as follows. 1. Node lifetime and rate allocation. We study the sensor node lifetime problem by considering not only maximizing the time until the first node fails, but also maximizing the lifetimes of all the nodes in the network. For fairness, we maximize node lifetimes under the lexicographic max-min (LMM) criterion. Our contributions are two-fold. First, we develop a polynomial-time algorithm based on a parametric analysis (PA) technique, which has a much lower computational complexity than an existing state-of-the-art approach.
We also present a polynomial-time algorithm to calculate the flow routing schedule such that the LMM-optimal node lifetime vector can be achieved. Second, we show that the same approach can be employed to address a different but related problem, called the LMM rate allocation problem. More importantly, we discover an elegant duality relationship between the LMM node lifetime problem and the LMM rate allocation problem. We show that it is sufficient to solve only one of the two problems and that important insights can be obtained from the duality results. 2. Base station placement. Base station location has a significant impact on sensor network lifetime. We aim to determine the best location for the base station so as to maximize the network lifetime. For a multi-hop sensor network, this problem is particularly challenging as data routing strategies also affect the network lifetime performance. We present an approximation algorithm that can guarantee $(1- \varepsilon)$-optimal network lifetime performance with any desired error bound $\varepsilon >0$. The key step is to divide the continuous search space into a finite number of subareas and represent each subarea with a "fictitious cost point" (FCP). We prove that the largest network lifetime achieved by one of these FCPs is $(1- \varepsilon)$-optimal. This approximation algorithm offers a significant reduction in complexity when compared to a state-of-the-art algorithm, and represents the best known result for this problem. 3. Mobile base station. The benefits of using a mobile base station to prolong sensor network lifetime have been well recognized. However, due to the complexity of the problem (time-dependent network topology and traffic routing), theoretical performance limits and provably optimal algorithms remain difficult to develop. Our main result hinges upon a novel transformation of the joint base station movement and flow routing problem from the time domain to the space domain.
Based on this transformation, we first show that if the base station is allowed to be present only on a set of pre-defined points, then we can find the optimal sojourn time for the base station on each of these points so that the overall network lifetime is maximized. Based on this finding, we show that when the location of the base station is unconstrained (i.e., it can move to any point in the two-dimensional plane), we can develop an approximation algorithm for the joint mobile base station and flow routing problem such that the network lifetime is guaranteed to be at least $(1- \varepsilon)$ of the maximum network lifetime, where $\varepsilon$ can be made arbitrarily small. This is the first theoretical result with a performance guarantee for this problem. 4. Spectrum sharing in CR networks. Cognitive radio is a revolution in radio technology that promises unprecedented flexibility in radio communications and is viewed as an enabling technology for dynamic spectrum access. We consider a cross-layer design of scheduling and routing with the objective of minimizing the required network-wide radio spectrum usage to support a set of user sessions. Here, scheduling considers how to use a pool of unequal-size frequency bands for concurrent transmissions and routing considers how to transmit data for each user session. We develop a near-optimal algorithm based on a sequential fixing (SF) technique, where the determination of scheduling variables is performed iteratively through a sequence of linear programs (LPs). Upon completing the fixing of these scheduling variables, the values of the other variables in the optimization problem can be obtained by solving an LP. 5. Power control in CR networks. We further consider the case of variable transmission power in CR networks. Here, our objective is to minimize the total required bandwidth-footprint product (BFP) to support a set of user sessions.
As a basis, we first develop an interference model for scheduling when power control is performed at each node. This model extends existing so-called protocol models for wireless networks where transmission power is deterministic. As a result, this model can be used for a broad range of problems where power control is part of the optimization space. An efficient solution procedure based on the branch-and-bound framework and convex hull relaxations is proposed to provide $(1- \varepsilon)$-optimal solutions. This is the first theoretical result on this important problem.
Ph. D.
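The lexicographic max-min (LMM) criterion in the first contribution above can be illustrated, in a much simplified single-resource setting, by the classic progressive-filling algorithm for max-min fair rate allocation. This is a textbook simplification, not the thesis's parametric analysis technique, and the link capacities and flows below are made-up toy data:

```python
def max_min_fair(capacity, flows_on_link):
    """Progressive filling: raise all flow rates equally; freeze flows on saturated links.

    capacity: dict link -> capacity
    flows_on_link: dict link -> set of flow ids crossing that link
    Returns dict flow -> rate (a lexicographically max-min fair allocation).
    """
    flows = set().union(*flows_on_link.values())
    rate = {f: 0.0 for f in flows}
    active = set(flows)
    cap = dict(capacity)
    while active:
        # How much each link can still give per active flow crossing it.
        headroom = min(
            cap[l] / len(flows_on_link[l] & active)
            for l in cap if flows_on_link[l] & active
        )
        for f in active:
            rate[f] += headroom
        # Consume capacity and freeze flows on saturated links.
        frozen = set()
        for l in cap:
            users = flows_on_link[l] & active
            cap[l] -= headroom * len(users)
            if users and cap[l] < 1e-9:
                frozen |= users
        active -= frozen
    return rate

# Link "A" (capacity 10) carries flows 1 and 2; link "B" (capacity 3) carries 2 and 3.
rates = max_min_fair({"A": 10.0, "B": 3.0}, {"A": {1, 2}, "B": {2, 3}})
print(rates)  # -> {1: 8.5, 2: 1.5, 3: 1.5}
```

Flows 2 and 3 are capped at 1.5 by the bottleneck link B, and only then is flow 1 allowed to absorb the remaining capacity of link A; no frozen flow's rate can be raised without lowering a smaller one, which is the max-min property.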
44

Sun, Chuangchuang. "Rank-Constrained Optimization: Algorithms and Applications." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1531750343188114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ren, Jintong. "Optimization algorithms for graph layout problems." Thesis, Angers, 2020. https://tel.archives-ouvertes.fr/tel-03178385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis considers two graph layout problems: the cyclic bandwidth problem (CBP) and the minimum linear arrangement problem (MinLA). The CBP is a natural extension of the bandwidth minimization problem (BMP), and the MinLA is a min-sum problem. These problems have wide real-life applications. Since they are NP-hard, solving them in the general case is computationally difficult. This thesis is therefore dedicated to developing effective heuristic algorithms for these challenging problems. Specifically, we introduce two iterated local search algorithms and a memetic algorithm with different recombination operators for the CBP, and a set-based neighborhood heuristic algorithm for the MinLA. The two iterated local search algorithms are experimentally shown to compete favorably with state-of-the-art methods, a suitable crossover for the memetic algorithm is identified for the CBP, and the set-based neighborhood heuristic algorithm proves more efficient than the traditional 2-flip neighborhood algorithm.
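The iterated local search scheme mentioned above follows a standard skeleton: descend to a local optimum, perturb, re-descend, and accept improvements. A minimal sketch on a toy arrangement objective (the 2-swap moves and cycle-graph cost below are illustrative, not the thesis's CBP neighborhoods):

```python
import random

def local_search(p, cost):
    """First-improvement 2-swap descent to a local optimum."""
    improved = True
    while improved:
        improved = False
        c = cost(p)
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                p[i], p[j] = p[j], p[i]
                if cost(p) < c:
                    c = cost(p)
                    improved = True
                else:
                    p[i], p[j] = p[j], p[i]  # undo non-improving swap
    return p

def iterated_local_search(cost, n, iterations=50, seed=0):
    """ILS skeleton: local search, random perturbation, accept only improvements."""
    rng = random.Random(seed)
    best = local_search(list(range(n)), cost)
    for _ in range(iterations):
        cand = best[:]
        for _ in range(2):                     # perturbation: two random swaps
            i, j = rng.randrange(n), rng.randrange(n)
            cand[i], cand[j] = cand[j], cand[i]
        cand = local_search(cand, cost)
        if cost(cand) < cost(best):
            best = cand
    return best

# Toy layout cost on an 8-cycle: sum of |position(u) - position(v)| over edges
# (a min-sum objective in the spirit of MinLA, not the actual CBP cost).
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]
def cost(p):
    pos = {v: k for k, v in enumerate(p)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

best = iterated_local_search(cost, n)
print(cost(best))  # the optimum for this 8-cycle is 14
```

The perturbation step is what distinguishes ILS from repeated restarts: it keeps most of the incumbent's structure while escaping the current 2-swap local optimum.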
46

Jedor, Matthieu. "Bandit algorithms for recommender system optimization." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this PhD thesis, we study the optimization of recommender systems with the objective of providing more refined item suggestions for a user. The task is modeled using the multi-armed bandit framework. In a first part, we address two problems that commonly occur in recommender systems: the large number of items to handle and the management of sponsored content. In a second part, we investigate the empirical performance of bandit algorithms, and in particular how to tune conventional algorithms to improve results in the stationary and non-stationary environments that arise in practice. This leads us to analyze, both theoretically and empirically, the greedy algorithm, which in some cases outperforms the state of the art.
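The greedy algorithm analyzed above is the ε = 0 case of the standard ε-greedy bandit, which can be sketched as follows (the arm means, noise level, and ε value are toy assumptions, not the thesis's experimental setup):

```python
import random

def epsilon_greedy(arm_means, horizon=5000, epsilon=0.1, seed=42):
    """ε-greedy bandit: explore a random arm with probability ε, otherwise
    exploit the arm with the best empirical mean. epsilon=0 gives the purely
    greedy algorithm."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n
    totals = [0.0] * n
    for _ in range(horizon):
        if counts.count(0) or rng.random() < epsilon:
            # Pull each arm once first, then explore uniformly at random.
            a = counts.index(0) if counts.count(0) else rng.randrange(n)
        else:
            a = max(range(n), key=lambda i: totals[i] / counts[i])
        reward = arm_means[a] + rng.gauss(0, 0.1)   # Gaussian noise around the mean
        counts[a] += 1
        totals[a] += reward
    return counts

counts = epsilon_greedy([0.2, 0.5, 0.8])
# With means this well separated, the best arm (index 2) receives most pulls.
```

In a recommender setting the arms would be items and the rewards clicks or ratings; the exploration rate ε is exactly the kind of tuning parameter whose stationary vs. non-stationary behavior the thesis investigates.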
47

Kroyan, Julia. "Trust-search algorithms for unconstrained optimization /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3120456.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wike, Carl E. 1948. "Supply chain optimization : formulations and algorithms." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 1999.
Includes bibliographical references (leaves 103-106).
In this thesis, we develop practical solution methods for a supply chain optimization problem: a multi-echelon, uncapacitated, time-expanded network of distribution centers and stores, for which we seek the shipping schedule that minimizes total inventory, backlogging, and shipping costs, assuming deterministic, time-varying demand over a fixed time horizon for a single product. Because of fixed ordering and shipping costs, this concave-cost network flow problem is in a class of NP-hard network design problems. We develop mathematical programming formulations, heuristic algorithms, and enhanced algorithms using approximate dynamic programming (ADP). We achieve a strong mixed integer programming (MIP) formulation, and fast, reliable algorithms, which can be extended to problems with multiple products. Beginning with a lot-size based formulation, we strengthen the formulation in steps to develop one which is a variation of a node-arc formulation for the network design problem. In addition, we present a path-flow formulation for the single-product case and an enhanced network design formulation for the multiple-product case. The basic algorithm we develop uses a dynamic lot-size model with backlogging together with a greedy procedure that emulates inventory pull systems. Four related algorithms perform local searches of the basic algorithm's solution or explore alternative solutions using pricing schemes, including a Lagrangian-based heuristic. We show how approximate dynamic programming can be used to solve this supply chain optimization problem as a dynamic control problem using any of the five algorithms. In addition to improving all the algorithms, the ADP enhancement turns the simplest algorithm into one comparable to the more complex ones.
Our computational results illustrate that our enhanced network design formulation almost always produces integral solutions and can be used to solve problems of moderate size (3 distribution centers, 30 stores, 30 periods). Our heuristic methods, particularly those enhanced by ADP methods, produce near-optimal solutions for truly large-scale problems.
by Carl E. Wike.
S.M.
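The dynamic lot-size model at the core of the basic algorithm above can be sketched with a Wagner-Whitin style dynamic program. This sketch omits the backlogging extension and the network structure; the demand, setup, and holding costs are toy numbers:

```python
def lot_size(demand, setup, hold):
    """Wagner-Whitin DP for the uncapacitated lot-sizing problem.

    An order placed in period t (cost `setup`) covers demand for periods t..j;
    units carried forward cost `hold` per unit per period. Returns the minimum
    total setup-plus-holding cost.
    """
    n = len(demand)
    INF = float("inf")
    best = [0.0] + [INF] * n        # best[t] = min cost to satisfy periods 0..t-1
    for t in range(n):
        if best[t] == INF:
            continue
        carry = 0.0
        for j in range(t, n):       # order in period t covering periods t..j
            carry += hold * (j - t) * demand[j]
            best[j + 1] = min(best[j + 1], best[t] + setup + carry)
    return best[n]

# Ordering in periods 0 and 3 (cost 100 + 30 holding + 100) beats a single order.
print(lot_size([20, 30, 0, 40], setup=100, hold=1))  # -> 230.0
```

The exact trade-off between fixed ordering cost and holding cost is what the greedy pull-system procedure in the thesis approximates on the full multi-echelon network.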
49

Themelis, Andreas. "Proximal algorithms for structured nonconvex optimization." Thesis, IMT Alti Studi Lucca, 2018. http://e-theses.imtlucca.it/262/1/Themelis_phdthesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Due to their simplicity and versatility, splitting algorithms are often the methods of choice for many optimization problems arising in engineering. “Splitting” complex problems into simpler subtasks, their complexity scales well with problem size, making them particularly suitable for large-scale applications where other popular methods such as IP or SQP cannot be employed. There are, however, two major downsides: 1) there is no satisfactory theory in support of their employment for nonconvex problems, and 2) their efficacy is severely affected by ill conditioning. Many attempts have been made to overcome these issues, but only incomplete or case-specific theories have been established, and some enhancements have been proposed which however either fail to preserve the simplicity of the original algorithms, or can only offer local convergence guarantees. This thesis aims at overcoming these downsides. First, we provide novel tight convergence results for the popular DRS and ADMM schemes for nonconvex problems, through an elegant unified framework reminiscent of Lyapunov stability theory. “Proximal envelopes”, whose analysis is here extended to nonconvex problems, prove to be the suitable Lyapunov functions. Furthermore, based on these results we develop enhancements of splitting algorithms, the first that 1) preserve complexity and convergence properties, 2) are suitable for nonconvex problems, and 3) achieve asymptotic superlinear rates.
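The Douglas-Rachford splitting (DRS) scheme whose nonconvex convergence the thesis analyzes can be sketched, in the convex scalar case for concreteness, on min ½(x−a)² + μ|x|; the step size and problem data below are illustrative:

```python
def prox_quadratic(v, step, a):
    """Proximal operator of f(x) = 0.5 * (x - a)**2 with parameter `step`."""
    return (v + step * a) / (1 + step)

def soft_threshold(v, tau):
    """Proximal operator of g(x) = mu * |x|, with tau = step * mu."""
    return max(abs(v) - tau, 0.0) * (1 if v > 0 else -1)

def douglas_rachford(a, mu, step=1.0, iters=200):
    """DRS iteration: x = prox_f(z); y = prox_g(2x - z); z += y - x.

    'Splitting' means f and g are only ever touched through their separate
    proximal maps, never jointly.
    """
    z = 0.0
    for _ in range(iters):
        x = prox_quadratic(z, step, a)
        y = soft_threshold(2 * x - z, step * mu)
        z += y - x
    return x

# The minimizer of 0.5*(x - 3)**2 + |x| is soft_threshold(3, 1) = 2.
print(round(douglas_rachford(a=3.0, mu=1.0), 6))  # -> 2.0
```

In the nonconvex setting studied in the thesis, the same iteration is run but convergence is certified through a proximal-envelope Lyapunov function rather than firm nonexpansiveness.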
50

Strappaveccia, Francesco <1982>. "Many-core Algorithms for Combinatorial Optimization." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6949/1/Strappaveccia_Francesco_tesi.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Combinatorial optimization is becoming ever more crucial these days. From the natural sciences to economics, through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are increasingly in demand, in order to quickly find good solutions to complex strategic problems. Resource optimization is nowadays a fundamental foundation for successful projects. From the theoretical point of view, combinatorial optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development seems unable to keep pace with modern hardware technologies, especially those of the processor industry. In this work we propose new parallel algorithms designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (such as dynamic programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and making it possible to address instances of previously intractable size. We approached four notable combinatorial optimization problems: the packing problem, the vehicle routing problem, the single-source shortest path problem, and a network design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either solving the full problem (guillotine cuts and SSSPP) or enhancing a fundamental part of the solution method (VRP and ND). We support our claims with computational results for all problems, on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40×.
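The single-source shortest path problem mentioned above illustrates why such problems parallelize well: in a Bellman-Ford style algorithm, each relaxation pass scans all edges independently, so a pass maps naturally onto many-core hardware with one thread per edge. A sequential sketch of that data-parallel structure (the thesis's GPU algorithms are far more elaborate; the graph below is toy data):

```python
def bellman_ford(n, edges, src):
    """Edge-list Bellman-Ford with Jacobi-style passes.

    Each inner loop over `edges` reads only the previous pass's distances,
    so all relaxations in a pass are independent (embarrassingly parallel);
    the passes themselves act as synchronization points.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(n - 1):                 # at most n-1 passes are needed
        updated = False
        new = dist[:]
        for u, v, w in edges:              # data-parallel scan over all edges
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
                updated = True
        dist = new
        if not updated:                    # early exit once distances settle
            break
    return dist

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
print(bellman_ford(4, edges, 0))  # -> [0.0, 3.0, 1.0, 4.0]
```

On a GPU the inner edge scan becomes one kernel launch per pass, which is exactly the kind of restructuring that yields the order-of-magnitude speed-ups reported above.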

To the bibliography