Dissertations / Theses on the topic 'Global Optimization'


1

Singer, Adam B. "Global dynamic optimization." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28662.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2004.
Includes bibliographical references (p. 247-256).
My thesis focuses on global optimization of nonconvex integral objective functions subject to parameter dependent ordinary differential equations. In particular, efficient, deterministic algorithms are developed for solving problems with both linear and nonlinear dynamics embedded. The techniques utilized for each problem classification are unified by an underlying composition principle transferring the nonconvexity of the embedded dynamics into the integral objective function. This composition, in conjunction with control parameterization, effectively transforms the problem into a finite dimensional optimization problem where the objective function is given implicitly via the solution of a dynamic system. A standard branch-and-bound algorithm is employed to converge to the global solution by systematically eliminating portions of the feasible space by solving an upper bounding problem and a convex lower bounding problem at each node. The novel contributions of this work lie in the derivation and solution of these convex lower bounding relaxations. Separate algorithms exist for deriving convex relaxations for problems with linear dynamic systems embedded and problems with nonlinear dynamic systems embedded. However, the two techniques are unified by the method for relaxing the integral in the objective function. I show that integrating a pointwise in time convex relaxation of the original integrand yields a convex underestimator for the integral. Separate composition techniques, however, are required to derive relaxations for the integrand depending upon the nature of the embedded dynamics; each case is addressed separately. For problems with embedded linear dynamic systems, the nonconvex integrand is relaxed pointwise in time on a set composed of the Cartesian product between the parameter bounds and the state bounds. Furthermore, I show that the solution of the differential equations is affine in the parameters. Because the feasible set is convex pointwise in time, the standard result that a convex function composed with an affine function remains convex yields the desired result that the integrand is convex under composition. Additionally, methods are developed using interval arithmetic to derive the exact state bounds for the solution of a linear dynamic system. Given a nonzero tolerance, the method is rigorously shown to converge to the global solution in a finite time. An implementation is developed, and via a collection of case studies, the technique is shown to be very efficient in computing the global solutions.
For problems with embedded nonlinear dynamic systems, the analysis requires a more sophisticated composition technique attributed to McCormick. McCormick's composition technique provides a method for computing a convex underestimator for the integrand given an arbitrary nonlinear dynamic system, provided that convex underestimators and concave overestimators can be given for the states. Because the states are known only implicitly via the solution of the nonlinear differential equations, deriving these convex underestimators and concave overestimators is a highly nontrivial task. Based on standard optimization results, outer approximation, the affine solution to linear dynamic systems, and differential inequalities, I present a novel method for constructing convex underestimators and concave overestimators for arbitrary nonlinear dynamic systems ...
by Adam Benjamin Singer.
Ph.D.
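The deterministic branch-and-bound strategy described in this abstract, fathoming regions whose lower bound cannot beat the incumbent, can be illustrated with a minimal one-dimensional sketch. The interval-style bounding rule and test function below are illustrative choices, not the convex relaxations developed in the thesis:

```python
def bb_minimize(f, bound, lo, hi, tol=1e-6):
    """Branch-and-bound: f gives upper bounds at sampled points, and
    bound(a, b) is a rigorous lower bound on f over [a, b]. Nodes whose
    lower bound cannot improve the incumbent by tol are fathomed."""
    best = min(f(lo), f(hi))
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if bound(a, b) > best - tol:
            continue  # fathomed: no point in [a, b] can beat best by tol
        m = 0.5 * (a + b)
        best = min(best, f(m))
        stack += [(a, m), (m, b)]
    return best

# Example: f(x) = x^4 - 3x^2 + x on [-2, 2]. A crude but valid lower
# bound sums interval bounds of the three monomials.
f = lambda x: x**4 - 3*x**2 + x

def bound(a, b):
    x4_lo = 0.0 if a <= 0.0 <= b else min(a**4, b**4)
    x2_hi = max(a*a, b*b)
    return x4_lo - 3.0 * x2_hi + min(a, b)

best = bb_minimize(f, bound, -2.0, 2.0)
```

Tighter bounding rules fathom nodes at larger widths, which is exactly why the quality of the convex relaxations drives the efficiency of such algorithms.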
2

Ruan, Ning. "Global optimization for nonconvex optimization problems." Thesis, Curtin University, 2012. http://hdl.handle.net/20.500.11937/1936.

Abstract:
Duality is one of the most successful ideas in modern science [46] [91]. It is essential in natural phenomena, particularly in physics and mathematics [39] [94] [96]. In this thesis, we consider the canonical duality theory for several classes of optimization problems.

The first problem that we consider is a general sum of fourth-order polynomials minimization problem. This problem arises extensively in engineering and science, including database analysis, computational biology, sensor network communications, nonconvex mechanics, and ecology. We first show that this global optimization problem is actually equivalent to a discretized minimal potential variational problem in large deformation mechanics. Therefore, a general analytical solution is proposed by using the canonical duality theory.

The second problem that we consider is a nonconvex quadratic-exponential optimization problem. By using the canonical duality theory, the nonconvex primal problem in n-dimensional space can be converted into a one-dimensional canonical dual problem, which is either a concave maximization or a convex minimization problem with zero duality gap. Several examples are solved so as to illustrate the applicability of the theory developed.

The third problem that we consider is quadratic minimization problems subject to either box or integer constraints. Results show that these nonconvex problems can be converted into concave maximization dual problems over convex feasible spaces without duality gap, and that the Boolean integer programming problem is actually equivalent to a critical point problem in continuous space. These dual problems can be solved under certain conditions. Both existence and uniqueness of the canonical dual solutions are presented. A canonical duality algorithm is presented and applications are illustrated.

The fourth problem that we consider is a quadratic discrete-value selection problem subject to inequality constraints. The problem is first transformed into a quadratic 0-1 integer programming problem. The dual problem is then constructed by using the canonical duality theory. Under appropriate conditions, this dual problem is a maximization problem of a concave function over a convex continuous space. Theoretical results show that the canonical duality theory can either provide a global optimization solution, or an optimal lower bound approximation to this NP-hard problem. Numerical simulation studies, including some relatively large scale problems, are carried out so as to demonstrate the effectiveness and efficiency of the canonical duality method. An open problem for understanding NP-hard problems is proposed.

The fifth problem that we consider is a mixed-integer quadratic minimization problem with fixed cost terms. We show that this well-known NP-hard problem in R^2n can be transformed into a continuous concave maximization dual problem over a convex feasible subset of R^n with zero duality gap. We also discuss connections between the proposed canonical duality theory approach and the classical Lagrangian duality approach. The resulting canonical dual problem can be solved, under certain conditions, by traditional convex programming methods. Conditions for the existence and uniqueness of global optimal solutions are presented. An application to a decoupled mixed-integer problem is used to illustrate the derivation of analytic solutions for globally minimizing the objective function. Numerical examples for both decoupled and general mixed-integer problems are presented, and an open problem is proposed for future study.

The sixth problem that we consider is a general nonconvex quadratic minimization problem with nonconvex constraints. By using the canonical dual transformation, the nonconvex primal problem can be converted into a canonical dual problem (i.e., a concave maximization problem with zero duality gap). Illustrative applications to quadratic minimization with multiple quadratic constraints, box/integer constraints, and general nonconvex polynomial constraints are discussed, along with insightful connections to classical Lagrangian duality. Conditions for ensuring the existence and uniqueness of global optimal solutions are presented. Several numerical examples are solved.

The seventh problem that we consider is a general nonlinear algebraic system. By using the least squares method, the nonlinear system of m quadratic equations in n-dimensional space is first formulated as a nonconvex optimization problem. We then prove that, by using the canonical duality theory, this nonconvex problem is equivalent to a concave maximization problem in R^m, which can be solved by well-developed convex optimization techniques. Both existence and uniqueness of global optimal solutions are discussed, and several illustrative examples are presented.

The eighth problem that we consider is a general sensor network localization problem. It is shown that, by the canonical duality theory, this nonconvex minimization problem is equivalent to a concave maximization problem over a convex set in a symmetric matrix space, and hence can be solved by combining a perturbation technique with existing optimization techniques. Applications are illustrated, and results show that the proposed method is potentially a powerful one for large-scale sensor network localization problems.
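The least-squares reformulation mentioned for the seventh problem, recasting a system of quadratic equations as a minimization, can be sketched as follows. The example system and the Gauss-Newton solver are illustrative stand-ins, not the canonical duality method of the thesis:

```python
import numpy as np

# Reformulate the quadratic system F(x) = 0 as the (generally nonconvex)
# least-squares problem: minimize 0.5 * ||F(x)||^2.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,  # circle of radius 2
                     x[0] - x[1]])              # line y = x

def J(x):
    # Jacobian of F, used by the Gauss-Newton iteration below
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

x = np.array([1.0, 0.5])
for _ in range(50):
    # Gauss-Newton step: solve (J^T J) dx = J^T F and descend
    x = x - np.linalg.solve(J(x).T @ J(x), J(x).T @ F(x))
```

From this starting point the iteration converges to the intersection (sqrt(2), sqrt(2)); like any local method on a nonconvex least-squares landscape, it offers no global guarantee, which is the gap the canonical duality theory targets.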
3

Moore, Roxanne Adele. "Value-based global optimization." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44750.

Abstract:
Computational models and simulations are essential system design tools that allow for improved decision making and cost reductions during all phases of the design process. However, the most accurate models are often computationally expensive and can therefore only be used sporadically. Consequently, designers are often forced to choose between exploring many design alternatives with less accurate, inexpensive models and evaluating fewer alternatives with the most accurate models. To achieve both broad exploration of the alternatives and accurate determination of the best alternative with reasonable costs incurred, surrogate modeling and variable accuracy modeling are used widely. A surrogate model is a mathematically tractable approximation of a more expensive model based on a limited sampling of that model, while variable accuracy modeling involves a collection of different models of the same system with different accuracies and computational costs. As compared to using only very accurate and expensive models, designers can determine the best solutions more efficiently using surrogate and variable accuracy models because obviously poor solutions can be eliminated inexpensively using only the less expensive, less accurate models. The most accurate models are then reserved for discerning the best solution from the set of good solutions. In this thesis, a Value-Based Global Optimization (VGO) algorithm is introduced. The algorithm uses kriging-like surrogate models and a sequential sampling strategy based on Value of Information (VoI) to optimize an objective characterized by multiple analysis models with different accuracies. It builds on two primary research contributions. The first is a novel surrogate modeling method that accommodates data from any number of analysis models with different accuracies and costs. The second contribution is the use of Value of Information (VoI) as a new metric for guiding the sequential sampling process for global optimization. 
In this manner, the cost of further analysis is explicitly taken into account during the optimization process. Results characterizing the algorithm show that VGO outperforms Efficient Global Optimization (EGO), a similar global optimization algorithm that is considered to be the current state of the art. It is shown that when cost is taken into account in the final utility, VGO achieves a higher utility than EGO with statistical significance. In further experiments, it is shown that VGO can be successfully applied to higher dimensional problems as well as practical engineering design examples.
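The sequential sampling idea can be made concrete with the classical expected-improvement criterion underlying EGO, naively extended with a cost term in the spirit of Value of Information. This is a generic sketch, not Moore's actual VoI metric:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of sampling a point whose surrogate prediction
    is Gaussian with mean mu and std sigma, given incumbent f_best
    (minimization)."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (f_best - mu) * Phi + sigma * phi

def net_value(mu, sigma, f_best, cost):
    # VoI-flavoured criterion: expected improvement minus the cost of
    # obtaining the sample, so expensive analyses must "pay for themselves".
    return expected_improvement(mu, sigma, f_best) - cost
```

Subtracting the sampling cost captures the key behaviour described above: a cheap, low-accuracy model can be worth querying even when its expected improvement is modest, while the most expensive model is queried only when the expected gain justifies it.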
4

Birbil, Sevket Ilker. "Stochastic Global Optimization Techniques." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-20020403-171452.

Abstract:

In this research, a novel population-based global optimization method has been studied. The method is called the Electromagnetism-like Mechanism, or EM for short. The proposed method mimics the behavior of electrically charged particles: a set of points is sampled from the feasible region, and these points imitate the role of charged particles in basic electromagnetism. The underlying idea of the method is to direct sample points toward local optimizers, which point out attractive regions of the feasible space.

The proposed method has been applied to different test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other reported results from the literature. Without using higher-order information, EM has converged rapidly (in terms of the number of function evaluations) to the global optimum and produced highly efficient results for problems of varying degrees of difficulty.

After a systematic study of the underlying stochastic process, a proof of convergence to the global optimum has been given for the proposed method. The thrust of the proof is to show that, in the limit, at least one of the points in the population moves to the neighborhood of the global optimum with probability one.

The structure of the proposed method is very flexible, permitting the easy development of variations. Capitalizing on this, several variants of the proposed method have been developed and compared with other methods from the literature. These variants of EM have been able to provide accurate answers to selected problems and in many cases have been able to outperform other well-known methods.
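The charged-particle analogy can be sketched in one dimension: points are charged according to their objective values, better points attract, worse points repel, and every non-best point moves along the resulting force. This is a simplified illustration of the idea, not Birbil's n-dimensional algorithm:

```python
import math
import random

def em_step(points, f, lo, hi):
    """One Electromagnetism-like Mechanism step (1-D sketch)."""
    vals = [f(x) for x in points]
    ibest = min(range(len(points)), key=vals.__getitem__)
    denom = sum(v - vals[ibest] for v in vals) or 1.0
    # Charge decays exponentially with relative objective value.
    charge = [math.exp(-len(points) * (v - vals[ibest]) / denom) for v in vals]
    moved = []
    for i, x in enumerate(points):
        if i == ibest:            # the incumbent best point never moves
            moved.append(x)
            continue
        force = 0.0
        for j, y in enumerate(points):
            if j == i or abs(y - x) < 1e-12:
                continue
            direction = (y - x) / abs(y - x)
            sign = 1.0 if vals[j] < vals[i] else -1.0  # attract vs repel
            force += sign * direction * charge[i] * charge[j] / abs(y - x)
        if force != 0.0:
            x = x + random.random() * force / abs(force)  # random step along force
        moved.append(min(hi, max(lo, x)))                 # project onto the box
    return moved

random.seed(0)
pts = [-4.0, -1.0, 2.5, 4.0]
f = lambda x: (x - 1.0) ** 2
for _ in range(25):
    pts = em_step(pts, f, -5.0, 5.0)
```

Because the incumbent never moves, the best objective value in the population is non-increasing, which is the anchor of the convergence argument sketched in the abstract.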

5

Gattupalli, Rajeswar R. "Advances in global optimization /." View online ; access limited to URI, 2008. http://0-digitalcommons.uri.edu.helin.uri.edu/dissertations/AAI3314454.

6

Zehnder, Nino. "Global optimization of laminated structures /." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17573.

7

Quttineh, Nils-Hassan. "Algorithms for Costly Global Optimization." Licentiate thesis, Mälardalen University, School of Education, Culture and Communication, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-5970.

Abstract:

There exist many applications with so-called costly problems, which means that the objective function to be maximized or minimized cannot be described using standard functions and expressions. Instead, one treats the objective function as a "black box" into which parameter values are sent and from which a function value is returned. This implies, in particular, that no derivative information is available.

The reason for describing these problems as costly is that it may take a long time to calculate a single function value. The black box could, for example, solve a large system of differential equations or carry out a heavy simulation, which can take anywhere from several minutes to several hours. These very special conditions therefore require customized algorithms. Common optimization algorithms are based on calculating function values every now and then, which usually can be done instantly, but with a costly problem it may take several hours to compute a single function value. Our main objective is therefore to create algorithms that exploit all available information to the limit before a new function value is calculated. In other words, we want to find the optimal solution using as few function evaluations as possible.

A good example of a real-life application comes from the automotive industry, where the development of new engines relies on advanced models governed by a dozen key parameters. The goal is to optimize the model by changing the parameters in such a way that the engine becomes as energy efficient as possible while still meeting all sorts of demands on strength and external constraints.
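The evaluation-budget mindset described above can be made concrete with a small wrapper that caches queries and counts true evaluations of the black box. This is an illustrative utility, not something from the thesis:

```python
class CostlyFunction:
    """Wrap an expensive black-box objective: cache repeated queries so no
    point is ever evaluated twice, and enforce an evaluation budget."""
    def __init__(self, f, budget):
        self.f = f
        self.budget = budget
        self.evals = 0
        self.cache = {}

    def __call__(self, x):
        if x in self.cache:
            return self.cache[x]          # free: already evaluated
        if self.evals >= self.budget:
            raise RuntimeError("evaluation budget exhausted")
        self.evals += 1
        self.cache[x] = self.f(x)         # the single costly call
        return self.cache[x]

g = CostlyFunction(lambda x: (x - 3) ** 2, budget=10)
values = [g(x) for x in (0, 1, 1, 2, 2, 2)]
```

An algorithm driven through such a wrapper is judged exactly as the abstract suggests: by how good a solution it finds per true evaluation spent.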

8

Schonlau, Matthias. "Computer experiments and global optimization." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22234.pdf.

9

Al-Mharmah, Hisham. "Global optimization of stochastic functions." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/25665.

10

Oulmane, Mourad. "On-Chip global interconnect optimization." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33985.

Abstract:
We present a practical yet accurate approach for dealing with the problem of inserting repeaters along on-chip interconnect lines to meet delay and transition time requirements. This approach is based on the fact that the transition time and the delay at the far end of an interconnect segment are, respectively, independent of and linearly dependent on the driving repeater's input transition time, as long as the ratio of the output to input transition time does not exceed a pre-defined value. In this context, we first derive simple closed form expressions for the optimal repeater spacing and sizing. Then, we propose a bottom-up "pseudo" hierarchical quadratic programming method for inserting and sizing repeaters in RC interconnects. This method, although largely based on Van Ginneken's [30], is able to account for transition times at every potential repeater insertion point along the RC line of interest. The resulting technique can be readily incorporated in a more general RC network optimization scheme (through repeater insertion) where, eventually, wire sizing can be formulated either as an objective or a constraint.
Accurate moment matching techniques for computing the RC delays and transition times are used in addition to an accurate CMOS inverter/repeater delay model that takes into account short channel effects that are prevalent in deep submicron (DSM) technologies. In particular, a new delay metric, based on the first two moments of the impulse response of the interconnect RC circuit, is derived. Also, a new empirical ramp approximation that takes into account the inherent asymmetry of signals in signal distribution networks in DSM technologies is presented.
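Closed-form repeater spacing and sizing expressions of this kind go back to Bakoglu's classical Elmore-delay analysis. A sketch of those classical estimates follows; these are the textbook formulas, not the transition-time-aware expressions derived in the thesis:

```python
import math

def classical_repeaters(R_wire, C_wire, R_0, C_0):
    """Bakoglu-style estimates for a uniform RC line with total resistance
    R_wire and total capacitance C_wire, driven by repeaters scaled from a
    unit inverter with output resistance R_0 and input capacitance C_0.
    Returns (k, h): the delay-optimal repeater count and repeater size."""
    k = math.sqrt(0.4 * R_wire * C_wire / (0.7 * R_0 * C_0))
    h = math.sqrt(R_0 * C_wire / (R_wire * C_0))
    return k, h

k, h = classical_repeaters(R_wire=1.0, C_wire=1.0, R_0=1.0, C_0=1.0)
```

The Elmore-based estimate ignores input transition times entirely, which is precisely the limitation the approach in this thesis addresses.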
11

Stephens, Chris. "Global optimization: theory versus practice." Thesis, University of Canterbury. Mathematics, 1997. http://hdl.handle.net/10092/8430.

Abstract:
This thesis looks at some theoretical and practical aspects of global optimization; as we shall see, they do not always coincide. Chapter 1 defines the global optimization problem, discusses applications, concluding there are fewer than often claimed, and presents a survey of algorithms. A simple deterministic analogue to PAS, a theoretical stochastic algorithm known to have good convergence properties, is presented. The often-claimed minimax optimality of the Piyavskii-Shubert algorithm is shown not to apply for the first few iterations. The counter-example given also applies to Mladineo's algorithm. Chapter 2 concentrates on some theoretical results for global optimization algorithms. The results show that for both deterministic and stochastic algorithms, global information about the function is necessary to solve the global optimization problem. Chapter 3 introduces interval arithmetic as a tool for global optimization. A simpler and slightly more general proof of the convergence of the natural inclusion function than appears in the literature is provided. Interval arithmetic is generalised to apply to new classes of subdomains and take account of the structure of the function's expression. Examples show that generalised interval arithmetic can lead to dramatic improvements in inclusions and global optimization algorithms. Chapter 4 defines interval and bounding Hessians. The main result provides an optimal method of obtaining optimal (in two different senses) bounding Hessians from interval Hessians. Examples demonstrate the usefulness of bounding Hessians to global optimization. Chapter 5 brings together the theoretical results of the previous chapters into a new adaptive second derivative branch and bound algorithm. First, it presents a summary of the branch and bound framework and discusses the algorithms of Baritompa and Cutler.
A counter-example shows that one of Baritompa and Cutler's algorithms is not valid in general and restricted sufficient conditions under which it is valid are given. The new algorithm is based (somewhat loosely in its final form) on Baritompa and Cutler's algorithms in a branch and bound framework. It achieves for the first time a cubic order of convergence in the bounding rule of a global optimization algorithm. Theoretical implications of a cubic order of convergence are also presented. Chapter 6 presents the results of testing an implementation of the new algorithm and variations. Four different bounding rules, three using adaptive second derivatives, are compared on 29 test functions. Conclusions are presented in the final chapter.
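The natural inclusion function discussed in Chapter 3 can be sketched with a minimal interval class; the example also exhibits the overestimation (dependency) problem that motivates generalising interval arithmetic to exploit the structure of the function's expression. The class and test function are illustrative, not from the thesis:

```python
class Interval:
    """Minimal interval arithmetic: each operation returns an interval
    guaranteed to enclose all possible real results."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Natural inclusion of f(x) = x * (x - 2): evaluate the expression with
# interval operands. Over [0, 2] the true range is [-1, 0], but the natural
# inclusion yields [-4, 0] because the two occurrences of x are treated as
# independent: the dependency problem.
x = Interval(0.0, 2.0)
enclosure = x * (x - Interval(2.0, 2.0))
```

The enclosure is valid but loose; rewriting the same function as (x - 1)^2 - 1 before interval evaluation would give the exact range, which is the kind of expression-structure gain the generalised interval arithmetic of Chapter 3 pursues.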
12

Newbern, Jeffrey Lawrence. "Global optimization of dither arrays." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10894.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 59-60).
by Jeffrey Newbern.
M.Eng.
13

Theodosopoulos, Theodore. "Stochastic models for global optimization." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11404.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 61-63).
by Theodore Vassilios Theodosopoulos.
Ph.D.
14

Wechsung, Achim. "Global optimization in reduced space." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/87131.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 203-216).
Optimization is a key activity in any engineering discipline. Global optimization methods, in particular, strive to solve nonconvex problems, which often arise in chemical engineering, and deterministic algorithms such as branch-and-bound provide a certificate of optimality for the identified solution. Unfortunately, the worst-case runtime of these algorithms is exponential in the problem dimension. This leads to the notion of reduced-space problem formulations where either the number of variables that the algorithm branches on is reduced or only the actual degrees of freedom are visible to the optimization algorithms, following a partition of the variables into independent and dependent ones. This approach introduces new challenges though: McCormick relaxations, which are very easily applied in this setting, can be nonsmooth, the minima are very likely to be unconstrained causing the cluster problem and the information contained in the constraints is not as readily exploited. In this thesis, several advances to both theory and methods are reported. First, a new analysis of the cluster problem is provided reaffirming the importance of second-order convergent bounding methods. The cluster problem refers to the phenomenon whereby a large number of boxes in the vicinity of a minimum are visited by branch-and-bound algorithms. In particular, it is shown that tighter relaxations can lead to a significant reduction in the number of boxes visited. Next, a constraint propagation technique for intervals is extended to McCormick relaxations. This reverse McCormick update utilizes information in the constraints and improves relaxations of the dependent variables, which can be used to either strengthen the relaxations of the feasible set or, using generalized McCormick relaxations, to construct reduced-space relaxations of the objective function. Third, a second-order convergent interval bounding method for the zeros of parametric nonlinear systems of equations is presented. 
This is useful to provide second-order convergent interval information to generalized McCormick relaxations, e.g., in the reverse propagation scheme. Fourth, the theory underpinning McCormick relaxations is extended to a class of discontinuous functions. It is further shown that branch-and-bound algorithms still possess their convergence properties.
by Achim Wechsung.
Ph. D.
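McCormick relaxations, central to this thesis, are easiest to see on a single bilinear term. Below is a sketch of the standard envelopes for z = x*y on a box; this is textbook McCormick, not the reverse-propagation machinery developed here:

```python
def mccormick_bilinear(xL, xU, yL, yU):
    """Standard McCormick envelopes for z = x*y on [xL, xU] x [yL, yU]:
    returns a convex underestimator and a concave overestimator of x*y."""
    def under(x, y):
        return max(xL * y + yL * x - xL * yL,
                   xU * y + yU * x - xU * yU)
    def over(x, y):
        return min(xU * y + yL * x - xU * yL,
                   xL * y + yU * x - xL * yU)
    return under, over

under, over = mccormick_bilinear(0.0, 1.0, 0.0, 1.0)
```

On [0, 1]^2 the underestimator reduces to max(0, x + y - 1) and the overestimator to min(x, y). As the box shrinks, the gap between the envelopes and x*y closes quadratically, the second-order convergence whose importance for the cluster problem is reaffirmed in this thesis.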
15

Lee, Cha Kun. "Global optimization of hybrid systems." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37276.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2006.
Includes bibliographical references (p. 339-353).
Systems that exhibit both discrete state and continuous state dynamics are called hybrid systems. In most nontrivial cases, these two aspects of system behavior interact to such a significant extent that they cannot be decoupled effectively by any kind of abstraction and must be analyzed simultaneously. Hybrid system models are important in many areas of science and engineering, including flip-flops and latching relays, manufacturing systems, air-traffic management systems, controller synthesis, switched systems, chemical process systems, signaling and decision making mechanisms in (biological) cells, robotic systems, safety interlock systems, and embedded systems. The primary focus of this thesis is to explore deterministic methods for the global optimization of hybrid systems. While the areas of modeling, simulation and sensitivity analysis of hybrid systems have received much attention, there are many challenging difficulties associated with the optimization of such systems. The contents of this thesis represent the first steps toward deterministic global optimization of hybrid systems in the continuous time domain. There are various reasons for wanting to solve optimization problems globally.
In particular, there are many applications which demand that the global solution be found, for example, formal safety verification problems and parameter estimation problems. In the former case, a suboptimal local solution could falsely indicate that all safety specifications are met, leading to disastrous consequences if, in actuality, a global solution exists which provides a counterexample that violates some safety specification. In the latter case, a suboptimal local solution could falsely indicate that a proposed model structure did not match experimental data in a statistically significant manner, leading to the false rejection of a valid model structure. In addition, for many optimization problems in engineering, the presence of nonlinear equality constraints makes the optimization problem nonconvex, such that local optimization methods can often fail to produce a single feasible point, even though the problem is indeed feasible. The control parameterization framework is employed for the solution of the optimization problem with continuous time hybrid systems embedded. A major difficulty of such a framework lies in the fact that the mode sequence of the embedded hybrid system changes in the space of the optimization decision variables for most nontrivial problems.
This makes the resulting optimization problem nonsmooth, because the parametric sensitivities of the hybrid system do not exist everywhere, thus invalidating efficient gradient based optimization solvers. In this thesis, the general optimization problem is decomposed into three subproblems, and tackled individually: (a) when the mode sequence is fixed, and the transition times are fixed; (b) when the mode sequence is allowed to vary, and the transition times are fixed; and (c) when the mode sequence is fixed, and the transition times are allowed to vary. Because even these subproblems are nontrivial to solve, this thesis focuses on hybrid systems with linear time varying ordinary differential equations describing the continuous dynamics, and proposes various methods to exploit the linear structure. However, in the course of solving the last subproblem, a convexity theory for general, nonlinear hybrid systems is developed. Subproblem (a) is the easiest to solve. A convexity theory is presented that allows convex relaxations of general, nonconvex Bolza type functions to be constructed for the optimization problem. This allows a deterministic branch-and-bound framework to be employed for the global optimization of the subproblem.
Subproblems (b) and (c) are much more difficult to solve, and require the exploitation of structure. For subproblem (b), a hybrid superstructure is proposed that enables the linear structure to be retained. A branch-and-cut framework with a novel dynamic bounds tightening heuristic is proposed, and it is shown that the generation of cuts from dynamic bounds tightening can have a dramatic impact on the solution of the problem. For subproblem (c), a time transformation is employed to transform the problem into one with fixed transition times, but nonlinear dynamics. A convexity theory is developed for constructing convex relaxations of general, nonconvex Bolza type functions with the nonlinear hybrid system embedded, along with the development of bounding methods based on the theory of differential inequalities. A novel bounding technique that exploits the time transformation is also introduced, which can provide much tighter bounds than those furnished by differential inequalities.
by Cha Kun Lee.
Ph.D.
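The time transformation used for subproblem (c) maps each mode's variable duration onto a fixed interval, so the transition times become ordinary parameters of a fixed-horizon system. A minimal sketch with forward-Euler integration follows; this is illustrative, not the thesis's implementation:

```python
def integrate_transformed(modes, durations, x0, n=1000):
    """Integrate a hybrid trajectory with a fixed mode sequence: mode k has
    dynamics dx/dt = modes[k](x) for a duration d_k. The time transformation
    maps each duration onto tau in [0, 1], giving dx/dtau = d_k * modes[k](x),
    so the durations d_k appear as plain parameters on a fixed grid."""
    x = x0
    h = 1.0 / n
    for f, d in zip(modes, durations):
        for _ in range(n):           # forward Euler on the fixed tau-grid
            x = x + h * d * f(x)
    return x

# Two modes: drift +1 for 2 time units, then drift -1 for 0.5 time units.
x_final = integrate_transformed([lambda x: 1.0, lambda x: -1.0], [2.0, 0.5], 0.0)
```

Because the integration grid no longer moves with the transition times, sensitivities and relaxations with respect to the durations can be treated like any other parameter dependence, at the price of making the transformed dynamics nonlinear in those parameters.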
16

Pandit, Sanjay. "Texture segmentation by global optimization." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843855/.

Abstract:
This thesis is concerned with the investigation of a specific approach to the problem of texture segmentation, namely one based on the global optimization of a cost function. Many tasks in image analysis are expressed as global optimization problems in which the general issue is to find the global minimum of a cost function which describes the interaction between the different variables modelling the image features and the interaction of these variables with the data in a given problem. The minimization of such a global cost function is a difficult problem, since the number of hidden variables (labels) is very large and the global cost function may have many local minima. This problem can be overcome to a large extent by using a stochastic relaxation algorithm (for example, simulated annealing). Initially, various classical techniques for texture segmentation are reviewed. Ideally, any texture segmentation algorithm should segment an image so that there is a one-to-one correspondence between the segmented edgels and the ground-truth edgels. The effectiveness of an algorithm can be quantified in terms of under- and over-detection errors for each segmented output image. These measures are used throughout this thesis to quantify the quality of the results. A particular method which uses global optimization for texture segmentation is initially identified as potentially interesting and is implemented and studied. The implementation showed that this method suffers from many shortcomings and is not really as good as reported in the literature. As the general approach to the problem is a well-established methodology for image processing problems, the rest of the thesis is devoted to different attempts to make this method work. The novel ideas introduced in order to improve the method are: An improved version of the cost function. The use of alternative statistics that characterize each texture. The use of a combination of statistics to characterize textures.
The use of an implicit dictionary of penalizable label configurations, as opposed to an explicit dictionary, leading to penalties applied to anything not acceptable rather than to a selection of unacceptable configurations. The introduction of a modified transfer function that maps statistical differences to label differences. The use of a database of training patterns instead of assuming that one knows a priori which textures are present in the image to be segmented. The use of alternative time schedules with which the model is imposed on the data gradually, in a linear, non-linear or adaptive way. The introduction of an enhanced set of labels that allows the use of the local orientation of the boundary. The introduction of a novel way to create new states of the system during the process of simulated annealing, updating the values of 9 label sites instead of a single label site at a time in order to achieve faster convergence. The results obtained by all these modifications vastly improve the performance of the algorithm over its original version. However, the whole approach does not really produce the quality of results expected for real applications, and it does not exhibit the robustness of a system that could be used in practice. The reason appears to be the bluntness of the statistical tests used to identify the boundary. So, my conclusion is that although global optimization methods are good for edge detection, where the data are the local values of the first derivative, the approach is not very appropriate for texture segmentation, where one has to rely on statistical differences.
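The stochastic relaxation machinery this abstract relies on can be illustrated with a generic simulated annealing loop: downhill moves are always accepted, and uphill moves are accepted with probability exp(-dE/T) so the search can escape local minima while the temperature is high. This is a minimal sketch under assumed names, not the thesis's implementation, and the toy cost in the usage line is illustrative only:

```python
import math
import random

def simulated_annealing(cost, state, neighbor, T0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Minimize `cost` by accepting every downhill move and uphill
    moves with probability exp(-dE/T)."""
    rng = random.Random(seed)
    cur, best = state, state
    T = T0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        dE = cost(cand) - cost(cur)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        T *= cooling  # geometric cooling schedule
    return best

# Toy usage: a one-dimensional integer "labelling" problem.
best = simulated_annealing(lambda x: (x - 7) ** 2, 0,
                           lambda x, rng: x + rng.choice((-1, 1)))
```

Updating several label sites at once, as the thesis proposes, corresponds to a `neighbor` function that perturbs a block of labels per step instead of a single one.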
APA, Harvard, Vancouver, ISO, and other styles
17

Leclerc, Anthony P. "Efficient and reliable global optimization /." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487780393266914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Viitanen, Sami. "Some new global optimization algorithms /." Åbo : Åbo Akad. Förl, 1997. http://www.gbv.de/dms/goettingen/239457927.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Mohd, Ismail Bin. "Global optimization using interval arithmetic." Thesis, University of St Andrews, 1987. http://hdl.handle.net/10023/13824.

Full text
Abstract:
This thesis contains a description of an algorithm, MW, for bounding the global minimizers and the global minimum value of a twice continuously differentiable function f : Rn → R1 in a compact sub-interval of Rn. The algorithm MW is similar to the algorithm of Hansen [Han-80a] in that interval arithmetic is used together with certain of Hansen's ideas, but differs from Hansen's algorithm in that MW bounds the Kuhn-Tucker points corresponding to the global minimizers of f in the given sub-interval. The Kuhn-Tucker points are bounded with prescribed precision by using either of the algorithms KMSW [SheW-85c] or MAP [SheW-85b]. Numerical results obtained from Triplex [BaCM-82a] [MorC-83a] implementations of H and MW are presented.
APA, Harvard, Vancouver, ISO, and other styles
20

Schütze, Oliver. "Set oriented methods for global optimization." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=976566982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Beckers, Markus [Verfasser]. "Toward global robust optimization / Markus Beckers." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1059317370/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Heath, Jeffrey W. "Global optimization of finite mixture models." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7179.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Applied Mathematics and Scientific Computation Program. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
23

Wilson, Simon Paul. "Aircraft routing using nonlinear global optimization." Thesis, University of Hertfordshire, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sobester, A. "Enhancements to global design optimization techniques." Thesis, University of Southampton, 2003. https://eprints.soton.ac.uk/45904/.

Full text
Abstract:
Modern engineering design optimization relies to a large extent on computer simulations of physical phenomena. The computational cost of such high-fidelity physics-based analyses typically places a strict limit on the number of candidate designs that can be evaluated during the optimization process. The more global the scope of the search, the greater are the demands placed by this limited budget on the efficiency of the optimization algorithm. This thesis proposes a number of enhancements to two popular classes of global optimizers. First, we put forward a generic algorithm template that combines population-based stochastic global search techniques with local hillclimbers in a Lamarckian learning framework. We then test a specific implementation of this template on a simple aerodynamic design problem, where we also investigate the feasibility of using an adjoint flow-solver in this type of global optimization. In the second part of this work we look at optimizers based on low-cost global surrogate models of the objective function. We propose a heuristic that enables efficient parallelisation of such strategies (based on the expected improvement infill selection criterion). We then look at how the scope of surrogate-based optimizers can be controlled and how they can be set up for high efficiency.
APA, Harvard, Vancouver, ISO, and other styles
25

Bashir, Hassan Abdullahi. "Global optimization with hybrid evolutionary computation." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/global-optimization-with-hybrid-evolutionary-computation(0392a891-dfae-4063-baf1-992cd0dc7df2).html.

Full text
Abstract:
An investigation has been made into hybrid systems which include stochastic and deterministic optimization. This thesis aims to provide new and relevant insights into the design of nature-inspired hybrid optimization paradigms that combine evolutionary and gradient-based methods. Such hybrid evolutionary methods yield improved performance when applied to complex global optimization tasks, and recent research has produced many such hybridization policies. The thesis makes three broad contributions. Firstly, by examination of stochastic optimization, supported by case studies, we utilised Price's theorem to formulate a new population evolvability measure which assesses the dynamical characteristics of evolutionary operators. This leads to the development of a new convergence assessment method. A novel diversity control mechanism that uses heuristic initialisation and a convergence detection mechanism is then proposed. Empirical support is provided to explicitly analyse the benefits of effective diversity control for continuous optimization. Secondly, this study utilised research relevance trees to evolve hybrid systems which combine various evolutionary computation (EC) models with the sequential quadratic programming (SQP) algorithm in a collaborative manner. We reviewed the convergence characteristics of various numerical optimization methods, and the concept of automatic differentiation is applied to design a vectorised forward derivative accumulation technique; this enables the provision of accurate derivatives to the SQP algorithm, which serves as the local optimizer in the deterministic phase of the hybrid models. Through benchmarking on stationary and dynamic problems, results showed that the proposed models achieve sufficient diversity control, which suggests an improved exploration-exploitation balance.
Thirdly, to mitigate the challenges of 'inappropriate' parameter settings, this thesis proposes closed-loop adaptive mechanisms which dynamically evolve effective step sizes for the evolutionary operators. It then examines the effect of incorporating a derivative-free algorithm which extends the hybrid model to a flexible and reusable algorithmic framework.
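The forward derivative accumulation mentioned in this abstract is a form of forward-mode automatic differentiation. A scalar, minimal sketch of the underlying dual-number idea (the class and function names are illustrative, not the thesis's code, and only addition and multiplication are shown):

```python
class Dual:
    """Dual number (value, derivative): arithmetic on Duals
    propagates exact derivatives alongside values."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._lift(other)  # product rule for the derivative part
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of f at x, obtained by seeding dot = 1."""
    return f(Dual(x, 1.0)).dot
```

For f(x) = x³ + 2x, `derivative(lambda x: x*x*x + 2*x, 2.0)` evaluates 3x² + 2 at x = 2 exactly; a local optimizer such as SQP can consume derivatives of this kind in place of finite differences.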
APA, Harvard, Vancouver, ISO, and other styles
26

Greene, James J. "Global optimization of water distribution systems." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020212/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Monnet, Dominique. "Global minmax optimization for robust H∞ control." Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2018. http://www.theses.fr/2018ENTA0009/document.

Full text
Abstract:
H∞ control is nowadays used in many applications. This control technique makes it possible to synthesize control laws which are robust with respect to external disturbances. Moreover, it allows model uncertainty to be taken into account in the synthesis process. As a consequence, H∞ control laws are robust with respect to both external disturbances and model uncertainty. A robust control law is a solution to an optimization problem formulated from H∞ criteria. These criteria are the mathematical translations of the desired closed-loop performance specifications. The two classical approaches to this optimization problem rely on convex reformulation and on local optimization methods; however, neither is able to guarantee the optimality of the control law with respect to the H∞ criteria. This thesis investigates a global optimization approach to H∞ control, rarely considered until now. Contrary to convex and local approaches, and at the price of a higher algorithmic complexity, global optimization methods guarantee convergence to the optimal control law and take model uncertainty into account in a reliable way.
APA, Harvard, Vancouver, ISO, and other styles
28

Ahmed, Abdel-Rahman Hedar A. "Studies on metaheuristics continuous global optimization problems." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/145313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Lančinskas, Algirdas. "Parallelization of random search global optimization algorithms." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130620_110438-17037.

Full text
Abstract:
Global optimization problems are relevant in various fields of research and industry, such as chemistry, biology, biomedicine, operational research, etc. Normally it is easier to solve optimization problems whose objective functions have specific properties such as linearity, convexity, differentiability, etc. However, many practical problems do not satisfy such properties or even cannot be expressed in an adequate mathematical form. Therefore, random search optimization methods are popular for solving such problems. The dissertation deals with the investigation of random search global optimization algorithms, their parallelization, and their application to practical problems. The work is focused on the modification and parallelization of particle swarm optimization and genetic algorithms. A modification of the particle swarm optimization algorithm, based on reduction of the search area, is proposed, and several strategies to parallelize the algorithm are investigated. The algorithm is applied to solve the Multiple Gravity Assist problem on a parallel computing system. A hybrid global multi-objective optimization algorithm is developed by modifying a single-agent stochastic search strategy and incorporating it into a multi-objective optimization genetic algorithm. Several strategies to parallelize the multi-objective optimization genetic algorithm are proposed. The resulting parallel algorithms are experimentally investigated by solving competitive facility location... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
30

Gutmann, H. M. "Radial basis function methods for global optimization." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599804.

Full text
Abstract:
In many real world optimization problems it is essential or desirable to determine the global minimum of the objective function. The subject of this dissertation is a new class of methods that tackle such problems. In particular, we have in mind problems where function evaluations are expensive and no additional information is available. The methods employ radial basis functions that have been proved to be useful for interpolation problems. Examples include thin plate splines and multiquadrics. Specifically, in each iteration, radial basis function interpolation is used to define a utility function. A maximizer of this function is chosen to be the next point where the objective function is evaluated. Relations to similar optimization methods are established, and a general framework is presented that combines these methods and our methods. A large part of the dissertation is devoted to the convergence theory. We show that convergence can be achieved for most types of basis functions without further assumptions on the objective function. For other types, however, a similar result could not be obtained. This is due to the properties of the so-called native space that is associated with a basis function. In particular, it is of interest whether this space contains sufficiently smooth functions with compact support. In order to address this question, we present two approaches. First, we establish a characterization of the native space in terms of generalized Fourier transforms. For many types, for example thin plate splines, this helps to derive conditions on the smoothness of a function that guarantee that it is in the native space. For other types, for example multiquadrics, however, we show that the native space does not contain a nonzero function with compact support. The second approach we present gives slightly weaker results, but it employs some new theory using interpolation on an infinite regular grid.
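The radial basis function interpolation step underlying such methods can be sketched with a multiquadric basis, whose interpolation matrix is nonsingular for distinct centres (by Micchelli's theorem), so no polynomial tail is needed in this minimal illustration. Function names are assumptions; the thin plate splines the abstract also mentions would additionally require a low-degree polynomial term:

```python
import numpy as np

def multiquadric(r, c=1.0):
    # Basis phi(r) = sqrt(r^2 + c^2); c is a shape parameter.
    return np.sqrt(r * r + c * c)

def rbf_fit(X, f):
    """Solve A w = f, where A[i, j] = phi(||x_i - x_j||)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(multiquadric(r), f)
    # The surrogate s(x) = sum_j w_j * phi(||x - x_j||) interpolates f on X.
    return lambda x: multiquadric(np.linalg.norm(X - x, axis=-1)) @ w

# Fit three 1-D samples and obtain a surrogate cheap to evaluate anywhere.
X = np.array([[0.0], [1.0], [2.0]])
s = rbf_fit(X, np.array([0.0, 1.0, 4.0]))
```

In a surrogate-based optimizer, a utility function built from such an interpolant (rather than the expensive objective itself) is what gets maximized to choose the next evaluation point.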
APA, Harvard, Vancouver, ISO, and other styles
31

Henkle, Aimee L. (Aimee Leigh) 1975. "Global supply chain design and optimization methodology." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/34762.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; in conjunction with the Leaders for Manufacturing Program at MIT, 2004.
Includes bibliographical references (p. 72).
The work for this thesis was performed at Honeywell in the Automation and Control Solutions (ACS) division. The project focuses on ACS's manufacturing strategy regarding its global supply chain design, primarily discussing the manufacturing growth opportunities available in emerging regions. Honeywell ACS's current methodology for the development of a long-term manufacturing strategy is based on growth and total cost reduction objectives. In order to comprehend the total cost of the manufacturing strategy, considerations such as inventory, logistics and duties, outsourcing and material sourcing are evaluated. The project also considers a factory's geographical location and ACS's year-by-year implementation plan. An outcome of this Honeywell project and the basis of this thesis is the development of a general supply chain design and optimization methodology that utilizes three analytical tools (Country Selection Framework, Total Cost Model and Implementation Plan Process) that are capable of validating the supply chain design of any company. The analytical tools can be used to verify key strategic supply chain decisions or to create a baseline manufacturing strategy. The following results can be determined using this supply chain design methodology: Determine an appropriate operating region for current or future business needs; Evaluate the feasibility of factory relocation projects by considering all relevant costs; Evaluate the cost implications of the supply chain structure by considering logistics, inventory and material sourcing costs; Understand the impact of outsourcing on the manufacturing strategy; Recommend a year-by-year implementation plan in the case of multiple projects and limited capital resources.
by Aimee L. Henkle.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
32

Mayer, Theresia. "Efficient global optimization : analysis, generalizations and extensions." Thesis, University of Edinburgh, 2003. http://hdl.handle.net/1842/15297.

Full text
Abstract:
In some optimization problems the evaluation of the objective function is very expensive. It is therefore desirable to find the global optimum of the function with only comparatively few function evaluations. Because of the expense of evaluations it is justified to put significant effort into finding good sample points and using all the available information about the objective function. One way of achieving this is by assuming that the function can be modelled as a stochastic process and fitting a response surface to it, based on function evaluations at a set of points determined by an initial design. Parameters in the model are estimated when fitting the response surface to the available data. In determining the next point at which to evaluate the objective function, a balance must be struck between local search and global search. Local search in a neighbourhood of the minimum of the approximating function has the aim of finding a point with improved objective value. The aim of global search is to improve the approximation by maximizing an error function which reflects the uncertainty in the approximating function. Such a balance is achieved by using the expected improvement criterion. In this approach the next sample point is chosen where the expected improvement is maximized. The expected improvement at any point in the range reflects the expected amount of improvement of the approximating function beyond a target value (usually the best function value found up to this point) at that point, taking into account the uncertainty in the approximating function. In this thesis, we present and examine the expected improvement approach and the maximization of the expected improvement function.
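For a Gaussian predictive distribution N(mu, sigma²) at a candidate point, the expected improvement below the incumbent best value has a well-known closed form. A minimal sketch for minimization (the function name is illustrative, not the thesis's code):

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of a Gaussian prediction N(mu, sigma^2)
    below the incumbent best value f_best (minimization)."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi
```

At a point the model is certain about (sigma = 0) the criterion reduces to the plain improvement max(f_best - mu, 0), while a large sigma can make a point attractive even when mu exceeds f_best; this is the balance between local and global search described above.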
APA, Harvard, Vancouver, ISO, and other styles
33

Stepanenko, Svetlana. "Global Optimization Methods based on Tabu Search." Doctoral thesis, kostenfrei, 2008. http://www.opus-bayern.de/uni-wuerzburg/volltexte/2008/3060/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ibraev, Suiunbek. "A new parallel method for verified global optimization." [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963452304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Schutte, Jaco Francois. "Applications of parallel global optimization to mechanics problems." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0012932.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Pettersson, Tobias. "Global optimization methods for estimation of descriptive models." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11781.

Full text
Abstract:

Using mathematical models with the purpose to understand and store knowledge about a system is not a new field in science, with early contributions dating back to, e.g., Kepler's laws of planetary motion.

The aim is to obtain such comprehensive, predictive and quantitative knowledge about a phenomenon that mathematical expressions or models can be used to forecast every relevant detail about that phenomenon. Such models can be used for reducing pollution from car engines, preventing aviation incidents, or developing new therapeutic drugs. Models used to forecast, or predict, the behavior of a system are referred to as predictive models. For such models, the estimation problem aims to find one model; it is well known and can be handled by standard methods for global nonlinear optimization.

Descriptive models are used to obtain and store quantitative knowledge of a system. Estimation of descriptive models has not been much described in the literature so far; instead, the methods used for predictive models have been applied. Rather than finding one particular model, parameter estimation for descriptive models aims to find every model that contains descriptive information about the system. Thus, the parameter estimation problem for descriptive models cannot be stated as a standard optimization problem.

The main objective of this thesis is to propose methods for estimation of descriptive models. This is done using methods for nonlinear optimization, including both new and existing theory.

APA, Harvard, Vancouver, ISO, and other styles
37

Samuelsson, Oscar. "Benchmarking Global Optimization Algorithms for Core Prediction Identification." Thesis, Linköpings universitet, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-61253.

Full text
Abstract:
Mathematical modeling has evolved from being a rare event to becoming a standard approach for investigating complex biological interactions. However, variations and uncertainties in experimental data usually result in uncertain estimates of the parameters of the model. It is possible to draw conclusions from the model despite uncertain parameters by using core predictions. A core prediction is a model property which is valid for all parameter vectors that fit data at an acceptable cost. By validating the core prediction with additional experimental measurements one can draw conclusions about the overall model despite uncertain parameter values. A prerequisite for identifying a core prediction is a global search for all acceptable parameter vectors. Global optimization methods are normally constructed to search for a single optimal parameter vector, but methods searching for several acceptable parameter vectors are required here. In this thesis, two metaheuristic optimization algorithms have been evaluated, namely simulated annealing and scatter search. In order to compare their differences, a set of functions has been implemented in Matlab. The Matlab functions include a statistical framework which is used to discard poorly tuned optimization algorithms, five performance measures reflecting the different objectives of locating one or several acceptable parameter vectors, and a number of test functions meant to reflect high-dimensional, multimodal problems. In addition to the test functions, a biological benchmark model is included. The statistical framework has been used to evaluate the performance of the two algorithms with the objectives of locating one and several acceptable parameter vectors. For the objective of locating one acceptable parameter vector, the results indicate that scatter search performed better than simulated annealing. The results also indicate that different search objectives require differently tuned algorithms. Furthermore, the results show that test functions with a suitable degree of difficulty are not trivial to obtain. A verification of the tuned optimization algorithms has been conducted on the benchmark model. The results are somewhat contradictory and, in this specific case, it is not possible to claim that good configurations on test functions remain good in real applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Liberti, Leo Sergio. "Reformulation and convex relaxation techniques for global optimization." Thesis, Imperial College London, 2004. http://hdl.handle.net/10044/1/11807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ding, Baoyan. "A parametric solution for local and global optimization." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21341.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Fowkes, Jaroslav Mrazek. "Bayesian numerical analysis : global optimization and other applications." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:ab268fe7-f757-459e-b1fe-a4a9083c1cba.

Full text
Abstract:
We present a unifying framework for the global optimization of functions which are expensive to evaluate. The framework is based on a Bayesian interpretation of radial basis function interpolation which incorporates existing methods such as Kriging, Gaussian process regression and neural networks. This viewpoint enables the application of Bayesian decision theory to derive a sequential global optimization algorithm which can be extended to include existing algorithms of this type in the literature. By posing the optimization problem as a sequence of sampling decisions, we optimize a general cost function at each stage of the algorithm. An extension to multi-stage decision processes is also discussed. The key idea of the framework is to replace the underlying expensive function by a cheap surrogate approximation. This enables the use of existing branch and bound techniques to globally optimize the cost function. We present a rigorous analysis of the canonical branch and bound algorithm in this setting as well as newly developed algorithms for other domains including convex sets. In particular, by making use of Lipschitz continuity of the surrogate approximation, we develop an entirely new algorithm based on overlapping balls. An application of the framework to the integration of expensive functions over rectangular domains and spherical surfaces in low dimensions is also considered. To assess performance of the framework, we apply it to canonical examples from the literature as well as an industrial model problem from oil reservoir simulation.
APA, Harvard, Vancouver, ISO, and other styles
41

Ali, M. "Some modified stochastic global optimization algorithms with applications." Thesis, Loughborough University, 1994. https://dspace.lboro.ac.uk/2134/13429.

Full text
Abstract:
Stochastic methods for global optimization problems with continuous variables have been studied. Modifications of three different algorithms have been proposed. These are (1) Multilevel Single Linkage (MSL), (2) Simulated Annealing (SA) and (3) Controlled Random Search (CRS). We propose a new topographical Multilevel Single Linkage (TMSL) algorithm as an extension of MSL. TMSL performs much better than MSL, especially in terms of the number of function evaluations. A new aspiration-based simulated annealing algorithm (ASA) has been derived which enhances the performance of SA by incorporating an aspiration criterion. We have also proposed two new CRS algorithms, the CRS4 and CRS5 algorithms, which improve the CRS algorithm both in terms of cpu time and the number of function evaluations. The usefulness of the Halton and the Hammersley quasi-random sequences in global optimization has been investigated. These sequences are frequently used in numerical integration in the field of Bayesian statistics. A useful property of the quasi-random sequences is that they are evenly distributed and thus explore the search region more rapidly than pseudo-random numbers. Comparison of the modified algorithms with their unmodified versions is carried out on standard test problems, but in addition a substantial part of the thesis consists of numerical investigations of 5 different practical global optimization problems. These problems are as follows: (1) A nonlinear continuous stirred tank reactor problem. (2) A chemical reactor problem with a bifunctional catalyst. (3) A pig-liver likelihood function. (4) Application and derivation of semi-empirical many-body interatomic potentials. (5) An optimal control problem involving a car suspension system. Critical comparisons of the modified and unmodified global optimization algorithms have been carried out on these problems.
The methods applied to these problems are compared from the points of view of reliability in finding the global optimum, cpu time and number of function evaluations.
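The Halton sequence mentioned in this abstract is built from van der Corput sequences in pairwise coprime bases: the base-b digits of the index are reflected about the radix point. A minimal sketch (the function name is illustrative):

```python
def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the
    given base; pairing coprime bases such as 2 and 3 gives a
    2-D Halton sequence."""
    x, f = 0.0, 1.0
    while i > 0:
        f /= base            # next digit weight: 1/b, 1/b^2, ...
        x += f * (i % base)  # reflect the digit about the radix point
        i //= base
    return x

# 2-D Halton points, evenly spread over the unit square.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
```

In base 2 the sequence begins 1/2, 1/4, 3/4, 1/8, ..., filling the unit interval far more evenly than pseudo-random draws, which is the low-discrepancy property exploited for exploring the search region.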
APA, Harvard, Vancouver, ISO, and other styles
42

Herrero-López, Sergio. "Large-scale simulator for global data infrastructure optimization." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/70759.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, February 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 165-172).
Companies depend on information systems to control their operations. During the last decade, Information Technology (IT) infrastructures have grown in scale and complexity. Any large company runs many enterprise applications that serve data to thousands of users which, in turn, consume this information in different locations concurrently and collaboratively. The understanding by the enterprise of its own systems is often limited. No one person in the organization has a complete picture of the way in which applications share and move data files between data centers. In this dissertation an IT infrastructure simulator is developed to evaluate the performance, availability and reliability of large-scale computer systems. The goal is to provide data center operators with a tool to understand the consequences of infrastructure updates. These alterations can include the deployment of new network topologies, hardware configurations or software applications. The simulator was constructed using a multilayered approach and was optimized for multicore scalability. The results produced by the simulator were validated against the real system of a Fortune 500 company. This work pioneers the simulation of large-scale IT infrastructures. It not only reproduces the behavior of data centers at a macroscopic scale, but allows operators to navigate down to the detail of individual elements, such as processors or network links. The combination of queueing networks representing hardware components with message sequences modeling enterprise software enabled reaching a scale and complexity not available in previous research in this area.
by Sergio Herrero-López.
Ph.D.
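The building block of such a queueing-network model is a single-server queue. As a hedged, stdlib-only illustration of the kind of hardware-component model a simulator like this composes (the function name and parameters are invented for this sketch, not taken from the dissertation):

```python
import random

def mm1_mean_sojourn(arrival_rate, service_rate, n=20000, seed=1):
    """Toy single-server queue with exponential interarrival and service
    times: returns the mean time a job spends in the system (wait + service)."""
    rng = random.Random(seed)
    t = 0.0        # current arrival time
    free_at = 0.0  # time at which the server becomes free
    total = 0.0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)   # next job arrives
        start = max(t, free_at)              # job waits if the server is busy
        service = rng.expovariate(service_rate)
        free_at = start + service
        total += free_at - t                 # sojourn time of this job
    return total / n
```

For an M/M/1 queue the theoretical mean sojourn time is 1/(μ−λ), so at λ=0.5, μ=1 the estimate should be near 2; heavier load yields longer sojourns.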
APA, Harvard, Vancouver, ISO, and other styles
43

Daboul, Siad [Verfasser]. "Global Timing Optimization in Chip Design / Siad Daboul." Bonn : Universitäts- und Landesbibliothek Bonn, 2021. http://d-nb.info/1235525341/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hirsch, Michael J. "Grasp-based heuristics for continuous global optimization problems." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0017140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zhou, Xiaojun. "Canonical Dual Algorithms for Global Optimization with Applications." Thesis, Federation University Australia, 2014. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/74678.

Full text
Abstract:
Canonical duality theory provides a unified framework that can transform a nonconvex primal minimization problem into a canonical dual maximization problem over a convex domain without a duality gap. However, global optimality is guaranteed only by a certain positive definite condition, and this condition is not always satisfied. This thesis aims to explore possible techniques for solving global optimization problems based on canonical duality theory. Firstly, an algorithmic framework for canonical duality theory is established, showing that canonical dual algorithms can be developed in four ways under the positive definite condition, explicitly or implicitly: (i) minimizing the primal problem, (ii) maximizing the canonical dual problem, (iii) solving a nonlinear equation arising from the total complementary function, and (iv) solving a nonlinear equation arising from the canonical dual function. Secondly, we show that if there exists a critical point of the canonical dual problem in the positive definite domain, the corresponding global solution to the primal problem can be obtained easily by solving an equivalent semidefinite programming (SDP) problem via off-the-shelf software packages. A specific canonical dual algorithm is given for each problem, including minimization of sums of fourth-order polynomials, the nonconvex quadratically constrained quadratic program (QCQP), and the boolean quadratic program (BQP). Thirdly, we propose a canonical primal-dual algorithmic framework based on the total complementary function. Convergence analysis is discussed from the perspective of variational inequalities (VIs) and contraction methods. Specific canonical primal-dual algorithms for minimizing sums of fourth-order polynomials are given as well, and a real-world application to the sensor network localization problem is illustrated. Next, a canonical sequential reduction approach is proposed to recover an approximate or global solution to the BQP problem. By fixing some previously known components, the original problem can be reduced sequentially to one of lower dimension. This approach is successfully applied to the well-known max-cut problem. Finally, we discuss the canonical dual approach applied to continuous-time constrained optimal control. It is shown that the optimal control law for the n-dimensional constrained linear quadratic regulator can be obtained precisely via a one-dimensional canonical dual variable, and that for the optimal control problem with a concave cost functional, an approximate solution can be obtained by introducing a linear perturbation term.
PhD
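The sequential-reduction idea, fixing some components and then solving a lower-dimensional problem, can be illustrated with a deliberately simple greedy scheme for max-cut. This is a toy sketch under invented names, not the canonical dual algorithm of the thesis:

```python
def maxcut_greedy_fix(weights):
    """Assign each node to a side (+1/-1) one at a time, fixing each
    choice greedily against the already-fixed nodes. Each fixed node
    reduces the remaining problem to one of lower dimension."""
    n = len(weights)
    x = [0] * n
    x[0] = 1  # fix the first node; the rest is an (n-1)-dimensional problem
    for i in range(1, n):
        # Placing node i opposite the weighted majority of fixed nodes
        # maximizes the cut gain of this single decision.
        gain = sum(weights[i][j] * x[j] for j in range(i))
        x[i] = -1 if gain > 0 else 1
    cut = sum(weights[i][j] for i in range(n) for j in range(i)
              if x[i] != x[j])
    return x, cut
```

On a unit-weight triangle the greedy scheme attains the optimal cut value of 2, though in general it only yields an approximate solution.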
APA, Harvard, Vancouver, ISO, and other styles
46

Kunde, Christian [Verfasser]. "Global optimization in conceptual process design / Christian Kunde." Magdeburg : Universitätsbibliothek, 2017. http://d-nb.info/1135662266/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Roddier, Nicolas. "Global optimization via neural networks and D.C. programming." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186617.

Full text
Abstract:
The ultimate goal of this work is to provide a general global optimization method. Due to the difficulty of the problem, the complete task is divided into several sections, which can be grouped into a modeling phase followed by a global minimization phase. Each section draws from a different engineering field, and this work suggests an interface and common ground between these fields. The modeling phase converts a general problem into a given formulation using a particular type of neural network. The architecture required for the neural network forms a new class, the pseudo multilayer neural network, which is introduced and compared to more classical architectures such as the regular multilayer neural network. A key difference between these two classes is that the behavior of the usual multilayer network has to be programmed via iterative training, whereas an extremely efficient direct procedure is given here to synthesize the pseudo multilayer neural network. Any initial problem can therefore be systematically converted into a pseudo multilayer network without undesired programming steps such as the backpropagation rule. The second phase of the work translates the initial global optimization problem into the global minimization of a target function related to the neural network model; systematic procedures are again given. The last phase globally minimizes the target function via the so-called DC programming technique, where DC stands for "Difference of Convex". The pseudo multilayer network was designed so that it can be systematically converted into a DC formulation, and is therefore compatible with DC programming; a translation procedure from the pseudo multilayer neural network model to the DC formulation is given. When a DC program is applied to this last formulation, the resulting solution maps directly to the global minimum of the previously defined target function, thereby producing the globally optimal solution of the neural network modeling the initial problem. Therefore, the optimal solution of the original problem is known as well.
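DC programming minimizes a function written as a difference g − h of two convex functions, typically by linearizing h at the current iterate and minimizing the resulting convex surrogate. A toy, self-contained sketch of this iteration (not the dissertation's procedure) for f(x) = x⁴ − 2x², with g(x) = x⁴ and h(x) = 2x²:

```python
import math

def dca(x0, iters=50):
    """DCA iteration for f(x) = x**4 - 2*x**2, split as g - h with
    g(x) = x**4 (convex) and h(x) = 2*x**2 (convex).
    Each step minimizes g(x) - h'(x_k)*x, a convex surrogate of f;
    setting its derivative 4*x**3 - 4*x_k to zero gives x_{k+1} = cbrt(x_k)."""
    x = x0
    for _ in range(iters):
        slope = 4 * x  # h'(x_k)
        # closed-form minimizer of x**4 - slope*x (sign-safe cube root)
        x = math.copysign(abs(slope / 4) ** (1 / 3), slope)
    return x
```

From a positive start the iterates converge to x = 1 and from a negative start to x = −1, the two global minimizers of f, both with value −1; DCA in general only guarantees convergence to a critical point.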
APA, Harvard, Vancouver, ISO, and other styles
48

McMeen, John Norman Jr. "Ranking Methods for Global Optimization of Molecular Structures." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etd/2447.

Full text
Abstract:
This work presents heuristics for searching large sets of molecular structures for low-energy, stable systems. The goal is to find the globally optimal structures in less time or with fewer computational resources. The strategies intermittently evaluate and rank structures during molecular dynamics optimizations, culling likely weaker solutions early so that better solutions receive more simulation time. Although some imprecision is introduced by not allowing all structures to fully optimize before ranking, the strategies identify metrics that can make these searches more efficient when computational resources are limited.
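The evaluate-rank-cull loop described in the abstract resembles successive halving. A minimal, hypothetical sketch of that pattern (names and interface invented, not the thesis's code):

```python
def successive_halving(candidates, step, energy, rounds=3):
    """Intermittently rank candidates during optimization and cull the
    weaker half each round, so survivors receive more optimization time.

    step:   advances one candidate by a slice of optimization
    energy: scores a candidate (lower is better)
    """
    pool = list(candidates)
    for _ in range(rounds):
        pool = [step(c) for c in pool]         # partial optimization
        pool.sort(key=energy)                  # rank by current energy
        pool = pool[: max(1, len(pool) // 2)]  # keep the better half
    return pool[0]
```

With candidates as plain numbers, a halving `step`, and `abs` as the energy, the best starting candidate (1) survives every cull and is returned after three rounds of refinement.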
APA, Harvard, Vancouver, ISO, and other styles
49

Mohammadi, Hossein. "Kriging-based black-box global optimization : analysis and new algorithms." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEM005/document.

Full text
Abstract:
The Efficient Global Optimization (EGO) algorithm is regarded as the state of the art for global optimization of costly black-box functions. Nevertheless, the method has some difficulties, such as ill-conditioning of the Gaussian process (GP) covariance matrix and slow convergence to the global optimum. The choice of the GP parameters is critical, as it controls the functional family of surrogates used by EGO, and the effect of different parameters on EGO's performance needs further investigation. Finally, it is not clear that the way the GP is learned from data points in EGO is the most appropriate in the context of optimization. This work deals with the analysis and treatment of these issues. Firstly, this dissertation contributes to a better theoretical and practical understanding of the impact of regularization strategies on GPs, presents a new regularization approach based on distribution-wise GPs, and gives practical guidelines for choosing a regularization strategy in GP regression. Secondly, a new optimization algorithm is introduced that combines EGO with CMA-ES, a global yet convergent search. The new algorithm, called EGO-CMA, uses EGO for early exploration and then CMA-ES for final convergence, and improves the performance of both EGO and CMA-ES. Thirdly, the effect of GP parameters on EGO performance is carefully analyzed, allowing a deeper understanding of the influence of these parameters on the EGO iterates. Finally, a new self-adaptive EGO is presented, introducing a novel approach in which the parameters are learned directly from their contribution to the optimization itself.
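At each EGO iteration the next evaluation point is chosen by maximizing the expected improvement (EI), which has a closed form in terms of the GP posterior mean and standard deviation. A stdlib-only sketch of that formula for minimization (the function name and interface are illustrative, not from the thesis):

```python
import math

def expected_improvement(mu, sigma, best):
    """Expected improvement of a candidate whose GP posterior has mean
    `mu` and standard deviation `sigma`, given the best observed value
    so far (minimization convention)."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal cdf
    return (best - mu) * cdf + sigma * pdf
```

EI balances exploitation (low predicted mean) against exploration (high predictive uncertainty): at `mu == best` with unit uncertainty it equals the standard normal density at zero, about 0.399, and it shrinks as the predicted mean rises above the incumbent.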
APA, Harvard, Vancouver, ISO, and other styles
50

Andramonov, Mikhail. "Global minimization of some classes of generalized convex functions." Thesis, Federation University Australia, 2001. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/164850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
