Dissertations / Theses on the topic 'Optimization'

Consult the top 50 dissertations / theses for your research on the topic 'Optimization.' For each entry, the bibliographic reference is given together with the publication's abstract whenever one is available in the metadata.

1

Pieume, Calice Olivier. "Multiobjective optimization approaches in bilevel optimization." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00665605.

Abstract:
This thesis addresses two important classes of optimization: multiobjective optimization and bilevel optimization. The investigation concerns their solution methods, applications, and possible links between them. First of all, we develop a procedure for solving Multiple Objective Linear Programming Problems (MOLPP). The method is based on a new characterization of efficient faces. It exploits the connectedness property of the set of ideal tableaux associated with degenerate points in the case of degeneracy. We also develop an approach for solving Bilevel Linear Programming Problems (BLPP). It is based on the result that an optimal solution of the BLPP is reachable at an extreme point of the underlying region. Consequently, we develop a pivoting technique to find the global optimal solution on an expanded tableau that represents the data of the BLPP. The solutions obtained by our algorithm on some problems available in the literature show that these problems had until now been solved incorrectly. Some applications of these two areas of optimization are explored. An application of multicriteria optimization techniques for finding an optimal plan for the distribution of electrical energy in Cameroon is provided. Similarly, a bilevel optimization model that could help protect any economic sector in which local initiatives are threatened is proposed. Finally, the relationship between the two classes of optimization is investigated. We first look at the conditions that guarantee that the optimal solution of a given BPP is Pareto optimal for both the upper- and lower-level objective functions. We then introduce a new relation that establishes a link between MOLPP and BLPP. Moreover, we show that, to solve a BPP, it is possible to solve two artificial MOPPs. In addition, we explore the Bilevel Multiobjective Programming Problem (BMPP), a case of BPP where each decision maker (DM) has more than one objective function. Given a BMPP, we show how to construct two artificial MOPPs such that any point that is efficient for both problems is also efficient for the BMPP. For the linear case specifically, we introduce an artificial MOLPP whose resolution can generate the whole feasible set of the leader DM. Based on this result, and depending on whether or not the leader can evaluate his preferences for his different objective functions, two approaches for obtaining efficient solutions are presented.
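For orientation, the bilevel linear program targeted by the pivoting technique has the following generic form (standard textbook notation, not necessarily the exact model of the thesis):

\[
\min_{x \ge 0,\; y}\; c_1^{\top} x + d_1^{\top} y
\quad \text{s.t.} \quad
y \in \arg\min_{y' \ge 0} \left\{ d_2^{\top} y' \;:\; A x + B y' \le b \right\},
\]

where the leader (upper-level decision maker) fixes x and the follower responds with y. The extreme-point property cited in the abstract is what justifies a simplex-like pivoting search over the expanded tableau.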
2

Ruan, Ning. "Global optimization for nonconvex optimization problems." Thesis, Curtin University, 2012. http://hdl.handle.net/20.500.11937/1936.

Abstract:
Duality is one of the most successful ideas in modern science [46] [91]. It is essential in natural phenomena, particularly in physics and mathematics [39] [94] [96]. In this thesis, we consider the canonical duality theory for several classes of optimization problems. The first problem that we consider is the minimization of a general sum of fourth-order polynomials. This problem arises extensively in engineering and science, including database analysis, computational biology, sensor network communications, nonconvex mechanics, and ecology. We first show that this global optimization problem is actually equivalent to a discretized minimal potential variational problem in large deformation mechanics. Therefore, a general analytical solution is proposed by using the canonical duality theory. The second problem that we consider is a nonconvex quadratic-exponential optimization problem. By using the canonical duality theory, the nonconvex primal problem in n-dimensional space can be converted into a one-dimensional canonical dual problem, which is either a concave maximization or a convex minimization problem with zero duality gap. Several examples are solved so as to illustrate the applicability of the theory developed. The third problem that we consider is quadratic minimization subject to either box or integer constraints. Results show that these nonconvex problems can be converted into concave maximization dual problems over convex feasible spaces without duality gap, and that the Boolean integer programming problem is actually equivalent to a critical point problem in continuous space. These dual problems can be solved under certain conditions. Both existence and uniqueness of the canonical dual solutions are presented. A canonical duality algorithm is described and applications are illustrated. The fourth problem that we consider is a quadratic discrete value selection problem subject to inequality constraints. The problem is first transformed into a quadratic 0-1 integer programming problem. The dual problem is then constructed by using the canonical duality theory. Under appropriate conditions, this dual problem is a maximization problem of a concave function over a convex continuous space. Theoretical results show that the canonical duality theory can either provide a global optimization solution, or an optimal lower bound approximation to this NP-hard problem. Numerical simulation studies, including some relatively large-scale problems, are carried out so as to demonstrate the effectiveness and efficiency of the canonical duality method. An open problem for understanding NP-hard problems is proposed. The fifth problem that we consider is a mixed-integer quadratic minimization problem with fixed cost terms. We show that this well-known NP-hard problem in R^{2n} can be transformed into a continuous concave maximization dual problem over a convex feasible subset of R^n with zero duality gap. We also discuss connections between the proposed canonical duality theory approach and the classical Lagrangian duality approach. The resulting canonical dual problem can be solved, under certain conditions, by traditional convex programming methods. Conditions for the existence and uniqueness of global optimal solutions are presented. An application to a decoupled mixed-integer problem is used to illustrate the derivation of analytic solutions for globally minimizing the objective function.
Numerical examples for both decoupled and general mixed-integer problems are presented, and an open problem is proposed for future study. The sixth problem that we consider is a general nonconvex quadratic minimization problem with nonconvex constraints. By using the canonical dual transformation, the nonconvex primal problem can be converted into a canonical dual problem (i.e., a concave maximization problem with zero duality gap). Illustrative applications to quadratic minimization with multiple quadratic constraints, box/integer constraints, and general nonconvex polynomial constraints are discussed, along with insightful connections to classical Lagrangian duality. Conditions for ensuring the existence and uniqueness of global optimal solutions are presented. Several numerical examples are solved. The seventh problem that we consider is a general nonlinear algebraic system. By using the least-squares method, the nonlinear system of m quadratic equations in n-dimensional space is first formulated as a nonconvex optimization problem. We then prove that, by using the canonical duality theory, this nonconvex problem is equivalent to a concave maximization problem in R^m, which can be solved by well-developed convex optimization techniques. Both existence and uniqueness of global optimal solutions are discussed, and several illustrative examples are presented. The eighth problem that we consider is a general sensor network localization problem. It is shown that, by the canonical duality theory, this nonconvex minimization problem is equivalent to a concave maximization problem over a convex set in a symmetric matrix space, and hence can be solved by combining a perturbation technique with existing optimization techniques. Applications are illustrated, and results show that the proposed method is potentially a powerful one for large-scale sensor network localization problems.
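To make the seventh problem concrete, the least-squares reformulation it describes can be sketched as follows (generic notation assumed for illustration):

\[
q_i(x) = \tfrac{1}{2} x^{\top} A_i x + b_i^{\top} x - c_i = 0, \quad i = 1, \dots, m
\qquad \Longrightarrow \qquad
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{m} q_i(x)^2,
\]

a nonconvex problem whose canonical dual, per the abstract, is a concave maximization over R^m with zero duality gap.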
3

Gedin, Sebastian. "Securities settlement optimization using an optimization software solution." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279295.

Abstract:
Many people have engaged in the trading of stocks and other securities, but few are aware of how the transactions are executed. The process of transferring the ownership of securities, often in exchange for cash, is called securities settlement. If the parties involved in the transaction have the assets that they are obligated to deliver at the time of settlement, securities settlement is straightforward. But if the securities settlement system is faced with a set of transactions, in which some party fails to meet their obligation, some transactions will fail to settle. Since the receiving party of a transaction that fails to settle may have been depending on the assets from that transaction in order to meet their own obligations, a single settlement fail can have a ripple effect causing many other transactions to fail to settle. Securities settlement optimization is the problem of finding the optimal set of transactions to settle, given the available assets. In this thesis, we model securities settlement optimization as an integer linear programming problem and evaluate how an optimization solver software solution performs in comparison to a greedy heuristic algorithm on problem instances derived from real settlement data. We find that the solver offers a significant advantage over the heuristic algorithm in terms of reducing settlement fails at the cost of longer execution time. Furthermore, we find that even if the solver is only allowed to run for a couple of minutes it offers a significant advantage compared to the heuristic algorithm. Our conclusion is that using this type of solver for securities settlement optimization seems to be a viable approach, but that further experimentation with other data sets is needed.
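A minimal sketch of the kind of integer program described above, written with the open-source PuLP library (the data layout, transactions as (seller, buyer, security, quantity, cash) tuples, is an assumption for illustration, not the thesis's actual model):

import pulp

# (seller, buyer, security, quantity, cash_amount)
txns = [("A", "B", "XS1", 10, 100.0),
        ("B", "C", "XS1", 10, 101.0),
        ("C", "A", "XS2", 5, 55.0)]
holdings = {("A", "XS1"): 10, ("C", "XS2"): 5}   # opening security positions
cash = {"A": 0.0, "B": 150.0, "C": 120.0}        # opening cash

prob = pulp.LpProblem("settlement", pulp.LpMaximize)
x = [pulp.LpVariable(f"settle_{i}", cat="Binary") for i in range(len(txns))]

# Objective: maximize settled cash value (one common proxy for minimizing fails).
prob += pulp.lpSum(x[i] * t[4] for i, t in enumerate(txns))

parties = {p for t in txns for p in (t[0], t[1])}
secs = {t[2] for t in txns}
for p in parties:
    # Securities: deliveries may not exceed holdings plus settled receipts,
    # which is exactly the coupling that lets one fail ripple through a chain.
    for s in secs:
        out_flow = pulp.lpSum(x[i] * t[3] for i, t in enumerate(txns)
                              if t[0] == p and t[2] == s)
        in_flow = pulp.lpSum(x[i] * t[3] for i, t in enumerate(txns)
                             if t[1] == p and t[2] == s)
        prob += out_flow <= holdings.get((p, s), 0) + in_flow
    # Cash: payments may not exceed opening cash plus settled proceeds.
    pay = pulp.lpSum(x[i] * t[4] for i, t in enumerate(txns) if t[1] == p)
    recv = pulp.lpSum(x[i] * t[4] for i, t in enumerate(txns) if t[0] == p)
    prob += pay <= cash.get(p, 0.0) + recv

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([int(pulp.value(v)) for v in x])   # which transactions settle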
4

Sim, Melvyn 1971. "Robust optimization." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17725.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004.
Includes bibliographical references (p. 169-171).
We propose new methodologies in robust optimization that promise greater tractability, both theoretically and practically, than the classical robust framework. We cover a broad range of mathematical optimization problems, including linear optimization (LP), quadratically constrained quadratic optimization (QCQP), general conic optimization including second order cone programming (SOCP) and semidefinite optimization (SDP), mixed integer optimization (MIP), network flows, and 0-1 discrete optimization. Our approach allows the modeler to vary the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations, while keeping the problem tractable. Specifically, for LP, MIP, SOCP, and SDP, our approaches retain the same complexity class as the original model. The robust QCQP becomes an SOCP, which is computationally as attractive as the nominal problem. In network flows, we propose an algorithm for solving the robust minimum cost flow problem by solving a polynomial number of nominal minimum cost flow problems in a modified network. For 0-1 discrete optimization problems with cost uncertainty, the robust counterpart of a polynomially solvable 0-1 discrete optimization problem remains polynomially solvable, and the robust counterpart of an NP-hard α-approximable 0-1 discrete optimization problem remains α-approximable. Under an ellipsoidal uncertainty set, we show that the robust problem retains the complexity of the nominal problem when the data is uncorrelated and identically distributed. For uncorrelated, but not identically distributed, data, we propose an approximation method that solves the robust problem within arbitrary accuracy. We also propose a Frank-Wolfe type algorithm for this case, which we prove converges to a locally optimal solution, and which in computational experiments is remarkably effective.
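The "level of conservatism" device this line of work is known for, the budget of uncertainty, can be sketched as follows (the published journal formulation, with notation assumed here for illustration). For a constraint sum_j a_j x_j <= b whose coefficients may each deviate by at most â_j, protecting against any Γ of them deviating simultaneously yields

\[
\sum_{j} a_j x_j \;+\; \max_{S \subseteq J,\; |S| \le \Gamma} \; \sum_{j \in S} \hat{a}_j \, |x_j| \;\le\; b,
\]

and applying LP duality to the inner maximization turns this into a small set of additional linear constraints, which is why the robust counterpart stays in the same complexity class as the nominal problem.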
by Melvyn Sim.
Ph.D.
5

Lopes, David Granja. "Collections Optimization." Master's thesis, Instituto Superior de Economia e Gestão, 2013. http://hdl.handle.net/10400.5/6505.

Abstract:
Master's in Economic and Business Decision Making (Mestrado em Decisão Económica e Empresarial)
This work focuses on optimizing the transport of automotive components. It begins with a literature review, followed by a framing of the situation under study and of the company where the model is applied (Volkswagen Autoeuropa). A mathematical model that identifies optimal routes is developed. The main objective of this work is the identification of optimized routes that increase the economic and operational efficiency of the Volkswagen Autoeuropa supply chain. The results were very promising: an average saving of 37% was achieved on the identified routes.
6

Bylund, Johanna. "Collateral Optimization." Thesis, Umeå universitet, Institutionen för fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148006.

7

Yu, Hao. "Run-time optimization of adaptive irregular applications." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/1285.

Abstract:
Compared to traditional compile-time optimization, run-time optimization can offer significant performance improvements when parallelizing and optimizing adaptive irregular applications, because it performs program analysis and adaptive optimizations during program execution. Run-time techniques can succeed where static techniques fail because they exploit the characteristics of the input data, the program's dynamic behavior, and the underlying execution environment. When optimizing adaptive irregular applications for parallel execution, a common observation is that the effectiveness of the optimizing transformations depends on the program's input data and its dynamic phases. This dissertation presents a set of run-time optimization techniques that match the characteristics of a program's dynamic memory access patterns to the appropriate optimization (parallelization) transformations. First, we present a general adaptive algorithm selection framework to automatically and adaptively select at run-time the best performing, functionally equivalent algorithm for each of its execution instances. The selection process is based on off-line, automatically generated prediction models and on characteristics (collected and analyzed dynamically) of the algorithm's input data. In this dissertation, we specialize this framework for the automatic selection of reduction algorithms. In this research, we have identified a small set of machine-independent, high-level characterization parameters, and we deployed an off-line, systematic experimental process to generate prediction models. These models, in turn, match the parameters to the best optimization transformations for a given machine. The technique has been evaluated thoroughly in terms of applications, platforms, and programs' dynamic behaviors. Specifically, for reduction algorithm selection, the selected performance is within 2% of optimal and on average is 60% better than "Replicated Buffer," the default parallel reduction algorithm specified by the OpenMP standard. To reduce the overhead of speculative run-time parallelization, we have developed an adaptive run-time parallelization technique that dynamically chooses efficient shadow structures to record a program's dynamic memory access patterns for parallelization. This technique complements the original speculative run-time parallelization technique, the LRPD test, in parallelizing loops with sparse memory accesses. The techniques presented in this dissertation have been implemented in an optimizing research compiler and can be viewed as effective building blocks for comprehensive run-time optimization systems, e.g., feedback-directed optimization systems and dynamic compilation systems.
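A minimal Python sketch of the run-time selection idea (the trait names, thresholds, and algorithm labels below are illustrative assumptions, not the dissertation's actual model):

import numpy as np

def characterize(indices, n_bins):
    """Machine-independent traits of a reduction loop's access pattern."""
    touched = np.unique(indices)
    return {
        "connectivity": touched.size / n_bins,              # fraction of bins written
        "contention": indices.size / max(touched.size, 1),  # average writes per bin
    }

def select_reduction(traits):
    """Stand-in for the off-line-generated prediction model."""
    if traits["connectivity"] > 0.5:
        return "replicated_buffer"        # dense access: private copies win
    if traits["contention"] > 8.0:
        return "local_write"              # few hot bins: reorder by owner
    return "selective_privatization"      # sparse access: privatize hot bins only

idx = np.random.randint(0, 100_000, size=50_000)    # one execution instance
print(select_reduction(characterize(idx, 100_000)))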
8

Arayapan, Khanittha, and Piyanut Warunyuwong. "Logistics Optimization: Application of Optimization Modeling in Inbound Logistics." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-6213.

Abstract:
To be a market leader, low cost and responsiveness are the key success factors. Logistics activities create high costs and reduce the competitiveness of the company, especially for a remote production base. Thus, the logistics activities of delivery planning, freight forwarder selection, and delivery mode selection must be optimized. The focus of this paper is inbound logistics, due to its large share of the total cost and its involvement with several stakeholders. Optimization theory and Microsoft Excel's Solver are used to create the standard optimization tools, since Solver is an efficient and user-friendly program. The models are developed based on supply chain management theory in order to achieve the lowest cost, responsiveness, and shared objectives. Two delivery planning optimization models are formulated: container loading for fixed slitting and loading patterns, and container loading for pallet-loaded material. A delivery mode selection model is also constructed, using optimization concepts to determine the best alternative. Furthermore, a freight forwarder selection process is created by extending the use of the delivery mode selection model. The results show that safety stock, loading pattern, transport mode, and minimum order quantity (MOQ) significantly affect the total logistics cost. Including hidden costs, such as long transit times and delay penalties, makes the freight forwarder selection process more realistic and reliable. Shorter processing time, a guaranteed optimal solution, increased transparency, and better communication are gained by using these optimization models. However, proper boundaries must be defined carefully to obtain a feasible solution.
9

Goucher, Daniel. "Database optimization: An investigation of code-based optimization techniques." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-153754.

Abstract:
In this report, database optimization is investigated with a focus on the techniques that can be implemented in software rather than hardware. The reason for this specialization is that software optimization is an area that all developers have the resources to implement. To investigate these optimization techniques, I first present the theory behind different techniques, including indices, caching, and materialized views. A brief explanation of how these techniques can be implemented in MySQL and PHP is then given. With the help of these theories, a set of tests has been designed to see how the different optimization techniques react to different types of queries and data sizes. The results of these tests are then presented, with a discussion of how the results coincide with the theories presented and how they can be used to benefit a specific system.
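As a self-contained illustration of the index effect discussed above (using Python's built-in SQLite rather than MySQL purely so the example runs without a server; timings are machine-dependent):

import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, val INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                ((i, i % 1000) for i in range(500_000)))

def timed(query):
    start = time.perf_counter()
    con.execute(query).fetchall()
    return time.perf_counter() - start

before = timed("SELECT * FROM t WHERE val = 42")   # full table scan
con.execute("CREATE INDEX idx_val ON t (val)")
after = timed("SELECT * FROM t WHERE val = 42")    # index lookup
print(f"full scan: {before:.4f}s  indexed: {after:.4f}s")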
10

Aggarwal, Varun. "Analog circuit optimization using evolutionary algorithms and convex optimization." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40525.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 83-88).
In this thesis, we analyze state-of-the-art techniques for analog circuit sizing and compare them on various metrics. We ascertain that a methodology which improves the accuracy of sizing without increasing the run time or the designer's effort would be a contribution. We argue that the accuracy of geometric programming can be improved without adversely influencing the run time or increasing the designer's effort. This is facilitated by decomposing geometric programming modeling into two steps, which decouples the accuracy of the models from the run time of geometric programming. We design a new algorithm for producing accurate posynomial models for MOS transistor parameters, which is the first step of the decomposition. The new algorithm can generate posynomial models with a variable number of terms and real-valued exponents. The algorithm is a hybrid of a genetic algorithm and a convex optimization technique. We study the performance of the algorithm on artificially created benchmark problems. We show that the accuracy of posynomial models of MOS parameters is improved by a considerable amount by using the new algorithm. The new posynomial modeling algorithm can be used in any application of geometric programming and is not limited to MOS parameter modeling. In the last chapter, we discuss various ideas to improve the state of the art in circuit sizing.
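For reference, a posynomial in positive variables x_1, ..., x_n has the standard form

\[
f(x) \;=\; \sum_{k=1}^{K} c_k \prod_{i=1}^{n} x_i^{a_{ik}},
\qquad c_k > 0, \;\; a_{ik} \in \mathbb{R},
\]

so the hybrid algorithm described above can plausibly be read as searching over the number of terms K and the real-valued exponents a_ik (the nonconvex part, suited to the genetic algorithm) while fitting the positive coefficients c_k by convex optimization.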
by Varun Aggarwal.
S.M.
11

Olsson, Per-Magnus. "Methods for Network Optimization and Parallel Derivative-free Optimization." Doctoral thesis, Linköpings universitet, Optimeringslära, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-104110.

Abstract:
This thesis is divided into two parts, each concerned with a specific problem. The problem under consideration in the first part is to find suitable graph representations, abstractions, cost measures, and algorithms for calculating placements of unmanned aerial vehicles (UAVs) such that they can keep one or several static targets under constant surveillance. Each target is kept under surveillance by a surveillance UAV, which transmits information, typically real-time video, to a relay UAV. The role of the relay UAV is to retransmit the information to another relay UAV, which retransmits it again to yet another UAV. This chain of retransmission continues until the information eventually reaches an operator at a base station. When there is a single target, all Pareto-optimal solutions, i.e. all relevant compromises between quality and the number of UAVs required, can be found using an efficient new algorithm. If there are several targets, the problem becomes a variant of the Steiner tree problem, and to solve it we adapt an existing algorithm to find an initial tree. Once it is found, we can further improve it using a new algorithm presented in this thesis. The second problem is the optimization of time-consuming problems where the objective function is seen as a black box, to which input parameters are sent and from which a function value is returned. This has the important implication that no gradient or Hessian information is available. Such problems are common when simulators are used to perform advanced calculations such as crash test simulations of cars, dynamic multibody simulations, etc. It is common that a single function evaluation takes several hours. Algorithms for solving such problems can be broadly divided into direct search algorithms and model-building algorithms. The first kind evaluates the objective function directly, whereas the second kind builds a model of the objective function, which is then optimized in order to find a new point where the objective function is believed to have a good value. The objective function is then evaluated at that point. Since the objective function is very time-consuming, it is common to focus on minimizing the number of function evaluations. However, this completely disregards the possibility of performing calculations in parallel, and to exploit this we investigate different ways parallelization can be used in model-building algorithms. Among these are using several starting points, generating several new points in each iteration, new ways of predicting a point's value, and more. We have implemented the parallel extensions in one of the state-of-the-art algorithms for derivative-free optimization and report results from testing on synthetic benchmarks as well as from solving real industrial problems.
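A minimal sketch of the "several new points per iteration, evaluated in parallel" idea (the objective and the candidate generator below are illustrative placeholders, not the thesis's algorithm):

from concurrent.futures import ProcessPoolExecutor
import random

def expensive_objective(x):           # stands in for an hours-long simulation
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def propose_candidates(best, k=4):    # stands in for optimizing a surrogate model
    return [[b + random.gauss(0.0, 0.5) for b in best] for _ in range(k)]

if __name__ == "__main__":
    best = [0.0, 0.0]
    best_val = expensive_objective(best)
    with ProcessPoolExecutor() as pool:
        for _ in range(10):                                    # outer iterations
            cands = propose_candidates(best)
            vals = list(pool.map(expensive_objective, cands))  # parallel step
            i = min(range(len(vals)), key=vals.__getitem__)
            if vals[i] < best_val:
                best, best_val = cands[i], vals[i]
    print(best, best_val)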
12

Sheehan, Shane P. "Spacecraft Trajectory Optimization Suite (STOpS): Optimization of Low-Thrust Interplanetary Spacecraft Trajectories Using Modern Optimization Techniques." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1901.

Abstract:
The work presented here is a continuation of Spacecraft Trajectory Optimization Suite (STOpS), a master’s thesis written by Timothy Fitzgerald at California Polytechnic State University, San Luis Obispo. Low-thrust spacecraft engines are becoming much more common due to their high efficiency, especially for interplanetary trajectories. The version of STOpS presented here optimizes low-thrust trajectories using the Island Model Paradigm with three stochastic evolutionary algorithms: the genetic algorithm, differential evolution, and particle swarm optimization. While the algorithms used here were designed for the original STOpS, they were modified for this work. The low-thrust STOpS was successfully validated with two trajectory problems and their known near-optimal solutions. The first verification case was a constant-thrust, variable-time Earth orbit to Mars orbit transfer where the thrust was 3.787 Newtons and the time was approximately 195 days. The second verification case was a variable-thrust, constant-time Earth orbit to Mercury orbit transfer with the thrust coming from a solar electric propulsion model equation and the time being 355 days. Low-thrust STOpS found similar near-optimal solutions in each case. The final result of this work is a versatile MATLAB tool for optimizing low-thrust interplanetary trajectories.
13

Fitzgerald, Timothy J. "Spacecraft Trajectory Optimization Suite (STOpS): Optimization of Multiple Gravity Assist Spacecraft Trajectories Using Modern Optimization Techniques." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1503.

Abstract:
In trajectory optimization, a common objective is to minimize propellant mass via multiple gravity assist maneuvers (MGAs). Some computer programs have been developed to analyze MGA trajectories. One of these programs, Parallel Global Multiobjective Optimization (PaGMO), uses an interesting technique known as the Island Model Paradigm. This work provides the community with a MATLAB optimizer, STOpS, that utilizes this same Island Model Paradigm with five different optimization algorithms. STOpS allows optimization of a weighted combination of many parameters. This work contains a study on optimization algorithm performance and how each algorithm is affected by its available settings. STOpS successfully found optimal trajectories for the Mariner 10 mission and the Voyager 2 mission that were similar to the actual missions flown. STOpS did not necessarily find better trajectories than those actually flown, but instead demonstrated the capability to quickly and successfully analyze and plan trajectories. The analysis for each of these missions took 2-3 days. The final program is a robust tool that has taken existing techniques and applied them to the specific problem of trajectory optimization, so it can repeatedly and reliably solve these types of problems.
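A compact sketch of the Island Model Paradigm used by both STOpS theses above (an illustrative Python toy, not the MATLAB tool itself): several populations evolve independently and periodically exchange their best members.

import random

def evolve(pop, fitness, n_gen=20):
    """One island: toy evolution with Gaussian mutation, keeping the best."""
    for _ in range(n_gen):
        children = [[g + random.gauss(0.0, 0.1) for g in random.choice(pop)]
                    for _ in range(len(pop))]
        pop = sorted(pop + children, key=fitness)[:len(pop)]
    return pop

def island_model(fitness, n_islands=4, pop_size=10, n_epochs=5):
    islands = [[[random.uniform(-5, 5) for _ in range(2)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(n_epochs):
        islands = [evolve(pop, fitness) for pop in islands]
        # Migration: each island's best replaces its neighbor's worst.
        for i, pop in enumerate(islands):
            islands[(i + 1) % n_islands][-1] = min(pop, key=fitness)[:]
    return min((ind for pop in islands for ind in pop), key=fitness)

print(island_model(lambda x: (x[0] - 3.0) ** 2 + x[1] ** 2))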
14

Zhou, Fangjun. "Nonmonotone methods in optimization and DC optimization of location problems." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/21777.

15

Boissier, Mathilde. "Coupling structural optimization and trajectory optimization methods in additive manufacturing." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX084.

Abstract:
This work investigates path planning optimization for powder bed fusion additive manufacturing processes and relates it to the design of the built part. The state of the art mainly studies trajectories based on existing patterns and, besides their mechanical evaluation, their relevance has not been related to the object's shape. We propose in this work a systematic approach to optimize the path without any a priori restriction. The typical optimization problem is to melt the desired structure, without over-heating (to avoid thermally induced residual stresses) and possibly with a minimal path length. The state equation is the heat equation with a source term depending on the scanning path. Two physical 2-d models involving temperature constraints are proposed: a transient one and a steady-state one (in which time dependence is removed). Based on shape optimization for the steady-state model and on control for the transient model, path optimization algorithms are developed. Numerical experiments are then performed, allowing a critical assessment of the choices we made. To increase the path design freedom, we modify the steady-state algorithm to introduce path splits. Two methods are compared. In the first one, the source power is added to the optimization variables, and an algorithm mixing relaxation-penalization techniques with control of the total variation is set up. In the second method, the notion of topological derivative is applied to the path to cleverly remove and add pieces. Eventually, in the steady-state setting, we conduct a concurrent optimization of the part's shape, to improve its mechanical performance, and of the scanning path. This multiphysics optimization problem opens perspectives for direct applications and future generalizations.
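As a generic sketch of the transient model class described here (notation assumed for illustration, not the exact system of the thesis): with the laser following the path x(t) at power P, the temperature field T solves

\[
\rho c_p \, \partial_t T \;-\; \nabla \cdot (\lambda \nabla T) \;=\; P \, \delta\big(x - x(t)\big)
\quad \text{in } \Omega \times (0, t_f),
\]

and the path x(.) is the control: it must raise the part above the melting temperature everywhere, while an upper temperature bound prevents over-heating and the path length (hence build time) is penalized.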
16

Sghir, Inès. "A Multi-Agent based Optimization Method for Combinatorial Optimization Problems." Thesis, Angers, 2016. http://www.theses.fr/2016ANGE0009/document.

Abstract:
We elaborate a multi-agent based optimization method for combinatorial optimization problems, named MAOM-COP. It combines metaheuristics, multi-agent systems, and reinforcement learning. Although existing heuristics contain several techniques to escape local optima, they do not have an overall view of the evolution of the optimization search. Our main objective is to use the multi-agent system to create intelligent cooperative search methods. These methods explore several existing metaheuristics. MAOM-COP is composed of the following agents: the decision-maker agent, the intensification agents, and the diversification agents, the latter consisting of the perturbation agent and the crossover agents. Based on learning techniques, the decision-maker agent decides dynamically which agents to activate, the intensification agents or the crossover agents. If the intensification agents are activated, they apply local search algorithms. During their searches, they can exchange information, and they can trigger the perturbation agent. If the crossover agents are activated, they perform recombination operations. We applied MAOM-COP to the following problems: quadratic assignment, graph coloring, winner determination, and multidimensional knapsack. MAOM-COP shows competitive performance compared with the approaches of the literature.
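A toy sketch of the decision-maker agent's learning loop (epsilon-greedy value tracking stands in for the thesis's learning scheme, and the simulated improvements are placeholders):

import random

agents = ["intensification", "crossover"]
q = {a: 0.0 for a in agents}       # learned value of activating each agent
counts = {a: 0 for a in agents}

def choose(epsilon=0.2):
    if random.random() < epsilon:              # explore
        return random.choice(agents)
    return max(agents, key=q.get)              # exploit the best-known agent

def update(agent, reward):
    """Incremental average: q tracks mean improvement per activation."""
    counts[agent] += 1
    q[agent] += (reward - q[agent]) / counts[agent]

best = 1000.0                                  # current best objective value
for step in range(100):
    agent = choose()
    # Placeholder for actually running the chosen agent on the problem:
    new_best = best - random.random() * (2.0 if agent == "crossover" else 1.0)
    update(agent, best - new_best)             # reward = improvement achieved
    best = min(best, new_best)
print(q)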
17

Yu, Feng. "Constructing Accurate Synopses for Database Query Optimization and Re-optimization." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/dissertations/709.

Abstract:
Fast and accurate estimations for complex queries are profoundly beneficial for large databases with heavy workloads. The most widely adopted query optimizers use synopses to tune databases, for both optimization and re-optimization. Chapters 1 through 3 focus on synopses for query optimization. We propose a statistical summary for a database, called CS2 (Correlated Sample Synopsis), to provide rapid and accurate result size estimations for all queries with joins and arbitrary selections. Unlike the state-of-the-art techniques, CS2 does not completely rely on simple random samples, but mainly consists of correlated sample tuples that retain join relationships with less storage. We introduce a statistical technique, called the reverse sample, and design an innovative estimator, called the reverse estimator, to fully utilize correlated sample tuples for query estimation. We prove both theoretically and empirically that the reverse estimator is unbiased and accurate using CS2. Extensive experiments on multiple datasets show that CS2 is fast to construct and derives more accurate estimations than existing methods with the same space budget. Chapter 4 focuses on synopses for query re-optimization of repetitive queries. Repetitive queries refer to those queries that are likely to be executed repeatedly in the future, such as those used to generate periodic reports, perform routine maintenance, summarize data for analysis, etc. They can constitute a large part of the daily activities of a database system and deserve more optimization effort. We propose to collect information about how tuples are joined in a query, called the query or join trace, during the execution of the query. We intend to use this join trace to compute the selectivities of the joins in all join orders for the query. We use existing operators, as well as new operators, to gather such information. We show that the trace gathered from a query is sufficient to compute the exact selectivities of all plans of the query. To reduce the overhead of generating a trace, we propose a sampling scheme that generates only a sample of the trace. Experimental results have shown that with only a small sample of the trace, accurate estimates of join selectivities can be obtained. The sample trace makes re-estimation of the join selectivities of a repetitive query efficient and accurate.
18

Jensen, Marina. "Conversion Rate Optimization : A Qualitative Approach to Identifying Optimization Barriers." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44487.

Abstract:
This thesis examined the question: "What barriers are preventing Swedish companies from performing a structured conversion rate optimization process?" The purpose is to understand what prevents companies from successfully executing conversion rate optimization (CRO), given that CRO is an important part of most digital marketing activities and that, despite increases in budget and importance within marketing, resource constraints continue to be the biggest obstacle. The method employed to investigate this question was qualitative interviews with participants who worked with websites in seven different companies. An analysis was carried out, estimating the participating companies' level of knowledge, overall structure, priorities, and current obstacles. It was established that the interviewees had several different areas of concern with regard to conversion rate optimization. Limited time, budget, priorities, knowledge, ownership, structured approach, and interpreting data were all treated in the analysis. A discussion was carried out to examine the definition of the "biggest" barrier, as some barriers were more common than others but easier to overcome. Overall, these obstacles could all be traced back to barriers of prioritization, structure, and ownership. The conclusion was that companies must have a more structured working process within the area of conversion rate optimization in order for this practice to be prioritized as a substantial part of companies' online marketing activities.
19

Müller, Stephan. "Constrained portfolio optimization /." [S.l.] : [s.n.], 2005. http://aleph.unisg.ch/hsgscan/hm00133325.pdf.

20

Jayaraman, Shankar. "Dynamic cutback optimization." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33812.

Abstract:
The focus of this thesis is to develop and evaluate a cutback noise minimization process - also known as dynamic cutback optimization - that considers engine spool-down during thrust cutback and is consistent with ICAO and FAR Part 36 noise certification procedures. Simplified methods for flyover EPNL prediction used by propulsion designers assume instantaneous thrust reduction and do not take into account the spooling down of the engine during the cutback procedure. The thesis investigates whether there is an additional noise benefit to be gained by modeling the engine spool-down behavior. This in turn would improve the margin between the predicted EPNL and Stage 4 noise regulations. Modeling dynamic cutback also impacts engine design during the preliminary and detailed design stages. Reduced noise levels due to cutback may be traded for a smaller engine fan diameter, which in turn reduces weight, fuel burn, and cost.
21

Li, Zhuo. "Fast interconnect optimization." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3250.

Abstract:
As Very Large Scale Integration (VLSI) circuit technology continues to scale and operating frequencies increase, delay optimization techniques for interconnect are increasingly important for achieving timing closure of high-performance designs. For gigahertz microprocessor and multi-million gate ASIC designs, it is crucial to have fast algorithms in the design automation tools for many classical problems in the field, to shorten the time to market of the VLSI chip. This research presents algorithmic techniques and constructive models for two such problems: (1) fast buffer insertion for delay optimization, (2) wire sizing for delay optimization and variation minimization on non-tree networks. For the buffer insertion problem, this dissertation proposes several innovative speedup techniques for different problem formulations and realistic requirements. For the basic buffer insertion problem, an O(n log^2 n) optimal algorithm that runs much faster than the previous classical van Ginneken's O(n^2) algorithm is proposed, where n is the number of buffer positions. For modern design libraries that contain hundreds of buffers, this research also proposes an optimal algorithm in O(bn^2) time for b buffer types, a significant improvement over the previous O(b^2 n^2) algorithm by Lillis, Cheng and Lin. For nets with small numbers of sinks and large numbers of buffer positions, a simple O(mn) optimal algorithm is proposed, where m is the number of sinks. For the buffer insertion with minimum cost problem, the problem is first proved to be NP-complete. Then several optimal and approximation techniques are proposed to further speed up the buffer insertion algorithm with resource control for big industrial designs. For the wire sizing problem, we propose a systematic method to size the wires of general non-tree RC networks. The new method can be used for delay optimization and variation reduction.
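For context, a toy Python sketch of the classical van Ginneken-style dynamic programming that these speedups build on (Elmore delay on a single source-to-sink path; the constants and the final driver metric are illustrative assumptions):

R_W, C_W = 1.0, 1.0              # wire resistance/capacitance per unit length
R_B, C_B, D_B = 2.0, 0.5, 1.0    # buffer output resistance, input cap, delay
R_DRV = 1.0                      # source driver resistance

def add_wire(cands, length):
    """Propagate (downstream cap, required time) candidates up one segment."""
    r, c = R_W * length, C_W * length
    return [(cap + c, req - r * (c / 2 + cap)) for cap, req in cands]

def prune(cands):
    """Keep only non-dominated candidates: lower cap or higher required time."""
    cands = sorted(cands, key=lambda t: (t[0], -t[1]))  # by cap, then req desc
    kept, best_req = [], float("-inf")
    for cap, req in cands:
        if req > best_req:
            kept.append((cap, req))
            best_req = req
    return kept

def add_buffer_option(cands):
    """At a legal buffer position, add the option of inserting a buffer."""
    cap, req = max(cands, key=lambda t: t[1] - R_B * t[0])
    return prune(cands + [(C_B, req - R_B * cap - D_B)])

cands = [(1.0, 10.0)]                # sink: (load capacitance, required time)
for seg in [2.0, 1.5, 1.0]:          # walk from the sink toward the source
    cands = add_buffer_option(add_wire(cands, seg))
print(max(req - R_DRV * cap for cap, req in cands))  # best slack at the driver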
22

Gopinath, Varun. "Industrial Silo Optimization." Thesis, Linköpings universitet, Maskinkonstruktion, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67645.

Abstract:
This thesis aims to build a working design-analyze-optimize methodology for Alstom Power Sweden AB in Växjö, Sweden. To be profitable in today's competitive industrial product market, it is necessary to engineer optimized products quickly. This requires CAD design and FEA analysis to work within an optimization routine in a seamless fashion, which results in a more profitable product. This approach can be understood as model-based design, where the 3D CAD data is central to the product life cycle. It provides many benefits to a company, because the use of a central database ensures access to the latest release of the 3D model. This allows for a streamlined design-to-fabrication life cycle, with input from all departments of a product-based company. Alstom is looking into automating some of its design processes so as to achieve efficiency within its design department. This report is the result of a study in which an industrial silo is taken as an example. A design loop involving CAD design and FE analysis is built to work with an optimization routine to minimize the mass while ensuring structural stiffness and stability. Most engineers work with many constraints with regard to material stock sizes and design codes (e.g. Eurocodes). This report discusses an efficient way to design an industrial product in a 3D CAD program (CATIA) so as to stay within these constraints and still obtain credible computational results within an optimization loop.
23

Andersson, Joakim, and Jimmy Bertilsson. "SETUP TIME OPTIMIZATION." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-23661.

Abstract:
Emhart Glass Ltd is a world leader in glass bottle manufacturing. The company designs automated machines that form glass bottles. In Sweden there are two factories, one in Örebro and one in Sundsvall. Örebro primarily manufactures spare parts and new parts for the machines, while Sundsvall assembles the machines. There are a total of 15 factories and offices around the world, with the headquarters located in Cham, Switzerland. Emhart Glass Örebro has long setup times on some of its machines. We therefore examine how the changeover process is currently carried out and how it differs between operators. We also investigate whether there are any opportunities for improvement and whether there is currently a standardized way of working that the operator should follow. A document that describes how the setup work should be done is also developed. An excellent tool for shortening setup times in production is the SMED method. The philosophy behind SMED is to analyze and separate internal and external setup activities: those that can only be performed when the machine is stopped, and those that can be performed while the machine is running. To standardize the changeover process so that all operators work in a similar way, documentation of how the work should be done is required. Checklists have therefore been developed for the operator. "Checklista - Omställning.xls" is a checklist whose purpose is to let the operator tick off which parts of the preparations have been completed before the coming changeover. It is designed to make it easy to keep track of which steps have been done if the operator has had to run the machine between preparation steps, or if a shift ends and parts of the work are handed over to the next operator. If all of these improvements are implemented, a setup time reduction of 20.5% can be expected, corresponding to about 35 minutes per changeover. Ignoring run-in time and considering only rigging time, the improvement is 36.4%.
24

Puhle, Michael. "Bond portfolio optimization." Berlin Heidelberg Springer, 2007. http://d-nb.info/985928115/04.

25

Ruziyeva, Alina. "Fuzzy Bilevel Optimization." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-106378.

Abstract:
In this dissertation, solution approaches for different fuzzy optimization problems are presented. The single-level optimization problem with a fuzzy objective is solved by reformulating it into a biobjective optimization problem. Special attention is given to the computation of the membership function of the fuzzy solution of the fuzzy optimization problem in the linear case. Necessary and sufficient optimality conditions for the convex nonlinear fuzzy optimization problem are derived in the differentiable and nondifferentiable cases. A fuzzy optimization problem with both fuzzy objectives and constraints is also investigated in the thesis in the linear case. These solution approaches are applied to fuzzy bilevel optimization problems. In the case of the bilevel optimization problem with fuzzy objective functions, two algorithms are presented and compared using an illustrative example. For the case of the fuzzy linear bilevel optimization problem with both fuzzy objectives and constraints, the k-th best algorithm is adopted.
26

Sogand, Yousefbeigi. "Wind Farm Optimization." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615685/index.pdf.

Abstract:
In this thesis, a mixed integer linear program is used to formulate the optimization of a wind farm. As a starting point, a grid is superimposed on the wind farm, in which the grid points represent possible wind turbine locations. During the optimization, proximity and wind interference between turbines are considered in order to determine the power loss of the wind farm. Power loss is analyzed using a wind interference coefficient, which is a function of the wind intensity interference factor (WIIF), the Weibull distribution, and the power of the wind turbines. Two different programs, a genetic algorithm and Lingo, were used to solve the MILP formulation, and the results are compared for different cases in the conclusion.
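The following sketch conveys the flavour of the grid-based placement problem: turbine sites are chosen from grid points, and pairwise wake interference is charged against each turbine's nominal power. The greedy rule and the distance-decay interference function are simplifications standing in for the thesis's WIIF-based MILP, not its actual formulation.

```python
import numpy as np

def greedy_layout(points, n_turbines, base_power, interference):
    """Pick turbine sites one at a time, each time choosing the grid point
    whose net power (nominal power minus interference from the turbines
    already placed) is largest."""
    chosen = []
    for _ in range(n_turbines):
        candidates = [p for p in range(len(points)) if p not in chosen]
        def net_power(p):
            loss = sum(interference(points[p], points[q]) for q in chosen)
            return base_power - loss
        chosen.append(max(candidates, key=net_power))
    return chosen

grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
decay = lambda p, q: 0.5 / (1.0 + np.linalg.norm(p - q))   # toy wake model
print(greedy_layout(grid, n_turbines=4, base_power=1.0, interference=decay))
```

An exact MILP would instead introduce a binary variable per grid point and let a solver (as with Lingo in the thesis) trade off the same placement-versus-interference terms globally.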
APA, Harvard, Vancouver, ISO, and other styles
27

Stettner, Martin. "Tiltrotor multidisciplinary optimization." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/12996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Van, Rooyen Marchand. "Stable parametric optimization." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70259.

Full text
Abstract:
This thesis is a study of convex parametric programs on regions of stability. The main tools are complete characterizations of optimality without constraint qualifications and a theory of point-to-set mappings. We prove various new results that describe the Lipschitzian behaviour of the optimal value function and of the optimal solution point-to-set mapping. We then show how these results can be used in the algorithms of Input Optimization and in other applications. These applications include new results on structural optima in nonlinear programming, determination of optimal trade-off directions in interactive multi-objective optimization, and formulation of new dynamic models for efficiency testing in data envelopment analysis.
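In the standard notation for such parametric programs (a sketch of the setting, not a formula taken from the thesis), the central objects are the optimal value function and the optimal solution mapping,

```latex
v(\theta) = \min_{x \in F(\theta)} f(x,\theta),
\qquad
S(\theta) = \{\, x \in F(\theta) : f(x,\theta) = v(\theta) \,\},
```

and a typical Lipschitz-type stability statement asserts that on a region of stability \(\Theta_0\) there is a constant \(L\) with \(|v(\theta) - v(\theta')| \le L \|\theta - \theta'\|\) for all \(\theta, \theta' \in \Theta_0\).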
APA, Harvard, Vancouver, ISO, and other styles
29

Law, S. L. "Financial optimization problems." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426391.

Full text
Abstract:
The major objective of this thesis is to study optimization problems in finance. Most of the effort is directed towards studying the impact of transaction costs in those problems. In addition, we study dynamic mean-variance asset allocation problems. Stochastic HJB equations, the Pontryagin Maximum Principle and perturbation analysis are the main mathematical techniques used. In Chapter 1, we introduce the background literature. Following that, we use the Pontryagin Maximum Principle to tackle the problem of dynamic mean-variance asset allocation and rediscover the doubling strategy. In Chapter 2, we present one of the major results of this thesis. We first study a financial optimization problem based on a market model without transaction costs, and then the equivalent problem based on a market model with transaction costs. We find that there is a relationship between these two solutions; using this relationship, we can obtain the solution of one when we have the solution of the other. In Chapter 3, we generalize the results of Chapter 2. In Chapter 4, we use the Pontryagin Maximum Principle to study the limit of the no-transaction region as transaction costs tend to zero. We find that the limit is the zero-transaction-cost solution.
APA, Harvard, Vancouver, ISO, and other styles
30

Abdi, Mohammad Javad. "Cardinality optimization problems." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4620/.

Full text
Abstract:
In this thesis, we discuss the cardinality minimization problem (CMP) and the cardinality constrained problem. Due to the NP-hardness of these problems, we discuss different computational and relaxation techniques for finding approximate solutions. We also discuss l1-minimization as one of the most efficient methods for solving CMPs, and we demonstrate that l1-minimization makes use of a kind of weighted l2-minimization. We show that reweighted lj-minimization (j ≥ 1) is very effective for locating a sparse solution to a linear system. Next, we show how to introduce different merit functions for sparsity, and how proper weights may reduce the gap between the performances of these functions in finding a sparse solution to an underdetermined linear system. Furthermore, we introduce some effective computational approaches to locating a sparse solution of an underdetermined linear system. These approaches are based on reweighted lj-minimization (j ≥ 1) algorithms. We focus on reweighted l1-minimization and introduce several new concave approximations to the l0-norm function. These approximations can be employed to define new weights for reweighted l1-minimization algorithms. We show how the change of parameters in reweighted algorithms may affect the performance of the algorithms in finding the solution of the cardinality minimization problem. In our experiments, the problem data were generated according to different statistical distributions, and we test the algorithms on different sparsity levels of the solution. As a special case of cardinality constrained problems, we also discuss compressed sensing, the restricted isometry property (RIP), and the restricted isometry constant (RIC).
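A minimal sketch of the reweighted l1 idea discussed above, assuming an equality-constrained system Ax = b, the usual variable-splitting LP per iteration, and the common w = 1/(|x| + eps) weight update; scipy's linprog is used for convenience, and the function name and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=5, eps=1e-3):
    """Iteratively reweighted l1-minimization for Ax = b (a sketch).
    Each iteration solves: min w.t  s.t.  -t <= x <= t,  Ax = b,
    over z = [x, t], then sharpens the weights around small entries."""
    m, n = A.shape
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        c = np.concatenate([np.zeros(n), w])
        A_ub = np.block([[np.eye(n), -np.eye(n)],      #  x - t <= 0
                         [-np.eye(n), -np.eye(n)]])    # -x - t <= 0
        b_ub = np.zeros(2 * n)
        A_eq = np.hstack([A, np.zeros((m, n))])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        x = res.x[:n]
        w = 1.0 / (np.abs(x) + eps)   # concave-approximation weight update
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 30))
x_true = np.zeros(30)
x_true[[3, 17]] = [1.5, -2.0]
print(np.round(reweighted_l1(A, A @ x_true), 2))
```

The weight update is what makes the scheme behave like a concave approximation of the l0-norm: entries already near zero are penalized ever more heavily.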
APA, Harvard, Vancouver, ISO, and other styles
31

Faramarzi, Oghani Sohrab. "Clinical laboratory optimization." Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I072.

Full text
Abstract:
This thesis focuses on the optimization of clinical laboratory design and operating decisions. A decision support tool comprising mathematical models, a heuristic algorithm and a customized simulation model is developed to aid decision makers with the main strategic, tactical and operational problems in clinical laboratory design and operations management. Machine selection and facility layout are studied as the main strategic problems, the analyzer configuration problem as the tactical problem, and assignment, aliquoting and scheduling as the principal operational problems. A customized and flexible simulation model is developed in FlexSim to study the clinical laboratory designed using the outputs of the developed mathematical models and layout algorithm. The simulation model helps the designer construct and analyze a complete clinical laboratory taking into account all major features of the system, making it possible to scrutinize the system's behaviour and to find out whether the designed system is efficient. Furthermore, the simulation model can help decide the scheduling, aliquoting and staffing problems through the evaluation of various scenarios proposed by the decision maker for each of these problems. To verify the validity of the proposed framework, data extracted from a real case are used. The results confirm the applicability and efficiency of the proposed framework, as well as the competency of the proposed techniques for each optimization problem. To the best of our knowledge, this thesis is one of the leading studies on the optimization of clinical laboratories.
APA, Harvard, Vancouver, ISO, and other styles
32

Singer, Adam B. "Global dynamic optimization." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28662.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2004.
Includes bibliographical references (p. 247-256).
My thesis focuses on global optimization of nonconvex integral objective functions subject to parameter-dependent ordinary differential equations. In particular, efficient, deterministic algorithms are developed for solving problems with both linear and nonlinear dynamics embedded. The techniques utilized for each problem classification are unified by an underlying composition principle transferring the nonconvexity of the embedded dynamics into the integral objective function. This composition, in conjunction with control parameterization, effectively transforms the problem into a finite-dimensional optimization problem where the objective function is given implicitly via the solution of a dynamic system. A standard branch-and-bound algorithm is employed to converge to the global solution by systematically eliminating portions of the feasible space by solving an upper bounding problem and a convex lower bounding problem at each node. The novel contributions of this work lie in the derivation and solution of these convex lower bounding relaxations. Separate algorithms exist for deriving convex relaxations for problems with linear dynamic systems embedded and problems with nonlinear dynamic systems embedded. However, the two techniques are unified by the method for relaxing the integral in the objective function. I show that integrating a pointwise-in-time convex relaxation of the original integrand yields a convex underestimator for the integral. Separate composition techniques, however, are required to derive relaxations for the integrand depending upon the nature of the embedded dynamics; each case is addressed separately. For problems with embedded linear dynamic systems, the nonconvex integrand is relaxed pointwise in time on a set composed of the Cartesian product of the parameter bounds and the state bounds. Furthermore, I show that the solution of the differential equations is affine in the parameters. Because the feasible set is convex pointwise in time, the standard result that a convex function composed with an affine function remains convex yields the desired result that the integrand is convex under composition. Additionally, methods are developed using interval arithmetic to derive the exact state bounds for the solution of a linear dynamic system. Given a nonzero tolerance, the method is rigorously shown to converge to the global solution in finite time. An implementation is developed, and via a collection of case studies, the technique is shown to be very efficient in computing the global solutions. For problems with embedded nonlinear dynamic systems, the analysis requires a more sophisticated composition technique attributed to McCormick. McCormick's composition technique provides a method for computing a convex underestimator for the integrand given an arbitrary nonlinear dynamic system, provided that convex underestimators and concave overestimators can be given for the states. Because the states are known only implicitly via the solution of the nonlinear differential equations, deriving these convex underestimators and concave overestimators is a highly nontrivial task. Based on standard optimization results, outer approximation, the affine solution to linear dynamic systems, and differential inequalities, I present a novel method for constructing convex underestimators and concave overestimators for arbitrary nonlinear dynamic systems ...
by Adam Benjamin Singer.
Ph.D.
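As a concrete instance of the estimator machinery mentioned in the abstract, the sketch below evaluates the classical McCormick envelope of a single bilinear term xy over a box. This is the textbook building block (McCormick, 1976), not the thesis's full composition technique for dynamic systems.

```python
def mccormick_envelope(x, y, xL, xU, yL, yU):
    """Convex underestimator and concave overestimator of the bilinear
    term x*y on the box [xL, xU] x [yL, yU]."""
    under = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL,
               xL * y + x * yU - xL * yU)
    return under, over

# x*y = 0.5 at (1.0, 0.5); the envelope brackets it on [0, 2] x [-1, 1].
print(mccormick_envelope(1.0, 0.5, 0.0, 2.0, -1.0, 1.0))  # (0.0, 1.0)
```

Composing such envelopes through an expression, as the thesis does for integrands depending on ODE states, is what yields the convex lower bounding problems used in the branch-and-bound search.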
APA, Harvard, Vancouver, ISO, and other styles
33

Xiong, Ying S. M. Massachusetts Institute of Technology. "Racing line optimization." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/64669.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 112-113).
Although most racers are good at controlling their cars, world champions are distinguished by their talent for choosing the right racing line, where most others fail. Optimal racing line selection is a critical problem in car racing, yet currently it rests largely on the intuition experienced racers build up through repeated real-time experiments. A method that can generate the optimal racing line for a given track and car would therefore be very useful. This thesis presents four methods to generate optimal racing lines: the Euler spiral method, an artificial intelligence method, a nonlinear programming solver method, and an integrated method. We first study the problem and obtain the objective functions and constraints for both 2-D and 3-D situations, and study the mathematical and physical features of the racing tracks. We then try different ways of solving this complicated nonlinear programming problem. The Euler spiral method generates Euler spiral curve turns at corners and gives optimal results quickly and accurately for 2-D corners with no banking. The nonlinear programming solver method is based on the MINOS solver in AMPL and the MATLAB Optimization Toolbox and only needs the objective function and constraints as input. A heavy emphasis is placed on the artificial intelligence method, which works for any 2-D or 3-D track shape and uses intelligent algorithms, including branch-cutting and forward-looking, to give optimal racing lines for both 2-D and 3-D tracks. The integrated method combines the other methods and their advantages so that it is fast and practical in all situations. The different methods are compared, and their evolutions towards the optimum are described in detail. Convenient display software is developed to show the tracks and racing lines for observation. The approach to finding optimal racing lines for cars will also be helpful for finding optimal racing lines in bicycle racing, ice skating and skiing.
by Ying Xiong.
S.M.
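One simple way to make the racing-line idea concrete is to smooth the path: the sketch below, an assumption-laden toy rather than one of the four methods in the thesis, parameterizes the line by a lateral offset at each track slice and penalizes the discrete second difference of the resulting closed path (a crude proxy for curvature).

```python
import numpy as np
from scipy.optimize import minimize

def racing_line(center, normal, half_width):
    """Choose a lateral offset alpha[i] along the unit normal at each
    centerline point so the resulting closed path is as smooth as
    possible while staying within the track width."""
    def cost(alpha):
        p = center + alpha[:, None] * normal
        d2 = np.roll(p, -1, axis=0) - 2 * p + np.roll(p, 1, axis=0)
        return (d2 ** 2).sum()
    n = len(center)
    res = minimize(cost, np.zeros(n), bounds=[(-half_width, half_width)] * n)
    return res.x

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
center = 50 * np.c_[np.cos(t), np.sin(t)]     # circular test track
normal = np.c_[np.cos(t), np.sin(t)]          # outward unit normals
print(np.round(racing_line(center, normal, half_width=5.0), 1))
```

The plain second difference is only a proxy: a true minimum-curvature line would normalize it by arc length, and time-optimal formulations like those in the thesis additionally model speed, braking and banking.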
APA, Harvard, Vancouver, ISO, and other styles
34

Lu, Xin Ph D. Massachusetts Institute of Technology Operations Research Center. "Online optimization problems." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82724.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 149-153).
In this thesis, we study online optimization problems in routing and allocation applications. Online problems are problems where information is revealed incrementally, and decisions must be made before all information is available. We design and analyze algorithms for a variety of online problems, including traveling salesman problems with rejection options, generalized assignment problems, stochastic matching problems, and resource allocation problems. We use worst case competitive ratios to analyze the performance of proposed algorithms. We begin our study with online traveling salesman problems with rejection options where acceptance/rejection decisions are not required to be explicitly made. We propose an online algorithm in arbitrary metric spaces, and show that it is the best possible. We then consider problems where acceptance/rejection decisions must be made at the time when requests arrive. For different metric spaces, we propose different online algorithms, some of which are asymptotically optimal. We then consider generalized online assignment problems with budget constraints and resource constraints. We first prove that all online algorithms are arbitrarily bad for general cases. Then, under some assumptions, we propose, analyze, and empirically compare two online algorithms, a greedy algorithm and a primal dual algorithm. We study online stochastic matching problems. Instances with a fixed number of arrivals are studied first. A novel algorithm based on discretization is proposed and analyzed for unweighted problems. The same algorithm is modified to accommodate vertex-weighted cases. Finally, we consider cases where arrivals follow a Poisson Process. Finally, we consider online resource allocation problems. We first consider the problems with free but fixed inventory under certain assumptions, and present near optimal algorithms. We then relax some unrealistic assumptions. Finally, we generalize the technique to problems with flexible inventory with non-decreasing marginal costs.
by Xin Lu.
Ph.D.
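To make the flavour of these online decisions concrete, here is a greedy rule for an online generalized-assignment setting, in the spirit of the greedy algorithm mentioned above. The data model (a capacity dict plus value and size callbacks) is invented for illustration.

```python
def assign_online(request, capacity, load, value, size):
    """Irrevocably assign the arriving request to the feasible bin with
    the largest value, or return None to reject it."""
    best, best_value = None, 0.0
    for b in capacity:
        fits = load.get(b, 0.0) + size(request, b) <= capacity[b]
        if fits and value(request, b) > best_value:
            best, best_value = b, value(request, b)
    if best is not None:
        load[best] = load.get(best, 0.0) + size(request, best)
    return best
```

Competitive analysis then compares the value accumulated by such irrevocable decisions against the offline optimum that sees all requests in advance.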
APA, Harvard, Vancouver, ISO, and other styles
35

Teo, Kwong Meng. "Nonconvex robust optimization." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40303.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 133-138).
We propose a novel robust optimization technique which is applicable to nonconvex and simulation-based problems. Robust optimization finds decisions with the best worst-case performance under uncertainty. If constraints are present, decisions should also be feasible under perturbations. In the real world, many problems are nonconvex and involve computer-based simulations. In these applications, the relationship between decision and outcome is not defined through algebraic functions; instead, it is embedded within complex numerical models. Since current robust optimization methods are limited to explicitly given convex problems, they cannot be applied to many practical problems. Our proposed method, however, operates on arbitrary objective functions. Thus, it is generic and applicable to most real-world problems. It iteratively moves along descent directions for the robust problem and terminates at a robust local minimum. Because the concepts of descent directions and local minima form the building blocks of powerful optimization techniques, our proposed framework shares the same potential, but for the richer, and more realistic, robust problem. To admit additional considerations, including parameter uncertainties and nonconvex constraints, we generalized the basic robust local search. In each case, only minor modifications are required - a testimony to the generic nature of the method and its potential to be a component of future robust optimization techniques. We demonstrated the practicability of the robust local search technique in two real-world applications: nanophotonic design and Intensity Modulated Radiation Therapy (IMRT) for cancer treatment. In both cases, the numerical models are verified by actual experiments. The method significantly improved the robustness of both designs, showcasing the relevance of robust optimization to real-world problems.
by Kwong Meng Teo.
Ph.D.
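A toy rendering of the robust local search loop: estimate the worst case in a norm ball around the current point by sampling, then step away from the worst perturbation. Sampling stands in for the thesis's neighbourhood exploration, and the function and parameter names are invented.

```python
import numpy as np

def robust_step(f, x, gamma, step=0.05, n_samples=200, rng=None):
    """One iteration: sample perturbations on the sphere of radius gamma,
    find the one with the highest cost, and move away from it."""
    rng = rng or np.random.default_rng(0)
    u = rng.normal(size=(n_samples, x.size))
    u *= gamma / np.linalg.norm(u, axis=1, keepdims=True)
    worst = u[int(np.argmax([f(x + du) for du in u]))]
    return x - step * worst / gamma          # unit step away from the worst case

f = lambda z: (z ** 2).sum() + np.sin(5 * z).sum()   # nonconvex test function
x = np.array([1.0, -0.5])
for _ in range(100):
    x = robust_step(f, x, gamma=0.2)
print(np.round(x, 3))
```

The actual method derives a descent direction that moves away from all high-cost neighbours simultaneously, which is what allows a termination proof at a robust local minimum.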
APA, Harvard, Vancouver, ISO, and other styles
36

Sadownick, Ronald 1960. "Helicopter configuration optimization." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/82683.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, February 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaf 102).
by Ronald Sadownick.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
37

Bailey, Drake (William Drake), and Daniel Skempton. "Communicating optimization results." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81092.

Full text
Abstract:
Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 76-79).
With global supply chains becoming increasingly complex, leading companies are embracing optimization software tools to help them structure and coordinate their supply chains. With an array of choices available, many organizations opt for one of the numerous off-the-shelf products; others choose instead to create their own bespoke optimization tools. While the custom approach affords greater versatility than a commercially available product, it also presents significant challenges to both the creators and the users of the tool in terms of complexity. It can often be time-consuming and difficult for users to understand and verify the results that are generated, and if a decision-maker has difficulty understanding or trusting the output of a model, the value of the tool is seriously diminished. This paper examines the communication challenges between the creators, or operations research engineers, and the end-users when deploying and executing complex optimization software in supply chain management. We examine the field of optimization modeling, the communication methods involved, and relevant data visualization techniques. We then survey a group of users from our sponsoring company to gain insight into their experience using their tool. The responses and associated crosstab analysis reveal that training and visualization are areas with the potential to improve the users' understanding of the tool, which in turn would lead to better communication between the end-users and the experts who build and maintain it. Finally, we present a section on current, cutting-edge visualization techniques that can be adapted to influence the way a user visualizes optimization results.
by Drake Bailey and Daniel Skempton.
M.Eng.in Logistics
APA, Harvard, Vancouver, ISO, and other styles
38

Simmen, Martin Walter. "Neural network optimization." Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/12942.

Full text
Abstract:
Combinatorial optimization problems arise throughout science, industry, and commerce. The demonstration that analogue neural networks could, in principle, rapidly find near-optimal solutions to such problems - many of which appear computationally intractable - was important both for the novelty of the approach and because these networks are potentially implementable in parallel hardware. However, subsequent research, conducted largely on the travelling salesman problem, revealed problems regarding the original network's parameter sensitivity and tendency to give invalid states. Although this has led to improvements and new network designs which at least partly overcome the above problems, many issues concerning the performance of optimization networks remain unresolved. This thesis explores how to optimize the performance of two neural networks from the current literature: the elastic net and the mean field Potts network, both of which are designed for the travelling salesman problem. Analytical methods elucidate issues of parameter sensitivity and enable parameter values to be chosen in a rational manner. Systematic numerical experiments on realistic-size problems complement and support the theoretical analyses throughout. An existing analysis of how the elastic net algorithm may generate invalid solutions is reviewed and extended. A new analysis locates the parameter regime in which the net may converge to a second type of invalid solution. Combining the two analyses yields a prescription for setting the value of a key parameter optimally with respect to avoiding invalid solutions. The elastic net operates by minimizing a computational energy function. Several new forms of dynamics using locally adaptive step-sizes are developed and shown to greatly increase the efficiency of the minimization process. Analytical work constraining the range of safe adaptation rates is presented. A new form of dynamics, with a user-defined step-size, is introduced for the mean field Potts network. An analysis of the network's critical temperature under these dynamics is given, by generalizing a previous analysis valid for a special case of the dynamics.
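For reference, one update of the Durbin-Willshaw elastic net has the following form; this is a generic textbook rendering with standard parameter names, not the thesis's adaptive step-size variants.

```python
import numpy as np

def elastic_net_step(cities, ring, K, alpha=0.2, beta=2.0):
    """One elastic-net update for the TSP: each point on the elastic ring
    is pulled toward the cities (Gaussian weights, normalized per city)
    and smoothed by a tension term along the ring."""
    diff = cities[:, None, :] - ring[None, :, :]           # (cities, ring, 2)
    w = np.exp(-(diff ** 2).sum(-1) / (2 * K * K))
    w /= w.sum(axis=1, keepdims=True)                      # each city gives unit weight
    pull = (w[:, :, None] * diff).sum(axis=0)
    tension = np.roll(ring, -1, axis=0) - 2 * ring + np.roll(ring, 1, axis=0)
    return ring + alpha * pull + beta * K * tension
```

K is annealed downward across iterations; the parameter analyses in the thesis concern precisely how alpha, beta and the annealing schedule interact to avoid invalid final states.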
APA, Harvard, Vancouver, ISO, and other styles
39

Speicher, Maximilian. "Search Interaction Optimization." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-208102.

Full text
Abstract:
Over the past 25 years, search engines have become one of the most important entry points to the World Wide Web, if not the most important one. This development has been primarily due to the continuously increasing number of available documents, which are highly unstructured. Moreover, the general trend is towards classifying search results into categories and presenting them in terms of semantic information that answers users' queries without their having to leave the search engine. With the growing amount of documents and technological enhancements, the needs of users as well as search engines are continuously evolving. Users want to be presented with increasingly sophisticated results and interfaces, while companies have to place advertisements and make revenue to be able to offer their services for free. To address the above needs, it is more and more important to provide highly usable and optimized search engine results pages (SERPs). Yet, existing approaches to usability evaluation are often costly or time-consuming and mostly rely on explicit feedback. They are either not efficient or not effective, while SERP interfaces are commonly optimized primarily from a company's point of view. Moreover, existing approaches to predicting search result relevance, which are mostly based on clicks, are not tailored to the evolving kinds of SERPs. For instance, they fail if queries are answered directly on a SERP and no clicks need to happen. Applying Human-Centered Design principles, we propose a solution to the above in terms of a holistic approach that intends to satisfy both searchers and developers. It provides novel means to counteract exclusively company-centric design and to make use of implicit user feedback for efficient and effective evaluation and optimization of usability and, in particular, relevance. We define personas and scenarios from which we infer unsolved problems and a set of well-defined requirements. Based on these requirements, we design and develop the Search Interaction Optimization toolkit. Using a bottom-up approach, we moreover define an eponymous, higher-level methodology. The Search Interaction Optimization toolkit comprises a total of six components. We start with INUIT [1], a novel minimal usability instrument specifically aiming at meaningful correlations with implicit user feedback in terms of client-side interactions. Hence, it serves as a basis for deriving usability scores directly from user behavior. INUIT has been designed based on reviews of established usability standards and guidelines as well as interviews with nine dedicated usability experts. Its feasibility and effectiveness have been investigated in a user study. Also, a confirmatory factor analysis shows that the instrument can reasonably well describe real-world perceptions of usability. Subsequently, we introduce WaPPU [2], a context-aware A/B testing tool based on INUIT. WaPPU implements the novel concept of Usability-based Split Testing and enables automatic usability evaluation of arbitrary SERP interfaces based on a quantitative score that is derived directly from user interactions. For this, usability models are automatically trained and applied based on machine learning techniques. In particular, the tool is not restricted to evaluating SERPs, but can be used with any web interface. Building on the above, we introduce S.O.S., the SERP Optimization Suite [3], which comprises WaPPU as well as a catalog of best practices [4].
Once it has been detected that an investigated SERP's usability is suboptimal based on scores delivered by WaPPU, corresponding optimizations are automatically proposed based on the catalog of best practices. This catalog has been compiled in a three-step process involving reviews of existing SERP interfaces and contributions by 20 dedicated usability experts. While the above tools focus on the general usability of SERPs, presenting the most relevant results is specifically important for search engines. Hence, our toolkit contains TellMyRelevance! (TMR) [5], the first end-to-end pipeline for predicting search result relevance based on users' interactions beyond clicks. TMR is a fully automatic approach that collects necessary information on the client, processes it on the server side and trains corresponding relevance models based on machine learning techniques. Predictions made by these models can then be fed back into the ranking process of the search engine, which improves result quality and hence also usability. StreamMyRelevance! (SMR) [6] takes the concept of TMR one step further by providing a streaming-based version. That is, SMR collects and processes interaction data and trains relevance models in near real-time. Based on a user study and a large-scale log analysis involving real-world search engines, we have evaluated the components of the Search Interaction Optimization toolkit as a whole, also to demonstrate the interplay of the different components. S.O.S., WaPPU and INUIT have been engaged in the evaluation and optimization of a real-world SERP interface. Results show that our tools are able to correctly identify even subtle differences in usability. Moreover, optimizations proposed by S.O.S. significantly improved the usability of the investigated and redesigned SERP. TMR and SMR have been evaluated in a GB-scale interaction log analysis as well, using data from real-world search engines. Our findings indicate that they are able to yield predictions that are better than those of competing state-of-the-art systems that consider clicks only. Also, a comparison of SMR to existing solutions shows its superiority in terms of efficiency, robustness and scalability. The thesis concludes with a discussion of the potential and limitations of the above contributions and provides an overview of potential future work.
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Xing-Si. "Entropy and optimization." Thesis, University of Liverpool, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Elsayed, M. A. N. "Aircraft trajectory optimization." Thesis, Loughborough University, 1985. https://dspace.lboro.ac.uk/2134/32873.

Full text
Abstract:
A typical aircraft flight consists of three phases: climb, cruise and descent. The purpose of this research was to study the control schedules for a transport aircraft that result in the least fuel expenditure in each flight segment.
APA, Harvard, Vancouver, ISO, and other styles
42

Hillberg, Alexander, and Marcus Olmarker. "Optimization of Flexplate." Thesis, Högskolan i Halmstad, Akademin för ekonomi, teknik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37051.

Full text
Abstract:
From 2019, all new Volvo powertrains will be electrified, including ICE-based powertrains. In the ICE-based powertrains with automatic transmission, a flexplate connects the crankshaft to the automatic transmission to transfer power. The flexplate needs to handle torque peaks from the combustion, crankshaft whirl, and axial forces generated by the torque converter. The combustion torque load peaks at every ignition, but the load also depends on engine speed/load and gearbox shifts. All these loads applied to the flexplate mean that the plate will be exposed to high stress levels. These high stress levels make the flexplate more likely to break and thus shorten its lifespan. Therefore, the objective of this thesis was to generate new optimized design proposals for the plate with minimized stress levels using the method Simulation Driven Design. To understand the results, a parametric study was also performed using Design of Experiments. Using Simulation Driven Design, several concepts were generated, but only two showed valid results, and these were optimized using parametric optimization and Design of Experiments in Catia V5. A topological optimization was also attempted in Inspire, but the model was too complex and could not be accurately replicated in the program, making the topological optimization infeasible. With a combination of parametric optimization and Design of Experiments, the two concepts showed stress reductions of 16.4% and 11.1% compared to the original design. They also showed an increase in deformation and a slight decrease in weight. From this, it can be concluded that Simulation Driven Design is a valuable method in product development, and that the combination of parametric optimization and Design of Experiments can be used efficiently during the optimization process.
APA, Harvard, Vancouver, ISO, and other styles
43

Mesgarpour, Mohammad. "Airport runway optimization." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/366012/.

Full text
Abstract:
This thesis considers aircraft landing and take-off scheduling problems on a single runway where aircraft must respect various operational constraints. The aim is to introduce generic models and solution approaches that can be implemented in practice. Existing solution methods and techniques for airport runway optimization are reviewed. Several solution methods, such as mixed integer programming, dynamic programming, iterated descent local search and simulated annealing, are proposed for the scheduling of aircraft landings in static and dynamic environments. A multi-objective formulation takes into account runway throughput, earliness and lateness, and the cost of fuel arising from aircraft manoeuvres and additional flight time incurred to achieve the landing schedule. Computational results are presented using real data from Heathrow airport as well as problem instances randomly generated based on characteristics of the real data. Dynamic programming, descent local search and beam search algorithms are then proposed for the scheduling of aircraft take-offs in the departure holding area. Scheduling aircraft take-offs is formulated as a hierarchical multi-objective problem of maximizing departure runway throughput and minimizing total waiting time in the holding area. The performance of the algorithms has been evaluated for three common layouts of the holding area. Computational results are presented on randomly generated test data.
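As a sketch of the dynamic-programming viewpoint used for take-off scheduling, the following solves a tiny single-runway sequencing instance exactly over subsets, given pairwise separation times and earliest ready times. The data model is illustrative and ignores holding-area layouts; the subset DP is exponential, so it is for toy sizes only.

```python
from itertools import combinations

def min_makespan(sep, ready):
    """Minimise the time of the last take-off on a single runway.
    sep[i][j]: separation required when j departs after i;
    ready[i]: earliest time aircraft i can take off."""
    n = len(ready)
    best = {(1 << i, i): ready[i] for i in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            mask = sum(1 << i for i in subset)
            for last in subset:
                prev = mask ^ (1 << last)
                best[(mask, last)] = min(
                    max(best[(prev, j)] + sep[j][last], ready[last])
                    for j in subset if j != last)
    full = (1 << n) - 1
    return min(best[(full, j)] for j in range(n))

sep = [[0, 2, 1],
       [1, 0, 3],
       [2, 1, 0]]
print(min_makespan(sep, ready=[0, 0, 1]))
```

Practical instances are handled with the pruning, decomposition and beam-search ideas described above rather than full enumeration.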
APA, Harvard, Vancouver, ISO, and other styles
44

Flach, Guilherme Augusto. "Clock mesh optimization." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/34773.

Full text
Abstract:
Clock meshes are a clock network architecture suited to reliably distributing the clock signal under process and environmental variations. This property becomes very important in deep sub-micron technologies, where variations play a major role. The clock mesh's reliability is due to redundant paths connecting clock buffers to clock sinks, so that variations affecting one path can be compensated by other paths. This comes at the cost of more power consumption and wiring resources, so there is a clear tradeoff between reliably distributing the clock signal (more redundancy) and power and resource consumption. The clock skew is defined as the difference in the arrival times of the clock signal at the clock sinks. The higher the clock skew, the slower the circuit. Besides slowing down circuit operation, a high clock skew increases the probability of circuit malfunction due to variations. In this work we focus on the clock skew problem. We first extract useful information on how the clock wirelength and capacitance change as the mesh size changes. We present analytical formulas to find the optimum mesh size for both goals and study how the clock skew varies as we move further away from the optimum mesh size. We also present a method for reducing the clock mesh skew by sliding buffers from the positions where they are traditionally placed. This improvement comes at no increase in power consumption, since the buffer sizes and the mesh capacitance are not changed.
APA, Harvard, Vancouver, ISO, and other styles
45

Devarakonda, SaiPrasanth. "Particle Swarm Optimization." University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1335827032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Cheng, Jianqiang. "Stochastic Combinatorial Optimization." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112261.

Full text
Abstract:
In this thesis, we studied three types of stochastic problems: chance constrained problems, distributionally robust problems, and simple recourse problems. Stochastic programming problems present two main difficulties. One is that the feasible sets of stochastic problems are not convex in general. The other arises from the need to calculate conditional expectations or probabilities, both of which involve multi-dimensional integration. Due to these two major difficulties, all three studied problems are solved with approximation approaches. We first study two types of chance constrained problems: the linear program with joint chance constraints (LPPC) and the maximum probability problem (MPP). For both problems, we assume that the random matrix is normally distributed and that its row vectors are independent. We first deal with LPPC, which is generally not convex, and approximate it with two second-order cone programming (SOCP) problems. Under mild conditions, the optimal values of the two SOCP problems are lower and upper bounds of the original problem, respectively. For the second problem, we study a variant of the stochastic resource constrained shortest path problem (SRCSP for short), which maximizes the probability of satisfying the resource constraints. To solve the problem, we propose a branch-and-bound framework to find the optimal solution. As the corresponding linear relaxation is generally not convex, we give a convex approximation. Finally, numerical tests on random instances were conducted for both problems. With respect to LPPC, the numerical results showed that our approach outperforms the Bonferroni and Jagannathan approximations, while for the MPP, the results on generated instances substantiated that the convex approximation outperforms the individual approximation method. Then we study a distributionally robust stochastic quadratic knapsack problem, where we only know partial information about the random variables, such as their first and second moments. We prove that the single knapsack problem (SKP) becomes a semidefinite program (SDP) after applying the SDP relaxation scheme to the binary constraints. Although this is not the case for the multidimensional knapsack problem (MKP), two good approximations of the relaxed version of the problem are provided, yielding upper and lower bounds that appear numerically close to each other for a range of problem instances. Our numerical experiments also indicate that our proposed lower bounding approximation outperforms the approximations based on Bonferroni's inequality and on the work by Zymler et al. Besides, an extensive set of experiments was conducted to illustrate how the conservativeness of the robust solutions pays off in terms of ensuring that the chance constraint is satisfied (or nearly satisfied) under a wide range of distribution fluctuations. Moreover, our approach can be applied to a large number of stochastic optimization problems with binary variables. Finally, a stochastic version of the shortest path problem is studied. We prove that in some cases the stochastic shortest path problem can be greatly simplified by reformulating it as the classic shortest path problem, which can be solved in polynomial time. To solve the general problem, we propose a branch-and-bound framework to search the set of feasible paths. Lower bounds are obtained by solving the corresponding linear relaxation, which in turn is done using a stochastic projected gradient algorithm involving an active set method. Numerical examples illustrate the effectiveness of the obtained algorithm. Concerning the resolution of the continuous relaxation, our stochastic projected gradient algorithm clearly outperforms the MATLAB Optimization Toolbox on large graphs.
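For a single linear chance constraint under the normality assumption used in the thesis, the second-order cone form is standard (the joint case then needs the bounding schemes described above): if \(a \sim \mathcal{N}(\bar a, \Sigma)\) and \(\varepsilon \le 1/2\), then

```latex
\Pr\bigl(a^{\top} x \le b\bigr) \ge 1 - \varepsilon
\quad\Longleftrightarrow\quad
\bar a^{\top} x + \Phi^{-1}(1-\varepsilon)\,\bigl\|\Sigma^{1/2} x\bigr\|_{2} \le b,
```

where \(\Phi^{-1}\) is the standard normal quantile. For \(\varepsilon \le 1/2\) the left-hand side is convex in \(x\), so the reformulated constraint is a second-order cone constraint.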
APA, Harvard, Vancouver, ISO, and other styles
47

Cha, Kyungduck. "Cancer treatment optimization." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22604.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Lee, Eva K.; Committee Member: Barnes, Earl; Committee Member: Hertel, Nolan E.; Committee Member: Johnson, Ellis; Committee Member: Monteiro, Renato D.C.
APA, Harvard, Vancouver, ISO, and other styles
48

Allan, John S. Nekimken Kyle J. Weills Spencer B. "Pump sequencing optimization /." Click here to view, 2009. http://digitalcommons.calpoly.edu/mesp/7.

Full text
Abstract:
Thesis (B.S.)--California Polytechnic State University, 2009.
Project advisor: Tom Mase. Title from PDF title page; viewed on Jan. 13, 2010. Includes bibliographical references. Also available on microfiche.
APA, Harvard, Vancouver, ISO, and other styles
49

Nasiri, Faranak. "Ambulance Optimization Allocation." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1462.

Full text
Abstract:
The facility location problem refers to placing facilities (often vehicles) in appropriate locations to yield the best coverage with respect to other important factors that are specific to the problem. In a fire station setting, for instance, some of the important factors are travel time, the distribution of stations, and time of service. Furthermore, budget limitations, time constraints, and the great importance of the operation make optimal allocation very complex. In the past few years, research in this area has aimed to help managers by designing effective algorithms for allocating facilities in the best way possible. Most early models were static and deterministic. In static models, once a facility is assigned to a location, it is never relocated. Although these methods can be used in simple settings, there are many factors in the real world that make a static model of limited application: demands may change over time, or facilities may be dropped or added. In such cases a more flexible model is desirable, so dynamic models were proposed, in which facilities can be located and relocated as the situation requires. More recently, dynamic models have become popular, but many aspects of facility allocation problems remain challenging and require more complex solutions. The importance of the facility location problem becomes significantly more relevant when it relates to hospitals and emergency responders: even one second of improvement in response time matters. For this reason, we selected the ambulance allocation problem as a case study for this problem domain. Much research has been done on ambulance allocation, and we review some of these models together with their advantages and disadvantages. One of the best models in this area was introduced by Rajagopalan. In this work, his model is analyzed and its major drawback is addressed by applying some modifications to its methodology. A genetic algorithm is utilized in this study as a heuristic method to solve the allocation model.
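Since the thesis uses a genetic algorithm as the heuristic solver, here is a minimal GA sketch for a maximal-covering flavour of the allocation problem. The coverage-dict encoding, one-point crossover and elitist selection are illustrative choices, and n_amb is assumed to be at least 2.

```python
import random

def ga_allocate(coverage, n_amb, pop=40, gens=200, pm=0.2, seed=0):
    """Tiny genetic algorithm: an individual is a list of n_amb station
    sites; fitness is the number of demand zones covered.
    coverage[site] = set of demand zones reachable from that site."""
    rng = random.Random(seed)
    sites = list(coverage)
    fitness = lambda ind: len(set().union(*(coverage[s] for s in ind)))
    population = [rng.sample(sites, n_amb) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]                # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_amb)                 # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pm:                         # point mutation
                child[rng.randrange(n_amb)] = rng.choice(sites)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

coverage = {"A": {1, 2}, "B": {2, 3}, "C": {4}, "D": {1, 4, 5}}
print(ga_allocate(coverage, n_amb=2))
```

A real ambulance model would weight zones by demand and add response-time constraints, but the selection-crossover-mutation loop is the same.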
APA, Harvard, Vancouver, ISO, and other styles
50

Ramachandran, Selvaraj. "Hypoid gear optimization." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4419.

Full text
Abstract:
A hypoid gear optimization procedure using the method of feasible directions has been developed. The objective is to reduce the gear set weight, with bending strength, contact strength, and the facewidth-to-diametral-pitch ratio as constraints. The objective function, weight, is calculated from a geometric approximation of the volume of the gear and pinion. The design variables selected are the number of gear teeth, diametral pitch, and facewidth. The input parameters for starting the initial design phase are the power to be transmitted, speed, gear ratio, type of application, mounting condition, type of loading, and the material to be used. In the initial design phase, design parameters are selected or calculated using standard available procedures. These selected values of the design parameters are passed to the optimization routine as starting points.
APA, Harvard, Vancouver, ISO, and other styles