
Doctoral dissertations on the topic "Mixed-Integer Linear Programs"

Create an accurate reference in APA, MLA, Chicago, Harvard and many other citation styles


Consult the 30 best doctoral dissertations for your research on the topic "Mixed-Integer Linear Programs".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, if the relevant parameters are available in the work's metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an appropriate bibliography.

1

Guieu, Olivier. "Analyzing infeasible mixed-integer and integer linear programs". Carleton University, Systems and Computer Engineering, Ottawa, 1995.

2

Yildiz, Sercan. "Valid Inequalities for Mixed-Integer Linear and Mixed-Integer Conic Programs". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/777.

Abstract:
Mixed-integer programming provides a natural framework for modeling optimization problems which require discrete decisions. Valid inequalities, used as cutting-planes and cutting-surfaces in integer programming solvers, are an essential part of today’s integer programming technology. They enable the solution of mixed-integer programs of greater scale and complexity by providing tighter mathematical descriptions of the feasible solution set. This dissertation presents new structural results on general-purpose valid inequalities for mixed-integer linear and mixed-integer conic programs.

Cut-generating functions are a priori formulas for generating a cutting-plane from the data of a mixed-integer linear program. This concept has its roots in the work of Balas, Gomory, and Johnson from the 1970s. It has received renewed attention in the past few years. Gomory and Johnson studied cut-generating functions for the corner relaxation of a mixed-integer linear program, which ignores the nonnegativity constraints on the basic variables in a tableau formulation. We consider models where these constraints are not ignored. In our first contribution, we generalize a classical result of Gomory and Johnson characterizing minimal cut-generating functions in terms of subadditivity, symmetry, and periodicity. Our analysis also exposes shortcomings in the usual definition of minimality in our general setting. To remedy this, we consider stronger notions of minimality and show that these impose additional structure on cut-generating functions. A stronger notion than the minimality of a cut-generating function is its extremality. While extreme cut-generating functions produce powerful cutting-planes, their structure can be very complicated. For the corner relaxation of a one-row integer linear program, Gomory and Johnson identified continuous, piecewise linear, minimal cut-generating functions with only two distinct slope values as a “simple” class of extreme cut-generating functions. In our second contribution, we establish a similar result for a one-row problem which takes the nonnegativity constraint on the basic variable into account. In our third contribution, we consider a multi-row model where only continuous nonbasic variables are present. Conforti, Cornuéjols, Daniilidis, Lemaréchal, and Malick recently showed that not all cutting-planes can be obtained from cut-generating functions in this framework. They also conjectured a natural condition under which cut-generating functions might be sufficient. We prove that this conjecture is true. This justifies the recent research interest in cut-generating functions for this model.

Despite the power of mixed-integer linear programming, many optimization problems of practical and theoretical interest cannot be modeled using a linear objective function and constraints alone. Next, we turn to a natural generalization of mixed-integer linear programming which allows nonlinear convex constraints: mixed-integer conic programming. Disjunctive inequalities, introduced by Balas in the context of mixed-integer linear programming in the 1970s, have been a principal ingredient in the practical success of mixed-integer programming in the last two decades. In order to extend our understanding of disjunctive inequalities to mixed-integer conic programming, we pursue a principled study of two-term disjunctions on conic sets. In our fourth contribution, we consider two-term disjunctions on a general regular cone.
A result of Kılınç-Karzan indicates that conic minimal valid linear inequalities are all that is needed for a closed convex hull description of such sets. First we characterize the structure of conic minimal and tight valid linear inequalities for the disjunction. Then we develop structured nonlinear valid inequalities for the disjunction by grouping subsets of valid linear inequalities. We analyze the structure of these inequalities and identify conditions which guarantee that a single such inequality characterizes the closed convex hull of the disjunction. In our fifth and sixth contributions, we extend our earlier results to the cases where the regular cone under consideration is a direct product of second order cones and nonnegative rays and where it is the positive semidefinite cone. Disjunctions on these cones deserve special attention because they provide fundamental relaxations for mixed-integer second-order cone and mixed-integer semidefinite programs. We identify conditions under which our valid convex inequalities can be expressed in computationally tractable forms and present techniques to generate low-complexity relaxations when these conditions are not satisfied. In our final contribution, we provide closed convex hull descriptions for homogeneous two-term disjunctions on the second-order cone and general two-term disjunctions on affine cross-sections of the second-order cone. Our results yield strong convex disjunctive inequalities which can be used as cutting-surfaces in generic mixed-integer conic programming solvers.
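For readers unfamiliar with the Gomory–Johnson setting referred to above, a standard way to state it is sketched below; the notation is generic and not taken from the dissertation itself.

```latex
% Single-row group (corner) relaxation and minimal cut-generating functions,
% in the classical Gomory--Johnson form (notation assumed, not the author's).
\[
  X_f=\Bigl\{\,y:\mathbb{R}\to\mathbb{Z}_{\ge 0}\ \text{with finite support}:\
  \sum_{r\in\mathbb{R}} r\,y_r \equiv f \pmod 1 \Bigr\},\qquad f\in(0,1).
\]
% A function \pi >= 0 is a valid cut-generating function if \sum_r \pi(r) y_r >= 1 on X_f.
% Gomory--Johnson: \pi is minimal iff
\[
  \pi(0)=0,\qquad
  \pi(a)+\pi(b)\ \ge\ \pi(a+b)\ \text{(subadditivity)},\qquad
  \pi(r)+\pi(f-r)=1\ \text{(symmetry)},
\]
% together with periodicity \pi(r)=\pi(r+1). The GMI function
% \pi(r)=\min\{\bar r/f,\ (1-\bar r)/(1-f)\}, with \bar r = r mod 1, satisfies all three.
```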
3

Nwana, Vincent Lebga. "Parallel algorithms for solving mixed integer linear programs". Thesis, Brunel University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368540.

4

Salvagnin, Domenico. "Constraint Programming Techniques for Mixed Integer Linear Programs". Doctoral thesis, Università degli studi di Padova, 2009. http://hdl.handle.net/11577/3425690.

Abstract:
Many decision problems in industry, logistics, and telecommunications can be viewed as satisfiability or optimization problems. Two paradigms have reached a high degree of sophistication from the point of view of both theory and implementation: Constraint Programming (CP) and Mixed Integer Programming (MIP). The CP and MIP paradigms have strengths and weaknesses that complement each other. On the one hand, CP, through the use of sophisticated propagation techniques, privileges primal inference. On the other hand, MIP, through the techniques of relaxation and strengthening through cutting planes, privileges dual inference. This thesis presents several studies in Mixed Integer Programming, with emphasis on computational aspects and integration with the Constraint Programming paradigm. In particular, CP concepts and techniques, such as nogoods, minimal infeasibility and constraint propagation, are used to improve different MIP solving components, namely dominance detection, the Benders cut selection strategy and primal heuristics. This cross-fertilization of techniques and ideas has proven very effective. Finally, an appendix is given covering a MIP application to robust railway timetabling.
Many decision problems in industry, logistics and telecommunications can be formulated as satisfiability or optimization problems. Two paradigms for modeling and solving such problems have reached a high degree of development, both theoretically and in implementation: Constraint Programming (CP) and Mixed Integer Programming (MIP). The CP and MIP paradigms have complementary strengths and weaknesses. On the one hand, CP favors primal inference through sophisticated propagation techniques. On the other, MIP favors dual inference through relaxations and their strengthening by means of cutting planes. This thesis presents several studies in Mixed Integer Programming, with emphasis on computational aspects and on integration with the Constraint Programming paradigm. In particular, CP concepts and techniques, such as nogoods, minimal infeasibility and propagation, are used to improve various solving components for MIP, namely dominance procedures, the Benders cut selection strategy and primal heuristics. This exchange of ideas and techniques has proven very effective. Finally, a MIP application to robust railway timetabling is presented in an appendix.
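As a concrete illustration of one of the CP ingredients mentioned in this abstract, the sketch below shows a simplified bound-propagation step for a single linear constraint; it is a generic textbook routine written for this page, not code from the thesis.

```python
def propagate_leq(a, b, lb, ub):
    """Tighten variable bounds for a constraint sum_i a[i]*x[i] <= b.

    a, lb, ub are lists of numbers; returns new (lb, ub) lists.
    A simplified version of the bound propagation used in CP and in MIP presolve.
    """
    lb, ub = list(lb), list(ub)
    # Minimum activity of the constraint over the current bounds.
    min_act = sum(ai * (lb[i] if ai > 0 else ub[i]) for i, ai in enumerate(a))
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        # Residual minimum activity with variable i removed.
        resid = min_act - ai * (lb[i] if ai > 0 else ub[i])
        if ai > 0:
            ub[i] = min(ub[i], (b - resid) / ai)   # x_i cannot exceed this value
        else:
            lb[i] = max(lb[i], (b - resid) / ai)   # dividing by ai < 0 flips the inequality
    return lb, ub

# Example: 2x + 3y <= 12 with x, y in [0, 5] tightens y <= 4 while x keeps its bound.
print(propagate_leq([2, 3], 12, [0, 0], [5, 5]))   # ([0, 0], [5, 4.0])
```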
5

Köppe, Matthias [Verfasser]. "Exact Primal Algorithms for General Integer and Mixed-Integer Linear Programs / Matthias Köppe". Aachen : Shaker, 2003. http://d-nb.info/1179032292/34.

6

Chen, Binyuan. "FINITE DISJUNCTIVE PROGRAMMING METHODS FOR GENERAL MIXED INTEGER LINEAR PROGRAMS". Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145120.

Abstract:
In this dissertation, a finitely convergent disjunctive programming procedure, the Convex Hull Tree (CHT) algorithm, is proposed to obtain the convex hull of a general mixed-integer linear program with bounded integer variables. The CHT algorithm constructs a linear program that has the same optimal solution as the associated mixed-integer linear program. The standard notion of sequential cutting planes is then combined with ideas underlying the CHT algorithm to help guide the choice of disjunctions to use within a new cutting plane method, the Cutting Plane Tree (CPT) algorithm. We show that the CPT algorithm converges to an integer optimal solution of the general mixed-integer linear program with bounded integer variables in finitely many steps. We also enhance the CPT algorithm with several techniques including a “round-of-cuts” approach and an iterative method for solving the cut generation linear program (CGLP). Two normalization constraints are discussed in detail for solving the CGLP. For moderately sized instances, our study shows that the CPT algorithm provides significant gap closures with a pure cutting plane method.
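For context, the cut generation linear program (CGLP) referred to above takes, in the classical lift-and-project form of Balas, Ceria and Cornuéjols, roughly the shape sketched below; the exact disjunctions and normalizations used in the dissertation may differ.

```latex
% CGLP for a split disjunction (x_j <= pi_0) or (x_j >= pi_0 + 1) over P = {x >= 0 : Ax >= b},
% separating a fractional point x*. Generic statement, not the dissertation's exact model.
\[
  \max_{\alpha,\beta,u,u_0,v,v_0}\ \beta - \alpha^{\top}x^{*}
  \quad\text{s.t.}\quad
  \alpha \ge A^{\top}u - u_0 e_j,\ \ \beta \le b^{\top}u - u_0\pi_0,\ \
  \alpha \ge A^{\top}v + v_0 e_j,\ \ \beta \le b^{\top}v + v_0(\pi_0+1),\ \
  u,v,u_0,v_0 \ge 0,
\]
% plus a normalization constraint such as \|(u,u_0,v,v_0)\|_1 = 1.
% If the optimal value is positive, the inequality \alpha^T x >= \beta cuts off x*.
```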
7

Smith, Edmund. "Parallel solution of linear programs". Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8833.

Abstract:
The factors limiting the performance of computer software periodically undergo sudden shifts, resulting from technological progress, and these shifts can have profound implications for the design of high performance codes. At the present time, the speed with which hardware can execute a single stream of instructions has reached a plateau. It is now the number of instruction streams that may be executed concurrently which underpins estimates of compute power, and with this change, a critical limitation on the performance of software has come to be the degree to which it can be parallelised. The research in this thesis is concerned with the means by which codes for linear programming may be adapted to this new hardware. For the most part, it is codes implementing the simplex method which will be discussed, though these typically have lower performance for single solves than those implementing interior point methods. However, the ability of the simplex method to rapidly re-solve a problem makes it at present indispensable as a subroutine for mixed integer programming. The long history of the simplex method as a practical technique, with applications in many industries and in government, has led to such codes reaching a great level of sophistication. It would be unexpected in a research project such as this one to match the performance of top commercial codes with many years of development behind them. The simplex codes described in this thesis are, however, able to solve real problems of small to moderate size, rather than being confined to random or otherwise artificially generated instances. The remainder of this thesis is structured as follows. The rest of this chapter gives a brief overview of the essential elements of modern parallel hardware and of the linear programming problem. Both the simplex method and interior point methods are discussed, along with some of the key algorithmic enhancements required for such systems to solve real-world problems. Some background on the parallelisation of both types of code is given. The next chapter describes two standard simplex codes designed to exploit the current generation of hardware. i6 is a parallel standard simplex solver capable of being applied to a range of real problems and showing exceptional performance for dense, square programs. i8 is also a parallel, standard simplex solver, but now implemented for graphics processing units (GPUs).
8

Ramakrishnan, V. S., and Jeremy F. Shapiro. "Analyzing Multi-Objective Linear and Mixed Integer Programs by Lagrange Multipliers". Massachusetts Institute of Technology, Operations Research Center, 1991. http://hdl.handle.net/1721.1/5322.

Abstract:
A new method for multi-objective optimization of linear and mixed integer programs based on Lagrange multiplier methods is developed. The method resembles, but is distinct from, objective function weighting and goal programming methods. A subgradient optimization algorithm for selecting the multipliers is presented and analyzed. The method is illustrated by its application to a model for determining the weekly re-distribution of railroad cars from excess supply areas to excess demand areas, and to a model for balancing cost minimization against order completion requirements for a dynamic lot size model.
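A generic sketch of the multiplier idea described above, in our notation rather than the report's: targets are set for the secondary objectives and then dualized, with the multipliers adjusted by subgradient steps.

```latex
% Targets t_i on secondary objectives are dualized; multipliers are chosen by subgradient steps.
\[
  \min_{x\in X}\ f_1(x)\ \ \text{s.t.}\ \ f_i(x)\le t_i,\ i=2,\dots,k
  \qquad\Longrightarrow\qquad
  L(\lambda)=\min_{x\in X}\ f_1(x)+\sum_{i=2}^{k}\lambda_i\bigl(f_i(x)-t_i\bigr),\quad \lambda\ge 0,
\]
% with the update \lambda_i <- max{0, \lambda_i + s (f_i(x^\lambda) - t_i)},
% where x^\lambda solves the inner problem and s > 0 is a step size.
```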
9

Adams, Warren Philip. "The mixed-integer bilinear programming problem with extensions to zero-one quadratic programs". Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/74711.

Abstract:
This research effort is concerned with a class of mathematical programming problems referred to as Mixed-Integer Bilinear Programming Problems. This class of problems, which arises in production, location-allocation, and distribution-application contexts, may be considered as a discrete version of the well-known Bilinear Programming Problem in that one set of decision variables is restricted to be binary valued. The structure of this problem is studied, and special cases wherein it is readily solvable are identified. For the more general case, a new linearization technique is introduced and demonstrated to lead to a tighter linear programming relaxation than obtained through available linearization methods. Based on this linearization, a composite Lagrangian relaxation-implicit enumeration-cutting plane algorithm is developed. Extensive computational experience is provided to test the efficiency of various algorithmic strategies and the effects of problem data on the computational effort of the proposed algorithm. The solution strategy developed for the Mixed-Integer Bilinear Programming Problem may be applied, with suitable modifications, to other classes of mathematical programming problems: in particular, to the Zero-One Quadratic Programming Problem. In what may be considered as an extension to the work performed on the Mixed-Integer Bilinear Programming Problem, a solution strategy based on an equivalent linear reformulation is developed for the Zero-One Quadratic Programming Problem. The strategy is essentially an implicit enumeration algorithm which employs Lagrangian relaxation, Benders' cutting planes, and local explorations. Computational experience for this problem class is provided to justify the worth of the proposed linear reformulation and algorithm.
Ph. D.
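For orientation, the classical linearization of a bilinear term between a bounded continuous variable and a binary variable, which is the usual starting point for work of this kind, can be written as below; the tighter linearization introduced in the dissertation is not reproduced here.

```latex
% Exact linearization of w = x*y with y binary and 0 <= x <= U (textbook construction).
\[
  w \le U\,y,\qquad w \le x,\qquad w \ge x - U\,(1-y),\qquad w \ge 0 .
\]
% y = 0 forces w = 0; y = 1 forces w = x, so the four inequalities model w = x*y exactly.
```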
10

Gayen, Neela. "Automatic parallelization of stream programs for resource efficient embedded processors". Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/213058/1/Neela_Gayen_Thesis.pdf.

Abstract:
This thesis considers how to exploit the specific characteristics of data streaming functions and multi-core processors to increase throughput through appropriate software process mappings. The hypothesis is that large numbers of low-power processors can achieve high throughput for streaming applications if a good mapping is provided. The innovation is to use compilation principles to guide the mapping, rather than heuristics. Three increasingly complex approaches are developed: the first focuses on computational bottlenecks, the second adds communication overheads, and the last adds the costs of splitting and merging operations. This approach demonstrates that the successively more complex models can achieve correspondingly greater throughput.
11

Rush, Andrew J. "Partial Destination Resolution in Multicast Elastic Optical Networks: A Mixed-Integer Linear Programming Approach". Miami University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=miami1470324185.

12

Macfie, Peter. "Large-scale security constrained optimal reactive power flow for operational loss management on the GB electricity transmission network". Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/5073.

Abstract:
The transmission of power across the GB transmission system, as operated by National Grid, results in inevitable loss of electrical power. Operationally these power losses cannot be eliminated, but they can be reduced by adjustment of the system voltage profile. At present the minimisation of active power losses relies upon a lengthy manually based iterative adjustment process. Therefore the system operator requires the development of advanced optimisation tools to cope with the challenges faced over the next decade, such as achieving the stringent greenhouse gas emission targets laid down by the UK government, while continuing to provide an economical, secure and efficient service. To meet these challenges the research presented in this thesis has developed optimisation techniques that can assist control centre engineers by automatically setting up voltage studies that are low loss and low cost. The proposed voltage optimisation techniques have been shown to produce solutions that are secured against 800 credible contingency cases. A prototype voltage optimisation tool has been deployed, which required the development of a series of novel approaches to extend the functionality of an existing optimisation program. This research has led to the development of novel methods for handling multi-objectives, contradictory shunt switching configurations and selecting all credible contingencies. Studies indicate that a theoretical loss saving of 1.9% is achievable, equivalent to an annual emissions saving of approximately 64,000 tonnes of carbon dioxide. A novel security constrained mixed integer non-linear optimisation technique has also been developed. The proposed method has been shown to be superior to several conventional methods on a wide range of IEEE standard network models and also on a range of large-scale GB network models. The proposed method manages to further reduce active power losses and also satisfies all security constraints.
13

Menezes, Jeffrey Louis. "Use of isoperformance, constraint programming, and mixed integer linear programming for architecture tradespace exploration of passive Optical Earth Observation Systems". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119313.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, in conjunction with the Leaders for Global Operations Program at MIT, 2018.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-150).
This thesis presents work performed during the course of an internship at An Aerospace Company (AAC) and research performed at Massachusetts Institute of Technology (MIT) Lincoln Laboratory as part of a fellowship. Both efforts entailed the development of architecture tradespace exploration models for space systems. The tradespace exploration model developed at AAC, called the Earth Observation Architecture Isoperformance Model (EO-AIM), uses automation techniques, isoperformance, and constraint programming to rapidly construct potential space-based passive optical EO sensor architecture concepts which meet a given set of customer requirements. Cost estimates are also generated for each sensor concept via integration with stakeholder-trusted cost modeling software, allowing cost to be treated as both an independent variable and a consequence when evaluating various architecture solutions. The EO-AIM then uses simple algorithms to identify potential satellite bus options for hosting each sensor architecture in orbit. The total cost of populating an entire constellation based on the sensor architecture is finally estimated using cost estimates for the sensor, satellite bus, and the best launch vehicle option capable of lifting the satellite(s) to orbit. In general, the EO-AIM seeks to bolster AAC's capabilities for conducting architecture tradespace exploration and initial proposal development given advancements in satellite bus, launch vehicle, and sensing technologies. The tradespace exploration model developed at MIT Lincoln Laboratory is a satellite network mixed integer linear program (MILP) which is used for making system architecture decisions and estimating final architecture cost. The satellite network MILP is formulated as both an assignment problem and a network maximum flow problem which must send sensor-generated data to a ground user. Results of the MILP vary with the selected objective function and provide insights on the potential benefits of architecture decisions such as sensor disaggregation and the utility of introducing additional communication nodes into existing networks. The satellite network MILP is also capable of verifying network data volume throughput capacity and providing an optimized link schedule for the duration of the simulation. Overall, the satellite network MILP model explores the general problem of optimizing use of limited resources for a given space-based sensor while ensuring mission data needs are met. It is a higher fidelity alternative to the simple satellite bus and launch vehicle compatibility algorithm used in EO-AIM. Both models are shown to improve architecture tradespace exploration of space-based passive-optical EO systems. With a simple demonstration, it is shown that using the EO-AIM can increase the number of sensor architecture concepts generated by a factor of ten or more by creating all feasible sensor architecture concepts given user inputs and settings. Furthermore, the use of the satellite network MILP to examine alternative network architecture options for NASA's HyspIRI mission resulted in a system architecture with 20% higher data throughput for marginally less cost.
by Jeffrey Louis Menezes.
S.M.
M.B.A.
14

Qiu, Feng. "Probabilistic covering problems". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47567.

Abstract:
This dissertation studies optimization problems that involve probabilistic covering constraints. A probabilistic constraint requires that the probability that a set of constraints involving random coefficients with known distributions holds satisfies a minimum requirement. A covering constraint involves a linear inequality on non-negative variables with a greater-than-or-equal-to sign and non-negative coefficients. A variety of applications, such as set cover problems, node/edge cover problems, crew scheduling, production planning, facility location, and machine learning, in uncertain settings involve probabilistic covering constraints. In the first part of this dissertation we consider probabilistic covering linear programs. Using the sample average approximation (SAA) framework, a probabilistic covering linear program can be approximated by a covering k-violation linear program (CKVLP), a deterministic covering linear program in which at most k constraints are allowed to be violated. We show that CKVLP is strongly NP-hard. Then, to improve the performance of standard mixed-integer programming (MIP) based schemes for CKVLP, we (i) introduce and analyze a coefficient strengthening scheme, (ii) adapt and analyze an existing cutting plane technique, and (iii) present a branching technique. Through computational experiments, we empirically verify that these techniques are significantly effective in improving solution times over the CPLEX MIP solver. In particular, we observe that the proposed schemes can cut down solution times from as much as six days to under four hours in some instances. We also developed valid inequalities arising from two subsets of the constraints in the original formulation. When incorporating them with a modified coefficient strengthening procedure, we are able to solve a difficult probabilistic portfolio optimization instance listed in MIPLIB 2010, which cannot be solved by existing approaches. In the second part of this dissertation we study a class of probabilistic 0-1 covering problems, namely probabilistic k-cover problems. A probabilistic k-cover problem is a stochastic version of a set k-cover problem, which is to seek a collection of subsets with a minimal cost whose union covers each element in the set at least k times. In a stochastic setting, the coefficients of the covering constraints are modeled as Bernoulli random variables, and the probabilistic constraint imposes a minimal requirement on the probability of k-coverage. To account for the absence of full distributional information, we define a general ambiguous k-cover set, which is "distributionally robust." Using a classical linear program (called the Boolean LP) to compute the probability of events, we develop an exact deterministic reformulation of this ambiguous k-cover problem. However, since the Boolean model contains an exponential number of auxiliary variables, and hence is not useful in practice, we use two linear-programming-based bounds on the probability that at least k events occur, which can be obtained by aggregating the variables and constraints of the Boolean model, to develop tractable deterministic approximations to the ambiguous k-cover set. We derive new valid inequalities that can be used to strengthen the linear programming based lower bounds. Numerical results show that these new inequalities significantly improve the probability bounds. To use standard MIP solvers, we linearize the multi-linear terms in the approximations and develop mixed-integer linear programming formulations.
We conduct computational experiments to demonstrate the quality of the deterministic reformulations in terms of cost effectiveness and solution robustness. To demonstrate the usefulness of the modeling technique developed for probabilistic k-cover problems, we formulate a number of problems that have until now only been studied under the data independence assumption, and we also introduce a new application that can be modeled using the probabilistic k-cover model.
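For orientation, the chance-constrained covering model and its sampled counterpart described above can be written generically as follows; the notation is ours, and M denotes a sufficiently large constant.

```latex
% Probabilistic covering constraint and its sample average approximation (SAA).
\[
  \mathbb{P}\bigl(\tilde A x \ge \tilde b\bigr) \ \ge\ 1-\epsilon,\ \ x\ge 0
  \qquad\leadsto\qquad
  A^{i}x \ \ge\ b^{i} - M z_i\ \ (i=1,\dots,N),\qquad \sum_{i=1}^{N} z_i \ \le\ k,\qquad z\in\{0,1\}^{N},
\]
% where (A^i, b^i) are sampled scenarios, z_i = 1 marks a scenario allowed to be violated,
% and k is roughly \epsilon N; this is the covering k-violation LP (CKVLP) in a standard
% big-M MIP formulation.
```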
15

Tollefson, Eric Sander. "Optimal randomized and non-randomized procedures for multinomial selection problems". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43629.

Abstract:
Multinomial selection problem procedures are ranking and selection techniques that aim to select the best (most probable) alternative based upon a sequence of multinomial observations. The classical formulation of the procedure design problem is to find a decision rule for terminating sampling. The decision rule should minimize the expected number of observations taken while achieving a specified indifference zone requirement on the prior probability of making a correct selection when the alternative configurations are in a particular subset of the probability space called the preference zone. We study the constrained version of the design problem in which there is a given maximum number of allowed observations. Numerous procedures have been proposed over the past 50 years, all of them suboptimal. In this thesis, we find via linear programming the optimal selection procedure for any given probability configuration. The optimal procedure turns out to be necessarily randomized in many cases. We also find via mixed integer programming the optimal non-randomized procedure. We demonstrate the performance of the methodology on a number of examples. We then reformulate the mathematical programs to make them more efficient to implement, thereby significantly expanding the range of computationally feasible problems. We prove that there exists an optimal policy which has at most one randomized decision point and we develop a procedure for finding such a policy. We also extend our formulation to replicate existing procedures. Next, we show that there is very little difference between the relative performances of the optimal randomized and non-randomized procedures. Additionally, we compare existing procedures using the optimal procedure as a benchmark, and produce updated tables for a number of those procedures. Then, we develop a methodology that guarantees the optimal randomized and non-randomized procedures for a broad class of variable observation cost functions -- the first of its kind. We examine procedure performance under a variety of cost functions, demonstrating that incorrect assumptions regarding marginal observation costs may lead to increased total costs. Finally, we investigate and challenge key assumptions concerning the indifference zone parameter and the conditional probability of correct selection, revealing some interesting implications.
16

Olanders, David. "Optimal Time-Varying Cash Allocation". Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273626.

Abstract:
A payment is the most fundamental aspect of a trade that involves funds. In recent years, the development of new payment services has accelerated significantly as the world has moved further into the digital era. This transition has led to an increased demand for digital payment solutions that can handle trades across the world. As trades today can be agreed at any time, wherever the payer and payee are located, the party that mediates payments must be available at any time in order to mediate an agreed exchange. This requires the payment service provider to always have funds available in the required countries and currencies in order for trades to always be possible. This thesis concerns how a payment service provider can reallocate capital in a cost-efficient way so that trades are always available. Traditionally, the reallocation of capital is done in a rule-based manner, which discards the cost dimension and thereby focuses only on the reallocation itself. This thesis concerns methods to optimally reallocate capital focusing on the cost of transferring capital within the network, where the considered methods have the potential of transferring capital in a far more cost-efficient way. When the reallocation decisions are formulated mathematically as an optimization problem, the cost function is formulated as a linear program with both Boolean and real-valued constraints on the variables. This makes it infeasible to locate the optimal solution using traditional methods for linear programs, which is why extensions of traditional methods and more advanced methods were used. The model was evaluated based on a large number of simulations in comparison with the performance of a rule-based reallocation system. The developed model provides a significant cost reduction compared to the rule-based approach and thereby outperforms the traditional reallocation system. Future work should focus on expanding the model by broadening the available transfer options, by increasing the considered uncertainty via a Bayesian treatment and finally by considering all cost aspects of the network.
A payment is the most fundamental aspect of trade that involves capital. In recent years, the development of new means of payment has increased drastically as the world has continued to develop through digitalization. This development has led to an increased demand for digital payment solutions that can handle trade across the whole world. Since trade today can take place at any time, regardless of where the payer and the payee are located, the system that carries out the payment must always be available in order to mediate trade between different parties. This requires the payment system to always have funds available in the demanded countries and currencies for the trade to be carried out. This thesis focuses on how capital can be reallocated cost-efficiently in a payment system to ensure that trade is always available. Traditionally, the reallocation of capital has been done in a rule-based way, which has not taken the cost dimension into account and has thereby focused only on the reallocation itself. This thesis uses methods to optimally reallocate capital based on the costs of the reallocation, thereby creating an opportunity to move capital in a considerably more cost-efficient way. When the reallocation decisions are formulated mathematically as an optimization problem, the cost function is formulated as a linear program with both Boolean and real-valued constraints on the variables. This means that traditional solution methods for linear programs cannot be used to find the optimal solution, which is why extensions of traditional methods together with more advanced methods were used. The model was evaluated based on a large number of simulations comparing its performance with the rule-based system. The developed model achieves a significant cost reduction in comparison with the rule-based system and thereby outperforms the traditionally used system. Future work should focus on expanding the model by extending the potential transfer options, taking greater account of uncertainty through a Bayesian treatment, and finally integrating all cost aspects of the network.
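To make the kind of model described above concrete, here is a toy fixed-charge reallocation sketch in Python using the PuLP modeling library; the accounts, balances and cost figures are invented for illustration and do not come from the thesis.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

# Toy network: current balances, required minimum balances, and transfer costs (illustrative).
accounts = ["SEK", "EUR", "USD"]
balance  = {"SEK": 120.0, "EUR": 10.0, "USD": 40.0}
required = {"SEK": 50.0, "EUR": 60.0, "USD": 30.0}
arcs = [("SEK", "EUR"), ("SEK", "USD"), ("USD", "EUR")]
fixed_cost = {a: 5.0 for a in arcs}       # fee charged once per transfer used
prop_cost  = {a: 0.01 for a in arcs}      # fee proportional to the amount moved

m = LpProblem("cash_reallocation", LpMinimize)
x = {a: LpVariable(f"amount_{a[0]}_{a[1]}", lowBound=0) for a in arcs}   # continuous amount
y = {a: LpVariable(f"use_{a[0]}_{a[1]}", cat=LpBinary) for a in arcs}    # Boolean: arc used?

# Objective: total transfer cost (fixed fee if an arc is used plus a proportional fee).
m += lpSum(fixed_cost[a] * y[a] + prop_cost[a] * x[a] for a in arcs)

big_m = max(balance.values())
for a in arcs:
    m += x[a] <= big_m * y[a]             # no flow unless the transfer is activated

for acc in accounts:
    inflow  = lpSum(x[a] for a in arcs if a[1] == acc)
    outflow = lpSum(x[a] for a in arcs if a[0] == acc)
    m += balance[acc] + inflow - outflow >= required[acc]   # end-of-day requirement

m.solve()
for a in arcs:
    if x[a].value() and x[a].value() > 1e-6:
        print(a, round(x[a].value(), 2))
```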
17

Varanis, Luciano Pereira. "Programação estocástica para fundos de pensão". Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/13859.

Abstract:
In this dissertation we discuss stochastic programming models and solution methods for solving ALM problems for pension funds. We present the model of Drijver et al., which is based on multistage mixed-integer stochastic programming. A case study based on simulation for an ALM problem is presented.
In this dissertation we discuss stochastic programming models and solution methods for solving ALM problems in pension funds. We present the model of Drijver et al., based on multistage mixed-integer stochastic programming. A case study for an ALM problem is presented using scenario simulation.
18

Olabode, John A. "Analysis of the performance of an optimization model for time-shiftable electrical load scheduling under uncertainty". Thesis, Monterey, California: Naval Postgraduate School, 2016. http://hdl.handle.net/10945/51591.

Abstract:
Approved for public release; distribution is unlimited
To ensure sufficient capacity to handle unexpected demands for electric power, decision makers often over-estimate expeditionary power requirements. Therefore, we often use limited resources inefficiently by purchasing more generators and investing in more renewable energy sources than needed to run power systems on the battlefield. Improving the efficiency of expeditionary power units requires better management of load requirements on the power grids and, where possible, shifting those loads to a more economical time of day. We analyze the performance of a previously developed optimization model for scheduling time-shiftable electrical loads in an expeditionary power grid model in two experiments. One experiment uses model data similar to the original baseline data, in which expected demand and expected renewable production remain constant throughout the day. The second experiment introduces unscheduled demand and realistic fluctuations in the power production and demand distribution data that more closely reflect actual data. Our major findings show that the composition of the energy grid's power production affects which uncertain factor(s) influence fuel consumption, and that uncertainty in the energy grid system does not always increase fuel consumption by a large amount. We also discover that the generators running the most do not always have the best load factor on the grid, even when optimally scheduled.
Lieutenant Commander, United States Navy
19

Kim, Bosung. "Two-stage combinatorial optimization framework for air traffic flow management under constrained capacity". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53500.

Abstract:
Air traffic flow management is a critical component of air transport operations because at some point in time, often very frequently, one or more of the critical resources in the air transportation network has significantly reduced capacity, resulting in congestion and delay for airlines and other entities and individuals who use the network. Typically, these “bottlenecks” are noticed at a given airport or terminal area, but they also occur in en route airspace. The two-stage combinatorial optimization framework for air traffic flow management under constrained capacity that is presented in this thesis represents an important step towards the full consideration of the combinatorial nature of air traffic flow management decisions, which is often ignored or dealt with via priority-based schemes. It also illustrates the similarities between two traffic flow management problems that heretofore were considered to be quite distinct. The runway systems at major airports are highly constrained resources. From the perspective of arrivals, unnecessary delays and emissions may occur during peak periods when one or more runways at an airport are in great demand while other runways at the same airport are operating under their capacity. The primary cause of this imbalance in runway utilization is that the traffic flow into and out of the terminal areas is asymmetric (as a result of airline scheduling practices), and arrivals are typically assigned to the runway nearest the fix through which they enter the terminal areas. From the perspective of departures, delays and emissions occur because arrivals take precedence over departures with regard to the utilization of runways (despite the absence of binding safety constraints), and because departure trajectories often include level segments that ensure “procedural separation” from arriving traffic while planes are not allowed to climb unrestricted along the most direct path to their destination. Similar to the runway systems, the terminal radar approach control (TRACON) boundary fixes are also constrained resources of the terminal airspace. Because some arrival traffic from different airports merges at an arrival fix, a queue for the terminal areas generally starts to form at the arrival fix, caused by delays due to heavy arriving traffic streams. The arrivals must then absorb these delays by path stretching and adjusting their speed, resulting in unplanned fuel consumption. However, these delays are often not distributed evenly. As a result, some arrival fixes experience severe delays while, similar to the runway systems, other arrival fixes might experience no delays at all. The goal of this thesis is to develop a combined optimization approach for terminal airspace flow management that assigns a TRACON boundary fix and a runway to each flight while minimizing the required fuel burn and emissions. The approach lessens the severity of terminal capacity shortages caused by an imbalance of traffic demand by shunting flights from current positions to alternate runways. This is done by considering every possible path combination. To attempt to solve the congestion of the terminal airspace at both runways and arrival fixes, this research focuses on two sequential optimizations. The fix assignments are dealt with by considering, simultaneously, the capacity constraints of fixes and runways as well as the fuel consumption and emissions of each flight.
The research also develops runway assignments with runway scheduling such that the total emissions produced in the terminal area and on the airport surface are minimized. The two-stage sequential framework is also extended to en route airspace. When en route airspace loses its capacity for any reason, e.g., severe weather conditions, air traffic controllers and flight operators plan flight schedules together based on the given capacity limit, thereby maximizing en route throughput and minimizing flight operators' costs. However, the current methods have limitations due to the lack of consideration of the combinatorial nature of air traffic flow management decisions. One of the initial attempts to overcome these limitations is the Collaborative Trajectory Options Program (CTOP), which will be initiated soon by the Federal Aviation Administration (FAA). The developed two-stage combinatorial optimization framework fits this CTOP perfectly from the flight operator's perspective. The first stage is used to find an optimal slot allocation for flights while satisfying the ration-by-schedule (RBS) algorithm of the FAA. To solve the formulated first-stage problem efficiently, two different solution methodologies, a heuristic algorithm and a modified branch-and-bound algorithm, are presented. Then, flights are assigned to the resulting optimized slots in the second stage so as to minimize the flight operator's costs.
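The flight-to-slot assignment stage mentioned above can be written generically as the following integer program; the notation is ours and does not reproduce the thesis's formulation.

```latex
% Generic flight-to-slot assignment: z_{fs} = 1 if flight f is assigned to slot s.
\[
  \min \sum_{f}\sum_{s} c_{fs}\,z_{fs}
  \quad\text{s.t.}\quad
  \sum_{s} z_{fs} = 1\ \ \forall f,\qquad
  \sum_{f} z_{fs} \le 1\ \ \forall s,\qquad
  z_{fs}\in\{0,1\},
\]
% with z_{fs} fixed to 0 whenever slot s is earlier than flight f's earliest feasible time,
% and c_{fs} the operator's cost of delaying flight f to slot s.
```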
20

He, Qing. "Robust-Intelligent Traffic Signal Control within a Vehicle-to-Infrastructure and Vehicle-to-Vehicle Communication Environment". Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/196011.

Abstract:
Modern traffic signal control systems have not changed significantly in the past 40-50 years. The most widely applied traffic signal control systems are still time-of-day, coordinated-actuated systems, since many existing advanced adaptive signal control systems are too complicated and unfathomable for most people. Recent advances in communications standards and technologies provide the basis for significant improvements in traffic signal control capabilities. In the United States, the IntelliDriveSM program (originally called Vehicle Infrastructure Integration - VII) has identified 5.9 GHz Dedicated Short Range Communications (DSRC) as the primary communications mode for vehicle-to-vehicle (v2v) and vehicle-to-infrastructure (v2i) safety-based applications, denoted as v2x. The ability for vehicles and the infrastructure to communicate information is a significant advance over the current system capability of point presence and passage detection that is used in traffic control systems. Given enriched data from IntelliDriveSM, the problem of traffic control can be solved in an innovative data-driven and mathematical way to produce robust and optimal outputs. In this doctoral research, three different problems within a v2x environment, "enhanced pseudo-lane-level vehicle positioning", "robust coordinated-actuated multiple priority control", and "multimodal platoon-based arterial traffic signal control", are addressed with statistical techniques and mathematical programming. First, a pseudo-lane-level GPS positioning system is proposed based on an IntelliDriveSM v2x environment. GPS errors can be categorized into common-mode errors and noncommon-mode errors, where common-mode errors can be mitigated by differential GPS (DGPS) but noncommon-mode errors cannot. Common-mode GPS errors are cancelled using differential corrections broadcast from the road-side equipment (RSE). With v2i communication, a high-fidelity roadway layout map (called MAP in the SAE J2735 standard) and satellite pseudo-range corrections are broadcast by the RSE. To enhance and correct lane-level positioning of a vehicle, a statistical process control approach is used to detect significant vehicle driving events such as turning at an intersection or lane-changing. Whenever a turn event is detected, a mathematical program is solved to estimate and update the GPS noncommon-mode errors. Overall, the GPS errors are reduced by corrections to both common-mode and noncommon-mode errors. Second, an analytical mathematical model, a mixed-integer linear program (MILP), is developed to provide robust real-time multiple priority control, assuming penetration of IntelliDriveSM is limited to emergency vehicles and transit vehicles. This is believed to be the first mathematical formulation which accommodates advanced features of modern traffic controllers, such as green extension and vehicle actuations, to provide flexibility in the implementation of optimal signal plans. Signal coordination between adjacent signals is addressed by virtual coordination requests, which behave significantly differently from the current coordination control in a coordinated-actuated controller. The proposed new coordination method can handle both priority and coordination together to reduce and balance delays for buses and automobiles with real-time optimized solutions. The robust multiple priority control problem was simplified as a polynomial cut problem under some reasonable assumptions and applied at a real-world intersection at Southern Ave. & 67 Ave.
in Phoenix, AZ on February 22, 2010 and March 10, 2010. The roadside equipment (RSE) was installed in the traffic signal control cabinet and connected with a live traffic signal controller via Ethernet. With the support of Maricopa County's Regional Emergency Action Coordinating (REACT) team, three REACT vehicles were equipped with onboard equipment (OBE). Different priority scenarios were tested, including concurrent requests, conflicting requests, and mixed requests. The experiments showed that the traffic controller was able to perform desirably under each scenario. Finally, a unified platoon-based mathematical formulation called PAMSCOD is presented to perform online arterial (network) traffic signal control while considering multiple travel modes in the IntelliDriveSM environment with high market penetration, including passenger vehicles. First, a hierarchical platoon recognition algorithm is proposed to identify platoons in real time. This algorithm can output the number of platoons approaching each intersection. Second, a mixed-integer linear program (MILP) is solved to determine the future optimal signal plans based on the real-time platoon data (and the platoon requests for service) and the current traffic controller status. Deviating from the traditional common network cycle length, PAMSCOD aims to provide multi-modal dynamical progression (MDP) on the arterial based on the real-time platoon information. The integer feasible solution region is enhanced in order to reduce the solution times by assuming a first-come, first-served discipline for the platoon requests on the same approach. Microscopic online simulation in VISSIM shows that PAMSCOD can easily handle two traffic modes, buses and automobiles, jointly and significantly reduce delays for both modes, compared with SYNCHRO-optimized plans.
21

Sahin, Cem. "Optimization Of Electricity Markets In The Price Based And Security Constrained Unit Commitment Problems Frameworks". PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612242/index.pdf.

Abstract:
Operation of the electricity markets is subject to a number of strict and specific constraints such as continuous load-generation balance, security of supply, and generation technology related limitations. In this study, contributions are made to two important problems of the electricity markets. The Price Based Unit Commitment (PBUC) problem in the literature, which is a tool for a GENCO's operations planning, is extended by considering the interdependencies between the Natural Gas (NG) and electricity infrastructures and the uncertainty of wind power generation. The effect of the NG infrastructure's physical limitations is considered via linearized NG transmission system equations, and the uncertainties of wind energy sources and conventional generation resources are simulated by Monte-Carlo simulations. The contribution of forward energy Bilateral Contracts (BC), as a financial risk hedging tool, is also included by modeling these in the proposed PBUC framework. In the case studies, it is observed that a GENCO could prevent its financial losses due to NG interruptions by depositing only a portion of the midterm interrupted NG in storage facilities. The Security Constrained Unit Commitment (SCUC) problem is a widely accepted tool in the industry which models the market clearing process. This study integrates two novelties into the SCUC problem:
• a discrete demand response model to consider active participation of the consumers, and
• a hybrid deterministic/stochastic contingency model to represent the N-1 contingencies together with the uncertainties related to wind power generation and system load.
It is observed that the curtailment of available wind power capacity would enable the TSO to take corrective actions against the occurrence of contingencies and the realization of uncertainties in the most economical manner possible.
22

Chandrashekar, Sachin. "Impact of Flexibility in Plug-in Electric Vehicle Charging with Uncertainty of Wind". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462551799.

23

Robazzi, João Vítor Silva. "Novos limitantes inferiores para o flowshop com buffer zero". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18156/tde-25092018-143916/.

Abstract:
Production sequencing and scheduling bring major financial benefits to companies when carried out properly. Currently, generalized solutions produce acceptable results, but they yield smaller benefits when compared with problem-specific studies. The field of results optimization has two types of solution: exact methods for smaller problems and non-exact, or heuristic, methods for medium and large problems. This work presents exact Branch & Bound algorithms and Mixed Integer Linear Programming models to solve four variants of scheduling problems: Fm|block|∑Cjm, Fm|block|∑Tj, Fm|block, Sijk|∑Cjm and Fm|block, Sijk|∑Tj. The approaches are new in the literature and produced encouraging results for most scenarios. The bound for total flow time reached the optimal answer in 100% of the cases for problems with up to 20 jobs and 4 machines in less than one hour. For total tardiness, the bound proved more efficient when the due dates are highly dispersed. For the setup cases, three variants of bounds were developed for each problem. The setup bound with the best performance was the one with the best trade-off between its numerical value and its computational cost. The MILP models solved 100% of the problems without setup for up to 20 jobs and 4 machines, and for the setup cases, problems with up to 14 jobs and 4 machines were solved within the one-hour time limit. The computational tests show the efficiency in reducing the number of nodes and, consequently, the execution time. Therefore, this study indicates that, for small and medium-sized problems, the methods in question have great potential for practical applications.
Job sequencing and scheduling bring both financial and organizational benefits to any company when performed properly. Nowadays, there is still a gap between theory and practice due to solutions that are short on specificity. The analyzed problems differ in type and dimension, which modifies their complexity. The field of results optimization is divided into two types of solution: exact solutions for smaller problems and non-exact solutions for larger problems. The present work presents exact algorithms, based on Branch & Bound and Mixed Integer Linear Programming models, to solve the problems Fm|block|∑Cjm, Fm|block|∑Tj, Fm|block, Sijk|∑Cjm and Fm|block, Sijk|∑Tj. The approaches are new and presented good results for most cases. Bounds for the no-setup total flow time scenario solved 100% of the 20-job, 4-machine cases. Due dates with a high dispersion range contributed to the effectiveness of the no-setup total tardiness bound. Three different approaches were developed for the setup cases. The best approach aimed to optimize the value/effort trade-off for the B&B. The Mixed Integer Linear Programming models solved 100% of the no-setup cases for 20 jobs and 4 machines. For the setup cases, the MILPs solved instances with up to 14 jobs and 4 machines to optimality. Computational tests were executed and analyzed, and they highlighted the reduction in node count and, consequently, in execution time. The present study points out that the exact methods can be applied to small and medium scheduling problems in practice.
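For context, models and bounds for the zero-buffer (blocking) permutation flowshop are usually built on the standard departure-time recurrence below; the notation is generic and not taken from the dissertation.

```latex
% d_{i,k}: time the job in sequence position i leaves machine k; p_{i,k}: its processing time.
\[
  d_{i,0} = d_{i-1,1},\qquad
  d_{i,k} = \max\bigl(d_{i,k-1} + p_{i,k},\ d_{i-1,k+1}\bigr)\ \ (k=1,\dots,m-1),\qquad
  d_{i,m} = d_{i,m-1} + p_{i,m},
\]
% with d_{0,k} = 0. The max term captures blocking: with no buffer, a job cannot leave machine k
% until its predecessor has left machine k+1. Completion times are C_{[i]} = d_{i,m}, from which
% the total flow time and total tardiness objectives follow.
```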
24

Oluleye, Oluwagbemisola Olarinde. "Integration of waste heat recovery in process sites". Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/integration-of-waste-heat-recovery-in-process-sites(ebbc2669-2c9b-40be-9eae-8d2252f0286f).html.

Abstract:
Exploitation of waste heat could achieve economic and environmental benefits while at the same time increasing energy efficiency in process sites. Diverse commercialised technologies exist to recover useful energy from waste heat. In addition, there are multiple on-site and off-site end-uses of recovered energy. The challenge is to find the optimal mix of technologies and end-uses of recovered energy, taking into account the quantity and quality of waste heat sources, interactions with interconnected systems and constraints on capital investment. Explicit models for waste heat recovery technologies that are easily embedded within appropriate process synthesis frameworks are proposed in this work. A novel screening tool is also proposed to guide the selection of technology options. The screening tool considers the deviation of the actual performance from the ideal performance of technologies, where the actual performance takes into account irreversibilities due to finite-temperature heat transfer. Results from applying the screening tool show that better temperature matching between heat sources and technologies reduces the energy quality degradation during the conversion process. A ranking criterion is also proposed to evaluate end-uses of recovered energy. Applying the ranking criterion shows that the use to which energy recovered from waste heat is put determines the economics and the potential to reduce CO2 emissions when waste heat recovery is integrated in process sites. This thesis also proposes a novel methodological framework based on graphical and optimization techniques to integrate waste heat recovery into existing process sites. The graphical techniques are shown to provide useful insights into the features of a good solution and assess the potential in industrial waste heat prior to detailed design. The optimization model allows systematic selection and combination of waste heat source streams, selection of technology options and technology working fluids, and exploitation of interactions with interconnected systems. The optimization problem is formulated as a Mixed Integer Linear Program, solved using the branch-and-bound algorithm. The objective is to maximize the economic potential considering the capital investment, maintenance costs and operating costs of the selected waste heat recovery technologies. The methodology is applied to industrial case studies. Results indicate that combining waste heat recovery options yields additional increases in efficiency and reductions in CO2 emissions and costs. The case studies also demonstrate that significant benefits from waste heat utilization can be achieved when interactions with interconnected systems are considered simultaneously. The thesis shows that the methodology has the potential to identify, screen, select and combine waste heat recovery options for process sites. Results suggest that recovery of waste heat can improve the energy security of process sites and global energy security through the conservation of fuel and the reduction in CO2 emissions and costs. The methodological framework can inform the integration of waste heat recovery in the process industries and the formulation of public policies on industrial waste heat utilization.
Style APA, Harvard, Vancouver, ISO itp.
25

Baud-Lavigne, Bertrand. "Conception conjointe de nomenclatures et de la chaîne logistique pour une famille de produits : outils d'optimisation et analyse". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00770172.

Pełny tekst źródła
Streszczenie:
The work presented in this thesis concerns optimization methods for the joint design of the bills of materials of a product family and of its supply chain. In both manufacturing and service industries, a highly competitive commercial environment forces companies to diversify their offerings in order to better meet customer demand. Managing this diversity then becomes a central issue: how can a wide variety of products be offered to satisfy customer needs while keeping production, inventory and logistics costs under control? The answers to this problem draw on disciplines that are usually kept separate: product design, production and logistics. While most existing approaches treat these issues sequentially, interdisciplinarity appears to be an essential element in managing diversity. The objective of this thesis is to investigate how to improve the interactions between product family design and supply chain optimization by proposing an intermediate design step and by developing mathematical tools, with particular attention to sustainable development issues.
Style APA, Harvard, Vancouver, ISO itp.
26

Darwiche, Mostafa. "When operations research meets structural pattern recognition : on the solution of error-tolerant graph matching problems". Thesis, Tours, 2018. http://www.theses.fr/2018TOUR4022/document.

Pełny tekst źródła
Streszczenie:
This thesis lies at the intersection of two scientific research fields: Structural Pattern Recognition (SPR) and Operations Research (OR). The former aims to make machines more intelligent and able to recognize objects, in particular graph-based representations, while the latter focuses on solving hard combinatorial optimization problems. The main idea of this thesis is to combine the knowledge of these two fields. Among the hard problems in SPR, the Graph Edit Distance (GED) problem was selected as the core of this work. The contributions concern the design of methods adapted from the OR field for solving the GED problem. Specifically, new integer linear programming models and matheuristics were developed for this purpose, and very good results were obtained compared with existing approaches.
This thesis is focused on Graph Matching (GM) problems and in particular Graph Edit Distance (GED) problems. There is growing interest in these problems due to their numerous applications in different research domains, e.g. biology, chemistry, computer vision, etc. However, these problems are known to be complex and hard to solve, as GED is an NP-hard problem. The main objectives of this thesis are to develop methods for solving GED problems to optimality and/or heuristically. The Operations Research (OR) field offers a wide range of exact and heuristic algorithms that have achieved very good results on optimization problems, so essentially all the contributions presented in this thesis are methods inspired by the OR field. The exact methods are designed based on a deep analysis and understanding of the problem, and are presented as Mixed Integer Linear Program (MILP) formulations. The proposed heuristic approaches are adapted versions of existing MILP-based heuristics (also known as matheuristics), incorporating problem-dependent information to improve their performance and accuracy.
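As a minimal, hypothetical illustration of how GED-related problems can be written as integer programs (this is not one of the thesis formulations), the sketch below encodes the classic node-assignment approximation of GED as a small binary program in PuLP. The graphs, labels and edit costs are invented, and edge edit costs are ignored in this simplified version.

```python
# Minimal sketch (assumed data): the node-assignment (bipartite) approximation
# of Graph Edit Distance, solved as a small binary program.
import pulp

g1_labels = ["C", "N", "O"]          # node labels of graph 1 (assumed)
g2_labels = ["C", "C", "O"]          # node labels of graph 2 (assumed)
c_del, c_ins = 2.0, 2.0              # deletion / insertion costs (assumed)

def c_sub(a, b):
    return 0.0 if a == b else 1.0    # substitution cost (assumed)

n, m_ = len(g1_labels), len(g2_labels)
# Augmented assignment: rows = G1 nodes + "insertion" slots,
# columns = G2 nodes + "deletion" slots.
rows, cols = range(n + m_), range(m_ + n)

def cost(i, k):
    if i < n and k < m_:   return c_sub(g1_labels[i], g2_labels[k])  # substitute i -> k
    if i < n and k >= m_:  return c_del if k - m_ == i else 1e6      # delete node i
    if i >= n and k < m_:  return c_ins if i - n == k else 1e6       # insert node k
    return 0.0                                                       # eps -> eps

prob = pulp.LpProblem("bipartite_ged", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (rows, cols), cat="Binary")
prob += pulp.lpSum(cost(i, k) * x[i][k] for i in rows for k in cols)
for i in rows:
    prob += pulp.lpSum(x[i][k] for k in cols) == 1   # each row assigned once
for k in cols:
    prob += pulp.lpSum(x[i][k] for i in rows) == 1   # each column assigned once

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("approximate GED:", pulp.value(prob.objective))
```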
Style APA, Harvard, Vancouver, ISO itp.
27

Jörg, Markus [Verfasser]. "k-disjunctive cuts and cutting plane algorithms for general mixed integer linear programs / Markus Jörg". 2008. http://d-nb.info/991900855/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
28

Ogunyomi, Babafemi Anthony. "Petroleum development optimization under uncertainty : integrating multi-compartment tank models in mixed integer non-linear programs". Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-12-2504.

Pełny tekst źródła
Streszczenie:
A field development plan is an important document used to show shareholders and investors that every aspect of the project has been carefully evaluated. The field development plan should include the objectives of the development, petroleum engineering data, operating and maintenance principles, a description of engineering facilities, cost and manpower estimates, project planning and a budget proposal. But to arrive at decisions concerning the contents of the field development plan, many concepts and ideas have to be screened so that the best ones are carried forward for detailed analysis. This screening process can be daunting, as there is no limit to the number of viable concepts and ideas. In addition, for a new field there is hardly ever enough data to fully characterize the reservoir at the time the field development plan is being formulated, because there are only a handful of wells in the reservoir. This lack of information about the reservoir introduces uncertainty into the analysis performed during concept screening and selection, and can have a significant impact on the quality of the project. In this work, we present a simple integrated asset model that can be used in conjunction with a proposed framework at the concept screening and selection phase of a project to evaluate the impact of uncertainty in the input variables on key project drivers. The model can be used to screen multiple concepts and arrive at a few promising ones that point the direction for detailed studies. The application of the model is demonstrated with synthetic cases formulated for a deepwater field at the concept selection phase. In the demonstration, we investigate how uncertainty in the reservoir thickness (NTG) and the degree of heterogeneity affects the optimal choices of initial facility size, the number of rigs and the number of pre-drilled wells.
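The screening framework described above evaluates how uncertainty in inputs such as NTG propagates to project drivers like facility size. The sketch below is a minimal, hypothetical Monte Carlo screening of that kind; the NTG distribution, the production proxy and the candidate facility sizes are all invented for illustration and are not the thesis model.

```python
# Minimal sketch (assumptions only, not the thesis model): Monte Carlo
# screening of facility-size concepts under an uncertain net-to-gross (NTG).
import random

random.seed(0)
facility_sizes = [50_000, 80_000, 120_000]     # bbl/day capacities (assumed)
capex_per_bpd = 20_000                         # $ per bbl/day of capacity (assumed)
oil_price, opex_frac, years = 70.0, 0.10, 10   # assumed economics

def npv(capacity, ntg, rate=0.1):
    """Crude proxy: deliverability scales with NTG, capped by facility size."""
    potential = 150_000 * ntg                  # field deliverability, bbl/day (assumed)
    production = min(capacity, potential)
    capex = capex_per_bpd * capacity
    cash = production * 365 * oil_price * (1 - opex_frac)
    return sum(cash / (1 + rate) ** t for t in range(1, years + 1)) - capex

samples = [random.uniform(0.4, 0.9) for _ in range(1000)]   # uncertain NTG (assumed range)
for cap in facility_sizes:
    vals = sorted(npv(cap, ntg) for ntg in samples)
    print(cap, "bpd: 10th/50th/90th percentile NPV ($bn) =",
          round(vals[100] / 1e9, 2), round(vals[500] / 1e9, 2), round(vals[900] / 1e9, 2))
```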
Style APA, Harvard, Vancouver, ISO itp.
29

Gupta, Devyani. "Optimal Placement and Traffic Steering of VNFs and Edge Servers using Column Generation in Data Center Networks". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5974.

Pełny tekst źródła
Streszczenie:
Telecom Service Providers (TSPs) traditionally depended on physical devices to provide end-to-end communication. The services provided were of high quality and stable, but low in agility and hardware-dependent. As the demand for quick deployment of diverse services increased, TSPs needed much greater flexibility and agility. This is how Network Functions Virtualization (NFV) came into being. NFV is the concept of replacing dedicated hardware with commercial off-the-shelf (COTS) servers. It decouples the physical hardware from the function running on it. A network function can be deployed as an instance of software, called a Virtual Network Function (VNF). Thus, a service can be decomposed into several VNFs that run on industry-standard physical servers. The optimal placement of these VNFs is a key question for TSPs seeking to reduce overall cost. We first study a network operations problem where we optimally deploy the VNFs of Service Chains (SCs) such that the maximum bandwidth consumed across network links is minimized. The network parameters (link bandwidths, compute capacities of nodes, link propagation delays, etc.) and the number of SCs are known a priori. The problem formulated is a large Mixed-Integer Linear Program (MILP). We use the Column Generation (CG) technique to solve the problem optimally. Through various examples, we show the power of CG. We compare our results with recent heuristics and demonstrate that our approach performs better, as it quickly gives exact optimal solutions. Second, we extend the previous setup to the online case, where the number of SCs is not known a priori. We serve SC requests as they arrive. A new SC is implemented on the "residual network" while the previously deployed SCs are left undisturbed. The problem formulated is a large MILP, and we use CG as the solution technique. The results show the percentage improvement of the solutions over those obtained using heuristics. Next, we study a network design problem in an Edge Computing environment. A general communication network has a single Data Center (DC) in its "core," which serves as a gateway to the Internet. For delay-constrained services of the kind needed by online gaming, this model does not suffice, because the propagation delay between the subscriber and the DC may be too high. This requires some servers to be located close to the network edge, and the question of the optimal placement of these edge servers arises. To lower the network design cost, it is also essential to ensure good traffic routing, so that the aggregate traffic on each link remains as low as possible. This enables lower capacity assignment on each link and thereby minimizes the design cost. We study a novel joint optimization problem of network design cost minimization, where the edge server placement cost and the link capacity assignment cost constitute the total cost. The problem formulated is a large MILP, and we again use CG to solve it. We compare our results with many heuristics and show the improvement in design cost. Finally, we extend the above work by relaxing some assumptions and constraints. Unlike before, we consider servers with different capacities, and a server can serve more than one request depending on its core capabilities. We also allow an SC to be split and merged across various paths in the network. The formulated problem can also provide the minimum number of servers to be used. Again, the formulation is a large MILP, and CG is used to solve it exactly.
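The first problem in this abstract, minimizing the maximum link bandwidth consumed when placing the VNFs of a service chain, can be written compactly as a small MILP. The sketch below is a minimal, hypothetical toy version of such a compact model (the thesis instead solves much larger instances exactly with column generation); the network, routing paths, chain, capacities and demands are all invented for illustration.

```python
# Minimal sketch (illustrative data, not the thesis model): place the VNFs of
# one service chain on network nodes so that the maximum link bandwidth
# consumed is minimized.
import pulp

nodes = ["a", "b", "c", "d"]
links = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]      # ring topology (assumed)
cpu_cap = {"a": 2, "b": 1, "c": 2, "d": 1}                    # cores per node (assumed)

# Fixed routing: for every ordered node pair, the links its traffic traverses
# (precomputed shortest paths on the ring; both directions share a link).
path = {
    ("a", "b"): [("a", "b")], ("b", "a"): [("a", "b")],
    ("b", "c"): [("b", "c")], ("c", "b"): [("b", "c")],
    ("c", "d"): [("c", "d")], ("d", "c"): [("c", "d")],
    ("a", "d"): [("a", "d")], ("d", "a"): [("a", "d")],
    ("a", "c"): [("a", "b"), ("b", "c")], ("c", "a"): [("a", "b"), ("b", "c")],
    ("b", "d"): [("b", "c"), ("c", "d")], ("d", "b"): [("b", "c"), ("c", "d")],
}
for n in nodes:
    path[(n, n)] = []          # co-located VNFs consume no link bandwidth

chain = ["firewall", "nat", "ids"]            # one service chain (assumed)
cpu_req = {"firewall": 1, "nat": 1, "ids": 1}
bw = 10.0                                     # Mbps between consecutive VNFs (assumed)

m = pulp.LpProblem("vnf_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (chain, nodes), cat="Binary")   # VNF -> node
z = pulp.LpVariable.dicts("z", (range(len(chain) - 1), nodes, nodes), cat="Binary")
L = pulp.LpVariable("max_link_load", lowBound=0)
m += L                                         # minimize the maximum link load

for v in chain:                                # every VNF placed exactly once
    m += pulp.lpSum(x[v][n] for n in nodes) == 1
for n in nodes:                                # node compute capacity
    m += pulp.lpSum(cpu_req[v] * x[v][n] for v in chain) <= cpu_cap[n]

# z[k][i][j] = 1 iff chain[k] is on node i and chain[k+1] is on node j
for k in range(len(chain) - 1):
    for i in nodes:
        for j in nodes:
            m += z[k][i][j] <= x[chain[k]][i]
            m += z[k][i][j] <= x[chain[k + 1]][j]
            m += z[k][i][j] >= x[chain[k]][i] + x[chain[k + 1]][j] - 1

for link in links:                             # L bounds the load on every link
    m += pulp.lpSum(bw * z[k][i][j]
                    for k in range(len(chain) - 1)
                    for i in nodes for j in nodes
                    if link in path[(i, j)]) <= L

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({v: [n for n in nodes if x[v][n].value() > 0.5] for v in chain}, L.value())
```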
Style APA, Harvard, Vancouver, ISO itp.
30

Kuo, Shih-Hao, and 郭詩豪. "A mixed-integer linear program for the project scheduling under partially renewable resource constraints". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/t269m8.

Pełny tekst źródła
Streszczenie:
Master's thesis, National Cheng Kung University, Department of Industrial Management Science, ROC academic year 92 (2003–2004).
The Resource Constrained Project Scheduling Problem (RCPSP) is more complicated than the general project scheduling problem because the activities of a project must be scheduled subject to resource constraints. To date, most research on the RCPSP has focused on renewable or nonrenewable resources, and few studies address "partially renewable" resources. However, project managers often face this kind of resource, such as the labor hours available in a week or the budget of a month. Therefore, project scheduling under partially renewable resource constraints is well worth studying.   A mixed-integer linear program (MILP) for project scheduling under partially renewable resource constraints is presented, with makespan minimization as the objective. The integrality assumptions on the activity durations, resources and the completion time of the project are relaxed. The model has two characteristics: (1) activity scheduling is not tied to the time intervals, i.e. an activity can start and finish within the same time interval, span several intervals, or fully contain intervals; (2) an activity's resource usage in each time interval is determined by the proportion of its duration that falls within that interval.   With moderate adjustments, the model also accommodates a wider variety of objectives than makespan minimization under partially renewable resource constraints. Furthermore, it can be used to solve Material Requirements Planning (MRP) problems in a job shop environment under partially renewable resource constraints. Using this model, general and even large problems can be solved in reasonable computational time for moderately complex parameter settings.
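To make the notion of a partially renewable resource concrete, the sketch below is a minimal, hypothetical time-indexed RCPSP in PuLP in which the resource capacity is defined over a set of periods (e.g. a weekly labour-hour budget) rather than per period. The activities, durations, usages and budget are invented, and unlike the thesis model this toy keeps integer durations and period-aligned starts.

```python
# Minimal sketch (illustrative data, not the thesis formulation): a
# time-indexed RCPSP with one partially renewable resource, i.e. a
# capacity defined over a *set* of periods instead of per period.
import pulp

acts = {"A": 2, "B": 3, "C": 2}                 # durations in periods (assumed)
prec = [("A", "B"), ("A", "C")]                 # precedence constraints (assumed)
usage = {"A": 4, "B": 2, "C": 3}                # resource use per active period (assumed)
H = 8                                           # planning horizon in periods
subset, budget = set(range(5)), 20              # "week 1" = periods 0..4, budget 20 (assumed)

m = pulp.LpProblem("rcpsp_partially_renewable", pulp.LpMinimize)
s = pulp.LpVariable.dicts("s", (acts, range(H)), cat="Binary")   # s[j][t]=1 iff j starts at t
Cmax = pulp.LpVariable("makespan", lowBound=0)
m += Cmax

for j, d in acts.items():
    m += pulp.lpSum(s[j][t] for t in range(H - d + 1)) == 1       # start exactly once
    m += pulp.lpSum(s[j][t] for t in range(H - d + 1, H)) == 0    # must fit the horizon
    m += pulp.lpSum((t + d) * s[j][t] for t in range(H)) <= Cmax  # completion <= makespan

for (i, j) in prec:                              # j starts after i finishes
    m += (pulp.lpSum(t * s[j][t] for t in range(H))
          >= pulp.lpSum(t * s[i][t] for t in range(H)) + acts[i])

# Activity j is active in period t if it started in (t - d_j, t]; total use
# over the period subset must respect the partially renewable budget.
m += pulp.lpSum(usage[j] * s[j][tau]
                for j, d in acts.items()
                for t in subset
                for tau in range(max(0, t - d + 1), t + 1)) <= budget

m.solve(pulp.PULP_CBC_CMD(msg=False))
starts = {j: next(t for t in range(H) if s[j][t].value() > 0.5) for j in acts}
print(starts, "makespan =", Cmax.value())
```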
Style APA, Harvard, Vancouver, ISO itp.