Dissertations / Theses on the topic 'Chance Constraint Optimization'

Consult the top 41 dissertations / theses for your research on the topic 'Chance Constraint Optimization.'

1

Koker, Ezgi. "Chance Constrained Optimization Of Booster Disinfection In Water Distribution Networks." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613640/index.pdf.

Abstract:
The quality of municipal water is sustained by adding a disinfectant, generally chlorine, to the water distribution network. For health reasons, the chlorine concentration in the network must stay between minimum and maximum limits: carcinogenic disinfection by-products form at high concentrations, so the least possible amount of chlorine that still respects the lower limit is desired. Beyond the health issues, a minimal injection amount is also favorable in terms of cost. Hence, an optimization model covering all of these considerations is needed. However, chlorine is reactive and decays over both time and space, so a probabilistic approach is necessary to obtain reliable and realistic results from the model. In this study, a linear programming model is developed for chance-constrained optimization of the water distribution network. The objective is to minimize the injected chlorine mass subject to maintaining more uniformly distributed chlorine concentrations within the limits, with the randomness of the chlorine concentration captured by probability distributions. Network hydraulics and chlorine concentration computations are carried out with the network simulation software EPANET.
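Under a Gaussian assumption on the nodal concentrations, a chance constraint of the kind used above has a standard linear deterministic equivalent: each probabilistic bound is tightened by a quantile-scaled safety margin. A minimal sketch with invented two-node response data, not the thesis's EPANET-driven model:

```python
# Individual chance constraints P(concentration within limits) >= alpha,
# assuming (for illustration) Gaussian concentration with mean A @ x.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

A = np.array([[0.8, 0.1], [0.2, 0.7]])  # illustrative node response to 2 boosters
sigma = np.array([0.05, 0.05])          # assumed concentration std. dev. (mg/L)
c_min, c_max, alpha = 0.2, 4.0, 0.95    # regulatory limits and reliability level
z = norm.ppf(alpha)

# A x >= c_min + z*sigma and A x <= c_max - z*sigma, minimizing total injection.
A_ub = np.vstack([-A, A])
b_ub = np.concatenate([-(c_min + z * sigma), c_max - z * sigma])
res = linprog(np.ones(2), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)                   # minimal booster injection rates
```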
2

Sassi, Achille. "Numerical methods for hybrid control and chance-constrained optimization problems." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLY005/document.

Abstract:
This thesis is devoted to the analysis of numerical methods in the field of optimal control and is composed of two parts. The first part presents new results on numerical methods for the optimal control of hybrid systems, which can be controlled simultaneously by measurable functions and by discontinuous jumps in the state variable. The second part focuses on a specific application: trajectory optimization for space launchers under probability constraints. Here we use nonlinear optimization methods combined with non-parametric statistics techniques. This kind of problem belongs to the family of stochastic optimization problems and features the minimization of a cost function in the presence of a constraint that must be satisfied within a desired probability threshold.
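The pairing of nonlinear optimization with non-parametric statistics can be illustrated in a few lines: the probability in the constraint is replaced by an empirical estimate from disturbance samples and folded into a penalty. A sketch with an invented one-dimensional constraint, using a plain empirical frequency where the thesis would use richer non-parametric estimators:

```python
# Sketch: enforce P(g(x, xi) <= 0) >= 1 - eps via a sample-based penalty
# (disturbance model, constraint, and penalty weight are all illustrative).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.normal(loc=1.0, scale=0.3, size=2000)   # assumed disturbance samples
eps = 0.05

def g(x, xi):                  # hypothetical constraint: decision must cover xi
    return xi - x[0]

def penalized_cost(x):
    p_violation = np.mean(g(x, xi) > 0.0)        # empirical violation frequency
    return x[0] + 100.0 * max(p_violation - eps, 0.0)

res = minimize(penalized_cost, x0=np.array([1.0]), method="Nelder-Mead")
print(res.x)   # close to the empirical (1 - eps)-quantile of xi
```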
3

Helmberg, Christoph, Sebastian Richter, and Dominic Schupke. "A Chance Constraint Model for Multi-Failure Resilience in Communication Networks." Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-175454.

Abstract:
For ensuring network survivability in case of single component failures, many routing protocols provide a primary and a backup routing path for each origin-destination pair. We address the problem of selecting these paths such that, in the event of multiple failures occurring with given probabilities, the total loss in routable demand due to both paths being hit is small with high probability. We present a chance constraint model and solution approaches based on an explicit integer programming formulation, a robust formulation, and a cutting plane approach that yield reasonably good solutions, assuming that the failures are caused by at most two elementary events, which may each affect several network components.
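With failures driven by a small number of independent elementary events, the probability that both the primary and the backup path of one origin-destination pair are hit can be computed exactly by enumerating event outcomes. A small sketch with invented link sets and probabilities:

```python
# Exact probability that both paths of one O-D pair are affected, with
# independent elementary events, each taking out a set of links (toy data).
from itertools import product

events = {"e1": ({"l1", "l4"}, 0.01),   # event -> (affected links, probability)
          "e2": ({"l2"}, 0.02),
          "e3": ({"l3", "l4"}, 0.015)}
primary, backup = {"l1", "l2"}, {"l3", "l4"}

p_both_hit = 0.0
names = list(events)
for outcome in product([0, 1], repeat=len(names)):
    affected = set().union(*(events[n][0] for n, on in zip(names, outcome) if on))
    prob = 1.0
    for n, on in zip(names, outcome):
        prob *= events[n][1] if on else 1.0 - events[n][1]
    if affected & primary and affected & backup:
        p_both_hit += prob
print(p_both_hit)   # feeds the chance constraint P(demand loss) <= threshold
```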
4

Calfa, Bruno Abreu. "Data Analytics Methods for Enterprise-wide Optimization Under Uncertainty." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/575.

Abstract:
This dissertation primarily proposes data-driven methods to handle uncertainty in problems related to Enterprise-wide Optimization (EWO). Data-driven methods are characterized by the direct use of data (historical and/or forecast) in the construction of models for the uncertain parameters that naturally arise in real-world applications. Such uncertainty models are then incorporated into the optimization model describing the operations of an enterprise. Before addressing uncertainty in EWO problems, Chapter 2 deals with the integration of deterministic planning and scheduling operations of a network of batch plants. The main contributions of this chapter include the modeling of sequence-dependent changeovers across time periods for a unit-specific general precedence scheduling formulation, a hybrid decomposition scheme using Bilevel and Temporal Lagrangean Decomposition approaches, and the solution of subproblems in parallel. Chapters 3 to 6 propose different data analytics techniques to account for stochasticity in EWO problems. Chapter 3 deals with scenario generation via statistical property matching in the context of stochastic programming. A distribution matching problem is proposed that addresses the under-specification shortcoming of the originally proposed moment matching method. Chapter 4 deals with data-driven individual and joint chance constraints with right-hand-side uncertainty. The distributions are estimated with kernel smoothing and are considered to lie in a confidence set, which is also assumed to contain the true, unknown distributions. The chapter proposes calculating the size of the confidence set from the standard errors estimated in the smoothing process. Chapter 5 proposes the use of quantile regression to model production variability in the context of Sales & Operations Planning. The approach relies on available historical data of actual vs. planned production rates, from which the deviation from plan is defined and treated as a random variable. Chapter 6 addresses the combined optimal procurement contract selection and pricing problem. Different price-response models, linear and nonlinear, are considered in the latter problem. Results show that setting selling prices in the presence of uncertainty leads to the use of different purchasing contracts.
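The scenario-generation idea of Chapter 3 can be sketched as a small nonlinear program: pick scenario values and probabilities so that selected statistical properties match their targets. A toy version matching mean, variance, and skewness only (the chapter's distribution matching goes beyond such moment lists precisely to avoid under-specification); all numbers are illustrative:

```python
# Fit n scenarios (values + probabilities) to target statistical properties.
import numpy as np
from scipy.optimize import minimize

target = {"mean": 100.0, "var": 25.0, "skew": 0.5}
n = 10  # number of scenarios

def property_gap(z):
    vals, w = z[:n], np.abs(z[n:])
    w = w / w.sum()                                  # valid probabilities
    m = np.sum(w * vals)
    v = np.sum(w * (vals - m) ** 2)
    s = np.sum(w * (vals - m) ** 3) / v ** 1.5
    return ((m - target["mean"]) ** 2 + (v - target["var"]) ** 2
            + (s - target["skew"]) ** 2)

z0 = np.concatenate([np.linspace(90, 110, n), np.full(n, 1.0 / n)])
res = minimize(property_gap, z0, method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-10})
vals, w = res.x[:n], np.abs(res.x[n:]) / np.abs(res.x[n:]).sum()
print(vals, w)   # scenarios whose weighted moments approximate the targets
```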
5

Liu, Jianzhe. "On Control and Optimization of DC Microgrids." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1512049527948171.

6

Dai, Siyu S. M. Massachusetts Institute of Technology. "Probabilistic motion planning and optimization incorporating chance constraints." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120230.

Abstract:
For high-dimensional robots, motion planning is still a challenging problem, especially for manipulators mounted on underwater vehicles or human support robots, where uncertainties and risks of plan failure can have severe impact. However, existing risk-aware planners mostly focus on low-dimensional planning tasks, while planners that can account for uncertainties and react fast in high degree-of-freedom (DOF) robot planning tasks are lacking. In this thesis, a risk-aware motion planning and execution system called Probabilistic Chekov (p-Chekov) is introduced, which includes a deterministic stage and a risk-aware stage. A systematic set of experiments on existing motion planners as well as on p-Chekov is also presented. The deterministic stage of p-Chekov leverages recent advances in obstacle-aware trajectory optimization to improve the original tube-based-roadmap Chekov planner. Through experiments in 4 common application scenarios with 5000 test cases each, we show that using sampling-based planners alone on high-DOF robots cannot achieve a high enough reaction speed, whereas the popular trajectory optimizer TrajOpt with naive straight-line seed trajectories has a very high collision rate despite its high planning speed. To the best of our knowledge, this is the first work that presents such a systematic and comprehensive evaluation of state-of-the-art motion planners, based on a significant number of experiments. We then combine different stand-alone planners with trajectory optimization. The results show that the deterministic planning part of p-Chekov, which combines a roadmap approach that caches the all-pairs shortest path solutions with an online obstacle-aware trajectory optimizer, provides superior performance over combinations of other standard sampling-based planners. Simulation results show that, in typical real-life applications, this "roadmap + TrajOpt" approach takes about 1 s to plan and the failure rate of its solutions is under 1%. The risk-aware stage of p-Chekov accounts for chance constraints through state probability distribution and collision probability estimation. Building on the deterministic Chekov planner, p-Chekov incorporates a linear-quadratic Gaussian motion planning (LQG-MP) approach into robot state probability distribution estimation, applies quadrature-sampling theories to collision risk estimation, and adapts risk allocation approaches for chance constraint satisfaction. It overcomes existing risk-aware planners' limitations in real-time motion planning tasks with high-DOF robots in 3-dimensional non-convex environments. The experimental results in this thesis show that this new risk-aware motion planning and execution system can effectively reduce collision risk and satisfy chance constraints in typical real-world planning scenarios for high-DOF robots. This thesis makes the following three main contributions: (1) a systematic evaluation of several state-of-the-art motion planners in realistic planning scenarios, including popular sampling-based motion planners and trajectory-optimization motion planners, (2) the establishment of a "roadmap + TrajOpt" deterministic motion planning system that shows superior performance in many practical planning tasks in terms of solution feasibility, optimality, and reaction time, and (3) the development of a risk-aware motion planning and execution system that can handle high-DOF robotic planning tasks in 3-dimensional non-convex environments.
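The risk-estimation step in the risk-aware stage can be pictured with a toy computation: given a Gaussian distribution over the executed state (as LQG-MP provides), the collision probability of a waypoint is an integral that can be estimated by sampling. A sketch with an invented spherical obstacle, using plain Monte Carlo where p-Chekov uses quadrature sampling:

```python
# Monte Carlo estimate of waypoint collision probability under Gaussian
# execution uncertainty (all positions, covariances, and radii illustrative).
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([0.5, 0.2, 0.3])          # planned end-effector position
cov = np.diag([0.02, 0.02, 0.01]) ** 2    # assumed execution uncertainty
obstacle_c, obstacle_r = np.array([0.55, 0.2, 0.3]), 0.05

samples = rng.multivariate_normal(mean, cov, size=100_000)
collide = np.linalg.norm(samples - obstacle_c, axis=1) <= obstacle_r
print(collide.mean())   # compared against the risk allocated to this waypoint
```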
7

Sun, Yufei. "Chance-constrained optimization & optimal control problems." Thesis, Curtin University, 2015. http://hdl.handle.net/20.500.11937/183.

Abstract:
Four optimization or optimal control problems subject to probabilistic constraints are studied. The first identifies the optimal portfolio under a new probabilistic risk measure while maximizing return. The second explores a preventive maintenance scheduling problem that minimizes the total cost of operation. The third investigates how the asset allocation of a pension fund should change in the face of default risk. The fourth determines the optimal manpower schedule that minimizes the salary costs of the employees.
8

Yang, Yi. "Sequential convex approximations of chance constrained programming /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?IELM%202008%20YANG.

9

Arellano-Garcia, Harvey. "Chance constrained optimization of process systems under uncertainty." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=982225652.

10

Luedtke, James. "Integer Programming Approaches for Some Non-convex and Stochastic Optimization Problems." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19711.

Abstract:
In this dissertation we study several non-convex and stochastic optimization problems. The common theme is the use of mixed-integer programming (MIP) techniques including valid inequalities and reformulation to solve these problems. We first study a strategic capacity planning model which captures the trade-off between the incentive to delay capacity installation to wait for improved technology and the need for some capacity to be installed to meet current demands. This problem is naturally formulated as a MIP with a bilinear objective. We develop several linear MIP formulations, along with classes of strong valid inequalities. We also present a specialized branch-and-cut algorithm to solve a compact concave formulation. Computational results indicate that these formulations can be used to solve large-scale instances. We next study methods for optimization with joint probabilistic constraints. These problems are challenging because evaluating solution feasibility requires multidimensional integration and the feasible region is not convex. We propose and analyze a Monte Carlo sampling scheme to simplify the probabilistic structure of such problems. Computational tests of the approach indicate that it can yield good feasible solutions and reasonable bounds on their quality. Next, we study a MIP formulation of the non-convex sample approximation problem. We obtain two strengthened formulations. As a byproduct of this analysis, we obtain new results for the previously studied mixing set, subject to an additional knapsack inequality. Computational results indicate that large-scale instances can be solved using the strengthened formulations. Finally, we study optimization problems with stochastic dominance constraints. A stochastic dominance constraint states that a random outcome which depends on the decision variables should stochastically dominate a given random variable. We present new formulations for both first and second order stochastic dominance which are significantly more compact than existing formulations. Computational tests illustrate the benefits of the new formulations.
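The sample-approximation construction studied here can be shown on a toy chance constraint P(x >= xi) >= 1 - eps: draw N samples and let binary variables exempt at most k = floor(eps*N) of them, which is exactly a mixing-type MIP. A minimal sketch with illustrative data and big-M value, assuming PuLP with its bundled CBC solver:

```python
# Sample approximation of a chance constraint as a k-violation MIP.
import numpy as np
import pulp

rng = np.random.default_rng(2)
N, eps, M = 200, 0.1, 100.0
xi = rng.normal(10.0, 2.0, size=N)          # sampled right-hand sides
k = int(np.floor(eps * N))                  # allowed violations

prob = pulp.LpProblem("saa_chance", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
z = [pulp.LpVariable(f"z_{i}", cat="Binary") for i in range(N)]
prob += x                                   # minimize the decision itself
for i in range(N):
    prob += x + M * z[i] >= xi[i]           # violated samples pay with z_i = 1
prob += pulp.lpSum(z) <= k
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(x))                        # ~ the (1 - eps) sample quantile
```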
11

Ibrahim, Sarmad Khaleel. "DISTRIBUTION SYSTEM OPTIMIZATION WITH INTEGRATED DISTRIBUTED GENERATION." UKnowledge, 2018. https://uknowledge.uky.edu/ece_etds/116.

Abstract:
In this dissertation, several volt-var optimization methods are proposed to improve the expected performance of the distribution system using distributed renewable energy sources and conventional volt-var control equipment: photovoltaic inverter reactive power control for chance-constrained distribution system performance optimization, integrated distribution system optimization using a chance-constrained formulation, integrated control of distribution system equipment and distributed generation inverters, and coordination of PV inverters and voltage regulators considering generation correlation and voltage quality constraints for loss minimization. Distributed generation sources (DGs) have important benefits, including the use of renewable resources, increased customer participation, and decreased losses. However, as the penetration level of DGs increases, so do the technical challenges of integrating these resources into the power system. One such challenge is the rapid variation of voltages along distribution feeders in response to DG output fluctuations, and both traditional volt-var control equipment and inverter-based DG can be used to address it. These methods aim to achieve an optimal expected performance with respect to the figure of merit of interest to the distribution system operator while maintaining appropriate system voltage magnitudes and considering the uncertainty of DG power injections. The first method optimizes only the reactive power output of DGs to improve system performance (e.g., operating profit) and compensate for variations in active power injection while maintaining appropriate system voltage magnitudes and considering the uncertainty of DG power injections over the interval of interest. The second method proposes an integrated volt-var control based on ahead-of-time control actions to find the optimal voltage regulator tap settings and inverter reactive control parameters that improve the expected system performance (e.g., operating profit) while keeping the voltages across the system within specified ranges and considering the uncertainty of DG power injections over the interval of interest. In the third method, an integrated control strategy is formulated for the coordinated control of both distribution system equipment and inverter-based DG. This control strategy combines the use of inverter reactive power capability with the operation of voltage regulators to improve the expected value of the desired figure of merit (e.g., system losses) while maintaining appropriate system voltage magnitudes. The fourth method proposes a coordinated control strategy for voltage and reactive power control equipment to improve the expected system performance (e.g., system losses and voltage profiles) while considering the spatial correlation among the DGs and keeping voltage magnitudes within permissible limits, by formulating chance constraints on the voltage magnitude and considering the uncertainty of PV power injections over the interval of interest. The proposed methods require infrequent communication with the distribution system operator and base their decisions on short-term forecasts (the first and second methods) or long-term forecasts (the third and fourth methods). They achieve the best set of control actions for all voltage and reactive power control equipment to improve the expected value of the proposed figure of merit without violating any of the operating constraints. The proposed methods are validated on the IEEE 123-node radial distribution test feeder.
12

Mrázková, Eva. "Approximations in Stochastic Optimization and Their Applications." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-233932.

Abstract:
Many engineering problems lead to optimization models with constraints in the form of ordinary (ODE) or partial (PDE) differential equations, and in practice some of their parameters are often uncertain. The thesis considers three engineering problems concerning vibration optimization and the optimal design of beam dimensions. Uncertainty enters them in the form of a random load or a random Young's modulus. It is shown that two-stage stochastic programming offers a promising approach to solving problems of this type. The corresponding mathematical models, involving ODE or PDE constraints, uncertain parameters, and multiple criteria, lead to (multi-criteria) stochastic nonlinear optimization models. It is further established for which type of problems stochastic programming (the EO reformulation) is necessary and when it suffices to solve a simpler deterministic problem (the EV reformulation), which matters in practice from the standpoint of computational effort. Computational schemes are proposed that combine discretization methods for the random variables and the ODE or PDE constraints. The mathematical models derived through these approximations are implemented and solved in the GAMS software. Solution quality is assessed by interval estimates of the optimality gap computed by a Monte Carlo method. Parametric analysis of the multi-criteria model leads to the computation of the efficient frontier. Approximations of a model containing reliability-related probabilistic terms are studied by means of mixed-integer nonlinear programming and a penalty-function reformulation. In view of future possibilities for parallel computation of large engineering problems, the progressive hedging algorithm (PHA) is implemented and tested; the results show that the algorithm can be used even when the mathematical conditions guaranteeing convergence are not satisfied. Finally, for a deterministic version of one of the problems, the finite difference method is compared with the finite element method using the GAMS and ANSYS software, with entirely comparable results.
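The two-stage setting described above can be condensed into a toy extensive-form linear program: a here-and-now decision plus probability-weighted recourse over a handful of scenarios. All costs and demands below are invented for illustration:

```python
# min_x c*x + E[Q(x, xi)], with linear recourse, solved as one extensive LP.
import numpy as np
from scipy.optimize import linprog

c, q = 1.0, 3.0                       # first-stage and recourse unit costs
scenarios = [(0.3, 4.0), (0.5, 6.0), (0.2, 9.0)]   # (probability, demand)

# Variables: [x, y_1, y_2, y_3]; recourse y_s covers demand not met by x.
obj = np.array([c] + [p * q for p, _ in scenarios])
A_ub, b_ub = [], []
for s, (_, d) in enumerate(scenarios):
    row = np.zeros(1 + len(scenarios))
    row[0], row[1 + s] = -1.0, -1.0   # -(x + y_s) <= -d  <=>  x + y_s >= d
    A_ub.append(row)
    b_ub.append(-d)
res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (1 + len(scenarios)))
print(res.x[0], res.fun)              # here-and-now decision and total cost
```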
13

Piprek, Patrick [Verfasser], Florian [Akademischer Betreuer] Holzapfel, Sébastien [Gutachter] Gros, and Florian [Gutachter] Holzapfel. "Robust Trajectory Optimization Applying Chance Constraints and Generalized Polynomial Chaos / Patrick Piprek ; Gutachter: Sébastien Gros, Florian Holzapfel ; Betreuer: Florian Holzapfel." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1211086992/34.

14

Kubo, Seiji. "Topology optimization for the duct flow problems in laminar and turbulent flow regimes." Kyoto University, 2019. http://hdl.handle.net/2433/242491.

15

Qiu, Feng. "Probabilistic covering problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47567.

Abstract:
This dissertation studies optimization problems that involve probabilistic covering constraints. A probabilistic constraint requires that the probability that a set of constraints involving random coefficients with known distributions holds satisfies a minimum requirement. A covering constraint involves a linear inequality on non-negative variables with a greater-than-or-equal-to sign and non-negative coefficients. A variety of applications in uncertain settings, such as set cover problems, node/edge cover problems, crew scheduling, production planning, facility location, and machine learning, involve probabilistic covering constraints. In the first part of this dissertation we consider probabilistic covering linear programs. Using the sample average approximation (SAA) framework, a probabilistic covering linear program can be approximated by a covering k-violation linear program (CKVLP), a deterministic covering linear program in which at most k constraints are allowed to be violated. We show that CKVLP is strongly NP-hard. Then, to improve the performance of standard mixed-integer programming (MIP) based schemes for CKVLP, we (i) introduce and analyze a coefficient strengthening scheme, (ii) adapt and analyze an existing cutting plane technique, and (iii) present a branching technique. Through computational experiments, we empirically verify that these techniques are significantly effective in improving solution times over the CPLEX MIP solver. In particular, we observe that the proposed schemes can cut solution times from as much as six days to under four hours in some instances. We also develop valid inequalities arising from two subsets of the constraints in the original formulation. When incorporating them with a modified coefficient strengthening procedure, we are able to solve a difficult probabilistic portfolio optimization instance listed in MIPLIB 2010, which cannot be solved by existing approaches. In the second part of this dissertation we study a class of probabilistic 0-1 covering problems, namely probabilistic k-cover problems. A probabilistic k-cover problem is a stochastic version of the set k-cover problem, which seeks a collection of subsets with minimal cost whose union covers each element in the set at least k times. In a stochastic setting, the coefficients of the covering constraints are modeled as Bernoulli random variables, and the probabilistic constraint imposes a minimal requirement on the probability of k-coverage. To account for the absence of full distributional information, we define a general ambiguous k-cover set, which is "distributionally robust." Using a classical linear program (the Boolean LP) to compute the probability of events, we develop an exact deterministic reformulation of this ambiguous k-cover problem. However, since the Boolean model involves an exponential number of auxiliary variables, and hence is not useful in practice, we use two linear-program-based bounds on the probability that at least k events occur, obtained by aggregating the variables and constraints of the Boolean model, to develop tractable deterministic approximations to the ambiguous k-cover set. We derive new valid inequalities that can be used to strengthen the linear programming based lower bounds. Numerical results show that these new inequalities significantly improve the probability bounds. To use standard MIP solvers, we linearize the multi-linear terms in the approximations and develop mixed-integer linear programming formulations. We conduct computational experiments to demonstrate the quality of the deterministic reformulations in terms of cost effectiveness and solution robustness. To demonstrate the usefulness of the modeling technique developed for probabilistic k-cover problems, we formulate a number of problems that have until now only been studied under data-independence assumptions, and we also introduce a new application that can be modeled using the probabilistic k-cover model.
16

Amini, Mahraz. "Optimal dispatch of uncertain energy resources." ScholarWorks @ UVM, 2019. https://scholarworks.uvm.edu/graddis/1046.

Abstract:
The future of the electric grid requires advanced control technologies to reliably integrate high levels of renewable generation and residential and small commercial distributed energy resources (DERs). Flexible loads are known to be a vital component of future power systems, with the potential to boost overall system efficiency. Recent work has expanded the role of flexible and controllable energy resources, such as energy storage and dispatchable demand, to regulate power imbalances and stabilize grid frequency. This has led DER aggregators to develop concepts such as the virtual energy storage system (VESS). VESSs aggregate flexible loads and energy resources and dispatch them akin to a grid-scale battery to provide flexibility to the system operator. Since the level of flexibility from aggregated DERs is uncertain and time-varying, dispatching VESSs can be challenging. To optimally dispatch uncertain, energy-constrained reserves, model predictive control offers a viable tool to develop an appropriate trade-off between closed-loop performance and robustness of the dispatch. To improve system operation, flexible VESSs can be formulated probabilistically and realized with chance-constrained model predictive control. The large-scale deployment of flexible loads also needs to carefully consider the existing regulation schemes in power systems, i.e., generator droop control. In this work, we first investigate the complex nature of system-wide frequency stability under time delays in the actuation of dispatchable loads. We then study the robustness and performance trade-offs in receding-horizon control with uncertain energy resources. The uncertainty studied herein is associated with estimating the capacity and state of charge of an aggregation of DERs. The concept of uncertain flexible resources in markets leads to maximizing capacity bids or control authority, which in turn leads to dynamic capacity saturation (DCS) of flexible resources. We show that there exists a sensitive trade-off between the robustness of the optimized dispatch and closed-loop system performance, and that sacrificing some robustness in the dispatch of the uncertain energy capacity can significantly improve system performance. We propose and formulate a risk-based chance-constrained MPC (RB-CC-MPC) to co-optimize the operational risk of prematurely saturating the virtual energy storage system against deviating generators from their scheduled set-points. On a fast, minutely timescale, the RB-CC-MPC coordinates energy-constrained virtual resources to minimize unscheduled participation of ramp-rate-limited generators in balancing variability from renewable generation, while taking grid conditions into account. We show that under the proposed method it is possible to improve the performance of the controller over conventional distributionally robust methods by more than 20%. Moreover, a hardware-in-the-loop (HIL) simulation of a cyber-physical system consisting of packetized energy management (PEM) enabled DERs, flexible VESSs, and a transmission grid is developed in this work. A predictive, energy-constrained dispatch of aggregated PEM-enabled DERs is formulated, implemented, and validated on the HIL cyber-physical platform. The experimental results demonstrate that existing control schemes, such as AGC, dispatch VESSs without regard to their energy state, which leads to unexpected capacity saturation. By accounting for the energy states of VESSs, model predictive control (MPC) can optimally dispatch conventional generators and VESSs to overcome disturbances while avoiding undesired capacity saturation. The results show the improvement in dynamics obtained by using MPC over conventional AGC and droop control for a system with energy-constrained resources.
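The saturation-risk mechanism can be reduced to a one-line tightening: cap the dispatch command so that the probability of exceeding the VESS's uncertain energy capacity stays below a budget. A sketch assuming a Gaussian capacity estimate and invented numbers:

```python
# Chance-constrained cap on a VESS charging command:
# P(soc + u*dt <= capacity) >= 1 - eps under a Gaussian capacity estimate.
from scipy.stats import norm

soc, dt, eps = 3.0, 0.25, 0.05            # MWh stored, hours, risk budget
cap_mean, cap_std = 5.0, 0.4              # estimated aggregate capacity (MWh)

u_request = 10.0                          # MW requested by the balancing layer
u_max = (cap_mean - norm.ppf(1 - eps) * cap_std - soc) / dt
u = min(u_request, u_max)                 # chance-constrained dispatch (MW)
print(u)
```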
17

Serra, Romain. "Opérations de proximité en orbite : évaluation du risque de collision et calcul de manoeuvres optimales pour l'évitement et le rendez-vous." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0035/document.

Abstract:
This thesis is about collision avoidance for a pair of spherical orbiting objects. The primary object, the operational satellite, is active in the sense that it can use its thrusters to change its trajectory, while the secondary object is a space debris that cannot be controlled in any way. On-ground radars or other means make it possible to foresee a conjunction involving an operational spacecraft, leading to the production of a collision alert. The latter contains statistical data on the position and velocity of the two objects, enabling the construction of a probabilistic collision model. The work is divided in two parts: the computation of collision probabilities and the design of maneuvers to lower the collision risk. In the first part, two kinds of probabilities, which can be written as integrals of a Gaussian distribution over a Euclidean ball in 2 and 3 dimensions, are expanded in convergent power series with positive terms, using the theories of the Laplace transform and holonomic functions. In the second part, the question of collision avoidance is formulated as a chance-constrained optimization problem. Depending on the collision model, namely short- or long-term encounters, it is respectively tackled via the scenario approach or relaxed using polyhedral collision sets. For the latter, two methods are proposed. The first one directly tackles the joint chance constraints, while the second uses another relaxation, called risk selection, to obtain a mixed-integer program. Additionally, the solution to the problem of fixed-time fuel-minimizing out-of-plane proximity maneuvers is derived. This optimal control problem is solved via the primer vector theory.
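The central quantity of the first part, the integral of a Gaussian distribution over a Euclidean ball, is easy to estimate numerically, which is useful for checking series expansions of the kind the thesis derives. A 2-D Monte Carlo sketch with invented encounter data:

```python
# Monte Carlo estimate of the collision probability: mass of a 2-D Gaussian
# inside a ball of combined radius R (all numbers illustrative).
import numpy as np

rng = np.random.default_rng(3)
mean = np.array([1.0, 0.5])               # relative position at closest approach
cov = np.array([[0.5, 0.1], [0.1, 0.3]])  # positional uncertainty
R = 0.8                                   # combined object radius

pts = rng.multivariate_normal(mean, cov, size=1_000_000)
p_collision = np.mean(np.linalg.norm(pts, axis=1) <= R)
print(p_collision)
```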
18

Sadat, Sayed Abdullah. "Optimal Bidding Strategy for a Strategic Power Producer Using Mixed Integer Programming." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6631.

Abstract:
The thesis focuses on a mixed-integer linear programming (MILP) formulation for a bi-level mathematical program with equilibrium constraints (MPEC) considering chance constraints. The particular MPEC problem relates to a power producer's bidding strategy: maximize its total benefit by determining its bidding price and bidding power output while considering an electricity pool's operation and guessing the rival producer's bidding price. The entire decision-making process can be described by a bi-level optimization problem. The contribution of this thesis is the MILP formulation of this problem, using a chance-constrained mathematical program to handle the uncertainties. First, the lower-level pool operation problem is replaced by its Karush-Kuhn-Tucker (KKT) optimality conditions, which are further converted to an MILP formulation except for a bilinear term in the objective function. Secondly, duality theory is applied to replace the bilinear term with linear terms. Finally, two types of chance constraints are examined and modeled in the MILP formulation. With the MILP formulation, the entire MPEC problem considering randomness in price guessing can be solved using off-the-shelf MIP solvers, e.g., Gurobi. A few examples and a case study are given to illustrate the formulation and the case study results.
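The KKT-to-MILP step hinges on linearizing complementarity conditions with binaries and a big-M constant. A minimal sketch of that single trick for one multiplier, with invented coefficients and a placeholder objective (a full MPEC repeats this for every lower-level constraint and adds stationarity conditions):

```python
# Linearize 0 <= lam ⟂ (b - a*x) >= 0: a binary u forces lam = 0 or tightness.
import pulp

a, b, M = 2.0, 10.0, 100.0
prob = pulp.LpProblem("kkt_linearization", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
lam = pulp.LpVariable("lam", lowBound=0)
u = pulp.LpVariable("u", cat="Binary")

prob += x                          # placeholder upper-level objective
prob += b - a * x >= 0             # lower-level primal feasibility
prob += lam <= M * u               # lam can be positive only if u = 1 ...
prob += b - a * x <= M * (1 - u)   # ... and then the constraint must be tight
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(x.value(), lam.value(), u.value())
```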
19

Fleming, James. "Robust and stochastic MPC of uncertain-parameter systems." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:c19ff07c-0756-45f6-977b-9d54a5214310.

Abstract:
Constraint handling is difficult in model predictive control (MPC) of linear differential inclusions (LDIs) and linear parameter varying (LPV) systems. The designer is faced with a choice between conservative bounds that may give poor performance and accurate ones that require heavy online computation. This thesis presents a framework to achieve a more flexible trade-off between these two extremes by using a state tube, a sequence of parametrised polyhedra that is guaranteed to contain the future state. To define controllers using a tube, one must ensure that the polyhedra are a subset of the region defined by constraints. Necessary and sufficient conditions for these subset relations follow from duality theory, and it is possible to apply these conditions to constrain predicted system states and inputs with only a little conservatism. This leads to a general method of MPC design for uncertain-parameter systems. The resulting controllers have strong theoretical properties, can be implemented using standard algorithms, and outperform existing techniques. Crucially, the online optimisation used in the controller is a convex problem with a number of constraints and variables that increases only linearly with the length of the prediction horizon. This holds true for both LDI and LPV systems. For the latter it is possible to optimise over a class of gain-scheduled control policies to improve performance, with a similar linear increase in problem size. The framework extends to stochastic LDIs with chance constraints, for which there are efficient suboptimal methods using online sampling. Sample approximations of chance-constraint-admissible sets are generally not positively invariant, which motivates the novel concept of 'sample-admissible' sets with this property to ensure recursive feasibility when using sampling methods. The thesis concludes by introducing a simple, convex alternative to chance-constrained MPC that applies a robust bound to the time average of constraint violations in closed loop.
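The subset relations at the heart of the framework can be tested computationally: {x : A x <= b} lies inside {x : C x <= d} iff maximizing each row of C over the first polyhedron never exceeds the corresponding entry of d (the thesis works with the dual form of this test so the conditions become constraints on tube parameters). A sketch with invented sets:

```python
# Check polyhedron containment {Ax <= b} ⊆ {Cx <= d} via one LP per row of C.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 1, 1, 1], dtype=float)           # the box |x|_inf <= 1
C = np.array([[1, 1], [-1, -1]], dtype=float)
d = np.array([3, 3], dtype=float)                 # the slab |x1 + x2| <= 3

def contained(A, b, C, d):
    for c_row, d_i in zip(C, d):
        # max c_row @ x s.t. A x <= b  (linprog minimizes, so negate)
        res = linprog(-c_row, A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
        if -res.fun > d_i + 1e-9:
            return False
    return True

print(contained(A, b, C, d))   # True: the box fits inside the slab
```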
20

Excoffier, Mathilde. "Chance-Constrained Programming Approaches for Staffing and Shift-Scheduling Problems with Uncertain Forecasts : application to Call Centers." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112244/document.

Abstract:
The staffing and shift-scheduling problems in call centers consist in deciding how many agents handling the calls should be assigned to work during a given period in order to reach the required quality of service and minimize the costs. These problems are the subject of growing interest, both for their theoretical formulation and for their possible applicative impact. This thesis proposes chance-constrained approaches that consider uncertainty in the demand forecasts. First, it proposes a model solving the problems in one step through a joint chance-constrained stochastic program, providing a cost-reducing solution. A continuous-based approach leading to an easily tractable optimization program is formulated, with random variables following continuous distributions, a new continuous relation between arrival rates and theoretical agent requirements, and constraint linearizations. The global risk level is dynamically shared among the periods during the optimization process, providing a reduced-cost solution. The resulting solutions respect the targeted risk level while reducing the cost compared to other approaches. Moreover, this model is extended to better represent real situations. First, the queuing system model is improved to include the limited patience of customers. Second, another formulation of the uncertainty is proposed so that correlation between periods is considered. Finally, another representation of the uncertainty is proposed. The distributionally robust approach models the problem under the assumption that the correct probability distribution is unknown and belongs to a set of possible distributions defined by a given mean and variance. The problem is formulated with a joint chance constraint, and the risk at each period is a decision variable to be optimized. A deterministic equivalent problem is proposed, and an easily tractable mixed-integer linear formulation is obtained through piecewise linearizations.
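The dynamic risk-sharing idea can be sketched for a Gaussian demand model: the joint risk budget eps is split into per-period risks alpha_t (a union-bound style decomposition), and the split itself is optimized rather than fixed uniformly. All staffing numbers below are invented:

```python
# Staffing cost sum_t b_t with b_t >= mu_t + z_{1-alpha_t}*sigma_t and
# sum_t alpha_t = eps; optimizing the split beats the uniform alpha_t = eps/T.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([20.0, 35.0, 28.0])      # forecast demand per period
sigma = np.array([2.0, 6.0, 3.0])      # forecast uncertainty
eps = 0.05

def cost(v):
    alpha = eps * np.exp(v) / np.exp(v).sum()     # positive split summing to eps
    return np.sum(mu + norm.ppf(1 - alpha) * sigma)

res = minimize(cost, np.zeros(3), method="Nelder-Mead")
alpha = eps * np.exp(res.x) / np.exp(res.x).sum()
print(cost(res.x), alpha)              # cheaper than the uniform split eps/3
```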
21

Baker, Kyri A. "Coordination of Resources Across Areas for the Integration of Renewable Generation: Operation, Sizing, and Siting of Storage Devices." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/465.

Abstract:
An increased penetration of renewable energy into the electric power grid is desirable from an environmental standpoint as well as an economic one. However, renewable sources such as wind and solar energy are often variable and intermittent, and additionally are non-dispatchable. Also, the locations with the highest amounts of available wind or solar energy may be far from areas with high levels of demand, and these areas may be under the control of separate, individual entities. In this dissertation, a method is developed that coordinates these areas, accounts for the variability and intermittency, reduces the impact of renewable energy forecast errors, and increases the overall social benefit in the system. The approach to integrating intermittent energy sources into the electric power grid is considered from both the planning and operations stages. In the planning stage, two-stage stochastic optimization is employed to find the optimal size and location for a storage device in a transmission system, with the goal of reducing generation costs, increasing the penetration of wind energy, alleviating line congestion, and decreasing the impact of errors in wind forecasts. The size of this problem grows dramatically with the number of variables and constraints considered. Thus, a scenario reduction approach is developed that makes this stochastic problem computationally feasible. This scenario reduction technique is derived from observations about the relationship between the variance of the locational marginal prices corresponding to the power balance equations and the optimal storage size. Additionally, a probabilistic, or chance-constrained, model predictive control (MPC) problem is formulated to take wind forecast errors into account in the optimal storage sizing problem. A probability distribution of wind forecast errors is formed and incorporated into the original storage sizing problem. An analytical form of this constraint is derived to solve the optimization problem directly, without having to use Monte Carlo simulations or other techniques that sample the probability distribution of forecast errors. In the operations stage, an MPC AC optimal power flow problem is decomposed with respect to physical control areas. Each area performs an independent optimization, and variable values on the border buses between areas are exchanged at each Newton-Raphson iteration. Two modifications to the Approximate Newton Directions (AND) method are presented and used to solve the distributed MPC optimization problem, both with the intention of improving the convergence rate of the original AND method. Methods are developed to account for numerical difficulties encountered by these formulations, specifically with regard to Jacobian singularities introduced by the intertemporal constraints. Simulation results show convergence of the decomposed optimization problem to the centralized result, demonstrating the benefits of coordinating control areas in the IEEE 57-bus test system. The benefit of energy storage in MPC formulations is also demonstrated in the simulations: the impact of fluctuations in the power supply introduced by intermittent sources is reduced by coordinating resources across control areas. An overall reduction of generation costs and an increase in renewable penetration in the system are observed, with promising results for effectively and efficiently integrating renewable resources into the electric power grid on a large scale.
22

Klöppel, Michael [Verfasser], Armin [Akademischer Betreuer] Hoffmann, Pu [Gutachter] Li, and Oliver [Gutachter] Stein. "Efficient numerical solution of chance constrained optimization problems with engineering applications / Michael Klöppel ; Gutachter: Pu Li, Oliver Stein ; Betreuer: Armin Hoffmann." Ilmenau : TU Ilmenau, 2014. http://d-nb.info/1178182878/34.

23

Lourens, Spencer. "Bias in mixtures of normal distributions and joint modeling of longitudinal and time-to-event data with monotonic change curves." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1685.

Abstract:
Estimating parameters in a mixture of normal distributions dates back to the 19th century, when Pearson first considered data on crabs from the Bay of Naples. Since then, many real-world applications of mixtures have led to various proposed methods for studying similar problems. Among them, maximum likelihood estimation (MLE) and the continuous empirical characteristic function (CECF) methods have drawn the most attention. However, the performance of these competing estimation methods has not been thoroughly studied in the literature, and conclusions have not been consistent in published research. In this article, we revisit this classical problem with a focus on estimation bias. An extensive simulation study is conducted to compare the estimation bias between the MLE and CECF methods over a wide range of disparity values. We use the overlapping coefficient (OVL) to measure the amount of disparity, and provide a practical guideline for estimation quality in mixtures of normal distributions. Application to an ongoing multi-site Huntington disease study is illustrated for ascertaining cognitive biomarkers of disease progression. We also study joint modeling of longitudinal and time-to-event data and discuss pattern-mixture and selection models, but focus on shared parameter models, which utilize unobserved random effects in order to "join" a marginal longitudinal data model and a marginal survival model so as to assess an internal time-dependent covariate's effect on time-to-event. The marginal models used in the analysis are the Cox proportional hazards model and the linear mixed model, and both are covered in some detail before joint models are defined and the estimation process is described. Joint modeling provides a framework that accounts for correlation between the longitudinal data and the time-to-event data, while also accounting for measurement error in the longitudinal process, which previous methods failed to do. Since it has been shown that bias is incurred, and that this bias is proportional to the amount of measurement error, a joint modeling approach is preferred. Our setting is further complicated by monotone degeneration of the internal covariate considered, and so a joint model is proposed which utilizes monotone B-splines to recover the longitudinal trajectory and a Cox proportional hazards (CPH) model for the time-to-event data. The monotonicity constraints are satisfied via the projected Newton-Raphson algorithm as described by Cheng et al. (2012), with the baseline hazard profiled out of the Q function in each M-step of the expectation-maximization (EM) algorithm used for optimizing the observed likelihood. This method is applied to assess the ability of the Total Motor Score (TMS) to predict Huntington disease motor diagnosis in the Biological Predictors of Huntington's Disease (PREDICT-HD) study data.
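The OVL disparity measure used in the simulation study is just the integral of the pointwise minimum of the two component densities. A short sketch for two normal components with invented parameters:

```python
# Overlapping coefficient (OVL) of two normal densities by numerical quadrature.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu1, sd1, mu2, sd2 = 0.0, 1.0, 1.5, 1.2   # illustrative component parameters

ovl, _ = quad(lambda x: np.minimum(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2)),
              -np.inf, np.inf)
print(ovl)   # 1.0 means identical components; values near 0, well-separated ones
```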
24

Kůdela, Jakub. "Advanced Decomposition Methods in Stochastic Convex Optimization." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-403864.

Abstract:
When dealing with stochastic programming problems, we frequently encounter optimization problems that are too large to be processed by routine methods of mathematical programming. In some cases, however, these problems have a suitable structure that allows the use of specialized decomposition methods designed for solving large-scale optimization problems. This thesis deals with two classes of stochastic programming problems that have such a special structure, namely two-stage stochastic problems and chance-constrained problems, and with advanced decomposition methods that can be used to solve problems in these two classes. We describe a novel method for constructing warm-start cuts for the Generalized Benders Decomposition, which is used for solving two-stage stochastic problems. For the class of chance-constrained problems we present an original decomposition method, which we call the Pool & Discard algorithm. The usefulness of the described decomposition methods is demonstrated on several examples and engineering applications.
25

Prigent, Sylvain. "Approche novatrice pour la conception et l’exploitation d’avions écologiques." Thesis, Toulouse, ISAE, 2015. http://www.theses.fr/2015ESAE0014/document.

Abstract:
The objective of this PhD work is to pose, investigate, and solve the highly multidisciplinary and multiobjective problem of environmentally efficient aircraft design and operation. To this end, the main three drivers for optimizing the environmental performance of an aircraft are the airframe, the engine, and the mission profiles, i.e., the trajectory. The figures of merit considered for optimization are fuel burn, local emissions, global emissions, and climate impact (noise excluded). The study focuses on finding efficient compromise strategies between these objectives and identifying the most powerful design architectures and design-driver combinations for improving environmental performance. The modeling uncertainty is taken into account by rigorously selected methods. A hybrid aircraft configuration is proposed to reach the climate impact reduction objective.
26

Velay, Maxime. "Méthodes d’optimisation distribuée pour l’exploitation sécurisée des réseaux électriques interconnectés." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT063/document.

Abstract:
Notre société étant plus dépendante que jamais du vecteur électrique, la moindre perturbation du transport ou de l’acheminement de l’électricité a un impact social et économique important. La fiabilité et la sécurité des réseaux électriques sont donc cruciales pour les gestionnaires de réseaux, en plus des aspects économiques. De plus, les réseaux de transport sont interconnectés pour réduire les coûts des opérations et pour améliorer la sécurité. Un des plus grands défis des gestionnaires des réseaux de transport est ainsi de se coordonner avec les réseaux voisins, ce qui soulève des problèmes liés à la taille du problème, à l’interopérabilité et à la confidentialité des données. Cette thèse se focalise principalement sur la sécurité des opérations sur les réseaux électriques ; c’est pourquoi l’évolution des principales caractéristiques des blackouts, qui sont des échecs de la sécurité des réseaux, est étudiée sur la période 2005-2016. L’approche de cette étude consiste à déterminer quelles sont les principales caractéristiques des incidents de ces 10 dernières années, afin d’identifier ce qui devrait être intégré pour réduire le risque que ces incidents se reproduisent. Cette évolution a été étudiée et comparée avec les caractéristiques des blackouts qui se sont produits avant 2005. L’étude se focalise sur les préconditions qui ont mené à ces blackouts et sur les cascades, et particulièrement sur le rôle de la vitesse des cascades. Les caractéristiques importantes sont extraites et intégrées dans la suite de notre travail. Un algorithme résolvant un problème préventif d’Optimal Power Flow avec contraintes de sécurité (SCOPF) de manière distribuée est ainsi développé. Ce problème consiste en l’ajout de contraintes qui assurent qu’après la perte de n’importe quel appareil d’importance, le nouveau point d’équilibre, atteint suite au réglage primaire en fréquence, respecte les contraintes du système. L’algorithme développé utilise une décomposition fine du problème et est implémenté sous le paradigme multi-agent, basé sur deux catégories d’agents : les appareils et les bus. Les agents sont coordonnés grâce à l’« Alternating Direction Method of Multipliers » (ADMM) et grâce à un problème de consensus. Cette décomposition procure l’autonomie et la confidentialité nécessaires aux différents acteurs du système, mais aussi un bon passage à l’échelle par rapport à la taille du problème. Cet algorithme a aussi pour avantage d’être robuste à n’importe quelle perturbation, incluant la séparation du système en plusieurs régions. Puis, pour prendre en compte l’incertitude sur la production créée par les erreurs de prédiction des fermes éoliennes, une approche distribuée à deux étapes est développée pour résoudre un problème d’Optimal Power Flow avec contraintes probabilistes (CCOPF), d’une manière complètement distribuée. Les erreurs de prédiction des fermes éoliennes sont modélisées par des lois normales indépendantes et les écarts par rapport aux plannings de production sont considérés comme compensés par le réglage primaire en fréquence. La première étape de l’algorithme a pour but de déterminer des paramètres de sensibilité nécessaires pour formuler le problème. Les résultats de cette étape sont ensuite des paramètres d’entrée de la seconde étape qui, elle, résout le problème de CCOPF. Une extension de cette formulation permet d’ajouter de la flexibilité au problème en permettant la réduction de la production éolienne. Cet algorithme est basé sur la même décomposition fine que précédemment, où les agents sont également coordonnés par l’ADMM et grâce à un problème de consensus. En conclusion, cet algorithme en deux étapes garantit la confidentialité et l’autonomie des différents acteurs, et est parallèle et adapté aux plateformes haute performance.
Our societies are more dependent on electricity than ever, so any disturbance in power transmission and delivery has a major economic and social impact. Maintaining the reliability and security of power systems is therefore crucial for system operators, in addition to minimizing the operating cost. Moreover, transmission systems are interconnected to decrease the cost of operation and improve system security. One of the main challenges for transmission system operators (TSOs) is therefore to coordinate with interconnected power systems, which raises scalability, interoperability and privacy issues. Hence, this thesis is concerned with how TSOs can operate their networks in a decentralized way while coordinating with neighboring TSOs to find a cost-effective scheduling that is globally secure. The main focus of this thesis is the security of power systems, which is why the evolution of the main characteristics of blackouts, which are failures of power system security, is studied over the period 2005-2016. The approach consists in determining the major characteristics of the incidents of the past 10 years, to identify what should be taken into account to mitigate the risk of incidents. This evolution has been studied and compared with the characteristics of blackouts before 2005. The study focuses on the pre-conditions that led to those blackouts and on the cascades, especially the role of cascade speed. Some important features are extracted and later integrated in our work. An algorithm that solves the preventive Security Constrained Optimal Power Flow (SCOPF) problem in a fully distributed manner is thus developed. The preventive SCOPF problem consists in adding constraints ensuring that, after the loss of any major device of the system, the new steady state reached as a result of the primary frequency control does not violate any constraint. The developed algorithm uses a fine-grained decomposition and is implemented under the multi-agent system paradigm, based on two categories of agents: devices and buses. The agents are coordinated with the Alternating Direction Method of Multipliers (ADMM) in conjunction with a consensus problem. This decomposition provides autonomy and privacy to the different actors of the system, and its fine granularity provides good scalability with respect to the size of the problem. The algorithm also has the advantage of being robust to any disturbance of the system, including the separation of the system into regions. Then, to account for the uncertainty of production brought by wind farm forecast errors, a two-step distributed approach is developed to solve the Chance-Constrained Optimal Power Flow (CCOPF) problem in a fully distributed manner. The wind farm forecast errors are modeled by independent Gaussian distributions, and the mismatches with the initial schedules are assumed to be compensated by the primary frequency response of generators. The first step of this algorithm determines the sensitivity factors of the system needed to formulate the problem; its results are inputs of the second step, which solves the CCOPF itself. An extension of this formulation provides more flexibility by including the possibility to curtail the wind farms. This algorithm relies on the same fine-grained decomposition, where the agents are again coordinated by the ADMM and a consensus problem. In conclusion, this two-step algorithm ensures the privacy and autonomy of the different system actors, and it is de facto parallel and adapted to high-performance platforms.
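The Gaussian chance constraints described in this abstract admit a classic closed-form tightening. As a hedged illustration (a minimal sketch with invented sensitivities, limits and standard deviations, not the thesis's distributed two-step algorithm), a probabilistic line-flow limit under independent Gaussian forecast errors reduces to a deterministic limit tightened by a quantile-scaled standard deviation:

```python
# A minimal sketch of the classic Gaussian reformulation used in chance-constrained DC
# power flow, with invented numbers (sensitivities a, std devs sigma, limits); this
# illustrates the principle, not the thesis's distributed algorithm.
import numpy as np
from scipy.stats import norm

eps = 0.05                            # allowed violation probability
a = np.array([0.3, -0.1, 0.2])        # hypothetical line-flow sensitivities to 3 wind farms
sigma = np.array([10.0, 8.0, 5.0])    # std devs of the farms' forecast errors (MW)
f_sched = 120.0                       # scheduled (mean) flow on the line (MW)
f_max = 150.0                         # thermal limit (MW)

# P(f_sched + a.e <= f_max) >= 1 - eps with e ~ N(0, diag(sigma^2)) reduces to a
# deterministic limit tightened by a quantile-scaled standard deviation:
sigma_flow = np.sqrt(np.sum((a * sigma) ** 2))
tightened = f_max - norm.ppf(1 - eps) * sigma_flow
print(f"flow std dev = {sigma_flow:.2f} MW, tightened limit = {tightened:.2f} MW")
print("chance constraint satisfied:", f_sched <= tightened)
```

The same quantile-based tightening is what renders each chance constraint deterministic once the sensitivity factors from the first step are known.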
APA, Harvard, Vancouver, ISO, and other styles
27

Alais, Jean-Christophe. "Risque et optimisation pour le management d'énergies : application à l'hydraulique." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1071/document.

Full text
Abstract:
L’hydraulique est la principale énergie renouvelable produite en France. Elle apporte une réserve d’énergie et une flexibilité intéressantes dans un contexte d’augmentation de la part des énergies intermittentes dans la production. Sa gestion soulève des problèmes difficiles dus au nombre des barrages, aux incertitudes sur les apports d’eau et sur les prix, ainsi qu’aux usages multiples de l’eau. Cette thèse CIFRE, effectuée en partenariat avec Electricité de France, aborde deux questions de gestion hydraulique formulées comme des problèmes d’optimisation dynamique stochastique. Elles sont traitées dans deux grandes parties. Dans la première partie, nous considérons la gestion de la production hydroélectrique d’un barrage soumise à une contrainte dite de cote touristique. Cette contrainte vise à assurer une hauteur de remplissage du réservoir suffisamment élevée durant l’été, avec un niveau de probabilité donné. Nous proposons différentes modélisations originales de ce problème et nous développons les algorithmes de résolution correspondants. Nous présentons des résultats numériques qui éclairent différentes facettes du problème, utiles pour les gestionnaires du barrage. Dans la seconde partie, nous nous penchons sur la gestion d’une cascade de barrages. Nous présentons une méthode de résolution approchée par décomposition-coordination, l’algorithme Dual Approximate Dynamic Programming (DADP). Nous montrons comment décomposer, barrage par barrage, le problème de la cascade en sous-problèmes obtenus en dualisant la contrainte de couplage spatial « déversé supérieur = apport inférieur ». Sur un cas à trois barrages, nous sommes en mesure de comparer les résultats de DADP à la solution exacte (obtenue par programmation dynamique), obtenant des gains à quelques pourcents de l’optimum avec des temps de calcul intéressants. Les conclusions auxquelles nous sommes parvenus offrent des perspectives encourageantes pour l’optimisation stochastique de systèmes de grande taille.
Hydropower is the main renewable energy produced in France. It brings both an energy reserve and flexibility, of great interest in a context of penetration of intermittent sources in the production of electricity. Its management raises difficulties stemming from the number of dams, from uncertainties in water inflows and prices, and from the multiple uses of water. This PhD thesis, carried out in partnership with Electricité de France, addresses two hydropower management issues modeled as stochastic dynamic optimization problems. The manuscript is divided into two parts. In the first part, we consider the management of a hydroelectric dam subject to a so-called tourist constraint. This constraint ensures that a given minimum dam stock level is maintained during the summer months with a prescribed probability. We propose different original modelings and provide corresponding numerical algorithms. We present numerical results that highlight the problem under various angles useful for dam managers. In the second part, we focus on the management of a cascade of dams. We present the approximate decomposition-coordination algorithm called Dual Approximate Dynamic Programming (DADP). We show how to decompose an original (large-scale) problem into smaller subproblems by dualizing the spatial coupling constraints. On a three-dam instance, we are able to compare the results of DADP with the exact solution (obtained by dynamic programming); we obtain approximate gains that are within a few percent of the optimum, with interesting running times. The conclusions we arrived at offer encouraging perspectives for the stochastic optimization of large-scale problems.
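The tourist constraint is exactly the kind of probabilistic level requirement that can be audited by Monte Carlo simulation. The following minimal sketch (toy inflow law, constant-release policy and invented numbers, not the thesis's models) estimates the probability that a summer stock requirement holds under a given release policy:

```python
# A minimal sketch (invented dynamics) of checking a "tourist" chance constraint by
# simulation: estimate P(stock stays above a target level during summer weeks) under
# random inflows and a fixed toy release policy.
import numpy as np

rng = np.random.default_rng(0)
T, n_sims = 52, 10_000                 # weekly steps over one year
s0, s_max = 60.0, 100.0                # initial stock and capacity (hm^3)
target, summer = 70.0, range(26, 35)   # required summer level and summer weeks
release = 5.0                          # constant turbine release per week (toy policy)

ok = 0
for _ in range(n_sims):
    s, feasible = s0, True
    for t in range(T):
        inflow = rng.gamma(shape=2.0, scale=3.0)       # random weekly inflow
        s = min(s_max, max(0.0, s + inflow - release))
        if t in summer and s < target:
            feasible = False
    ok += feasible
print(f"P(summer stock >= {target}) ≈ {ok / n_sims:.3f}")
```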
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Chenghao. "Contribution à l’optimisation robuste de réseaux." Thesis, Compiègne, 2021. http://www.theses.fr/2021COMP2632.

Full text
Abstract:
Cette thèse a pour objectif la proposition de nouvelles approches algorithmiques et de modélisation pour la résolution de certains problèmes d’optimisation de réseau dans les domaines des transports et des télécommunications. Plus précisément, les problèmes étudiés relèvent du transport aérien, nommément le problème d’affectation des niveaux de vol dans l’espace aérien, et des télécommunications, où l’on traite des problèmes d’allocation de ressources dans les réseaux 5G. Un aspect important pris en compte dans cette étude est l’incertitude des données, c’est-à-dire le fait qu’une partie des données d’entrée ne sont pas connues de façon précise. Notre tâche a consisté à modéliser chacun des problèmes, à proposer une formulation compacte des variantes déterministe et robuste, et à proposer des approches appropriées pour les résoudre. Les problèmes étudiés tombent dans la catégorie des problèmes NP-complets et ils sont difficiles à résoudre, même pour des instances de taille modeste. Ils deviennent encore plus difficiles dans leur version robuste. Pour le problème d’affectation des niveaux de vol, nous avons considéré les incertitudes liées à l’heure de départ, qui sont modélisées via un modèle de mélange gaussien. Le problème est modélisé comme un « chance-constrained problem » et résolu par un algorithme heuristique de génération de contraintes. Il s’agit d’une approche générale qui trouvera des applications plus larges que celles étudiées dans cette thèse. Ensuite, nous avons étudié la conception optimale des réseaux sans fil de 5ème génération (5G) dans le contexte de l’architecture Superfluid. Plus précisément, l’architecture 5G Superfluid est basée sur des entités de réseau appelées « Bloc Fonctionnel Réutilisable » (RFB) qui sont à la base des réseaux 5G. Nous avons étudié le problème de conception d’un tel réseau Superfluid à coût minimum, pour lequel une formulation en programme linéaire mixte et une approche de résolution utilisant la décomposition de Benders ont été proposées et implémentées. Enfin, le problème spécifique de conception de réseaux virtuels a été considéré sous l’angle de l’efficacité énergétique. Nous avons proposé une formulation de programmation linéaire en nombres entiers mixte du problème robuste, et présentons une nouvelle matheuristique basée sur la combinaison d’un algorithme génétique avec la recherche de voisinage. Les résultats numériques ont porté sur des instances de taille réaliste et ont montré la validité des modèles et des approches proposés.
This Ph.D. Thesis is focused on proposing new optimization modeling and algorithmic approaches for dealing with real-world network optimization problems arising in the transportation and telecommunications fields. Since the focus has been on real-world applications, a relevant aspect that has been taken into account is data uncertainty, i.e. the fact that the value of a subset of the input data is not exactly known when the problem is solved. More precisely, in the context of transportation problems, we considered the flight level assignment problem, which arises in air traffic management. It aims at establishing the flight levels of a set of aircraft in order to improve the total assignment revenue and to reduce the total number of flight conflicts as well as the total en-route delay. In this context, we proposed a new chance-constrained optimization problem and an iterative constraint-generation heuristic based on both analytical and sampling methods. Besides transportation problems, this Thesis has also focused on the optimal design of the 5th generation of wireless networks (5G), considering Superfluid and virtual architectures. Specifically, the 5G Superfluid architecture is based on atomic virtual entities called Reusable Functional Blocks (RFBs). We investigated the problem of minimizing the total installation cost of a 5G Superfluid network (composed of virtual entities and realized over a physical network) while guaranteeing constraints on user coverage, downlink traffic performance and technical constraints on RFBs of different natures. To solve this hard problem, we proposed a Benders decomposition approach. Concerning instead the design of general virtual networks, we adopted a green paradigm that pursues energy efficiency and tackled a state-of-the-art robust mixed integer linear programming formulation of the problem by means of a new matheuristic based on combining a genetic algorithm with exact large neighborhood searches. Results of computational tests executed on realistic problem instances have shown the validity of all the new optimization modeling and algorithmic approaches proposed in this Thesis for the transportation and telecommunications problems sketched above.
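The iterative constraint-generation heuristic mentioned above can be sketched on a generic chance-constrained LP: solve a relaxation, estimate the violation probability by sampling, and add the most violated sampled scenario as a deterministic cut. This is a hedged toy version (invented data, scipy's linprog as the solver), not the thesis's flight-level model:

```python
# A toy constraint-generation loop for a chance-constrained LP: the chance constraint
# P(a(xi).x <= b) >= 1 - eps is checked by sampling, and violated samples become cuts.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
c = np.array([-1.0, -2.0])                           # maximize x1 + 2*x2
b, eps, N = 10.0, 0.1, 5_000
scenarios = 1.0 + 0.3 * rng.standard_normal((N, 2))  # sampled constraint rows a(xi)

A_cuts, rhs = [], []
x, p_viol = None, 1.0
for it in range(50):
    res = linprog(c, A_ub=np.array(A_cuts) if A_cuts else None,
                  b_ub=np.array(rhs) if rhs else None, bounds=[(0, 8), (0, 8)])
    x = res.x
    lhs = scenarios @ x
    p_viol = np.mean(lhs > b)                        # estimated violation probability
    if p_viol <= eps:
        break
    A_cuts.append(scenarios[np.argmax(lhs)])         # cut off the worst sampled scenario
    rhs.append(b)
print(f"x = {np.round(x, 3)}, violation prob ≈ {p_viol:.3f}, cuts = {len(A_cuts)}")
```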
APA, Harvard, Vancouver, ISO, and other styles
29

Lorenz, Nicole. "Application of the Duality Theory." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-94108.

Full text
Abstract:
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning. First we give some notations and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory, and give some generalized interior point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions using some regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some measures more general than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above. The generalized deviation measures introduced in the previous chapter can be used to formulate the portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions given in the second and third chapters, respectively. Analogous calculations are done for a portfolio optimization problem having single chance constraints, using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes the well-known Vapnik ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict the concrete compressive strength by using a special Support Vector Regression problem.
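A small numerical check conveys the conjugate-function machinery used throughout: for Vapnik's ε-insensitive loss v(x) = max(|x| - ε, 0), the conjugate is known to be v*(y) = ε|y| for |y| <= 1 and +∞ otherwise. The sketch below (a grid approximation of the supremum, not part of the thesis) verifies this:

```python
# Numerical check that the Fenchel conjugate of the eps-insensitive loss
# v(x) = max(|x| - eps, 0) equals eps*|y| on |y| <= 1 (and +infinity otherwise).
import numpy as np

eps = 0.5
xs = np.linspace(-50, 50, 200_001)
v = np.maximum(np.abs(xs) - eps, 0.0)

for y in [-0.8, -0.3, 0.0, 0.4, 1.0]:
    v_star = np.max(y * xs - v)          # sup over the grid of { x*y - v(x) }
    print(f"y = {y:+.1f}: numeric v*(y) = {v_star:.4f}, closed form = {eps * abs(y):.4f}")
# for |y| > 1 the grid supremum grows with the grid width, reflecting v*(y) = +infinity
```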
APA, Harvard, Vancouver, ISO, and other styles
30

Kabalan, Bilal. "Systematic methodology for generation and design of hybrid vehicle powertrains." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE1048.

Full text
Abstract:
Pour répondre aux objectifs de consommation des flottes de véhicules, aux normes d’émissions de polluants et aux nouvelles demandes de l’usager, les constructeurs automobiles doivent développer des motorisations hybrides et électriques. Réaliser une chaîne de traction hybride reste cependant une tâche difficile. Ces systèmes sont complexes et possèdent de nombreuses variables réparties sur différents niveaux : architecture, technologie des composants, dimensionnement et contrôle/commande. L’industrie manque encore d’environnements et d’outils pouvant aider à l’exploration de l’ensemble de l’espace de dimensionnement et à trouver la meilleure solution parmi tous ces niveaux. Cette thèse propose une méthodologie systématique pour répondre au moins partiellement à ce besoin. Partant d’un ensemble de composants, cette méthodologie permet de générer automatiquement tous les graphes d’architectures possibles en utilisant la technique de programmation par contraintes. Une représentation dédiée est développée pour visualiser ces graphes. Les éléments de boîtes de vitesses (embrayages, synchroniseurs) sont représentés avec un niveau de détail approprié pour générer de nouvelles transmissions mécaniques sans trop complexifier le problème. Les graphes obtenus sont ensuite transformés en d’autres types de représentation : 0ABC Table (décrivant les connexions mécaniques entre les composants), Modes Table (décrivant les modes de fonctionnement disponibles dans les architectures) et Modes Table + (décrivant pour chaque mode le rendement et le rapport de réduction global des chemins de transfert de l’énergie entre tous les composants). Sur la base de ces représentations, les nombreuses architectures générées sont filtrées et seules les plus prometteuses sont sélectionnées. Elles sont ensuite automatiquement évaluées et optimisées avec un modèle général spécifiquement développé pour calculer les performances et la consommation de toutes les architectures générées. Ce modèle est inséré dans un processus d’optimisation à deux niveaux : un algorithme génétique (GA) est utilisé pour le dimensionnement des composants et la programmation dynamique est utilisée au niveau contrôle (gestion de l’énergie) du système. Un cas d’étude est ensuite réalisé pour montrer le potentiel de cette méthodologie. Nous générons ainsi automatiquement toutes les architectures qui incluent un ensemble de composants défini à l’avance, et le filtrage automatique élimine les architectures présupposées non efficaces et sélectionne les plus prometteuses pour l’optimisation. Les résultats montrent que la méthodologie proposée permet d’aboutir à une architecture meilleure (consommation diminuée de 5 %) que celles imaginées de prime abord (en dehors de toute méthodologie).
To meet the fleet-wide average CO2 targets, the stringent pollutant emission standards, and clients' new demands, automakers have realized the inevitable need to offer more hybrid and electric powertrains. Designing a hybrid powertrain remains however a complex task. It is an intricate system involving numerous variables that are spread over different levels: architecture, component technologies, sizing, and control. The industry lacks frameworks or tools that help in exploring the entire design space and in finding the global optimal solution on all these levels. This thesis proposes a systematic methodology that tries to answer a part of this need. Starting from a set of chosen components, the methodology automatically generates all the possible graphs of architectures using constraint-programming techniques. A tailored representation is developed to picture these graphs. The gearbox elements (clutches, synchronizer units) are represented with a level of detail appropriate for generating the new-trend dedicated hybrid gearboxes, without making the problem too complex. The graphs are then transformed into other types of representation: 0ABC Table (describing the mechanical connections between the components), Modes Table (describing the available modes in the architectures) and Modes Table + (describing, for each available mode, the global efficiency and ratio of the power flow between all the components). Based on these representations, the architectures are filtered and the most promising ones are selected. They are automatically assessed and optimized using a general hybrid model specifically developed to calculate the performance and fuel consumption of all the generated architectures. This model is inserted inside a bi-level optimization process: a Genetic Algorithm (GA) is used on the sizing and components level, while Dynamic Programming (DP) is used on the control level. A case study is performed and the capability of the methodology is proven. It succeeded in automatically generating all the graphs of possible architectures, and in filtering out architectures that were then shown to be inefficient. It also selected the most promising architectures for optimization. The results show that the proposed methodology succeeded in finding an architecture better than the ones proposed without the methodology (consumption about 5% lower).
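The bi-level scheme (GA over sizing, DP over energy management) can be sketched compactly. The following toy (invented demand profile, fuel model and costs; not the thesis's vehicle model) nests a small state-of-charge dynamic program inside a rudimentary genetic algorithm:

```python
# A toy bi-level nest: an outer genetic algorithm sizes two components, an inner
# dynamic program over a discrete state-of-charge (SOC) grid computes the best
# energy management for each candidate sizing.
import numpy as np

rng = np.random.default_rng(2)
P_dem = np.array([10.0, 25.0, 40.0, 15.0, 30.0, 20.0])   # power demand per step (kW)

def dp_fuel(batt_kwh, eng_kw):
    """Inner level: minimal fuel via backward DP over an SOC grid (1 h time steps)."""
    soc_grid = np.linspace(0.0, batt_kwh, 21)
    pb_opts = np.linspace(-10.0, 10.0, 9)                # battery power choices (kW)
    INF = 1e9
    # terminal cost: require at least half capacity so the battery is not just drained
    J = np.where(soc_grid >= 0.5 * batt_kwh, 0.0, INF)
    for t in reversed(range(len(P_dem))):
        J_new = np.full_like(J, INF)
        for i, soc in enumerate(soc_grid):
            for pb in pb_opts:
                pe, soc2 = P_dem[t] - pb, soc - pb       # engine covers the rest
                if pe < 0 or pe > eng_kw or soc2 < 0 or soc2 > batt_kwh:
                    continue
                j = np.argmin(np.abs(soc_grid - soc2))   # nearest-grid transition
                fuel = 0.3 * pe + (1.0 if pe > 0 else 0.0)
                J_new[i] = min(J_new[i], fuel + J[j])
        J = J_new
    return J[np.argmin(np.abs(soc_grid - 0.5 * batt_kwh))]   # start at half SOC

def fitness(ind):                                        # fuel plus a sizing cost
    return dp_fuel(ind[0], ind[1]) + 0.5 * ind[0] + 0.2 * ind[1]

pop = [np.array([rng.uniform(5, 40), rng.uniform(20, 60)]) for _ in range(12)]
for gen in range(15):                                    # outer level: a tiny GA
    scores = [fitness(p) for p in pop]
    winners = [min(rng.choice(12, 3), key=lambda i: scores[i]) for _ in range(12)]
    pop = [np.clip(pop[i] + rng.normal(0, 2, 2), [5, 20], [40, 60]) for i in winners]
best = min(pop, key=fitness)
print(f"best sizing: battery {best[0]:.1f} kWh, engine {best[1]:.1f} kW")
```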
APA, Harvard, Vancouver, ISO, and other styles
31

Lin, Chi-Ping (林奇平). "Solving Minimum Cost Redundancy Allocation Problem with Chance Constraint Using Simulation Optimization." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/btfs47.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
Academic year 105 (ROC calendar)
The redundancy allocation problem (RAP), which aims to minimize the total system cost subject to constraints on system reliability, is an important problem in system design, with applications in areas such as electronic systems, power systems, telecommunication systems and manufacturing systems. In this paper, we extend the RAP to a more general setting. First, the network topology is generalized: the components in the system can be in any logical relationship. Second, there are chance constraints requiring the overall system reliability to exceed a prescribed value. We propose an efficient simulation optimization method. A numerical study shows that the proposed method locates the optimal solution more efficiently than existing methods.
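The simulation side of such a chance-constrained RAP is easy to sketch: estimate the reliability of an arbitrary logical topology by Monte Carlo and test a candidate allocation against the target. The structure function and all numbers below are invented for illustration, not taken from the thesis:

```python
# Monte Carlo reliability estimation for a redundancy allocation candidate with an
# arbitrary logical structure function (here: (sub1 AND sub2) OR sub3).
import numpy as np

rng = np.random.default_rng(3)

def subsystem_ok(n_units, p_unit, size):
    """A redundant subsystem works if at least one of its parallel units works."""
    return (rng.random((size, n_units)) < p_unit).any(axis=1)

def system_reliability(alloc, size=100_000):
    s1 = subsystem_ok(alloc[0], 0.90, size)
    s2 = subsystem_ok(alloc[1], 0.85, size)
    s3 = subsystem_ok(alloc[2], 0.80, size)
    return np.mean((s1 & s2) | s3)          # any logical topology fits here

alloc = [2, 3, 1]                           # candidate: redundant units per subsystem
cost = float(np.dot(alloc, [5.0, 3.0, 8.0]))
R_hat = system_reliability(alloc)
print(f"allocation {alloc}: cost {cost:.0f}, estimated reliability {R_hat:.4f}")
print("meets reliability target 0.98:", R_hat >= 0.98)
```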
APA, Harvard, Vancouver, ISO, and other styles
32

Ta, Thuy Anh. "Stochastic optimization of staffing for multiskill call centers." Thèse, 2019. http://hdl.handle.net/1866/23441.

Full text
Abstract:
Dans cette thèse, nous étudions le problème d’optimisation des effectifs dans les centres d’appels, dans lequel nous visons à minimiser les coûts d’exploitation tout en offrant aux clients une qualité de service (QoS) élevée. Nous introduisons également l’utilisation de contraintes probabilistes qui exigent que la qualité de service soit satisfaite avec une probabilité donnée. Ces contraintes sont adéquates dans le cas où la performance est mesurée sur un court intervalle de temps, car les mesures de QoS sont des variables aléatoires sur une période donnée. Les problèmes de personnel proposés sont difficiles en raison de l’absence de forme analytique des contraintes probabilistes, qui doivent être approximées par simulation. En outre, les fonctions de QoS sont généralement non linéaires et non convexes. Nous considérons les problèmes d’affectation de personnel dans différents contextes et étudions les modèles proposés tant du point de vue théorique que pratique. Les méthodologies développées sont générales, en ce sens qu’elles peuvent être adaptées et appliquées à d’autres problèmes de décision dans les systèmes de files d’attente. La thèse comprend trois articles traitant de différents défis en matière de modélisation et de résolution de problèmes d’optimisation d’affectation de personnel dans les centres d’appels à compétences multiples. Les premier et deuxième articles concernent un problème d’optimisation d’affectation de personnel en deux étapes sous incertitude. Alors que dans le premier, nous étudions un modèle général de programmation stochastique discrète en deux étapes pour fournir une garantie théorique de la consistance de l’approximation par moyenne échantillonnale (SAA) lorsque la taille des échantillons tend vers l’infini, le deuxième applique l’approche SAA pour résoudre le problème d’optimisation d’affectation de personnel en deux étapes avec des taux d’arrivée incertains. Les deux articles indiquent la viabilité de l’approche SAA dans notre contexte, tant du point de vue théorique que pratique. Pour être plus précis, dans le premier article, nous considérons un problème stochastique discret général en deux étapes avec des contraintes en espérance. Nous formulons un problème SAA avec échantillonnage imbriqué et nous montrons que, sous certaines hypothèses satisfaites dans les exemples de centres d’appels, il est possible d’obtenir les solutions optimales du problème initial en résolvant son SAA avec des échantillons suffisamment grands. De plus, nous montrons que la probabilité que la solution optimale du problème échantillonné soit une solution optimale du problème initial tend vers un de manière exponentielle au fur et à mesure que nous augmentons la taille des échantillons. Ces résultats théoriques sont importants, non seulement pour les applications de centre d’appels, mais également pour d’autres problèmes de prise de décision avec des variables de décision discrètes. Le deuxième article concerne les méthodes de résolution d’un problème d’affectation de personnel en deux étapes sous incertitude du taux d’arrivée. Le problème SAA est coûteux à résoudre lorsque le nombre de scénarios est important car, pour chaque scénario, il est nécessaire d’effectuer une simulation pour estimer les contraintes de QoS. Nous développons un algorithme combinant simulation, génération de coupes, renforcement de coupes et décomposition de Benders pour résoudre le problème SAA. Nous montrons l’efficacité de l’approche, en particulier lorsque le nombre de scénarios est grand. 
Dans le dernier article, nous examinons les problèmes de contraintes en probabilité sur les mesures de niveau de service. Notre méthodologie proposée dans cet article est motivée par le fait que les fonctions de QoS affichent généralement des courbes en S et peuvent être bien approximées par des fonctions sigmoïdes appropriées. Sur la base de cette idée, nous avons développé une nouvelle approche combinant la régression non linéaire, la simulation et la recherche locale par région de confiance pour résoudre efficacement les problèmes de personnel à grande échelle de manière viable. L’avantage principal de cette approche est que la procédure d’optimisation peut être formulée comme une séquence de simulations et de résolutions de problèmes de programmation linéaire. Les résultats numériques basés sur des exemples réels de centres d'appels montrent l'efficacité pratique de notre approche. Les méthodologies développées dans cette thèse peuvent être appliquées dans de nombreux autres contextes, par exemple les problèmes de personnel et de planification dans d'autres systèmes basés sur des files d'attente avec d'autres types de contraintes de QoS. Celles-ci soulèvent également plusieurs axes de recherche qu'il pourrait être intéressant d'étudier. Par exemple, une approche de regroupement de scénarios pour atténuer le coût des modèles d'affectation en deux étapes, ou une version d'optimisation robuste en distribution pour mieux gérer l'incertitude des données.
In this thesis, we study the staffing optimization problem in multiskill call centers, in which we aim at minimizing the operating cost while delivering a high quality of service (QoS) to customers. We also introduce the use of chance constraints which require that the QoS be met with a given probability. These constraints are adequate in the case when the performance is measured over a short time interval, as QoS measures are random variables in a given time period. The proposed staffing problems are challenging in the sense that the stochastic constraints have no closed form and need to be approximated by simulation. In addition, the QoS functions are typically non-linear and non-convex. We consider staffing optimization problems in different settings and study the proposed models in both theoretical and practical aspects. The methodologies developed are general, in the sense that they can be adapted and applied to other staffing/scheduling problems in queuing-based systems. The thesis consists of three articles dealing with different challenges in modeling and solving staffing optimization problems in multiskill call centers. The first and second articles concern a two-stage staffing optimization problem under uncertainty. While in the first one, we study a general two-stage discrete stochastic programming model to provide a theoretical guarantee for the consistency of the sample average approximation (SAA) when the sample sizes go to infinity, the second one applies the SAA approach to solve the two-stage staffing optimization problem under arrival rate uncertainty. Both papers indicate the viability of the SAA approach in our context, in both theoretical and practical aspects. To be more precise, in the first article, we consider a general two-stage discrete stochastic problem with expected value constraints. We formulate its SAA with nested sampling. We show that under some assumptions that hold in call center examples, one can obtain the optimal solutions of the original problem by solving its SAA with large enough sample sizes. Moreover, we show that the probability that the optimal solution of the sample problem is an optimal solution of the original problem approaches one exponentially fast as we increase the sample sizes. These theoretical findings are important, not only for call center applications, but also for other decision-making problems with discrete decision variables. The second article concerns solution methods for a two-stage staffing problem under arrival rate uncertainty. It is motivated by the fact that the SAA version of the two-stage staffing problem becomes expensive to solve with a large number of scenarios, as for each scenario, one needs to use simulation to approximate the QoS constraints. We develop an algorithm that combines simulation, cut generation, cut strengthening and Benders decomposition to solve the SAA problem. We show the efficiency of the approach, especially when the number of scenarios is large. In the last article, we consider problems with chance constraints on the service level measures. Our methodology proposed in this article is motivated by the fact that the QoS functions generally display "S-shaped" curves and might be well approximated by appropriate sigmoid functions. Based on this idea, we develop a novel approach that combines non-linear regression, simulation and trust region local search to efficiently solve large-scale staffing problems in a viable way. 
The main advantage of the approach is that the optimization procedure can be formulated as a sequence of simulations and solutions of linear programming models. Numerical results based on real-life call center examples show the practical viability of our approach. The methodologies developed in this thesis can be applied in many other settings, e.g., staffing and scheduling problems in other queuing-based systems with other types of QoS constraints. These also raise several research directions that might be interesting to investigate: for example, a clustering approach to mitigate the cost of the two-stage staffing models, or a distributionally robust optimization version to better deal with data uncertainty.
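The sigmoid idea of the last article can be sketched in a few lines: fit a sigmoid to noisy simulated service levels by non-linear regression, then invert it to find the smallest staffing whose predicted QoS meets the target. Synthetic data stand in for the call-center simulator here; this is an illustration of the idea, not the thesis's algorithm:

```python
# Fit an S-shaped QoS curve (service level vs. staffing) with a sigmoid and invert it
# to get the minimal staffing satisfying the chance-constraint target.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def sigmoid(n, a, b):
    return 1.0 / (1.0 + np.exp(-a * (n - b)))

staffing = np.arange(5, 41)
# stand-in for noisy simulation estimates of P(service level met) at each staffing
true_p = sigmoid(staffing, 0.35, 20.0)
sim_p = np.clip(true_p + 0.03 * rng.standard_normal(staffing.size), 0, 1)

(a_hat, b_hat), _ = curve_fit(sigmoid, staffing, sim_p, p0=[0.1, 15.0])
target = 0.95
n_min = b_hat + np.log(target / (1 - target)) / a_hat   # smallest n with sigmoid >= target
print(f"fitted a={a_hat:.3f}, b={b_hat:.2f}; minimal staffing ≈ {int(np.ceil(n_min))}")
```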
APA, Harvard, Vancouver, ISO, and other styles
33

Ta, Thuy Anh. "Staffing Optimization with Chance Constraints in Call Centers." Thèse, 2013. http://hdl.handle.net/1866/10687.

Full text
Abstract:
Les centres d’appels sont des éléments clés de presque n’importe quelle grande organisation. Le problème de gestion du travail a reçu beaucoup d’attention dans la littérature. Une formulation typique se base sur des mesures de performance sur un horizon infini, et le problème d’affectation d’agents est habituellement résolu en combinant des méthodes d’optimisation et de simulation. Dans cette thèse, nous considérons un problème d’affection d’agents pour des centres d’appels soumis a des contraintes en probabilité. Nous introduisons une formulation qui exige que les contraintes de qualité de service (QoS) soient satisfaites avec une forte probabilité, et définissons une approximation de ce problème par moyenne échantillonnale dans un cadre de compétences multiples. Nous établissons la convergence de la solution du problème approximatif vers celle du problème initial quand la taille de l’échantillon croit. Pour le cas particulier où tous les agents ont toutes les compétences (un seul groupe d’agents), nous concevons trois méthodes d’optimisation basées sur la simulation pour le problème de moyenne échantillonnale. Étant donné un niveau initial de personnel, nous augmentons le nombre d’agents pour les périodes où les contraintes sont violées, et nous diminuons le nombre d’agents pour les périodes telles que les contraintes soient toujours satisfaites après cette réduction. Des expériences numériques sont menées sur plusieurs modèles de centre d’appels à faible occupation, au cours desquelles les algorithmes donnent de bonnes solutions, i.e. la plupart des contraintes en probabilité sont satisfaites, et nous ne pouvons pas réduire le personnel dans une période donnée sont introduire de violation de contraintes. Un avantage de ces algorithmes, par rapport à d’autres méthodes, est la facilité d’implémentation.
Call centers are key components of almost any large organization. The problem of labor management has received a great deal of attention in the literature. A typical formulation of the staffing problem is in terms of infinite-horizon performance measures, and it is usually solved by combining simulation and optimization. In this thesis, we consider a problem of staffing call centers with respect to chance constraints. We introduce chance-constrained formulations of the scheduling problem which require that the quality of service (QoS) constraints be met with high probability. We define a sample average approximation of this problem in a multiskill setting. We prove the convergence of the optimal solution of the sample-average problem to that of the original problem as the sample size increases. For the special case where all agents have all skills (a single group of agents), we design three simulation-based optimization methods for the sample problem. Given a starting solution, we increase the staffing in periods where the constraints are violated, and decrease the number of agents, as much as possible, in periods where a decrease is acceptable, provided that the constraints remain satisfied. For the call center models in our numerical experiments, these algorithms give good solutions, i.e., most constraints are satisfied, and we cannot remove any agent from any period without violating them. One advantage of these algorithms, compared with other methods, is that they are very easy to implement.
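The increase/decrease heuristic described above is easy to sketch against a stand-in QoS simulator (a toy Poisson oracle here, not the thesis's call-center models):

```python
# The add/remove staffing heuristic: raise staffing where the chance constraint is
# violated, then greedily remove agents wherever the constraint stays satisfied.
import numpy as np

rng = np.random.default_rng(5)
target = 0.9                                  # required P(QoS met) per period
load = np.array([12, 18, 25, 30, 22, 15])     # toy offered load per period

def qos_prob(staff, p, n_sims=4000):
    """Stand-in simulator: service OK when the random demand fits the staffing."""
    busy = rng.poisson(load[p], n_sims)
    return np.mean(busy <= staff[p])

staff = load.copy()                           # naive starting solution
for p in range(len(load)):                    # phase 1: repair violated periods
    while qos_prob(staff, p) < target:
        staff[p] += 1
for p in range(len(load)):                    # phase 2: greedy removals while feasible
    while staff[p] > 0:
        staff[p] -= 1
        if qos_prob(staff, p) < target:
            staff[p] += 1
            break
print("staffing per period:", staff)
```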
APA, Harvard, Vancouver, ISO, and other styles
34

"Chance-constrained optimization with stochastically dependent perturbations." 2012. http://library.cuhk.edu.hk/record=b5549428.

Full text
Abstract:
近年来,随着机会约束规划被广泛应用以及凸分析和概率论的新进展,如何有效的处理机会约束成为一个炙手可热的研究方向。其中,一个成功的解决方法就是考虑其安全可解近似,也就是说将机会约束转化成一组方便处理的确定性约束,并且保持原机会约束在新的约束下成立。目前这样的方法主要应用于带有独立分布的数据扰动的机会约束规划,或者已知扰动的协方差矩阵的情况。同时,带有相关数据扰动的机会约束下的锥不等式广泛应用于供应链管理、金融、控制以及信号处理等学科,而现有的优化理论却极少涵盖。
在这篇论文中我们主要研究机会约束下的线性矩阵不等式,并假设扰动分布不必相互独立,其仅有的相关性信息只由一系列子扰动的独立关系结构提供。通过推导矩阵值随机变量的大偏差上界,我们得出这一类条件约束的安全可解近似。我们随后考虑了基于条件风险价值度量的机会约束规划问题, 以及带多项式扰动的机会约束优化问题。另外,通过构造相应的鲁棒对等式的不确定集合,我们把机会约束规划转换成鲁棒优化问题。由于这种近似可以表示为一组线性矩阵不等式,因而可以使用现成的优化软件方便地求解。最后,我们把该安全可解近似方法运用到一个控制理论问题,以及一个带风险价值约束的投资组合优化问题中。
The wide applicability of chance-constrained programming, together with advances in convex optimization and probability theory, has created a surge of interest in finding efficient methods for processing chance constraints in recent years. One of the successes is the development of so-called safe tractable approximations of chance-constrained programs, where a chance constraint is replaced by a deterministic and efficiently computable inner approximation. Currently, such an approach applies mainly to chance-constrained linear inequalities, in which the data perturbations are either independent or define a known covariance matrix. However, its applicability to the case of chance-constrained conic inequalities with dependent perturbations--which arise in supply chain management, finance, control and signal processing applications--remains largely unexplored.
In this thesis, we consider the problem of processing chance-constrained affinely perturbed linear matrix inequalities, in which the perturbations are not necessarily independent, and the only information available about the dependence structure is a list of independence relations. Using large deviation bounds for matrix-valued random variables, we develop safe tractable approximations of those chance constraints. Extensions to the Matrix CVaR (Conditional Value-at-Risk) risk measure and to general polynomial perturbations are also provided. Furthermore, we show that the chance-constrained linear matrix inequality optimization problem can be converted to a robust optimization problem by constructing the uncertainty set of the corresponding robust counterpart. A nice feature of our approximations is that they can be expressed as systems of linear matrix inequalities, thus allowing them to be solved easily and efficiently by off-the-shelf optimization solvers. We also provide a numerical illustration of our constructions through a problem in control theory and a portfolio VaR (Value-at-Risk) optimization problem.
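As a hedged illustration of the safe-approximation idea in its simplest special case (a scalar linear inequality with independent Gaussian perturbations, not the dependent-perturbation LMI setting of the thesis), the chance constraint reduces to a second-order cone constraint that off-the-shelf solvers handle directly; all data are invented and cvxpy is assumed available:

```python
# For independent standard Gaussian perturbations xi_i, the chance constraint
#   P(a0.x + sum_i xi_i * (a_i.x) <= b) >= 1 - eps
# is equivalent to the second-order cone constraint a0.x + z * ||A x||_2 <= b.
import cvxpy as cp
import numpy as np
from scipy.stats import norm

n, m, eps = 4, 3, 0.05
rng = np.random.default_rng(6)
a0 = rng.uniform(0.5, 1.5, n)
A = rng.uniform(-0.3, 0.3, (m, n))      # rows: perturbation directions a_1..a_m
b, c = 10.0, rng.uniform(1, 2, n)

x = cp.Variable(n, nonneg=True)
z = norm.ppf(1 - eps)
prob = cp.Problem(cp.Maximize(c @ x), [a0 @ x + z * cp.norm(A @ x, 2) <= b])
prob.solve()
print("optimal value:", round(prob.value, 3), "x =", np.round(x.value, 3))
```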
Wang, Kuncheng.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 94-101).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Abstract
Acknowledgement
Chapter 1: Introduction
 1.1 Motivations and Philosophy
 1.2 Background
 1.3 Literature Review
 1.4 Contribution
Chapter 2: Preliminaries
 2.1 Probabilistic Inequalities
 2.2 Exact Proper Fractional Covers
  2.2.1 Exact Proper Fractional Cover of Quadratic Perturbations
Chapter 3: Large Deviations of Sums of Dependent Random Matrices
 3.1 The Matrix Exponential Function and Its Properties
 3.2 Main Theorem
Chapter 4: From Large Deviations to Chance-Constrained LMIs
 4.1 General Results
 4.2 Application to Chance-Constrained Quadratically Perturbed Linear Matrix Inequalities
 4.3 Bounding the Matrix Moment Generating Functions
 4.4 Iterative Improvement of the Proposed Approximations
Chapter 5: Computational Studies
 5.1 Application to Control Problems
 5.2 Application to Value-at-Risk Portfolio Optimization
Chapter 6: Chance-Constrained LMIs with CVaR Risk Measure
 6.1 Matrix CVaR Risk Measure
 6.2 Some Useful Inequalities
 6.3 From Matrix CVaR to Chance-Constrained LMIs
  6.3.1 Bound π¹(A₀, …, A_m)
  6.3.2 Bound π²(A₀, …, A_m)
  6.3.3 Bound π³(A₀, …, A_m)
  6.3.4 Convex Approximation of π^i(A₀, …, A_m)
Chapter 7: Extension to Polynomial Perturbations
 7.1 Decoupling Theory
 7.2 Safe Tractable Approximation by Second-Order Cone Programming
Chapter 8: Constructing an Uncertainty Set for Chance Constraints
 8.1 Problem Statement
 8.2 Fractional Cover for Quartic Perturbations
 8.3 Probabilistic Guarantees
  8.3.1 Probabilistic Bound Based on Large Deviations
 8.4 The Value of Ω for Bounds
 8.5 Computational Study
  8.5.1 Independent Standard Normal Perturbations
  8.5.2 Independent Bounded Quadratic Perturbations
Chapter 9: Conclusion
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
35

"Chance-constrained Optimization Models for Agricultural Seed Development and Selection." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54819.

Full text
Abstract:
Breeding seeds to include desirable traits (increased yield, drought/temperature resistance, etc.) is a growing and important method of establishing food security. However, besides breeder intuition, few decision-making tools exist that can provide breeders with credible evidence for deciding which seeds to progress to further stages of development. This thesis develops a chance-constrained knapsack optimization model that breeders can use to make better decisions about seed progression and to reduce the level of risk in their selections. The model's objective is to select seed varieties out of a larger pool and maximize the average yield of the "knapsack" subject to a risk criterion. Two models are created for different cases. The first is a risk-reduction model, which seeks to reduce the risk of a bad yield while still maximizing the total yield. The second model considers the possibility of adverse environmental effects and seeks to mitigate their negative impact on the total yield. In practice, breeders can use these models to better quantify uncertainty in selecting seed varieties.
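A minimal version of such a chance-constrained knapsack can be brute-forced on a toy instance: with independent Gaussian variety yields, the probabilistic yield requirement becomes a quantile-tightened linear condition. All yields and parameters below are invented for illustration:

```python
# Chance-constrained knapsack sketch: pick k varieties maximizing expected total yield
# subject to P(total yield >= threshold) >= 1 - eps, with independent Gaussian yields.
import itertools
import numpy as np
from scipy.stats import norm

mu = np.array([50, 42, 38, 55, 47, 40.0])     # expected yields per variety
sd = np.array([12, 5, 4, 15, 9, 6.0])         # yield standard deviations
k, threshold, eps = 3, 120.0, 0.1

best, best_mu = None, -np.inf
for subset in itertools.combinations(range(len(mu)), k):
    idx = list(subset)
    m, s = mu[idx].sum(), np.sqrt((sd[idx] ** 2).sum())
    # Gaussian chance constraint: P(N(m, s^2) >= threshold) >= 1 - eps
    if m - norm.ppf(1 - eps) * s >= threshold and m > best_mu:
        best, best_mu = subset, m
print("selected varieties:", best, "expected yield:", best_mu)
```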
Dissertation/Thesis
Masters Thesis Industrial Engineering 2019
APA, Harvard, Vancouver, ISO, and other styles
36

Bhadra, Sahely. "Learning Robust Support Vector Machine Classifiers With Uncertain Observations." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2475.

Full text
Abstract:
The central theme of the thesis is the study of linear and non-linear SVM formulations in the presence of uncertain observations. The main contribution of this thesis is to derive robust classifiers from partial knowledge of the underlying uncertainty. In the case of linear classification, a new bounding scheme based on the Bernstein inequality is proposed, which models interval-valued uncertainty in a less conservative fashion and hence is expected to generalize better than existing methods. Next, the potential of partial information, such as bounds on second-order moments along with support information, is explored. Bounds on second-order moments make the resulting classifiers robust to moment estimation errors. Uncertainty in the dataset leads to uncertainty in the kernel matrices. A novel distribution-free large deviation inequality is proposed which handles uncertainty in kernels through copositive programming in a chance-constraint setting. Although such formulations are NP-hard, in several cases of interest the problem reduces to a convex program. However, the independence assumption mentioned above is restrictive and may not always define a valid uncertain kernel. To alleviate this problem, an affine-set-based alternative is proposed, and using a robust optimization framework the resulting problem is posed as a minimax problem. In both the chance-constrained and the robust optimization (non-linear SVM) settings, mirror descent algorithm (MDA)-like procedures are applied.
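The interval-uncertainty robustification for linear classification can be sketched as follows: under a box of radius delta around each input, the worst-case margin subtracts delta times the l1-norm of w, giving a robust hinge loss trainable by subgradient descent. This toy (invented data; it illustrates the general robustification idea, not the thesis's Bernstein-based scheme):

```python
# A toy robust linear SVM under box (interval-valued) uncertainty: each feature may
# deviate by up to delta, so the worst-case margin is y*(w.x + b) - delta*||w||_1.
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = rng.standard_normal((n, 2)) + 1.5
y = np.ones(n)
X[n // 2:] -= 3.0                      # second class shifted to the opposite corner
y[n // 2:] = -1.0
delta, lam, lr = 0.2, 0.01, 0.05

w, b = np.zeros(2), 0.0
for epoch in range(300):
    margins = y * (X @ w + b) - delta * np.abs(w).sum()   # worst-case margins
    act = margins < 1.0                                   # nonzero-hinge examples
    gw = (lam * w - (y[act, None] * X[act]).sum(axis=0) / n
          + delta * np.sign(w) * act.mean())              # subgradient of the l1 term
    gb = -y[act].sum() / n
    w, b = w - lr * gw, b - lr * gb
print("w =", np.round(w, 3), " b =", round(b, 3),
      " train accuracy =", np.mean(np.sign(X @ w + b) == y))
```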
APA, Harvard, Vancouver, ISO, and other styles
37

Arellano-Garcia, Harvey [Verfasser]. "Chance constrained optimization of process systems under uncertainty / vorgelegt von Harvey Arellano-Garcia." 2006. http://d-nb.info/982225652/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Shanker, Akshay. "Papers in Modern Economic Growth Theory." PhD thesis, 2018. http://hdl.handle.net/1885/154321.

Full text
Abstract:
This thesis seeks to develop our understanding of two emerging themes in economic growth theory: constrained optima in heterogeneous agent economies with non-trivial wealth distributions, and the relationship between energy use and long-run growth. The first paper proves the existence of constrained optima in a growth model with incomplete insurance markets and idiosyncratic risk, also known as the Aiyagari-Huggett model. A natural way to formulate an optimal policy problem in the Aiyagari-Huggett model is by using a constrained planner; the constrained planner cannot complete markets, but must improve welfare subject to agents' budget constraints. Because the planner must respect infinitely many agents' idiosyncratic budget constraints, the state and control for the constrained planner are an infinite dimensional wealth distribution and policy function. The key mathematical challenge the proof overcomes is non-compactness of feasibility correspondences in the constrained planner's dynamic optimization problem, brought on by the infinite dimensional structure of the economy. The second paper computes constrained optima in an Aiyagari-Huggett model with labor-leisure choice. In an economy calibrated to U.S. wealth and income inequality, the paper finds the constrained planner increases aggregate capital and reduces aggregate hours worked. The resulting increase in wages and fall in interest rates shifts the distribution of consumption towards the consumption poor, since they rely more on labor income than capital income. However, in a constrained efficient allocation, only highly productive individuals increase saving, while less productive individuals reduce saving. Moreover, only the asset poor and less productive agents reduce work hours; the wealthy and highly productive increase work hours due to wealth effects. While total work hours fall, the increase in work hours by the most productive is sufficient to increase effective labor supply. The third paper turns to energy and growth. World and U.S. energy intensities have declined over the past century, falling, on average, approximately 1.2-1.5 percent a year. The decline has persisted through periods of non-increasing raw energy prices, suggesting it is in large part driven by autonomous factors, independent of price changes. In this paper I use a model of Schumpeterian endogenous growth with directed technical change to understand the autonomous decline in energy intensity and consider whether the decline will continue. In an economy with no state dependence, where existing knowledge does not make R&D more profitable, this paper shows that energy intensity continues to decline through profit-driven energy-augmenting innovation, albeit at a slower rate than output growth. In an economy with state dependence, however, energy intensity eventually stops declining because labor-augmenting innovation crowds out energy-augmenting innovation. In either case, energy use always increases as long as the raw price of energy stays constant.
APA, Harvard, Vancouver, ISO, and other styles
39

Jaiswal, Prateek. "Variational Inference for Data-driven Stochastic Programming." Thesis, 2021.

Find full text
Abstract:
Stochastic programs are standard models for decision-making under uncertainty and have been extensively studied in the operations research literature. In general, stochastic programming involves minimizing an expected cost function, where the expectation is with respect to fully specified stochastic models that quantify the aleatoric or `inherent' uncertainty in the decision-making problem. In practice, however, the stochastic models are unknown but can be estimated from data, introducing an additional epistemic uncertainty into the decision-making problem. The Bayesian framework provides a coherent way to quantify the epistemic uncertainty through the posterior distribution by combining prior beliefs of the decision-makers with the observed data. Bayesian methods have been used for data-driven decision-making in various applications such as inventory management, portfolio design, machine learning, optimal scheduling, and staffing.
Bayesian methods are challenging to implement, mainly because the posterior is computationally intractable, necessitating the computation of approximate posteriors. Broadly speaking, there are two classes of methods in the literature for approximate posterior inference. The first is sampling-based methods such as Markov chain Monte Carlo. Sampling-based methods are theoretically well understood, but they suffer from issues like high variance, poor scalability to high-dimensional problems, and complex diagnostics. Consequently, we propose to use optimization-based methods, collectively known as variational inference (VI), that use information projections to compute an approximation to the posterior. Empirical studies have shown that VI methods are computationally faster and easily scalable to higher-dimensional problems and large datasets. However, the theoretical guarantees of these methods are not well understood. Moreover, VI methods are empirically and theoretically less explored in the decision-theoretic setting.

In this thesis, we first propose a novel VI framework for risk-sensitive data-driven decision-making, which we call risk-sensitive variational Bayes (RSVB). In RSVB, we jointly compute a risk-sensitive approximation to the `true' posterior and the optimal decision by solving a minimax optimization problem. The RSVB framework includes the naive approach of first computing a VI approximation to the true posterior and then using it in place of the true posterior for decision-making. We show that the RSVB approximate posterior and the corresponding optimal value and decision rules are asymptotically consistent, and we also compute their rates of convergence. We illustrate our theoretical findings in both parametric and nonparametric settings with the help of three examples: the single- and multi-product newsvendor models and Gaussian process classification. Second, we present the Bayesian joint chance-constrained stochastic program (BJCCP) for modeling decision-making problems with epistemically uncertain constraints. We discover that using VI methods for posterior approximation can ensure the convexity of the feasible set in (BJCCP), unlike any sampling-based methods, and thus propose a VI approximation for (BJCCP). We also show that the optimal value computed using the VI approximation of (BJCCP) is statistically consistent. Moreover, we derive the rate of convergence of the optimal value and compute the rate at which a VI approximate solution of (BJCCP) is feasible under the true constraints. We demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queue. Finally, this thesis also contributes to the growing literature on understanding the statistical performance of VI methods. In particular, we establish the frequentist consistency of an approximate posterior computed using a well-known VI method that minimizes the Rényi divergence from the `true' posterior.
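A minimal sketch of plain VI conveys the mechanics the thesis builds on: fit a Gaussian q to the posterior of a Gaussian mean by stochastic gradient ascent on the ELBO with the reparameterization trick, and compare with the exact conjugate posterior. This is a toy conjugate model, not the RSVB framework:

```python
# Variational inference for the mean of N(theta, 1) data with a N(mu0, tau0^2) prior:
# maximize the ELBO over q = N(m, s^2) via reparameterization (theta = m + s*zeta).
import numpy as np

rng = np.random.default_rng(8)
y = rng.normal(2.0, 1.0, 50)              # data: true theta = 2
mu0, tau0, sig = 0.0, 10.0, 1.0           # prior params and known noise sd

def dlogp(theta):                          # d/dtheta [log-likelihood + log-prior]
    return np.sum(y - theta) / sig**2 + (mu0 - theta) / tau0**2

m, log_s, lr = 0.0, 0.0, 0.01
for step in range(3000):
    zeta = rng.standard_normal(16)         # Monte Carlo reparameterization samples
    theta = m + np.exp(log_s) * zeta
    g = np.array([dlogp(t) for t in theta])
    m += lr * g.mean()                                        # d ELBO / d m
    log_s += lr * ((g * zeta).mean() * np.exp(log_s) + 1.0)   # chain rule + entropy term

prec = len(y) / sig**2 + 1 / tau0**2       # exact conjugate posterior for comparison
m_ex, s_ex = (y.sum() / sig**2 + mu0 / tau0**2) / prec, np.sqrt(1 / prec)
print(f"VI: m={m:.3f}, s={np.exp(log_s):.3f};  exact: m={m_ex:.3f}, s={s_ex:.3f}")
```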
APA, Harvard, Vancouver, ISO, and other styles
40

Lorenz, Nicole. "Application of the Duality Theory: New Possibilities within the Theory of Risk Measures, Portfolio Optimization and Machine Learning." Doctoral thesis, 2011. https://monarch.qucosa.de/id/qucosa%3A19760.

Full text
Abstract:
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning. First we give some notations and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory, and give some generalized interior point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions using some regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some measures more general than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above. The generalized deviation measures introduced in the previous chapter can be used to formulate the portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions given in the second and third chapters, respectively. Analogous calculations are done for a portfolio optimization problem having single chance constraints, using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes the well-known Vapnik ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict the concrete compressive strength by using a special Support Vector Regression problem.
APA, Harvard, Vancouver, ISO, and other styles
41

Nikjah, Reza. "Performance evaluation and protocol design of fixed-rate and rateless coded relaying networks." PhD thesis, 2010. http://hdl.handle.net/10048/1674.

Full text
Abstract:
The importance of cooperative relaying communication in substituting for, or complementing, multiantenna systems is described, and a brief literature review is presented. Amplify-and-forward (AF) and decode-and-forward (DF) relaying are investigated and compared for a dual-hop relay channel. The optimal strategy, source and relay optimal power allocation, and maximum cooperative gain are determined for the relay channel. It is shown that while DF relaying is preferable to AF relaying for strong source-relay links, AF relaying leads to more gain for strong source-destination or relay-destination links. Superimposed and selection AF relaying are investigated for multirelay, dual-hop relaying. Selection AF relaying is shown to be globally strictly outage suboptimal. A necessary condition for the selection AF outage optimality, and an upper bound on the probability of this optimality are obtained. A near-optimal power allocation scheme is derived for superimposed AF relaying. The maximum instantaneous rates, outage probabilities, and average capacities of multirelay, dual-hop relaying schemes are obtained for superimposed, selection, and orthogonal DF relaying, each with parallel channel cooperation (PCC) or repetition-based cooperation (RC). It is observed that the PCC over RC gain can be as much as 4 dB for the outage probabilities and 8.5 dB for the average capacities. Increasing the number of relays deteriorates the capacity performance of orthogonal relaying, but improves the performances of the other schemes. The application of rateless codes to DF relaying networks is studied by investigating three single-relay protocols, one of which is new, and three novel, low complexity multirelay protocols for dual-hop networks. The maximum rate and minimum energy per bit and per symbol are derived for the single-relay protocols under a peak power and an average power constraint. The long-term average rate and energy per bit, and relay-to-source usage ratio (RSUR), a new performance measure, are evaluated for the single-relay and multirelay protocols. The new single-relay protocol is the most energy efficient single-relay scheme in most cases. All the multirelay protocols exhibit near-optimal rate performances, but are vastly different in the RSUR. Several future research directions for fixed-rate and rateless coded cooperative systems, and frameworks for comparing these systems, are suggested.
Communications
APA, Harvard, Vancouver, ISO, and other styles
