Dissertations / Theses on the topic 'Discrete optimal control problems'


Consult the top 50 dissertations / theses for your research on the topic 'Discrete optimal control problems.'


1

Woon, Siew Fang. "Global algorithms for nonlinear discrete optimization and discrete-valued optimal control problems." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/538.

Abstract:
Optimal control problems arise in many applications, such as in economics, finance, process engineering, and robotics. Some optimal control problems involve a control which takes values from a discrete set. These problems are known as discrete-valued optimal control problems. Most practical discrete-valued optimal control problems have multiple local minima and thus require global optimization methods to generate practically useful solutions. Due to the high complexity of these problems, metaheuristic-based global optimization techniques are usually required. One of the more recent global optimization tools in the area of discrete optimization is known as the discrete filled function method. The basic idea of the discrete filled function method is as follows. We choose an initial point and then perform a local search to find an initial local minimizer. Then, we construct an auxiliary function, called a discrete filled function, at this local minimizer. By minimizing the filled function, either an improved local minimizer is found or one of the vertices of the constraint set is reached. Otherwise, the parameters of the filled function are adjusted. This process is repeated until no better local minimizer of the corresponding filled function is found. The final local minimizer is then taken as an approximation of the global minimizer. While the main aim of this thesis is to present a new computational method for solving discrete-valued optimal control problems, the initial focus is on solving purely discrete optimization problems. We identify several discrete filled function techniques in the literature and perform a critical review, including comprehensive numerical tests. Once the best filled function method is identified, we propose and test several variations of the method with numerical examples. We then consider the task of determining near globally optimal solutions of discrete-valued optimal control problems.
The main difficulty in solving discrete-valued optimal control problems is that the control restraint set is discrete and hence not convex. Conventional computational optimal control techniques are designed for problems in which the control takes values in a connected set, such as an interval, and thus they cannot solve the problem directly. Furthermore, variable switching times are known to cause problems in the implementation of any numerical algorithm due to the variable location of discontinuities in the dynamics. Therefore, such problems cannot be solved using conventional computational approaches. We propose a time scaling transformation to overcome this difficulty, where a new discrete variable representing the switching sequence and a new variable controlling the switching times are introduced. The transformation results in an equivalent mixed discrete optimization problem. The transformed problem is then decomposed into a bi-level optimization problem, which is solved using a combination of the efficient discrete filled function method identified earlier and a computational optimal control technique based on the concept of control parameterization. To demonstrate the applicability of the proposed method, we solve two complex applied engineering problems involving a hybrid power system and a sensor scheduling task, respectively. Computational results indicate that this method is robust, reliable, and efficient: it successfully identifies a near-global solution for these complex applied optimization problems, despite the demonstrated presence of multiple local optima. We also compare the results obtained with other methods in the literature; numerical results confirm that the proposed method yields significant improvements over those obtained by other methods.
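The filled-function loop described in this abstract can be sketched compactly. The particular filled function below, G(x) = A·min{f(x) − f(x*), 0} − |x − x*|, is one classical construction from the discrete filled function literature, not necessarily the variant the thesis identifies as best; the test function, the neighborhood structure, and the parameter A are illustrative assumptions.

```python
def local_search(f, x, neighbors):
    """Greedy descent over a discrete neighborhood until no neighbor improves."""
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if f(y) < f(x):
                x, improved = y, True
                break
    return x

def filled(f, x_star, big_a=1000.0):
    """One classical discrete filled function at the local minimizer x_star:
    minimizing it pushes away from x_star until a point with f below f(x_star)
    is entered, where the large factor big_a dominates."""
    f_star = f(x_star)
    return lambda x: big_a * min(f(x) - f_star, 0.0) - abs(x - x_star)

def filled_function_search(f, x0, neighbors, max_rounds=20):
    """Alternate local searches on f and on its filled function, keeping the
    best local minimizer found; stop when the filled phase yields no progress."""
    best = local_search(f, x0, neighbors)
    for _ in range(max_rounds):
        escape = local_search(filled(f, best), best, neighbors)
        candidate = local_search(f, escape, neighbors)
        if f(candidate) < f(best):
            best = candidate
        else:
            break  # reached a vertex of the box or found no better basin
    return best

# Illustrative multimodal function on the integers in [-5, 5]:
# local minimum at x = 3, global minimum at x = -3.
def f(x):
    return x**4 - 16 * x**2 + 5 * x

def neighbors(x, lo=-5, hi=5):
    return [y for y in (x - 1, x + 1) if lo <= y <= hi]
```

Starting the plain local search at x0 = 5 gets trapped at the local minimizer x = 3; one filled-function round then escapes to the global minimizer x = −3.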
2

Rieck, Rainer Matthias [Verfasser], Florian [Akademischer Betreuer] [Gutachter] Holzapfel, and Matthias [Gutachter] Gerdts. "Discrete Controls and Constraints in Optimal Control Problems / Rainer Matthias Rieck ; Gutachter: Matthias Gerdts, Florian Holzapfel ; Betreuer: Florian Holzapfel." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1126644137/34.

3

Ferraço, Igor Breda. "Controle ótimo por modos deslizantes via função penalidade." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-09112011-161224/.

Abstract:
This work introduces a penalty function approach to the optimal sliding mode control problem for discrete-time systems. To solve this problem, an alternative matrix structure based on a weighted least squares problem with penalty functions is developed. Using this alternative matrix structure, the optimal sliding mode control law, the matrix Riccati equations, and the feedback gain matrix are obtained. The motivation for this new approach is to show that an alternative solution to the classical optimal sliding mode control problem can be obtained.
4

Hazell, Andrew. "Discrete-time optimal preview control." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/8472.

Abstract:
There are many situations in which one can preview future reference signals or future disturbances. Optimal preview control is concerned with designing controllers which use this preview to improve closed-loop performance. In this thesis a general preview control problem is presented which includes previewable disturbances, dynamic weighting functions, output feedback and non-previewable disturbances. It is then shown how a variety of problems may be cast as special cases of this general problem; of particular interest are the robust preview tracking problem and the problem of disturbance rejection with uncertainty in the previewed signal. The general preview problem is solved in both the H2 and H∞ settings. The H2 solution is a relatively straightforward extension of previously known results; however, our contribution is to provide a single framework that may be used as a reference work when tackling a variety of preview problems. We also provide some new analysis concerning the maximum possible reduction in closed-loop H2 norm which accrues from the addition of preview action. The solution to the H∞ problem involves a completely new approach to H∞ preview control, in which the structure of the associated Riccati equation is exploited in order to find an efficient algorithm for computing the optimal controller. The problem tackled here is also more generic than those previously appearing in the literature. The above theory finds obvious applications in the design of controllers for autonomous vehicles; however, a particular class of nonlinearities found in typical vehicle models presents additional problems. The final chapters are concerned with a generic framework for implementing vehicle preview controllers, and also a case study on preview control of a bicycle.
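As a toy illustration of why preview helps (and not of the thesis's H2/H∞ machinery), consider a scalar integrator tracking a step reference under a quadratic cost. A controller that sees the whole future reference solves one least-squares problem; a causal controller that assumes the current reference persists must react after the step. The dynamics, weights, and all function names here are assumptions made for this demonstration only.

```python
import numpy as np

def lqt_controls(x0, r, rho, a=1.0, b=1.0):
    """Open-loop controls minimizing sum_k (x_{k+1} - r_k)^2 + rho*u_k^2
    for x_{k+1} = a*x_k + b*u_k, with the whole reference r previewed.
    The state trajectory is affine in u (x = M u + c), so this is least squares."""
    T = len(r)
    M, c = np.zeros((T, T)), np.zeros(T)
    for k in range(T):
        c[k] = a ** (k + 1) * x0
        for j in range(k + 1):
            M[k, j] = a ** (k - j) * b
    H = M.T @ M + rho * np.eye(T)
    return np.linalg.solve(H, M.T @ (np.asarray(r, float) - c))

def cost(x0, u, r, rho, a=1.0, b=1.0):
    """Evaluate the tracking cost of a control sequence u."""
    x, J = x0, 0.0
    for k in range(len(r)):
        x = a * x + b * u[k]
        J += (x - r[k]) ** 2 + rho * u[k] ** 2
    return J

def causal_controls(x0, r, rho):
    """No preview: at step k the controller sees only r[k] and assumes it
    persists for the rest of the horizon (a=b=1 integrator assumed)."""
    T, u, x = len(r), [], float(x0)
    for k in range(T):
        u_k = lqt_controls(x, [r[k]] * (T - k), rho)[0]
        u.append(u_k)
        x = x + u_k
    return np.array(u)
```

Because the previewed controls are the exact minimizer of the tracking cost while the causal ones are merely feasible, the preview cost is strictly lower whenever the reference actually changes mid-horizon.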
5

Soler, Edilaine Martins. "Resolução do problema de fluxo de potência ótimo com variáveis de controle discretas." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-07042011-151716/.

Abstract:
The aim of solving the Optimal Power Flow problem is to determine the state of an electric power transmission system that optimizes a given measure of system performance while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. In most techniques existing in the literature for solving Optimal Power Flow problems, the discrete controls are modeled as continuous variables. These formulations are unrealistic, as some controls can only be adjusted in discrete steps. This study proposes a method for handling the discrete variables of the Optimal Power Flow problem. A function which penalizes the objective function when discrete variables assume non-discrete values is presented. By including this penalty function in the objective function, a nonlinear programming problem with only continuous variables is obtained, and the solution of this problem is equivalent to the solution of the original problem containing discrete and continuous variables. The nonlinear programming problem is solved by an interior-point method with filter line search. Numerical tests using the IEEE 14, 30, 118 and 300-bus test systems indicate that the proposed approach is efficient in the resolution of Optimal Power Flow problems.
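The penalization idea can be sketched in a few lines. The sin² penalty below is one common smooth choice that vanishes exactly on the discrete grid; the thesis's actual penalty function, and its interior-point solver with filter line search, are more elaborate, so the crude grid minimization here is purely an illustration of the snapping effect.

```python
import numpy as np

def discreteness_penalty(x, step=1.0):
    """Zero exactly on integer multiples of `step`, positive in between
    (a common smooth choice; the thesis's exact penalty may differ)."""
    return np.sin(np.pi * x / step) ** 2

def penalized_minimizer(f, mu, grid):
    """Crude global search on a fine grid: minimize f(x) + mu * penalty(x)."""
    vals = f(grid) + mu * discreteness_penalty(grid)
    return grid[np.argmin(vals)]
```

With μ = 0 the minimizer of (x − 2.3)² is the non-discrete value 2.3; with a large μ the penalized minimizer snaps to the admissible discrete value 2, mirroring how the penalized continuous problem recovers a discrete-feasible solution.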
6

Huang, Hongqing. "Algorithms for optimal feedback control problems." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1177101576.

7

Seywald, Hans. "Optimal control problems with switching points." Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-07282008-135220/.

8

Barth, Eric J. "Approximating discrete-time optimal control using a neural network." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/19009.

9

Cheung, Ka-chun. "Optimal asset allocation problems under the discrete-time regime-switching model." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B31311234.

10

Cheung, Ka-chun, and 張家俊. "Optimal asset allocation problems under the discrete-time regime-switching model." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B31311234.

11

Pfeiffer, Laurent. "Sensitivity analysis for optimal control problems. Stochastic optimal control with a probability constraint." Palaiseau, Ecole polytechnique, 2013. https://pastel.hal.science/docs/00/88/11/19/PDF/thesePfeiffer.pdf.

Abstract:
This thesis is divided into two parts. In the first part, we study constrained deterministic optimal control problems and sensitivity analysis issues from the point of view of abstract optimization. Second-order necessary and sufficient optimality conditions, which play an important role in sensitivity analysis, are also investigated in their own right. In this thesis, we are interested in strong solutions; roughly speaking, we use this generic term for controls that are locally optimal with respect to the L1-norm. By strengthening the notion of local optimality used, we expect to obtain stronger results. Two tools are used in an essential way: a relaxation technique, which consists in using several controls simultaneously, and a decomposition principle, which is a particular second-order Taylor expansion of the Lagrangian. Chapters 2 and 3 deal with second-order necessary and sufficient optimality conditions for strong solutions of problems with pure, mixed, and final-state constraints. In Chapter 4, we perform a sensitivity analysis for strong solutions of relaxed problems with final-state constraints. In Chapter 5, we perform a sensitivity analysis for a problem of nuclear energy production. In the second part, we study stochastic optimal control problems with a probability constraint. We study an approach by dynamic programming, in which the probability level is viewed as a supplementary state variable. In this framework, we show that the sensitivity of the value function with respect to the probability level is constant along optimal trajectories. This analysis allows us to design numerical methods for continuous-time problems. These results are presented in Chapter 6, in which we also study an application to asset-liability management.
12

Tian, Wenyi. "Numerical study on some inverse problems and optimal control problems." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/193.

Abstract:
In this thesis, we focus on the numerical study of some inverse problems and optimal control problems. In the first part, we consider some linear inverse problems with discontinuous or piecewise-constant solutions. We regularize these inverse problems by the total variation and then discretize the regularized problems with the finite element technique. The discretized problems are treated from the saddle-point perspective, and some primal-dual numerical schemes are proposed. We intensively investigate the convergence of these primal-dual schemes, establishing global convergence and estimating their worst-case convergence rates measured by iteration complexity. We test these schemes in several experiments and verify their efficiency numerically. In the second part, we consider finite difference and finite element discretizations of an optimal control problem governed by a time-fractional diffusion equation. The a priori error estimate of the discretized model is analyzed, and a projection gradient method is applied to iteratively solve the fully discretized surrogate. Some numerical experiments are conducted to verify the efficiency of the proposed method. Overall, the thesis is mainly inspired by recent advances in the optimization community, especially in the area of operator splitting methods for convex programming, and it can be regarded as a combination of contemporary optimization techniques with some relatively mature inverse and control problems.
Keywords: Total variation minimization, linear inverse problem, saddle-point problem, finite element method, primal-dual method, convergence rate, optimal control problem, time-fractional diffusion equation, projection gradient method.
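One primal-dual scheme of the kind investigated can be sketched for the simplest instance, 1-D total variation denoising, using the Chambolle–Pock iteration. The thesis treats more general linear inverse problems in a finite element setting, so the operator, step sizes, and iteration count below are illustrative assumptions rather than the thesis's actual schemes.

```python
import numpy as np

def tv_denoise_pd(b, lam, n_iter=500, tau=0.25, sigma=0.25):
    """Chambolle-Pock primal-dual iteration for the saddle-point form of
        min_x 0.5*||x - b||^2 + lam*||D x||_1,
    where D is the 1-D forward-difference operator. Since ||D||^2 <= 4,
    tau*sigma*||D||^2 < 1 holds for the default step sizes."""
    D = lambda x: x[1:] - x[:-1]                                       # forward differences
    Dt = lambda p: np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))  # adjoint of D
    x = b.astype(float).copy()
    x_bar = x.copy()
    p = np.zeros(len(b) - 1)
    for _ in range(n_iter):
        p = np.clip(p + sigma * D(x_bar), -lam, lam)      # dual ascent + projection
        x_new = (x - tau * Dt(p) + tau * b) / (1 + tau)   # prox of the quadratic data term
        x_bar = 2 * x_new - x                             # extrapolation step
        x = x_new
    return x
```

On a noisy step signal the iterate ends with a lower objective value and less total variation than the raw data, which is the qualitative behavior such schemes are designed to deliver.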
13

De, Pinho Maria Do Rosario Marques Fernandes Teixeira. "Optimal control problems with differential algebraic constraints." Thesis, Imperial College London, 1993. http://hdl.handle.net/10044/1/7392.

14

Tsang, Siu Chung. "Preconditioners for linear parabolic optimal control problems." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/464.

Abstract:
In this thesis, we consider computational methods for linear parabolic optimal control problems, in which we wish to minimize a cost functional subject to a parabolic partial differential equation (PDE) constraint. This type of problem arises in many fields of science and engineering. Since solving such parabolic PDE optimal control problems often leads to demanding computational cost and time, an effective algorithm is desired. In this research, we focus on distributed control problems. Three types of cost functional are considered: target-states problems, tracking problems, and all-time problems. Our major contribution is a preconditioner for each kind of problem, which accelerates our iterative method. Chapter 1 gives a brief introduction to the problems together with a literature review. Chapter 2 demonstrates how to derive the first-order optimality conditions from the parabolic optimal control problems, and then shows how to use the shooting method, together with the flexible generalized minimal residual method, to find the solution. Chapter 3 offers three preconditioners to enhance the shooting method for problems with a symmetric differential operator, and Chapter 4 proposes another three preconditioners to speed up the scheme for problems with a non-symmetric differential operator. Chapter 5 concludes and outlines future developments.
15

Guo, Chaoyang. "Some optimal control problems in mathematical finance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0022/NQ39269.pdf.

16

Birkett, N. R. C. "Optimal control problems in tidal power calculations." Thesis, University of Reading, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370809.

17

Lin, Xuchao. "Optimal control approaches for persistent monitoring problems." Thesis, Boston University, 2014. https://hdl.handle.net/2144/11119.

Abstract:
Thesis (Ph. D.)--Boston University
Persistent monitoring tasks arise when agents must monitor a dynamically changing environment which cannot be fully covered by a stationary team of available agents. Persistent monitoring differs from traditional coverage tasks due to the perpetual need to cover a changing environment, i.e., all areas of the mission space must be visited infinitely often. This dissertation presents an optimal control framework for persistent monitoring problems where the objective is to control the movement of multiple cooperating agents so as to minimize an uncertainty metric in a given mission space. In a one-dimensional mission space, it is shown that the optimal solution is for each agent to move at maximal speed from one switching point to the next, possibly waiting some time at each point before reversing its direction. Thus, the solution is reduced to a simpler parametric optimization problem: determining a sequence of switching locations and associated waiting times at these switching points for each agent. This amounts to a hybrid system which is analyzed using Infinitesimal Perturbation Analysis (IPA) to obtain a complete online solution through a gradient-based algorithm. IPA is a method that provides unbiased gradient estimates of performance metrics with respect to various controllable parameters in Discrete Event Systems (DES) as well as in Hybrid Systems (HS). It is also shown that the solution is robust with respect to the uncertainty model used, i.e., IPA provides an unbiased estimate of the gradient without any detailed knowledge of how uncertainty affects the mission space. In a two-dimensional mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the one-dimensional analysis. It is proved, however, that elliptical trajectories outperform linear ones. With this motivation, the dissertation formulates a parametric optimization problem to determine such trajectories.
It is again shown that the problem can be solved using IPA to obtain performance gradients online, yielding a complete and scalable solution. Since the solutions obtained are generally only locally optimal, a stochastic comparison algorithm is incorporated to derive globally optimal elliptical trajectories. The dissertation also approaches the problem by representing an agent trajectory in terms of general function families characterized by a set of parameters to be optimized. The approach is applied to the family of Lissajous functions as well as to a Fourier series representation of an agent trajectory. Numerical examples indicate that this scalable approach provides solutions that are near optimal relative to those obtained through a computationally intensive two-point boundary value problem (TPBVP) solver. Finally, the problem is tackled using centralized and decentralized Receding Horizon Control (RHC) algorithms, which dynamically determine the control for the agents by solving a sequence of optimization problems over a planning horizon and executing them over a shorter action horizon.
18

Sun, Yufei. "Chance-constrained optimization & optimal control problems." Thesis, Curtin University, 2015. http://hdl.handle.net/20.500.11937/183.

Abstract:
Four optimization or optimal control problems subject to probabilistic constraints are studied. The first identifies the optimal portfolio using a new probabilistic risk measure while maximizing return. The second explores a preventive maintenance scheduling problem while minimizing the total cost of operation. The third investigates how the asset allocation of a pension fund should change in the face of default risk. The fourth determines an optimal manpower schedule that minimizes the salary costs of the employees.
19

Silva, Francisco Jose. "Interior penalty approximation for optimal control problems. Optimality conditions in stochastic optimal control theory." Palaiseau, Ecole polytechnique, 2010. http://pastel.archives-ouvertes.fr/docs/00/54/22/95/PDF/tesisfjsilva.pdf.

Abstract:
This thesis is divided into two parts. In the first part, we consider deterministic optimal control problems and study interior approximations for two model problems with non-negativity constraints on the control. The first model is a quadratic optimal control problem governed by a nonautonomous affine ordinary differential equation. For a general class of interior penalty functions, we provide a first-order expansion of the penalized state and adjoint state around the state and adjoint state of the original problem. Our main argument relies on the following fact: if the optimal control satisfies strict complementarity conditions for its Hamiltonian, except on a set of times of null Lebesgue measure, the estimates for the penalized optimal control problem can be derived from the estimates of a related finite-dimensional problem. Our results provide three types of measure with which to analyze the penalization technique: error estimates for the control, error estimates for the state and the adjoint state, and error estimates for the value function. The second model is the optimal control problem of a semilinear elliptic PDE with a homogeneous Dirichlet boundary condition, where the control is distributed over the domain and constrained to be non-negative. Following the same approach as for the first model, we consider an associated family of penalized problems, whose solutions define a central path converging to the solution of the original problem. In this fashion, we are able to extend the results obtained in the ODE framework to the case of semilinear elliptic PDE constraints. In the second part of the thesis, we consider stochastic optimal control problems. We begin with the study of a stochastic linear quadratic problem with non-negativity constraints on the control, and we extend the error estimates for the approximation by logarithmic penalization. The proof is based on the stochastic Pontryagin principle and a duality argument. Next, we deal with a general stochastic optimal control problem with convex control constraints. Using the variational approach, we obtain first- and second-order expansions of the state and cost function around a local minimum. This analysis allows us to prove general first-order necessary conditions and, under a geometrical assumption on the constraint set, second-order necessary conditions are also established.
20

Barth, Eric J. "Approximating infinite horizon discrete-time optimal control using CMAC networks." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/19464.

21

Zhang, Xiaohong. "Optimal feedback control for nonlinear discrete systems and applications to optimal control of nonlinear periodic ordinary differential equations." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40185.

22

Weiser, Martin. "Function space complementarity methods for optimal control problems." [S.l. : s.n.], 2001. http://www.diss.fu-berlin.de/2001/189/index.html.

23

Achmatowicz, Richard L. (Richard Leon). "Optimal control problems on an infinite time horizon." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Hodge, D. J. "Problems of stochastic optimal control and yield management." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604120.

Full text
Abstract:
We present a collection of results in the broad area of stochastic optimization. Our basic model is that of dynamic resource allocation via customer acceptance control. We begin by modelling optimal acceptance control for a discrete-capacity service-on-demand system where customers arrive with differing demands and revenues. With strong restrictions on customer types we establish the optimal policy under general arrival processes. With weaker restrictions we establish monotonicity properties under stationary arrivals. We then look at a deterministic demand-curve approach to the same problem: resource allocation over time. We solve the problem of non-overlapping customer demands for a number of different demand curves. Our main work concerns selling perishable goods via customer acceptance control. We look at the optimal boundary between accepting and declining customers of different types. Existing papers demonstrate this threshold but fail to observe its surprisingly linear nature. We study the problem of finding the best linear threshold and see that, as a heuristic, it performs very well. Our study of linear thresholds gives rise to an interesting problem: sample-path analysis. The problem concerns the evolution of segments of the sample path in inventory-time space with regions of different downward drift. We succeed in fully characterising the studied sample-path segments, finding a remarkable dual use of an interesting identity. In the final chapters, we look at two further problems of stochastic optimization. The first is an innovative approach to modelling future demand, utilizing previous price requests. Using these dynamic demand estimations we demonstrate monotonicity properties of the optimal pricing policy. The second problem is the famous parking problem first introduced by Rényi in the fifties. We study a Markov chain queuing model for the availability of parking spaces. 
We derive the pay-offs from the class of very natural threshold policies, with respect to an ‘average distance from venue’ objective.
APA, Harvard, Vancouver, ISO, and other styles
25

Leung, Ho-yin, and 梁浩賢. "Stochastic models for optimal control problems with applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42841781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Riquelme, Victor. "Optimal control problems for bioremediation of water resources." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT290/document.

Full text
Abstract:
This thesis consists of two parts. In the first part we study minimal-time strategies for the treatment of pollution in large water volumes, such as lakes or natural reservoirs, using a single continuous bioreactor that operates at a quasi-steady state. The control consists of feeding the bioreactor from the resource, with the clean output returning to the resource at the same flow rate. We drop the hypothesis of homogeneity of the pollutant concentration in the water resource by proposing three spatially structured models. The first model considers two zones connected to each other by diffusion, with only one of them treated by the bioreactor. With the help of the Pontryagin Maximum Principle, we show that the optimal state feedback depends only on the measurements of pollution in the treated zone, with no influence of the volumes, the diffusion parameter, or the pollutant concentration in the untreated zone. We show that the effect of a recirculation pump that helps to mix the two zones is beneficial if operated at full speed. We prove that the family of minimal-time functions, parameterized by the diffusion parameter, is decreasing. The second model consists of two zones connected to each other by diffusion, each of them connected to the bioreactor. This is a problem with a non-convex velocity set, for which it is not possible to directly prove the existence of solutions. We overcome this difficulty and fully solve the problem by applying Pontryagin's principle to the associated problem with relaxed controls, obtaining a feedback control that treats the most polluted zone up to the homogenization of the two concentrations. We also obtain explicit bounds on the value function via Hamilton-Jacobi-Bellman techniques. We prove that the minimal-time function is nonmonotone as a function of the diffusion parameter. The third model consists of two zones connected to the bioreactor in series, with a recirculation pump between them. The control set depends on the state variable; we show that this constraint is active from some time up to the final time. We show that the optimal control consists of waiting until a time from which it is optimal to mix at maximum speed, and then to repollute the second zone with the concentration of the first zone. This is a non-intuitive result. Numerical simulations illustrate the theoretical results, and the optimal strategies obtained are tested on hydrodynamic models, proving to be good approximations of the solution of the inhomogeneous problem. The second part consists of the development and study of a stochastic model of a sequencing batch reactor. We obtain the model as a limit of birth and death processes. We establish the existence and uniqueness of solutions of the controlled equation, which does not satisfy the usual assumptions. We prove that under any control law the probability of extinction of the biomass is positive, which is a non-classical result. We study the problem of maximizing the probability of attaining a target pollution level, with the reactor at maximum capacity, prior to extinction. This problem does not satisfy any of the usual assumptions (non-Lipschitz dynamics, degenerate locally Hölder diffusion parameter, restricted state space, intersecting reach and avoid sets), so it must be studied in two stages: first, we prove the continuity of the uncontrolled cost function for initial conditions with maximum volume, and then we develop a dynamic programming principle for a modification of the original problem as an optimal control problem with final cost and without state constraints.
APA, Harvard, Vancouver, ISO, and other styles
27

Leung, Ho-yin. "Stochastic models for optimal control problems with applications." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42841781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Blanchard, Eunice Anita. "Exact penalty methods for nonlinear optimal control problems." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/1805.

Full text
Abstract:
This research comprised the development of solution techniques for three classes of non-standard optimal control problems, namely: optimal control problems with discontinuous objective functions arising in aquaculture operations; impulsive optimal control problems with minimum subsystem durations; and optimal control problems involving dual-mode hybrid systems with state-dependent switching conditions. The numerical algorithms developed involve an exact penalty approach to transform the constrained problem into an unconstrained problem, which is readily solvable by standard optimal control software.
APA, Harvard, Vancouver, ISO, and other styles
29

Riffer, Jennifer Lynn. "Time-optimal control of discrete-time systems with known waveform disturbances." [Milwaukee, Wis.] : e-Publications@Marquette, 2009. http://epublications.marquette.edu/theses_open/18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Granzotto, Mathieu. "Near-optimal control of discrete-time nonlinear systems with stability guarantees." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0301.

Full text
Abstract:
Artificial intelligence is rich in algorithms for optimal control. These generate commands for dynamical systems in order to minimize a given cost function, describing for example the energy of the system. These methods are applicable to large classes of nonlinear discrete-time systems and have proven themselves in many applications. Their application to control problems is therefore very promising. However, a fundamental question remains to be clarified for this purpose: that of stability. Indeed, these studies focus on optimality and in most cases ignore the stability of the controlled system, which is at the heart of control theory. The objective of my thesis is to study the stability of nonlinear systems controlled by such algorithms. The stakes are high, because this will create a new bridge between artificial intelligence and control theory. Stability informs us about the behaviour of the system as a function of time and guarantees its robustness in the presence of disturbances or model uncertainties. Algorithms in artificial intelligence focus on control optimality and do not exploit the properties of the system dynamics. Stability is desirable not only for the reasons mentioned above, but also because it can be exploited to improve these artificial intelligence algorithms. My research focuses on control techniques from (approximate) dynamic programming when the system model is known. For this purpose, I identify general conditions under which it is possible to guarantee the stability of the closed-loop system. In turn, once stability has been established, we can use it to drastically improve the optimality guarantees found in the literature. My work has focused on two main areas. The first concerns the value iteration approach, which is one of the pillars of approximate dynamic programming and is at the heart of many reinforcement learning algorithms. The second concerns the optimistic planning approach, applied to switched systems. I adapt the optimistic planning algorithm such that, under natural assumptions in a stabilization context, we obtain the stability of the closed-loop system whose inputs are generated by this modified algorithm, and drastically improve the optimality guarantees for the generated inputs.
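Value iteration, described above as a pillar of approximate dynamic programming, can be illustrated on a toy problem. The following sketch is not from the thesis; all dynamics, costs, and parameters are invented for illustration. It applies the Bellman iteration V_{k+1} = T V_k to a small deterministic discrete-time system and extracts the greedy (near-optimal, stabilizing) policy from the converged value function:

```python
import numpy as np

# Toy illustration of value iteration (not from the thesis): a deterministic
# discrete-time system on states 0..10 with goal state 5, discounted cost.
n_states, actions = 11, [-1, 0, 1]
gamma = 0.95                           # discount factor

def step(s, a):
    """Saturated integrator dynamics on the state grid."""
    return min(max(s + a, 0), n_states - 1)

def stage_cost(s, a):
    return (s - 5) ** 2 + 0.1 * a * a  # penalize distance to the goal and effort

# Bellman iteration V_{k+1} = T V_k; T is a gamma-contraction, so this converges.
V = np.zeros(n_states)
for _ in range(600):
    V_new = np.array([min(stage_cost(s, a) + gamma * V[step(s, a)]
                          for a in actions) for s in range(n_states)])
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# Greedy policy extracted from the converged value function.
policy = [min(actions, key=lambda a: stage_cost(s, a) + gamma * V[step(s, a)])
          for s in range(n_states)]
```

The closed loop under this greedy policy drives every state to the goal, which is the kind of stability property the thesis studies for value-iteration-based controllers.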
APA, Harvard, Vancouver, ISO, and other styles
31

Ng, Chi Kong. "Globally convergent and efficient methods for unconstrained discrete-time optimal control." HKBU Institutional Repository, 1998. http://repository.hkbu.edu.hk/etd_ra/149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yucel, Hamdullah. "Adaptive Discontinuous Galerkin Methods For Convectiondominated Optimal Control Problems." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614523/index.pdf.

Full text
Abstract:
Many real-life applications, such as the shape optimization of technological devices, the identification of parameters in environmental processes, and flow control problems, lead to optimization problems governed by systems of convection-diffusion partial differential equations (PDEs). When convection dominates diffusion, the solutions of these PDEs typically exhibit layers in small regions where the solution has large gradients. Hence, special numerical techniques are required that take the structure of the convection into account. The integration of discretization and optimization is important for the overall efficiency of the solution process. Discontinuous Galerkin (DG) methods have recently emerged as an alternative to finite difference, finite volume, and continuous finite element methods for solving wave-dominated problems such as convection-diffusion equations, since they possess higher accuracy. This thesis focuses on the analysis and application of DG methods for linear-quadratic convection-dominated optimal control problems. Because of the inconsistencies of standard stabilized methods such as streamline upwind Petrov-Galerkin (SUPG) on convection-diffusion optimal control problems, the discretize-then-optimize and optimize-then-discretize approaches do not commute. However, the upwind symmetric interior penalty Galerkin (SIPG) method leads to the same discrete optimality systems. Other DG methods, such as the nonsymmetric interior penalty Galerkin (NIPG) and incomplete interior penalty Galerkin (IIPG) methods, also yield the same discrete optimality systems when the penalization constant is taken large enough. We study a posteriori error estimates of the upwind SIPG method for distributed unconstrained and control-constrained optimal control problems. In convection-dominated optimal control problems with boundary and/or interior layers, oscillations are propagated in the downwind and upwind directions in the interior of the domain, due to the opposite signs of the convection terms in the state and adjoint equations. Hence, we use residual-based a posteriori error estimators to reduce these oscillations around the boundary and/or interior layers. Finally, the theoretical analysis is confirmed by several numerical examples with and without control constraints.
APA, Harvard, Vancouver, ISO, and other styles
33

Smith, Stephen Bevis. "Exact penalty function algorithms for constrained optimal control problems." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/7996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Benner, Peter, Enrique S. Quintana-Ortí, and Gregorio Quintana-Ortí. "Solving Linear-Quadratic Optimal Control Problems on Parallel Computers." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601704.

Full text
Abstract:
We discuss a parallel library of efficient algorithms for the solution of linear-quadratic optimal control problems involving large-scale systems with state-space dimension up to $O(10^4)$. We survey the numerical algorithms underlying the implementation of the chosen optimal control methods. The approaches considered here are based on invariant and deflating subspace techniques, and avoid the explicit solution of the associated algebraic Riccati equations in case of possible ill-conditioning. Still, our algorithms can also optionally compute the Riccati solution. The major computational task of finding spectral projectors onto the required invariant or deflating subspaces is implemented using iterative schemes for the sign and disk functions. Experimental results report the numerical accuracy and the parallel performance of our approach on a cluster of Intel Itanium-2 processors.
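The sign-function approach mentioned in the abstract admits a compact serial sketch. The example below is illustrative only (the library described is parallel and far more sophisticated, with scaling and ill-conditioning safeguards): it uses the Newton iteration Z <- (Z + Z^{-1})/2 to compute the matrix sign function of a small Hamiltonian matrix, from which the spectral projector onto the stable invariant subspace follows.

```python
import numpy as np

def matrix_sign(Z, tol=1e-12, max_iter=100):
    """Newton iteration Z <- (Z + Z^{-1})/2 for the matrix sign function.
    Converges quadratically when Z has no purely imaginary eigenvalues."""
    for _ in range(max_iter):
        Z_next = 0.5 * (Z + np.linalg.inv(Z))
        if np.linalg.norm(Z_next - Z, 1) <= tol * np.linalg.norm(Z, 1):
            return Z_next
        Z = Z_next
    return Z

# Hamiltonian matrix of a scalar LQR problem dx/dt = a*x + b*u, cost q*x^2 + r*u^2
# (toy data for illustration; its eigenvalues are real, so the iteration converges).
a, b, q, r = 1.0, 1.0, 2.0, 1.0
H = np.array([[a, -b * b / r],
              [-q, -a]])

S = matrix_sign(H)                  # S^2 = I by construction
P_stable = 0.5 * (np.eye(2) - S)    # spectral projector onto the stable invariant subspace
```

The range of `P_stable` spans the stable invariant subspace, from which the stabilizing Riccati solution can be read off without ever forming the Riccati equation explicitly, which is the idea the abstract describes.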
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Yu Chung Eugene. "Co-ordinated supply chain management and optimal control problems." online access from Digital Dissertation Consortium, 2007. http://libweb.cityu.edu.hk/cgi-bin/er/db/ddcdiss.pl?3299869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Xu. "First and second order conditions for optimal control problems." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Siska, David. "Numerical approximations of stochastic optimal stopping and control problems." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/2571.

Full text
Abstract:
We study numerical approximations for the payoff function of the stochastic optimal stopping and control problem. It is known that the payoff function of the optimal stopping and control problem corresponds to the solution of a normalized Bellman PDE. The principal aim of this thesis is to study the rate at which finite difference approximations, derived from the normalized Bellman PDE, converge to the payoff function of the optimal stopping and control problem. We do this by extending results of N. V. Krylov from the Bellman equation to the normalized Bellman equation. To the best of our knowledge, until recently, no results about the rate of convergence of finite difference approximations to Bellman equations were known. A major breakthrough was made by N. V. Krylov, who proved a rate of convergence of τ^(1/4) + h^(1/2), where τ and h are the step sizes in time and space respectively. We use the known idea of randomized stopping to give a direct proof that optimal stopping and control problems can be rewritten as pure optimal control problems by introducing a new control parameter and by allowing the reward and discounting functions to be unbounded in the control parameter. We extend important results of N. V. Krylov on the numerical solutions of Bellman equations to the normalized Bellman equations associated with the optimal stopping of controlled diffusion processes. We obtain the same rate of convergence of τ^(1/4) + h^(1/2). This rate of convergence holds for finite difference schemes defined on a grid on the whole space [0, T] × R^d, i.e. on a grid with infinitely many elements. This leads to the study of the localization error which arises when restricting the finite difference approximations to a cylindrical domain. As an application of our results, we consider an optimal stopping problem from mathematical finance: the pricing of an American put option on multiple assets. We prove the rate of convergence of τ^(1/4) + h^(1/2) for the finite difference approximations.
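The American put application can be illustrated in one dimension (the thesis treats multiple assets and proves convergence rates; the scheme and parameters below are a standard textbook sketch, not the thesis's scheme). An explicit finite-difference march on the Black-Scholes operator, projected at each step onto the exercise payoff, approximates the optimal stopping value:

```python
import numpy as np

# Explicit finite differences for a one-asset American put (illustrative parameters).
sigma, r, K, T = 0.2, 0.05, 1.0, 1.0
M, Nt = 200, 5000                       # space/time steps; the explicit scheme needs
s = np.linspace(0.0, 3.0, M + 1)        # dtau <~ ds^2 / (sigma * s_max)^2 for stability
ds, dtau = s[1] - s[0], T / Nt
payoff = np.maximum(K - s, 0.0)
v = payoff.copy()                       # value at maturity

for _ in range(Nt):                     # march in tau = T - t (backward in calendar time)
    v_ss = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / ds**2      # central second difference
    v_s = (v[2:] - v[:-2]) / (2.0 * ds)                  # central first difference
    v_new = v.copy()
    v_new[1:-1] = v[1:-1] + dtau * (0.5 * sigma**2 * s[1:-1]**2 * v_ss
                                    + r * s[1:-1] * v_s - r * v[1:-1])
    v_new[0], v_new[-1] = K, 0.0        # boundary values for the put
    v = np.maximum(v_new, payoff)       # early exercise: project onto the obstacle
```

The projection step is the discrete analogue of the obstacle (optimal stopping) constraint, and refining ds and dtau exhibits the convergence behaviour the thesis quantifies.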
APA, Harvard, Vancouver, ISO, and other styles
38

Wong, Man-kwun, and 黃文冠. "Some sensitivity results for time-delay optimal control problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sexton, Jennifer. "Optimal stopping and control problems using the Legendre transform." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/optimal-stopping-and-control-problems-using-the-legendre-transform(aa3ce911-2a1d-4d48-8096-367706c798c9).html.

Full text
Abstract:
This thesis addresses some aspects of the connection between convex analysis and optimal stopping and control problems. The first chapter contains a summary of the original contributions made in subsequent chapters. The second chapter uses elementary tools from convex analysis to establish an extension of the Legendre transformation. These results complement the results in [66] and are used to provide an alternative proof that Nash equilibria exist in optimal stopping games driven by diffusions. In the third chapter a ‘maximum principle’ for singular stochastic control is established using methods from convex analysis, which is a generalisation of the first order conditions derived in [18]. This ‘maximum principle’ is used to show that the solution to certain singular stochastic control problems can be expressed in terms of a family of associated optimal stopping problems. These results connect the first order conditions in [3] and the representation result originating in [5] to variational analysis. In particular, the Legendre transform is used to derive first order conditions for a class of constrained optimisation problems. Sections 2.1-2.4 and Example 30 have been accepted for publication in the ‘Journal of Convex Analysis’ as [75] subject to minor corrections. The suggested revision has been implemented in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
40

Fabrini, Giulia. "Numerical methods for optimal control problems with biological applications." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066096/document.

Full text
Abstract:
This thesis is divided in two parts: in the first part we focus on numerical methods for optimal control problems, in particular on the Dynamic Programming Principle and on Model Predictive Control (MPC); in the second part we present some applications of control techniques in biology. In the first part of the thesis, we consider the approximation of an optimal control problem with an infinite horizon, which combines a first step, based on MPC, that yields a fast but rough approximation of the optimal trajectory, and a second step in which we solve the Bellman equation in a neighborhood of the reference trajectory. In this way, we can reduce the size of the domain in which the Bellman equation must be solved, and so the computational complexity is reduced as well. The second topic of this thesis is the control of Level Set methods: we consider an optimal control problem in which the dynamics is given by the propagation of a one-dimensional graph, controlled by the normal velocity. A final state is fixed, and the aim is to reach this target trajectory while minimizing an appropriate cost functional. To apply the Dynamic Programming approach, we first reduce the size of the system using the Proper Orthogonal Decomposition. The second part of the thesis is devoted to the application of control methods in biology. We present a model, described by a partial differential equation, of the evolution of a population of tumor cells. We analyze the mathematical and biological features of the model. We then formulate an optimal control problem for this model, in which the control represents the quantity of drug administered, and solve it numerically.
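The MPC building block of the two-step scheme is a plain receding-horizon loop: at each step, minimize a finite-horizon cost over a candidate control sequence and apply only the first control. The sketch below is illustrative only (system matrices, horizon, and weights are invented, and a generic optimizer stands in for a dedicated MPC solver):

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon (MPC) loop on a discretized double integrator (toy data).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)

def horizon_cost(u_seq, x0, N):
    """Finite-horizon quadratic cost of a candidate control sequence."""
    x, cost = x0, 0.0
    for k in range(N):
        u = u_seq[k:k + 1]
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost + 10.0 * (x @ x)        # crude terminal penalty

def mpc_step(x, N=15):
    """Solve the finite-horizon problem, apply only the first control."""
    res = minimize(horizon_cost, np.zeros(N), args=(x, N))
    return res.x[0]

x = np.array([1.0, 0.0])
traj = [x.copy()]
for _ in range(50):
    u = mpc_step(x)
    x = A @ x + B @ np.array([u])
    traj.append(x.copy())
```

The closed-loop trajectory `traj` is the "fast but rough" reference around which, in the thesis's scheme, the Bellman equation would then be solved on a reduced domain.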
APA, Harvard, Vancouver, ISO, and other styles
41

Chai, Qinqin. "Computational methods for solving optimal industrial process control problems." Thesis, Curtin University, 2013. http://hdl.handle.net/20.500.11937/1227.

Full text
Abstract:
In this thesis, we develop new computational methods for three classes of dynamic optimization problems: (i) A parameter identification problem for a general nonlinear time-delay system; (ii) an optimal control problem involving systems with both input and output delays, and subject to continuous inequality state constraints; and (iii) a max-min optimal control problem arising in gradient elution chromatography.In the first problem, we consider a parameter identification problem involving a general nonlinear time-delay system, where the unknown time delays and system parameters are to be identified. This problem is posed as a dynamic optimization problem, where its cost function is to measure the discrepancy between predicted output and observed system output. The aim is to find unknown time-delays and system parameters such that the cost function is minimized. We develop a gradient-based computational method for solving this dynamic optimization problem. We show that the gradients of the cost function with respect to these unknown parameters can be obtained via solving a set of auxiliary time-delay differential systems from t = 0 to t = T. On this basis, the parameter identification problem can be solved as a nonlinear optimization problem and existing optimization techniques can be used. Two numerical examples are solved using the proposed computational method. Simulation results show that the proposed computational method is highly effective. In particular, the convergence is very fast even when the initial guess of the parameter values is far away from the optimal values.Unlike the first problem, in the second problem, we consider a time delay identification problem, where the input function for the nonlinear time-delay system is piecewise-constant. We assume that the time-delays—one involving the state variables and the other involving the input variables—are unknown and need to be estimated using experimental data. 
We also formulate the problem of estimating the unknown delays as a nonlinear optimization problem in which the cost function measures the least-squares error between predicted output and measured system output. This estimation problem can be viewed as a switched system optimal control problem with time-delays. We show that the gradient of the cost function with respect to the unknown state delay can be obtained via solving an auxiliary time-delay differential system. Furthermore, the gradient of the cost function with respect to the unknown input delay can be obtained via solving an auxiliary time-delay differential system with jump conditions at the delayed control switching time points. On this basis, we develop a heuristic computational algorithm for solving this problem using gradient-based optimization algorithms. Time-delays in two industrial processes are estimated using the proposed computational method. Simulation results show that the proposed computational method is highly effective. For the third problem, we consider a general optimal control problem governed by a system with input and output delays, and subject to continuous inequality constraints on the state and control. We focus on developing an effective computational method for solving this constrained time-delay optimal control problem. For this, the control parameterization technique is used to partition the time horizon [0, T] into N subintervals. Then, the control is approximated by a piecewise-constant function with possible discontinuities at the pre-assigned partition points, which are also called the switching time points. The heights of the piecewise-constant function are decision variables, which are to be chosen such that a given cost function is minimized. For the continuous inequality constraints on the state, we construct approximating smooth functions in integral form. 
The summation of these approximating smooth functions in integral form, called the constraint violation, is appended to the cost function to form a new augmented cost function. In this way, we obtain a sequence of approximate optimization problems subject only to boundedness constraints on the decision variables. The gradient of the augmented cost function is then derived. On this basis, we develop an effective computational method for solving the time-delay optimal control problem with continuous inequality constraints on the state and control by solving a sequence of approximate optimization problems, each of which can be solved as a nonlinear optimization problem using existing gradient-based optimization techniques. The proposed method is then used to solve a practical optimal control problem arising in the study of a real evaporation process. The results obtained are highly satisfactory, showing that the method is highly effective.

The fourth problem that we consider is a max-min optimal control problem arising in the study of gradient elution chromatography, where the manipulated variables in the chromatographic process are to be chosen such that the separation efficiency is maximized. This problem has three non-standard characteristics: (i) the objective function is nonsmooth; (ii) each state variable is defined over a different time horizon; and (iii) the order of the final times of the state variables, the so-called retention times, is not fixed. To solve this problem, we first introduce a set of auxiliary decision variables to govern the ordering of the retention times. The integer constraints on these auxiliary decision variables are approximated by continuous boundedness constraints. We then approximate the control by a piecewise constant function and apply a novel time-scaling transformation to map the retention times and control switching times to fixed points in a new time horizon.
The retention times and control switching times become decision variables in the new time horizon. In addition, the max-min objective is handled by converting the problem into a minimization problem subject to an additional constraint. On this basis, the optimal control problem is reduced to an approximate nonlinear optimization problem subject to smooth constraints, which is then solved using a recently developed exact penalty function method. The numerical results obtained show that this approach is highly effective.

Finally, some concluding remarks and suggestions for further study are given in the concluding chapter.
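As a rough illustration of the control parameterization and penalty scheme described in this abstract, the following sketch discretizes a hypothetical scalar system. The dynamics x' = -x + u, the quadratic cost, the state constraint, and all numerical values are invented stand-ins, not the systems treated in the thesis:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: minimize the integral of x^2 + u^2 over [0, T]
# for x' = -x + u, x(0) = 1, with the state constraint x(t) >= -0.2
# handled by a smooth penalty in integral form.
T, N, M = 1.0, 10, 200            # horizon, control segments, Euler steps
dt = T / M

def control_value(u_heights, t):
    # piecewise-constant control: segment k is active on [k*T/N, (k+1)*T/N)
    return u_heights[min(int(t / (T / N)), N - 1)]

def simulate(u_heights):
    x, traj = 1.0, [1.0]
    for i in range(M):
        u = control_value(u_heights, i * dt)
        x = x + dt * (-x + u)     # forward Euler step
        traj.append(x)
    return np.array(traj)

def augmented_cost(u_heights, rho=100.0):
    traj = simulate(u_heights)
    us = np.array([control_value(u_heights, i * dt) for i in range(M)])
    running = np.sum(traj[:-1] ** 2 + us ** 2) * dt
    # smooth penalty in integral form for the constraint x(t) >= -0.2
    violation = np.sum(np.maximum(-0.2 - traj[:-1], 0.0) ** 2) * dt
    return running + rho * violation

# only boundedness constraints remain on the decision variables
res = minimize(augmented_cost, np.zeros(N), method="L-BFGS-B",
               bounds=[(-2.0, 2.0)] * N)
```

The augmented cost appends the integral-form constraint violation to the running cost, so a standard bound-constrained optimizer can be applied directly to the segment heights, mirroring the sequence-of-approximate-problems idea in the abstract.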
APA, Harvard, Vancouver, ISO, and other styles
42

Loxton, Ryan Christopher. "Optimal control problems involving constrained, switched, and delay systems." Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/1479.

Full text
Abstract:
In this thesis, we develop numerical methods for solving five non-standard optimal control problems. The main idea of each method is to reformulate the optimal control problem as, or approximate it by, a nonlinear programming problem. The decision variables in this nonlinear programming problem influence its cost function (and constraints, if any) implicitly through the dynamic system; hence, deriving the gradients of the cost and constraint functions is a difficult task. A major focus of this thesis is on developing methods for computing these gradients, which can then be used in conjunction with a gradient-based optimization technique to solve the optimal control problem efficiently.

The first optimal control problem that we consider has nonlinear inequality constraints that depend on the state at two or more discrete time points. These time points are decision variables that, together with a control function, should be chosen in an optimal manner. To tackle this problem, we first approximate the control by a piecewise constant function whose values and switching times (the times at which it changes value) are decision variables. We then apply a novel time-scaling transformation that maps the switching times to fixed points in a new time horizon. This yields an approximate dynamic optimization problem with a finite number of decision variables. We develop a new algorithm, which involves integrating an auxiliary dynamic system forward in time, for computing the gradients of the cost and constraints in this approximate problem.

The second optimal control problem that we consider has nonlinear continuous inequality constraints. These constraints restrict both the state and the control at every point in the time horizon. As with the first problem, we approximate the control by a piecewise constant function and then transform the time variable.
This yields an approximate semi-infinite programming problem, which can be solved using a penalty function algorithm. A solution of this problem immediately furnishes a suboptimal control for the original optimal control problem. By repeatedly increasing the number of parameters used in the approximation, we can generate a sequence of suboptimal controls. Our main result shows that the cost of these suboptimal controls converges to the minimum cost.

The third optimal control problem that we consider is an applied problem from electrical engineering. Its aim is to determine an optimal operating scheme for a switched-capacitor DC-DC power converter, an electronic device that transforms one DC voltage into another by periodically switching between several circuit topologies. Specifically, the optimal control problem is to choose the times at which the topology switches occur so that the output voltage ripple is minimized and the load regulation is maximized. This problem is governed by a switched system with linear subsystems (each subsystem models one of the power converter's topologies), and its cost function is non-smooth. By introducing an auxiliary dynamic system and transforming the time variable (so that the topology switching times become fixed), we derive an equivalent semi-infinite programming problem. This problem, like the one that approximates the continuously-constrained optimal control problem, can be solved using a penalty function algorithm.

The fourth optimal control problem that we consider involves a general switched system, which includes the model of a switched-capacitor DC-DC power converter as a special case. This switched system evolves by switching between several subsystems of nonlinear ordinary differential equations. Furthermore, each subsystem switch is accompanied by an instantaneous change in the state.
These instantaneous changes, so-called state jumps, are influenced by control variables that, together with the subsystem switching times, should be selected in an optimal manner. As with the previous optimal control problems, we tackle this problem by transforming the time variable to obtain an equivalent problem in which the switching times are fixed. However, the functions governing the state jumps in this new problem are discontinuous. To overcome this difficulty, we introduce an approximate problem whose state jumps are governed by smooth functions. This approximate problem can be solved using a nonlinear programming algorithm, and we prove an important convergence result that links its solution with the solution of the original problem.

The final optimal control problem that we consider is a parameter identification problem. Its aim is to use given experimental data to identify unknown state-delays in a nonlinear delay-differential system. More precisely, the optimal control problem involves choosing the state-delays to minimize a cost function measuring the discrepancy between predicted and observed system output. We show that the gradient of this cost function can be computed by solving an auxiliary delay-differential system. On the basis of this result, the optimal control problem can be formulated, and hence solved, as a standard nonlinear programming problem.
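The time-scaling idea recurring in this abstract, mapping variable switching times to fixed points so that segment durations become ordinary decision variables, can be sketched as follows. The dynamics, target value, segment values, and penalty weight are hypothetical choices for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch of the time-scaling transformation: the durations
# theta_k of each control segment become decision variables, and the
# dynamics are integrated on a fixed grid in the new time variable s,
# where dt/ds = theta_k on segment k.
def simulate(theta, u_vals, x0=1.0, steps_per_seg=50):
    x = x0
    for th, u in zip(theta, u_vals):
        ds = 1.0 / steps_per_seg
        for _ in range(steps_per_seg):
            x = x + ds * th * (-x + u)   # Euler step on scaled dynamics
    return x

u_vals = [1.0, -1.0]                     # fixed segment values (invented)
target = 0.3                             # desired terminal state (invented)

def cost(theta):
    # the durations must sum to the horizon T = 2; enforced via penalty
    return (simulate(theta, u_vals) - target) ** 2 \
        + 10.0 * (sum(theta) - 2.0) ** 2

res = minimize(cost, [1.0, 1.0], bounds=[(0.0, 2.0)] * 2)
```

Because the switching times appear only through the durations `theta`, the integration grid never changes, which is what makes gradient-based optimization over switching times tractable.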
APA, Harvard, Vancouver, ISO, and other styles
43

Winkler, Gunter. "Control constrained optimal control problems in non-convex three dimensional polyhedral domains." Doctoral thesis, Universitätsbibliothek Chemnitz, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800626.

Full text
Abstract:
This work addresses a specific issue in the numerical analysis of optimal control problems. We investigate a linear-quadratic optimal control problem governed by a partial differential equation on a three-dimensional non-convex domain. Building on efficient solution methods for the partial differential equation, an algorithm known from control theory is applied. The main objectives are to prove that there is no degradation in efficiency and to verify this result by numerical experiments. We describe a solution method with second-order convergence, although the intermediate control approximations are piecewise constant functions. This superconvergence property is obtained from a special projection operator which generates a piecewise constant approximation with a supercloseness property, from a sufficiently graded mesh which compensates for the singularities introduced by the non-convex domain, and from a discretization condition which eliminates some pathological cases. Both isotropic and anisotropic discretizations are investigated, and similar superconvergence properties are proven. A model problem is presented, and important results from the regularity theory of solutions to partial differential equations in non-convex domains are collected in the first chapters. Then a collection of statements from finite element analysis and corresponding numerical solution strategies is given. Here we present newly developed tools concerning error estimates and projections into finite element spaces; these tools are necessary to achieve the main results. Known fundamental statements from control theory are applied to the given model problems, and certain conditions on the discretization are defined. Finally, we describe the implementation used to solve the model problems and present all computed results.
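The supercloseness phenomenon underlying such results can be illustrated numerically: the cell average (the L2 projection onto piecewise constants) of a smooth function agrees with its midpoint value to second order in the mesh width, even though the piecewise constant approximation itself is only first-order accurate. The function, interval, and meshes below are invented for illustration:

```python
import numpy as np

# Supercloseness sketch: compare exact cell averages of sin(x) on a
# uniform mesh of [0, 1] with the midpoint values of sin(x).
def projection_midpoint_gap(n):
    edges = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    mids = 0.5 * (edges[:-1] + edges[1:])
    # exact cell average of sin over [a, b]: (cos(a) - cos(b)) / h
    averages = (np.cos(edges[:-1]) - np.cos(edges[1:])) / h
    return np.max(np.abs(averages - np.sin(mids)))

# halving h should shrink the gap by roughly a factor of four (O(h^2))
gap_coarse = projection_midpoint_gap(50)
gap_fine = projection_midpoint_gap(100)
ratio = gap_coarse / gap_fine
```

The observed ratio near 4 under mesh refinement is the second-order behavior; in the thesis this kind of supercloseness is established for a special projection operator on graded meshes rather than for this toy uniform-mesh example.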
APA, Harvard, Vancouver, ISO, and other styles
44

Foley, Dawn Christine. "Short horizon optimal control of nonlinear systems via discrete state space realization." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/16803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Olofsson, Marcus. "Optimal Switching Problems and Related Equations." Doctoral thesis, Uppsala universitet, Analys och sannolikhetsteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247298.

Full text
Abstract:
This thesis consists of five scientific papers dealing with equations related to the optimal switching problem, mainly backward stochastic differential equations and variational inequalities. Besides the scientific papers, the thesis contains an introduction to the optimal switching problem and a brief outline of possible topics for future research. Paper I concerns systems of variational inequalities with operators of Kolmogorov type. We prove a comparison principle for sub- and supersolutions and prove the existence of a solution as the limit of solutions to iteratively defined interconnected obstacle problems. Furthermore, we use regularity results for a related obstacle problem to prove Hölder continuity of this solution. Paper II deals with systems of variational inequalities in which the operator is of non-local type. Using a maximum principle adapted to this non-local setting, we prove a comparison principle for sub- and supersolutions. Existence of a solution is proved using this comparison principle and Perron's method. In Paper III we study backward stochastic differential equations in which the solutions are reflected to stay inside a time-dependent domain. The driving process is of Wiener-Poisson type, allowing for jumps. By a penalization technique, we prove existence of a solution when the bounding domain has convex and non-increasing time slices. Uniqueness is proved by an argument based on Itô's formula. Papers IV and V concern optimal switching problems under incomplete information. In Paper IV, we construct an entirely simulation-based numerical scheme to calculate the value function of such problems. We prove the convergence of this scheme when the underlying processes fit into the framework of Kalman-Bucy filtering. Paper V contains a deterministic approach to incomplete-information optimal switching problems. We study a simplified setting and show that the problem can be reduced to a full-information optimal switching problem.
Furthermore, we prove that the value of information is positive and that the value function under incomplete information converges to that under full information when the noise in the observation vanishes.
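For readers unfamiliar with the problem class, a toy discrete-time analogue of an optimal switching problem can be solved by backward induction: in each period the controller either keeps the current mode or pays a switching cost to change modes. The two modes, per-period payoffs, horizon, and switching cost below are invented for illustration (the thesis treats the continuous-time stochastic problem):

```python
# Toy two-mode optimal switching problem solved by backward induction.
N, C = 10, 0.5                     # horizon length and switching cost
payoff = {0: 0.2, 1: 1.0}          # per-period running payoff of each mode

V = {m: 0.0 for m in (0, 1)}       # terminal value in each mode
for t in reversed(range(N)):
    # keep the current mode, or switch and pay C, whichever is better
    V = {m: payoff[m] + max(V[m], V[1 - m] - C) for m in (0, 1)}

value_from_mode_0 = V[0]           # switch early iff the gain exceeds C
```

In this deterministic toy the value function is just two numbers per period; the papers study the stochastic counterpart, where the analogous structure appears as a system of interconnected obstacle problems or reflected backward stochastic differential equations.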
APA, Harvard, Vancouver, ISO, and other styles
46

Sternberg, Julia. "Memory efficient approaches of second order for optimal control problems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1135250699292-11488.

Full text
Abstract:
Consider a time-dependent optimal control problem where the state evolution is described by an initial value problem. There is a variety of numerical methods for solving such problems; the so-called indirect approach is considered in detail in this thesis. Indirect methods solve the decoupled boundary value problems resulting from the necessary conditions for the optimal control problem. The so-called Pantoja method describes a computationally efficient stage-wise construction of the Newton direction for the discrete-time optimal control problem. There are many relationships between multiple shooting techniques and the Pantoja method, which are investigated in this thesis. In this context, the equivalence of the Pantoja method and the multiple shooting method of Riccati type is shown. Moreover, the Pantoja method is extended to the case where the state equations are discretized using an implicit numerical method. Furthermore, the concept of symplecticness and Hamiltonian systems is introduced. In this regard, a suitable numerical method is presented which can be applied to unconstrained optimal control problems, and it is proved that this method is symplectic. The iterative solution of optimal control problems in ordinary differential equations by the Pantoja method or Riccati-equivalent methods leads to a succession of triple sweeps through the discretized time interval. The second (adjoint) sweep relies on information from the first (original) sweep, and the third (final) sweep depends on both of them. Typically, the steps of the adjoint sweep involve more operations and require more storage than the other two. The key difficulty is the enormous amount of memory required for the implementation of these methods if all states throughout the forward and adjoint sweeps are stored. One of the goals of this thesis is to present checkpointing techniques for a memory-reduced implementation of these methods.
For this purpose, the well-known concept of checkpointing has to be extended to nested checkpointing for multiple traversals. The proposed nested reversal schedules drastically reduce the required spatial complexity. The schedules are designed to minimize the overall execution time given a certain total amount of storage for the checkpoints. The proposed scheduling schemes are applied to the memory-reduced implementation of the optimal control problem of laser surface hardening and other optimal control problems.
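The memory/compute trade-off behind such schedules can be illustrated with a minimal uniform-checkpointing sketch for a scalar recurrence: store the state only every `c` steps during the forward sweep, then recompute each segment from its checkpoint during the adjoint sweep. This is a deliberate simplification of the nested reversal schedules the thesis develops; the recurrence is invented:

```python
import numpy as np

# Toy forward map x_{i+1} = sin(x_i) and its derivative.
def f(x):  return np.sin(x)
def df(x): return np.cos(x)

def adjoint_full(x0, n):
    # full storage: keep every state, then sweep backwards once
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    lam = 1.0                          # seed: d x_n / d x_n
    for i in reversed(range(n)):
        lam *= df(xs[i])
    return lam                         # d x_n / d x_0

def adjoint_checkpointed(x0, n, c):
    # forward sweep: keep only every c-th state as a checkpoint
    ckpts, x = {0: x0}, x0
    for i in range(n):
        x = f(x)
        if (i + 1) % c == 0:
            ckpts[i + 1] = x
    lam = 1.0
    # adjoint sweep: recompute each segment from its checkpoint
    for start in reversed(range(0, n, c)):
        seg = [ckpts[start]]
        for _ in range(min(c, n - start)):
            seg.append(f(seg[-1]))
        for i in reversed(range(len(seg) - 1)):
            lam *= df(seg[i])
    return lam
```

The checkpointed variant stores O(n/c) states instead of O(n) at the price of one extra forward pass; the nested schedules in the thesis optimize this trade-off over the triple sweeps described above.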
APA, Harvard, Vancouver, ISO, and other styles
47

Frey, Michael [Verfasser]. "Shape Calculus Applied to Elliptic Optimal Control Problems / Michael Frey." Bayreuth : Universität Bayreuth, 2012. http://d-nb.info/1059412780/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Kouzoupis, Dimitris [Verfasser], and Moritz [Akademischer Betreuer] Diehl. "Structure-exploiting numerical methods for tree-sparse optimal control problems." Freiburg : Universität, 2019. http://d-nb.info/1191689549/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Andrews, Timothy Paul. "An existence theory for optimal control problems with time delays." Thesis, Imperial College London, 1989. http://hdl.handle.net/10044/1/47332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Voisei, Mircea D. "First-Order Necessary Optimality Conditions for Nonlinear Optimal Control Problems." Ohio University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1091111473.

Full text
APA, Harvard, Vancouver, ISO, and other styles