Dissertations on the topic "Discrete optimal control problems"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "Discrete optimal control problems".
Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its online abstract, provided these details are included in the work's metadata.
Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.
Woon, Siew Fang. "Global algorithms for nonlinear discrete optimization and discrete-valued optimal control problems." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/538.
Rieck, Rainer Matthias [Verfasser], Florian [Akademischer Betreuer] [Gutachter] Holzapfel, and Matthias [Gutachter] Gerdts. "Discrete Controls and Constraints in Optimal Control Problems / Rainer Matthias Rieck ; Gutachter: Matthias Gerdts, Florian Holzapfel ; Betreuer: Florian Holzapfel." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1126644137/34.
Ferraço, Igor Breda. "Controle ótimo por modos deslizantes via função penalidade." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-09112011-161224/.
This work introduces a penalty function approach to the optimal sliding mode control problem for discrete-time systems. To solve this problem, an alternative matrix structure based on a weighted least squares penalty function formulation is developed. Using this alternative matrix structure, the optimal sliding mode control law, the matrix Riccati equations, and the feedback gain are obtained. The motivation for this new approach is to show that an alternative solution to the classic optimal sliding mode control problem is possible.
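The matrix Riccati equations and feedback gains mentioned in the abstract above are central objects in discrete-time optimal control. As a generic illustration only (a standard discrete-time LQR recursion, not the thesis's sliding-mode formulation), the gain can be obtained by iterating the Riccati equation to a fixed point:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    # Iterate the discrete-time Riccati equation
    #   P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
    # to a fixed point, then return the feedback gain K for u = -K x.
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Toy example: discretized double integrator, sampling time 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K, P = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

With this gain, the closed-loop matrix A - B K has spectral radius below one, i.e. the controlled system is asymptotically stable.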
Hazell, Andrew. "Discrete-time optimal preview control." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/8472.
Повний текст джерелаSoler, Edilaine Martins. "Resolução do problema de fluxo de potência ótimo com variáveis de controle discretas." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-07042011-151716/.
The aim of solving the Optimal Power Flow problem is to determine the state of an electric power transmission system that optimizes a given measure of system performance while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. In most solution techniques in the literature, the discrete controls are modeled as continuous variables. These formulations are unrealistic, as some controls can only be set to values taken from a given set of discrete values. This study proposes a method for handling the discrete variables of the Optimal Power Flow problem. A function that penalizes the objective function when discrete variables assume non-discrete values is presented. By including this penalty function in the objective function, a nonlinear programming problem with only continuous variables is obtained, and its solution is equivalent to the solution of the initial problem containing both discrete and continuous variables. The nonlinear programming problem is solved by an interior-point method with filter line search. Numerical tests using the IEEE 14-, 30-, 118-, and 300-bus test systems indicate that the proposed approach is efficient in the resolution of OPF problems.
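The penalty idea described above can be sketched generically: a function that vanishes exactly on the discrete grid and is positive elsewhere is added to the objective, so minimising the penalised continuous problem drives the discrete controls toward allowed values. A sinusoidal penalty is a common choice in the OPF literature and is shown here purely as an illustration; the thesis's exact penalty function may differ.

```python
import numpy as np

def discreteness_penalty(u, step=1.0):
    # Zero exactly when every entry of u is an integer multiple of `step`
    # (e.g. a transformer tap position), strictly positive otherwise.
    u = np.asarray(u, dtype=float)
    return float(np.sum(np.sin(np.pi * u / step) ** 2))

def penalized_objective(f, u, c=10.0, step=1.0):
    # Augment a continuous objective f with the discreteness penalty;
    # for sufficiently large c, minimisers settle on the discrete grid.
    return f(u) + c * discreteness_penalty(u, step)
```

For example, `discreteness_penalty([0.0, 2.0, 5.0])` is (numerically) zero, while `discreteness_penalty([0.5])` equals 1, so off-grid values are penalised.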
Huang, Hongqing. "Algorithms for optimal feedback control problems." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1177101576.
Seywald, Hans. "Optimal control problems with switching points." Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-07282008-135220/.
Barth, Eric J. "Approximating discrete-time optimal control using a neural network." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/19009.
Повний текст джерелаCheung, Ka-chun. "Optimal asset allocation problems under the discrete-time regime-switching model." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B31311234.
Повний текст джерелаCheung, Ka-chun, and 張家俊. "Optimal asset allocation problems under the discrete-time regime-switching model." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B31311234.
Повний текст джерелаPfeiffer, Laurent. "Sensitivity analysis for optimal control problems. Stochastic optimal control with a probability constraint." Palaiseau, Ecole polytechnique, 2013. https://pastel.hal.science/docs/00/88/11/19/PDF/thesePfeiffer.pdf.
This thesis is divided into two parts. In the first part, we study constrained deterministic optimal control problems and sensitivity analysis issues from the point of view of abstract optimization. Second-order necessary and sufficient optimality conditions, which play an important role in sensitivity analysis, are also investigated. In this thesis, we are interested in strong solutions, a generic term we use for controls that are, roughly speaking, locally optimal for the L1-norm. We use two essential tools: a relaxation technique, which consists in using several controls simultaneously, and a decomposition principle, which is a particular second-order Taylor expansion of the Lagrangian. Chapters 2 and 3 deal with second-order necessary and sufficient optimality conditions for strong solutions of problems with pure, mixed, and final-state constraints. In Chapter 4, we perform a sensitivity analysis for strong solutions of relaxed problems with final-state constraints. In Chapter 5, we perform a sensitivity analysis for a problem of nuclear energy production. In the second part of the thesis, we study stochastic optimal control problems with a probability constraint. We study an approach by dynamic programming, in which the probability level is a supplementary state variable. In this framework, we show that the sensitivity of the value function with respect to the probability level is constant along optimal trajectories. We use this analysis to design numerical schemes for continuous-time problems. These results are presented in Chapter 6, in which we also study an application to asset-liability management.
Tian, Wenyi. "Numerical study on some inverse problems and optimal control problems." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/193.
Повний текст джерелаDe, Pinho Maria Do Rosario Marques Fernandes Teixeira. "Optimal control problems with differential algebraic constraints." Thesis, Imperial College London, 1993. http://hdl.handle.net/10044/1/7392.
Повний текст джерелаTsang, Siu Chung. "Preconditioners for linear parabolic optimal control problems." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/464.
Повний текст джерелаGuo, Chaoyang. "Some optimal control problems in mathematical finance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0022/NQ39269.pdf.
Повний текст джерелаBirkett, N. R. C. "Optimal control problems in tidal power calculations." Thesis, University of Reading, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370809.
Повний текст джерелаLin, Xuchao. "Optimal control approaches for persistent monitoring problems." Thesis, Boston University, 2014. https://hdl.handle.net/2144/11119.
Persistent monitoring tasks arise when agents must monitor a dynamically changing environment that cannot be fully covered by a stationary team of available agents. They differ from traditional coverage tasks in the perpetual need to cover a changing environment, i.e., all areas of the mission space must be visited infinitely often. This dissertation presents an optimal control framework for persistent monitoring problems in which the objective is to control the movement of multiple cooperating agents so as to minimize an uncertainty metric in a given mission space. In a one-dimensional mission space, it is shown that the optimal solution is for each agent to move at maximal speed from one switching point to the next, possibly waiting some time at each point before reversing its direction. Thus, the solution reduces to a simpler parametric optimization problem: determining a sequence of switching locations and associated waiting times at these switching points for each agent. This amounts to a hybrid system, which is analyzed using Infinitesimal Perturbation Analysis (IPA) to obtain a complete online solution through a gradient-based algorithm. IPA is a method that provides unbiased gradient estimates of performance metrics with respect to various controllable parameters in Discrete Event Systems (DES) as well as in Hybrid Systems (HS). It is also shown that the solution is robust with respect to the uncertainty model used, i.e., IPA provides an unbiased estimate of the gradient without any detailed knowledge of how uncertainty affects the mission space. In a two-dimensional mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the one-dimensional analysis. It is proved, however, that elliptical trajectories outperform linear ones. With this motivation, the dissertation formulates a parametric optimization problem to determine such trajectories.
It is again shown that the problem can be solved using IPA to obtain performance gradients online and yield a complete and scalable solution. Since the solutions obtained are generally locally optimal, a stochastic comparison algorithm is incorporated to derive globally optimal elliptical trajectories. The dissertation also approaches the problem by representing an agent trajectory in terms of general function families characterized by a set of parameters to be optimized. The approach is applied to the family of Lissajous functions as well as to a Fourier series representation of an agent trajectory. Numerical examples indicate that this scalable approach provides solutions that are near optimal relative to those obtained through a computationally intensive two-point boundary value problem (TPBVP) solver. Finally, the problem is tackled using centralized and decentralized Receding Horizon Control (RHC) algorithms, which dynamically determine the control for agents by solving a sequence of optimization problems over a planning horizon and executing them over a shorter action horizon.
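IPA, as used above, supplies unbiased sample-path gradient estimates that can drive a stochastic gradient scheme without detailed knowledge of the underlying randomness. As a toy illustration of that idea only (a classic newsvendor-style cost, not the dissertation's monitoring model), the hypothetical objective and step size below are assumptions:

```python
import random

def ipa_sgd(c=0.3, steps=5000, lr=0.02, seed=0):
    # Minimise J(theta) = E[max(0, X - theta)] + c*theta with X ~ Uniform(0,1).
    # The IPA sample-path derivative of max(0, X - theta) is -1{X > theta},
    # an unbiased estimate of the true gradient c - P(X > theta).
    # The minimiser is theta* = 1 - c (here 0.7).
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(steps):
        x = rng.random()
        ipa_grad = (-1.0 if x > theta else 0.0) + c
        theta = min(max(theta - lr * ipa_grad, 0.0), 1.0)  # project onto [0, 1]
    return theta
```

With c = 0.3 the iterates hover around theta* = 0.7 using samples alone, mirroring the model-free character of IPA noted in the abstract.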
Sun, Yufei. "Chance-constrained optimization & optimal control problems." Thesis, Curtin University, 2015. http://hdl.handle.net/20.500.11937/183.
Повний текст джерелаSilva, Francisco Jose. "Interior penalty approximation for optimal control problems. Optimality conditions in stochastic optimal control theory." Palaiseau, Ecole polytechnique, 2010. http://pastel.archives-ouvertes.fr/docs/00/54/22/95/PDF/tesisfjsilva.pdf.
This thesis is divided into two parts. In the first, we consider deterministic optimal control problems and study interior approximations for two model problems with non-negativity constraints. The first model is a quadratic optimal control problem governed by a nonautonomous affine ordinary differential equation. We provide a first-order expansion for the penalized state and adjoint state (around the corresponding state and adjoint state of the original problem) for a general class of penalty functions. Our main argument relies on the following fact: if the optimal control satisfies strict complementarity conditions for its Hamiltonian, except on a set of times of null Lebesgue measure, the functional estimates for the penalized optimal control problem can be derived from the estimates of a related finite-dimensional problem. Our results provide three types of measure to analyze the penalization technique: error estimates for the control, error estimates for the state and the adjoint state, and error estimates for the value function. The second model we study is the optimal control problem of a semilinear elliptic PDE with a Dirichlet boundary condition, where the control variable is distributed over the domain and is constrained to be non-negative. Following the same approach as for the first model, we consider an associated family of penalized problems, whose solutions define a central path converging to the solution of the original one. In this fashion, we are able to extend the results obtained in the ODE framework to the case of semilinear elliptic PDE constraints. In the second part of the thesis we consider stochastic optimal control problems. We begin with the study of a stochastic linear-quadratic problem with non-negativity control constraints and extend the error estimates for the approximation by logarithmic penalization. The proof is based on the stochastic Pontryagin principle and a duality argument.
Next, we deal with a general stochastic optimal control problem with convex control constraints. Using the variational approach, we obtain first- and second-order expansions for the state and cost function around a local minimum. This analysis allows us to prove general first-order necessary conditions and, under a geometric assumption on the constraint set, second-order necessary conditions are also established.
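The logarithmic penalization for non-negativity constraints discussed above can be illustrated on a one-dimensional model problem: minimise (u - a)^2 over u >= 0 via the barrier problem min_u (u - a)^2 - eps*log(u). Its solutions form a central path converging to the constrained optimum as eps tends to 0. This is only a sketch under these simplified assumptions, not the thesis's infinite-dimensional setting.

```python
import math

def barrier_minimizer(a, eps):
    # Unique minimiser of (u - a)^2 - eps*log(u) over u > 0, taken as the
    # positive root of the first-order condition 2(u - a) - eps/u = 0.
    return (a + math.sqrt(a * a + 2.0 * eps)) / 2.0

# Central path for a = -1: the constrained optimum of (u + 1)^2 over
# u >= 0 is u = 0, and the barrier minimisers approach it as eps -> 0.
central_path = [barrier_minimizer(-1.0, 10.0 ** (-k)) for k in range(1, 7)]
```

Each point of the path is strictly interior (positive), and the sequence decreases monotonically toward the constrained minimiser u = 0, which is the behaviour the error estimates above quantify.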
Barth, Eric J. "Approximating infinite horizon discrete-time optimal control using CMAC networks." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/19464.
Повний текст джерелаZhang, Xiaohong. "Optimal feedback control for nonlinear discrete systems and applications to optimal control of nonlinear periodic ordinary differential equations." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40185.
Повний текст джерелаWeiser, Martin. "Function space complementarity methods for optimal control problems." [S.l. : s.n.], 2001. http://www.diss.fu-berlin.de/2001/189/index.html.
Повний текст джерелаAchmatowicz, Richard L. (Richard Leon). "Optimal control problems on an infinite time horizon." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66052.
Повний текст джерелаHodge, D. J. "Problems of stochastic optimal control and yield management." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604120.
Повний текст джерелаLeung, Ho-yin, and 梁浩賢. "Stochastic models for optimal control problems with applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42841781.
Повний текст джерелаRiquelme, Victor. "Optimal control problems for bioremediation of water resources." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT290/document.
This thesis consists of two parts. In the first part we study minimal-time strategies for the treatment of pollution in large water volumes, such as lakes or natural reservoirs, using a single continuous bioreactor that operates in a quasi-steady state. The control consists of feeding the bioreactor from the resource, with clean output returning to the resource at the same flow rate. We drop the hypothesis of homogeneity of the pollutant concentration in the water resource by proposing three spatially structured models. The first model considers two zones connected to each other by diffusion, only one of which is treated by the bioreactor. With the help of the Pontryagin Maximum Principle, we show that the optimal state feedback depends only on the measurements of pollution in the treated zone, with no influence of the volume, the diffusion parameter, or the pollutant concentration in the untreated zone. We show that the effect of a recirculation pump that helps to mix the two zones is beneficial if operated at full speed. We prove that the family of minimal-time functions depending on the diffusion parameter is decreasing. The second model consists of two zones connected to each other by diffusion, each of them connected to the bioreactor. This is a problem with a non-convex velocity set, for which it is not possible to directly prove the existence of solutions. We overcome this difficulty and fully solve the problem by applying Pontryagin's principle to the associated problem with relaxed controls, obtaining a feedback control that treats the most polluted zone up to the homogenization of the two concentrations. We also obtain explicit bounds on its value function via Hamilton-Jacobi-Bellman techniques. We prove that the minimal-time function is nonmonotone as a function of the diffusion parameter. The third model consists of a system of two zones connected to the bioreactor in series, with a recirculation pump between them.
The control set depends on the state variable; we show that this constraint is active from some time up to the final time. We show that the optimal control consists of waiting until a time from which it is optimal to mix at maximum speed, and then to repollute the second zone with the concentration of the first zone. This is a non-intuitive result. Numerical simulations illustrate the theoretical results, and the obtained optimal strategies are tested in hydrodynamic models, proving to be good approximations of the solution of the inhomogeneous problem. The second part consists of the development and study of a stochastic model of a sequencing batch reactor. We obtain the model as a limit of birth and death processes. We establish the existence and uniqueness of solutions of the controlled equation, which does not satisfy the usual assumptions. We prove that under any control law the probability of extinction is positive, which is a non-classical result. We study the problem of maximizing the probability of attaining a target pollution level, with the reactor at maximum capacity, prior to extinction. This problem satisfies none of the usual assumptions (non-Lipschitz dynamics, degenerate locally Hölder diffusion parameter, restricted state space, intersecting reach and avoid sets), so it must be studied in two stages: first, we prove the continuity of the uncontrolled cost function for initial conditions with maximum volume, and then we develop a dynamic programming principle for a modification of the problem as an optimal control problem with final cost and without state constraints.
Leung, Ho-yin. "Stochastic models for optimal control problems with applications." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42841781.
Повний текст джерелаBlanchard, Eunice Anita. "Exact penalty methods for nonlinear optimal control problems." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/1805.
Повний текст джерелаRiffer, Jennifer Lynn. "Time-optimal control of discrete-time systems with known waveform disturbances." [Milwaukee, Wis.] : e-Publications@Marquette, 2009. http://epublications.marquette.edu/theses_open/18.
Повний текст джерелаGranzotto, Mathieu. "Near-optimal control of discrete-time nonlinear systems with stability guarantees." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0301.
Artificial intelligence is rich in algorithms for optimal control. These generate commands for dynamical systems in order to minimize a given cost function describing, for example, the energy of the system. These methods are applicable to large classes of nonlinear systems in discrete time and have proven themselves in many applications. Their application to control problems is therefore very promising. However, a fundamental question remains to be clarified for this purpose: that of stability. Indeed, these studies focus on optimality and in most cases ignore the stability of the controlled system, which is at the heart of control theory. The objective of my thesis is to study the stability of nonlinear systems controlled by such algorithms. The stakes are high, because this creates a new bridge between artificial intelligence and control theory. Stability informs us about the behaviour of the system as a function of time and guarantees its robustness in the presence of disturbances or model uncertainties. Algorithms in artificial intelligence focus on control optimality and do not exploit the properties of the system dynamics. Stability is not only desirable for the reasons mentioned above, but also because it can be used to improve these artificial intelligence algorithms. My research focuses on control techniques from (approximate) dynamic programming when the system model is known. To this end, I identify general conditions under which it is possible to guarantee the stability of the closed-loop system. On the other hand, once stability has been established, we can use it to drastically improve the optimality guarantees in the literature. My work has focused on two main areas. The first concerns value iteration, which is one of the pillars of approximate dynamic programming and is at the heart of many reinforcement learning algorithms.
The second concerns optimistic planning, applied to switched systems. I adapt the optimistic planning algorithm such that, under natural assumptions in a stabilisation context, we obtain the stability of closed-loop systems whose inputs are generated by this modified algorithm, and drastically improve the optimality guarantees on the generated inputs.
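Value iteration, the first pillar mentioned above, can be sketched for a finite deterministic system (a minimal illustration; the thesis treats general nonlinear discrete-time systems and their stability guarantees):

```python
import numpy as np

def value_iteration(stage_cost, next_state, gamma=0.9, iters=300):
    # stage_cost[s, a] and next_state[s, a] define a finite deterministic
    # system. Repeatedly apply the Bellman operator
    #   V(s) <- min_a  stage_cost[s, a] + gamma * V(next_state[s, a]),
    # a contraction with factor gamma, then extract the greedy policy.
    V = np.zeros(stage_cost.shape[0])
    for _ in range(iters):
        V = (stage_cost + gamma * V[next_state]).min(axis=1)
    policy = (stage_cost + gamma * V[next_state]).argmin(axis=1)
    return V, policy

# Toy system: 3 states, 2 inputs; state 2 is a cheap absorbing "goal" state.
cost = np.array([[1.0, 4.0], [2.0, 0.5], [0.0, 1.0]])
nxt = np.array([[1, 2], [0, 2], [2, 2]])
V, policy = value_iteration(cost, nxt)
```

At convergence V satisfies the Bellman fixed-point equation, and the greedy policy keeps the goal state in state 2 at zero cost.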
Ng, Chi Kong. "Globally convergent and efficient methods for unconstrained discrete-time optimal control." HKBU Institutional Repository, 1998. http://repository.hkbu.edu.hk/etd_ra/149.
Повний текст джерелаYucel, Hamdullah. "Adaptive Discontinuous Galerkin Methods For Convectiondominated Optimal Control Problems." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614523/index.pdf.
Повний текст джерелаSmith, Stephen Bevis. "Exact penalty function algorithms for constrained optimal control problems." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/7996.
Повний текст джерелаBenner, Peter, Enrique S. Quintana-Ortí, and Gregorio Quintana-Ortí. "Solving Linear-Quadratic Optimal Control Problems on Parallel Computers." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601704.
Повний текст джерелаLee, Yu Chung Eugene. "Co-ordinated supply chain management and optimal control problems." online access from Digital Dissertation Consortium, 2007. http://libweb.cityu.edu.hk/cgi-bin/er/db/ddcdiss.pl?3299869.
Повний текст джерелаLiu, Xu. "First and second order conditions for optimal control problems." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443828.
Повний текст джерелаSiska, David. "Numerical approximations of stochastic optimal stopping and control problems." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/2571.
Повний текст джерелаWong, Man-kwun, and 黃文冠. "Some sensitivity results for time-delay optimal control problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223655.
Повний текст джерелаSexton, Jennifer. "Optimal stopping and control problems using the Legendre transform." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/optimal-stopping-and-control-problems-using-the-legendre-transform(aa3ce911-2a1d-4d48-8096-367706c798c9).html.
Повний текст джерелаFabrini, Giulia. "Numerical methods for optimal control problems with biological applications." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066096/document.
This thesis is divided into two parts: in the first part we focus on numerical methods for optimal control problems, in particular on the Dynamic Programming Principle and on Model Predictive Control (MPC); in the second part we present some applications of control techniques in biology. In the first part of the thesis, we consider the approximation of an optimal control problem with an infinite horizon, which combines a first step based on MPC, to obtain a fast but rough approximation of the optimal trajectory, and a second step in which we solve the Bellman equation in a neighborhood of the reference trajectory. In this way, we can reduce the size of the domain in which the Bellman equation is solved, and the computational complexity is reduced as well. The second topic of this thesis is the control of level set methods: we consider an optimal control problem in which the dynamics is given by the propagation of a one-dimensional graph, controlled by the normal velocity. A final state is fixed, and the aim is to reach the trajectory chosen as a target while minimizing an appropriate cost functional. To apply the dynamic programming approach, we first reduce the size of the system using Proper Orthogonal Decomposition. The second part of the thesis is devoted to the application of control methods in biology. We present a model, described by a partial differential equation, of the evolution of a population of tumor cells. We analyze the mathematical and biological features of the model. We then formulate an optimal control problem for this model and solve it numerically.
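The MPC step referred to above repeatedly solves a finite-horizon problem and applies only the first input before re-planning. A brute-force sketch for the scalar system x+ = x + u (purely illustrative; the thesis couples MPC with a Bellman equation solved near the reference trajectory, and the horizon, input grid, and stage cost here are assumptions):

```python
from itertools import product

def mpc_step(x, horizon=3, inputs=(-1.0, 0.0, 1.0)):
    # Enumerate all input sequences over the horizon for x+ = x + u,
    # score each by the summed stage cost x^2 + u^2 along the predicted
    # trajectory, and return only the first input of the best sequence.
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(inputs, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            cost += xi * xi + u * u
            xi += u
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Receding-horizon closed loop: re-plan at every step from the new state.
x = 3.0
for _ in range(6):
    x += mpc_step(x)
```

The closed loop drives the state to the origin; re-planning at every step is what distinguishes MPC from applying one precomputed open-loop sequence.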
Chai, Qinqin. "Computational methods for solving optimal industrial process control problems." Thesis, Curtin University, 2013. http://hdl.handle.net/20.500.11937/1227.
Повний текст джерелаLoxton, Ryan Christopher. "Optimal control problems involving constrained, switched, and delay systems." Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/1479.
Повний текст джерелаWinkler, Gunter. "Control constrained optimal control problems in non-convex three dimensional polyhedral domains." Doctoral thesis, Universitätsbibliothek Chemnitz, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800626.
Повний текст джерелаFoley, Dawn Christine. "Short horizon optimal control of nonlinear systems via discrete state space realization." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/16803.
Повний текст джерелаOlofsson, Marcus. "Optimal Switching Problems and Related Equations." Doctoral thesis, Uppsala universitet, Analys och sannolikhetsteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247298.
Повний текст джерелаSternberg, Julia. "Memory efficient approaches of second order for optimal control problems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1135250699292-11488.
An optimal control problem is considered, whose state equations are defined by an initial value problem. Numerous numerical methods exist for solving optimal control problems. The so-called indirect approach is examined in detail in this thesis. Indirect methods solve the boundary value problem resulting from the necessary optimality conditions. The so-called Pantoja method describes a time-efficient stepwise computation of the Newton direction for discrete optimal control problems. There are several relations between the various multiple shooting methods and the Pantoja method, which are investigated in detail in this thesis. In this context, the equivalence between the Pantoja method and the Riccati-type multiple shooting method is shown. Moreover, the conventional Pantoja method is extended so that the state equations are discretized by an implicit numerical method. Furthermore, the symplectic concept is introduced. In this context, a suitable numerical method is presented that can be applied to an unconstrained optimal control problem; it is proved in this thesis that this method is symplectic. Iteratively solving an optimal control problem in ordinary differential equations by Pantoja- or Riccati-equivalent methods leads to a sequence of triples of sweeps over a discretized time interval. The second (adjoint) sweep depends on information from the first (primal) sweep, and the third (final) sweep depends on the two preceding ones. Usually, the steps and states of the adjoint sweep involve considerably more operations and require considerably more memory than those of the other two sweeps.
The basic problem is the enormous amount of memory required to implement these methods if all states of the primal and adjoint sweeps are to be stored. One goal of this thesis is to present checkpointing strategies for implementing these methods memory-efficiently. These nested reversal schedules are constructed so that, for a given amount of memory, the total run time for executing the reversal schedule is minimized. The resulting reversal schedules were applied to a memory-efficient implementation of optimal control problems, in particular the problem of laser surface hardening.
Frey, Michael [Verfasser]. "Shape Calculus Applied to Elliptic Optimal Control Problems / Michael Frey." Bayreuth : Universität Bayreuth, 2012. http://d-nb.info/1059412780/34.
Повний текст джерелаKouzoupis, Dimitris [Verfasser], and Moritz [Akademischer Betreuer] Diehl. "Structure-exploiting numerical methods for tree-sparse optimal control problems." Freiburg : Universität, 2019. http://d-nb.info/1191689549/34.
Повний текст джерелаAndrews, Timothy Paul. "An existence theory for optimal control problems with time delays." Thesis, Imperial College London, 1989. http://hdl.handle.net/10044/1/47332.
Повний текст джерелаVoisei, Mircea D. "First-Order Necessary Optimality Conditions for Nonlinear Optimal Control Problems." Ohio University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1091111473.