
Dissertations / Theses on the topic 'Nonlinear optimal control'


Consult the top 50 dissertations / theses for your research on the topic 'Nonlinear optimal control.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Zhu, Jinghao. "Some results on nonlinear optimal control." Diss., Virginia Tech, 1996. http://scholar.lib.vt.edu/theses/available/etd-10042006-143910/.

2

Zhang, Xiaohong. "Optimal feedback control for nonlinear discrete systems and applications to optimal control of nonlinear periodic ordinary differential equations." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40185.

3

Gavriel, Christos. "Higher order conditions in nonlinear optimal control." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9042.

Abstract:
The most widely used tool for the solution of optimal control problems is the Pontryagin Maximum Principle. But the Maximum Principle is, in general, only a necessary condition for optimality. It is therefore desirable to have supplementary conditions, for example second order sufficient conditions, which confirm optimality (at least locally) of an extremal arc, meaning one that satisfies the Maximum Principle. Standard second order sufficient conditions for optimality, when they apply, yield the information not only that the extremal is locally minimizing, but that it is also locally unique. There are problems of interest, however, where minimizers are not locally unique, owing to the fact that the cost is invariant under small perturbations of the extremal of a particular structure (translations, rotations or time-shifting). For such problems the standard second order conditions can never apply. The first contribution of this thesis is to develop new second order conditions for optimality of extremals which are applicable in some cases of interest when minimizers are not locally unique. The new conditions can, for example, be applied to problems with periodic boundary conditions when the cost is invariant under time translations. The second order conditions investigated here apply to normal extremals. These extremals satisfy the conditions of the Maximum Principle in normal form (with the cost multiplier taken to be 1). It is, therefore, of interest to know when the Maximum Principle applies in normal form. This issue is also addressed in this thesis, for optimal control problems that can be expressed as calculus of variations problems. Normality of the Maximum Principle follows from the fact that, under the regularity conditions developed, the highest time derivative of an extremal arc is essentially bounded. The thesis concludes with a brief account of possible future research directions.
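For orientation, the necessary conditions referred to above can be stated in generic notation (ours, not the thesis's); a sketch for the basic problem of minimizing an integral cost subject to controlled dynamics:

```latex
% Pontryagin Maximum Principle for min \int_0^T \ell(x,u)\,dt subject to
% \dot{x} = f(x,u), x(0) = x_0 (a generic sketch in our notation; the
% thesis treats richer constraint structures).  p is the costate and
% \lambda_0 \ge 0 the cost multiplier (\lambda_0 = 1 for normal extremals).
\[
  H(x,p,u) = p^{\top} f(x,u) - \lambda_0\,\ell(x,u),
\]
\[
  \dot{x}^* = \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial x}, \qquad
  H\big(x^*(t), p(t), u^*(t)\big) = \max_{u \in U} H\big(x^*(t), p(t), u\big).
\]
```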
4

Primbs, James A. "Nonlinear optimal control: a receding horizon approach." Diss., California Institute of Technology, 1999. http://resolver.caltech.edu/CaltechETD:etd-10172005-103315.

5

Tigrek, Tuba. "Nonlinear adaptive optimal control of HVAC systems." Thesis, University of Iowa, 2001. https://ir.uiowa.edu/etd/3429.

6

Chudoung, Jerawan. "Robust Control for Hybrid, Nonlinear Systems." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/26983.

Abstract:
We develop the robust control theories of stopping-time nonlinear systems and switching-control nonlinear systems. We formulate a robust optimal stopping-time control problem for a state-space nonlinear system and give the connection between various notions of lower value function for the associated game (and storage function for the associated dissipative system) with solutions of the appropriate variational inequality (VI). We show that the stopping-time rule can be obtained by solving the VI in the viscosity sense. It also happens that a positive definite supersolution of the VI can be used for stability analysis. We also show how to solve the VI for some prototype examples with one-dimensional state space. For the robust optimal switching-control problem, we establish the Dynamic Programming Principle (DPP) for the lower value function of the associated game and employ it to derive the appropriate system of quasivariational inequalities (SQVI) for the lower value vector function. Moreover we formulate the problem in the L2-gain/dissipative system framework. We show that, under appropriate assumptions, continuous switching-storage (vector) functions are characterized as viscosity supersolutions of the SQVI, and that the minimal such storage function is equal to the lower value function for the game. We show that the control strategy achieving the dissipative inequality is obtained by solving the SQVI in the viscosity sense; in fact this solution is also used to address stability analysis of the switching system. In addition we prove the comparison principle between a viscosity subsolution and a viscosity supersolution of the SQVI satisfying a boundary condition and use it to give an alternative derivation of the characterization of the lower value function. Finally we solve the SQVI for a simple one-dimensional example by a direct geometric construction.
Ph. D.
7

Meum, Patrick. "Optimal Reservoir control using nonlinear MPC and ECLIPSE." Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9610.

Abstract:

Recent advances in well deployment and instrumentation technology offer huge potential for increased oil recovery from reservoir production. Wells can now be equipped with controllable valves at reservoir depth, which may completely alter the production profitability of the field if the devices are used in an intelligent manner. This thesis investigates this potential by using model predictive control to maximize reservoir production performance and total oil production. The report describes an algorithm for nonlinear model predictive control, using a single-shooting, multistep, quasi-Newton method, and implements it on an existing industrial MPC platform, Statoil's in-house MPC tool SEPTIC. The method is iterative, solving a series of quadratic problems, analogous to sequential quadratic programming, to find the optimal control settings. An interface between SEPTIC and a commercial reservoir simulator, ECLIPSE, is developed for process modelling and predictions. ECLIPSE provides highly realistic and detailed reservoir behaviour and is used by SEPTIC to obtain numerical gradients for optimization. The method is applied to two reservoir examples, Case 1 and Case 2, and optimal control strategies are developed for each. The two examples have conceptually different model structures. Case 1 is a simple introductory model. Case 2 is a benchmark model previously used by Yeten, Durlofsky and Aziz (2002) that describes a North Sea type channelized reservoir. It is given by a set of different realizations, to capture a notion of model uncertainty. The report addresses each of the available realizations and shows how the value of an optimal production strategy can vary across equally probable realizations. Improvements in reservoir production performance using the model predictive control method are shown for all cases, compared to basic reference control cases. For the benchmark example, improvements reach as much as a 68% increase in one realization and 30% on average over all realizations. This exceeds the results previously published for the benchmark, which averaged 3%. However, the level of improvement shows significant variation and is only marginal for Case 1. A thorough field analysis should therefore be performed before deciding to take on the extra cost of well equipment and optimal control.
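To fix ideas, here is a hedged sketch of the single-shooting, quasi-Newton loop the abstract describes. None of this code comes from the thesis: a hypothetical toy reservoir model stands in for ECLIPSE, and SciPy's L-BFGS-B (quasi-Newton, finite-difference gradients) stands in for SEPTIC's optimizer.

```python
# Hypothetical stand-in (ours) for the SEPTIC/ECLIPSE loop described
# above: a toy reservoir model replaces the simulator, and a quasi-Newton
# optimizer with numerical gradients plays the role of single-shooting NMPC.
import numpy as np
from scipy.optimize import minimize

def simulate_npv(valves):
    """Single-shooting rollout of a toy reservoir; returns production value."""
    pressure, watercut, npv = 1.0, 0.0, 0.0
    for u in valves:                      # one valve setting per period
        oil = u * pressure * (1.0 - watercut)
        water = u * pressure * watercut
        npv += oil - 0.5 * water          # water handling is costly
        pressure = max(pressure - 0.02 * u * pressure, 0.0)
        watercut = min(watercut + 0.05 * u, 1.0)
    return npv

horizon = 12
u0 = np.full(horizon, 0.5)
# L-BFGS-B is quasi-Newton; with jac=None SciPy falls back to
# finite-difference gradients, as SEPTIC does with the simulator.
res = minimize(lambda u: -simulate_npv(u), u0, method="L-BFGS-B",
               bounds=[(0.0, 1.0)] * horizon)
print("optimal valve schedule:", np.round(res.x, 2))
print("optimized production value:", round(-res.fun, 4))
```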

8

Dong, Wenjie. "Self-organizing and optimal control for nonlinear systems." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3359894.

Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed January 27, 2010). Includes bibliographical references (p. 82-87). Issued in print and online. Available via ProQuest Digital Dissertations.
9

Blanchard, Eunice Anita. "Exact penalty methods for nonlinear optimal control problems." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/1805.

Abstract:
This research comprised the development of solution techniques for three classes of non-standard optimal control problems, namely: optimal control problems with discontinuous objective functions arising in aquaculture operations; impulsive optimal control problems with minimum subsystem durations; and optimal control problems involving dual-mode hybrid systems with state-dependent switching conditions. The numerical algorithms developed use an exact penalty approach to transform the constrained problem into an unconstrained problem that is readily solvable by standard optimal control software.
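The penalty idea can be summarized in one generic line (a sketch in our notation, not the thesis's precise construction):

```latex
% Generic l1 exact penalty (a sketch, not the thesis's exact scheme):
% for a penalty parameter sigma beyond a finite threshold sigma*, the
% minimizers of the unconstrained problem solve the constrained one.
\[
  \min_{x}\; f(x) \ \text{ s.t. } g(x) = 0
  \quad\leadsto\quad
  \min_{x}\; f(x) + \sigma\,\| g(x) \|_{1},
  \qquad \sigma > \sigma^{*}.
\]
```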
10

Jorge, Tiago R., João M. Lemos, and Miguel Barão. "Optimal Control for Vehicle Cruise Speed Transfer." Bachelor's thesis, ACTAPRESS, 2011. http://hdl.handle.net/10174/4513.

Abstract:
The contribution of this paper is a procedure to solve the optimal cruise control problem of transferring the car velocity between two specified values, in a fixed interval of time, with minimum fuel consumption. The solution is obtained by applying a recursive numerical algorithm that approximates the conditions provided by Pontryagin's Optimum Principle. This solution is compared with the one obtained by using a reduced-complexity linear model for the car dynamics, which allows an exact ("analytical") solution of the corresponding optimal control problem. This work has been performed within the framework of activity 2.4.1 – Smart drive control of project SE2A - Nanoelectronics for Safe, Fuel Efficient and Environment Friendly Automotive Solutions, ENIAC initiative.
11

Sjöberg, Johan. "Optimal Control and Model Reduction of Nonlinear DAE Models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11345.

Abstract:
In this thesis, different topics for models that consist of both differential and algebraic equations are studied. The interest in such models, denoted DAE models, have increased substantially during the last years. One of the major reasons is that several modern object-oriented modeling tools used to model large physical systems yield models in this form. The DAE models will, at least locally, be assumed to be described by a decoupled set of ordinary differential equations and purely algebraic equations. In theory, this assumption is not very restrictive because index reduction techniques can be used to rewrite rather general DAE models to satisfy this assumption. One of the topics considered in this thesis is optimal feedback control. For state-space models, it is well-known that the Hamilton-Jacobi-Bellman equation (HJB) can be used to calculate the optimal solution. For DAE models, a similar result exists where a Hamilton-Jacobi-Bellman-like equation is solved. This equation has an extra term in order to incorporate the algebraic equations, and it is investigated how the extra term must be chosen in order to obtain the same solution from the different equations. A problem when using the HJB to find the optimal feedback law is that it involves solving a nonlinear partial differential equation. Often, this equation cannot be solved explicitly. An easier problem is to compute a locally optimal feedback law. For analytic nonlinear time-invariant state-space models, this problem was solved in the 1960's, and in the 1970's the time-varying case was solved as well. In both cases, the optimal solution is described by convergent power series. In this thesis, both of these results are extended to analytic DAE models. Usually, the power series solution of the optimal feedback control problem consists of an infinite number of terms. In practice, an approximation with a finite number of terms is used. A problem is that for certain problems, the region in which the approximate solution is accurate may be small. Therefore, another parametrization of the optimal solution, namely rational functions, is studied. It is shown that for some problems, this parametrization gives a substantially better result than the power series approximation in terms of approximating the optimal cost over a larger region. A problem with the power series method is that the computational complexity grows rapidly both in the number of states and in the order of approximation. However, for DAE models where the underlying state-space model is control-affine, the computations can be simplified. Therefore, conditions under which this property holds are derived. Another major topic considered is how to include stochastic processes in nonlinear DAE models. Stochastic processes are used to model uncertainties and noise in physical processes, and are often an important part in for example state estimation. Therefore, conditions are presented under which noise can be introduced in a DAE model such that it becomes well-posed. For well-posed models, it is then discussed how particle filters can be implemented for estimating the time-varying variables in the model. The final topic in the thesis is model reduction of nonlinear DAE models. The objective with model reduction is to reduce the number of states, while not affecting the input-output behavior too much. Three different approaches are studied, namely balanced truncation, balanced truncation using minimization of the co-observability function and balanced residualization. 
To compute the reduced model for the different approaches, a method originally derived for nonlinear state-space models is extended to DAE models.
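In hedged generic notation (ours), the Hamilton-Jacobi-Bellman-like equation discussed above has roughly the following shape for a semi-explicit DAE with differential state x, algebraic variable z and control u:

```latex
% Sketch only: V is the optimal cost, \ell the stage cost, and the
% dynamics are \dot{x} = F(x,z,u) with algebraic constraint G(x,z,u) = 0.
% The multiplier-like term \mu handles the algebraic part; how this extra
% term must be chosen is precisely what the thesis investigates.
\[
  0 = \min_{u}\Big[\,\ell(x,z,u) + V_x(x)\,F(x,z,u)
        + \mu(x,z)^{\top} G(x,z,u)\Big],
\]
% which collapses to the ordinary HJB equation
% 0 = \min_u [\, \ell(x,u) + V_x(x) f(x,u) \,]
% when no algebraic equations are present.
```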
12

Rosato, Andrea. "A Gaussian Process Learning Method for Nonlinear Optimal Control." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22463/.

Abstract:
This thesis is focused on discrete-time nonlinear optimal control techniques enhanced via a supervised learning approach based on Gaussian Process regression. Since optimal control strategies are strongly model-based, perfect knowledge of the real system is required in order to obtain the best performance; however, it is not always possible to satisfy this requirement, since model uncertainties due to, e.g., unavailable information or hard-to-model dynamical effects may be present, leading to a suboptimal solution for the real problem. The basic idea is to exploit measurement data to reduce (and possibly mitigate) model uncertainty. In this context, a machine learning tool, Gaussian Processes, provides a flexible and easy way to directly assess the residual model uncertainty and to enhance the control performance. In this work, Sequential Open Loop and Differential Dynamic Programming, two well-known optimal control strategies, are adapted to include Gaussian Process regression. The resulting strategies adopt a simultaneous approach: at each iteration, an optimization step is executed to obtain a trajectory with a lower cost than the previous one. Then a learning step is performed, in order to collect data which improve the regression for subsequent iterations. Thanks to Gaussian Process regression, the new strategies are able to solve optimal control problems when model uncertainties are present. Furthermore, the learning part is enhanced by including a selection policy which rejects uninformative measurements, speeding up the solution process and making the algorithm more stable.
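A minimal sketch of the residual-learning idea (assuming scikit-learn; the thesis embeds the regression inside Sequential Open Loop and DDP iterations, which is not reproduced here): fit a GP to the mismatch between measured and nominal dynamics, then use nominal-plus-GP as the corrected model.

```python
# Minimal sketch (ours, assumes scikit-learn) of GP-based model
# correction: learn the residual between the real system and the
# nominal model, then predict with nominal + GP mean.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_step(x, u):      # "real" system (unknown to the controller)
    return 0.9 * x + 0.2 * u + 0.05 * np.sin(3.0 * x)

def nominal_step(x, u):   # imperfect model available to the controller
    return 0.9 * x + 0.2 * u

# Collect measurement data and fit a GP to the model residual.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))                  # (x, u) samples
y = np.array([true_step(x, u) - nominal_step(x, u) for x, u in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

def corrected_step(x, u):
    mean = gp.predict(np.array([[x, u]]))[0]
    return nominal_step(x, u) + mean                  # nominal + GP residual

print("true:", true_step(0.3, 0.1),
      "nominal:", nominal_step(0.3, 0.1),
      "corrected:", corrected_step(0.3, 0.1))
```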
13

Sjöberg, Johan. "Some Results On Optimal Control for Nonlinear Descriptor Systems." Licentiate thesis, Linköping University, Automatic Control, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7489.

Abstract:

In this thesis, optimal feedback control for nonlinear descriptor systems is studied. A descriptor system is a mathematical description that can include both differential and algebraic equations. One of the reasons for the interest in this class of systems is that several modern object-oriented modeling tools yield system descriptions in this form. Here, it is assumed that it is possible to rewrite the descriptor system as a state-space system, at least locally. In theory, this assumption is not very restrictive because index reduction techniques can be used to rewrite rather general descriptor systems to satisfy this assumption.

The Hamilton-Jacobi-Bellman equation can be used to calculate the optimal feedback control for systems in state-space form. For descriptor systems, a similar result exists where a Hamilton-Jacobi-Bellman-like equation is solved. This equation includes an extra term in order to incorporate the algebraic equations. Since the assumptions made here make it possible to rewrite the descriptor system in state-space form, it is investigated how the extra term must be chosen in order to obtain the same solution from the different equations.

A problem when computing the optimal feedback law using the Hamilton-Jacobi-Bellman equation is that it involves solving a nonlinear partial differential equation. Often, this equation cannot be solved explicitly. An easier problem is to compute a locally optimal feedback law. This problem was solved in the 1960's for analytical systems in state-space form and the optimal solution is described using power series. In this thesis, this result is extended to also incorporate descriptor systems and it is applied to a phase-locked loop circuit.

In many situations, it is interesting to know if a certain region is reachable using some control signal. For linear time-invariant state-space systems, this information is given by the controllability gramian. For nonlinear state-space systems, the controllability function is used instead. Three methods for calculating the controllability function for descriptor systems are derived in this thesis. These methods are also applied to some examples in order to illustrate the computational steps.

Furthermore, the observability function is studied. This function reflects the amount of output energy a certain initial state corresponds to. Two methods for calculating the observability function for descriptor systems are derived. To describe one of the methods, a small example consisting of an electrical circuit is studied.


Report code: LiU-TEK-LIC-2006:8
14

Hein, Sabine. "MPC/LQG-Based Optimal Control of Nonlinear Parabolic PDEs." Doctoral thesis, Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201000134.

Abstract:
The topic of this thesis is the theoretical and numerical study of optimal control problems for uncertain nonlinear systems, described by semilinear parabolic differential equations with additive noise, where the state is not completely available. Based on a paper by Kazufumi Ito and Karl Kunisch, published in 2006 under the title "Receding Horizon Control with Incomplete Observations", we analyze a Model Predictive Control (MPC) approach where the resulting linear problems on small intervals are solved with a Linear Quadratic Gaussian (LQG) design. Further, we define a performance index for the MPC/LQG approach, find estimates for it, and present bounds for the solutions of the underlying Riccati equations. Another large part of the thesis is devoted to extensive numerical studies for a 1+1- and a 3+1-dimensional problem to show the robustness of the MPC/LQG strategy. The last part is a generalization of the MPC/LQG approach to infinite-dimensional problems.
15

San-Blas, Felipe. "A case study in nonlinear on-line optimal control." Thesis, Imperial College London, 2003. http://hdl.handle.net/10044/1/7344.

16

Sjöberg, Johan. "Some results on optimal control for nonlinear descriptor systems /." Linköping : Univ, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7489.

17

Sjöberg, Johan. "Optimal control and model reduction of nonlinear DAE models /." Linköping : Department of Electrical Engineering, Linköping University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11345.

18

Johnson, Miles J. "Inverse optimal control for deterministic continuous-time nonlinear systems." Thesis, University of Illinois at Urbana-Champaign, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3632073.

Abstract:

Inverse optimal control is the problem of computing a cost function with respect to which observed state-input trajectories are optimal. We present a new method of inverse optimal control based on minimizing the extent to which observed trajectories violate first-order necessary conditions for optimality. We consider continuous-time deterministic optimal control systems with a cost function that is a linear combination of known basis functions. We compare our approach with three prior methods of inverse optimal control. We demonstrate the performance of these methods by performing simulation experiments using a collection of nominal system models. We compare the robustness of these methods by analyzing how they perform under perturbations to the system. We consider two scenarios: one in which we exactly know the set of basis functions in the cost function, and another in which the true cost function contains an unknown perturbation. Results from simulation experiments show that our new method is computationally efficient relative to prior methods, performs similarly to prior approaches under large perturbations to the system, and better learns the true cost function under small perturbations. We then apply our method to three problems of interest in robotics. First, we apply inverse optimal control to learn the physical properties of an elastic rod. Second, we apply inverse optimal control to learn models of human walking paths. These models of human locomotion enable automation of mobile robots moving in a shared space with humans, and enable motion prediction of walking humans given partial trajectory observations. Finally, we apply inverse optimal control to develop a new method of learning from demonstration for quadrotor dynamic maneuvering. We compare and contrast our method with an existing state-of-the-art solution based on minimum-time optimal control, and show that our method can generalize to novel tasks and reject environmental disturbances.
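A hedged toy version of the core idea (ours, not the thesis's algorithm or code): for a discrete-time LQR system, choose cost weights that minimize the residual of the first-order stationarity condition along the observed trajectory, with the costate supplied by the Riccati solution.

```python
# Toy inverse optimal control sketch (ours, not the thesis's method):
# recover unknown cost weights by minimizing violations of first-order
# necessary conditions along an observed optimal trajectory.
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([[0.0], [0.1]])

def rollout(Q, R, x0, N=30):
    """Observed trajectory: infinite-horizon LQR with the true weights."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    xs, us, x = [x0], [], x0
    for _ in range(N):
        u = -K @ x
        x = A @ x + B @ u
        us.append(u); xs.append(x)
    return xs, us

def foc_residual(c, xs, us):
    """Stationarity residual 2 R u_k + B' lam_{k+1}, with lam = 2 P x."""
    Q, R = np.diag([c[0], c[1]]), np.array([[c[2]]])
    P = solve_discrete_are(A, B, Q, R)
    return sum(np.sum((2 * R @ u + B.T @ (2 * P @ xs[k + 1])) ** 2)
               for k, u in enumerate(us))

# Observe a trajectory generated with "true" weights diag(1.0, 0.5), r = 0.2.
xs, us = rollout(np.diag([1.0, 0.5]), np.array([[0.2]]), np.array([1.0, 0.0]))
# Fix the input weight at its known value to remove the scale ambiguity.
res = minimize(lambda q: foc_residual([q[0], q[1], 0.2], xs, us),
               x0=[0.3, 0.3], bounds=[(1e-3, 10.0)] * 2)
print("recovered state weights:", np.round(res.x, 3))  # close to [1.0, 0.5]
```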

19

Garcia, Fermin N. (Fermin Noel). "A nonlinear control algorithm for fuel optimal attitude control using reaction jets." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/46267.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1998.
Includes bibliographical references (p. 159-161).
We present the analysis and design of a weighted nonlinear time-fuel optimal control algorithm for spacecraft attitude dynamics using on-off gas jets. In the development of a controller, we explore four control algorithms within a single-step control framework where the step is the fundamental update time of the digital controller. The benchmark controller is a basic pulse-width modulator (PWM) with a proportional derivative controller driving the feedback loop. The second is a standard rate-ledge controller (RLC) with full-on or full-off pulse commands, while the third varies the duration of the RLC pulse commands based on the location of the states in the phase plane. The RLC algorithm is shown to well-approximate a continuous-time weighted time-fuel optimal controller. The fourth control algorithm consists of a combination of the variable-pulse RLC algorithm and a tracking-fuel optimal controller that reduces the residual error relative to the latter algorithm. Experimental data from a dynamic air-bearing testbed at Lawrence Livermore National Laboratory are used to compare the four control algorithms. The PWM scheme proves to be robust to disturbances and unmodeled dynamics and quite fast, but yields excessive fuel consumption from frequent switching. The standard RLC algorithm gives poor closed-loop performance in the presence of unmodeled dynamics and ends up being equally as fuel costly as the PWM scheme. The third algorithm, the RLC with variable pulses, significantly improves the transient and steady-state responses of the first two controllers. Via parameter tuning, we observe that this modified RLC gives excellent steady-state fuel consumption as well as reasonably fast settling times. The fourth algorithm, although more fuel efficient than the PWM and standard RLC controllers, is less efficient than the variable RLC algorithm. Matlab simulations of the four control algorithms studied are corroborated by these test results.
by Fermín Noel García.
S.M.
20

Lin, Yiing-Yuh. "Nonlinear optimal control and near-optimal guidance strategies in spacecraft general attitude maneuvers." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53572.

Abstract:
Solving the optimal open-loop control problems for spacecraft large-angle attitude maneuvers generally requires the use of numerical techniques whose reliability is strongly case dependent. The primary goal of this dissertation is to increase the solution reliability of the associated nonlinear two-point boundary-value problems as derived from Pontryagin’s Principle. Major emphasis is placed upon the formulation of the best possible starting or nominal solution. Constraint relationships among the state and costate variables are utilized. A hybrid approach which begins with the direct gradient method and ends with the indirect method of particular solutions is proposed. Test case results which indicate improved reliability are presented. The nonlinear optimal control law derived from iterative procedures cannot adjust itself in accordance with state deviations measured during the control period. A real-time near-optimal guidance scheme which takes the perturbed states to the desired manifold by tracking a given optimal trajectory is also proposed. Numerical simulations are presented which show that highly accurate tracking results can be achieved.
Ph. D.
21

Lee, Chak-hong (李澤康). "Nonlinear time-delay optimal control problem: optimality conditions and duality." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31212475.

22

Lee, Chak-hong. "Nonlinear time-delay optimal control problem: optimality conditions and duality." [Hong Kong]: University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B16391640.

23

Sforni, Lorenzo. "A First-Order Closed-loop Methodology for Nonlinear Optimal Control." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21429/.

Abstract:
This thesis is focused on state-of-the-art numerical optimization methods for nonlinear (discrete-time) optimal control. These challenging problems arise when dealing with complex tasks for autonomous systems (e.g. vehicles or robots) which require the generation of a trajectory that satisfies the system dynamics and, possibly, input and state constraints due to, e.g., actuator limits or safety regions of operation. A general formulation is proposed that allows the implementation of different descent optimization algorithms on optimal control problems, exploiting the beneficial effects of state feedback in terms of efficiency and stability. The main idea is the following: at each iteration a new (infeasible) state-input curve is conveniently updated by any descent method, e.g., gradient descent or Newton methods; then a nonlinear feedback controller maps the curve to a trajectory satisfying the dynamics. Thanks to its inherent flexibility, this strategy provides the opportunity to speed up the resolution of optimization problems by conveniently choosing the descent method. This thesis proposes, for example, to exploit the Heavy-ball method to speed up the convergence. It is important to underline that this methodology enjoys recursive feasibility during the algorithm evolution, i.e. at each iteration a system trajectory is available. This feature is extremely important in real-time control schemes since it allows one to stop the algorithm at any iteration and yet have a (suboptimal) system trajectory. Furthermore, tasks which require the introduction of state and input constraints can be managed by introducing an approximate barrier function which embeds the constraints within the cost function. The second main contribution of this thesis is an original Python toolbox developed in order to implement and compare different optimization methods. Moreover, thanks to a modular approach, with just a few adjustments it is possible to change the system, cost function and constraints.
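A hedged sketch of the update scheme just described (our simplification in plain NumPy; the thesis's toolbox and its analytic derivatives are not reproduced): a descent step updates the input curve, and a tracking feedback maps the updated, infeasible curve back onto a system trajectory, so a feasible trajectory is available at every iteration.

```python
# Hedged sketch (ours) of the closed-loop first-order scheme: descent on
# the state-input curve plus a feedback "projection" onto trajectories.
import numpy as np

def f(x, u):                              # toy scalar nonlinear dynamics
    return x + 0.1 * (x - 0.3 * x**3 + u)

N, x0, alpha, K = 30, 1.0, 0.05, 3.0      # horizon, init, step, gain

def project(x_curve, u_curve):
    """Feedback rollout: map a (possibly infeasible) curve to a trajectory."""
    xs, us, x = [x0], [], x0
    for k in range(N):
        uk = u_curve[k] + K * (x_curve[k] - x)  # tracking controller
        x = f(x, uk)
        us.append(uk); xs.append(x)
    return np.array(xs), np.array(us)

def cost(xs, us):
    return np.sum(xs**2) + 0.1 * np.sum(us**2)

x_curve, u_curve = project(np.zeros(N + 1), np.zeros(N))  # feasible start
for it in range(100):
    xs, us = project(x_curve, u_curve)
    J0 = cost(xs, us)
    grad = np.zeros(N)            # finite-difference gradient through the
    for k in range(N):            # projection (the thesis uses adjoints)
        up = us.copy(); up[k] += 1e-6
        grad[k] = (cost(*project(xs, up)) - J0) / 1e-6
    # The descent step yields an infeasible curve; the next projection
    # maps it back to a trajectory (recursive feasibility).
    x_curve, u_curve = xs, us - alpha * grad
xs, us = project(x_curve, u_curve)
print("final cost:", round(float(cost(xs, us)), 4))
```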
24

Petersson, Daniel. "A Nonlinear Optimization Approach to H2-Optimal Modeling and Control." Doctoral thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93324.

Abstract:
Mathematical models of physical systems are pervasive in engineering. These models can be used to analyze properties of the system, to simulate the system, or synthesize controllers. However, many of these models are too complex or too large for standard analysis and synthesis methods to be applicable. Hence, there is a need to reduce the complexity of models. In this thesis, techniques for reducing complexity of large linear time-invariant (LTI) state-space models and linear parameter-varying (LPV) models are presented. Additionally, a method for synthesizing controllers is also presented. The methods in this thesis all revolve around a system theoretical measure called the H2-norm, and the minimization of this norm using nonlinear optimization. Since the optimization problems rapidly grow large, significant effort is spent on understanding and exploiting the inherent structures available in the problems to reduce the computational complexity when performing the optimization. The first part of the thesis addresses the classical model-reduction problem of LTI state-space models. Various H2 problems are formulated and solved using the proposed structure-exploiting nonlinear optimization technique. The standard problem formulation is extended to incorporate also frequency-weighted problems and norms defined on finite frequency intervals, both for continuous and discrete-time models. Additionally, a regularization-based method to account for uncertainty in data is explored. Several examples reveal that the method is highly competitive with alternative approaches. Techniques for finding LPV models from data, and reducing the complexity of LPV models are presented. The basic ideas introduced in the first part of the thesis are extended to the LPV case, once again covering a range of different setups. LPV models are commonly used for analysis and synthesis of controllers, but the efficiency of these methods depends highly on a particular algebraic structure in the LPV models. A method to account for and derive models suitable for controller synthesis is proposed. Many of the methods are thoroughly tested on a realistic modeling problem arising in the design and flight clearance of an Airbus aircraft model. Finally, output-feedback H2 controller synthesis for LPV models is addressed by generalizing the ideas and methods used for modeling. One of the ideas here is to skip the LPV modeling phase before creating the controller, and instead synthesize the controller directly from the data, which classically would have been used to generate a model to be used in the controller synthesis problem. The method specializes to standard output-feedback H2 controller synthesis in the LTI case, and favorable comparisons with alternative state-of-the-art implementations are presented.
25

Timmerman, Marc A. A. "Sub-optimal momentum managed control of 1-Dof nonlinear systems." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17332.

26

de Villiers, J. P. "Monte Carlo approaches to nonlinear optimal and model predictive control." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598462.

Abstract:
This work explores the novel use of advanced Monte Carlo techniques in the disciplines of nonlinear optimal and model predictive control. The interrelation between the subjects of estimation, random sampling and optimisation is exploited to expand the application of advanced numerical Bayesian inference techniques to the control setting. Firstly, the deterministic optimal control problem is considered. Sophisticated inter-dimensional population Markov Chain Monte Carlo (MCMC) techniques are proposed to solve the nonlinear optimal control problem. The linear quadratic and Acrobot example problems are used as demonstrations of the relevant techniques. Secondly, these methods are extended to the Nonlinear Model Predictive Control (NMPC) setting with uncertain state observations. In this case, two variants of the novel Particle Predictive Controller (PPC) are developed. These PPC algorithms are successfully applied to an F-16 aircraft terrain following problem.
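The flavor of sampling-based optimal control can be conveyed with a hedged toy (plain Metropolis-Hastings over a fixed-dimension control sequence, much simpler than the inter-dimensional population MCMC the thesis develops):

```python
# Toy sketch (ours): Metropolis-Hastings sampling of control sequences
# from p(u) ~ exp(-J(u)/T), which concentrates mass on low-cost controls.
import numpy as np

rng = np.random.default_rng(1)

def cost(u, x0=1.0):
    """Quadratic regulation cost for scalar dynamics x+ = 0.9 x + u."""
    x, J = x0, 0.0
    for uk in u:
        J += x**2 + 0.1 * uk**2
        x = 0.9 * x + uk
    return J + 10.0 * x**2

N, T = 10, 0.01                          # horizon and "temperature"
u = np.zeros(N)
best_u, best_J = u.copy(), cost(u)
for it in range(5000):
    prop = u + 0.05 * rng.standard_normal(N)      # random-walk proposal
    # Accept with probability min(1, exp((J(u) - J(prop)) / T)).
    if np.log(rng.uniform()) < (cost(u) - cost(prop)) / T:
        u = prop
        if cost(u) < best_J:
            best_u, best_J = u.copy(), cost(u)
print("best sampled cost:", round(best_J, 4))
```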
27

Voisei, Mircea D. "First-Order Necessary Optimality Conditions for Nonlinear Optimal Control Problems." Ohio University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1091111473.

28

Abramova, Ekaterina. "Combining reinforcement learning and optimal control for the control of nonlinear dynamical systems." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/39968.

Abstract:
This thesis presents a novel hierarchical learning framework, Reinforcement Learning Optimal Control, for controlling nonlinear dynamical systems with continuous states and actions. The adapted approach mimics the neural computations that allow our brain to bridge across the divide between symbolic action-selection and low-level actuation control by operating at two levels of abstraction. First, current findings demonstrate that at the level of limb coordination human behaviour is explained by linear optimal feedback control theory, where cost functions match energy and timing constraints of tasks. Second, humans learn cognitive tasks involving learning symbolic level action selection, in terms of both model-free and model-based reinforcement learning algorithms. We postulate that the ease with which humans learn complex nonlinear tasks arises from combining these two levels of abstraction. The Reinforcement Learning Optimal Control framework learns the local task dynamics from naive experience using an expectation maximization algorithm for estimation of linear dynamical systems and forms locally optimal Linear Quadratic Regulators, producing continuous low-level control. A high-level reinforcement learning agent uses these available controllers as actions and learns how to combine them in state space, while maximizing a long term reward. The optimal control costs form training signals for the high-level symbolic learner. The algorithm demonstrates that a small number of locally optimal linear controllers can be combined in a smart way to solve global nonlinear control problems and forms a proof-of-principle of how the brain may bridge the divide between low-level continuous control and high-level symbolic action selection. It competes in terms of computational cost and solution quality with state-of-the-art control, which is illustrated with solutions to benchmark problems.
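The low-level layer of the framework is standard LQR; a minimal hedged sketch (ours) of forming one locally optimal controller from one estimated local linear model, which the high-level learner would then treat as a single discrete action:

```python
# Minimal sketch (ours) of the low-level layer described above: a locally
# optimal LQR controller for one estimated local linear model (A, B).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # estimated local dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])      # energy/timing cost weights

P = solve_discrete_are(A, B, Q, R)       # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
u = lambda x: -K @ x                     # one "action" for the RL layer

x = np.array([1.0, 0.0])
for _ in range(5):
    x = A @ x + B @ u(x)                 # closed-loop simulation
print("state after 5 steps:", np.round(x, 3))
```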
29

Dinesh, K. "An approximation method for the finite-time optimal control for nonlinear systems." Thesis, University of Sheffield, 1999. http://etheses.whiterose.ac.uk/14781/.

30

Wolek, Artur. "Optimal Paths in Gliding Flight." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52783.

Abstract:
Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.
Ph. D.
31

Okura, Yuki. "Trajectory Design Based on Robust Optimal Control and Path Following Control." Kyoto University, 2019. http://hdl.handle.net/2433/242499.

Abstract:
Associated degree program: Collaborative Graduate Program in Design
Kyoto University (京都大学)
0048
New-system doctorate by coursework
Doctor of Engineering
Kou No. 21761 (甲第21761号)
Engineering Doctorate No. 4578 (工博第4578号)
Shelf mark: 新制||工||1713 (University Library)
Department of Aeronautics and Astronautics, Graduate School of Engineering, Kyoto University
(Examination committee) Professor Kenji Fujimoto, Professor Kei Senda, Professor Yoshito Ohta
Qualified under Article 4, Paragraph 1 of the Degree Regulations
32

Aziz, Mohd Ismail bin Abd. "Development of hierarchical optimal control algorithms for interconnected nonlinear dynamical systems." Thesis, City University London, 1999. http://openaccess.city.ac.uk/7753/.

Abstract:
The main concern of this thesis is to develop and advance the knowledge of new hierarchical algorithms for optimal control of interconnected nonlinear systems. To achieve this, four basic hierarchical structures are developed by taking into account the manner in which real process measurements taken from interaction inputs are incorporated and utilized in the model-based optimal control problem. The structures are iterative in nature, and are derived using the dynamic integrated system optimization and parameter estimation (DISOPE) technique to take into account model-reality differences that may have been deliberately introduced to facilitate the solution of the complex nonlinear problem or due to uncertainty in the model used for computation. Three of the four basic hierarchical structures are used as a basis for developing hierarchical optimal control algorithms using a linear quadratic model formulation. Two approaches are used in the coordination problem of the algorithms, price coordination approach and the direct coordination approach. The algorithms are then implemented using two techniques, the single loop and the double loop techniques. All the algorithms are implemented in software and a simulation study is carried out using two examples to investigate their effectiveness and convergence properties. The optimality of the solution provided by the structures and the algorithms described in this research work are established. In addition, convergence analysis is carried out to provide sufficient convergence conditions of the double loop algorithms. Suggestions for future research as a continuation of the work presented in this thesis are also made.
33

Torrey, David Allan. "Optimal-efficiency constant-speed control of nonlinear variable reluctance motor drives." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14713.

34

Hua, H. "Optimal and robust control for a class of nonlinear stochastic systems." Thesis, University of Liverpool, 2016. http://livrepository.liverpool.ac.uk/3001023/.

Abstract:
This thesis focuses on theoretical research into optimal and robust control theory for a class of nonlinear stochastic systems. The nonlinearities that appear in the diffusion terms are of a square-root type. For such systems, the following problems are investigated: optimal stochastic control over both finite and infinite horizons; robust stabilization and robust H∞ control; H₂/H∞ control over both finite and infinite horizons; and risk-sensitive control. The importance of this work is that explicit optimal linear controls are obtained, which is very rare in the nonlinear setting. This is an advantage because, with explicit solutions, our work becomes easier to apply to real problems. Apart from the mathematical results obtained, we also present some applications to finance.
35

Granzotto, Mathieu. "Near-optimal control of discrete-time nonlinear systems with stability guarantees." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0301.

Abstract:
Artificial intelligence is rich in algorithms for optimal control. These generate commands for dynamical systems in order to minimize a given cost function describing, for example, the energy of the system. These methods are applicable to large classes of nonlinear systems in discrete time and have proven themselves in many applications, so their use in control problems is very promising. However, a fundamental question remains to be clarified for this purpose: that of stability. Indeed, these studies focus on optimality and in most cases ignore the stability of the controlled system, which is at the heart of control theory. The objective of my thesis is to study the stability of nonlinear systems controlled by such algorithms. The stakes are high, because this will create a new bridge between artificial intelligence and control theory. Stability informs us about the behaviour of the system as a function of time and guarantees its robustness in the presence of disturbances or model uncertainties. Algorithms in artificial intelligence focus on the optimality of the control and do not exploit the properties of the system dynamics. Stability is desirable not only for the reasons mentioned above, but also because it can be exploited to improve these artificial intelligence algorithms. My research focuses on control techniques from (approximate) dynamic programming when the system model is known. For this purpose, I identify general conditions under which it is possible to guarantee the stability of the closed-loop system. Once stability has been established, we can in turn use it to drastically improve the optimality guarantees of the literature. My work has focused on two main areas. The first concerns value iteration, which is one of the pillars of approximate dynamic programming and is at the heart of many reinforcement learning algorithms. The second concerns optimistic planning, applied to switched systems. I adapt the optimistic planning algorithm so that, under assumptions that are natural in a stabilization context, we obtain the stability of the closed-loop systems whose inputs are generated by this modified algorithm, and drastically improve the optimality guarantees of the generated inputs.
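For a concrete picture of the value-iteration setting (a toy sketch in our notation, not the thesis's algorithms or guarantees): iterate the Bellman operator on a discretized scalar nonlinear system and run the greedy policy in closed loop.

```python
# Small sketch (ours) of value iteration for a discretized scalar
# nonlinear system with discount factor gamma; the thesis studies when
# the resulting closed loop is stable and how stability sharpens the
# optimality guarantees.
import numpy as np

xs = np.linspace(-2, 2, 101)            # state grid
us = np.linspace(-1, 1, 21)             # input grid
gamma = 0.95

def f(x, u):
    return np.clip(x + 0.1 * (x - x**3 + u), -2, 2)

def stage_cost(x, u):
    return x**2 + 0.1 * u**2

V = np.zeros_like(xs)
for it in range(200):                    # value iteration
    Vn = np.empty_like(V)
    for i, x in enumerate(xs):
        # Bellman operator: minimize one-step cost plus discounted cost-to-go
        q = [stage_cost(x, u) + gamma * np.interp(f(x, u), xs, V) for u in us]
        Vn[i] = min(q)
    V = Vn

def policy(x):                           # greedy policy from V
    return min(us, key=lambda u: stage_cost(x, u)
               + gamma * np.interp(f(x, u), xs, V))

x = 1.5
for k in range(30):
    x = f(x, policy(x))
print("state after 30 closed-loop steps:", round(float(x), 3))  # near 0
```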
36

Alhejji, Ayman Khalid. "Dynamic Neural Network-based Adaptive Inverse Optimal Control Design." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/891.

Abstract:
This dissertation introduces a Dynamical Neural Network (DNN) model-based adaptive inverse optimal control design for a class of nonlinear systems. A DNN structure is developed and stabilized based on a control Lyapunov function (CLF). The CLF must satisfy the Hamilton-Jacobi-Bellman (HJB) partial differential equation associated with the cost function in order to establish optimality. In other words, the control design is derived from the CLF and inversely achieves optimality, with the cost function determined a posteriori. Stability of the closed-loop system is ensured using Lyapunov-based analysis. In addition to structural stability, uncertainty/disturbance presents a problem for a DNN in that it can degrade system performance. Therefore, the DNN needs a control that is robust against uncertainty. Sliding mode control (SMC) is added to the CLF-based nominal control design in order to stabilize and counteract the effects of disturbance from the uncertain DNN, and to achieve global asymptotic stability. Next, a DNN observer is considered for estimating the states of a class of controllable and observable nonlinear systems, and a DNN observer-based adaptive inverse optimal control (AIOC) is developed. With weight adaptation, an adaptive technique is introduced in the observer design and its stabilizing control. The AIOC is designed to control a DNN observer and the nonlinear system simultaneously while the weight parameters are updated online. This control scheme guarantees the quality of the DNN's state estimate and minimizes the cost function. In addition, a tracking problem is investigated. An inverse optimal adaptive tracking control based on a DNN observer for unknown nonlinear systems is proposed. Within this framework, a time-varying desired trajectory, generated from external inputs, is investigated. The tracking control design forces the system states to follow the desired trajectory, while the DNN observer estimates the states and identifies the unknown system dynamics. Lyapunov-based analysis guarantees global asymptotic stability. Numerical examples and simulation studies are presented for each part to validate the effectiveness of the proposed methods.
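For reference, the inverse-optimality construction has a well-known generic form (a sketch in standard notation, not the dissertation's specific equations):

```latex
% Generic inverse-optimality sketch: for \dot{x} = f(x) + g(x)u with a
% control Lyapunov function V and weight R(x) \succ 0, the damping control
\[
  u^{*}(x) = -\tfrac{1}{2}\, R(x)^{-1} \big( L_{g}V(x) \big)^{\top}
\]
% is optimal for a cost J = \int_0^\infty ( l(x) + u^{\top} R(x)\, u )\,dt
% whose state penalty l(x) is chosen a posteriori so that V satisfies
% the associated Hamilton-Jacobi-Bellman equation.
```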
37

Takahama, Morio, Noboru Sakamoto, and Yuhei Yamato. "Attitude Stabilization of an Aircraft via Nonlinear Optimal Control Based on Aerodynamic Data." Institute of Electrical and Electronics Engineers, 2009. http://hdl.handle.net/2237/14420.

38

Basset, Gareth. "Virtual Motion Camouflage Based Nonlinear Constrained Optimal Trajectory Design Method." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5116.

Abstract:
Nonlinear constrained optimal trajectory control is an important and fundamental area of research that continues to advance in numerous fields. Many attempts have been made to present new methods that can solve for optimal trajectories more efficiently or to improve the overall performance of existing techniques. This research presents a recently developed bio-inspired method called the Virtual Motion Camouflage (VMC) method, which offers a means of quickly finding, within a defined but varying search space, a trajectory that is equal or close to the optimal solution. The research starts with the polynomial-based VMC method, which works within a search space defined by a selected and fixed polynomial-type virtual prey motion. Next, a means of improving the solution's optimality is presented via a sequential form of VMC, where the search space is adjusted by modifying the polynomial prey trajectory after a solution is obtained. After the search space is adjusted, an optimization is performed in the new search space to find a solution closer to the globally optimal solution, and further adjustments are made as desired. Finally, a B-spline augmented VMC method is presented, in which a B-spline curve represents the prey motion and allows the search space to be optimized together with the solution trajectory. It is shown that (1) the polynomial-based VMC method significantly reduces the overall problem dimension, which in practice significantly reduces the computational cost associated with solving nonlinear constrained optimal trajectory problems; (2) the sequential VMC method improves the solution optimality by sequentially refining certain parameters, such as the prey motion; and (3) the B-spline augmented VMC method improves the solution optimality without sacrificing much CPU time compared with the polynomial-based approach. Several simulation scenarios, including the Breakwell problem, the phantom track problem, the minimum-time mobile robot obstacle avoidance problem, and Snell's river problem, are simulated to demonstrate the capabilities of the various forms of the VMC algorithm. The capabilities of the B-spline augmented VMC method are also shown in a hardware demonstration using a mobile robot obstacle avoidance testbed.
Thesis (Ph.D.)--University of Central Florida, 2012. Includes bibliographical references (p. 110-116).
Ph.D.
Doctorate
Mechanical and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
39

Voisei, Mircea Dan. "First-order necessary optimality conditions for nonlinear optimal control problems." Ohio: Ohio University, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1091111473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Calvia, Alessandro. "Optimal control of pure jump Markov processes with noise-free partial observation." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199013.

Full text
Abstract:
This thesis is concerned with an infinite horizon optimal control problem for a pure jump Markov process with noise-free partial observation. We are given a pair of stochastic processes, named the unobserved or signal process and the observed or data process. The signal process is a continuous-time pure jump Markov process, taking values in a complete and separable metric space, whose controlled rate transition measure is known. The observed process takes values in another complete and separable metric space and is of noise-free type. By this we mean that its values at each time t are given as a function of the corresponding values at time t of the unobserved process. We assume that this function is a deterministic and, without loss of generality, surjective map between the state spaces of the signal and data processes. The aim is to control the dynamics of the unobserved process, i.e. its controlled rate transition measure, through a control process taking values in the set of Borel probability measures on a compact metric space, named the set of control actions. We take as admissible controls for our problem all the processes of this kind that are also predictable with respect to the natural filtration of the data process. The control process is chosen in this class to minimize a discounted cost functional on an infinite time horizon. The infimum of this cost functional among all admissible controls is the value function. In order to study the value function, a preliminary step is required: we need to recast our optimal control problem with partial observation into a problem with complete observation. This is done by studying the filtering process, a measure-valued stochastic process providing at each time t the conditional law of the unobserved process given the available observations up to time t (represented by the natural filtration of the data process at time t). We show that the filtering process satisfies an explicit stochastic differential equation and we characterize it as a Piecewise Deterministic Markov Process, in the sense of Davis. To treat the filtering process as a state variable, we study a separated optimal control problem. We introduce it as a discrete-time problem and show that it is equivalent to the original one, i.e. their respective value functions are linked by an explicit formula. We also show that admissible controls of the original problem and admissible policies of the separated one have a specific structure and that there is a precise relationship between them. Next, we characterize the value function of the separated control problem (hence, indirectly, the value function of the original control problem) as the unique fixed point of a contraction mapping, acting from the space of bounded continuous functions on the state space of the filtering process into itself. Consequently, we prove that the value function is bounded and continuous. The special case of a signal process given by a finite-state Markov chain is also studied. In this setting, we show that the value function of the separated control problem is uniformly continuous on the state space of the filtering process and that it is the unique constrained viscosity solution (in the sense of Soner) of a Hamilton-Jacobi-Bellman equation. We also prove that an optimal ordinary control exists, i.e. a control process taking values in the set of control actions, and that this process is a piecewise open-loop control in the sense of Vermes.
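The fixed-point characterization above has a familiar discrete-time computational analogue: for a discounted problem, the Bellman operator is a contraction and value iteration converges geometrically to its unique fixed point. The following toy finite-state sketch (with made-up data) stands in for the thesis's continuous-time filtering setting:

```python
import numpy as np

# Finite-state, discounted toy problem: the Bellman operator T is a
# beta-contraction in the sup norm, so value iteration converges to its
# unique fixed point -- the value function. All data below are made up.
n_states, n_actions, beta = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, :]
c = rng.random((n_actions, n_states))                             # running cost

def bellman(V):
    # (T V)(s) = min_a [ c(a, s) + beta * E[ V(s') | s, a ] ]
    return np.min(c + beta * (P @ V), axis=0)

V = np.zeros(n_states)
for _ in range(500):
    V_new = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-12:   # geometric convergence
        V = V_new
        break
    V = V_new
print(V)
```

The abstract's result is the continuous-time analogue, with the contraction acting on bounded continuous functions of the filter state rather than on a finite grid.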
APA, Harvard, Vancouver, ISO, and other styles
41

Foley, Dawn Christine. "Short horizon optimal control of nonlinear systems via discrete state space realization." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/16803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Woon, Siew Fang. "Global algorithms for nonlinear discrete optimization and discrete-valued optimal control problems." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/538.

Full text
Abstract:
Optimal control problems arise in many applications, such as in economics, finance, process engineering, and robotics. Some optimal control problems involve a control which takes values from a discrete set. These problems are known as discrete-valued optimal control problems. Most practical discrete-valued optimal control problems have multiple local minima and thus require global optimization methods to generate practically useful solutions. Due to the high complexity of these problems, metaheuristic-based global optimization techniques are usually required. One of the more recent global optimization tools in the area of discrete optimization is known as the discrete filled function method. The basic idea of the discrete filled function method is as follows. We choose an initial point and then perform a local search to find an initial local minimizer. Then, we construct an auxiliary function, called a discrete filled function, at this local minimizer. By minimizing the filled function, either an improved local minimizer is found or one of the vertices of the constraint set is reached. Otherwise, the parameters of the filled function are adjusted. This process is repeated until no better local minimizer of the corresponding filled function is found. The final local minimizer is then taken as an approximation of the global minimizer. While the main aim of this thesis is to present a new computational method for solving discrete-valued optimal control problems, the initial focus is on solving purely discrete optimization problems. We identify several discrete filled function techniques in the literature and perform a critical review including comprehensive numerical tests. Once the best filled function method is identified, we propose and test several variations of the method with numerical examples. We then consider the task of determining near globally optimal solutions of discrete-valued optimal control problems. The main difficulty in solving discrete-valued optimal control problems is that the control restraint set is discrete and hence not convex. Conventional computational optimal control techniques are designed for problems in which the control takes values in a connected set, such as an interval, and thus they cannot solve the problem directly. Furthermore, variable switching times are known to cause problems in the implementation of any numerical algorithm, due to the variable location of discontinuities in the dynamics. Therefore, such problems cannot be solved using conventional computational approaches. We propose a time scaling transformation to overcome this difficulty, where a new discrete variable representing the switching sequence and a new variable controlling the switching times are introduced. The transformation results in an equivalent mixed discrete optimization problem. The transformed problem is then decomposed into a bi-level optimization problem, which is solved using a combination of the efficient discrete filled function method identified earlier and a computational optimal control technique based on the concept of control parameterization. To demonstrate the applicability of the proposed method, we solve two complex applied engineering problems involving a hybrid power system and a sensor scheduling task, respectively. Computational results indicate that this method is robust, reliable, and efficient. It can successfully identify a near-global solution for these complex applied optimization problems, despite the demonstrated presence of multiple local optima. In addition, we compare the results obtained with other methods in the literature. Numerical results confirm that the proposed method yields significant improvements over those obtained by other methods.
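The filled-function loop described at the start of this abstract can be sketched as follows; the specific filled-function form, the penalty parameter, and the toy objective are assumptions made for illustration (the thesis reviews and compares several variants):

```python
import numpy as np
from itertools import product

# Schematic discrete filled function loop on an integer grid. The particular
# filled function G and the fixed parameter mu are illustrative choices only.
BOX = [(-5, 5), (-5, 5)]
MOVES = [d for d in product((-1, 0, 1), repeat=2) if d != (0, 0)]

def f(x):  # toy multimodal objective (assumed)
    return float(np.sin(3.0 * x[0]) + 0.1 * (x[0] - 2) ** 2 + 0.1 * x[1] ** 2)

def neighbors(x):
    for d in MOVES:
        y = (x[0] + d[0], x[1] + d[1])
        if all(lo <= yi <= hi for yi, (lo, hi) in zip(y, BOX)):
            yield y

def local_search(x, g):
    # steepest descent over the discrete neighborhood
    while True:
        best = min(neighbors(x), key=g)
        if g(best) >= g(x):
            return x
        x = best

x_star = local_search((4, 4), f)
mu = 0.1
for _ in range(20):
    # filled function: reward moving away from x_star, penalize worsening f
    G = lambda y, xs=x_star: (-np.linalg.norm(np.subtract(y, xs))
                              + max(f(y) - f(xs), 0.0) ** 2 / mu)
    y = local_search(x_star, G)          # ends in a better basin or at a box vertex
    cand = local_search(y, f)
    if f(cand) < f(x_star):
        x_star = cand                    # improved local minimizer found
    else:
        break  # (adjustment of mu omitted in this sketch)
print(x_star, f(x_star))
```

The outer loop here simply stops at the first failure to improve; as the abstract notes, the methods studied in the thesis instead adjust the filled-function parameters before terminating.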
APA, Harvard, Vancouver, ISO, and other styles
43

Cimen, Tayfun. "Global optimal feedback control of nonlinear systems and viscosity solutions of Hamilton-Jacobi-Bellman equations." Thesis, University of Sheffield, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.289660.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Fawcett, Randall Tyler. "Real-Time Planning and Nonlinear Control for Robust Quadrupedal Locomotion with Tails." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104201.

Full text
Abstract:
This thesis aims to address the real-time planning and nonlinear control of quadrupedal locomotion such that the resulting gaits are robust to various kinds of disturbances. Specifically, this work addresses two scenarios: a quasi-static formulation, in which an inertial appendage (i.e., a tail) is used to assist the quadruped in negating external push disturbances, and an agile formulation, derived in a manner such that an appendage could easily be added in future work to examine the effect of tails on agile and high-speed motions. Initially, this work presents a unified method in which bio-inspired articulated serpentine robotic tails may be integrated with walking robots, specifically quadrupeds, in order to produce stable and highly robust locomotion. The design and analysis of a holonomically constrained 2-degree-of-freedom (DOF) tail are shown, and its accompanying nonlinear dynamic model is presented. The model created is used to develop a hierarchical control scheme consisting of a high-level path planner and a full-order nonlinear low-level controller. The high-level controller is based on model predictive control (MPC) and acts on a linear inverted pendulum (LIP) model which has been extended to include the forces produced by the tail by augmenting the LIP model with linearized tail dynamics. The MPC generates center of mass (COM) and tail trajectories subject to constraints on the net ground reaction forces of the system, the tail shape, and the tail torque saturation, in order to ensure overall feasibility of locomotion. At the lower level, a full-order nonlinear controller is implemented to track the generated trajectories using quadratic program (QP) based input-output (I-O) feedback linearization acting on virtual constraints. The analytical results of the proposed approach are verified numerically through simulations using a full-order nonlinear model of the quadrupedal robot Vision60, augmented with a tail, totaling 20 DOF. The simulations include a variety of disturbances to show the robustness of the presented hierarchical control scheme. The aforementioned control scheme is then extended in the latter portion of this thesis to achieve more dynamic, agile, and robust locomotion. In particular, we examine the use of a single rigid body model as the template model for the real-time high-level MPC, which is linearized using variational-based linearization (VBL) and solved at 200 Hz rather than in an event-based manner. The previously defined virtual constraints controller is also extended to include a control Lyapunov function (CLF), which contributes both to the numerical stability of the QP and to the stability of the output dynamics. This new hierarchical scheme is validated on the A1 robot, with a total of 18 DOF, through extensive simulations displaying agility and robustness to ground height variations and external disturbances. The low-level controller is then further validated through a series of experiments demonstrating that the algorithm can be readily transferred to hardware platforms.
Master of Science
This thesis aims to address the real-time planning and nonlinear control of four-legged walking robots such that the resulting gaits are robust to various kinds of disturbances. Initially, this work presents a method in which a robotic tail can be integrated with legged robots to produce very stable walking patterns. A model is subsequently created to develop a multi-layer control scheme consisting of a high-level path planner, based on a reduced-order model and model predictive control techniques, that determines the trajectory for the quadruped and tail, followed by a low-level controller that considers the full-order dynamics of the robot and tail for robust tracking of the planned trajectory. The reduced-order model considered here enforces quasi-static motions, which are slow but generally stable. This formulation is validated numerically through extensive full-order simulations of the Vision60 robot. This work then proceeds to develop an agile formulation using a similar multi-layer structure, but with a reduced-order model that is more amenable to dynamic walking patterns. The low-level controller is also augmented slightly to provide additional robustness and theoretical guarantees. The latter control algorithm is validated extensively in simulation using the A1 robot to show the large increase in robustness compared to the quasi-static formulation. Finally, this work presents experimental validation of the low-level controller formulated in the latter half of this work.
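For reference, the tail-augmented LIP template that the high-level MPC described above plans over can be written in the following standard form; the symbols and the exact tail augmentation are assumptions here, and the thesis's linearized tail dynamics may differ.

```latex
% Planar LIP with a tail-torque term (standard template; notation assumed):
%   x        : horizontal COM position        p   : stance-foot / ZMP location
%   z_0      : constant COM height            m   : total mass
%   tau_tail : torque exerted by the tail joint
\[
  \ddot{x} \;=\; \frac{g}{z_0}\,(x - p) \;+\; \frac{\tau_{\mathrm{tail}}}{m\, z_0},
\]
% the MPC minimizes a quadratic tracking cost over the COM and tail trajectories
% subject to ground-reaction-force, tail-shape, and torque-saturation constraints.
```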
APA, Harvard, Vancouver, ISO, and other styles
45

Reig, Bernad Alberto. "Optimal Control for Automotive Powertrain Applications." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90624.

Full text
Abstract:
Optimal Control (OC) is essentially a mathematical extremal problem. The procedure consists of the definition of a criterion to minimize (or maximize), some constraints that must be fulfilled, and boundary conditions or disturbances affecting the system behavior. OC theory supplies methods to derive a control trajectory that minimizes (or maximizes) that criterion. This dissertation addresses the application of OC to automotive control problems at the powertrain level, with emphasis on the internal combustion engine. The necessary tools are an optimization method and a mathematical representation of the powertrain. Thus, OC theory is reviewed with a quantitative analysis of the advantages and drawbacks of the three optimization methods available in the literature: dynamic programming, the Pontryagin minimum principle, and direct methods. Implementation algorithms for these three methods are developed and described in detail. In addition, an experimentally validated dynamic powertrain model is developed, comprising longitudinal vehicle dynamics, electrical motor and battery models, and a mean value engine model. OC can be utilized for three different purposes: 1. Applied control, when all boundaries can be accurately defined. The engine control is addressed with this approach assuming that the driving cycle is known in advance, translating into a large mathematical problem. Two specific cases are studied: the management of a dual-loop EGR system, and the full control of engine actuators, namely fueling rate, SOI, EGR and VGT settings. 2. Derivation of near-optimal control rules, to be used if some disturbances are unknown. In this context, the calculation of cycle-specific engine calibrations and a stochastic feedback control for power-split management in hybrid vehicles are analyzed. 3. Use of OC trajectories as a benchmark or baseline to improve the system design and efficiency with an objective criterion. OC is used to optimize the heat release law of a diesel engine and to size a hybrid powertrain with a further cost analysis. OC strategies have been applied experimentally in the works related to the internal combustion engine, showing significant improvements but also non-negligible difficulties, which are analyzed and discussed. The methods developed in this dissertation are general and can be extended to other criteria if appropriate models are available.
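As a minimal illustration of the first of these three tools, here is a generic backward-induction dynamic program on a toy battery state-of-charge problem; the grids, the cost, and the terminal constraint are invented for illustration and are not the dissertation's engine or battery models.

```python
import numpy as np

# Generic backward-induction DP on a toy battery state-of-charge (SOC) grid.
N, n_soc, n_u = 50, 101, 21
soc = np.linspace(0.0, 1.0, n_soc)           # state grid
u = np.linspace(-0.02, 0.02, n_u)            # SOC increment per step (controls)
stage_cost = (u + 0.02) ** 2                 # toy "fuel" cost of each control

J = np.where(soc >= 0.5, 0.0, np.inf)        # terminal cost: require SOC >= 0.5
policy = np.zeros((N, n_soc), dtype=int)
for k in range(N - 1, -1, -1):
    # index of the successor state for every (state, control) pair
    nxt = np.clip(np.rint((soc[:, None] + u[None, :]) * (n_soc - 1)).astype(int),
                  0, n_soc - 1)
    Q = stage_cost[None, :] + J[nxt]         # cost-to-go of each pair
    policy[k] = np.argmin(Q, axis=1)
    J = np.min(Q, axis=1)
print(J[np.searchsorted(soc, 0.6)])          # optimal cost starting from SOC 0.6
```

Pontryagin's principle and direct methods trade this exhaustive state-grid sweep for, respectively, pointwise necessary conditions and a large sparse nonlinear program, which is the trade-off the quantitative comparison in the dissertation addresses.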
Reig Bernad, A. (2017). Optimal Control for Automotive Powertrain Applications [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90624
APA, Harvard, Vancouver, ISO, and other styles
46

Jennings, Alan Lance. "Autonomous Motion Learning for Near Optimal Control." University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1344016631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Boonnithivorakul, Nattapong. "Recursive on-line strategy for optimal control of a class of nonlinear systems." Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1136091921&sid=7&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Rungger, Matthias [Verfasser]. "On the Numerical Solution of Nonlinear and Hybrid Optimal Control Problems / Matthias Rungger." Kassel : Kassel University Press, 2012. http://d-nb.info/1036916677/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Medagam, Peda Vasanta Reddy. "Online optimal control for a class of nonlinear system using RBF neural networks." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1650508351&sid=19&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Umemura, Yoshio, Noboru Sakamoto, and Yuto Yuasa. "Optimal Control Designs for Systems with Input Saturations and Rate Limiters." IEEE, 2010. http://hdl.handle.net/2237/14447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
