Dissertations / Theses on the topic 'Nonlinear optimal control'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Nonlinear optimal control.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Zhu, Jinghao. "Some results on nonlinear optimal control." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-10042006-143910/.
Zhang, Xiaohong. "Optimal feedback control for nonlinear discrete systems and applications to optimal control of nonlinear periodic ordinary differential equations." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40185.
Gavriel, Christos. "Higher order conditions in nonlinear optimal control." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9042.
Primbs, James A. Doyle John Comstock. "Nonlinear optimal control : a receding horizon approach /." Diss., Pasadena, Calif. : California Institute of Technology, 1999. http://resolver.caltech.edu/CaltechETD:etd-10172005-103315.
Tigrek, Tuba. "Nonlinear adaptive optimal control of HVAC systems." Thesis, University of Iowa, 2001. https://ir.uiowa.edu/etd/3429.
Chudoung, Jerawan. "Robust Control for Hybrid, Nonlinear Systems." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/26983.
Full textPh. D.
Meum, Patrick. "Optimal Reservoir control using nonlinear MPC and ECLIPSE." Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9610.
Recent advances in well deployment and instrumentation technology offer huge potential for increased oil recovery in reservoir production. Wells can now be equipped with controllable valves at reservoir depth, which may completely alter the production profitability of the field if the devices are used in an intelligent manner. This thesis investigates this potential by using model predictive control to maximize reservoir production performance and total oil production. The report describes an algorithm for nonlinear model predictive control, using a single-shooting, multistep, quasi-Newton method, and implements it on an existing industrial MPC platform: Statoil's in-house MPC tool SEPTIC. The method is iterative, solving a series of quadratic problems analogous to sequential quadratic programming to find the optimal control settings. An interface between SEPTIC and a commercial reservoir simulator, ECLIPSE, is developed for process modelling and predictions. ECLIPSE provides highly realistic and detailed reservoir behaviour and is used by SEPTIC to obtain numerical gradients for optimization. The method is applied to two reservoir examples, Case 1 and Case 2, and optimal control strategies are developed for each. The two examples have conceptually different model structures. Case 1 is a simple introduction model. Case 2 is a benchmark model previously used in Yeten, Durlofsky and Aziz (2002) that models a North Sea type channelized reservoir. It is described by a set of different realizations to capture a notion of model uncertainty. The report addresses each of the available realizations and shows how the value of an optimal production strategy can vary across equally probable realizations. Improvements in reservoir production performance using the model predictive control method are shown for all cases, compared to basic controlled reference cases.
For the benchmark example, improvements range up to a 68% increase in one realization and 30% on average across all realizations, up from the 3% average previously published for the benchmark. However, the level of improvement shows significant variation and is only marginal for Case 1. A thorough field analysis should therefore be performed before deciding to take on the extra cost of well equipment and optimal control.
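The single-shooting, quasi-Newton idea in Meum's abstract can be sketched in a few lines. The toy reservoir model and all numbers below are invented stand-ins (the thesis uses ECLIPSE for the dynamics and SEPTIC for the optimization):

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-tank "reservoir": state x (stored volume), control u (valve
# opening per step).  This model is invented for illustration; it stands in
# for the ECLIPSE simulator that provides the real dynamics.
def simulate(u_seq, x0=10.0):
    """Roll the model over the horizon and return total production reward."""
    x, reward = x0, 0.0
    for u in u_seq:
        q = u * x / 10.0              # production ~ valve opening * pressure
        x -= q                        # reservoir depletion
        reward += q - 0.01 * u**2     # production minus control effort
    return reward

# Single shooting: optimize the whole input sequence directly.  L-BFGS-B's
# finite-difference gradients play the same role as the numerical gradients
# SEPTIC obtains from the simulator.
horizon = 10
res = minimize(lambda u: -simulate(u), x0=np.full(horizon, 0.5),
               bounds=[(0.0, 2.0)] * horizon, method="L-BFGS-B")
u_opt = res.x
```

Because the whole horizon is one decision vector, constraints on valve openings map directly onto simple bounds, which is what makes the single-shooting transcription attractive for simulator-in-the-loop optimization.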
Dong, Wenjie. "Self-organizing and optimal control for nonlinear systems." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3359894.
Includes abstract. Title from first page of PDF file (viewed January 27, 2010). Includes bibliographical references (p. 82-87). Issued in print and online. Available via ProQuest Digital Dissertations.
Blanchard, Eunice Anita. "Exact penalty methods for nonlinear optimal control problems." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/1805.
Jorge, Tiago R., João M. Lemos, and Miguel Barão. "Optimal Control for Vehicle Cruise Speed Transfer." Bachelor's thesis, ACTAPRESS, 2011. http://hdl.handle.net/10174/4513.
Sjöberg, Johan. "Optimal Control and Model Reduction of Nonlinear DAE Models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11345.
Rosato, Andrea. "A Gaussian Process Learning Method for Nonlinear Optimal Control." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22463/.
Sjöberg, Johan. "Some Results On Optimal Control for Nonlinear Descriptor Systems." Licentiate thesis, Linköping University, Automatic Control, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7489.
In this thesis, optimal feedback control for nonlinear descriptor systems is studied. A descriptor system is a mathematical description that can include both differential and algebraic equations. One of the reasons for the interest in this class of systems is that several modern object-oriented modeling tools yield system descriptions in this form. Here, it is assumed that it is possible to rewrite the descriptor system as a state-space system, at least locally. In theory, this assumption is not very restrictive because index reduction techniques can be used to rewrite rather general descriptor systems to satisfy this assumption.
The Hamilton-Jacobi-Bellman equation can be used to calculate the optimal feedback control for systems in state-space form. For descriptor systems, a similar result exists where a Hamilton-Jacobi-Bellman-like equation is solved. This equation includes an extra term in order to incorporate the algebraic equations. Since the assumptions made here make it possible to rewrite the descriptor system in state-space form, it is investigated how the extra term must be chosen in order to obtain the same solution from the different equations.
A problem when computing the optimal feedback law using the Hamilton-Jacobi-Bellman equation is that it involves solving a nonlinear partial differential equation. Often, this equation cannot be solved explicitly. An easier problem is to compute a locally optimal feedback law. This problem was solved in the 1960s for analytical systems in state-space form, and the optimal solution is described using power series. In this thesis, this result is extended to also incorporate descriptor systems, and it is applied to a phase-locked loop circuit.
In many situations, it is interesting to know if a certain region is reachable using some control signal. For linear time-invariant state-space systems, this information is given by the controllability gramian. For nonlinear state-space systems, the controllability function is used instead. Three methods for calculating the controllability function for descriptor systems are derived in this thesis. These methods are also applied to some examples in order to illustrate the computational steps.
Furthermore, the observability function is studied. This function reflects the amount of output energy a certain initial state corresponds to. Two methods for calculating the observability function for descriptor systems are derived. To describe one of the methods, a small example consisting of an electrical circuit is studied.
Report code: LiU-TEK-LIC-2006:8
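The power-series construction in Sjöberg's abstract starts from the quadratic term of the value function, which for the linearized dynamics is given by an algebraic Riccati equation. A minimal state-space sketch with invented matrices (the thesis extends this to descriptor systems and higher-order terms):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# For a linearization dx/dt ≈ A x + B u of a nonlinear system around the
# origin, the quadratic part V(x) ≈ x' P x of the value function solves an
# algebraic Riccati equation, and u = -K x is the locally optimal feedback.
# Higher-order series terms then correct for the nonlinearity.
A = np.array([[0.0, 1.0],
              [-1.0, 0.3]])          # example: unstable oscillator (invented)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                        # state weight
R = np.array([[1.0]])                # input weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # locally optimal feedback u = -K x
closed_loop_eigs = np.linalg.eigvals(A - B @ K)   # should be Hurwitz
```

The stabilizing solution `P` is symmetric positive definite, and the closed-loop eigenvalues all lie in the open left half-plane, which is what makes the series expansion a valid local approximation.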
Hein, Sabine. "MPC/LQG-Based Optimal Control of Nonlinear Parabolic PDEs." Doctoral thesis, Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201000134.
San-Blas, Felipe. "A case study in nonlinear on-line optimal control." Thesis, Imperial College London, 2003. http://hdl.handle.net/10044/1/7344.
Sjöberg, Johan. "Some results on optimal control for nonlinear descriptor systems /." Linköping : Univ, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7489.
Sjöberg, Johan. "Optimal control and model reduction of nonlinear DAE models /." Linköping : Department of Electrical Engineering, Linköping University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11345.
Johnson, Miles J. "Inverse optimal control for deterministic continuous-time nonlinear systems." Thesis, University of Illinois at Urbana-Champaign, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3632073.
Inverse optimal control is the problem of computing a cost function with respect to which observed state and input trajectories are optimal. We present a new method of inverse optimal control based on minimizing the extent to which observed trajectories violate first-order necessary conditions for optimality. We consider continuous-time deterministic optimal control systems with a cost function that is a linear combination of known basis functions. We compare our approach with three prior methods of inverse optimal control. We demonstrate the performance of these methods by performing simulation experiments using a collection of nominal system models. We compare the robustness of these methods by analyzing how they perform under perturbations to the system. We consider two scenarios: one in which we exactly know the set of basis functions in the cost function, and another in which the true cost function contains an unknown perturbation. Results from simulation experiments show that our new method is computationally efficient relative to prior methods, performs similarly to prior approaches under large perturbations to the system, and better learns the true cost function under small perturbations. We then apply our method to three problems of interest in robotics. First, we apply inverse optimal control to learn the physical properties of an elastic rod. Second, we apply inverse optimal control to learn models of human walking paths. These models of human locomotion enable automation of mobile robots moving in a shared space with humans, and enable motion prediction of walking humans given partial trajectory observations. Finally, we apply inverse optimal control to develop a new method of learning from demonstration for quadrotor dynamic maneuvering. We compare and contrast our method with an existing state-of-the-art solution based on minimum-time optimal control, and show that our method can generalize to novel tasks and reject environmental disturbances.
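The residual-minimization idea behind this line of work can be illustrated on the smallest possible instance. The scalar system, horizon, and weights below are invented, with the first-order (KKT) residuals written out for a quadratic cost:

```python
import numpy as np

# Recover an unknown cost weight from an observed trajectory by minimizing
# violation of the first-order necessary conditions.  Scalar system
# x+ = a*x + b*u with stage cost c1*x^2 + c2*u^2; only c2 is treated as
# unknown (c1 fixed to 1).  All numbers are invented.
a, b, T = 0.9, 0.5, 30
c1_true, c2_true = 1.0, 0.5

# Forward problem: generate an optimal trajectory by finite-horizon LQR
# (backward Riccati recursion, no terminal cost).
P, K = 0.0, np.zeros(T)
for t in reversed(range(T)):
    K[t] = a * b * P / (c2_true + b * b * P)
    P = c1_true + a * a * P - (a * b * P) ** 2 / (c2_true + b * b * P)
x, u = np.zeros(T + 1), np.zeros(T)
x[0] = 1.0
for t in range(T):
    u[t] = -K[t] * x[t]
    x[t + 1] = a * x[t] + b * u[t]

# Inverse problem: stationarity in u gives the costate
# lambda_{t+1} = -2*c2*u_t/b; stationarity in x_t then gives residuals that
# are linear in the weights:  2*c1*x_t - (2*c2/b)*(a*u_t - u_{t-1}) = 0.
t_ = np.arange(1, T)
coef = (2.0 / b) * (a * u[t_] - u[t_ - 1])   # multiplies the unknown c2
rhs = 2.0 * c1_true * x[t_]                  # known c1 side
c2_hat = np.linalg.lstsq(coef[:, None], rhs, rcond=None)[0][0]
```

Because the observed trajectory satisfies the necessary conditions exactly, the least-squares residual vanishes and `c2_hat` recovers the true weight; with perturbed data the same least-squares problem returns the weight that minimizes the condition violation.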
Garcia, Fermin N. (Fermin Noel). "A nonlinear control algorithm for fuel optimal attitude control using reaction jets." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/46267.
Includes bibliographical references (p. 159-161).
We present the analysis and design of a weighted nonlinear time-fuel optimal control algorithm for spacecraft attitude dynamics using on-off gas jets. In the development of a controller, we explore four control algorithms within a single-step control framework where the step is the fundamental update time of the digital controller. The benchmark controller is a basic pulse-width modulator (PWM) with a proportional derivative controller driving the feedback loop. The second is a standard rate-ledge controller (RLC) with full-on or full-off pulse commands, while the third varies the duration of the RLC pulse commands based on the location of the states in the phase plane. The RLC algorithm is shown to well-approximate a continuous-time weighted time-fuel optimal controller. The fourth control algorithm consists of a combination of the variable-pulse RLC algorithm and a tracking-fuel optimal controller that reduces the residual error relative to the latter algorithm. Experimental data from a dynamic air-bearing testbed at Lawrence Livermore National Laboratory are used to compare the four control algorithms. The PWM scheme proves to be robust to disturbances and unmodeled dynamics and quite fast, but yields excessive fuel consumption from frequent switching. The standard RLC algorithm gives poor closed-loop performance in the presence of unmodeled dynamics and ends up being equally as fuel costly as the PWM scheme. The third algorithm, the RLC with variable pulses, significantly improves the transient and steady-state responses of the first two controllers. Via parameter tuning, we observe that this modified RLC gives excellent steady-state fuel consumption as well as reasonably fast settling times. The fourth algorithm, although more fuel efficient than the PWM and standard RLC controllers, is less efficient than the variable RLC algorithm. Matlab simulations of the four control algorithms studied are corroborated by these test results.
by Fermín Noel García.
S.M.
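The phase-plane logic behind thruster controllers of this kind can be sketched for a double integrator. This is a textbook bang-off-bang stand-in with invented gains and dead-band, not García's flight algorithm:

```python
# Double-integrator attitude model: angle x, rate v, on-off thruster
# u in {-U, 0, +U}.  Thrust against the time-optimal switching curve, coast
# inside a dead-band to save fuel.  U, DEAD, dt and the initial condition
# are invented for illustration.
U, DEAD = 1.0, 0.05

def control(x, v):
    s = x + v * abs(v) / (2 * U)       # time-optimal switching function
    if abs(s) < DEAD and abs(v) < DEAD:
        return 0.0                     # coast: inside the fuel-saving band
    return -U if s > 0 else U

dt, x, v, fuel = 0.01, 1.0, 0.0, 0.0
for _ in range(2000):                  # 20 s of simulated motion
    u = control(x, v)
    fuel += abs(u) * dt                # on-off jets: fuel ~ total on-time
    x, v = x + v * dt, v + u * dt
```

Without the dead-band the controller chatters at the origin and burns fuel continuously, which mirrors the abstract's observation that pure PWM and full-on/full-off schemes are fuel-costly; the coast region trades a small steady-state error for much lower consumption.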
Lin, Yiing-Yuh. "Nonlinear optimal control and near-optimal guidance strategies in spacecraft general attitude maneuvers." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53572.
Ph. D.
李澤康 and Chak-hong Lee. "Nonlinear time-delay optimal control problem: optimality conditions and duality." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31212475.
Lee, Chak-hong. "Nonlinear time-delay optimal control problem : optimality conditions and duality /." [Hong Kong] : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B16391640.
Sforni, Lorenzo. "A First-Order Closed-loop Methodology for Nonlinear Optimal Control." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21429/.
Petersson, Daniel. "A Nonlinear Optimization Approach to H2-Optimal Modeling and Control." Doctoral thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93324.
Timmerman, Marc A. A. "Sub-optimal momentum managed control of 1-Dof nonlinear systems." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17332.
De Villiers, J. P. "Monte Carlo approaches to nonlinear optimal and model predictive control." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598462.
Voisei, Mircea D. "First-Order Necessary Optimality Conditions for Nonlinear Optimal Control Problems." Ohio University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1091111473.
Abramova, Ekaterina. "Combining reinforcement learning and optimal control for the control of nonlinear dynamical systems." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/39968.
Dinesh, K. "An approximation method for the finite-time optimal control for nonlinear systems." Thesis, University of Sheffield, 1999. http://etheses.whiterose.ac.uk/14781/.
Wolek, Artur. "Optimal Paths in Gliding Flight." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52783.
Full textPh. D.
Okura, Yuki. "Trajectory Design Based on Robust Optimal Control and Path Following Control." Kyoto University, 2019. http://hdl.handle.net/2433/242499.
Full textKyoto University (京都大学)
0048
New degree system, course doctorate (新制・課程博士)
Doctor of Engineering (博士(工学))
Dissertation No. 21761 (甲第21761号)
Engineering Doctorate No. 4578 (工博第4578号)
新制||工||1713(附属図書館)
Kyoto University Graduate School of Engineering, Department of Aeronautics and Astronautics (京都大学大学院工学研究科航空宇宙工学専攻)
Examiners: Prof. Kenji Fujimoto (chief), Prof. Kei Senda, Prof. Yoshito Ohta
Eligible under Article 4, Paragraph 1 of the Degree Regulations (学位規則第4条第1項該当)
Aziz, Mohd Ismail bin Abd. "Development of hierarchical optimal control algorithms for interconnected nonlinear dynamical systems." Thesis, City University London, 1999. http://openaccess.city.ac.uk/7753/.
Torrey, David Allan. "Optimal-efficiency constant-speed control of nonlinear variable reluctance motor drives." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14713.
Hua, H. "Optimal and robust control for a class of nonlinear stochastic systems." Thesis, University of Liverpool, 2016. http://livrepository.liverpool.ac.uk/3001023/.
Granzotto, Mathieu. "Near-optimal control of discrete-time nonlinear systems with stability guarantees." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0301.
Artificial intelligence is rich in algorithms for optimal control. These generate commands for dynamical systems in order to minimize a given cost function describing, for example, the energy of the system. These methods are applicable to large classes of nonlinear systems in discrete time and have proven themselves in many applications. Their use in control problems is therefore very promising. However, a fundamental question remains to be clarified for this purpose: that of stability. Indeed, these studies focus on optimality and in most cases ignore the stability of the controlled system, which is at the heart of control theory. The objective of my thesis is to study the stability of nonlinear systems controlled by such algorithms. The stakes are high, because it creates a new bridge between artificial intelligence and control theory. Stability informs us about the behaviour of the system as a function of time and guarantees its robustness in the presence of model disturbances or uncertainties. Algorithms in artificial intelligence focus on control optimality and do not exploit the properties of the system dynamics. Stability is not only desirable for the reasons mentioned above, but also for the possibility of using it to improve these artificial intelligence algorithms. My research focuses on control techniques from (approximate) dynamic programming when the system model is known. For this purpose, I identify general conditions under which it is possible to guarantee the stability of the closed-loop system. On the other hand, once stability has been established, we can use it to drastically improve the optimality guarantees of the literature. My work has focused on two main areas. The first concerns the value iteration approach, which is one of the pillars of approximate dynamic programming and is at the heart of many reinforcement learning algorithms.
The second concerns the optimistic planning approach, applied to switched systems. I adapt the optimistic planning algorithm such that, under natural assumptions in a stabilization context, we obtain the stability of closed-loop systems whose inputs are generated by this modified algorithm, and drastically improve the optimality guarantees of the generated inputs.
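The value-iteration pillar that this thesis analyzes can be illustrated on a toy discounted problem (states, costs, and discount factor invented here). The greedy policy obtained from the converged value function steers every state to the target, the kind of closed-loop behavior whose stability the thesis studies:

```python
import numpy as np

# Value iteration on a small deterministic problem: states 0..N-1 on a line,
# actions move left/stay/right, stage cost = distance from the target state.
N, gamma, target = 11, 0.95, 5
actions = (-1, 0, 1)

def step(s, a):
    return min(max(s + a, 0), N - 1)

V = np.zeros(N)
for _ in range(500):                   # Bellman fixed-point iteration
    V = np.array([min(abs(s - target) + gamma * V[step(s, a)]
                      for a in actions) for s in range(N)])

def greedy(s):
    """One-step-lookahead policy derived from the converged value function."""
    return min(actions, key=lambda a: abs(s - target) + gamma * V[step(s, a)])

s = 0
for _ in range(N):                     # closed loop under the greedy policy
    s = step(s, greedy(s))
```

With a discount factor close to 1 and a cost that vanishes only at the target, the greedy closed loop converges to the target and stays there; characterizing when this holds for general nonlinear systems and finite iteration counts is exactly the thesis's question.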
Alhejji, Ayman Khalid. "Dynamic Neural Network-based Adaptive Inverse Optimal Control Design." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/891.
Takahama, Morio, Noboru Sakamoto, and Yuhei Yamato. "Attitude Stabilization of an Aircraft via Nonlinear Optimal Control Based on Aerodynamic Data." Institute of Electrical and Electronics Engineers, 2009. http://hdl.handle.net/2237/14420.
Basset, Gareth. "Virtual Motion Camouflage Based Nonlinear Constrained Optimal Trajectory Design Method." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5116.
Full textID: 031001346; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Adviser: .; Title from PDF title page (viewed April 18, 2013).; Thesis (Ph.D.)--University of Central Florida, 2012.; Includes bibliographical references (p. 110-116).
Ph.D.
Doctorate
Mechanical and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering
Voisei, Mircea Dan. "First-order necessary optimality conditions for nonlinear optimal control problems." Ohio : Ohio University, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1091111473.
CALVIA, ALESSANDRO. "Optimal control of pure jump Markov processes with noise-free partial observation." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199013.
This thesis is concerned with an infinite horizon optimal control problem for a pure jump Markov process with noise-free partial observation. We are given a pair of stochastic processes, named unobserved or signal process and observed or data process. The signal process is a continuous-time pure jump Markov process, taking values in a complete and separable metric space, whose controlled rate transition measure is known. The observed process takes values in another complete and separable metric space and is of noise-free type. By this we mean that its values at each time t are given as a function of the corresponding values at time t of the unobserved process. We assume that this function is a deterministic and, without loss of generality, surjective map between the state spaces of the signal and data processes. The aim is to control the dynamics of the unobserved process, i.e. its controlled rate transition measure, through a control process, taking values in the set of Borel probability measures on a compact metric space, named set of control actions. We take as admissible controls for our problem all the processes of this kind that are also predictable with respect to the natural filtration of the data process. The control process is chosen in this class to minimize a discounted cost functional on infinite time horizon. The infimum of this cost functional among all admissible controls is the value function. In order to study the value function a preliminary step is required. We need to recast our optimal control problem with partial observation into a problem with complete observation. This is done by studying the filtering process, a measure-valued stochastic process providing at each time t the conditional law of the unobserved process given the available observations up to time t (represented by the natural filtration of the data process at time t).
We show that the filtering process satisfies an explicit stochastic differential equation and we characterize it as a Piecewise Deterministic Markov Process, in the sense of Davis. To treat the filtering process as a state variable, we study a separated optimal control problem. We introduce it as a discrete-time one and we show that it is equivalent to the original one, i.e. their respective value functions are linked by an explicit formula. We also show that admissible controls of the original problem and admissible policies of the separated one have a specific structure and there is a precise relationship between them. Next, we characterize the value function of the separated control problem (hence, indirectly, the value function of the original control problem) as the unique fixed point of a contraction mapping, acting from the space of bounded continuous function on the state space of the filtering process into itself. Therefore, we prove that the value function is bounded and continuous. The special case of a signal process given by a finite-state Markov chain is also studied. In this setting, we show that the value function of the separated control problem is uniformly continuous on the state space of the filtering process and that it is the unique constrained viscosity solution (in the sense of Soner) of a Hamilton-Jacobi-Bellman equation. We also prove that an optimal ordinary control exists, i.e. a control process taking values in the set of control actions, and that this process is a piecewise open-loop control in the sense of Vermes.
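A discrete-time caricature of the noise-free filter makes the mechanism concrete (the thesis works in continuous time with controlled rate transition measures; the chain, observation map, and data below are invented): predict with the transition matrix, restrict to the preimage of the observation, renormalize.

```python
import numpy as np

# Finite-state signal chain observed through a deterministic surjective map
# h: y = h(x).  One filter step: predict, restrict, renormalize.
P = np.array([[0.7, 0.2, 0.1],     # transition matrix (invented)
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
h = np.array([0, 0, 1])            # states 0,1 -> y = 0; state 2 -> y = 1

def filter_step(pi, y):
    """Conditional law update given the new noise-free observation y."""
    pred = pi @ P                          # predict through the dynamics
    post = pred * (h == y)                 # restrict to {x : h(x) = y}
    return post / post.sum()               # renormalize

pi = np.array([1.0, 0.0, 0.0])     # known initial state
for y in [0, 0, 1, 0]:             # an observed data sequence (consistent)
    pi = filter_step(pi, y)
```

The filter output is itself a probability vector, so it can serve as the state of the separated, completely observed control problem, which is the role the filtering process plays in the thesis.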
Foley, Dawn Christine. "Short horizon optimal control of nonlinear systems via discrete state space realization." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/16803.
Woon, Siew Fang. "Global algorithms for nonlinear discrete optimization and discrete-valued optimal control problems." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/538.
Cimen, Tayfun. "Global optimal feedback control of nonlinear systems and viscosity solutions of Hamilton-Jacobi-Bellman equations." Thesis, University of Sheffield, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.289660.
Fawcett, Randall Tyler. "Real-Time Planning and Nonlinear Control for Robust Quadrupedal Locomotion with Tails." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104201.
Full textMaster of Science
This thesis aims to address the real-time planning and nonlinear control of four legged walking robots such that the resulting gaits are robust to various kinds of disturbances. Initially, this work presents a method in which a robotic tail can be integrated with legged robots to produce very stable walking patterns. A model is subsequently created to develop a multi-layer control scheme which consists of a high-level path planner, based on a reduced-order model and model predictive control techniques, that determines the trajectory for the quadruped and tail, followed by a low-level controller that considers the full-order dynamics of the robot and tail for robust tracking of the planned trajectory. The reduced-order model considered here enforces quasi-static motions which are slow but generally stable. This formulation is validated numerically through extensive full-order simulations of the Vision60 robot. This work then proceeds to develop an agile formulation using a similar multi-layer structure, but uses a reduced-order model which is more amenable to dynamic walking patterns. The low-level controller is also augmented slightly to provide additional robustness and theoretical guarantees. The latter control algorithm is extensively numerically validated in simulation using the A1 robot to show the large increase in robustness compared to the quasi-static formulation. Finally, this work presents experimental validation of the low-level controller formulated in the latter half of this work.
Reig, Bernad Alberto. "Optimal Control for Automotive Powertrain Applications." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90624.
Optimal Control (OC) is essentially a mathematical extremum-seeking problem, consisting of the definition of a criterion to minimize (or maximize), constraints that must be satisfied, and boundary conditions affecting the system. OC theory offers methods to derive a control trajectory that minimizes (or maximizes) that criterion. This thesis deals with the application of OC to automotive systems, and especially to the internal combustion engine. The necessary tools are an optimization method and a mathematical representation of the powertrain. To this end, a quantitative analysis of the advantages and drawbacks of the three optimization methods found in the literature is performed: dynamic programming, Pontryagin's minimum principle, and direct methods. Algorithms to implement these methods are developed and described, as well as an experimentally validated powertrain model comprising the vehicle's longitudinal dynamics, models for the electric motor and batteries, and a mean-value combustion engine model. OC can be used for three different purposes: 1. Applied control, in case the boundary conditions are defined. It can be applied to combustion engine control for a given driving cycle, translating into a large-scale mathematical problem. Two particular cases are studied: the management of a dual-loop EGR system, and full engine control, in particular of the injection, SOI, EGR and VGT set-points. 2. Derivation of near-optimal control rules, applicable in cases where not all disturbances are known. In this regard, the computation of cycle-specific engine calibrations and the energy management of a hybrid vehicle by means of stochastic closed-loop control are analyzed. 3. Use of OC trajectories as a benchmark or reference for design and improvement tasks, offering an objective criterion.
The combustion law as well as the sizing of a hybrid powertrain are optimized by means of OC. The OC strategies have been applied experimentally in the work concerning the combustion engine, demonstrating their substantial advantages, but also analyzing difficulties and lines of action to overcome them. The methods developed in this doctoral thesis are general and applicable to other criteria if suitable models are available.
Reig Bernad, A. (2017). Optimal Control for Automotive Powertrain Applications [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90624
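Of the three optimization-method families compared in Reig's thesis, the direct method is the quickest to sketch: transcribe the control trajectory into a finite-dimensional NLP. The toy speed-transfer model and numbers below are invented, not the thesis's validated powertrain model:

```python
import numpy as np
from scipy.optimize import minimize

# Direct method: discretize the input trajectory and hand the resulting NLP
# to a general-purpose solver.  Toy longitudinal dynamics with linear drag;
# all numbers are illustrative.
N, dt = 20, 0.5
v0, vf = 10.0, 20.0                      # initial / final speed (m/s)

def rollout(u):
    """Integrate the speed dynamics over the horizon for inputs u."""
    v = v0
    for ut in u:
        v = v + dt * (ut - 0.01 * v)     # acceleration minus drag
    return v

res = minimize(lambda u: dt * np.sum(u**2),              # control-effort cost
               x0=np.ones(N),
               constraints=[{"type": "eq",
                             "fun": lambda u: rollout(u) - vf}],
               method="SLSQP")
u_opt = res.x
```

The same transcription accepts path constraints (e.g. actuator limits) as extra bounds or inequality constraints, which is the practical appeal of direct methods over shooting on the necessary conditions.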
Jennings, Alan Lance. "Autonomous Motion Learning for Near Optimal Control." University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1344016631.
Boonnithivorakul, Nattapong. "Recursive on-line strategy for optimal control of a class of nonlinear systems /." Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1136091921&sid=7&Fmt=2&clientId=1509&RQT=309&VName=PQD.
Rungger, Matthias [Verfasser]. "On the Numerical Solution of Nonlinear and Hybrid Optimal Control Problems / Matthias Rungger." Kassel : Kassel University Press, 2012. http://d-nb.info/1036916677/34.
Medagam, Peda Vasanta Reddy. "Online optimal control for a class of nonlinear system using RBF neural networks /." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1650508351&sid=19&Fmt=2&clientId=1509&RQT=309&VName=PQD.
Umemura, Yoshio, Noboru Sakamoto, and Yuto Yuasa. "Optimal Control Designs for Systems with Input Saturations and Rate Limiters." IEEE, 2010. http://hdl.handle.net/2237/14447.