A selection of scholarly literature on the topic "Deterministic optimal control"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Deterministic optimal control".

Next to each work in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Deterministic optimal control"

1

Chaplais, F. "Averaging and Deterministic Optimal Control". SIAM Journal on Control and Optimization 25, no. 3 (May 1987): 767–80. http://dx.doi.org/10.1137/0325044.

2

Behncke, Horst. "Optimal control of deterministic epidemics". Optimal Control Applications and Methods 21, no. 6 (November 2000): 269–85. http://dx.doi.org/10.1002/oca.678.

3

Pareigis, Stephan. "Learning optimal control in deterministic systems". ZAMM - Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik 78, S3 (1998): 1033–34. http://dx.doi.org/10.1002/zamm.19980781585.

4

Wang, Yuanchang, and Jiongmin Yong. "A deterministic affine-quadratic optimal control problem". ESAIM: Control, Optimisation and Calculus of Variations 20, no. 3 (May 21, 2014): 633–61. http://dx.doi.org/10.1051/cocv/2013078.

5

Vermes, D. "Optimal control of piecewise deterministic Markov process". Stochastics 14, no. 3 (February 1985): 165–207. http://dx.doi.org/10.1080/17442508508833338.

6

Soravia, Pierpaolo. "On Aronsson Equation and Deterministic Optimal Control". Applied Mathematics and Optimization 59, no. 2 (May 28, 2008): 175–201. http://dx.doi.org/10.1007/s00245-008-9048-7.

7

Haurie, A., A. Leizarowitz, and Ch. van Delft. "Boundedly optimal control of piecewise deterministic systems". European Journal of Operational Research 73, no. 2 (March 1994): 237–51. http://dx.doi.org/10.1016/0377-2217(94)90262-3.

8

Seierstad, Atle. "Existence of optimal nonanticipating controls in piecewise deterministic control problems". ESAIM: Control, Optimisation and Calculus of Variations 19, no. 1 (January 18, 2012): 43–62. http://dx.doi.org/10.1051/cocv/2011197.

9

Mitsos, Alexander, Jaromił Najman, and Ioannis G. Kevrekidis. "Optimal deterministic algorithm generation". Journal of Global Optimization 71, no. 4 (February 13, 2018): 891–913. http://dx.doi.org/10.1007/s10898-018-0611-8.

Abstract:
A formulation for the automated generation of algorithms via mathematical programming (optimization) is proposed. The formulation is based on the concept of optimizing within a parameterized family of algorithms, or equivalently a family of functions describing the algorithmic steps. The optimization variables are the parameters—within this family of algorithms—that encode algorithm design: the computational steps of which the selected algorithms consist. The objective function of the optimization problem encodes the merit function of the algorithm, e.g., the computational cost (possibly also including a cost component for memory requirements) of the algorithm execution. The constraints of the optimization problem ensure convergence of the algorithm, i.e., solution of the problem at hand. The formulation is described prototypically for algorithms used in solving nonlinear equations and in performing unconstrained optimization; the parametrized algorithm family considered is that of monomials in function and derivative evaluation (including negative powers). A prototype implementation in GAMS is provided along with illustrative results demonstrating cases for which well-known algorithms are shown to be optimal. The formulation is a mixed-integer nonlinear program. To overcome the multimodality arising from nonconvexity in the optimization problem, a combination of brute force and general-purpose deterministic global algorithms is employed to guarantee the optimality of the algorithm devised. We then discuss several directions towards which this methodology can be extended, their scope and limitations.
10

Yu, Juanyi, Jr-Shin Li, and Tzyh-Jong Tarn. "Optimal Control of Gene Mutation in DNA Replication". Journal of Biomedicine and Biotechnology 2012 (2012): 1–26. http://dx.doi.org/10.1155/2012/743172.

Abstract:
We propose a molecular-level control system view of the gene mutations in DNA replication from the finite field concept. By treating DNA sequences as state variables, chemical mutagens and radiation as control inputs, one cell cycle as a step increment, and the measurements of the resulting DNA sequence as outputs, we derive system equations for both deterministic and stochastic discrete-time, finite-state systems of different scales. Defining the cost function as a summation of the costs of applying mutagens and the off-trajectory penalty, we solve the deterministic and stochastic optimal control problems by dynamic programming algorithm. In addition, given that the system is completely controllable, we find that the global optimum of both base-to-base and codon-to-codon deterministic mutations can always be achieved within a finite number of steps.
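Entry 10 above solves deterministic, finite-state optimal control problems by dynamic programming. As a purely illustrative sketch of that generic technique (not the paper's model: the toy states, dynamics, and costs below are made up), backward dynamic programming over a finite horizon can be written as follows in Python; in the paper's setting the states would be DNA sequences and the controls mutagen inputs.

# Minimal backward dynamic programming for a deterministic finite-state,
# finite-horizon optimal control problem (illustrative sketch only; the
# states, controls, and costs below are hypothetical).

def solve_dp(states, controls, step, stage_cost, terminal_cost, horizon):
    """Return value tables V[t][x] and an optimal policy pi[t][x]."""
    V = [{x: 0.0 for x in states} for _ in range(horizon + 1)]
    pi = [{x: None for x in states} for _ in range(horizon)]
    for x in states:
        V[horizon][x] = terminal_cost(x)
    for t in range(horizon - 1, -1, -1):
        for x in states:
            best_u, best_val = None, float("inf")
            for u in controls:
                val = stage_cost(t, x, u) + V[t + 1][step(t, x, u)]
                if val < best_val:
                    best_u, best_val = u, val
            V[t][x], pi[t][x] = best_val, best_u
    return V, pi

# Toy usage: steer an integer state toward 0 with controls in {-1, 0, 1}.
states = list(range(-3, 4))
controls = (-1, 0, 1)
step = lambda t, x, u: max(-3, min(3, x + u))          # clamped dynamics
stage_cost = lambda t, x, u: x * x + abs(u)            # state cost plus control effort
terminal_cost = lambda x: 10 * x * x
V, pi = solve_dp(states, controls, step, stage_cost, terminal_cost, horizon=5)
print(V[0][3], pi[0][3])                               # optimal cost and first move from x = 3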

Dissertations on the topic "Deterministic optimal control"

1

Ribeiro do Val, Joao Bosco. "Stochastic optimal control for piecewise deterministic Markov processes". Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/38142.

2

Johnson, Miles J. "Inverse optimal control for deterministic continuous-time nonlinear systems". Thesis, University of Illinois at Urbana-Champaign, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3632073.

Abstract:
Inverse optimal control is the problem of computing a cost function with respect to which observed state input trajectories are optimal. We present a new method of inverse optimal control based on minimizing the extent to which observed trajectories violate first-order necessary conditions for optimality. We consider continuous-time deterministic optimal control systems with a cost function that is a linear combination of known basis functions. We compare our approach with three prior methods of inverse optimal control. We demonstrate the performance of these methods by performing simulation experiments using a collection of nominal system models. We compare the robustness of these methods by analyzing how they perform under perturbations to the system. We consider two scenarios: one in which we exactly know the set of basis functions in the cost function, and another in which the true cost function contains an unknown perturbation. Results from simulation experiments show that our new method is computationally efficient relative to prior methods, performs similarly to prior approaches under large perturbations to the system, and better learns the true cost function under small perturbations. We then apply our method to three problems of interest in robotics. First, we apply inverse optimal control to learn the physical properties of an elastic rod. Second, we apply inverse optimal control to learn models of human walking paths. These models of human locomotion enable automation of mobile robots moving in a shared space with humans, and enable motion prediction of walking humans given partial trajectory observations. Finally, we apply inverse optimal control to develop a new method of learning from demonstration for quadrotor dynamic maneuvering. We compare and contrast our method with an existing state-of-the-art solution based on minimum-time optimal control, and show that our method can generalize to novel tasks and reject environmental disturbances.

3

Laera, Simone. "VWAP Optimal Execution: Deterministic and Stochastic Approaches". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Abstract:
Understanding market impact and optimizing trading strategies to minimize it has long been an important goal for investors who wish to execute large orders. Among the strategies available today, VWAP orders correspond to implementation strategies in which traders trade along with market volume in an attempt to achieve an average execution price equal to the VWAP (Volume Weighted Average Price) benchmark. In a framework inspired by Robert Almgren and Neil Chriss' original market model, VWAP strategies are analysed in the presence of permanent and temporary impact in order to provide a closed-form solution of the problem. First, following Olivier Guéant's treatment in "The Financial Mathematics of Market Liquidity", the problem boils down to finding an optimal deterministic control minimizing a functional, using tools from the calculus of variations. Second, going beyond the deterministic case, the results of Alvaro Cartea, Sebastian Jaimungal and José Penalva in "Algorithmic and High-Frequency Trading" yield two explicit closed-form optimal execution strategies targeting VWAP, under general assumptions about the stochastic process followed by the traded volume. In short, the main goal of this dissertation is to examine the similarities and differences between the results obtained by the two approaches.
4

Costa, Oswaldo Luiz do Valle. "Approximations for optimal stopping and impulsive control of piecewise-deterministic processes". Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38271.

5

Lange, Dirk Klaus. "Cost optimal control of Piecewise Deterministic Markov Processes under partial observation". Thesis, supervised by N. Bäuerle. Karlsruhe: KIT-Bibliothek, 2017. http://d-nb.info/1132997739/34.

6

Sainvil, Watson. "Contrôle optimal et application aux énergies renouvelables". Electronic thesis or dissertation, Antilles, 2023. http://www.theses.fr/2023ANTI0894.

Abstract:
Today, electricity is the easiest form of energy to exploit in the world. However, producing it from fossil sources such as oil, coal, and natural gas is the main cause of global warming, since it releases massive amounts of greenhouse gases into the environment. We therefore need an alternative, and fast. Near-daily sunshine and abundant wind should further favor the development of renewable energies. In this thesis, the main objective is to apply optimal control theory to renewable energies in order to convince decision makers, through mathematical studies, to switch to them. First, we develop a deterministic case based on what has already been done on the transition from fossil fuels to renewable energies, in which we formulate two case studies. The first deals with an optimal control problem involving the transition from oil to solar energy. The second deals with an optimal control problem involving the transition from oil to solar and wind energies. Then, we develop a stochastic part in which we treat a stochastic control problem whose objective is to take into account the random nature of solar energy production, since sufficient daily sunshine cannot be guaranteed.
7

Schlosser, Rainer. "Six essays on stochastic and deterministic dynamic pricing and advertising models". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2014. http://dx.doi.org/10.18452/16973.

Abstract:
The cumulative dissertation deals with stochastic and deterministic dynamic sales models for durable as well as perishable products. The models analyzed are characterized by simultaneous dynamic pricing and advertising controls in continuous time and are in line with recent developments in dynamic pricing. They include the modeling of multi-dimensional decisions and explicitly take (i) time dependencies, (ii) adoption effects, (iii) competitive settings, and (iv) risk aversion into account. For special cases with isoelastic as well as exponential demand functions, explicit solution formulas for the optimal pricing and advertising feedback controls are derived. Moreover, optimally controlled sales processes are described analytically. In particular, the distribution of profits and the expected evolution of prices and inventory levels are analyzed in detail, and sensitivity results are obtained. Furthermore, we consider the question of whether monopolistic policies are socially efficient; for special cases, we propose taxation/subsidy mechanisms to establish efficiency. The results are presented in six articles and provide economic insights into a variety of dynamic sales applications in the business world, especially in the area of e-commerce.
8

Tan, Yang. "Optimal Discrete-in-Time Inventory Control of a Single Deteriorating Product with Partial Backlogging". Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3711.

Abstract:
The implicit assumption in conventional inventory models is that stored products maintain the same utility forever, i.e., they can be stored for an infinite period of time without losing their value or characteristics. In reality, however, almost all products experience some sort of deterioration over time. Some products have very small deterioration rates, so the effect of such deterioration can be neglected, while others may be subject to significant deterioration. Fruits, vegetables, drugs, alcohol, and radioactive materials are examples that can deteriorate significantly during storage. The effect of deterioration must therefore be explicitly taken into account when developing inventory models for such products. In most existing deteriorating-inventory models, time is treated as a continuous variable, which is not exactly the case in practice: in real-life problems the time factor is measured on a discrete scale, i.e., in complete units of days, weeks, etc. In this research, we present several discrete-in-time inventory models and identify optimal ordering policies for a single deteriorating product by minimizing the expected overall costs over the planning horizon. Various conditions are considered, e.g., periodic review, time-varying deterioration rates, waiting-time-dependent partial backlogging, time-dependent demand, and stochastic demand. The objective of our research is twofold: (a) to obtain optimal order quantities and useful insights for the inventory control of a single deteriorating product over a discrete time horizon with deterministic demand, variable deterioration rates, and waiting-time-dependent partial backlogging ratios; and (b) to identify an optimal ordering policy for a single deteriorating product over a finite horizon with stochastic demand and partial backlogging. Explicit ordering policies are developed for some special cases. Through computational experiments and sensitivity analysis, a thorough and insightful understanding of deteriorating-inventory management is achieved.
9

Joubaud, Maud. "Processus de Markov déterministes par morceaux branchants et problème d'arrêt optimal, application à la division cellulaire". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS031/document.

Abstract:
Piecewise deterministic Markov processes (PDMPs) form a large class of stochastic processes characterized by a deterministic evolution between random jumps. They fall into the class of hybrid processes, with a discrete mode and a Euclidean component (called the state variable). Between jumps, the continuous component evolves deterministically; then a jump occurs and a Markov kernel selects the new value of the discrete and continuous components. In this thesis, we extend the construction of PDMPs to state variables taking values in measure spaces of infinite dimension, in order to model cell populations while keeping track of the characteristics of each cell. We present our construction of measure-valued PDMPs and establish their Markov property. For these processes, we study an optimal stopping problem, which consists in finding the best admissible stopping time so as to optimize the expectation of some functional of the process, called the value function. We show that this value function can be constructed recursively using dynamic programming equations, and we construct a family of ε-optimal stopping times. We then study a simple finite-dimensional real-valued PDMP, the TCP process. We use an Euler scheme to approximate it and estimate several types of errors. We illustrate the results with numerical simulations.
10

Geeraert, Alizée. "Contrôle optimal stochastique des processus de Markov déterministes par morceaux et application à l'optimisation de maintenance". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0602/document.

Abstract:
We are interested in a discounted impulse control problem with infinite horizon for piecewise deterministic Markov processes (PDMPs). In the first part, we model the evolution of an optronic system by PDMPs. To optimize the maintenance of this equipment, we study an impulse control problem in which both the maintenance costs and the unavailability cost for the client are considered. We then apply a numerical method for approximating the value function associated with the impulse control problem, which relies on the quantization of PDMPs. The influence of the parameters on the numerical results is discussed. In the second part, we extend the theoretical study of the impulse control problem by explicitly building a family of ε-optimal strategies. This approach is based on the iteration of a single-jump-or-intervention operator associated with the PDMP and relies on the theory of optimal stopping of piecewise deterministic Markov processes by U. S. Gugerli. In the present situation, the main difficulty consists in approximating the best position after each intervention, which is done by introducing a new operator. The originality of the proposed approach is that the constructed ε-optimal strategies are explicit, in the sense that they do not require the preliminary resolution of complex problems.
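Dissertation 2 above (Johnson) recovers cost-function weights by minimizing how much observed trajectories violate first-order necessary optimality conditions, with the cost written as a linear combination of known basis functions. The following is a heavily simplified, hypothetical sketch of that idea for a static problem: it drops the dynamics and costates handled by the full method, and the basis functions and data are made up for illustration.

# Simplified residual-based inverse optimal control: find unit-norm weights c
# such that observed controls approximately satisfy the stationarity condition
#   sum_i c_i * d(phi_i)/du (x, u) = 0
# (hypothetical basis functions and data; costate dynamics are omitted).

import numpy as np

def recover_weights(grad_u_phi, observations):
    """grad_u_phi(x, u) returns an (n_u, n_basis) matrix of basis gradients in u.
    Returns the unit-norm weight vector minimizing the summed squared
    stationarity residual over all observed (x, u) pairs."""
    M = sum(A.T @ A for A in (grad_u_phi(x, u) for x, u in observations))
    eigenvalues, eigenvectors = np.linalg.eigh(M)
    return eigenvectors[:, 0]      # eigenvector of the smallest eigenvalue

# Toy example: true cost 1.0 * u**2 + 2.0 * x * u, whose minimizer is u = -x.
def grad_u_phi(x, u):
    # basis phi_1 = u**2, phi_2 = x * u  ->  gradients in u: [2u, x]
    return np.array([[2.0 * u, x]])

observations = [(x, -x) for x in np.linspace(-1.0, 1.0, 11)]
c = recover_weights(grad_u_phi, observations)
print(c / c[0])                    # approximately [1.0, 2.0], the true weight ratio

Because the stationarity residual is linear in the weights, the minimizer over unit-norm weight vectors is simply the eigenvector of the accumulated Gram matrix associated with its smallest eigenvalue.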
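Dissertation 3 above (Laera) studies VWAP execution, in which the trader trades along with market volume so that the average execution price tracks the VWAP benchmark. The sketch below shows only the elementary volume-proportional schedule behind that idea; the volume curve is hypothetical, and the closed-form impact-aware solutions derived in the dissertation are not reproduced here.

# Volume-proportional ("VWAP-tracking") execution schedule: trade in each
# period in proportion to the expected market volume so that the average
# execution price tracks the VWAP benchmark (hypothetical numbers).

import numpy as np

def vwap_schedule(total_shares, expected_volume):
    """Split total_shares across periods proportionally to expected volume."""
    weights = np.asarray(expected_volume, dtype=float)
    weights /= weights.sum()
    return total_shares * weights

expected_volume = [120, 80, 60, 70, 90, 140]   # hypothetical U-shaped intraday profile
print(vwap_schedule(10_000, expected_volume))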

Books on the topic "Deterministic optimal control"

1

Jadamba, Baasansuren, Akhtar A. Khan, Stanisław Migórski, and Miguel Sama. Deterministic and Stochastic Optimal Control and Inverse Problems. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003050575.

2

Carlson, D. A. Infinite horizon optimal control: Deterministic and stochastic systems. 2nd ed. Berlin: Springer-Verlag, 1991.

3

Carlson, Dean A. Infinite Horizon Optimal Control: Deterministic and Stochastic Systems. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991.

4

Mordukhovich, Boris S., and Hector J. Sussmann, eds. Nonsmooth Analysis and Geometric Methods in Deterministic Optimal Control. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4613-8489-2.

5

Optimal design of control systems: Stochastic and deterministic problems. New York: M. Dekker, 1999.

6

Mordukhovich, B. Sh., and Hector J. Sussmann, eds. Nonsmooth analysis and geometric methods in deterministic optimal control. New York: Springer, 1996.

7

Fleming, Wendell H. Deterministic and Stochastic Optimal Control. Springer, 2012.

8

Fleming, Wendell H., and Raymond W. Rishel. Deterministic and Stochastic Optimal Control. Springer London, Limited, 2012.

9

Moyer, H. Gardner. Deterministic Optimal Control: An Introduction for Scientists. Trafford Publishing, 2006.

10

Khan, Akhtar A., Baasansuren Jadamba, Stanislaw Migorski, and Miguel Angel Sama Meige. Deterministic and Stochastic Optimal Control and Inverse Problems. Taylor & Francis Group, 2021.


Book chapters on the topic "Deterministic optimal control"

1

Bensoussan, Alain. "Deterministic Optimal Control". In Interdisciplinary Applied Mathematics, 215–47. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75456-7_10.

2

Seierstad, Atle. "Piecewise Deterministic Optimal Control Problems". In Stochastic Control in Discrete and Continuous Time, 1–70. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76617-1_3.

3

de Saporta, Benoîte, François Dufour, and Huilong Zhang. "Optimal Impulse Control". In Numerical Methods for Simulation and Optimization of Piecewise Deterministic Markov Processes, 231–67. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781119145066.ch10.

4

Dontchev, A. L. "Discrete Approximations in Optimal Control". In Nonsmooth Analysis and Geometric Methods in Deterministic Optimal Control, 59–80. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4613-8489-2_3.

5

Zoppoli, Riccardo, Marcello Sanguineti, Giorgio Gnecco, and Thomas Parisini. "Deterministic Optimal Control over a Finite Horizon". In Neural Approximations for Optimal Control and Decision, 255–98. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29693-3_6.

6

Costa, O. L. V., and F. Dufour. "Optimal Control of Piecewise Deterministic Markov Processes". In Stochastic Analysis, Filtering, and Stochastic Optimization, 53–77. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98519-6_3.

7

Bressan, Alberto. "Impulsive Control Systems". In Nonsmooth Analysis and Geometric Methods in Deterministic Optimal Control, 1–22. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4613-8489-2_1.

8

Filatova, Darya. "Optimal Control Strategies for Stochastic/Deterministic Bioeconomic Models". In Mathematics in Industry, 537–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-25100-9_62.

9

Chen, Lijun, Na Li, Libin Jiang, and Steven H. Low. "Optimal Demand Response: Problem Formulation and Deterministic Case". In Control and Optimization Methods for Electric Smart Grids, 63–85. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-1605-0_3.

10

Zolezzi, Tullio. "Well Posed Optimal Control Problems: A Perturbation Approach". In Nonsmooth Analysis and Geometric Methods in Deterministic Optimal Control, 239–46. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4613-8489-2_11.


Conference papers on the topic "Deterministic optimal control"

1

Zamani, Mohammad, Jochen Trumpf, and Robert Mahony. "Near-optimal deterministic attitude filtering". In 2010 49th IEEE Conference on Decision and Control (CDC). IEEE, 2010. http://dx.doi.org/10.1109/cdc.2010.5717043.

2

Li, Yuchao, Karl H. Johansson, Jonas Martensson, and Dimitri P. Bertsekas. "Data-driven Rollout for Deterministic Optimal Control". In 2021 60th IEEE Conference on Decision and Control (CDC). IEEE, 2021. http://dx.doi.org/10.1109/cdc45484.2021.9683499.

3

Barles, G., and B. Perthame. "Discontinuous viscosity solutions of deterministic optimal control problems". In 1986 25th IEEE Conference on Decision and Control. IEEE, 1986. http://dx.doi.org/10.1109/cdc.1986.267221.

4

Coote, Paul, Jochen Trumpf, Robert Mahony, and Jan C. Willems. "Near-optimal deterministic filtering on the unit circle". In 2009 Joint 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference (CCC). IEEE, 2009. http://dx.doi.org/10.1109/cdc.2009.5399999.

5

Liao, Y., and S. Lenhart. "Optimal control of piecewise-deterministic processes with discrete control actions". In 1985 24th IEEE Conference on Decision and Control. IEEE, 1985. http://dx.doi.org/10.1109/cdc.1985.268634.

6

Basin, Michael V., and Irma R. Valadez Guzman. "Optimal controller for integral Volterra systems with deterministic uncertainties". In 2001 European Control Conference (ECC). IEEE, 2001. http://dx.doi.org/10.23919/ecc.2001.7075973.

7

Basin, Michael, and Dario Calderon-Alvarez. "Optimal controller for uncertain stochastic polynomial systems with deterministic disturbances". In 2009 American Control Conference. IEEE, 2009. http://dx.doi.org/10.1109/acc.2009.5160068.

8

Tsumura, Koji. "Optimal Quantizer for Mixed Probabilistic/Deterministic Parameter Estimation". In Proceedings of the 45th IEEE Conference on Decision and Control. IEEE, 2006. http://dx.doi.org/10.1109/cdc.2006.376940.

9

"Cost-Optimal Strong Planning in Non-deterministic Domains". In 8th International Conference on Informatics in Control, Automation and Robotics. SciTePress - Science and Technology Publications, 2011. http://dx.doi.org/10.5220/0003448200560066.

10

Sun, Jin-gen, Li-jun Fu, Zhi-gang Huang, and Dong-sheng Wu. "The Method of Deterministic Optimal Control with Box Constraints". In 2010 3rd International Conference on Intelligent Networks and Intelligent Systems (ICINIS). IEEE, 2010. http://dx.doi.org/10.1109/icinis.2010.37.
