To view the other types of publications on this topic, follow this link: Stochastic control theory.

Dissertations on the topic "Stochastic control theory"

Familiarize yourself with the top 50 dissertations for research on the topic "Stochastic control theory".

Next to each work in the list, the option "Add to bibliography" is available. If you use it, the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Beard, Rodney. „Ito stochastic control theory, stochastic differential games and the economic theory of mobile pastoralism /“. [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18631.pdf.

2

Hunt, K. J. „Stochastic optimal control theory with application in self-tuning control“. Thesis, University of Strathclyde, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382399.

3

Cao, Jie. „Stochastic inventory control in dynamic environments“. [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011469.

4

Brand, Samuel P. C. „Spatial and stochastic epidemics : theory, simulation and control“. Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/56738/.

Annotation:
It is now widely acknowledged that spatial structure, and hence the spatial position of host populations, plays a vital role in the spread of infection. In this work I investigate an ensemble of techniques for understanding the stochastic dynamics of spatial and discrete epidemic processes, with special consideration given to SIR disease dynamics for the Levins-type metapopulation. I present a toolbox of techniques for the modeller of spatial epidemics. The highlighted results are a novel form of moment closure derived directly from a stochastic differential representation of the epidemic, a stochastic simulation algorithm that, asymptotically in system size, greatly outperforms existing simulation methods for the spatial epidemic, and finally a method for tackling optimal vaccination scheduling problems for controlling the spread of an invasive pathogen.
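The simulation algorithm developed in the thesis is not reproduced here; as a hedged point of reference for readers unfamiliar with exact stochastic simulation of epidemics, the sketch below implements the standard Gillespie algorithm for a single-patch SIR model. The rate parameters and initial state are illustrative assumptions.

```python
import random

def gillespie_sir(S, I, R, beta, gamma, t_max):
    """Exact (Gillespie) simulation of a single-patch stochastic SIR model.

    Events: infection at rate beta*S*I/N, recovery at rate gamma*I.
    Returns a list of (time, S, I, R) states.
    """
    N = S + I + R
    t, path = 0.0, [(0.0, S, I, R)]
    while t < t_max and I > 0:
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        t += random.expovariate(total)          # time to the next event
        if random.random() < rate_inf / total:  # infection event
            S, I = S - 1, I + 1
        else:                                   # recovery event
            I, R = I - 1, R + 1
        path.append((t, S, I, R))
    return path

# Illustrative run: 1000 hosts, one initial infective.
trajectory = gillespie_sir(S=999, I=1, R=0, beta=0.3, gamma=0.1, t_max=200.0)
```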
5

Hao, Xiao Qi. „The main development of stochastic control problems“. Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691355.

6

Zhou, Yulong. „Stochastic control and approximation for Boltzmann equation“. HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/392.

Annotation:
In this thesis we study two problems concerning probability. The first is a stochastic control problem, which essentially amounts to finding an optimal probability in order to optimize some reward function of the probability. The second is to approximate the solution of the Boltzmann equation. Thanks to conservation of mass, the solution can be regarded as a family of probabilities indexed by time. In the first part, we prove a dynamic programming principle for the stochastic optimal control problem with expectation constraint via a measurable selection approach. Since state constraints, drawdown constraints, target constraints, quantile hedging and floor constraints can all be reformulated as expectation constraints, we apply our results to prove the corresponding dynamic programming principles for these five classes of stochastic control problems in a continuous but non-Markovian setting. In order to solve the Boltzmann equation numerically, in the second part we propose a new model equation to approximate the Boltzmann equation without angular cutoff. The approximate equation combines the Boltzmann collision operator with angular cutoff and the Landau collision operator. As a first step, we prove the well-posedness theory for our approximate equation. In the next step, we establish the error estimate between the solutions of the approximate equation and of the original equation. Compared to the standard angular cutoff approximation method, our method yields a higher order of accuracy.
7

Damm, Tobias. „Rational matrix equations in stochastic control /“. Berlin [u.a.] : Springer, 2004. http://www.loc.gov/catdir/enhancements/fy0817/2003066858-d.html.

8

Zeryos, Mihail. „Bayesian pursuit analysis and singular stochastic control“. Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338932.

9

Kabouris, John C. „Stochastic control of the activated sludge process“. Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/20306.

10

Huang, Hui. „Optimal control of piecewise continuous stochastic processes“. Bonn : [s.n.], 1989. http://catalog.hathitrust.org/api/volumes/oclc/23831217.html.

11

Cheng, Tak Sum. „Stochastic optimal control in randomly-branching environments“. HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/713.

12

Pesonen, Joonas. „Stochastic Estimation and Control over WirelessHART Networks: Theory and Implementation“. Thesis, KTH, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105172.

Annotation:
There is currently high interest in replacing traditional wired networks with wireless technology. Wireless communication can provide several advantages for the process industries with respect to flexibility, maintenance and installation. The WirelessHART protocol provides a standardized wireless technology for large automation networks. However, wireless networks introduce time delays and losses into the communication system, which defines requirements for designing estimators and controllers that can tolerate and compensate for the losses and delays. This thesis consists of several contributions. First, we develop tools for analyzing the delay and loss probabilities in WirelessHART networks with unreliable transmission links. For a given network topology, routing and transmission schedule, the developed tools can be used to determine the latency distributions of individual packets and to quantify the probability that a packet will arrive within a prescribed deadline. Secondly, we consider estimation and control when sensor and control messages are sent over WirelessHART networks. The network losses and latencies are modelled and compensated for by time-varying Kalman filters and LQG controllers. Both optimal controllers, of high implementation complexity, and simple suboptimal schemes are considered. The control strategies are evaluated on a simulation model of a flotation process in a Boliden mine, where the wired sensors of the existing solution are replaced by a WirelessHART network scheduled for time-optimal data collection. Finally, we implement a WirelessHART-compliant sensor on a Tmote Sky device and perform real experiments of wireless control on a water tank process.
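The thesis couples its filters to a detailed WirelessHART loss and delay model; the sketch below is only a simplified illustration of the underlying idea, a scalar Kalman filter whose measurement update is skipped whenever a packet is lost (losses modelled as i.i.d. Bernoulli). All numerical values are assumptions chosen for illustration.

```python
import random

def kalman_with_losses(a, c, q, r, p_loss, x0, steps, seed=0):
    """Scalar Kalman filter with Bernoulli packet losses.

    State:       x[k+1] = a*x[k] + w[k],   w ~ N(0, q)
    Measurement: y[k]   = c*x[k] + v[k],   v ~ N(0, r), delivered w.p. 1 - p_loss
    When a packet is lost, only the time (prediction) update is performed.
    """
    rng = random.Random(seed)
    x, x_hat, p = x0, 0.0, 1.0
    estimates = []
    for _ in range(steps):
        # True system evolution and (possibly lost) measurement.
        x = a * x + rng.gauss(0.0, q ** 0.5)
        y = c * x + rng.gauss(0.0, r ** 0.5)
        # Prediction step.
        x_hat = a * x_hat
        p = a * p * a + q
        # Measurement update only if the packet arrives.
        if rng.random() > p_loss:
            k_gain = p * c / (c * p * c + r)
            x_hat = x_hat + k_gain * (y - c * x_hat)
            p = (1.0 - k_gain * c) * p
        estimates.append((x, x_hat))
    return estimates

# Illustrative run: stable plant, 20% packet loss.
traj = kalman_with_losses(a=0.95, c=1.0, q=0.01, r=0.04, p_loss=0.2, x0=1.0, steps=100)
```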
13

Hassan, Nofal Adrees. „Adaptive-stochastic identification of an idling automotive I.C. engine“. Thesis, University of Liverpool, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316554.

14

Zhang, Lei. „Stochastic optimal control and regime switching : applications in economics“. Thesis, University of Warwick, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387250.

15

Silva, Francisco Jose. „Interior penalty approximation for optimal control problems. Optimality conditions in stochastic optimal control theory“. Palaiseau, Ecole polytechnique, 2010. http://pastel.archives-ouvertes.fr/docs/00/54/22/95/PDF/tesisfjsilva.pdf.

Annotation:
This thesis is divided into two parts. In the first part we consider deterministic optimal control problems and study interior approximations for two model problems with non-negativity constraints. The first model is a quadratic optimal control problem governed by a nonautonomous affine ordinary differential equation. We provide a first-order expansion for the penalized state and adjoint state (around the corresponding state and adjoint state of the original problem), for a general class of penalty functions. Our main argument relies on the following fact: if the optimal control satisfies strict complementarity conditions for its Hamiltonian, except on a set of times of null Lebesgue measure, the functional estimates for the penalized optimal control problem can be derived from the estimates of a related finite-dimensional problem. Our results provide three types of measures to analyze the penalization technique: error estimates for the control, error estimates for the state and the adjoint state, and error estimates for the value function. The second model we study is the optimal control problem of a semilinear elliptic PDE with a Dirichlet boundary condition, where the control variable is distributed over the domain and is constrained to be non-negative. Following the same approach as for the first model, we consider an associated family of penalized problems, whose solutions define a central path converging to the solution of the original one. In this fashion, we are able to extend the results obtained in the ODE framework to the case of semilinear elliptic PDE constraints. In the second part of the thesis we consider stochastic optimal control problems. We begin with the study of a stochastic linear quadratic problem with non-negativity control constraints, and we extend the error estimates for the approximation by logarithmic penalization. The proof is based on the stochastic Pontryagin principle and a duality argument. Next, we deal with a general stochastic optimal control problem with convex control constraints. Using the variational approach, we obtain first- and second-order expansions for the state and the cost function around a local minimum. This analysis allows us to prove general first-order necessary conditions and, under a geometrical assumption on the constraint set, second-order necessary conditions are also established.
16

Liao, Jiali Banerjee Avijit Benson Hande Y. „A discretionary stopping problem in stochastic control: an application in credit exposure control /“. Philadelphia, Pa. : Drexel University, 2006. http://dspace.library.drexel.edu/handle/1860%20/888.

17

Ortiz, Olga L. „Stochastic inventory control with partial demand observability“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22551.

Annotation:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2008.
Committee Co-Chair: Alan L. Erera; Committee Co-Chair: Chelsea C. White III; Committee Member: Julie Swann; Committee Member: Paul Griffin; Committee Member: Soumen Ghosh.
18

Brown, Emma L. „On-line control of paper web formation using stochastic distribution theory“. Thesis, University of Manchester, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488349.

19

Chen, Hairong. „Dynamic admission and dispatching control of stochastic distribution systems /“. View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?IEEM%202003%20CHEN.

Annotation:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 117-130). Also available in electronic version. Access restricted to campus users.
20

Chen, Gong. „Schemes for using LQG control strategy in the design of regulators for stochastic systems“. Thesis, Cranfield University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305393.

21

Iourtchenko, Daniil V. „Optimal bounded control and relevant response analysis for random vibrations“. Link to electronic thesis, 2001. http://www.wpi.edu/Pubs/ETD/Available/etd-0525101-111407.

Annotation:
Thesis (Ph. D.)--Worcester Polytechnic Institute.
Keywords: Stochastic optimal control; dynamic programming; Hamilton-Jacobi-Bellman equation; Random vibration; energy balance method. Includes bibliographical references (p. 86-89).
22

Yang, Lin. „Linear robust H-infinity stochastic control theory on the insurance premium-reserve processes“. Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2037227/.

Annotation:
This thesis deals with the stability analysis of linear discrete-time premium-reserve (P-R) systems in a stochastic framework. Such systems are characterised by a mixture of the premium pricing process and the medium- and long-term stability of the accumulated reserve (surplus) policy, and they play a key role in the modern actuarial literature. Although the mathematical and practical analysis of P-R systems is well studied and motivated, their stability properties have not been studied thoroughly and have been restricted to a deterministic framework. In engineering, many useful techniques of linear robust control theory have been developed over the last three decades. This thesis is the first attempt to use tools from linear robust control theory to analyse the stability of these classical insurance systems. Analytically, P-R systems are first formulated with structural properties such as time-varying delays, random disturbances and parameter uncertainties. Then, extending the previous literature, results on stabilization and robust H-infinity control of P-R systems are derived in a stochastic framework. Meanwhile, the impact of risky investment on the P-R system stability condition is shown, and the potential effects of changes in the insurer's investment strategy are discussed. Next, we develop regime-switching P-R systems to describe abrupt structural changes in the economic fundamentals as well as periodic switches in the parameters. The results for the regime-switching P-R system are illustrated by means of two different approaches: Markovian and arbitrary regime-switching systems. Finally, we show how robust guaranteed cost control can be implemented to solve an optimal insurance problem. In each chapter, Linear Matrix Inequality (LMI) sufficient conditions are derived to solve the proposed sub-problems, and numerical examples are given to illustrate the applicability of the theoretical findings.
23

Basei, Matteo. „Topics in stochastic control and differential game theory, with application to mathematical finance“. Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424239.

Annotation:
We consider three problems in stochastic control and differential game theory, arising from practical situations in mathematical finance and energy markets. First, we address the problem of optimally exercising swing contracts in energy markets. Our main result consists in characterizing the value function as the unique viscosity solution of a Hamilton-Jacobi-Bellman equation. The case of contracts with penalties is straightforward. Conversely, the case of contracts with strict constraints gives rise to stochastic control problems where a non-standard integral constraint is present: we get the anticipated characterization by considering a suitable sequence of unconstrained problems. The approximation result is proved for a general class of problems with an integral constraint on the controls. Then, we consider a retailer who has to decide when and how to intervene and adjust the price of the energy he sells, in order to maximize his earnings. The intervention costs can be either fixed or depending on the market share. In the first case, we get a standard impulsive control problem and we characterize the value function and the optimal price policy. In the second case, classical theory cannot be applied, due to the singularities of the penalty function; we then outline an approximation argument and we finally consider stronger conditions on the controls to characterize the optimal policy. Finally, we focus on a general class of non-zero-sum stochastic differential games with impulse controls. After defining a rigorous framework for such problems, we prove a verification theorem: if a couple of functions is regular enough and satisfies a suitable system of quasi-variational inequalities, it coincides with the value functions of the problem and a characterization of the Nash equilibria is possible. We conclude by a detailed example: we investigate the existence of equilibria in the case where two countries, with different goals, can affect the exchange rate between the corresponding currencies.
24

Evans, Martin A. „Multiplicative robust and stochastic MPC with application to wind turbine control“. Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:0ad9b878-00f3-4cfa-a683-148765e3ae39.

Annotation:
A robust model predictive control algorithm is presented that explicitly handles multiplicative, or parametric, uncertainty in linear discrete models over a finite horizon. The uncertainty in the predicted future states and inputs is bounded by polytopes. The computational cost of running the controller is reduced by calculating matrices offline that provide a means to construct outer approximations to robust constraints to be applied online. The robust algorithm is extended to problems of uncertain models with an allowed probability of violation of constraints. The probabilistic degrees of satisfaction are approximated by one-step ahead sampling, with a greedy solution to the resulting mixed integer problem. An algorithm is given to enlarge a robustly invariant terminal set to exploit the probabilistic constraints. Exponential basis functions are used to create a Robust MPC algorithm for which the predictions are defined over the infinite horizon. The control degrees of freedom are weights that define the bounds on the state and input uncertainty when multiplied by the basis functions. The controller handles multiplicative and additive uncertainty. Robust MPC is applied to the problem of wind turbine control. Rotor speed and tower oscillations are controlled by a low sample rate robust predictive controller. The prediction model has multiplicative and additive uncertainty due to the uncertainty in short-term future wind speeds and in model linearisation. Robust MPC is compared to nominal MPC by means of a high-fidelity numerical simulation of a wind turbine under the two controllers in a wide range of simulated wind conditions.
25

Mohamad-Than, Mohamad Nor. „The stability of control systems employing Kalman filters as stochastic observers in the state variable feedback configurations“. Thesis, University of Reading, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333408.

26

Bruzzone, Andrea. „P-SGLD : Stochastic Gradient Langevin Dynamics with control variates“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-140121.

Annotation:
Year after year, the amount of data that we continuously generate keeps increasing. When this situation started, the main challenge was to find a way to store the huge quantity of information. Nowadays, with the increasing availability of storage facilities, this problem is solved, but it leaves us with a new issue: finding tools that allow us to learn from these large data sets. In this thesis, a framework for Bayesian learning with the ability to scale to large data sets is studied. We present the Stochastic Gradient Langevin Dynamics (SGLD) framework and show that in some cases its approximation of the posterior distribution is quite poor. A reason for this can be that SGLD estimates the gradient of the log-likelihood with high variability due to naïve sampling. Our approach combines accurate proxies for the gradient of the log-likelihood with SGLD. We show that it produces better results in terms of convergence to the correct posterior distribution than standard SGLD, since accurate proxies dramatically reduce the variance of the gradient estimator. Moreover, we demonstrate that this approach is more efficient than the standard Markov Chain Monte Carlo (MCMC) method and that it outperforms other variance reduction techniques proposed in the literature, such as the SAGA-LD algorithm, which also uses control variates to improve SGLD and therefore allows a straightforward comparison with our approach. We apply the method to the logistic regression model.
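The exact P-SGLD algorithm studied in the thesis is not reproduced here; the sketch below only illustrates the control-variate idea on a toy Gaussian-mean model, replacing the naïve minibatch gradient with an estimate anchored at a fixed reference point theta_hat. The model, step size and reference point are illustrative assumptions.

```python
import math
import random

def sgld_cv(data, theta_hat, eps=1e-4, batch=32, iters=5000, prior_var=10.0, seed=1):
    """SGLD with a control-variate gradient estimate for a Gaussian mean model.

    Model (illustrative): x_i ~ N(theta, 1), prior theta ~ N(0, prior_var).
    The minibatch gradient is anchored at a reference point theta_hat
    (e.g. an approximate MAP), which reduces the variance of the estimator.
    """
    rng = random.Random(seed)
    n = len(data)
    # Full-data gradient of the log-likelihood at the reference point (computed once).
    g_ref = sum(x - theta_hat for x in data)
    theta = theta_hat
    samples = []
    for _ in range(iters):
        idx = [rng.randrange(n) for _ in range(batch)]
        # Control-variate estimate: g_ref + (n/batch) * sum of gradient differences.
        diff = sum((data[i] - theta) - (data[i] - theta_hat) for i in idx)
        grad_lik = g_ref + (n / batch) * diff
        grad_prior = -theta / prior_var
        theta += 0.5 * eps * (grad_prior + grad_lik) + math.sqrt(eps) * rng.gauss(0.0, 1.0)
        samples.append(theta)
    return samples

# Illustrative data; the sample mean serves as a cheap reference point.
data = [random.gauss(2.0, 1.0) for _ in range(10000)]
draws = sgld_cv(data, theta_hat=sum(data) / len(data))
```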
27

Marwah, Gaurav. „Algorithms for stochastic finite memory control of partially observable systems“. Master's thesis, Mississippi State : Mississippi State University, 2005. http://library.msstate.edu/etd/show.asp?etd=etd-07082005-132056.

28

Milisavljevic, Mile. „Information driven optimization methods in control systems, signal processing, telecommunications and stochastic finance“. Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/14912.

29

Huang, Xin. „A study on the application of machine learning algorithms in stochastic optimal control“. Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252541.

Annotation:
By observing a similarity between the goal of stochastic optimal control, to minimize an expected cost functional, and the aim of machine learning, to minimize an expected loss function, a method of applying machine learning algorithms to approximate the optimal control function is established and implemented via neural approximation. Based on a discretization framework, a recursive formula for the gradient of the approximated cost functional with respect to the parameters of the neural network is derived. For a well-known Linear-Quadratic-Gaussian control problem, the approximated neural network function obtained with the stochastic gradient descent algorithm manages to reproduce the shape of the theoretical optimal control function, and applying different types of machine learning optimization algorithms gives very similar accuracy in terms of the associated empirical value function. Furthermore, it is shown that the accuracy and stability of the machine learning approximation can be improved by increasing the size of the minibatch and applying a finer discretization scheme. These results suggest the effectiveness and appropriateness of applying machine learning algorithms to stochastic optimal control.
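The thesis derives a dedicated gradient recursion for its neural network controller, which is not reproduced here. As a hedged sketch of the general idea only, the example below parameterizes a linear feedback control for a scalar discrete-time LQG problem and tunes the gain by stochastic gradient descent on simulated cost, estimating the gradient with finite differences and common random numbers. All parameter values are illustrative.

```python
import random

def simulate_cost(k, a=1.0, b=1.0, q=1.0, r=0.1, sigma=0.1, horizon=50, rng=None):
    """Average quadratic cost of the linear feedback u = -k*x on a scalar LQG system."""
    rng = rng or random.Random(0)
    x, cost = 1.0, 0.0
    for _ in range(horizon):
        u = -k * x
        cost += q * x * x + r * u * u
        x = a * x + b * u + rng.gauss(0.0, sigma)
    return cost / horizon

def sgd_gain(k0=0.0, lr=0.01, iters=500, fd=1e-2, seed=0):
    """Tune the feedback gain by stochastic gradient descent, estimating the
    gradient with central finite differences and common random numbers."""
    rng = random.Random(seed)
    k = k0
    for _ in range(iters):
        shared_seed = rng.randrange(10**6)
        c_plus = simulate_cost(k + fd, rng=random.Random(shared_seed))
        c_minus = simulate_cost(k - fd, rng=random.Random(shared_seed))
        k -= lr * (c_plus - c_minus) / (2 * fd)
    return k

# The learned gain should drift toward the LQR-optimal gain for these parameters.
print(sgd_gain())
```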
30

Hochart, Antoine. „Nonlinear Perron-Frobenius theory and mean-payoff zero-sum stochastic games“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX079/document.

Annotation:
Zero-sum stochastic games have a recursive structure encompassed in their dynamic programming operator, the so-called Shapley operator. The latter is a useful tool to study the asymptotic behavior of the average payoff per time unit. In particular, the mean payoff exists and is independent of the initial state as soon as the ergodic equation - a nonlinear eigenvalue equation involving the Shapley operator - has a solution. The solvability of the latter equation in finite dimension is a central question in nonlinear Perron-Frobenius theory, and the main focus of the present thesis. Several known classes of Shapley operators can be characterized by properties based entirely on the order structure or the metric structure of the space. We first extend this characterization to "payment-free" Shapley operators, that is, operators arising from games without stage payments. This is derived from a general minimax formula for functions that are homogeneous of degree one and nonexpansive with respect to a given weak Minkowski norm. Next, we address the problem of the solvability of the ergodic equation for all additive perturbations of the payment function. This problem extends the notion of ergodicity for finite Markov chains. With a bounded payment function, this "ergodicity" property is characterized by the uniqueness, up to the addition of a constant, of the fixed point of a payment-free Shapley operator. We give a combinatorial solution in terms of hypergraphs to this problem, as well as to other related problems of fixed-point existence, and we infer complexity results. Then, we use the theory of accretive operators to generalize the hypergraph condition to all Shapley operators, including ones for which the payment function is not bounded. Finally, we consider the problem of uniqueness, up to the addition of a constant, of the nonlinear eigenvector. We first show that uniqueness holds for a generic additive perturbation of the payments. Then, in the framework of perfect information and finite action spaces, we provide an additional geometric description of the perturbations for which uniqueness occurs. As an application, we obtain a perturbation scheme allowing one to solve degenerate instances of stochastic games by policy iteration.
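For readers unfamiliar with the ergodic equation mentioned above, it is the additive (nonlinear) eigenvalue problem shown below, written in standard notation that is assumed here rather than taken from the thesis: T is the Shapley operator, λ the mean payoff and u the bias vector.

```latex
% Ergodic equation for a Shapley operator T : R^n -> R^n.
% \lambda is the mean payoff, u the bias (relative value) vector,
% and e = (1, \dots, 1)^\top the unit vector.
T(u) = \lambda \, e + u
```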
31

Shackleton, Mark Broughton. „Frequency domain and stochastic control theory applied to volatility and pricing in intraday financial data“. Thesis, London Business School (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299700.

32

Laliotis, Dimitrios. „Financial time series prediction and stochastic control of trading decisions in the fixed income markets“. Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243831.

33

Fang, Fang. „A simulation study for Bayesian hierarchical model selection methods“. View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/fangf/fangfang.pdf.

34

Simms, Amy E. „A Stochastic Approach to Modeling Aviation Security Problems Using the KNAPSACK Problem“. Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36806.

Annotation:
Designers, operators, and users of multiple-device, access control security systems are challenged by the false alarm, false clear tradeoff. Given a particular access control security system, and a prespecified false clear standard, there is an optimal (minimal) false alarm rate that can be achieved. The objective of this research is to develop methods that can be used to determine this false alarm rate. Meeting this objective requires knowledge of the joint conditional probability density functions for the security device responses. Two sampling procedures, the static grid estimation procedure and the dynamic grid estimation procedure, are proposed to estimate these functions. The concept of a system response function is introduced and the problem of determining the optimal system response function that minimizes the false alarm rate, while meeting the false clear standard, is formulated as a decision problem and proven to be NP-complete. Two heuristic procedures, the Greedy algorithm and the Dynamic Programming algorithm, are formulated to address this problem. Computational results using simulated security data are reported. These results are compared to analytical results, obtained for a prespecified system response function form. Suggestions for future research are also included.
Master of Science
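The thesis proves its device-selection problem NP-complete and attacks it with greedy and dynamic programming heuristics. Purely as a point of reference for the classical problem named in the title (and not the thesis's formulation), a textbook 0/1 knapsack dynamic program looks as follows.

```python
def knapsack(values, weights, capacity):
    """Textbook 0/1 knapsack dynamic program.

    dp[w] holds the best total value achievable with total weight <= w.
    Runs in O(n * capacity) time.
    """
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

# Illustrative instance.
print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```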
35

Yucelen, Tansel. „Advances in adaptive control theory: gradient- and derivative-free approaches“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43731.

Annotation:
In this dissertation, we present new approaches to improve standard designs in adaptive control theory, and novel adaptive control architectures. We first present a novel Kalman filter based approach for approximately enforcing a linear constraint in standard adaptive control design. One application is that this leads to alternative forms for well known modification terms such as e-modification. In addition, it leads to smaller tracking errors without incurring significant oscillations in the system response and without requiring high modification gain. We derive alternative forms of e- and adaptive loop recovery (ALR-) modifications. Next, we show how to use Kalman filter optimization to derive a novel adaptation law. This results in an optimization-based time-varying adaptation gain that reduces the need for adaptation gain tuning. A second major contribution of this dissertation is the development of a novel derivative-free, delayed weight update law for adaptive control. The assumption of constant unknown ideal weights is relaxed to the existence of time-varying weights, such that fast and possibly discontinuous variation in weights are allowed. This approach is particularly advantageous for applications to systems that can undergo a sudden change in dynamics, such as might be due to reconfiguration, deployment of a payload, docking, or structural damage, and for rejection of external disturbance processes. As a third and final contribution, we develop a novel approach for extending all the methods developed in this dissertation to the case of output feedback. The approach is developed only for the case of derivative-free adaptive control, and the extension of the other approaches developed previously for the state feedback case to output feedback is left as a future research topic. The proposed approaches of this dissertation are illustrated in both simulation and flight test.
36

Larrañaga, Maialen. „Dynamic control of stochastic and fluid resource-sharing systems“. Thesis, Toulouse, INPT, 2015. http://www.theses.fr/2015INPT0075/document.

Annotation:
In this thesis we study the dynamic control of resource-sharing systems that arise in various domains, e.g. inventory management, healthcare and communication networks. We aim at efficiently allocating the available resources among competing projects according to a certain performance criterion. These types of problems are stochastic in nature and may be very complex to solve. We therefore focus on developing well-performing heuristics. In Part I, we consider the framework of Restless Bandit Problems, which is a general class of dynamic stochastic optimization problems. Relaxing the sample-path constraint in the optimization problem makes it possible to define an index-based heuristic for the original constrained model, the so-called Whittle index policy. We derive a closed-form expression for the Whittle index as a function of the steady-state probabilities for the case in which bandits (projects) evolve in a birth-and-death fashion. This expression requires several technical conditions to be verified, and in addition, it can only be computed explicitly in specific cases. In the particular case of a multi-class abandonment queue, we further prove that the Whittle index policy is asymptotically optimal in the light-traffic and heavy-traffic regimes. In Part II, we derive heuristics by approximating the stochastic resource-sharing systems with deterministic fluid models. We first formulate a fluid version of the relaxed optimization problem introduced in Part I, and we develop a fluid index policy. The fluid index can always be computed explicitly and hence overcomes the technical issues that arise when calculating the Whittle index. We apply the Whittle index and the fluid index policies to several systems, e.g. power-aware server farms, opportunistic scheduling in wireless systems, and make-to-stock problems with perishable items. We show numerically that both index policies are nearly optimal. Secondly, we study the optimal scheduling control for the fluid version of a multi-class abandonment queue. We derive the fluid optimal control when there are two classes of customers competing for a single resource. Based on the insights provided by this result, we build a heuristic for the general multi-class setting. This heuristic shows near-optimal performance when applied to the original stochastic model for high workloads. In Part III, we further investigate the abandonment phenomenon in the context of a content delivery problem. We characterize an optimal grouping policy so that requests, which are impatient, are efficiently transmitted in multicast mode.
37

Fang, Qijun. „Model search strategy when P >> N in Bayesian hierarchical setting“. View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/fangq/qijunfang.pdf.

38

Herzog, David Paul. „Geometry's Fundamental Role in the Stability of Stochastic Differential Equations“. Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145150.

Annotation:
We study dynamical systems in the complex plane under the effect of constant noise. We show for a wide class of polynomial equations that the ergodic property is valid in the associated stochastic perturbation if and only if the noise added is in the direction transversal to all unstable trajectories of the deterministic system. This has the interpretation that noise in the "right" direction prevents the process from being unstable: a fundamental, but not well-understood, geometric principle which seems to underlie many other similar equations. The result is proven by using Lyapunov functions and geometric control theory.
39

Souto, Rafael Fontes 1984. „Processos de difusão controlada = um estudo sobre sistemas em que a variação do controle aumenta a incerteza“. [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259267.

Annotation:
Advisor: João Bosco Ribeiro do Val
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This dissertation presents a framework for continuous-time stochastic systems in which variations of the control action increase the uncertainty about the state. This type of system can be applied in several areas of science and engineering, owing to its ability to model complex stochastic systems whose dynamics are not completely known. Controlled Itô diffusion processes are used to describe the state trajectory, and the optimization is carried out by the dynamic programming method, which requires solving the Hamilton-Jacobi-Bellman equation. In addition, tools from nonsmooth analysis indicate the existence of a region in the state space in which the optimal control action consists of not varying the control, no matter what the previous control was. Intuitively, this result is expected from the cautionary nature of controlling underdetermined systems. Finally, the particular case of a system with quadratic running costs is studied analytically. This study reveals that the technique developed allows the optimal solution to be computed in a simple and effective way for the asymptotic behavior of the system. This feature of the solution helps in obtaining the complete solution of the problem by means of numerical approximations.
Master's degree
Automation
Master in Electrical Engineering
40

Kelome, Djivèdé Armel. „Viscosity solutions of second order equations in a separable Hilbert space and applications to stochastic optimal control“. Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/29159.

41

Bountourelis, Theologos. „Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digraphs“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28144.

Annotation:
Thesis (M. S.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Reveliotis, Spyros; Committee Member: Ayhan, Hayriye; Committee Member: Goldsman, Dave; Committee Member: Shamma, Jeff; Committee Member: Zwart, Bert.
42

Chen, Si. „Design of Energy Storage Controls Using Genetic Algorithms for Stochastic Problems“. UKnowledge, 2015. http://uknowledge.uky.edu/ece_etds/80.

Annotation:
A successful power system in military applications (warships, aircraft, armored vehicles, etc.) must operate acceptably under a wide range of conditions involving different loading configurations; it must maintain war-fighting ability and recover quickly and stably after being damaged. The introduction of energy storage into the power system of an electric warship integrated engineering plant (IEP) may increase the availability and survivability of electrical power under these conditions. Herein, the problem of energy storage control is addressed in terms of maximizing average performance. A notional medium-voltage dc system is used as the system model in the study. A linear programming model is used to simulate the power system, and two sets of states, mission states and damage states, are formulated to simulate the stochastic scenarios with which the IEP may be confronted. A genetic algorithm is applied to the design of the IEP to find optimized energy storage control parameters. Using this algorithm, the maximum average performance of the power system is found.
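The IEP performance model of the thesis is not reproduced here; the sketch below only illustrates how a genetic algorithm searches a real-valued parameter space, with a stand-in fitness function and illustrative hyperparameters.

```python
import random

def genetic_search(fitness, dim, pop_size=40, generations=100, mut_sigma=0.1, seed=0):
    """Minimal real-coded genetic algorithm (tournament selection, uniform
    crossover, Gaussian mutation) that maximizes the given fitness function."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        next_pop = ranked[:2]  # elitism: carry the two best individuals over
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(ranked, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(ranked, 3), key=fitness)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [g + rng.gauss(0.0, mut_sigma) for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Stand-in fitness function: "performance" peaks when every parameter equals 0.5.
best = genetic_search(lambda params: -sum((p - 0.5) ** 2 for p in params), dim=4)
```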
43

Wang, Wen-Kai. „Application of stochastic differential games and real option theory in environmental economics“. Thesis, University of St Andrews, 2009. http://hdl.handle.net/10023/893.

Annotation:
This thesis presents several problems based on papers written jointly by the author and Dr. Christian-Oliver Ewald. Firstly, the author extends the model presented by Fershtman and Nitzan (1991), which studies a deterministic differential public good game. Two types of volatility are considered. In the first case the volatility of the diffusion term depends on the current level of the public good, while in the second case the volatility depends on the current rate of public good provision by the agents. The result in the latter case is qualitatively different from the first one. These results are discussed in detail, along with numerical examples. Secondly, two existing lines of research in game-theoretic studies of fisheries are combined and extended. The first line of research is the inclusion of the aspect of predation and the consideration of multi-species fisheries within classical game-theoretic fishery models. The second line of research includes continuous time and uncertainty. This thesis considers a two-species fishery game and compares its results with several cases. Thirdly, a model of a fishery is developed in which the dynamics of the unharvested fish population are given by the stochastic logistic growth equation, and it is assumed that the fishery harvests the fish population following a constant-effort strategy. Explicit formulas for the optimal fishing effort are derived in the problems considered, and the effects of uncertainty, risk aversion and mean-reversion speed on fishing effort are investigated. Fourthly, a Dixit-and-Pindyck-type irreversible investment problem in continuous time is solved under the assumption that the project value follows a Cox-Ingersoll-Ross process. This solution differs from the two classical cases of geometric Brownian motion and geometric mean reversion, and these differences are examined. The aim is to find the optimal stopping time, which can be applied to the problem of extracting resources.
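For reference, the stochastic logistic growth equation mentioned above is commonly written as the Itô SDE below; the symbols r (intrinsic growth rate), K (carrying capacity) and σ (noise intensity) are standard notation assumed here, not taken from the thesis.

```latex
% Stochastic logistic growth of the stock X_t driven by a Brownian motion W_t.
dX_t = r X_t \left( 1 - \frac{X_t}{K} \right) dt + \sigma X_t \, dW_t
```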
44

Cayci, Semih. „Online Learning for Optimal Control of Communication and Computing Systems“. The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595516470389826.

45

Olsén, Jörgen. „Stochastic modeling and simulation of the TCP protocol /“. Uppsala : Matematiska institutionen, Univ. [distributör], 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3534.

46

Soltani-Moghaddam, Alireza. „Network simulator design with extended object model and generalized stochastic petri-net /“. free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9999317.

47

Cheng, Gang. „Analyzing and Solving Non-Linear Stochastic Dynamic Models on Non-Periodic Discrete Time Domains“. TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1236.

Der volle Inhalt der Quelle
Annotation:
Stochastic dynamic programming is a recursive method for solving sequential or multistage decision problems. It helps economists and mathematicians construct and solve a wide variety of sequential decision-making problems in stochastic settings. Research on stochastic dynamic programming is important and meaningful because it reflects the behavior of a decision maker without risk aversion, i.e., decision making under uncertainty. In the solution process it is extremely difficult to represent the current or future state precisely, since uncertainty is a state of limited knowledge. Indeed, compared with the deterministic case, which is decision making under certainty, the stochastic case is more realistic and gives more accurate results, because the majority of real problems inevitably involve many unknown parameters. In addition, time scale calculus is applicable to any field in which a dynamic process can be described with discrete or continuous models; since many stochastic dynamic models are discrete or continuous, the results of time scale calculus apply directly to them as well. The aim of this thesis is to introduce a general form of a stochastic dynamic sequence problem on complex discrete time domains and to find the optimal sequence that maximizes it.
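As a toy illustration of such a stochastic dynamic sequence problem on a finite discrete time domain, the sketch below performs backward induction on the Bellman recursion V_t(s) = max_a { r(s, a) + E[ V_{t+1}(S') | s, a ] }; the states, actions, rewards and transition probabilities are made up for the example and are not taken from the thesis.

# Minimal backward-induction sketch of stochastic dynamic programming on a
# finite discrete time domain: maximize expected summed rewards over a small
# placeholder state/action space.
states = [0, 1, 2]
actions = [0, 1]
T = 5

def reward(s, a):
    return s - 0.5 * a                      # placeholder reward

def transition(s, a):
    # Returns a list of (next_state, probability) pairs.
    up, down = min(s + 1, 2), max(s - 1, 0)
    p_up = 0.7 if a == 1 else 0.3
    return [(up, p_up), (down, 1.0 - p_up)]

# Terminal value is zero; recurse backwards:
#   V_t(s) = max_a [ r(s,a) + E[ V_{t+1}(S') | s, a ] ].
V = {s: 0.0 for s in states}
policy = []
for t in reversed(range(T)):
    newV, decision = {}, {}
    for s in states:
        best = max(
            (reward(s, a) + sum(p * V[s2] for s2, p in transition(s, a)), a)
            for a in actions
        )
        newV[s], decision[s] = best
    V = newV
    policy.insert(0, decision)

print("value at t=0:", V, "first-stage policy:", policy[0])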
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Ahmadian, Mansooreh. „Hybrid Modeling and Simulation of Stochastic Effects on Biochemical Regulatory Networks“. Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99481.

Der volle Inhalt der Quelle
Annotation:
A complex network of genes and proteins governs the robust progression through cell cycles in the presence of inevitable noise. Stochastic modeling is viewed as a key paradigm to study the effects of intrinsic and extrinsic noise on the dynamics of biochemical networks. A detailed quantitative description of such complex and multiscale networks via stochastic modeling poses several challenges. First, stochastic models generally require extensive computations, particularly when applied to large networks. Second, the accuracy of stochastic models is highly dependent on the quality of the parameter estimation based on experimental observations. The goal of this dissertation is to address these problems by developing new efficient methods for modeling and simulation of stochastic effects in biochemical systems. In particular, a hybrid stochastic model is developed to represent a detailed molecular mechanism of cell cycle control in budding yeast cells. In a single multiscale model, the proposed hybrid approach combines the advantages of two regimes: 1) the computational efficiency of a deterministic approach, and 2) the accuracy of stochastic simulations. The results show that this hybrid stochastic model achieves high computational efficiency while generating simulation results that match published experimental measurements very well. Furthermore, a new hierarchical deep classification (HDC) algorithm is developed to address the parameter estimation problem in a monomolecular system. The HDC algorithm adopts a neural network that, via multiple hierarchical search steps, finds reasonably accurate ranges for the model parameters. To train the neural network in the presence of experimental data scarcity, the proposed method leverages the domain knowledge from stochastic simulations to generate labeled training data. The results show that the proposed HDC algorithm yields accurate ranges for the model parameters and highlight the potential of model-free learning for parameter estimation in stochastic modeling of complex biochemical networks.
Doctor of Philosophy
The cell cycle is a process in which a growing cell replicates its DNA and divides into two cells. Progression through the cell cycle is regulated by complex interactions between networks of genes, transcripts, and proteins. These interactions inside the confined volume of a cell are subject to inherent noise. To provide a quantitative description of the cell cycle, several deterministic and stochastic models have been developed. However, deterministic models cannot capture the intrinsic noise. In addition, stochastic modeling poses the following challenges. First, stochastic models generally require extensive computations, particularly when applied to large networks. Second, the accuracy of stochastic models is highly dependent on the accuracy of the estimated model parameters. The goal of this dissertation is to address these challenges by developing new efficient methods for modeling and simulation of stochastic effects in biochemical networks. The results show that the proposed hybrid model, which combines stochastic and deterministic modeling approaches, can achieve high computational efficiency while generating accurate simulation results. Moreover, a new machine-learning-based method is developed to address the parameter estimation problem in biochemical systems. The results show that the proposed method yields accurate ranges for the model parameters and highlight the potential of model-free learning for parameter estimation in stochastic modeling of complex biochemical networks.
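The sketch below illustrates the hybrid idea in miniature and is not the dissertation's algorithm: a fast species is propagated deterministically with Euler steps, while a slow switching reaction fires stochastically via an integrated-propensity exponential clock; all species, reactions and rate constants are placeholders.

# Deliberately small sketch of the hybrid idea (fast reactions as ODEs,
# slow reactions as discrete stochastic events). A fast species P relaxes
# deterministically toward a level set by the gene state G, while G switches
# on/off as a slow stochastic reaction whose propensity is integrated with a
# piecewise-constant approximation. All rate constants are placeholders.
import math
import random

def hybrid_simulate(T=100.0, dt=0.01, k_on=0.05, k_off=0.1,
                    k_prod=1.0, k_deg=0.2):
    t, G, P = 0.0, 0, 0.0
    threshold = -math.log(random.random())       # next-event exponential clock
    accumulated = 0.0
    while t < T:
        # Deterministic (fast) part: dP/dt = k_prod*G - k_deg*P, Euler step.
        P += (k_prod * G - k_deg * P) * dt
        # Stochastic (slow) part: integrate the switching propensity.
        propensity = k_off if G == 1 else k_on
        accumulated += propensity * dt
        if accumulated >= threshold:
            G = 1 - G                             # fire the switching reaction
            accumulated = 0.0
            threshold = -math.log(random.random())
        t += dt
    return G, P

print(hybrid_simulate())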
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Britton, Matthew Scott. „Stochastic task scheduling in time-critical information delivery systems“. Title page, contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phb8629.pdf.

Der volle Inhalt der Quelle
Annotation:
"January 2003" Includes bibliographical references (leaves 120-129) Presents performance analyses of dynamic, stochastic task scheduling policies for a real- time-communications system where tasks lose value as they are delayed in the system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Bottegal, Giulio. „Modeling, estimation and identification of stochastic systems with latent variables“. Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423358.

Der volle Inhalt der Quelle
Annotation:
The main topic of this thesis is the analysis of static and dynamic models in which some variables, although directly influencing the behavior of certain observables, are not accessible to measurement. These models find applications in many branches of science and engineering, such as control systems, communications, the natural and biological sciences, and econometrics. It is well known that models with inaccessible, or latent, variables usually suffer from a lack of uniqueness of representation. In other words, there are in general many models of the same type describing a given set of observables, say the measurable input-output variables. This is well known and has been studied thoroughly for a special class of linear models, called state-space models. In this thesis we shall focus on two particular classes of stochastic systems with latent variables: generalized factor analysis models and errors-in-variables models. For these classes of models there are still some unresolved issues related to non-uniqueness of the representation, and clarifying these issues is of paramount importance for their identification. Since mathematical models usually need to be estimated from experimental data, solving the non-uniqueness problem is essential for their use in statistical inference (system identification) from measured data.
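As a toy illustration of the identifiability issue, consider the static factor-analysis model y = Lambda x + e with unobserved factor x; only the column space of Lambda can be recovered from the covariance of y, and the short script below (not taken from the thesis) checks this numerically.

# Toy illustration (not from the thesis): static factor-analysis model
#   y = Lambda * x + e,  x ~ N(0, I),  e ~ N(0, sigma^2 * I),
# where the latent factor x is not observed. The leading eigenvectors of the
# sample covariance recover the column space of Lambda only up to an
# invertible transformation, which is the non-uniqueness referred to above.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars, n_factors, sigma = 2000, 6, 2, 0.3

Lambda = rng.normal(size=(n_vars, n_factors))       # true loading matrix
x = rng.normal(size=(n_obs, n_factors))             # latent factors
e = sigma * rng.normal(size=(n_obs, n_vars))        # measurement noise
y = x @ Lambda.T + e                                # observed variables

cov = np.cov(y, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
estimated_subspace = eigvecs[:, -n_factors:]        # top principal directions

# Project Lambda onto the estimated subspace; a small residual means the
# latent structure is identified as a subspace, though Lambda itself is not.
proj = estimated_subspace @ (estimated_subspace.T @ Lambda)
print("relative residual of Lambda outside the estimated subspace:",
      np.linalg.norm(Lambda - proj) / np.linalg.norm(Lambda))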
APA, Harvard, Vancouver, ISO und andere Zitierweisen