
Dissertations / Theses on the topic 'Markov processes'



Consult the top 50 dissertations / theses for your research on the topic 'Markov processes.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Desharnais, Josée. "Labelled Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0031/NQ64546.pdf.

2

Balan, Raluca M. "Set-Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ66119.pdf.

3

Eltannir, Akram A. "Markov interactive processes." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/30745.

4

Haugomat, Tristan. "Localisation en espace de la propriété de Feller avec application aux processus de type Lévy." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S046/document.

Abstract:
In this PhD thesis, we give a space localisation for the theory of Feller processes. A first objective is to obtain simple and precise results on the convergence of Markov processes. A second objective is to study the link between the notions of Feller property, martingale problem and Skorokhod topology. First we give a localised version of the Skorokhod topology and study the notions of compactness and tightness for it. We make the connection between the localised and non-localised Skorokhod topologies using the notion of time change. In a second step, using the localised Skorokhod topology and the time change, we study martingale problems. We show that, for a process, being the solution of a well-posed martingale problem, satisfying a localised version of the Feller property, and being a Markov process weakly continuous with respect to its initial condition are equivalent. We characterise weak convergence for solutions of martingale problems in terms of convergence of the associated operators and give a similar result for discrete-time approximations. Finally, we apply the theory of locally Feller processes to two examples. We first apply it to Lévy-type processes and obtain convergence results for discrete- and continuous-time processes, including simulation methods and Euler schemes. We then apply the same theory to one-dimensional diffusions in a potential and obtain convergence results of diffusions or random walks towards singular diffusions. As a consequence, we deduce the convergence of random walks in random environments towards diffusions in random potentials.
5

莊競誠 and King-sing Chong. "Explorations in Markov processes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235682.

6

James, Huw William. "Transient Markov decision processes." Thesis, University of Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430192.

7

Ku, Ho Ming. "Interacting Markov branching processes." Thesis, University of Liverpool, 2014. http://livrepository.liverpool.ac.uk/2002759/.

Abstract:
In engineering, biology and physics, many systems consist of particles or members that give birth and die through time. Such systems can be modelled by continuous-time Markov chains and Markov processes; applications of Markov processes have been investigated by many authors, for example Jagers [1975]. In ordinary Markov branching processes, the particles or members are assumed to be identical and independent. In some cases, however, two members of the species may interact or collide to give new births, and more general processes are needed; collision branching processes can be used to model such systems. To consider an even more general model, in which each particle has both a branching and a collision effect, the branching component and the collision component are allowed to interact; we refer to this model as the interacting branching collision process. In Chapter 1 of this thesis, we first review some background and basic concepts of continuous-time Markov chains and ordinary Markov branching processes, and then turn to the more complicated models: collision branching processes and interacting branching collision processes. In Chapter 2, for collision branching processes, we investigate the basic properties, criteria for uniqueness, and explicit expressions for the extinction probability and the expected extinction and explosion times. In Chapter 3, for interacting branching collision processes, we investigate the basic properties and criteria for uniqueness in a similar way; because of the more complicated model settings, considerably more detail is required for the extinction probability, so we divide this part into several cases treated under different assumptions. Since the explicit form of the extinction probability may be too complicated, in the last part of Chapter 3 we discuss its asymptotic behaviour. In Chapter 4, we consider a related and important branching model, Markov branching processes with immigration, emigration and resurrection, and investigate its basic properties and criteria for uniqueness; most interestingly, we study its extinction probability with the techniques introduced in Chapter 3, which also serves as a good illustration of those methods. In Chapter 5, we study two interacting branching models, the interacting collision process with immigration, emigration and resurrection and the interacting branching collision process with immigration, emigration and resurrection, and investigate their basic properties, criteria for uniqueness and extinction probabilities. My original material starts from Chapter 4. The model used in Chapter 4 was introduced by Li and Liu [2011], where some calculations in the evaluation of the extinction probability were not strictly defined; my contribution focuses on the evaluation of the extinction probability and on its asymptotic behaviour. A paper on this model will be submitted this year. The two interacting branching models discussed in Chapter 5, and some of their important properties, are studied in detail.
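The thesis's interacting collision models are not reproduced here, but a minimal Monte Carlo sketch in Python (all parameters illustrative) shows the kind of extinction-probability question these chapters study, for the simplest binary-splitting Markov branching process, where the classical answer is min(1, d/b):

import random

def extinct(birth_rate, death_rate, z0=1, cap=10_000):
    """Simulate one path of a binary-splitting Markov branching process
    (each particle splits at rate `birth_rate`, dies at rate `death_rate`)
    and report whether it hits 0 before exceeding `cap` particles."""
    n = z0
    while 0 < n < cap:
        # With n particles, the next event is a split with probability
        # birth_rate / (birth_rate + death_rate), independently of n.
        if random.random() < birth_rate / (birth_rate + death_rate):
            n += 1
        else:
            n -= 1
    return n == 0

def extinction_probability(birth_rate, death_rate, runs=20_000):
    return sum(extinct(birth_rate, death_rate) for _ in range(runs)) / runs

if __name__ == "__main__":
    random.seed(1)
    b, d = 1.0, 0.8
    est = extinction_probability(b, d)
    print(f"estimated extinction probability: {est:.3f}")
    print(f"classical value min(1, d/b):      {min(1.0, d / b):.3f}")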
8

Chong, King-sing. "Explorations in Markov processes /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18736105.

9

Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.

Abstract:
We give an upper bound for the norm distance of (0,1)-valued Markov-exchangeable random variables to mixtures of distributions of Markov processes. A Markov-exchangeable random variable has a distribution that depends only on the starting value and the number of transitions 0-0, 0-1, 1-0 and 1-1. We show that even if, as the length of the variables increases, the norm distance to mixtures of Markov processes goes to 0, the rate of this convergence may be arbitrarily slow. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
10

Ferns, Norman Francis. "Metrics for Markov decision processes." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80263.

Abstract:
We present a class of metrics, defined on the state space of a finite Markov decision process (MDP), each of which is sound with respect to stochastic bisimulation, a notion of MDP state equivalence derived from the theory of concurrent processes. Such metrics are based on similar metrics developed in the context of labelled Markov processes, and like those, are suitable for state space aggregation. Furthermore, we restrict our attention to a subset of this class that is appropriate for certain reinforcement learning (RL) tasks, specifically, infinite horizon tasks with an expected total discounted reward optimality criterion. Given such an RL metric, we provide bounds relating it to the optimal value function of the original MDP as well as to the value function of the aggregate MDP. Finally, we present an algorithm for calculating such a metric up to a prescribed degree of accuracy and some empirical results.
11

Chaput, Philippe. "Approximating Markov processes by averaging." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66654.

Abstract:
We recast the theory of labelled Markov processes in a new setting, in a way "dual" to the usual point of view. Instead of considering state transitions as a collection of subprobability distributions on the state space, we view them as transformers of real-valued functions. By generalizing the operation of conditional expectation, we build a category consisting of labelled Markov processes viewed as a collection of operators; the arrows of this category behave as projections on a smaller state space. We define a notion of equivalence for such processes, called bisimulation, which is closely linked to the usual definition for probabilistic processes. We show that we can categorically construct the smallest bisimilar process, and that this smallest object is linked to a well-known modal logic. We also expose an approximation scheme based on this logic, where the state space of the approximants is finite; furthermore, we show that these finite approximants categorically converge to the smallest bisimilar process.
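As a rough illustration of the "dual" operator view described above (a sketch only, not the thesis's categorical construction), a finite Markov kernel can be applied either to distributions or to real-valued functions, and the two views produce the same expectations:

import numpy as np

# A finite-state Markov kernel acts on real-valued functions f by
# (P f)(s) = sum_t P(s, t) f(t), i.e. the expected value of f after one step.
# The same matrix acts on distributions mu from the left: mu P.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

f = np.array([1.0, 0.0, 2.0])   # a real-valued function on the state space
mu = np.array([1.0, 0.0, 0.0])  # a point-mass distribution on state 0

Pf = P @ f                      # function-transformer (operator) view
muP = mu @ P                    # distribution-transformer view

print(Pf)                       # expected value of f after one step, per state
print(mu @ Pf, muP @ f)         # identical numbers: mu(P f) == (mu P) f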
12

Baxter, Martin William. "Discounted functionals of Markov processes." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309008.

13

Furloni, Walter. "Controle em horizonte finito com restriçoes de sistemas lineares discretos com saltos markovianos." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259271.

Abstract:
Advisor: João Bosco Ribeiro do Val
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The purpose of this work is to propose and solve the constrained control problem, over a finite horizon, of Markovian Jump Discrete Linear Systems (MJDLS) driven by noise. The constraints on the state and control vectors are not rigid: limits are established on their first and second moments, respectively. The controller is based on a linear state-feedback structure and must minimize a quadratic cost function. Two cases regarding the available information on the Markov chain are considered: in the first, the Markov chain state is known at each step; in the second, only its initial probability distribution is available. A deterministic formulation of the stochastic problem is developed so that the proposed necessary optimality conditions and the constraints can easily be included using Linear Matrix Inequalities (LMI). The treatment of constraints constitutes the main contribution, since they are pertinent to several application fields such as the chemical industry, mass transportation, economics, etc. Two applications are presented for illustration: one concerning traffic regulation on metro lines and the other concerning portfolio asset selection in financial applications.
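As a hedged illustration of the model class only (a Markov jump discrete-time linear system with mode-dependent state feedback), the following sketch simulates one trajectory; the matrices, gains and noise level are invented for the example and are not taken from the dissertation:

import numpy as np

rng = np.random.default_rng(0)

# Two operating modes governed by a Markov chain with transition matrix PI.
PI = np.array([[0.9, 0.1],
               [0.3, 0.7]])
A = [np.array([[1.0, 0.1], [0.0, 1.0]]),   # mode-dependent dynamics
     np.array([[1.0, 0.2], [0.0, 0.8]])]
B = [np.array([[0.0], [0.1]]),
     np.array([[0.0], [0.2]])]
K = [np.array([[-2.0, -3.0]]),             # illustrative feedback gains per mode
     np.array([[-1.0, -2.5]])]

x = np.array([1.0, 0.0])
mode = 0
for k in range(50):
    u = K[mode] @ x                        # linear state feedback u_k = K(mode) x_k
    w = 0.01 * rng.standard_normal(2)      # additive noise
    x = A[mode] @ x + B[mode] @ u + w      # jump linear dynamics
    mode = rng.choice(2, p=PI[mode])       # Markov jump of the operating mode
print("final state:", x)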
Master's degree in Electrical Engineering (Automation)
14

Pinheiro, Maicon Aparecido. "Processos pontuais no modelo de Guiol-Machado-Schinazi de sobrevivência de espécies." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-01062016-191528/.

Abstract:
Recently, Guiol, Machado and Schinazi proposed a stochastic model for species evolution. In this model, births and deaths of species occur with intensities that are invariant over time. Moreover, at the time of birth of a new species, it is labeled with a random number sampled from an absolutely continuous distribution. Each time there is an extinction event, exactly one existing species disappears: the one with the smallest number. When the birth rate is greater than the extinction rate, there is a critical value f_c such that every species that arrives with a number less than f_c will almost surely die after a finite random time, while those with numbers higher than f_c survive forever with positive probability. However, less fit species continue to appear during the evolutionary process, and there is no guarantee of the emergence of an immortal species. We consider a particular case of the Guiol, Machado and Schinazi model and address these last two points. We characterize the limit point process associated with the species in the subcritical phase of the model and discuss the existence of immortal species.
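A minimal simulation sketch of the Guiol-Machado-Schinazi dynamics described above makes the critical value concrete; the rates below are illustrative, and the discrete-event reduction (a birth with probability lam/(lam+mu)) is a standard embedding, not code from the thesis:

import heapq
import random

def simulate_gms(lam=1.0, mu=0.7, steps=200_000, seed=0):
    """Discrete-event sketch of the Guiol-Machado-Schinazi model: with
    probability lam/(lam+mu) a new species appears with a Uniform(0,1)
    fitness; otherwise the least-fit living species (if any) goes extinct."""
    random.seed(seed)
    alive = []                      # min-heap of fitness values of living species
    p_birth = lam / (lam + mu)
    for _ in range(steps):
        if random.random() < p_birth:
            heapq.heappush(alive, random.random())
        elif alive:
            heapq.heappop(alive)    # extinction removes the smallest fitness
    return alive

species = simulate_gms()
f_c = 0.7 / 1.0                     # theoretical threshold mu/lam for these rates
print(f"living species: {len(species)}")
if species:
    print(f"minimum surviving fitness: {min(species):.3f} (f_c = {f_c})")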
15

Pra, Paolo Dai, Pierre-Yves Louis, and Ida G. Minelli. "Complete monotone coupling for Markov processes." Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2008/1828/.

Abstract:
We formalize and analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. Similarly to what happens in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
16

De, Stavola Bianca Lucia. "Multistate Markov processes with incomplete information." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37672.

17

Carpio, Kristine Joy Espiritu. "Long-Range Dependence of Markov Processes." The Australian National University. School of Mathematical Sciences, 2006. http://thesis.anu.edu.au./public/adt-ANU20061024.131933.

Abstract:
Long-range dependence in discrete- and continuous-time Markov chains over a countable state space is defined via embedded renewal processes brought about by visits to a fixed state. In the discrete-time chain, solidarity properties are obtained and long-range dependence of functionals is examined. For continuous-time chains, on the other hand, long-range dependence (LRD) is defined via the number of visits in a given time interval. Long-range dependence of Markov chains over a non-countable state space is also studied through positive Harris chains. Embedded renewal processes in these chains exist via visits to sets of states called proper atoms. Examples of these chains are presented, with particular attention given to long-range dependent Markov chains in single-server queues, namely, the waiting times of GI/G/1 queues and the queue lengths at departure epochs in M/G/1 queues. The presence of long-range dependence in these processes depends on the moment index of the service-time distribution. The Hurst indexes are obtained under certain conditions on the distribution function of the service times and the structure of the correlations. These processes of waiting times and queue sizes are also examined in a range of M/P/2 queues via simulation (here, P denotes a Pareto distribution).
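A small simulation sketch (not the thesis's analysis) illustrates where such long-range dependence can show up: waiting times of a single-server FIFO queue with heavy-tailed Pareto service times, generated by the Lindley recursion, together with their sample autocorrelation at increasing lags. All parameters are illustrative:

import numpy as np

rng = np.random.default_rng(0)

def mg1_waiting_times(n, arrival_rate=1.0, pareto_alpha=2.5, pareto_scale=0.5):
    """Waiting times of successive customers in a single-server FIFO queue
    via the Lindley recursion W_{k+1} = max(W_k + S_k - A_{k+1}, 0),
    with exponential interarrival times and Pareto service times."""
    inter = rng.exponential(1.0 / arrival_rate, size=n)
    service = pareto_scale * (rng.pareto(pareto_alpha, size=n) + 1.0)
    w = np.zeros(n)
    for k in range(n - 1):
        w[k + 1] = max(w[k] + service[k] - inter[k + 1], 0.0)
    return w

def autocorr(x, lags):
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-lag], x[lag:]) / denom for lag in lags]

w = mg1_waiting_times(200_000)
print([round(r, 3) for r in autocorr(w, lags=[1, 10, 100, 1000])])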
18

Castro, Rivadeneira Pablo Samuel. "Bayesian exploration in Markov decision processes." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18479.

Abstract:
Markov Decision Processes are a mathematical framework widely used for stochastic optimization and control problems. Reinforcement Learning is a branch of Artificial Intelligence that deals with stochastic environments where the dynamics of the system are unknown. A major issue for learning algorithms is the need to balance the amount of exploration of new experiences with the exploitation of existing knowledge. We present three methods for dealing with this exploration-exploitation tradeoff for Markov Decision Processes. The approach taken is Bayesian, in that we use and maintain a model estimate. The existence of an optimal policy for Bayesian exploration has been shown, but its computation is infeasible. We present three approximations to the optimal policy by the use of statistical sampling. The first approach uses a combination of Linear Programming and Q-learning. We present empirical results demonstrating its performance. The second approach is an extension of this idea, and we prove theoretical guarantees along with empirical evidence of its performance. Finally, we present an algorithm that adapts itself efficiently to the amount of time granted for computation. This idea is presented as an approximation to an infinite dimensional linear program and we guarantee convergence as well as prove strong duality.
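The dissertation's sampling-based approximations are not reproduced here; as a minimal illustration of a Bayesian treatment of the exploration-exploitation tradeoff, the following sketch runs posterior (Thompson) sampling on a Bernoulli bandit with made-up arm means:

import random

def thompson_bandit(true_means, horizon=5000, seed=0):
    """Posterior (Thompson) sampling for a Bernoulli bandit: keep a Beta
    posterior per arm, sample a model, and act greedily in the sample."""
    random.seed(seed)
    n_arms = len(true_means)
    alpha = [1] * n_arms            # Beta(1, 1) priors
    beta = [1] * n_arms
    total = 0
    for _ in range(horizon):
        samples = [random.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        a = max(range(n_arms), key=samples.__getitem__)
        reward = 1 if random.random() < true_means[a] else 0
        alpha[a] += reward
        beta[a] += 1 - reward
        total += reward
    return total / horizon

print(thompson_bandit([0.2, 0.5, 0.55]))   # average reward approaches the best mean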
19

Propp, Michael Benjamin. "The thermodynamic properties of Markov processes." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/17193.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1985.
Microfiche copy available in Archives and Engineering.
Includes glossary.
Bibliography: leaves 87-91.
by Michael Benjamin Propp.
Ph.D.
20

Korpas, Agata K. "Occupation Times of Continuous Markov Processes." Bowling Green State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1151347146.

21

Chu, Shanyun. "Some contributions to Markov decision processes." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2038000/.

Abstract:
In a nutshell, this thesis studies discrete-time Markov decision processes (MDPs) on Borel spaces, with possibly unbounded costs, under both the expected (discounted) total cost and the long-run expected average cost criteria. In Chapter 2, we systematically investigate a constrained absorbing MDP with the expected total cost criterion and possibly unbounded (from both above and below) cost functions. We apply the convex analytic approach to derive the optimality and duality results, along with the existence of an optimal finite mixing policy. We also provide mild conditions under which a general constrained MDP model with state-action-dependent discount factors can be equivalently transformed into an absorbing MDP model. Chapter 3 treats a more constrained absorbing MDP than that of Chapter 2. The dynamic programming approach is applied to a reformulated unconstrained MDP model and the optimality results are obtained; in addition, the correspondence between policies in the original model and the reformulated one is illustrated. In Chapter 4, we extend the dynamic programming approach for standard MDPs with the expected total cost criterion to the case where the (iterated) coherent risk measure of the cost is taken as the performance measure to be minimized. The cost function under consideration is allowed to be unbounded from below, and possibly arbitrarily unbounded from above. Under a fairly weak version of continuity-compactness conditions, we derive the optimality results for both the finite and infinite horizon cases, and establish value iteration as well as policy iteration algorithms. The standard MDP and the iterated conditional value-at-risk of the cost function are illustrated as two examples. Chapters 5 and 6 tackle MDPs with the long-run expected average cost criterion. In Chapter 5, we consider a constrained MDP with possibly unbounded (from both above and below) cost functions. Under Lyapunov-like conditions, we show the sufficiency of stable policies for the constrained problem. Furthermore, we introduce the corresponding space of performance vectors and characterize each of its extreme points with a deterministic stationary policy. Finally, the existence of an optimal finite mixing policy is justified. Chapter 6 concerns an unconstrained MDP with cost functions unbounded from below and possibly arbitrarily unbounded from above. We provide a detailed discussion of the issue of sufficient policies in the denumerable case, establish the average cost optimality inequality (ACOI) and show the existence of an optimal deterministic stationary policy. In Chapter 7, an inventory-production system is taken as a real-world example to illustrate the main results of Chapters 2 and 5.
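For readers unfamiliar with the dynamic programming machinery referred to above, here is a minimal value iteration sketch for a small, made-up finite MDP under the expected discounted total cost criterion; it illustrates the standard algorithm, not the unbounded-cost extensions developed in the thesis:

import numpy as np

def value_iteration(P, c, gamma=0.9, tol=1e-8):
    """Minimize expected discounted total cost on a finite MDP.
    P[a] is the transition matrix under action a, c[s, a] the one-stage cost."""
    n_states, n_actions = c.shape
    v = np.zeros(n_states)
    while True:
        q = np.stack([c[:, a] + gamma * P[a] @ v for a in range(n_actions)], axis=1)
        v_new = q.min(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmin(axis=1)
        v = v_new

# A toy 3-state, 2-action example (illustrative numbers only).
P = [np.array([[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]]),
     np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.2, 0.0, 0.8]])]
c = np.array([[1.0, 2.0], [0.5, 0.3], [2.0, 1.0]])
v_star, policy = value_iteration(P, c)
print("optimal costs:", np.round(v_star, 3), "policy:", policy)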
22

Carpio, Kristine Joy Espiritu. "Long-range dependence of Markov processes /." View thesis entry in Australian Digital Theses Program, 2006. http://thesis.anu.edu.au/public/adt-ANU20061024.131933/index.html.

23

LIKA, ADA. "MARKOV PROCESSES IN FINANCE AND INSURANCE." Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/249618.

Abstract:
In this thesis we try to take one more step in the application of Markov processes in the actuarial and financial fields. Two main problems have been dealt with. The first concerns the application of a Markov process to describe the salary lines of participants in an Italian first-pillar pension scheme. A semi-Markov process with backward recurrence time was proposed. A statistical test was applied in order to determine whether the null hypothesis of a geometric distribution of the waiting times of the process should be accepted or not. The test showed that the null hypothesis was rejected for some of the waiting-time distributions, and we therefore concluded that the semi-Markov process should be preferred to the simple Markov chain to model transitions between the states of the salary process. In the financial application, we treated the indexed semi-Markov chain, a model that has previously been used to describe intra-day price return dynamics. The peculiarity of this model is that, through the index process, it captures two well-known stylized facts of financial time series: the long memory of financial series and volatility clustering. This is achieved by defining the index as a function of the previous m values of the price returns. In order to transform the values obtained into states of a stochastic process, a discretization of the index is necessary. We proposed the change-point method as a new way to obtain the most efficient classes. This approach is justified by the fact that, for financial time series, the price dynamics present different characteristics at different levels of market volatility. We found that the best discretization of the index process uses four change points, which implies five levels of volatility in the market: very low, medium low, medium, medium high and very high. We also generated synthetic trajectories in order to compute the autocorrelation of the squared returns for the real data as well as for the hypothesized models. The autocorrelation function showed that the model with four change points was the closest to the real data.
24

Durrell, Fernando. "Constrained portfolio selection with Markov and non-Markov processes and insiders." Doctoral thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/4379.

25

Wright, James M. "Stable processes with opposing drifts /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/5807.

26

Werner, Ivan. "Contractive Markov systems." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/15173.

Abstract:
We introduce a theory of contractive Markov systems (CMS) which provides a unifying framework in so-called "fractal" geometry. It extends the known theory of iterated function systems (IFS) with place-dependent probabilities [1][8] in such a way that it also covers graph-directed constructions of "fractal" sets [18]. Such systems naturally extend finite Markov chains and inherit some of their properties. In Chapter 1, we consider iterations of a Markov system and show that they preserve its essential structure. In Chapter 2, we show that the Markov operator defined by such a system has a unique invariant probability measure in the irreducible case and an attractive probability measure in the aperiodic case if the restrictions of the probability functions to their vertex sets are Dini-continuous and bounded away from zero, and the system satisfies a condition of contractiveness on average. This generalizes a result from [1]. Furthermore, we show that the rate of convergence to the stationary state is exponential in the aperiodic case with constant probabilities and a compact state space. In Chapter 3, we construct a coding map for a contractive Markov system. In Chapter 4, we calculate the Kolmogorov-Sinai entropy of the generalized Markov shift. In Chapter 5, we prove an ergodic theorem for Markov chains associated with contractive Markov systems. It generalizes the ergodic theorem of Elton [8].
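A minimal sketch of an iterated function system with place-dependent probabilities (the simplest instance of a contractive Markov system, with a single vertex) shows the associated Markov chain and its empirical invariant measure; the maps and probability function below are illustrative only:

import random

def ifs_chain(n_steps=100_000, seed=0):
    """Markov chain generated by an IFS with place-dependent probabilities:
    from x, apply w0(x) = x/2 with probability p(x) and w1(x) = (x + 1)/2
    otherwise, where p(x) itself depends continuously on x."""
    random.seed(seed)
    x = random.random()
    samples = []
    for _ in range(n_steps):
        p = 0.3 + 0.4 * x                 # Dini-continuous, bounded away from 0 and 1
        x = x / 2 if random.random() < p else (x + 1) / 2
        samples.append(x)
    return samples

xs = ifs_chain()
# Empirical mass in four bins approximates the (unique) invariant measure.
bins = [sum(1 for x in xs if k / 4 <= x < (k + 1) / 4) / len(xs) for k in range(4)]
print([round(b, 3) for b in bins])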
27

Elsayad, Amr Lotfy. "Numerical solution of Markov Chains." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2056.

28

Bartholme, Carine. "Self-similarity and exponential functionals of Lévy processes." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209256.

Abstract:
This thesis covers two main research themes, presented in two parts and preceded by a common prolegomenon. In the latter we introduce the essential concepts and also exploit the link between the two parts.

In the first part, the main object of interest is the so-called exponential functional of Lévy processes. The law of this random variable plays a fundamental role in many different areas, both theoretical and applied. Doney derived a factorization of the arcsine law in terms of the suprema of independent stable processes with the same index. A similar factorization of the arcsine law in terms of last passage times at level 1 of Bessel processes can also be established using a result due to Getoor. Analogous factorizations of a Pareto variable in terms of the same objects can be obtained as well. The aim of this part is to give a unified proof and a generalization of these factorizations, which seem unrelated at first sight. Even though there appears to be no connection between the supremum of a stable process and the last passage time of a Bessel process, it can be shown that these random variables are linked to exponential functionals of specific Lévy processes. Our main contributions in this part, also at the level of characterizations of the law of the exponential functional, are factorizations of the arcsine law and of generalized Pareto variables. Our proof relies on a recent Wiener-Hopf factorization of Patie and Savov.

In the second part, motivated by the fact that the Caputo fractional derivative and other classical fractional operators coincide with the generators of particular positive self-similar Markov processes, we introduce generalized Caputo operators and study some of their properties. We are particularly interested in the conditions under which these operators coincide with the infinitesimal generators of general positive self-similar Markov processes. In this case, we study the invariant functions of these operators that admit a power series representation. We note that this class of functions contains the modified Bessel functions, the Mittag-Leffler functions and several hypergeometric functions. We propose a unifying and in-depth study of this class of functions.
29

Dendievel, Sarah. "Skip-free Markov processes: analysis of regular perturbations." Doctoral thesis, Universite Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209050.

Abstract:
A Markov process is defined by its transition matrix. A skip-free Markov process is a stochastic system defined by a level that can only change by one unit either upwards or downwards. A regular perturbation is defined as a modification of one or more parameters that is small enough not to change qualitatively the model.

This thesis focuses on a category of methods, called matrix analytic methods, that has gained much interest because of good computational properties for the analysis of a large family of stochastic processes. Those methods are used in this work in order i) to analyze the effect of regular perturbations of the transition matrix on the stationary distribution of skip-free Markov processes; ii) to determine transient distributions of skip-free Markov processes by performing regular perturbations.

In the class of skip-free Markov processes, we focus in particular on quasi-birth-and-death (QBD) processes and Markov modulated fluid models.

We first determine the first order derivative of the stationary distribution - a key vector in Markov models - of a QBD for which we slightly perturb the transition matrix. This leads us to the study of Poisson equations that we analyze for finite and infinite QBDs. The infinite case has to be treated with more caution therefore, we first analyze it using probabilistic arguments based on a decomposition through first passage times to lower levels. Then, we use general algebraic arguments and use the repetitive block structure of the transition matrix to obtain all the solutions of the equation. The solutions of the Poisson equation need a generalized inverse called the deviation matrix. We develop a recursive formula for the computation of this matrix for the finite case and we derive an explicit expression for the elements of this matrix for the infinite case.

Then, we analyze the first order derivative of the stationary distribution of a Markov modulated fluid model. This leads to the analysis of the matrix of first return times to the initial level, a characteristic matrix of Markov modulated fluid models.

Finally, we study the cumulative distribution function of the level in finite time and joint distribution functions (such as the level at a given finite time and the maximum level reached over a finite time interval). We show that our technique gives good approximations and allows us to compute those distribution functions efficiently.
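For a general finite irreducible continuous-time chain (without the QBD or fluid structure exploited in the thesis), the deviation matrix and a solution of the Poisson equation can be computed directly; this numerical sketch, with an invented generator, illustrates the objects involved, using the standard identities D = (Pi - Q)^{-1} - Pi and QD = Pi - I:

import numpy as np

# Generator of a small irreducible continuous-time Markov chain (illustrative).
Q = np.array([[-2.0, 1.5, 0.5],
              [ 1.0, -3.0, 2.0],
              [ 0.5, 0.5, -1.0]])
n = Q.shape[0]

# Stationary distribution: solve pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
Pi = np.outer(np.ones(n), pi)                 # matrix with all rows equal to pi

# Deviation matrix of a finite ergodic chain: D = (Pi - Q)^{-1} - Pi.
D = np.linalg.inv(Pi - Q) - Pi
print(np.allclose(Q @ D, Pi - np.eye(n)))     # Q D = Pi - I (True)

# Poisson equation Q u = (pi g) 1 - g is solved (up to constants) by u = D g.
g = np.array([1.0, 0.0, 4.0])
u = D @ g
print(np.allclose(Q @ u, pi @ g * np.ones(n) - g))   # True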

30

Manstavicius, Martynas. "The p-variation of strong Markov processes /." Thesis, Connect to Dissertations & Theses @ Tufts University, 2003.

Abstract:
Thesis (Ph.D.)--Tufts University, 2003.
Advisers: Richard M. Dudley; Marjorie G. Hahn. Submitted to the Dept. of Mathematics. Includes bibliographical references (leaves 109-113). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
31

葉錦元 and Kam-yuen William Yip. "Simulation and inference of aggregated Markov processes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31977546.

32

Yip, Kam-yuen William. "Simulation and inference of aggregated Markov processes." [Hong Kong : University of Hong Kong], 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13787391.

33

Patrascu, Relu-Eugen. "Linear approximations from factored Markov Decision Processes." Waterloo, Ont. : University of Waterloo, 2004. http://etd.uwaterloo.ca/etd/rpatrasc2004.pdf.

Abstract:
Thesis (Ph.D.)--University of Waterloo, 2004.
"A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science". Includes bibliographical references.
34

Patrascu, Relu-Eugen. "Linear Approximations For Factored Markov Decision Processes." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1171.

Abstract:
A Markov Decision Process (MDP) is a model employed to describe problems in which a decision must be made at each one of several stages, while receiving feedback from the environment. This type of model has been extensively studied in the operations research community and fundamental algorithms have been developed to solve associated problems. However, these algorithms are quite inefficient for very large problems, leading to a need for alternatives; since MDP problems are provably hard on compressed representations, one becomes content even with algorithms which may perform well at least on specific classes of problems. The class of problems we deal with in this thesis allows succinct representations for the MDP as a dynamic Bayes network, and for its solution as a weighted combination of basis functions. We develop novel algorithms for producing, improving, and calculating the error of approximate solutions for MDPs using a compressed representation. Specifically, we develop an efficient branch-and-bound algorithm for computing the Bellman error of the compact approximate solution regardless of its provenance. We introduce an efficient direct linear programming algorithm which, using incremental constraints generation, achieves run times significantly smaller than existing approximate algorithms without much loss of accuracy. We also show a novel direct linear programming algorithm which, instead of employing constraints generation, transforms the exponentially many constraints into a compact form more amenable for tractable solutions. In spite of its perceived importance, the efficient optimization of the Bellman error towards an approximate MDP solution has eluded current algorithms; to this end we propose a novel branch-and-bound approximate policy iteration algorithm which makes direct use of our branch-and-bound method for computing the Bellman error. We further investigate another procedure for obtaining an approximate solution based on the dual of the direct, approximate linear programming formulation for solving MDPs. To address both the loss of accuracy resulting from the direct, approximate linear program solution and the question of where basis functions come from we also develop a principled system able not only to produce the initial set of basis functions, but also able to augment it with new basis functions automatically generated such that the approximation error decreases according to the user's requirements and time limitations.
35

Cheng, Hsien-Te. "Algorithms for partially observable Markov decision processes." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/29073.

Abstract:
The thesis develops methods to solve discrete-time finite-state partially observable Markov decision processes (POMDPs). For the infinite horizon problem, only the discounted reward case is considered. Several new algorithms for the finite horizon and the infinite horizon problems are developed. For the finite horizon problem, two new algorithms are developed. The first, called the relaxed region algorithm, determines for each support in the value function a region not smaller than its support region and modifies it implicitly in later steps until the exact support region is found. The second, called the linear support algorithm, systematically approximates the value function until all supports in the value function are found. The most important feature of this algorithm is that it can be modified to find an approximate value function. The number of regions determined explicitly by both algorithms equals the number of supports in the value function, which is much less than the number of regions generated by the one-pass algorithm. Since the vertices of each region have to be found, these two algorithms are more efficient than the one-pass algorithm; the limited numerical examples also show that both methods are more efficient than the existing algorithms. For the infinite horizon problem, it is first shown that the approximation version of the linear support algorithm can be used in place of the policy improvement step in a standard successive approximation method to obtain an ε-optimal value function. Next, an iterative discretization procedure is developed which uses a small number of states to find new supports and improve the value function between two policy improvement steps; since only a finite number of states are chosen in this process, some techniques developed for finite MDPs can be applied. Finally, we prove that the policy improvement step in the iterative discretization procedure can also be replaced by the approximation version of the linear support algorithm. The last part of the thesis deals with problems with continuous signals. We first show that if the signal processes are uniformly distributed, then the problem can be reformulated as a problem with a finite number of signals; the result is then extended to the case where the signal processes are step functions. Since step functions can easily be used to approximate most probability distributions, this method can be used to approximate most problems with continuous signals. Finally, we present conditions which guarantee that the linear support can be computed for any given state; the methods developed for the finite-signal case can then be easily modified and applied to problems for which these conditions hold.
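The relaxed-region and linear-support algorithms themselves are not reproduced here; as a minimal sketch of the object they operate on, the following code performs the Bayes update of a POMDP belief state for a tiny, invented model:

import numpy as np

def belief_update(b, a, z, T, O):
    """Bayes update of a POMDP belief state b after taking action a and
    observing signal z. T[a][s, s'] and O[a][s', z] are transition and
    observation probabilities (illustrative arrays, not from the thesis)."""
    b_pred = b @ T[a]              # predict: P(s' | b, a)
    b_new = b_pred * O[a][:, z]    # correct: weight by P(z | s', a)
    return b_new / b_new.sum()

T = [np.array([[0.9, 0.1], [0.2, 0.8]])]   # one action, two states
O = [np.array([[0.7, 0.3], [0.1, 0.9]])]   # two possible observations
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, z=1, T=T, O=O))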
Sauder School of Business
Graduate
36

Mundt, André Philipp. "Dynamic risk management with Markov decision processes." Karlsruhe Univ.-Verl. Karlsruhe, 2007. http://d-nb.info/987216511/04.

37

Mundt, André Philipp. "Dynamic risk management with Markov decision processes." Karlsruhe, Baden : Universitätsverl. Karlsruhe, 2008. http://www.uvka.de/univerlag/volltexte/2008/294/.

38

Saeedi, Ardavan. "Nonparametric Bayesian models for Markov jump processes." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42963.

Abstract:
Markov jump processes (MJPs) have been used as models in various fields such as disease progression, phylogenetic trees, and communication networks. The main motivation behind this thesis is the application of MJPs to data modeled as having complex latent structure. In this thesis we propose a nonparametric prior, the gamma-exponential process (GEP), over MJPs. Nonparametric Bayesian models have recently attracted much attention in the statistics community, due to their flexibility, adaptability, and usefulness in analyzing complex real world datasets. The GEP is a prior over infinite rate matrices which characterize an MJP; this prior can be used in Bayesian models where an MJP is imposed on the data but the number of states of the MJP is unknown in advance. We show that the GEP model we propose has some attractive properties such as conjugacy and simple closed-form predictive distributions. We also introduce the hierarchical version of the GEP model; sharing statistical strength can be considered as the main motivation behind the hierarchical model. We show that our hierarchical model admits efficient inference algorithms. We introduce two inference algorithms: 1) a "basic" particle Markov chain Monte Carlo (PMCMC) algorithm, which is an MCMC algorithm with sequences proposed by a sequential Monte Carlo (SMC) algorithm; 2) a modified version of this PMCMC algorithm with an "improved" SMC proposal. Finally, we demonstrate the algorithms on the problems of estimating disease progression in multiple sclerosis and RNA evolutionary modeling. In both domains, we found that our model outperformed the standard rate matrix estimation approach.
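The GEP prior is not reconstructed here; the following sketch only simulates a Markov jump process from a known finite rate matrix (Gillespie-style), i.e. the kind of latent trajectory over which such a prior is placed. The rate matrix is invented for illustration:

import numpy as np

def simulate_mjp(Q, t_max, x0=0, seed=0):
    """Simulate a Markov jump process with generator (rate matrix) Q up to
    time t_max: hold in state i for an Exp(-Q[i, i]) time, then jump to j
    with probability Q[i, j] / -Q[i, i]."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        rate = -Q[x, x]
        t += rng.exponential(1.0 / rate)
        if t > t_max:
            break
        probs = Q[x].copy()
        probs[x] = 0.0
        x = rng.choice(len(probs), p=probs / rate)
        path.append((t, x))
    return path

Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.4, -0.9, 0.5],
              [ 0.2, 0.8, -1.0]])
print(simulate_mjp(Q, t_max=5.0)[:5])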
39

Huang, Wenzong. "Spatial queueing systems and reversible markov processes." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/24871.

40

Paduraru, Cosmin. "Off-policy evaluation in Markov decision processes." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=117008.

Abstract:
This dissertation is set in the context of a widely used framework for formalizing autonomous decision-making, namely Markov decision processes (MDPs). One of the key problems that arise in MDPs is that of evaluating a decision-making strategy, typically called a policy. It is often the case that data collected under the policy one wishes to evaluate is difficult or even impossible to obtain. In this case, data collected under some other policy needs to be used, a setting known as off-policy evaluation. The main goal of this dissertation is to offer new insights into the properties of methods for off-policy evaluation. This is achieved through a series of novel theoretical results and empirical illustrations. The first set of results concerns the bandit setting (single state, single decision step MDPs). In this basic setting, the bias and variance of various off-policy estimators can be computed in closed form without resorting to approximations. We also compare the bias-variance trade-offs for the different estimators, both theoretically and empirically. In the sequential setting (more than one decision step), a comparative empirical study of different off-policy estimators for MDPs with discrete state and action spaces is conducted. The methods compared include three existing estimators, and two new ones proposed in this dissertation. All of these estimators are shown to be consistent and asymptotically normal. The empirical study illustrates how the relative behaviour of the estimators is affected by changes in problem parameters. The analysis for discrete MDPs is completed by recursive bias and variance formulas for the commonly used model-based estimator. These are the first analytic formulas for finite-horizon MDPs, and are shown to produce more accurate results than bootstrap estimates. The final contribution consists of introducing a new framework for bounding the return of a policy. The framework can be used whenever bounds on the next state and reward are available, regardless of whether the state and action spaces are discrete or continuous. If the next-state bounds are computed by assuming Lipschitz continuity of the transition function and using a batch of sampled transitions, then our framework can lead to tighter bounds than those proposed in previous work. Throughout this dissertation, the empirical performance of the estimators being studied is illustrated on several computational sustainability problems: a model of food-related greenhouse gas emissions, a mallard population dynamics model, and a fishery management domain.
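As a minimal, hedged illustration of off-policy evaluation in the bandit setting discussed above, the following sketch uses the basic importance sampling estimator with invented behaviour and target policies; it is not one of the dissertation's specific estimators:

import random

def off_policy_is(n=100_000, seed=0):
    """Estimate the expected reward of a target policy from data collected
    under a different behaviour policy, using importance sampling."""
    random.seed(seed)
    means = [0.3, 0.8]                 # true mean reward of each action
    behaviour = [0.7, 0.3]             # data-collection policy
    target = [0.1, 0.9]                # policy we want to evaluate
    total = 0.0
    for _ in range(n):
        a = 0 if random.random() < behaviour[0] else 1
        r = 1.0 if random.random() < means[a] else 0.0
        total += (target[a] / behaviour[a]) * r   # importance weight * reward
    return total / n

true_value = 0.1 * 0.3 + 0.9 * 0.8
print(off_policy_is(), "vs true value", true_value)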
41

Dai, Peng. "FASTER DYNAMIC PROGRAMMING FOR MARKOV DECISION PROCESSES." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/428.

Full text
Abstract:
Markov decision processes (MDPs) are a general framework used by Artificial Intelligence (AI) researchers to model decision theoretic planning problems. Solving real world MDPs has been a major and challenging research topic in the AI literature. This paper discusses two main groups of approaches in solving MDPs. The first group of approaches combines the strategies of heuristic search and dynamic programming to expedite the convergence process. The second makes use of graphical structures in MDPs to decrease the effort of classic dynamic programming algorithms. Two new algorithms proposed by the author, MBLAO* and TVI, are described here.
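For readers new to the topic, the dynamic programming baseline that both groups of approaches try to accelerate can be sketched as a generic value iteration loop; the Python code below is an illustration only and does not reproduce MBLAO* or TVI.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """Generic value iteration for a finite MDP.

        P : array of shape (A, S, S), P[a, s, s'] = transition probability
        R : array of shape (A, S), expected immediate reward for action a in state s
        Returns the optimal value function and a greedy policy.
        """
        V = np.zeros(P.shape[1])
        while True:
            Q = R + gamma * (P @ V)          # Q[a, s] = R[a, s] + gamma * E[V(s')]
            V_new = Q.max(axis=0)            # Bellman optimality backup
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)
            V = V_new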
APA, Harvard, Vancouver, ISO, and other styles
42

Black, Mary. "Applying Markov decision processes in asset management." Thesis, University of Salford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Nieto-Barajas, Luis E. "Bayesian nonparametric survival analysis via Markov processes." Thesis, University of Bath, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Marbach, Peter 1966. "Simulation-based optimization of Markov decision processes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9660.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 127-129).
Markov decision processes have been a popular paradigm for sequential decision making under uncertainty. Dynamic programming provides a framework for studying such problems, as well as for devising algorithms to compute an optimal control policy. Dynamic programming methods rely on a suitably defined value function that has to be computed for every state in the state space. However, many interesting problems involve very large state spaces ("curse of dimensionality"), which prohibits the application of dynamic programming. In addition, dynamic programming assumes the availability of an exact model, in the form of transition probabilities ("curse of modeling"). In many situations, such a model is not available and one must resort to simulation or experimentation with an actual system. For all of these reasons, dynamic programming in its pure form may be inapplicable. In this thesis we study an approach for overcoming these difficulties where we use (a) compact (parametric) representations of the control policy, thus avoiding the curse of dimensionality, and (b) simulation to estimate quantities of interest, thus avoiding model-based computations and the curse of modeling. Furthermore, our approach is not limited to Markov decision processes, but applies to general Markov reward processes for which the transition probabilities and the one-stage rewards depend on a tunable parameter vector θ. We propose gradient-type algorithms for updating θ based on the simulation of a single sample path, so as to improve a given performance measure. As possible performance measures, we consider the weighted reward-to-go and the average reward. The corresponding algorithms (a) can be implemented online and update the parameter vector either at visits to a certain state or at every time step, and (b) have the property that the gradient (with respect to θ) of the performance measure converges to 0 with probability 1. This is the strongest possible result for gradient-related stochastic approximation algorithms.
by Peter Marbach.
Ph.D.
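A heavily simplified Python sketch of the single-sample-path, likelihood-ratio idea (a generic REINFORCE-style update of a softmax policy on a hypothetical two-state MDP, not the algorithms analysed in the thesis):

    import numpy as np

    rng = np.random.default_rng(1)

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    # Hypothetical 2-state, 2-action MDP (numbers are illustrative only).
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # transitions under action 0
                  [[0.5, 0.5], [0.6, 0.4]]])    # transitions under action 1
    R = np.array([[1.0, 0.0], [0.0, 2.0]])      # R[a, s]

    theta = np.zeros((2, 2))                     # one policy score per (state, action)
    alpha, gamma = 0.01, 0.95

    for episode in range(2000):
        s, grads, rewards = 0, [], []
        for t in range(50):                      # simulate a single sample path
            probs = softmax(theta[s])
            a = rng.choice(2, p=probs)
            g = np.zeros_like(theta)             # d/dtheta log pi(a | s)
            g[s] = -probs
            g[s, a] += 1.0
            grads.append(g)
            rewards.append(R[a, s])
            s = rng.choice(2, p=P[a, s])
        G = 0.0
        for t in reversed(range(50)):            # reward-to-go weighted gradient ascent
            G = rewards[t] + gamma * G
            theta += alpha * G * grads[t]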
APA, Harvard, Vancouver, ISO, and other styles
45

Winder, Lee F. (Lee Francis) 1973. "Hazard avoidance alerting with Markov decision processes." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28860.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004.
Includes bibliographical references (p. 123-125).
This thesis describes an approach to designing hazard avoidance alerting systems based on a Markov decision process (MDP) model of the alerting process, and shows its benefits over standard design methods. One benefit of the MDP method is that it accounts for future decision opportunities when choosing whether or not to alert, or in determining resolution guidance. Another benefit is that it provides a means of modeling uncertain state information, such as unmeasurable mode variables, so that decisions are more informed. A mode variable is an index for distinct types of behavior that a system exhibits at different times. For example, in many situations normal system behavior tends to be safe, but rare deviations from the normal increase the likelihood of a harmful incident. Accurate modeling of mode information is needed to minimize alerting system errors such as unnecessary or late alerts. The benefits of the method are illustrated with two alerting scenarios where a pair of aircraft must avoid collisions when passing one another. The first scenario has a fully observable state and the second includes an uncertain mode describing whether an intruder aircraft levels off safely above the evader or is in a hazardous blunder mode. In MDP theory, outcome preferences are described in terms of utilities of different state trajectories. In keeping with this, alerting system requirements are stated in the form of a reward function. This is then used with probabilistic dynamic and sensor models to compute an alerting logic (policy) that maximizes expected utility. Performance comparisons are made between the MDP-based logics and alternate logics generated with current methods. It is found that in terms of traditional performance measures (incident rate and unnecessary alert rate), the MDP-based logic can meet or exceed that of alternate logics.
by Lee F. Winder.
Ph.D.
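The role of the uncertain mode can be illustrated with a toy myopic calculation (hypothetical costs and probabilities, not the thesis's aircraft model): alert only when the expected cost of staying silent, weighted by the belief that the intruder is blundering, exceeds the expected cost of an unnecessary alert.

    def should_alert(p_blunder,
                     cost_incident=100.0,          # cost of an unalerted incident
                     cost_unnecessary_alert=1.0,   # cost of alerting when safe
                     p_incident_if_silent=0.5):    # chance a blunder ends badly without an alert
        """Myopic alert decision given a belief that the intruder is blundering.
        All numbers are hypothetical; a full MDP treatment would also value
        waiting for further measurements before committing."""
        expected_cost_silent = p_blunder * p_incident_if_silent * cost_incident
        expected_cost_alert = (1.0 - p_blunder) * cost_unnecessary_alert
        return expected_cost_silent > expected_cost_alert

    print(should_alert(0.001))   # False: belief in the blunder mode is too weak
    print(should_alert(0.10))    # True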
APA, Harvard, Vancouver, ISO, and other styles
46

Vera, Ruiz Victor. "Recoding of Markov Processes in Phylogenetic Models." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/13433.

Full text
Abstract:
Under a Markov model of evolution, lumping the state space (S) into fewer groups has been historically used to focus on specific types of substitutions or to reduce compositional heterogeneity and saturation. However, working with reduced state spaces (S’) may yield misleading results unless the Markovian property is kept. A Markov process X(t) is lumpable if the reduced process X’(t) of S’ is Markovian. The aim of this Thesis is to develop a test able to detect if a given X(t) is lumpable with respect to a given S’. This test should allow flexibility to any possible non-trivial S’ and should not depend on evolutionary assumptions such as stationarity, homogeneity or reversibility (SHR conditions) over a phylogenetic tree. We developed three tests for lumpability for SHR Markovian processes on two taxa and compared them: one using an ad hoc statistic based on an index that is evaluated using a bootstrap approximation of its distribution; one based on a test proposed specifically for Markov chains; and one using a likelihood-ratio (LR) test. We show that the LR test is more powerful than the other two tests, and that it can be applied in all pairs of taxa for binary trees with more than two taxa under SHR conditions. Then, we generalized the LR test for cases where the SHR conditions may not hold. We show that the distribution of this test statistic approximates a chi square with a number of degrees of freedom equal to the number of different rate matrices in the tree by two. In all cases, we show that if X(t) is lumpable, the obtained estimates for X’(t) agree with the obtained estimates for X(t), whereas, if X(t) is not lumpable, these estimates can differ substantially. We conclude that lumping S may result in biased phylogenetic estimates if the original X(t) is not lumpable. Accordingly, testing for lumpability should be done prior to any phylogenetic analysis of recoded data.
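To make the lumpability condition concrete, the following Python sketch checks strong lumpability of a rate matrix with respect to a given partition of the state space; it illustrates the definition only and is not the likelihood-ratio test developed in the thesis.

    import numpy as np

    def is_strongly_lumpable(Q, partition, tol=1e-9):
        """Check strong lumpability of a rate (or transition) matrix Q.

        partition : list of lists of state indices, e.g. [[0, 1], [2, 3]]
        Lumpability requires that, for every pair of blocks (B, C), all states
        in B have the same aggregated rate sum_{j in C} Q[i, j].
        """
        for block in partition:
            for target in partition:
                aggregated = Q[np.ix_(block, target)].sum(axis=1)
                if aggregated.max() - aggregated.min() > tol:
                    return False
        return True

    # Recoding a symmetric 4-state chain into two two-state groups.
    Q = np.array([[-3.0,  1.0,  1.0,  1.0],
                  [ 1.0, -3.0,  1.0,  1.0],
                  [ 1.0,  1.0, -3.0,  1.0],
                  [ 1.0,  1.0,  1.0, -3.0]])
    print(is_strongly_lumpable(Q, [[0, 1], [2, 3]]))   # True for this symmetric model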
APA, Harvard, Vancouver, ISO, and other styles
47

Yu, Huizhen Ph D. Massachusetts Institute of Technology. "Approximate solution methods for partially observable Markov and semi-Markov decision processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35299.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 165-169).
We consider approximation methods for discrete-time infinite-horizon partially observable Markov and semi-Markov decision processes (POMDP and POSMDP). One of the main contributions of this thesis is a lower cost approximation method for finite-space POMDPs with the average cost criterion, and its extensions to semi-Markov partially observable problems and constrained POMDP problems, as well as to problems with the undiscounted total cost criterion. Our method is an extension of several lower cost approximation schemes, proposed individually by various authors, for discounted POMDP problems. We introduce a unified framework for viewing all of these schemes together with some new ones. In particular, we establish that due to the special structure of hidden states in a POMDP, there is a class of approximating processes, which are either POMDPs or belief MDPs, that provide lower bounds to the optimal cost function of the original POMDP problem. Theoretically, POMDPs with the long-run average cost criterion are still not fully understood. The major difficulties relate to the structure of the optimal solutions, such as conditions for a constant optimal cost function, the existence of solutions to the optimality equations, and the existence of optimal policies that are stationary and deterministic. Thus, our lower bound result is useful not only in providing a computational method, but also in characterizing the optimal solution. We show that regardless of these theoretical difficulties, lower bounds of the optimal liminf average cost function can be computed efficiently by solving modified problems using multichain MDP algorithms, and the approximating cost functions can be also used to obtain suboptimal stationary control policies. We prove the asymptotic convergence of the lower bounds under certain assumptions. For semi-Markov problems and total cost problems, we show that the same method can be applied for computing lower bounds of the optimal cost function. For constrained average cost POMDPs, we show that lower bounds of the constrained optimal cost function can be computed by solving finite-dimensional LPs. We also consider reinforcement learning methods for POMDPs and MDPs. We propose an actor-critic type policy gradient algorithm that uses a structured policy known as a finite-state controller. We thus provide an alternative to the earlier actor-only algorithm GPOMDP. Our work also clarifies the relationship between the reinforcement learning methods for POMDPs and those for MDPs. For average cost MDPs, we provide a convergence and convergence rate analysis for a least squares temporal difference (TD) algorithm, called LSPE, and previously proposed for discounted problems. We use this algorithm in the critic portion of the policy gradient algorithm for POMDPs with finite-state controllers. Finally, we investigate the properties of the limsup and liminf average cost functions of various types of policies. We show various convexity and concavity properties of these cost functions, and we give a new necessary condition for the optimal liminf average cost to be constant. Based on this condition, we prove the near-optimality of the class of finite-state controllers under the assumption of a constant optimal liminf average cost. This result provides a theoretical guarantee for the finite-state controller approach.
by Huizhen Yu.
Ph.D.
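The belief-MDP view mentioned above turns a POMDP into an MDP over probability distributions on the hidden state. A minimal Bayes-filter belief update, written as a generic sketch rather than any of the thesis's approximation schemes:

    import numpy as np

    def belief_update(belief, action, observation, P, O):
        """One step of the Bayes filter that drives a belief MDP.

        belief : shape (S,), current distribution over hidden states
        P      : shape (A, S, S), P[a, s, s'] = transition probability
        O      : shape (A, S, Z), O[a, s', z] = observation probability
        """
        predicted = belief @ P[action]              # sum_s b(s) P(s' | s, a)
        unnormalized = predicted * O[action][:, observation]
        total = unnormalized.sum()
        if total == 0.0:
            raise ValueError("observation has zero probability under this belief")
        return unnormalized / total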
APA, Harvard, Vancouver, ISO, and other styles
48

Ciolek, Gabriela. "Bootstrap and uniform bounds for Harris Markov chains." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT024/document.

Full text
Abstract:
This thesis concentrates on some extensions of empirical processes theory when the data are Markovian. More specifically, we focus on some developments of bootstrap, robustness and statistical learning theory in a Harris recurrent framework. Our approach relies on the regenerative methods that boil down to division of sample paths of the regenerative Markov chain under study into independent and identically distributed (i.i.d.) blocks of observations. These regeneration blocks correspond to path segments between random times of visits to a well-chosen set (the atom) forming a renewal sequence. In the first part of the thesis we derive uniform bootstrap central limit theorems for Harris recurrent Markov chains over uniformly bounded classes of functions. We show that the result can be generalized also to the unbounded case. We use the aforementioned results to obtain uniform bootstrap central limit theorems for Fréchet differentiable functionals of Harris Markov chains. Propelled by vast applications, we discuss how to extend some concepts of robustness from the i.i.d. framework to a Markovian setting. In particular, we consider the case when the data are piecewise-deterministic Markov processes. Next, we propose the residual and wild bootstrap procedures for periodically autoregressive processes and show their consistency. In the second part of the thesis we establish maximal versions of Bernstein, Hoeffding and polynomial tail type concentration inequalities. We obtain the inequalities as a function of covering numbers and moments of time returns and blocks. Finally, we use those tail inequalities to derive generalization bounds for minimum volume set estimation for regenerative Markov chains.
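The regenerative method can be sketched in a few lines of Python: cut the trajectory at successive visits to the atom, treat the blocks as i.i.d., and resample whole blocks rather than individual observations. The sketch below assumes the simplest case of an atom consisting of a single recurrent state; the chain and the functional are hypothetical.

    import numpy as np

    def regeneration_blocks(path, atom):
        """Split a trajectory into blocks between successive visits to the atom."""
        hits = [t for t, x in enumerate(path) if x == atom]
        return [path[hits[k]:hits[k + 1]] for k in range(len(hits) - 1)]

    def regenerative_bootstrap_mean(path, atom, n_boot=1000, seed=0):
        """Bootstrap the stationary mean by resampling whole regeneration blocks."""
        rng = np.random.default_rng(seed)
        blocks = regeneration_blocks(path, atom)
        sums = np.array([sum(b) for b in blocks], dtype=float)
        lens = np.array([len(b) for b in blocks], dtype=float)
        estimates = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(blocks), size=len(blocks))
            estimates.append(sums[idx].sum() / lens[idx].sum())   # ratio-of-sums estimator
        return np.array(estimates)

    # Illustrative chain: a random walk reflected on {0, ..., 5}; state 0 is the atom.
    rng = np.random.default_rng(1)
    x, path = 0, []
    for _ in range(20_000):
        path.append(x)
        x = min(5, max(0, x + rng.choice([-1, 1])))
    boot = regenerative_bootstrap_mean(path, atom=0)
    print(boot.mean(), boot.std())   # bootstrap distribution of the stationary mean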
APA, Harvard, Vancouver, ISO, and other styles
49

Wortman, M. A. "Vacation queues with Markov schedules." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/54468.

Full text
Abstract:
Vacation systems represent an important class of queueing models having application in both computer communication systems and integrated manufacturing systems. By specifying an appropriate server scheduling discipline, vacation systems are easily particularized to model many practical situations where the server's effort is divided between primary and secondary customers. A general stochastic framework that subsumes a wide variety of server scheduling disciplines for the M/GI/1/L vacation system is developed. Here, a class of server scheduling disciplines, called Markov schedules, is introduced. It is shown that the queueing behavior of M/GI/1/L vacation systems having Markov schedules is characterized by a queue length/server activity marked point process that is Markov renewal and a joint queue length/server activity process that is semi-regenerative. These processes allow characterization of both the transient and ergodic queueing behavior of vacation systems as seen immediately following customer service completions, immediately following server vacation completions, and at arbitrary times. The state space of the joint queue length/server activity process can be systematically particularized so as to model most server scheduling disciplines appearing in the literature and a number of disciplines that do not appear in the literature. The Markov renewal nature of the queue length/server activity marked point process yields important results that offer convenient computational formulae. These computational formulae are employed to investigate the ergodic queue length of several important vacation systems; a number of new results are introduced. In particular, the M/GI/1 vacation system with limited batch service is investigated for the first time, and the probability generating functions for queue length as seen immediately following service completions, immediately following vacation completions, and at arbitrary times are developed.
Ph. D.
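For readers unfamiliar with vacation models, the basic mechanism can be seen in a short simulation: whenever the server empties the system it leaves for random vacations, and waiting times grow by a corresponding vacation term. The Python sketch below is a generic M/M/1 queue with multiple exponential vacations, not one of the Markov-schedule disciplines analysed in the thesis.

    import random

    def mm1_with_vacations(n_customers=200_000, lam=0.5, mu=1.0,
                           mean_vacation=0.5, seed=0):
        """Mean waiting time in an M/M/1 queue with multiple server vacations.

        Poisson(lam) arrivals, exponential(mu) services, exponential vacations
        with the given mean.  Whenever the server empties the system it keeps
        taking vacations until it returns to find a customer waiting.
        """
        rng = random.Random(seed)
        arrival = free = total_wait = 0.0
        for _ in range(n_customers):
            arrival += rng.expovariate(lam)
            if arrival >= free:          # system was empty when the server last finished
                while free <= arrival:   # successive vacations until one ends after the arrival
                    free += rng.expovariate(1.0 / mean_vacation)
            total_wait += free - arrival # FIFO: service starts when the server is next free
            free += rng.expovariate(mu)
        return total_wait / n_customers

    # Decomposition check: E[W] = rho/(mu - lam) + E[V^2]/(2 E[V]) = 1.0 + 0.5 here.
    print(mm1_with_vacations())   # close to 1.5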
APA, Harvard, Vancouver, ISO, and other styles
50

Dassios, Angelos. "Insurance, storage and point processes: an approach via piecewise deterministic Markov processes." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
