Dissertations / Theses on the topic 'Markov chains'


Consult the top 50 dissertations / theses for your research on the topic 'Markov chains.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Skorniakov, Viktor. "Asymptotically homogeneous Markov chains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101223_152954-43357.

Abstract:
The dissertation investigates a class of Markov chains defined by iterations of random functions possessing the property of asymptotic homogeneity. Two problems are solved: 1) rather general conditions are established under which the chain has a unique stationary distribution; 2) for chains evolving on the real line, conditions are established under which the stationary distribution of the chain is heavy-tailed.
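For a concrete (hypothetical) instance of such a chain, the stochastic recurrence X_{n+1} = A_n X_n + B_n with random coefficients is asymptotically homogeneous, and by Kesten-type results its stationary law is heavy-tailed when E[log A] < 0 yet A exceeds 1 with positive probability; a minimal simulation sketch in Python:

    import random

    def simulate_recurrence(n_steps=200_000, burn_in=1_000, seed=0):
        """Iterate X_{k+1} = A_k X_k + B_k (an asymptotically homogeneous map).

        Here E[log A] < 0, so a unique stationary law exists, while A_k > 1
        with positive probability makes that law heavy-tailed (Kesten theory).
        """
        rng = random.Random(seed)
        x, samples = 1.0, []
        for k in range(n_steps):
            a = rng.choice([0.5, 1.4])   # E[log A] = (log 0.5 + log 1.4) / 2 < 0
            b = rng.expovariate(1.0)     # positive innovations
            x = a * x + b
            if k >= burn_in:
                samples.append(x)
        return sorted(samples)

    samples = simulate_recurrence()
    # Crude tail check: upper quantiles grow far faster than for a light tail.
    for q in (0.5, 0.9, 0.99, 0.999):
        print(q, round(samples[int(q * (len(samples) - 1))], 1))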
2

Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.

Abstract:

A finite, homogeneous, irreducible Markov chain with transition probability matrix $P$ possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.

The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which is related to the second question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part gives answers to the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).
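As a minimal illustration of the first question, the stationary vector of a small irreducible chain can be computed by solving pi P = pi together with the normalisation sum(pi) = 1 (the matrix below is an arbitrary example, not one from the thesis):

    import numpy as np

    # Arbitrary irreducible 3-state transition matrix (rows sum to 1).
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.2, 0.3, 0.5]])

    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with sum(pi) = 1
    # and solve the overdetermined system by least squares.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.append(np.zeros(n), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    print("stationary distribution:", pi)   # residual pi @ P - pi is ~0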

3

Dessain, Thomas James. "Perturbations of Markov chains." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10619/.

Abstract:
This thesis is concerned with studying the hitting time of an absorbing state on Markov chain models that have a countable state space. For many models it is challenging to study the hitting time directly; I present a perturbative approach that allows one to uniformly bound the difference between the hitting time moment generating functions of two Markov chains in a neighbourhood of the origin. I demonstrate how this result can be applied to both discrete and continuous time Markov chains. The motivation for this work came from the field of biology, namely DNA damage and repair. Biophysicists have highlighted that the repair process can lead to Double Strand Breaks; due to the serious nature of such an eventuality it is important to understand the hitting time of this event. There is a phase transition in the model that I consider. In the regime of parameters where the process reaches quasi-stationarity before being absorbed I am able to apply my perturbative technique in order to further understand this hitting time.
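For a finite chain, the moments being bounded here can be computed exactly and used as a baseline; e.g. the expected hitting time h of an absorbing state solves (I - Q) h = 1, where Q restricts the transition matrix to the transient states. A sketch on a made-up 3-state chain with state 2 absorbing:

    import numpy as np

    # Made-up chain: states 0 and 1 are transient, state 2 is absorbing.
    P = np.array([[0.90, 0.09, 0.01],
                  [0.05, 0.90, 0.05],
                  [0.00, 0.00, 1.00]])

    Q = P[:2, :2]  # sub-matrix of transitions among the transient states
    h = np.linalg.solve(np.eye(2) - Q, np.ones(2))
    print("expected hitting times of state 2 from states 0 and 1:", h)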
4

Di Cecco, Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/1/Di_Cecco_Davide_Tesi.pdf.

5

Di Cecco, Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/.

6

Matthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.

Abstract:
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space, and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution. The number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general graphs and in more restricted classes of graphs, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances. For a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling. The bound on the mixing time depends on a contraction ratio b. Ideally b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which, when b = 1 and the probability of a change in distance is proportional to the distance between the two states, obtains better bounds on the mixing time than existing theorems. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2D-colourings.
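A sketch of the simplest single-site chain on matchings (the standard add/remove walk, written here under the assumption of a uniform target; the thesis's chains and analyses are more refined): pick a uniformly random edge, remove it if it is in the current matching, add it if both endpoints are uncovered, and otherwise stay put. The chain is symmetric, so its stationary distribution is uniform over all matchings of the graph.

    import random

    def matching_chain(edges, steps=100_000, seed=0):
        """Single-site add/remove Markov chain on the matchings of a graph."""
        rng = random.Random(seed)
        matching, covered = set(), set()
        for _ in range(steps):
            u, v = e = rng.choice(edges)
            if e in matching:
                matching.remove(e)          # single-site 'remove' move
                covered -= {u, v}
            elif u not in covered and v not in covered:
                matching.add(e)             # single-site 'add' move
                covered |= {u, v}
            # otherwise the move is rejected and the chain holds
        return matching

    # A 4-cycle on vertices 0..3; after many steps this is a near-uniform sample.
    print(matching_chain([(0, 1), (1, 2), (2, 3), (3, 0)]))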
7

Wilson, David Bruce. "Exact sampling with Markov chains." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38402.

8

Mestern, Mark Andrew. "Distributed analysis of Markov chains." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/9693.

Abstract:
Bibliography: leaves 88-91.
This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: state space generation from a state transition model to form the Markov chain, and finding performance information by solving the steady state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queuing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage, and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
9

Salzman, Julia. "Spectral analysis with Markov chains /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

10

Dorff, Rebecca. "Modelling Infertility with Markov Chains." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4070.

Abstract:
Infertility affects approximately 15% of couples. Testing and interventions are costly in time, money, and emotional energy. This paper discusses using Markov decision and multi-armed bandit processes to identify a systematic approach to interventions that will lead to the desired baby while minimizing costs.
11

Elsayad, Amr Lotfy. "Numerical solution of Markov Chains." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2056.

12

Sudyko, Elena. "Dollarisation financière en Russie." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE032.

Abstract:
This thesis develops a portfolio model of financial dollarization (FD) and estimates it for Russia. The contribution of this work is to construct the first theoretical mean-variance-skewness-kurtosis model of financial dollarization and to validate it empirically. The work builds on previous research which found that adding higher moments, such as skewness and kurtosis, to the minimum variance portfolio (MVP) enables better modelling of portfolio choice, and develops such a model for FD. We then use Markov-switching methods on monthly data for bank deposits in Russia since the late 1990s to document the dominant influence of inflation and currency depreciation and their moments as the main determinants of deposit dollarization in a mean-variance-skewness-kurtosis framework during crisis periods as opposed to normal periods.
13

Sisson, Scott Antony. "Markov chains for genetics and extremes." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391095.

14

Trovato, Manlio Battaglia. "Interest rate models with Markov chains." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/8805.

15

Fitzpatrick, Matthew Anthony. "Multi-regime models involving Markov chains." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/14530.

Abstract:
In this work, we explore the theory and applications of various multi-regime models involving Markov chains. Markov chains are an elegant way to model path-dependent data. We study a series of problems with non-homogeneous data and the various ways that Markov chains come into play. Non-homogeneous data can be modelled using multi-regime models, which apply a distinct set of parameters to distinct population sub-groups, referred to as regimes. Such models essentially allow a practitioner to understand the nature (and in some cases the existence) of particular regimes within the data without the need to split the population into assumed sub-groups. An example is the problem of modelling business outcomes in different economic states without explicitly using economic variables. Different regimes can apply to an entire population at different times, or they can apply to different subsections of the population over the whole observed time. Markov chains are involved via the estimation procedure or within models for the observed data. In our first two problems, we utilise the properties of Markov chains to discover and establish efficiencies in the estimation algorithms. In our third problem, we analyse mixtures of Markov chains and prove that the log-likelihood ratio test statistic for the test between 1 and 2 mixture components diverges to infinity in probability. In our fourth problem, we look at a simple case where each Markov chain component has two states, one of which is absorbing, and derive the exact limiting distribution of the log-likelihood ratio test statistic. Although this work is largely focussed on addressing the theoretical issues of each problem, the motivation behind each of the problems studied comes from real datasets, which possess levels of complexity that are insufficiently described through more standard procedures.
16

Carpio, Kristine Joy Espiritu. "Long-Range Dependence of Markov Processes." The Australian National University. School of Mathematical Sciences, 2006. http://thesis.anu.edu.au./public/adt-ANU20061024.131933.

Abstract:
Long-range dependence in discrete and continuous time Markov chains over a countable state space is defined via embedded renewal processes brought about by visits to a fixed state. In the discrete time chain, solidarity properties are obtained and long-range dependence of functionals is examined. On the other hand, long-range dependence of continuous time chains is defined via the number of visits in a given time interval. Long-range dependence of Markov chains over a non-countable state space is also carried out through positive Harris chains. Embedded renewal processes in these chains exist via visits to sets of states called proper atoms. Examples of these chains are presented, with particular attention given to long-range dependent Markov chains in single-server queues, namely, the waiting times of GI/G/1 queues and queue lengths at departure epochs in M/G/1 queues. The presence of long-range dependence in these processes depends on the moment index of the lifetime distribution of the service times. The Hurst indexes are obtained under certain conditions on the distribution function of the service times and the structure of the correlations. These processes of waiting times and queue sizes are also examined in a range of M/P/2 queues via simulation (here, P denotes a Pareto distribution).
17

Skariah, Emil. "Mobile Phone Context Prediction Using Markov Chains." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18965.

18

Zapreev, I. S. "Model checking Markov chains: techniques and tools." Enschede: University of Twente [Host], 2008. http://doc.utwente.nl/58974.

19

Alharbi, Randa. "Bayesian inference for continuous time Markov chains." Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/40972/.

Abstract:
Continuous time Markov chains (CTMCs) are a flexible class of stochastic models that have been employed in a wide range of applications, from timing of computer protocols, through analysis of reliability in engineering, to models of biochemical networks in molecular biology. These models are defined as a state system with continuous time transitions between the states. Extensive work has been performed historically to enable convenient and flexible definition, simulation, and analysis of continuous time Markov chains. This thesis considers the problem of Bayesian parameter inference on these models and investigates computational methodologies to enable such inference. Bayesian inference over continuous time Markov chains is particularly challenging as the likelihood cannot be evaluated in closed form. To overcome the statistical problems associated with evaluation of the likelihood, advanced algorithms based on Monte Carlo have been used to enable Bayesian inference without explicit evaluation of the likelihoods. An additional class of approximation methods, known as approximate Bayesian computation (ABC), has been suggested to handle such inference problems. Novel Markov chain Monte Carlo (MCMC) approaches were recently proposed to allow exact inference. The contribution of this thesis is in discussion of the techniques and challenges in implementing these inference methods and performing an extensive comparison of these approaches on two case studies in systems biology. We investigate how the algorithms can be designed and tuned to work on CTMC models, and to achieve an accurate estimate of the posteriors with reasonable computational cost. Through this comparison, we investigate how to avoid some practical issues with accuracy and computational cost, for example by selecting an optimal proposal distribution and introducing a resampling step within the sequential Monte Carlo method. Within the implementation of the ABC methods we investigate using an adaptive tolerance schedule to maximise the efficiency of the algorithm and to reduce the computational cost.
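All of the compared methods rest on being able to simulate the CTMC forward even though the likelihood is intractable; a minimal sketch of exact simulation (the Gillespie algorithm) for a toy birth-death process with made-up rates:

    import random

    def gillespie_birth_death(birth=2.0, death=0.1, x0=10, t_max=50.0, seed=1):
        """Exact simulation of a birth-death CTMC (hypothetical rates).

        From state x, wait an Exp(total rate) time, then pick birth or
        death with probability proportional to its rate.
        """
        rng = random.Random(seed)
        t, x, path = 0.0, x0, [(0.0, x0)]
        while t < t_max:
            rates = (birth, death * x)       # constant birth, linear death
            total = sum(rates)
            if total == 0.0:
                break                        # no further events can occur
            t += rng.expovariate(total)
            x += 1 if rng.random() < rates[0] / total else -1
            path.append((t, x))
        return path

    print("final (time, state):", gillespie_birth_death()[-1])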
20

Lo, Harry Chung Heng. "Markov chains and the pricing of derivatives." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/5508.

Abstract:
A numerical method for pricing financial derivatives based on continuous-time Markov chains is proposed. It approximates the underlying stochastic process by a continuous-time Markov chain. We show how to construct a multi-dimensional continuous-time Markov chain such that it converges in distribution to a multi-dimensional diffusion process. The method is flexible enough to be applied to a model where the underlying process contains local volatility, stochastic volatility and jumps. Furthermore, we introduce a method to approximate the dynamics of the realized variance of a Markov chain and an algorithm to reduce the complexity of computing the joint probability distribution between the realized variance and the underlying.
21

Szczegot, Kamil. "Sharp approximation for density dependent Markov chains /." May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

22

Lamprecht, Ruth Elizabeth. "Translating Spatial Problems into Lumpable Markov Chains." W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539720328.

Abstract:
Real world systems are frequently of interest to researchers because the fluctuation in the status of the system makes it important to explore the causes and effects of that fluctuation. These systems usually include stochastic and dynamic processes because the conditional changes occur over time. Such systems can also include spatial aspects, such as the location of units or some notion of distance between units. Modeling real systems allows researchers to explore these dynamics via mathematical models by describing the states and actions. Model exploration can also better explain the overall behavior as well as predict future behavior. If these states can be described as a discrete set having a defined rule set for state transitions, continuous-time Markov chains are a common mathematical model for studying systems in many application areas. For example, a wireless sensor network is comprised of many individual sensors, which, at a basic level, can be considered to have two states, on and off, and a definable probability that a sensor will move from its current state to the opposite state at any given moment. A spatial aspect that can be included is the distance between sensors and its effect on the transition probability. Then the continuous-time Markov chain could consist of all the possible states of the system in order to study how many and/or which sensors are on. Other example applications include explaining dynamic behavior in biological/biochemical systems or exploring the behavior of interactions in social network systems. One goal of this research is to consider how to represent the spatial aspects of real-world systems, and how to take advantage of the regularities that are linked to the spatial aspects. It is necessary that this be defined in a way that is reasonable for the modeler to specify. This problem is approached through a compositional modeling formalism based on sharing state variables. Another aim is to use a symmetry detection mechanism to gain reduction of the Markov chain through lumpability, with the symmetries defined by the modeler, in order to expedite analysis of the model. Finally, the ability to do this detection in an automated manner is discussed and explored. A result of this research is that the spatial information in a model can be useful to include. It is shown that a new methodology makes it easier to do so, in terms of both specifying the extra information required and minimizing the size of the resulting state space. To illustrate the usefulness of the spatial information, a new user interface is presented here to facilitate greater inclusion of the spatial aspects into the model, implemented in the Mobius modeling framework as an extension of existing work on state space reduction. Evaluation is done using three main applications: calcium release sites (biology), wireless sensor networks (networking), and disease propagation (medicine).
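A toy sketch of the lumping that this symmetry detection enables (ordinary strong lumpability on a made-up discrete-time matrix, not the Mobius implementation): a partition is lumpable when every state of a block sends the same total probability into each other block, and the reduced chain is built from those block sums.

    import numpy as np

    def lump(P, blocks, tol=1e-12):
        """Check strong lumpability of P for a partition and build the lumped chain."""
        k = len(blocks)
        Q = np.zeros((k, k))
        for a, A in enumerate(blocks):
            for b, B in enumerate(blocks):
                mass = P[np.ix_(A, B)].sum(axis=1)  # into-B mass from each state of A
                if np.ptp(mass) > tol:
                    raise ValueError("partition is not strongly lumpable")
                Q[a, b] = mass[0]
        return Q

    # Two symmetric on/off sensors: states (on,on), (on,off), (off,on), (off,off).
    # Swapping the sensors is a symmetry, so states 1 and 2 can be lumped.
    P = np.array([[0.6, 0.1, 0.1, 0.2],
                  [0.2, 0.5, 0.1, 0.2],
                  [0.2, 0.1, 0.5, 0.2],
                  [0.3, 0.2, 0.2, 0.3]])
    print(lump(P, [[0], [1, 2], [3]]))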
23

Lystig, Theodore C. "Evaluation of hidden Markov models /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/9597.

24

Kaimanovich, Vadim A., and Wolfgang Woess. "Boundary and Entropy of Space Homogeneous Markov Chains." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1010.ps.

25

Parks, Kevin Preston. "Geosystem modeling with Markov chains and simulated annealing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0004/NQ31063.pdf.

26

Wang, Jianzhong. "Eigenvectors for infinite Markov chains and dimension groups." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0017/NQ48118.pdf.

27

Pandya, Chirag. "Decomposing Large Markov Chains for Statistical Usage Testing." Honors in the Major Thesis, University of Central Florida, 2000. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/681.

Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf
Bachelors
Engineering and Computer Science
Computer Engineering
28

Nguyen, Tuyet Mai. "Malliavin calculus for Markov chains and counterparty risk." Thesis, Evry-Val d'Essonne, 2015. http://www.theses.fr/2015EVRY0022/document.

Abstract:
This thesis deals with two areas of stochastic analysis and mathematical finance: Malliavin calculus for Markov chains (Part I) and counterparty risk (Part II). Part I is devoted to the study of Malliavin calculus for continuous-time Markov chains, in two respects: proving the existence of a density for the solution of a stochastic differential equation and computing sensitivities of financial derivatives. Part II addresses topical issues in interest rates and credit, namely XVA (pricing adjustments) and multi-curve modeling.
29

Franz, David Matthew. "Markov Chains as Tools for Jazz Improvisation Analysis." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36831.

Abstract:
This thesis describes an exploratory application of a statistical analysis and modeling technique (Markov chains) to the modeling of jazz improvisation, with the intended sub-objective of providing increased insight into an improviser's style and creativity through the postulation of quantitative measures of style and creativity based on the constructed Markovian analysis techniques. Using the Visual Basic programming language, Markov chains of orders one to three are created using transcriptions of improvised solos by John Coltrane in his song Giant Steps. Still considered as statistical data, the Markov chains are examined and information is extracted from them through the development of several statistical tools for musical analysis. Two general categories of tools for analysis are developed: subtraction matrices and graphical comparisons of distributions. Using these tools and the raw Markov chain data for musical analysis, quantitative measures for creativity and style are postulated. These measures are based on previously developed models and definitions of creativity and style taken from the literature. The information acquired from the implementation of the analysis tools is applied to the models in order to provide a theoretical basis for the development of the quantitative measures and a framework for the interpretation of the information. Guilford's Structure of Intellect model is used for developing creativity measures, and Heen's model of the constructs of style analysis is used for determining measures of style. Overall, this research found that Markov chains provide distinct and useful information for musical analysis in the domain of jazz improvisation. Many examples of Markov chains are enumerated and tools for analysis are developed that implement the Markov chains. It is then explained how Markov chains and the tools for their analysis can be interpreted to determine quantitative measures of creativity and style. Finally, this thesis presents conclusions on Markov chain portrayals, new analysis tools and procedures, quantitative measures of creativity and style, and, in sum, that Markovian modeling is in fact a reasonable and useful modeling approach for this application.
Master of Science
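A stripped-down sketch of the core construction, an order-one chain over pitches (the line below is made up; it merely stands in for a Coltrane transcription):

    import random
    from collections import Counter, defaultdict

    def build_chain(notes):
        """Order-1 Markov chain: counts of each pitch -> following pitch."""
        chain = defaultdict(Counter)
        for a, b in zip(notes, notes[1:]):
            chain[a][b] += 1
        return chain

    def improvise(chain, start, length=16, seed=3):
        """Generate a line by sampling successors in proportion to their counts."""
        rng = random.Random(seed)
        line = [start]
        for _ in range(length - 1):
            successors = chain[line[-1]]
            line.append(rng.choices(list(successors),
                                    weights=successors.values())[0])
        return line

    solo = "B D F# A G F# E D C# E G B A G F# E".split()  # made-up, not Coltrane
    print(" ".join(improvise(build_chain(solo), "B")))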
30

Buchman, Monique. "Land use modeling using higher order Markov chains /." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1559852691&sid=15&Fmt=2&clientId=1509&RQT=309&VName=PQD.

31

Krull, Claudia. "Discrete time Markov chains: advanced applications in simulation." Erlangen; San Diego, Calif.: SCS, 2008. http://d-nb.info/992577586/04.

32

Ciolek, Gabriela. "Bootstrap and uniform bounds for Harris Markov chains." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT024/document.

Abstract:
This thesis concentrates on some extensions of empirical processes theory when the data are Markovian. More specifically, we focus on some developments of bootstrap, robustness and statistical learning theory in a Harris recurrent framework. Our approach relies on the regenerative methods that boil down to division of sample paths of the regenerative Markov chain under study into independent and identically distributed (i.i.d.) blocks of observations. These regeneration blocks correspond to path segments between random times of visits to a well-chosen set (the atom) forming a renewal sequence. In the first part of the thesis we derive uniform bootstrap central limit theorems for Harris recurrent Markov chains over uniformly bounded classes of functions. We show that the result can be generalized also to the unbounded case. We use the aforementioned results to obtain uniform bootstrap central limit theorems for Fréchet differentiable functionals of Harris Markov chains. Propelled by vast applications, we discuss how to extend some concepts of robustness from the i.i.d. framework to a Markovian setting. In particular, we consider the case when the data are piecewise-deterministic Markov processes. Next, we propose the residual and wild bootstrap procedures for periodically autoregressive processes and show their consistency. In the second part of the thesis we establish maximal versions of Bernstein, Hoeffding and polynomial tail type concentration inequalities. We obtain the inequalities as a function of covering numbers and moments of time returns and blocks. Finally, we use those tail inequalities to derive generalization bounds for minimum volume set estimation for regenerative Markov chains.
33

Ciolek, Gabriela. "Bootstrap and uniform bounds for Harris Markov chains." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT024.

34

Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.

Abstract:
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method of generating relevant and non-redundant test cases for a regression test suite, to catch bugs as early in the development process as possible. The research was executed at Axis Communications AB with their products and systems in mind. The approach utilizes user data to dynamically generate a Markov chain model and, with a Markov chain Monte Carlo method, strengthen that model. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model towards test coverage or relevancy. The model is generated generically and can therefore be implemented in other API-driven systems. The model was designed with scalability in mind, and further implementations can be made to increase the complexity and further specialize the model for individual needs.
35

Carlsson, Filip. "Can students' progress data be modeled using Markov chains?" Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254285.

Abstract:
In this thesis a Markov chain model is developed that can be used for analysing students' performance and academic progress. Being able to evaluate students' progress is useful for any educational system. It gives a better understanding of how students reason, and it can be used as support for important decisions and planning. Such a tool can help managers of an educational institution establish a more optimal educational policy, which ensures a better position in the educational market. To show that it is reasonable to use a Markov chain model for this purpose, a test of how well the data fit such a model is created and used. The test shows that we cannot reject the hypothesis that the data can be fitted to a Markov chain model.
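A sketch of the estimation step such a model requires (the states and records below are made up, not the thesis's data): the maximum-likelihood transition matrix is simply the row-normalised matrix of transition counts.

    import numpy as np

    STATES = ["active", "behind", "graduated", "dropped"]  # hypothetical labels

    def estimate_transition_matrix(histories, states=STATES):
        """MLE of a Markov transition matrix from observed state sequences."""
        idx = {s: i for i, s in enumerate(states)}
        counts = np.zeros((len(states), len(states)))
        for h in histories:
            for a, b in zip(h, h[1:]):
                counts[idx[a], idx[b]] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    histories = [  # made-up per-semester progress records
        ["active", "active", "behind", "active", "graduated"],
        ["active", "behind", "behind", "dropped"],
        ["active", "active", "active", "graduated"],
    ]
    print(estimate_transition_matrix(histories).round(2))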
36

Kienitz, Jörg. "Convergence of Markov chains via analytic and isoperimetric inequalities." [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=960840664.

37

Greenberg, Sam. "Random sampling of lattice configurations using local Markov chains." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28090.

Abstract:
Thesis (M. S.)--Mathematics, Georgia Institute of Technology, 2009.
Committee Chair: Randall, Dana; Committee Member: Heitsch, Christine; Committee Member: Mihail, Milena; Committee Member: Trotter, Tom; Committee Member: Vigoda, Eric.
38

Heiden, Matthias an der. "Metastability of Markov chains and in the Hopfield model." [S.l.] : [s.n.], 2006. http://opus.kobv.de/tuberlin/volltexte/2007/1447.

39

Dai, Pra Paolo, Pierre-Yves Louis, and Ida Minelli. "Monotonicity and complete monotonicity for continuous-time Markov chains." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/766/.

Abstract:
We analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
40

Torp, Emil, and Patrik Önnegren. "Driving Cycle Generation Using Statistical Analysis and Markov Chains." Thesis, Linköpings universitet, Fordonssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94147.

Abstract:
A driving cycle is a velocity profile over time. Driving cycles can be used for environmental classification of cars and to evaluate vehicle performance. The benefit of using stochastic driving cycles instead of predefined driving cycles, e.g. the New European Driving Cycle, is for instance that the risk of cycle beating is reduced. Different methods to generate stochastic driving cycles based on real-world data have been used around the world, but the representativeness of the generated driving cycles has been difficult to ensure. The possibility to generate stochastic driving cycles that capture specific features from a set of real-world driving cycles is studied. Data from more than 500 real-world trips have been processed and categorized. The driving cycles are merged into several transition probability matrices (TPMs), where each element corresponds to a specific state defined by its velocity and acceleration. The TPMs are used with Markov chain theory to generate stochastic driving cycles. The driving cycles are validated using percentile limits on a set of characteristic variables that are obtained from statistical analysis of real-world driving cycles. The distribution of the generated driving cycles is investigated and compared to the real-world driving cycles' distribution. The generated driving cycles prove to represent the original set of real-world driving cycles in terms of key variables determined through statistical analysis. Four different methods are used to determine which statistical variables describe the features of the provided driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative, and the last method combines the cluster analysis with the regression analysis. The entire process is automated and a graphical user interface is developed in Matlab to facilitate the use of the software.
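A stripped-down sketch of the generation step (states here are velocity bins only, with made-up trips; the thesis's states are velocity-acceleration pairs spread over several TPMs):

    import numpy as np

    def build_tpm(cycles, n_states):
        """Row-normalised transition counts over discretised velocity states."""
        counts = np.zeros((n_states, n_states))
        for cycle in cycles:
            for a, b in zip(cycle, cycle[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def generate_cycle(tpm, rng, start=0, length=30):
        """Sample a synthetic driving cycle state by state from the TPM."""
        states = [start]
        for _ in range(length - 1):
            states.append(rng.choice(len(tpm), p=tpm[states[-1]]))
        return states

    # Made-up recorded trips; bin index = velocity / 10 km/h.
    recorded = [[0, 1, 2, 3, 3, 2, 1, 0],
                [0, 1, 2, 2, 3, 4, 3, 2, 1, 0],
                [0, 1, 1, 2, 3, 2, 1, 0]]
    tpm = build_tpm(recorded, n_states=5)
    print(generate_cycle(tpm, np.random.default_rng(7)))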
41

Stoyanov, Tsvetan I. "Isoperimetric and related constants for graphs and Markov chains." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29456.

42

Denisov, Denis Eduardovich. "Markov chains and random walks with heavy-tailed increments." Thesis, Heriot-Watt University, 2004. http://hdl.handle.net/10399/340.

43

Kamaleson, Nishanthan. "Model reduction techniques for probabilistic verification of Markov chains." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8736/.

Abstract:
Probabilistic model checking is a quantitative verification technique that aims to verify the correctness of probabilistic systems. Nevertheless, it suffers from the so-called state space explosion problem. In this thesis, we propose two new model reduction techniques to improve the efficiency and scalability of verifying probabilistic systems, focusing on discrete-time Markov chains (DTMCs). In particular, our emphasis is on verifying quantitative properties that bound the time or cost of an execution. We also focus on methods that avoid the explicit construction of the full state space. We first present a finite-horizon variant of probabilistic bisimulation for DTMCs, which preserves a bounded fragment of PCTL. We also propose another model reduction technique that reduces what we call linear inductive DTMCs, a class of models whose state space grows linearly with respect to a parameter. All the techniques presented in this thesis were developed in the PRISM model checker. We demonstrate the effectiveness of our work by applying it to a selection of existing benchmark probabilistic models, showing that both of our new approaches can provide significant reductions in model size and in some cases outperform the existing implementations of probabilistic verification in PRISM.
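As a small illustration of the time-bounded properties in question, the probability of reaching a goal state within k steps of an explicit DTMC can be computed by value iteration over the transition matrix (the chain and threshold below are made up):

    import numpy as np

    # Made-up DTMC with goal state 2, kept absorbing for the computation.
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.0, 0.0, 1.0]])

    def bounded_reachability(P, goal, k):
        """x[s] = probability of reaching `goal` from s within k steps."""
        x = np.zeros(len(P))
        x[goal] = 1.0
        for _ in range(k):
            x = P @ x        # one step of value iteration
            x[goal] = 1.0    # the goal, once reached, stays reached
        return x

    probs = bounded_reachability(P, goal=2, k=10)
    print("P(F<=10 goal) per start state:", probs.round(3))
    print("P>=0.6 [ F<=10 goal ] from state 0:", probs[0] >= 0.6)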
44

Borgia, Alessandro. "Il teorema ergodico e le Monte Carlo Markov Chains." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23629/.

Abstract:
Markov chains owe their name to the Russian mathematician Andrej Andreevič Markov, who introduced this class of discrete-time stochastic processes in the early 1900s. His original aim was to show that certain convergence results valid for sequences of independent, identically distributed random variables, such as the law of large numbers, could be extended to processes that admit a small dependence on the past. In later years these chains proved extremely useful in many fields, with applications in physics, biology, electronic engineering and elsewhere. The characterizing property of Markov chains is a kind of loss of memory: the current state of the chain is the maximal information we can have for determining the next state, while earlier states have no influence. The first chapter focuses on discrete Markov chains. The main interest is in homogeneous, irreducible chains, which are classified as positive recurrent, null recurrent or transient. The stationary distribution is also defined, and its existence and uniqueness are proved for positive recurrent chains. The chapter closes with the ergodic theorem, which provides an important convergence result for homogeneous, irreducible, positive recurrent chains. The second chapter begins with a brief treatment of the classical Monte Carlo method, illustrated by the acceptance-rejection method. After introducing the notions of periodicity and reversibility of a Markov chain, the Markov Chain Monte Carlo method is presented for chains taking values in a finite space, together with an example: the Metropolis-Hastings algorithm. The thesis concludes with examples of applications of the algorithm.
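A minimal sketch of the algorithm the thesis closes with, Metropolis-Hastings on a finite space with a symmetric nearest-neighbour proposal (the target weights are arbitrary):

    import random
    from collections import Counter

    def metropolis_hastings(weights, steps=200_000, seed=5):
        """Metropolis-Hastings on {0, ..., n-1} with target pi(i) proportional
        to weights[i]; the +/-1 proposal (with wrap-around) is symmetric, so
        the acceptance probability reduces to pi(proposal) / pi(current)."""
        rng = random.Random(seed)
        n, x, visits = len(weights), 0, Counter()
        for _ in range(steps):
            y = (x + rng.choice((-1, 1))) % n        # propose a neighbour
            if rng.random() < min(1.0, weights[y] / weights[x]):
                x = y                                # accept, else hold
            visits[x] += 1
        return {i: visits[i] / steps for i in range(n)}

    # Empirical frequencies should approach 0.1, 0.2, 0.3, 0.4.
    print(metropolis_hastings([1, 2, 3, 4]))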
45

Duchemin, Quentin. "Growth dynamics of large networks using hidden Markov chains." Thesis, Université Gustave Eiffel, 2022. https://tel.archives-ouvertes.fr/tel-03749513.

Abstract:
The first part of this thesis aims at introducing new models of random graphs that account for the temporal evolution of networks. More precisely, we focus on growth models where at each instant a new node is added to the existing graph. We attribute to this new entrant properties that characterize its connectivity to the rest of the network, and these properties depend only on the previously introduced node. Our random graph models are thus governed by a latent Markovian dynamic characterizing the sequence of nodes in the graph. We are particularly interested in the Stochastic Block Model and in Random Geometric Graphs, for which we propose algorithms to estimate the unknown parameters or functions defining the model. We then show how these estimates allow us to solve link prediction or collaborative filtering problems in networks. The theoretical analysis of the above-mentioned algorithms requires advanced probabilistic tools. In particular, one of our proofs relies on a concentration inequality for U-statistics in a dependent framework. Few papers have addressed this thorny question, and existing works consider sets of assumptions that do not meet our needs. Therefore, the second part of this manuscript is devoted to the proof of a concentration inequality for U-statistics of order two for uniformly ergodic Markov chains. In Chapter 5, we exploit this concentration result for U-statistics to make new contributions to three very active areas of Statistics and Machine Learning. Still motivated by link prediction problems in graphs, we study post-selection inference procedures in the framework of logistic regression with $L^1$ penalty. We prove a central limit theorem under the distribution conditional on the selection event and derive asymptotically valid testing procedures and confidence intervals.
46

Dikkala, Sai Nishanth. "Statistical inference from dependent data : networks and Markov chains." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127016.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 259-270).
In recent decades, the study of high-dimensional probability has taken center stage within many research communities including Computer Science, Statistics and Machine Learning. Very often, due to the process according to which data is collected, the samples in a dataset have implicit correlations amongst them. Such correlations are commonly ignored as a first approximation when trying to analyze statistical and computational aspects of an inference task. In this thesis, we explore how to model such dependences between samples using structured high-dimensional distributions which result from imposing a Markovian property on the joint distribution of the data, namely Markov Random Fields (MRFs) and Markov chains. On MRFs, we explore a quantification for the amount of dependence and we strengthen previously known measure concentration results under a certain weak dependence condition on an MRF called the high-temperature regime. We then go on to apply our novel measure concentration bounds to improve the accuracy of samples computed according to a certain Markov Chain Monte Carlo procedure. We then show how to extend some classical results from statistical learning theory on PAC-learnability and uniform convergence to training data which is dependent under the high temperature condition. Then, we explore the task of regression on data which is dependent according to an MRF under a stronger amount of dependence than is allowed by the high-temperature condition. We then shift our focus to Markov chains where we explore the question of testing whether a certain trajectory we observe corresponds to a chain P or not. We discuss what is a reasonable formulation of this problem and provide a tester which works without observing a trajectory whose length contains multiplicative factors of the mixing or covering time of the chain P. We finally conclude with some broad directions for further research on statistical inference under data dependence.
by Sai Nishanth Dikkala.
47

Nilsson, Albert. "Exploring strategies in Monopoly using Markov chains and simulation." Thesis, Uppsala universitet, Analys och sannolikhetsteori, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420705.

48

Zhou, Hua. "Examples of multivariate Markov chains with orthogonal polynomial eigenfunctions /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

49

Brau, Rojas Agustin. "Controlled Markov chains with risk-sensitive average cost criterion." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284004.

Abstract:
Discrete controlled Markov chains with finite action space and bounded cost per stage are studied in this dissertation. The performance index function, the exponential average cost (EAC), models risk sensitivity by means of an exponential (dis)utility function. First, for the finite state space model, the EAC corresponding to a fixed stationary (deterministic) policy is characterized in terms of the spectral radii of matrices associated with irreducible communicating classes of both recurrent and transient states. This result generalizes a well-known theorem of Howard and Matheson, which treats the particular case in which the Markov cost chain has only one closed class of states. Then, it is shown that under strong recurrence conditions, the risk-sensitive model approaches the risk-null model when the risk-sensitivity coefficient is small. However, it is proved, and also illustrated by means of examples, that in general fundamental differences arise between the two models, e.g., the EAC may depend on the cost structure at the transient states. In particular, the behavior of the EAC for large risk sensitivity is also analyzed. After showing that an exponential average optimality equation holds for the countable state space model, a proof of the existence of solutions to that equation for the finite model under a simultaneous Doeblin condition is provided, which is simpler than that given in recent work of Cavazos-Cadena and Fernandez-Gaucherand. The adverse impact of "large risk-sensitivity" on recently obtained conditions for the existence of solutions to an optimality inequality is illustrated by means of an example. Finally, a counterexample is included to show that, unlike previous results for finite models, a controlled Markov chain with infinite state space may not have ultimately stationary optimal policies in the risk-sensitive (exponential) discounted cost case, in general. Moreover, a simultaneous Doeblin condition is satisfied in our example, an assumption that enables the vanishing discount approach in the risk-null case, thus further suggesting that more restrictive conditions than those commonly used in the risk-neutral context are needed to develop the mentioned approach for risk-sensitive criteria.
50

Oakley, Steven James, 1963. "A probabilistic investigation of video poker strategies (Markov chains)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/291229.
