Dissertations / Theses on the topic 'Markov'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Markov.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Di Cecco, Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/1/Di_Cecco_Davide_Tesi.pdf.

2

Di Cecco, Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/.

3

Yildirak, Sahap Kasirga. "The Identification Of A Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.

Abstract:
This work is an extension of the classical Cox-Ross-Rubinstein discrete-time market model, in which only one risky asset is considered. We introduce another risky asset into the model. Moreover, the random structure of the asset price sequence is generated by a bivariate finite-state Markov chain, and the interest rate varies over time as a function of the generating sequences. We discuss how the model can be adapted to real data. Finally, we illustrate sample implementations to give a better idea of the use of the model.
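As a purely editorial illustration of the kind of construction this abstract describes (all transition probabilities, returns and rates below are hypothetical, not the thesis's calibration), a minimal Python sketch of two risky assets driven by a bivariate finite-state Markov chain, with the interest rate a function of the generating state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four joint states: (up, up), (up, down), (down, up), (down, down).
P = np.array([[0.5, 0.2, 0.2, 0.1],      # hypothetical transition matrix
              [0.3, 0.3, 0.2, 0.2],
              [0.2, 0.2, 0.3, 0.3],
              [0.1, 0.2, 0.2, 0.5]])
gross = np.array([[1.06, 1.04],          # gross returns of assets 1 and 2 per state
                  [1.06, 0.97],
                  [0.95, 1.04],
                  [0.95, 0.97]])
rate = np.array([0.03, 0.02, 0.02, 0.01])  # state-dependent short rate

def simulate(T=250, s=0, p1=100.0, p2=100.0):
    """Joint price paths of the two assets driven by the chain."""
    path = [(p1, p2, rate[s])]
    for _ in range(T):
        s = rng.choice(4, p=P[s])            # next joint state
        p1, p2 = p1 * gross[s, 0], p2 * gross[s, 1]
        path.append((p1, p2, rate[s]))
    return np.array(path)

print(simulate(5))
```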
4

Tillman, Måns. "On-Line Market Microstructure Prediction Using Hidden Markov Models." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208312.

Abstract:
Over the last decades, financial markets have undergone dramatic changes. With the advent of the arbitrage pricing theory, along with new technology, markets have become more efficient. In particular, the new high-frequency markets, with algorithmic trading operating at the micro-second level, make it possible to translate "information" into price almost instantaneously. Such phenomena are studied in the field of market microstructure theory, which aims to explain and predict them. In this thesis, we model the dynamics of high-frequency markets using non-linear hidden Markov models (HMMs). Such models feature an intuitive separation between observations and dynamics, and are therefore highly convenient tools in financial settings, where they allow a precise application of domain knowledge. HMMs can be formulated based on only a few parameters, yet their inherently dynamic nature can be used to capture well-known intra-day seasonality effects that many other models fail to explain. Due to recent breakthroughs in Monte Carlo methods, HMMs can now be efficiently estimated in real-time. In this thesis, we develop a holistic framework for performing both real-time inference and learning of HMMs, by combining several particle-based methods. Within this framework, we also provide methods for making accurate predictions from the model, as well as methods for assessing the model itself. A sequential Monte Carlo bootstrap filter is adopted to make on-line inference and predictions. Coupled with a backward smoothing filter, this provides a forward filtering/backward smoothing scheme. This is then used in the sequential Monte Carlo expectation-maximization algorithm for finding the optimal hyper-parameters of the model. To design an HMM specifically for capturing information translation, we adapt the observable volume imbalance to a dynamic setting. Volume imbalance has previously been used in market microstructure theory to study, for example, price impact. Through careful selection of key model assumptions, we define a slightly modified observable as a process that we call scaled volume imbalance. The outcomes of this process retain the key features of volume imbalance (that is, its relationship to price impact and information) and allow an efficient evaluation of the framework, while providing a promising platform for future studies. This is demonstrated through a test on actual financial trading data, where we obtain high-performance predictions. Our results demonstrate that the proposed framework can successfully be applied to the field of market microstructure.
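For readers unfamiliar with the machinery named above, the following is a minimal sketch of a sequential Monte Carlo bootstrap filter in Python, run on a toy autoregressive state-space model rather than the thesis's scaled-volume-imbalance model; every parameter here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: x_t = 0.9 x_{t-1} + N(0, 0.5^2),  y_t = x_t + N(0, 1).

def bootstrap_filter(y, n=1000):
    x = rng.normal(0.0, 1.0, n)          # initial particle cloud
    means = np.empty(len(y))
    for t, obs in enumerate(y):
        x = 0.9 * x + rng.normal(0.0, 0.5, n)   # propagate through the dynamics
        logw = -0.5 * (obs - x) ** 2            # Gaussian log-likelihood, up to a constant
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = w @ x                        # filtered mean E[x_t | y_1..y_t]
        x = x[rng.choice(n, n, p=w)]            # multinomial resampling
    return means

# Simulate data from the same toy model, then filter it.
T = 100
x_true, y = np.zeros(T), np.zeros(T)
for t in range(T):
    x_true[t] = 0.9 * (x_true[t - 1] if t else 0.0) + rng.normal(0.0, 0.5)
    y[t] = x_true[t] + rng.normal(0.0, 1.0)
print(bootstrap_filter(y)[:5])
```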
5

Desharnais, Josée. "Labelled Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0031/NQ64546.pdf.

6

Balan, Raluca M. "Set-Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ66119.pdf.

7

Eltannir, Akram A. "Markov interactive processes." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/30745.

8

Werner, Ivan. "Contractive Markov systems." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/15173.

Abstract:
We introduce a theory of contractive Markov systems (CMS) which provides a unifying framework in so-called "fractal" geometry. It extends the known theory of iterated function systems (IFS) with place-dependent probabilities [1][8] in such a way that it also covers graph-directed constructions of "fractal" sets [18]. Such systems naturally extend finite Markov chains and inherit some of their properties. In Chapter 1, we consider iterations of a Markov system and show that they preserve its essential structure. In Chapter 2, we show that the Markov operator defined by such a system has a unique invariant probability measure in the irreducible case, and an attractive probability measure in the aperiodic case, if the restrictions of the probability functions to their vertex sets are Dini-continuous and bounded away from zero, and the system satisfies a condition of contractiveness on average. This generalizes a result from [1]. Furthermore, we show that the rate of convergence to the stationary state is exponential in the aperiodic case with constant probabilities and a compact state space. In Chapter 3, we construct a coding map for a contractive Markov system. In Chapter 4, we calculate the Kolmogorov-Sinai entropy of the generalized Markov shift. In Chapter 5, we prove an ergodic theorem for Markov chains associated with contractive Markov systems. It generalizes the ergodic theorem of Elton [8].
9

Durrell, Fernando. "Constrained portfolio selection with Markov and non-Markov processes and insiders." Doctoral thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/4379.

10

Skorniakov, Viktor. "Asymptotically homogeneous Markov chains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101223_152954-43357.

Abstract:
The dissertation investigates a class of Markov chains defined by iterations of random functions possessing the property of asymptotic homogeneity. Two problems are solved: 1) rather general conditions are established under which the chain has a unique stationary distribution; 2) for chains evolving on the real line, conditions are established under which the stationary distribution of the chain is heavy-tailed.
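A classical member of this family (an illustration under assumed parameters, not the dissertation's general setting) is the stochastic recursion X_{n+1} = A_{n+1} X_n + B_{n+1}: it is contractive on average when E[log A] < 0, giving a unique stationary distribution, yet that distribution is heavy-tailed whenever A exceeds 1 with positive probability:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
A = np.exp(rng.normal(-0.2, 0.5, n))   # E[log A] = -0.2 < 0, but P(A > 1) > 0
B = rng.normal(0.0, 1.0, n)

x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = A[i] * x[i - 1] + B[i]      # iterate the random affine map

tail = np.sort(np.abs(x[1000:]))       # drop burn-in, inspect |X|
for q in (0.9, 0.99, 0.999):
    # Quantiles grow rapidly: the stationary tail is Pareto-like (Kesten phenomenon).
    print(f"{q}-quantile: {tail[int(q * len(tail))]:.1f}")
```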
11

Mazali, Rogério. "Improving mutual fund market timing measures: a Markov switching approach." Repositório Institucional do FGV, 2001. http://hdl.handle.net/10438/55.

Abstract:
The market timing performance of mutual funds is usually evaluated with linear models with dummy variables, which allow the beta coefficient of the CAPM to vary across two regimes: bullish and bearish market excess returns. Managers, however, use their predictions of the state of nature to define whether to carry low- or high-beta portfolios, rather than the observed regimes. Our approach here is to take this into account and model market timing as a switching regime, in a way similar to Hamilton's Markov-switching GNP model. We then build a measure of market timing success and apply it to simulated and real-world data.
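A sketch of the mechanism the abstract describes, regime-switching betas driven by a latent two-state Markov chain, with persistence, betas and drifts that are hypothetical rather than the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

P = np.array([[0.95, 0.05],        # regime persistence (hypothetical)
              [0.05, 0.95]])
beta = np.array([0.4, 1.4])        # low beta in regime 0, high beta in regime 1
mu = np.array([-0.01, 0.01])       # market drift per regime

T = 500
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])          # latent regime path

mkt = rng.normal(mu[s], 0.04)                    # market excess returns
fund = beta[s] * mkt + rng.normal(0.0, 0.01, T)  # fund excess returns

# A successful timer carries the high beta mostly when the market is up:
print("avg beta | mkt > 0:", beta[s][mkt > 0].mean())
print("avg beta | mkt < 0:", beta[s][mkt < 0].mean())
```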
12

Haag, Florian [Verfasser]. "Asymptotisches Verhalten von Quanten-Markov-Halbgruppen und Quanten-Markov-Prozessen / Florian Haag." Aachen : Shaker, 2006. http://d-nb.info/1170532683/34.

13

Yu, Huizhen Ph D. Massachusetts Institute of Technology. "Approximate solution methods for partially observable Markov and semi-Markov decision processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35299.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 165-169).
We consider approximation methods for discrete-time infinite-horizon partially observable Markov and semi-Markov decision processes (POMDP and POSMDP). One of the main contributions of this thesis is a lower cost approximation method for finite-space POMDPs with the average cost criterion, and its extensions to semi-Markov partially observable problems and constrained POMDP problems, as well as to problems with the undiscounted total cost criterion. Our method is an extension of several lower cost approximation schemes, proposed individually by various authors, for discounted POMDP problems. We introduce a unified framework for viewing all of these schemes together with some new ones. In particular, we establish that due to the special structure of hidden states in a POMDP, there is a class of approximating processes, which are either POMDPs or belief MDPs, that provide lower bounds to the optimal cost function of the original POMDP problem. Theoretically, POMDPs with the long-run average cost criterion are still not fully understood. The major difficulties relate to the structure of the optimal solutions, such as conditions for a constant optimal cost function, the existence of solutions to the optimality equations, and the existence of optimal policies that are stationary and deterministic. Thus, our lower bound result is useful not only in providing a computational method, but also in characterizing the optimal solution. We show that regardless of these theoretical difficulties, lower bounds of the optimal liminf average cost function can be computed efficiently by solving modified problems using multichain MDP algorithms, and the approximating cost functions can also be used to obtain suboptimal stationary control policies. We prove the asymptotic convergence of the lower bounds under certain assumptions. For semi-Markov problems and total cost problems, we show that the same method can be applied for computing lower bounds of the optimal cost function. For constrained average cost POMDPs, we show that lower bounds of the constrained optimal cost function can be computed by solving finite-dimensional LPs. We also consider reinforcement learning methods for POMDPs and MDPs. We propose an actor-critic type policy gradient algorithm that uses a structured policy known as a finite-state controller. We thus provide an alternative to the earlier actor-only algorithm GPOMDP. Our work also clarifies the relationship between the reinforcement learning methods for POMDPs and those for MDPs. For average cost MDPs, we provide a convergence and convergence rate analysis for a least squares temporal difference (TD) algorithm, called LSPE, previously proposed for discounted problems. We use this algorithm in the critic portion of the policy gradient algorithm for POMDPs with finite-state controllers. Finally, we investigate the properties of the limsup and liminf average cost functions of various types of policies. We show various convexity and concavity properties of these cost functions, and we give a new necessary condition for the optimal liminf average cost to be constant. Based on this condition, we prove the near-optimality of the class of finite-state controllers under the assumption of a constant optimal liminf average cost. This result provides a theoretical guarantee for the finite-state controller approach.
by Huizhen Yu.
Ph.D.
14

Yeo, Sungchil. "On estimation for a combined Markov and semi-Markov model with censoring /." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487586889187169.

15

Wächter, Matthias. "Markov-Analysen unebener Oberflächen." [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=97400460X.

16

Piskuric, Mojca. "Vector-Valued Markov Games." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2001. http://nbn-resolving.de/urn:nbn:de:swb:14-996482849703-81901.

Abstract:
The subject of the thesis is vector-valued Markov games. Chapter 1 presents the idea that led to the development of the theory of general stochastic games. The work of Lloyd S. Shapley is outlined, and the most important authors and bibliography are stated. Also, the motivation behind the research of vector-valued game-theoretic problems is presented. Chapter 2 develops a rigorous mathematical model of vector-valued N-person Markov games. The corresponding definitions are stated, and the notation, as well as the notion of a strategy, is explained in detail. On the basis of these definitions a probability measure is constructed, in an appropriate probability space, which controls the stochastic game process. Furthermore, as in all models of stochastic control, a payoff is specified, in our case the expected discounted payoff. The principles of vector optimization are stated in Chapter 3, and the concept of optimality with respect to some convex cone is developed. This leads to the generalization of Nash equilibria from scalar- to vector-valued games, the so-called D-equilibria. Examples are provided to show that this definition really is a generalization of the existing definitions for scalar-valued games. For a given convex cone D, necessary and sufficient conditions are found that show when a strategy is also a D-equilibrium. Furthermore, it is shown that a D-equilibrium in stationary strategies exists, as one could expect from the known results of the theory of scalar-valued stochastic games. The main result of this chapter is a generalization of an existing result for 2-person vector-valued Markov games to N-person Markov games, namely that a D-equilibrium of an N-person Markov game is a subgradient of specially constructed support functions of the original payoff functions. To be able to develop solution procedures in the simplest case, that is, the 2-person zero-sum case, Chapter 4 introduces the Denardo dynamic programming formalism: in the space of all p-dimensional functions we define a dynamic programming operator that describes the solutions of Markov games. The first of the two main results in this chapter is that, for a fixed stationary strategy, the expected overall payoff to player 1 is the fixed point of this operator. The second theorem then shows that this result is exactly the vector-valued generalization of the famous Shapley result. These theorems are fundamental for the subsequent development of two algorithms, successive approximations and the Hoffman-Karp algorithm. A numerical example for both algorithms is presented. Chapter 4 finishes with a discussion of other significant results and an outline of further research. The Appendix finally presents the main results from general game theory, most of which were used in developing both the theoretic and the algorithmic parts of this thesis.
17

Santos, Raqueline Azevedo Medeiros. "Cadeias de Markov Quânticas." Laboratório Nacional de Computação Científica, 2010. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=199.

Abstract:
In computer science, random walks are used in randomized algorithms, especially in search algorithms, where we wish to find a marked state in a Markov chain. In this type of algorithm it is interesting to study the hitting time, which is associated with its computational complexity. In this context, we describe the classical theory of Markov chains and random walks, as well as their quantum analogue. In this way, we define the hitting time under the scope of quantum Markov chains. Moreover, analytical expressions calculated for the quantum hitting time and for the probability of finding a marked element on the complete graph are presented as the new results of this dissertation.
18

Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.

Abstract:

A finite, homogeneous, irreducible Markov chain with a transition probability matrix possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.

The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which is related to the first question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part gives answers to the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).
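One textbook answer to the first question (not the thesis's perturbation or NUMC analysis, just the baseline computation) is to solve the linear system pi P = pi together with the normalization sum(pi) = 1; a sketch with a made-up chain:

```python
import numpy as np

P = np.array([[0.90, 0.10, 0.00],     # irreducible example chain
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # pi (P - I) = 0 plus normalization row
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares solve of the stacked system
print(pi)          # stationary distribution
print(pi @ P)      # invariance check: equals pi
```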

19

莊競誠 and King-sing Chong. "Explorations in Markov processes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235682.

20

McKee, Bill Frederick. "Optimal hidden Markov models." Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/1698.

Abstract:
In contrast with training algorithms such as Baum-Welch, which produce solutions that are a local optimum of the objective function, this thesis describes the attempt to develop a training algorithm which delivers the global optimum Discrete Hidden Markov Model for a given training sequence. A total of four different methods of attack upon the problem are presented. First, after building the necessary analytical tools, the thesis presents a direct, calculus-based assault featuring matrix derivatives. Next, the dual analytic approach known as Geometric Programming is examined and then adapted to the task. After that, a hill-climbing formula is developed and applied. These first three methods reveal a number of interesting and useful insights into the problem. However, it is the fourth method which produces an algorithm that is then used for direct comparison with the Baum-Welch algorithm: examples of global optima are collected, examined for common features and patterns, and then a rule is induced. The resulting rule is implemented in 'C' and tested against a battery of Baum-Welch based programs. In the limited range of tests carried out to date, the models produced by the new algorithm yield optima which have not been surpassed by (and are typically much better than) the Baum-Welch models. However, far more analysis and testing is required, and in its current form the algorithm is not fast enough for realistic application.
21

James, Huw William. "Transient Markov decision processes." Thesis, University of Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430192.

22

Dessain, Thomas James. "Perturbations of Markov chains." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10619/.

Abstract:
This thesis is concerned with studying the hitting time of an absorbing state on Markov chain models that have a countable state space. For many models it is challenging to study the hitting time directly; I present a perturbative approach that allows one to uniformly bound the difference between the hitting time moment generating functions of two Markov chains in a neighbourhood of the origin. I demonstrate how this result can be applied to both discrete and continuous time Markov chains. The motivation for this work came from the field of biology, namely DNA damage and repair. Biophysicists have highlighted that the repair process can lead to Double Strand Breaks; due to the serious nature of such an eventuality it is important to understand the hitting time of this event. There is a phase transition in the model that I consider. In the regime of parameters where the process reaches quasi-stationarity before being absorbed I am able to apply my perturbative technique in order to further understand this hitting time.
23

Piskuric, Mojca. "Vector-Valued Markov Games." Doctoral thesis, Technische Universität Dresden, 2000. https://tud.qucosa.de/id/qucosa%3A24773.

Abstract:
The subject of the thesis is vector-valued Markov games. Chapter 1 presents the idea that led to the development of the theory of general stochastic games. The work of Lloyd S. Shapley is outlined, and the most important authors and bibliography are stated. Also, the motivation behind the research of vector-valued game-theoretic problems is presented. Chapter 2 develops a rigorous mathematical model of vector-valued N-person Markov games. The corresponding definitions are stated, and the notation, as well as the notion of a strategy, is explained in detail. On the basis of these definitions a probability measure is constructed, in an appropriate probability space, which controls the stochastic game process. Furthermore, as in all models of stochastic control, a payoff is specified, in our case the expected discounted payoff. The principles of vector optimization are stated in Chapter 3, and the concept of optimality with respect to some convex cone is developed. This leads to the generalization of Nash equilibria from scalar- to vector-valued games, the so-called D-equilibria. Examples are provided to show that this definition really is a generalization of the existing definitions for scalar-valued games. For a given convex cone D, necessary and sufficient conditions are found that show when a strategy is also a D-equilibrium. Furthermore, it is shown that a D-equilibrium in stationary strategies exists, as one could expect from the known results of the theory of scalar-valued stochastic games. The main result of this chapter is a generalization of an existing result for 2-person vector-valued Markov games to N-person Markov games, namely that a D-equilibrium of an N-person Markov game is a subgradient of specially constructed support functions of the original payoff functions. To be able to develop solution procedures in the simplest case, that is, the 2-person zero-sum case, Chapter 4 introduces the Denardo dynamic programming formalism: in the space of all p-dimensional functions we define a dynamic programming operator that describes the solutions of Markov games. The first of the two main results in this chapter is that, for a fixed stationary strategy, the expected overall payoff to player 1 is the fixed point of this operator. The second theorem then shows that this result is exactly the vector-valued generalization of the famous Shapley result. These theorems are fundamental for the subsequent development of two algorithms, successive approximations and the Hoffman-Karp algorithm. A numerical example for both algorithms is presented. Chapter 4 finishes with a discussion of other significant results and an outline of further research. The Appendix finally presents the main results from general game theory, most of which were used in developing both the theoretic and the algorithmic parts of this thesis.
24

Li, Junping. "Generalized Markov branching models." Thesis, University of Greenwich, 2005. http://gala.gre.ac.uk/6226/.

Abstract:
In this thesis, we first consider a modified Markov branching process incorporating both state-independent immigration and resurrection. After establishing the criteria for regularity and uniqueness, explicit expressions for the extinction probability and the mean extinction time are presented. The criteria for recurrence and ergodicity are also established, and an explicit expression for the equilibrium distribution is presented. We then move on to investigate the basic properties of an extended Markov branching model, the weighted Markov branching process. The regularity and uniqueness criteria of such general structures are first established; thereafter, closed expressions for the mean extinction time and the conditional mean extinction time are presented. The explosion behaviour and the mean explosion time are also investigated. In particular, the Harris regularity criterion for the ordinary Markov branching process is extended to the more general case of the non-linear Markov branching process. Finally, we study a new Markov branching model, the weighted collision branching process, and consider two special cases of this process. For the weighted collision branching process, conditions for regularity and uniqueness are obtained, and the extinction and explosion behaviour of the process is investigated. For the two special cases, a collision branching process and a general collision branching process with two parameters, the regularity and uniqueness criteria are established and explicit expressions for the extinction probability vector, mean extinction times, conditional mean extinction times and mean explosion time are all obtained.
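For the ordinary (Galton-Watson) special case underlying these models, the extinction probability is the smallest root in [0, 1] of f(q) = q, where f is the offspring probability generating function; iterating q <- f(q) from 0 converges to it. A sketch with a hypothetical offspring law (the thesis's weighted and collision models are far more general):

```python
# P(0, 1, 2 offspring); the mean is 1.3 > 1, so the process is supercritical.
p = [0.2, 0.3, 0.5]

def f(s):
    """Offspring probability generating function."""
    return sum(pk * s**k for k, pk in enumerate(p))

q = 0.0
for _ in range(200):
    q = f(q)           # monotone convergence to the smallest fixed point

print(q)   # ~0.4 here; it equals 1 exactly when the mean offspring number <= 1
```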
25

Ku, Ho Ming. "Interacting Markov branching processes." Thesis, University of Liverpool, 2014. http://livrepository.liverpool.ac.uk/2002759/.

Abstract:
In engineering, biology and physics, the particles or members of many systems give birth and die through time. Such systems can be modelled by continuous-time Markov chains and Markov processes, whose applications have been investigated by many scientists, for example Jagers [1975]. In ordinary Markov branching processes, the particles or members are assumed to be identical and independent. In some cases, however, two members of the species may interact/collide to give new birth; to cover these cases we need more general processes, and we may use collision branching processes to model such systems. To obtain an even more general model, we let each particle have both a branching and a collision effect, so that the branching component and the collision component interact; we call this model the interacting branching collision process. In Chapter 1 of this thesis, we first look at some background and basic concepts of continuous-time Markov chains and ordinary Markov branching processes. After revising these basic concepts and models, we look into the more complicated models: collision branching processes and interacting branching collision processes. In Chapter 2, for collision branching processes, we investigate the basic properties, criteria of uniqueness, and explicit expressions for the extinction probability and the expected/mean extinction and explosion times. In Chapter 3, for interacting branching collision processes, following a similar structure to the previous chapter, we investigate the basic properties and criteria of uniqueness. Because of the more complicated model settings, considerably more detail is required in considering the extinction probability, and we divide this section into several parts, treating the extinction probability under different cases and assumptions. Since the explicit form of the extinction probability may be too complicated, the last part of Chapter 3 discusses its asymptotic behaviour. In Chapter 4, we look at a related but still important branching model: Markov branching processes with immigration, emigration and resurrection. We investigate its basic properties and criteria of uniqueness; most interestingly, we investigate the extinction probability with the techniques introduced in Chapter 3, so this chapter also serves as a good example of those methods. In Chapter 5, we look at two interacting branching models: the interacting collision process with immigration, emigration and resurrection, and the interacting branching collision process with immigration, emigration and resurrection. We investigate their basic properties, criteria of uniqueness and extinction probabilities. My original material starts in Chapter 4. The model used in Chapter 4 was introduced by Li and Liu [2011], where some calculations in the evaluation of the extinction probability were not strictly defined; my contribution focuses on the evaluation of the extinction probability and on its asymptotic behaviour. A paper on this model will be submitted this year. The two interacting branching models discussed in Chapter 5, and some of their important properties, are studied in detail.
26

Chong, King-sing. "Explorations in Markov processes /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18736105.

27

Medeiros, Sérgio da Silva. "Cadeias de Markov ocultas." Repositório Institucional da UFABC, 2017.

Abstract:
Advisor: Prof. Dr. Daniel Miranda Machado
Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Mestrado Profissional em Matemática em Rede Nacional, 2017.
The main focus of this work is the study of Markov chains and hidden Markov chains, which in turn brings the study of probabilistic and matrix concepts into practice. We seek to apply, in a contextualized way, products and powers of matrices with the aid of the software Geogebra. In addition to the examples, learning questions are included, always with the goal of making them valuable allies in learning about this theme.
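The matrix-product view the dissertation builds on fits in a few lines (here with NumPy instead of Geogebra, on a made-up two-state chain): the n-step transition probabilities are simply the entries of the n-th power of the transition matrix.

```python
import numpy as np

P = np.array([[0.7, 0.3],       # hypothetical two-state transition matrix
              [0.4, 0.6]])

P5 = np.linalg.matrix_power(P, 5)
print(P5)                                # row i: distribution 5 steps after starting in i
print(np.linalg.matrix_power(P, 50))     # rows converge to the stationary distribution
```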
28

Karadag, Mehmet Ali. "Analysis Of Turkish Stock Market With Markov Regime Switching Volatility Models." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609787/index.pdf.

Abstract:
In this study, both uni-regime GARCH and Markov Regime Switching GARCH (SW-GARCH) models are examined to analyze Turkish Stock Market volatility. We investigate various models to find out whether SW-GARCH models are an improvement on the uni-regime GARCH models in terms of modelling and forecasting Turkish Stock Market volatility. As well as using seven statistical loss functions, we apply the Superior Predictive Ability (SPA) test of Hansen (2005) and the Reality Check (RC) test of White (2000) to compare the forecast performance of the various models.
29

Hrabovska, Yevheniia <1994&gt. "A Markov-Switching Model for Bubble Detection in the Stock Market." Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10797.

Abstract:
In this study I propose a model for the behaviour of real stock market prices which allows for the existence of speculative bubbles. The bubble is assumed to follow a Markov-switching process with explosive and collapsing regimes. Inference on the model is performed using observations on the deviations of the log prices from fundamentals. The fundamental prices are assumed to be a function of the discounted future dividends. The data used for estimation include major stock market indices (S&P 500, NASDAQ, Euro Stoxx 50) and major US companies.
30

Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.

Abstract:
We give an upper bound for the norm distance of (0,1)-valued Markov-exchangeable random variables to mixtures of distributions of Markov processes. A Markov-exchangeable random variable has a distribution that depends only on the starting value and the number of transitions 0-0, 0-1, 1-0 and 1-1. We show that if, for increasing length of variables, the norm distance to mixtures of Markov processes goes to 0, the rate of this convergence may be arbitrarily slow. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
31

Bulla, Jan. "Application of Hidden Markov and Hidden Semi-Markov Models to Financial Time Series." Doctoral thesis, [S.l. : s.n.], 2006. http://swbplus.bsz-bw.de/bsz260867136inh.pdf.

32

Stien, Marita. "Sequential Markov random fields and Markov mesh random fields for modelling of geological structures." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9326.

Abstract:

We have been given a two-dimensional image of a geological structure. This structure is used to construct a three-dimensional statistical model, to be used as prior knowledge in the analysis of seismic data. We consider two classes of discrete lattice models for which efficient simulation is possible: sequential Markov random fields (sMRF) and Markov mesh random fields (MMRF). We first explore models from these two classes in two dimensions, using the maximum likelihood estimator (MLE). The results indicate that a larger neighbourhood should be considered for all the models. We also develop a second estimator, designed to match the model to the observation with respect to a set of specified functions. This estimator is only considered for the sMRF model, since that model proved flexible enough to give satisfying results. Due to the time limitations of this thesis, we could not wait for the optimization of the estimator to converge, and thus we cannot evaluate it. Finally, we extract useful information from the two-dimensional models and specify an sMRF model in three dimensions. Parameter estimation for this model needs approximate techniques, since we only have observations in two dimensions. Such techniques have not been investigated in this report; however, we have adjusted the parameters manually and observed that the model is very flexible and might give very satisfying results.

33

Koserski, Jan. "Analyse der Ratingmigrationen interner Ratingsysteme mit Markov-Ketten, Hidden-Markov-Modellen und Neuronalen Netzen /." Aachen : Shaker, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015604019&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

34

Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.

35

Iannuzzi, Alessandra. "Catene di Markov reversibili e applicazioni al metodo Montecarlo basato sulle catene di Markov." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9010/.

Abstract:
The topics treated in this thesis are reversible Markov chains and some applications of the Markov chain Monte Carlo method. We first describe some fundamental properties of Markov chains, and of reversible Markov chains in particular. We then describe the Markov chain Monte Carlo method, which, by simulating Markov chains, seeks to estimate the distribution of a random variable, or of a vector of random variables, with a given probability distribution. The final part is devoted to an example in which some of the aspects studied in the thesis are highlighted using Matlab.
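A minimal sketch of the connection studied in the thesis (in Python rather than the thesis's Matlab example, with an arbitrarily chosen target density): the Metropolis acceptance rule enforces detailed balance, so the simulated chain is reversible with respect to the target distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

def target(x):
    """Unnormalized bimodal density on the real line (illustrative choice)."""
    return np.exp(-0.5 * x**2) + 0.5 * np.exp(-0.5 * (x - 4.0) ** 2)

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(0.0, 1.0)                    # symmetric random-walk proposal
    if rng.random() < min(1.0, target(prop) / target(x)):
        x = prop                                       # accept; otherwise stay put
    chain.append(x)

samples = np.array(chain[5_000:])                      # discard burn-in
print(samples.mean(), (samples > 2).mean())            # Monte Carlo estimates under the target
```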
36

Pan-Yu, Yiyan. "Spectres de processus de Markov." PhD thesis, Université Joseph Fourier (Grenoble), 1997. http://tel.archives-ouvertes.fr/tel-00004959.

37

Johr, Sven. "Model checking compositional Markov systems." Aachen Shaker, 2007. http://d-nb.info/988568969/04.

38

Johr, Sven. "Model checking compositional Markov systems /." Aachen : Shaker, 2008. http://d-nb.info/988568969/04.

39

Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.

Abstract:
Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high-dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, their performance is unreliable when the proposal distributions used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. In this thesis we propose a new Monte Carlo framework in which we build efficient high-dimensional proposal distributions using SMC methods. This allows us to design effective MCMC algorithms in complex scenarios where standard strategies fail. We demonstrate these algorithms on a number of example problems, including simulated tempering, a nonlinear non-Gaussian state-space model, and protein folding.
40

Ferns, Norman Francis. "Metrics for Markov decision processes." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80263.

Abstract:
We present a class of metrics, defined on the state space of a finite Markov decision process (MDP), each of which is sound with respect to stochastic bisimulation, a notion of MDP state equivalence derived from the theory of concurrent processes. Such metrics are based on similar metrics developed in the context of labelled Markov processes, and like those, are suitable for state space aggregation. Furthermore, we restrict our attention to a subset of this class that is appropriate for certain reinforcement learning (RL) tasks, specifically, infinite horizon tasks with an expected total discounted reward optimality criterion. Given such an RL metric, we provide bounds relating it to the optimal value function of the original MDP as well as to the value function of the aggregate MDP. Finally, we present an algorithm for calculating such a metric up to a prescribed degree of accuracy and some empirical results.
41

Chaput, Philippe. "Approximating Markov processes by averaging." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66654.

Abstract:
We recast the theory of labelled Markov processes in a new setting, in a way "dual" to the usual point of view. Instead of considering state transitions as a collection of subprobability distributions on the state space, we view them as transformers of real-valued functions. By generalizing the operation of conditional expectation, we build a category consisting of labelled Markov processes viewed as a collection of operators; the arrows of this category behave as projections on a smaller state space. We define a notion of equivalence for such processes, called bisimulation, which is closely linked to the usual definition for probabilistic processes. We show that we can categorically construct the smallest bisimilar process, and that this smallest object is linked to a well-known modal logic. We also expose an approximation scheme based on this logic, where the state space of the approximants is finite; furthermore, we show that these finite approximants categorically converge to the smallest bisimilar process.
42

Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.

Abstract:
The increasing availability of multi-core and multi-processor architectures provides new opportunities for improving the performance of many computer simulations. Markov Chain Monte Carlo (MCMC) simulations are widely used for approximate counting problems, Bayesian inference and as a means of estimating very high-dimensional integrals. As such, MCMC has found a wide variety of applications in fields including computational biology and physics, financial econometrics, machine learning and image processing. This thesis presents a number of new methods for reducing the runtime of Markov Chain Monte Carlo simulations by using SMP machines and/or clusters. Two of the methods speculatively perform iterations in parallel, reducing the runtime of MCMC programs whilst producing statistically identical results to conventional sequential implementations. The other methods apply only to problem domains that can be presented as an image, and involve dividing the image into subimages that can be processed with some degree of independence. Where possible the thesis includes a theoretical analysis of the reduction in runtime that may be achieved using our techniques under perfect conditions, and in all cases the methods are tested and compared on a selection of multi-core and multi-processor architectures. A framework is provided to allow easy construction of MCMC applications that implement these parallelisation methods.
43

Matthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.

Abstract:
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space, and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution. The number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general and more restricted classes of graphs, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances. For a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling. The bound on the mixing time depends on a contraction ratio b. Ideally, b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which obtains better bounds on the mixing time in this case than existing theorems, in the case where b = 1 and the probability of a change in distance is proportional to the distance between the two states. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2D-colourings.
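The simplest single-site chain in this literature can be sketched directly (toy graph and run length chosen purely for illustration): pick a uniformly random edge, delete it if it lies in the current matching, add it if both its endpoints are free, and otherwise stay put. The chain is symmetric, so its stationary distribution is uniform over all matchings; how fast it mixes is the hard question such theses study.

```python
import random

random.seed(0)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph

def step(matching):
    e = random.choice(edges)
    if e in matching:
        matching.remove(e)                          # delete move
    else:
        matched = {v for f in matching for v in f}
        if e[0] not in matched and e[1] not in matched:
            matching.add(e)                         # add move (both endpoints free)
    return matching                                 # otherwise: hold

m, counts = set(), {}
for _ in range(200_000):
    m = step(m)
    key = frozenset(m)
    counts[key] = counts.get(key, 0) + 1

print(len(counts), "matchings visited")   # each visited roughly equally often
```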
APA, Harvard, Vancouver, ISO, and other styles
44

Dil, Anton J. "Markov modelling of HVAC systems." Thesis, Loughborough University, 1993. https://dspace.lboro.ac.uk/2134/7301.

Full text
Abstract:
Dynamic simulations have been successfully applied to the modelling of building heating, ventilating and air-conditioning (HVAC) plant operation. These simulations are generally driven by time-series data as input. Whilst time-series simulations are effective, they tend to be expensive in terms of computer execution time. A possible method for reducing simulation time is to develop a probabilistic picture of the model, by characterising the model as being in one of several states. By determining the probability of being in each model state, predictions of long-term values of quantities of interest can then be obtained using ensemble averages. This study investigates the applicability of the Markov modelling method for the above purpose in the simulation of HVAC systems, together with the degree of accuracy that can be expected and the time savings that are possible. The investigation found that the Markov modelling technique can be successfully applied to simulations of HVAC systems, but that assumptions commonly made concerning the independence of driving variables may often be inappropriate. An alternative approach to implementing the Markov method, taking into account dependencies between driving variables, is suggested, but requires further development to be fully effective. The accuracy of results was found to be related to the sizes of the partial derivatives of the calculated quantity with respect to each of the variables on which it depends, the sizes of the variables' ranges, and the number of states assigned to each variable in developing the probabilistic picture of the model's state. A deterministic error bound for results from Markov simulations is also developed, based on these findings.
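The ensemble-average idea can be sketched in a few lines of Python: compute the chain's stationary distribution once, then take a probability-weighted average of the quantity of interest instead of stepping through a time series. The three-state plant model and per-state power figures below are invented for illustration only.

```python
import numpy as np

# Transition matrix of a toy plant model: states = (off, part load, full load).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

power_kw = np.array([0.0, 4.5, 9.0])  # hypothetical per-state power draw
print("stationary distribution:", pi)
print("long-run mean power (kW):", pi @ power_kw)
```

The ensemble average `pi @ power_kw` replaces a long time-series simulation with one eigenvector computation, which is where the potential time saving comes from.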
APA, Harvard, Vancouver, ISO, and other styles
45

Qamber, I. S. H. "Markov modelling of equipment behaviour." Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wilson, David Bruce. "Exact sampling with Markov chains." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Van, Gael Jurgen. "Bayesian nonparametric hidden Markov models." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Vieira, Francisco Zuilton Gonçalves. "Cadeias de Markov homogêneas discretas." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306581.

Full text
Abstract:
Advisor: Simão Nicolau Stelmastchuk
Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: This dissertation studies discrete Markov chains taking values in a countable state space. Markov chains are stochastic processes in the following sense: given the present moment, the future does not depend on the past, but only on the present. Our study concerns discrete homogeneous Markov chains (HMCs). We first introduce the definition and basic concepts of discrete HMCs, which lead to the topology of the transition matrices associated with an HMC. This topology is the tool needed to study recurrent and transient sets, which are of great importance in the theory. Stationary states and the strong Markov property are also addressed; the latter serves to build the concept of a recurrent state, from which we work with the notions of positive and null recurrence. Finally, we study the important concept of absorption time, understood as the time at which a state is absorbed into a recurrent set.
Master's
Mathematics
Master in Mathematics
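The absorption-time computation mentioned in the abstract has a compact matrix form: by first-step analysis the expected times t to absorption satisfy t = 1 + Qt, i.e. t = (I - Q)^{-1} 1, where Q is the transient-to-transient block of the transition matrix. A minimal Python sketch, with an invented three-state chain:

```python
import numpy as np

# States 0 and 1 are transient; state 2 is absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])

Q = P[:2, :2]                                   # transient-to-transient block
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))  # expected steps to absorption
print(t)  # expected absorption time starting from each transient state
```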
APA, Harvard, Vancouver, ISO, and other styles
49

Mestern, Mark Andrew. "Distributed analysis of Markov chains." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/9693.

Full text
Abstract:
Bibliography: leaves 88-91.
This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: state space generation from a state transition model to form the Markov chain, and finding performance information by solving the steady-state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queueing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage, and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed, and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady-state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
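The steady-state phase can be sketched with SciPy's single-machine BiCGSTAB standing in for the thesis's distributed implementation: write pi P = pi together with the normalisation sum(pi) = 1 as one nonsymmetric linear system and hand it to the iterative solver. The three-state chain below is invented for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import bicgstab

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
n = P.shape[0]

# (P^T - I) pi = 0, with the last (linearly dependent) balance equation
# replaced by the normalisation sum(pi) = 1.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

pi, info = bicgstab(csr_matrix(A), b)
assert info == 0, "BiCGSTAB did not converge"
print("steady-state distribution:", pi)
```

For state spaces that exceed a single workstation, the same system would be partitioned so that each machine holds a block of rows of the sparse matrix, which is what makes the heavy inter-machine communication load noted above unavoidable.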
APA, Harvard, Vancouver, ISO, and other styles
50

McGrath, Michael. "Markov random field image modelling." Master's thesis, University of Cape Town, 2003. http://hdl.handle.net/11427/5166.

Full text
Abstract:
Includes bibliographical references.
This work investigated some of the consequences of using a priori information in image processing, using computed tomography (CT) as an example. Prior information is information about the solution that is known apart from the measurement data, and it can be represented as a probability distribution. In order to define a probability density in high-dimensional problems like those found in image processing, it becomes necessary to adopt some form of parametric model for the distribution. Markov random fields (MRFs) provide just such a vehicle for modelling the a priori distribution of labels found in images. In particular, this work investigated the suitability of MRF models for modelling a priori information about the distribution of attenuation coefficients found in CT scans.
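A toy example of an MRF prior at work, assuming a binary Ising-type smoothness model rather than the CT attenuation models of the thesis: iterated conditional modes (ICM) trades off agreement with noisy observations against agreement with neighbouring pixels. The phantom image, noise level and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.zeros((32, 32), dtype=int)
clean[8:24, 8:24] = 1                                # a square "phantom"
noisy = np.where(rng.random(clean.shape) < 0.2, 1 - clean, clean)

beta, lam = 1.5, 2.0   # smoothness weight vs. data-fidelity weight
x = noisy.copy()
for _ in range(5):     # a few ICM sweeps; borders are left fixed for brevity
    for i in range(1, 31):
        for j in range(1, 31):
            nb = [x[i-1, j], x[i+1, j], x[i, j-1], x[i, j+1]]
            # Energy of label v: disagreement with the 4-neighbourhood
            # (the MRF prior) plus disagreement with the observed pixel.
            def energy(v):
                return beta * sum(v != n for n in nb) + lam * (v != noisy[i, j])
            x[i, j] = min((0, 1), key=energy)

print("pixels wrong before:", int((noisy != clean).sum()),
      "after:", int((x != clean).sum()))
```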
APA, Harvard, Vancouver, ISO, and other styles