Dissertations / Theses on the topic 'Markov chains'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Markov chains.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Skorniakov, Viktor. "Asymptotically homogeneous Markov chains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101223_152954-43357.
Full text
The dissertation investigates a class of Markov chains whose iterations are defined by random asymptotically homogeneous functions, and solves two problems: 1) general conditions are found that guarantee the existence of a unique stationary distribution; 2) for one-dimensional chains, conditions are found under which the stationary distribution has "heavy" tails.
Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.
Full text
A finite, homogeneous, irreducible Markov chain with a transition probability matrix possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.
The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which relates to the first question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part answers the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).
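The two questions above concern computing the stationary distribution pi, the vector satisfying pi = pi P with entries summing to one. A minimal sketch of such a computation (the 3-state transition matrix below is invented for illustration, not taken from the thesis):

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# The stationary distribution pi satisfies pi = pi P and sum(pi) = 1.
# Stack the normalisation onto (P^T - I) pi = 0 and solve by least squares.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(pi @ P, pi)    # invariance: pi P = pi
assert np.isclose(pi.sum(), 1.0)  # proper probability vector
```

For large chains, direct solves like this give way to iterative methods; nearly uncoupled chains are a classically ill-conditioned case, which is what makes the second part of the thesis a distinct problem.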
Dessain, Thomas James. "Perturbations of Markov chains." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10619/.
Full text
Di, Cecco Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/1/Di_Cecco_Davide_Tesi.pdf.
Full text
Full text
Matthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.
Full text
Wilson, David Bruce. "Exact sampling with Markov chains." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38402.
Full text
Mestern, Mark Andrew. "Distributed analysis of Markov chains." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/9693.
Full text
This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: state space generation from a state transition model to form the Markov chain, and finding performance information by solving the steady state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queuing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage, and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed, and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
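The steady-state phase described in this abstract solves pi = pi P for the chain's stationary vector; the thesis does this with distributed BiCG/BiCGSTAB solvers, but the same equations can be sketched on a toy chain with plain power iteration (the transition matrix below is hypothetical, not from the thesis):

```python
import numpy as np

# Hypothetical 3-state chain; the thesis targets chains far too large
# for a single workstation, solved with distributed BiCG/BiCGSTAB.
P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.5, 0.1],
              [0.0, 0.3, 0.7]])

pi = np.full(3, 1.0 / 3.0)      # start from the uniform distribution
for _ in range(10_000):         # iterate pi <- pi P until it stabilises
    nxt = pi @ P
    if np.abs(nxt - pi).max() < 1e-12:
        pi = nxt
        break
    pi = nxt

assert np.allclose(pi, pi @ P)  # pi solves the steady-state equations
assert np.isclose(pi.sum(), 1.0)
```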
Salzman, Julia. "Spectral analysis with Markov chains." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Full text
Dorff, Rebecca. "Modelling Infertility with Markov Chains." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4070.
Full text
Elsayad, Amr Lotfy. "Numerical solution of Markov Chains." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2056.
Full textSudyko, Elena. "Dollarisation finançière en Russie." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE032.
Full text
This thesis develops a portfolio model of financial dollarization (FD) and estimates it for Russia. The contribution of this work is to construct the first theoretical mean-variance-skewness-kurtosis model of financial dollarization and to validate it empirically. The work builds on previous research which found that adding higher moments, such as skewness and kurtosis, to the minimum variance portfolio (MVP) enables a better modelling of portfolio choice, and develops such a model for FD. We then use Markov-switching methods on monthly data for bank deposits in Russia since the late 1990s to document the dominant influence of inflation and currency depreciation and their moments as the main determinants of deposit dollarization in a mean-variance-skewness-kurtosis framework during crisis as opposed to normal periods.
Sisson, Scott Antony. "Markov chains for genetics and extremes." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391095.
Full text
Trovato, Manlio Battaglia. "Interest rate models with Markov chains." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/8805.
Full textFitzpatrick, Matthew Anthony. "Multi-regime models involving Markov chains." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/14530.
Full textCarpio, Kristine Joy Espiritu, and kjecarpio@lycos com. "Long-Range Dependence of Markov Processes." The Australian National University. School of Mathematical Sciences, 2006. http://thesis.anu.edu.au./public/adt-ANU20061024.131933.
Full textSkariah, Emil. "Mobile Phone Context Prediction Using Markov Chains." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18965.
Full textZapreev, I. S. "Model checking Markov chains techniques and tools /." Enschede : University of Twente [Host], 2008. http://doc.utwente.nl/58974.
Full textAlharbi, Randa. "Bayesian inference for continuous time Markov chains." Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/40972/.
Full textLo, Harry Chung Heng. "Markov chains and the pricing of derivatives." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/5508.
Full textSzczegot, Kamil. "Sharp approximation for density dependent Markov chains /." May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Full textLamprecht, Ruth Elizabeth. "Translating Spatial Problems into Lumpable Markov Chains." W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539720328.
Full textLystig, Theodore C. "Evaluation of hidden Markov models /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/9597.
Full textKaimanovich, Vadim A., Wolfgang Woess, and woess@TUGraz at. "Boundary and Entropy of Space Homogeneous Markov Chains." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1010.ps.
Full textParks, Kevin Preston. "Geosystem modeling with Markov chains and simulated annealing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0004/NQ31063.pdf.
Full textWang, Jianzhong. "Eigenvectors for infinite Markov chains and dimension groups." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0017/NQ48118.pdf.
Full textPandya, Chirag. "Decomposing Large Markov Chains for Statistical Usage Testing." Honors in the Major Thesis, University of Central Florida, 2000. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/681.
Full text
Bachelors
Engineering and Computer Science
Computer Engineering
Nguyen, Tuyet Mai. "Malliavin calculus for Markov chains and counterparty risk." Thesis, Evry-Val d'Essonne, 2015. http://www.theses.fr/2015EVRY0022/document.
Full text
This thesis deals with two areas of stochastic analysis and mathematical finance: Malliavin calculus for Markov chains (Part I) and counterparty risk (Part II). Part I is devoted to the study of Malliavin calculus for continuous-time Markov chains, in two respects: proving the existence of a density for the solution of a stochastic differential equation, and computing sensitivities of financial derivatives. Part II addresses topical issues in interest rates and credit, namely XVA (pricing adjustments) and multicurve modeling.
Franz, David Matthew. "Markov Chains as Tools for Jazz Improvisation Analysis." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36831.
Full text
Master of Science
Buchman, Monique. "Land use modeling using higher order Markov chains." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1559852691&sid=15&Fmt=2&clientId=1509&RQT=309&VName=PQD.
Full text
Krull, Claudia. "Discrete time Markov chains: advanced applications in simulation." Erlangen; San Diego, Calif.: SCS, 2008. http://d-nb.info/992577586/04.
Full textCiolek, Gabriela. "Bootstrap and uniform bounds for Harris Markov chains." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT024/document.
Full text
This thesis concentrates on some extensions of empirical processes theory when the data are Markovian. More specifically, we focus on some developments of bootstrap, robustness and statistical learning theory in a Harris recurrent framework. Our approach relies on the regenerative methods that boil down to dividing the sample paths of the regenerative Markov chain under study into independent and identically distributed (i.i.d.) blocks of observations. These regeneration blocks correspond to path segments between random times of visits to a well-chosen set (the atom) forming a renewal sequence. In the first part of the thesis we derive uniform bootstrap central limit theorems for Harris recurrent Markov chains over uniformly bounded classes of functions. We show that the result can be generalized also to the unbounded case. We use the aforementioned results to obtain uniform bootstrap central limit theorems for Fréchet differentiable functionals of Harris Markov chains. Propelled by vast applications, we discuss how to extend some concepts of robustness from the i.i.d. framework to a Markovian setting. In particular, we consider the case when the data are piecewise-deterministic Markov processes. Next, we propose the residual and wild bootstrap procedures for periodically autoregressive processes and show their consistency. In the second part of the thesis we establish maximal versions of Bernstein, Hoeffding and polynomial tail type concentration inequalities. We obtain the inequalities as a function of covering numbers and moments of time returns and blocks. Finally, we use those tail inequalities to derive generalization bounds for minimum volume set estimation for regenerative Markov chains.
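The regenerative device described in this abstract can be made concrete on a toy example: simulate a chain, then cut its path into blocks at successive visits to a chosen atom (the chain, the atom and all numbers below are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-state chain; state 0 plays the role of the atom.
P = np.array([[0.2, 0.8, 0.0],
              [0.3, 0.2, 0.5],
              [0.6, 0.2, 0.2]])

# Simulate a path of the chain started at the atom.
path = [0]
for _ in range(500):
    path.append(int(rng.choice(3, p=P[path[-1]])))

# Regeneration times: successive visits to the atom cut the path into
# blocks that are independent and identically distributed.
times = [t for t, s in enumerate(path) if s == 0]
blocks = [path[a:b] for a, b in zip(times[:-1], times[1:])]

assert all(blk[0] == 0 for blk in blocks)       # each block starts at the atom
assert all(0 not in blk[1:] for blk in blocks)  # and returns only at its end
```

The i.i.d. structure of these blocks is what lets bootstrap and concentration arguments for i.i.d. data carry over to the Markovian setting.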
Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.
Full text
Carlsson, Filip. "Can students' progress data be modeled using Markov chains?" Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254285.
Full text
In this thesis, a Markov chain model is developed that can be used to analyse students' performance and academic progress. Being able to evaluate students' paths through their studies is useful for any educational system. It gives a better understanding of how students reason, and it can be used to support important decisions and planning. Such a tool can help the managers of an educational institution establish a more optimal education policy, ensuring a better position in the education market. To show that it is reasonable to use a Markov chain model for this purpose, a test of how well the data fits such a model is created and applied. The test shows that we cannot reject the hypothesis that the data fits a Markov chain model.
Kienitz, Jörg. "Convergence of Markov chains via analytic and isoperimetric inequalities." [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=960840664.
Full text
Greenberg, Sam. "Random sampling of lattice configurations using local Markov chains." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28090.
Full text
Committee Chair: Randall, Dana; Committee Member: Heitsch, Christine; Committee Member: Mihail, Milena; Committee Member: Trotter, Tom; Committee Member: Vigoda, Eric.
Heiden, Matthias an der. "Metastability of Markov chains and in the Hopfield model." [S.l.] : [s.n.], 2006. http://opus.kobv.de/tuberlin/volltexte/2007/1447.
Full text
Dai, Pra Paolo, Pierre-Yves Louis, and Ida Minelli. "Monotonicity and complete monotonicity for continuous-time Markov chains." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/766/.
Full textHowever, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete-time.
Nous étudions les notions de monotonie et de monotonie complète pour les processus de Markov (ou chaînes de Markov à temps continu) prenant leurs valeurs dans un espace partiellement ordonné. Ces deux notions ne sont pas équivalentes, comme c'est le cas lorsque le temps est discret. Cependant, nous établissons que pour certains ensembles partiellement ordonnés, l'équivalence a lieu en temps continu bien que n'étant pas vraie en temps discret.
Torp, Emil, and Patrik Önnegren. "Driving Cycle Generation Using Statistical Analysis and Markov Chains." Thesis, Linköpings universitet, Fordonssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94147.
Full text
A driving cycle is a description of how the speed of a vehicle changes during a drive. Driving cycles are used, among other things, to classify the environmental impact of cars and to evaluate vehicle performance. Different methods for generating stochastic driving cycles based on real-world data have been used around the world, but it has been difficult to mimic natural driving cycles. The possibility of generating stochastic driving cycles that represent a set of natural driving cycles is studied. Data from over 500 driving cycles are processed and categorised. These are used to create transition matrices where each element corresponds to a certain state, with speed and acceleration as state variables. The matrix, together with the theory of Markov chains, is used to generate stochastic driving cycles. The generated driving cycles are validated using percentile limits for a number of characteristic variables computed for the natural driving cycles. The speed and acceleration distributions of the generated cycles are studied and compared with the natural driving cycles to ensure that they are representative. Statistical properties were compared, and the generated driving cycles were found to resemble the original set of driving cycles. Four different methods are used to determine which statistical variables describe the natural driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative. The last method combines the cluster analysis with the regression analysis. The whole process is automated, and a graphical user interface has been developed in Matlab to simplify the use of the program.
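The generation pipeline described in the abstract above (discretise recorded driving data into states, estimate a transition matrix, then sample a synthetic cycle from the resulting Markov chain) can be sketched as follows; the "recorded" sequence and the five speed bins are invented here, and the thesis itself uses speed and acceleration jointly as state variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for a recorded cycle already discretised into 5 bins.
recorded = rng.integers(0, 5, size=1000)

# Estimate the transition matrix by counting observed transitions.
n = 5
counts = np.zeros((n, n))
for a, b in zip(recorded[:-1], recorded[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)  # row-normalised estimates

# Generate a synthetic driving cycle by walking the estimated chain.
state = 0
cycle = [state]
for _ in range(200):
    state = int(rng.choice(n, p=P[state]))
    cycle.append(state)

assert len(cycle) == 201
assert all(0 <= s < n for s in cycle)
```

A real implementation would then validate the generated cycle against percentile limits of the characteristic variables, as the abstract describes.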
Stoyanov, Tsvetan I. "Isoperimetric and related constants for graphs and Markov chains." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29456.
Full textDenisov, Denis Eduardovich. "Markov chains and random walks with heavy-tailed increments." Thesis, Heriot-Watt University, 2004. http://hdl.handle.net/10399/340.
Full textKamaleson, Nishanthan. "Model reduction techniques for probabilistic verification of Markov chains." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8736/.
Full textBorgia, Alessandro. "il teorema ergodico e le Monte Carlo Markov Chains." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23629/.
Full textDuchemin, Quentin. "Growth dynamics of large networks using hidden Markov chains." Thesis, Université Gustave Eiffel, 2022. https://tel.archives-ouvertes.fr/tel-03749513.
Full text
The first part of this thesis aims at introducing new models of random graphs that account for the temporal evolution of networks. More precisely, we focus on growth models where at each instant a new node is added to the existing graph. We attribute to this new entrant properties that characterize its connectivity to the rest of the network, and these properties depend only on the previously introduced node. Our random graph models are thus governed by a latent Markovian dynamic characterizing the sequence of nodes in the graph. We are particularly interested in the Stochastic Block Model and in Random Geometric Graphs, for which we propose algorithms to estimate the unknown parameters or functions defining the model. We then show how these estimates allow us to solve link prediction or collaborative filtering problems in networks. The theoretical analysis of the above-mentioned algorithms requires advanced probabilistic tools. In particular, one of our proofs relies on a concentration inequality for U-statistics in a dependent framework. Few papers have addressed this thorny question, and existing works consider sets of assumptions that do not meet our needs. Therefore, the second part of this manuscript is devoted to the proof of a concentration inequality for U-statistics of order two for uniformly ergodic Markov chains. In Chapter 5, we exploit this concentration result for U-statistics to make new contributions to three very active areas of Statistics and Machine Learning. Still motivated by link prediction problems in graphs, we study post-selection inference procedures in the framework of logistic regression with $L^1$ penalty. We prove a central limit theorem under the distribution conditional on the selection event and derive asymptotically valid testing procedures and confidence intervals.
Dikkala, Sai Nishanth. "Statistical inference from dependent data : networks and Markov chains." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127016.
Full textCataloged from the official PDF of thesis.
Includes bibliographical references (pages 259-270).
In recent decades, the study of high-dimensional probability has taken center stage within many research communities, including Computer Science, Statistics and Machine Learning. Very often, due to the process according to which data is collected, the samples in a dataset have implicit correlations amongst them. Such correlations are commonly ignored as a first approximation when trying to analyze statistical and computational aspects of an inference task. In this thesis, we explore how to model such dependences between samples using structured high-dimensional distributions which result from imposing a Markovian property on the joint distribution of the data, namely Markov Random Fields (MRFs) and Markov chains. On MRFs, we explore a quantification for the amount of dependence, and we strengthen previously known measure concentration results under a certain weak dependence condition on an MRF called the high-temperature regime. We then go on to apply our novel measure concentration bounds to improve the accuracy of samples computed according to a certain Markov Chain Monte Carlo procedure. We then show how to extend some classical results from statistical learning theory on PAC-learnability and uniform convergence to training data which is dependent under the high-temperature condition. Then, we explore the task of regression on data which is dependent according to an MRF under a stronger amount of dependence than is allowed by the high-temperature condition. We then shift our focus to Markov chains, where we explore the question of testing whether a certain trajectory we observe corresponds to a chain P or not. We discuss what is a reasonable formulation of this problem and provide a tester which works without observing a trajectory whose length contains multiplicative factors of the mixing or covering time of the chain P. We finally conclude with some broad directions for further research on statistical inference under data dependence.
by Sai Nishanth Dikkala.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Nilsson, Albert. "Exploring strategies in Monopoly using Markov chains and simulation." Thesis, Uppsala universitet, Analys och sannolikhetsteori, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420705.
Full text
Zhou, Hua. "Examples of multivariate Markov chains with orthogonal polynomial eigenfunctions." May be available electronically, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Full textBrau, Rojas Agustin. "Controlled Markov chains with risk-sensitive average cost criterion." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284004.
Full textOakley, Steven James 1963. "A PROBABILISTIC INVESTIGATION OF VIDEO POKER STRATEGIES (MARKOV CHAINS)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/291229.
Full text