Dissertations / Theses on the topic 'Markov chain'
Consult the top 50 dissertations / theses for your research on the topic 'Markov chain.'
Chen, Yi ting. "Random generation of executions of concurrent systems." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS071.pdf.
Concurrency plays an important role in modern systems and programming: multiple computations run simultaneously, and their interleaved executions cause the so-called "state explosion problem". In this thesis, we construct a probabilistic framework on the executions of concurrent systems for the purpose of random generation. The uniform measure of executions is inspired by the theory of trace monoids, extended to infinite traces. Trace theory has a strong combinatorial foundation built around the Möbius polynomial. The irreducibility of a trace monoid implies the strong connectivity of its digraph of cliques; hence a dominant eigenvalue exists and determines the growth rate of the monoid. In our work, we view abstract concurrent systems as monoid actions on a finite set of states, a setting that encompasses 1-bounded Petri nets. We give two interpretations of a uniform measure of executions for concurrent systems. One is constructed from the elementary cylinders in trace monoids; this uniform measure is realized as a Markov chain of states-and-cliques. The other studies the Parry measure on the digraph of states-and-cliques. The difficulty in extending to concurrent systems is that the Perron-Frobenius theorem is not applicable. To resolve this problem, we establish a spectral property of irreducible concurrent systems, which allows us to distinguish the main components that determine the characteristic root of the system. We also prove the uniqueness of this uniform measure. The transition matrix can be obtained either from the Markov chain of states-and-cliques or from the Parry measure with the spectral radius of the dominant components.
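The combinatorics in this abstract can be made concrete. In a hedged sketch (the three-letter alphabet, independence relation, and function names below are illustrative, not taken from the thesis), the Möbius polynomial of a trace monoid alternates over the cliques of the independence graph, and its smallest positive root determines the growth rate:

```python
from itertools import combinations
import numpy as np

def mobius_polynomial(letters, independent):
    """Coefficients (ascending powers) of the Mobius polynomial of a trace
    monoid: mu(t) = sum over cliques c of the independence graph of
    (-1)^|c| t^|c|."""
    coeffs = {0: 1}  # the empty clique
    for k in range(1, len(letters) + 1):
        for subset in combinations(letters, k):
            if all(frozenset((a, b)) in independent
                   for a, b in combinations(subset, 2)):
                coeffs[k] = coeffs.get(k, 0) + (-1) ** k
    deg = max(coeffs)
    return [coeffs.get(i, 0) for i in range(deg + 1)]

# Example: letters a, b, c with a and b independent (ab = ba).
letters = ["a", "b", "c"]
independent = {frozenset(("a", "b"))}
mu = mobius_polynomial(letters, independent)   # 1 - 3t + t^2
roots = np.roots(mu[::-1])                     # numpy expects descending powers
rho = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
growth_rate = 1 / rho                          # growth rate of the monoid
```

For this example the polynomial is 1 - 3t + t^2, whose smallest positive root gives growth rate (3 + sqrt(5))/2, roughly 2.618; this characteristic root plays the role described in the abstract.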
Yildirak, Sahap Kasirga. "The Identification Of A Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.
Lindahl, John, and Douglas Persson. "Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.
Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.
Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.
Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.
Frühwirth-Schnatter, Sylvia, Stefan Pittner, Andrea Weber, and Rudolf Winter-Ebmer. "Analysing plant closure effects using time-varying mixture-of-experts Markov chain clustering." Institute of Mathematical Statistics, 2018. http://dx.doi.org/10.1214/17-AOAS1132.
Michalaros, Anastasios. "Engagement of Individual Performance in the Application of Markov Chains Models in Hellenic Navy's Chain of Command." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/6835.
Planting, Ralf. "The use of the DWV3 classification system in manufacturing companies for evaluating a market-specific supply chain strategy - A case study at Atlas Copco Industrial Technique." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103925.
Martin, Russell Andrew. "Paths, sampling, and Markov chain decomposition." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.
Estandia Gonzalez Luna, Antonio. "Stable approximations for Markov-chain filters." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38303.
Zhang, Yichuan. "Scalable geometric Markov chain Monte Carlo." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20978.
Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.
Full textGenerating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov Chain Monte Carlo (MCMC) methods are usually the required tool for this task, if the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space. The distribution of such samples converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties for sampling by using MCMC methods include, but not exclusively, dealing with high dimensional and multimodal problems, high computation cost due to extremely large datasets in Bayesian machine learning models, and lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome the aforementioned difficulties.
One contribution of this dissertation is generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, and an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving—meaning that in the state space, the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume preserving dynamics for proposing MCMC moves is proposed. Examples including isokinetic dynamics and variable mass Hamiltonian dynamics with an explicit integrator, are all designed with fewer restrictions based on the general theory. Experiments show improvement in efficiency for sampling high dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided. Two methods to reduce the bias in the distribution of samples are proposed. One is to correct the dynamics by using an estimated noise based on subsampled data, and the other is to introduce additional variables and corresponding dynamics to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of effective sample size. Proposed is a more reliable indicator—the longest integrated autocorrelation time over all functions in the state space—for detecting the convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.
Minor contributions include a general framework of changing variables, and a numerical integrator for the Hamiltonian dynamics with fourth order accuracy. The idea of changing variables is to transform the potential energy function as a function of the original variable to a function of the new variable, such that undesired properties can be removed. Two examples are provided and preliminary experimental results are obtained for supporting this idea. The fourth order integrator is constructed by combining the idea of the simplified Takahashi-Imada method and a two-stage Hessian-based integrator. The proposed method, called two-stage simplified Takahashi-Imada method, shows outstanding performance over existing methods in high-dimensional sampling problems.
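For context on the first contribution, here is a minimal sketch of standard volume-preserving HMC, the baseline the dissertation generalizes: leapfrog integration in the extended (q, p) state space followed by a Metropolis accept/reject test. The step size, trajectory length, and toy Gaussian target are illustrative assumptions, not choices from the thesis.

```python
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=None):
    """One hybrid Monte Carlo step: leapfrog integration of Hamiltonian
    dynamics in (q, p) space, then a Metropolis accept/reject test."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)                  # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(q_new)   # initial half step
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new
        p_new += step_size * grad_log_prob(q_new)
    q_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(q_new)   # final half step
    # Metropolis criterion on the change in total energy H = -log_prob + |p|^2/2
    h_old = -log_prob(q) + 0.5 * p @ p
    h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.random()) < h_old - h_new else q

# Sample a 2-D standard normal.
rng = np.random.default_rng(0)
log_prob = lambda q: -0.5 * q @ q
grad = lambda q: -q
q = np.zeros(2)
samples = []
for _ in range(2000):
    q = hmc_step(q, log_prob, grad, rng=rng)
    samples.append(q.copy())
```

Because leapfrog is volume preserving and time reversible, the Metropolis test needs only the change in total energy; the non-volume-preserving generalizations described above must additionally account for the Jacobian of the map.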
Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates with points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis, in the context of a linear function with or without constraint, are essential components of proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented:
- General mathematical tools applicable to a wider range of problems are developed. These tools make it easy to prove specific Markov chain properties (irreducibility, aperiodicity, and the fact that compact sets are small sets for the Markov chain) for the chains studied. Obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult.
- Different ESs are then analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space.
- We then study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint.
Finally, these results are summed up and discussed, and perspectives for future work are explored.
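A toy sketch of the constant step-size case on a linear function may help (an illustration with assumed parameters, not the thesis's implementation): the best of lambda Gaussian offspring improves a linear objective by a roughly constant amount per generation, so the algorithm diverges.

```python
import numpy as np

def one_lambda_es(f, x0, sigma=1.0, lam=10, n_iter=200, rng=None):
    """(1,lambda)-ES with constant step-size: at each generation, sample
    lam Gaussian offspring around the parent and keep the best one."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    history = []
    for _ in range(n_iter):
        offspring = x + sigma * rng.standard_normal((lam, x.size))
        x = offspring[np.argmin(f(offspring))]
        history.append(f(x[None, :])[0])
    return x, history

# On the linear function f(x) = x_1 the ES diverges: f decreases by about
# sigma * E[min of lam standard normals] per generation, without bound.
f = lambda X: X[:, 0]
x, hist = one_lambda_es(f, np.zeros(5), rng=np.random.default_rng(1))
```

With cumulative step-size adaptation the step-size itself grows, which is what makes the divergence log-linear in the analyses above.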
Neuhoff, Daniel. "Reversible Jump Markov Chain Monte Carlo." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17461.
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model and from a pure time series modeling perspective.
Skorniakov, Viktor. "Asymptotically homogeneous Markov chains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101223_152954-43357.
The dissertation investigates a class of Markov chains whose iterations are defined by random asymptotically homogeneous functions, and solves two problems: 1) general conditions guaranteeing the existence of a unique stationary distribution are established; 2) for one-dimensional chains, conditions under which the stationary distribution has heavy tails are found.
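A classic concrete instance of such a chain (chosen here for illustration, not taken from the dissertation) is the random affine recursion X_{n+1} = A_n X_n + B_n: the map x -> Ax + B is asymptotically homogeneous, a unique stationary distribution exists when E[log A] < 0, and Kesten-type results give it a power-law tail when A exceeds 1 with positive probability.

```python
import numpy as np

# Simulate X_{n+1} = A_n X_n + B_n with E[log A] < 0: the chain is stable,
# yet its stationary law is heavy-tailed because A_n > 1 with positive
# probability. All parameter choices below are illustrative.
rng = np.random.default_rng(0)
n = 100_000
A = np.exp(rng.normal(-0.5, 1.0, n))   # log A ~ N(-0.5, 1), E[log A] = -0.5
B = rng.normal(0.0, 1.0, n)
xs = np.empty(n)
x = 0.0
for i in range(n):
    x = A[i] * x + B[i]
    xs[i] = x
```

Here E[A^alpha] = 1 at alpha = 1, so the stationary tail P(|X| > t) decays like 1/t; occasional multiplicative bursts drive the large excursions visible in the trajectory.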
Bhatnagar, Nayantara. "Annealing and Tempering for Sampling and Counting." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16323.
Matthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.
Webb, Jared Anthony. "A Topics Analysis Model for Health Insurance Claims." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3805.
Murray, Iain Andrew. "Advances in Markov chain Monte Carlo methods." Thesis, University College London (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487199.
Han, Xiao-liang. "Markov Chain Monte Carlo and sampling efficiency." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333974.
Fan, Yanan. "Efficient implementation of Markov chain Monte Carlo." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.
Brooks, Stephen Peter. "Convergence diagnostics for Markov Chain Monte Carlo." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363913.
Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.
Hua, Zhili. "Markov Chain Modeling for Multi-Server Clusters." W&M ScholarWorks, 2005. https://scholarworks.wm.edu/etd/1539626843.
Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.
A finite, homogeneous, irreducible Markov chain with a given transition probability matrix possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.
The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which is related to the first question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part gives answers to the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).
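For the first question, on a chain small enough for dense linear algebra, the stationary vector can be computed by solving pi P = pi together with the normalization sum(pi) = 1. The transition matrix below is made up for illustration, and this direct approach is a baseline sketch rather than the thesis's algorithms for nearly uncoupled chains.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary vector of an irreducible transition matrix P: solve the
    linear system pi (P - I) = 0 augmented with the normalization
    sum(pi) = 1, via least squares on the stacked equations."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# A made-up 3-state chain.
P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.5, 0.1],
              [0.0, 0.3, 0.7]])
pi = stationary_distribution(P)   # satisfies pi @ P == pi, sums to 1
```

The second question, accuracy, is exactly where the perturbation theory of the first part enters: it bounds how much pi can move when P is perturbed.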
Dessain, Thomas James. "Perturbations of Markov chains." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10619/.
Tiozzo Gobetto, Francesca. "Finite state Markov chains and prediction of stock market trends using real data." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19255/.
Kwok, Chi-on Michael (郭慈安). "Some results on higher order Markov Chain models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B31208654.
Kwok, Chi-on Michael. "Some results on higher order Markov Chain models." [Hong Kong]: University of Hong Kong, 1988. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12432076.
Di Cecco, Davide. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/1/Di_Cecco_Davide_Tesi.pdf.
Di Cecco, Davide. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/.
Levitz, Michael. "Separation, completeness, and Markov properties for AMP chain graph models." Thesis, University of Washington (UW restricted), 2000. http://hdl.handle.net/1773/9564.
Lyons, Russell. "Markov Chain Intersections and the Loop-Erased Walk." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1058.ps.
Stormark, Kristian. "Multiple Proposal Strategies for Markov Chain Monte Carlo." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9330.
The multiple proposal methods represent a recent simulation technique for Markov Chain Monte Carlo that allows several proposals to be considered at each transition step. Motivated by the ideas of Quasi Monte Carlo integration, we examine how strongly correlated proposals can be employed to construct Markov chains with improved mixing properties. We proceed by giving a concise introduction to Monte Carlo and Markov Chain Monte Carlo theory, and we supply a short discussion of the standard simulation algorithms and the difficulties of efficient sampling. We then examine two multiple proposal methods suggested in the literature, and we indicate the possibility of a unified formulation of the two methods. More essentially, we report some systematic exploration strategies for the two multiple proposal methods. In particular, we present schemes for the utilization of well-distributed point sets and maximally spread search directions. We also include a simple construction procedure for the latter type of point set. A numerical examination of the multiple proposal methods is performed on two simple test problems. We find that the systematic exploration approach may provide a significant improvement of the mixing, especially when the probability mass of the target distribution is "easy to miss" by independent sampling. For both test problems, we find that the best results are obtained with the QMC schemes. In particular, we find that the gain is most pronounced for a relatively moderate number of proposals. With fewer proposals, the properties of the well-distributed point sets will not be as relevant. For a large number of proposals, the independent sampling approach will be more competitive, since the coverage of the local neighborhood will then be better.
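One multiple proposal method from the literature is multiple-try Metropolis. The sketch below (assumed parameters; plain independent Gaussian trials rather than the correlated, QMC-style proposals examined in the thesis) shows a single transition step that considers k proposals:

```python
import numpy as np

def mtm_step(x, log_pi, k=5, scale=1.0, rng=None):
    """One multiple-try Metropolis step with symmetric Gaussian trials:
    draw k proposals, select one with probability proportional to the target
    density, then accept or reject by comparing trial and reference weights."""
    rng = rng or np.random.default_rng()
    trials = x + scale * rng.standard_normal((k, x.size))
    w_trials = np.exp(log_pi(trials))
    total = w_trials.sum()
    if total == 0.0:
        return x
    y = trials[rng.choice(k, p=w_trials / total)]
    # Reference set: k-1 fresh points drawn around y, plus the current state.
    refs = y + scale * rng.standard_normal((k - 1, x.size))
    w_refs = np.exp(log_pi(refs)).sum() + np.exp(log_pi(x[None, :]))[0]
    return y if rng.random() < min(1.0, total / w_refs) else x

# Sample a 1-D standard normal with 5 trials per step.
rng = np.random.default_rng(0)
log_pi = lambda z: -0.5 * np.sum(z * z, axis=-1)
x = np.zeros(1)
chain = []
for _ in range(3000):
    x = mtm_step(x, log_pi, rng=rng)
    chain.append(x[0])
```

Replacing the k independent draws with a well-distributed or maximally spread point set is precisely the kind of systematic exploration strategy the thesis reports.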
Backåker, Fredrik. "The Google Markov Chain: convergence speed and eigenvalues." Thesis, Uppsala universitet, Matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176610.
Sanborn, Adam N. "Uncovering mental representations with Markov chain Monte Carlo." [Bloomington, Ind.]: Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278468.
Source: Dissertation Abstracts International, Volume: 68-10, Section: B, page: 6994. Adviser: Richard M. Shiffrin. Title from dissertation home page (viewed May 21, 2008).
Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.
Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.
Full textQC 20141216
Jindasawat, Jutaporn. "Testing the order of a Markov chain model." Thesis, University of Newcastle Upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446197.
Hastie, David. "Towards automatic reversible jump Markov Chain Monte Carlo." Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414179.
Groff, Jeffrey R. "Markov chain models of calcium puffs and sparks." W&M ScholarWorks, 2008. https://scholarworks.wm.edu/etd/1539623333.
Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.
Li, Shuying. "Phylogenetic tree construction using Markov chain Monte Carlo." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487942182323916.
Xu, Jason Qian. "Markov Chain Monte Carlo and Non-Reversible Methods." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244823.
Zhu, Dongmei (朱冬梅). "Construction of non-standard Markov chain models with applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/202358.
Full textpublished_or_final_version
Mathematics
Doctoral
Doctor of Philosophy
Wilson, David Bruce. "Exact sampling with Markov chains." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38402.
Mestern, Mark Andrew. "Distributed analysis of Markov chains." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/9693.
This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: state space generation from a state transition model to form the Markov chain, and finding performance information by solving the steady state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queuing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed, and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
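The first phase of that pipeline, state space generation, can be sketched sequentially as a breadth-first exploration (a toy illustration with an assumed successor function, not the thesis's distributed implementation, which additionally partitions states among workstations and compacts the state hash tables):

```python
from collections import deque

def generate_state_space(initial, successors):
    """Breadth-first state space generation: explore every state reachable
    from `initial`, assigning each state an index and recording transitions.
    A distributed version would partition states among workers by hash."""
    index = {initial: 0}
    transitions = []                     # (source, target, rate) triples
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        for t, rate in successors(s):
            if t not in index:
                index[t] = len(index)
                frontier.append(t)
            transitions.append((index[s], index[t], rate))
    return index, transitions

# Toy birth-death process on {0, ..., 4}: a state transition model from
# which a Markov chain is derived.
def successors(s):
    out = []
    if s < 4: out.append((s + 1, 1.0))   # birth
    if s > 0: out.append((s - 1, 2.0))   # death
    return out

index, transitions = generate_state_space(0, successors)
```

The resulting (source, target, rate) triples define the Markov chain whose steady state equations are then solved, in the thesis by parallel iterative methods such as BiCG and BiCGSTAB.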
Salzman, Julia. "Spectral analysis with Markov chains." 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Dorff, Rebecca. "Modelling Infertility with Markov Chains." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4070.